Search Results
Found 79 results
510(k) Data Aggregation
(179 days)
The BD CTGCTV2 assay incorporates automated DNA extraction and real-time polymerase chain reaction (PCR) for the direct, qualitative detection of DNA from:
- Chlamydia trachomatis (CT)
- Neisseria gonorrhoeae (GC)
- Trichomonas vaginalis (TV)
The assay may be used for detection of CT, GC and/or TV DNA in patient- or clinician-collected vaginal swab specimens (in a clinical setting) and male and female urine specimens. The assay may also be used for the detection of CT and GC DNA in endocervical swab and Liquid-Based Cytology (LBC) specimens in ThinPrep® PreservCyt® Solution using an aliquot that is removed prior to processing for the ThinPrep® Pap test. The assay may also be used for the detection of CT and GC DNA in clinician-collected rectal and oropharyngeal swab specimens.
The assay is indicated for use with asymptomatic and symptomatic individuals to aid in the diagnosis of chlamydial, gonococcal, and/or trichomoniasis urogenital disease and chlamydial and gonococcal extragenital infection.
The BD CTGCTV2 assay is available for use with the BD MAX™ System (urogenital specimens) or the BD COR™ System (urogenital and extragenital specimens), as described above.
The BD CTGCTV2 assay, performed on the BD COR™ System (hereafter referred to as BD CTGCTV2), is designed for use with the applicable BD Molecular specimen collection and transport devices for male and female urine, rectal swabs, oropharyngeal swabs, vaginal swabs, endocervical swabs, and LBC specimens (PreservCyt®). Specimens are collected and transported to the testing laboratory using their respective transport devices under conditions of time and temperature that have been determined to maintain the integrity of the target nucleic acids.
The BD COR™ MX Instrument, when combined with the BD COR™ PX Instrument, is to be used for automated sample preparation, extraction, and purification of nucleic acids from multiple specimen types, as well as the automated amplification and detection of target nucleic acid sequences by fluorescence-based real-time PCR for simultaneous and differential detection of Chlamydia trachomatis, Neisseria gonorrhoeae, and Trichomonas vaginalis.
The BD CTGCTV2 assay extraction reagents are dried in 96-well microtiter plates that contain magnetic affinity binding beads and Sample Processing Control (SPC). Each well is capable of binding and eluting sample nucleic acids. The SPC monitors the integrity of the reagents and the process steps involved in DNA extraction, amplification, and detection, as well as the presence of potential assay inhibitors.
The BD CTGCTV2 assay liquid reagent plate includes Wash, Elution, and Neutralization buffers. The beads (described above), together with the bound nucleic acids, are washed, and the nucleic acids are eluted by a combination of heat and pH. When performed on the BD COR™ System, an additional buffer is used to rehydrate the dried extraction mix. Eluted DNA is neutralized and transferred to the Amplification reagent (described below) to rehydrate the PCR reagents. After reconstitution, the BD COR™ System dispenses a fixed volume of PCR-ready solution containing extracted nucleic acids into the BD PCR Cartridge. Microvalves in the BD PCR Cartridge are sealed by the system prior to initiating PCR in order to contain the amplification mixture and thus prevent evaporation and contamination.
The BD CTGCTV2 assay comprises two targets for Chlamydia trachomatis (detected on the same optical channel), two targets for Neisseria gonorrhoeae (detected on two different optical channels), and one target for Trichomonas vaginalis (detected on one optical channel). Only one Chlamydia trachomatis target must be positive to report a CT-positive result, whereas both Neisseria gonorrhoeae targets must be positive to report a GC-positive result.
The amplified DNA targets are detected using hydrolysis (TaqMan®) probes, labeled at one end with a fluorescent reporter dye (fluorophore), and at the other end, with a quencher moiety. Probes labeled with different fluorophores are used to detect the target analytes in different optical channels of the BD COR™ System. When the probes are in their native state, the fluorescence of the fluorophore is quenched due to its proximity to the quencher. However, in the presence of target DNA, the probes hybridize to their complementary sequences and are hydrolyzed by the 5'-3' exonuclease activity of the DNA polymerase as it synthesizes the nascent strand along the DNA template. As a result, the fluorophores are separated from the quencher molecules and fluorescence is emitted. The BD COR™ System monitors these signals at each cycle of the PCR and interprets the data at the end of the reaction to provide qualitative test results for each analyte (i.e., positive or negative).
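To make the target-calling rules above concrete, here is a minimal Python sketch. The function name, the ChannelResult type, and the UNRESOLVED handling when neither a target nor the Sample Processing Control amplifies are illustrative assumptions; only the one-of-two CT rule, the two-of-two GC rule, and the single TV target are stated in this summary, and the actual BD COR™ software logic is not disclosed.

```python
from dataclasses import dataclass

@dataclass
class ChannelResult:
    detected: bool  # True if the real-time PCR signal on this optical channel crossed threshold

def call_ctgctv2(ct1: ChannelResult, ct2: ChannelResult,
                 gc1: ChannelResult, gc2: ChannelResult,
                 tv: ChannelResult, spc: ChannelResult) -> dict:
    """Qualitative calls following the stated target logic (simplified)."""
    ct_pos = ct1.detected or ct2.detected   # either CT target is sufficient
    gc_pos = gc1.detected and gc2.detected  # both GC targets are required
    tv_pos = tv.detected                    # single TV target

    results = {"CT": "POS" if ct_pos else "NEG",
               "GC": "POS" if gc_pos else "NEG",
               "TV": "POS" if tv_pos else "NEG"}

    # Assumption: if no target and no Sample Processing Control signal is seen,
    # negative calls cannot be trusted (possible extraction failure or inhibition).
    if not (ct_pos or gc_pos or tv_pos or spc.detected):
        results = {key: "UNRESOLVED" for key in results}
    return results
```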
The provided FDA 510(k) clearance letter and summary for the BD CTGCTV2 assay detail its performance in detecting Chlamydia trachomatis (CT) and Neisseria gonorrhoeae (GC) in extragenital specimens (rectal and oropharyngeal swabs).
Here's an analysis based on your request:
Acceptance Criteria and Reported Device Performance
The acceptance criteria for the BD CTGCTV2 assay are implicitly demonstrated through its clinical performance results, where the assay's sensitivity (Positive Percent Agreement - PPA) and specificity (Negative Percent Agreement - NPA) for extragenital specimens are compared against a Composite Comparator Algorithm (CCA). The FDA's clearance indicates that these performance metrics met the necessary standards for substantial equivalence.
Table of Acceptance Criteria and Reported Device Performance:
While explicit numerical acceptance criteria (e.g., "PPA must be >= X%") are not directly stated in the provided text, the reported performance measures are the ones that met the FDA's requirements for clearance.
| Metric | Target/Condition (Implicit Acceptance Criteria) | Reported Device Performance (BD CTGCTV2) |
|---|---|---|
| Chlamydia trachomatis (CT) - Oropharyngeal | | |
| Sensitivity (PPA) | Sufficiently high for diagnostic use | 100% (86.2–100% CI) |
| Specificity (NPA) | Sufficiently high for diagnostic use | 99.8% (99.5–99.9% CI) |
| Neisseria gonorrhoeae (GC) - Oropharyngeal | | |
| Sensitivity (PPA) | Sufficiently high for diagnostic use | 92.8% (85.8–96.5% CI) |
| Specificity (NPA) | Sufficiently high for diagnostic use | 99.5% (99.1–99.7% CI) |
| Chlamydia trachomatis (CT) - Rectal | | |
| Sensitivity (PPA) | Sufficiently high for diagnostic use | 97.7% (93.5–99.2% CI) |
| Specificity (NPA) | Sufficiently high for diagnostic use | 99.4% (99.0–99.7% CI) |
| Neisseria gonorrhoeae (GC) - Rectal | | |
| Sensitivity (PPA) | Sufficiently high for diagnostic use | 95.8% (89.7–98.4% CI) |
| Specificity (NPA) | Sufficiently high for diagnostic use | 99.8% (99.5–99.9% CI) |
| Non-Reportable Rate (Total CT and GC) | Reasonably low for clinical utility (e.g., <5%) | 0.6% (0.4–0.9% CI) initial test |
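For context on the table above: PPA and NPA are simple proportions of agreement with the comparator (CCA), and the parenthetical ranges are two-sided 95% confidence intervals. The document does not state which interval method was used, so the sketch below assumes the Wilson score interval, which is commonly reported for proportions of this kind.

```python
import math
from typing import Iterable, Tuple

def wilson_ci(successes: int, n: int, z: float = 1.96) -> Tuple[float, float]:
    """Two-sided ~95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (float("nan"), float("nan"))
    p = successes / n
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return (center - half, center + half)

def ppa_npa(device_positive: Iterable[bool], reference_positive: Iterable[bool]) -> dict:
    """Positive/Negative Percent Agreement of device calls against a reference method."""
    tp = fn = tn = fp = 0
    for dev, ref in zip(device_positive, reference_positive):
        if ref:
            tp, fn = (tp + 1, fn) if dev else (tp, fn + 1)
        else:
            fp, tn = (fp + 1, tn) if dev else (fp, tn + 1)
    ppa = tp / (tp + fn) if (tp + fn) else float("nan")
    npa = tn / (tn + fp) if (tn + fp) else float("nan")
    return {"PPA": (ppa, wilson_ci(tp, tp + fn)),
            "NPA": (npa, wilson_ci(tn, tn + fp))}

# Example: if all of 24 comparator-positive specimens were detected, the Wilson
# interval is roughly (0.862, 1.0), consistent with the 100% (86.2-100%) entry
# above (the actual denominator is not given in this summary).
print(wilson_ci(24, 24))
```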
Study that Proves the Device Meets the Acceptance Criteria:
The key study proving the device meets the acceptance criteria is the Clinical Performance Study for Extragenital Specimens.
1. Sample Size Used for the Test Set and Data Provenance:
- Total Subjects Consented: 2,439
- Eligible Subjects: 2,375
- Total Eligible Specimens (test set): 4,652 (oropharyngeal and rectal swabs, considered for randomization)
- Eligible Specimens for Testing: 4,579 (2,303 oropharyngeal swabs and 2,276 rectal swabs)
- Data Provenance:
- Country of Origin: Not explicitly stated, but the mention of "eight geographically diverse sites" suggests a multi-site clinical study, likely within the US, given FDA clearance.
- Retrospective or Prospective: Prospective, as specimens were collected from subjects enrolled in the study. The study involved consenting subjects and collecting specimens specifically for this evaluation.
2. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts:
- This information is not provided in the document.
- The ground truth was established by a Composite Comparator Algorithm (CCA), not direct human expert interpretation of images or other subjective data.
3. Adjudication Method for the Test Set:
- The adjudication method used for the ground truth (CCA) was based on the concurrence of at least 2 out of 3 commercially available NAAT assays, i.e., a "2 out of 3" agreement model for establishing the positive or negative status of a specimen (a minimal sketch of this rule appears after this list).
4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not done. This device is a fully automated in vitro diagnostic (IVD) assay that detects nucleic acids, not an AI-assisted imaging device requiring human interpretation. Therefore, the concept of "human readers improving with AI assistance" does not apply.
5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- Yes, this was a standalone performance study. The BD CTGCTV2 assay is an automated system (BD COR™ System) that performs sample preparation, extraction, amplification, and detection. Its performance "without human-in-the-loop" is precisely what the clinical performance study measures, specifically in comparison to a CCA, which also represents an objective, algorithm-based comparator.
6. The Type of Ground Truth Used:
- The ground truth used was a Composite Comparator Algorithm (CCA), consisting of results from three commercially available Nucleic Acid Amplification Tests (NAATs).
7. The Sample Size for the Training Set:
- The document does not specify the sample size for the training set. The provided clinical performance data (4,579 specimens) pertains to the test set used for validation. Given that this is an IVD assay, the "training" analogous to machine learning would relate to the development and optimization of the assay's reagents, protocols, and analytical parameters, rather than a distinct "training set" of patient data in the typical sense of AI/ML model development.
8. How the Ground Truth for the Training Set Was Established:
- As the document does not explicitly discuss a "training set" in the context of machine learning, it also does not describe how a ground truth for such a set was established. For an IVD assay, the "ground truth" during development (analogous to training) would typically involve well-characterized reference strains, spiked samples with known concentrations, and analytical validation studies to optimize assay parameters, which are alluded to in the "Analytical Performance Evaluation" section (e.g., LoD, Inclusivity, Cross-Reactivity, Microbial Interference).
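As referenced in item 3 above, the Composite Comparator Algorithm calls a specimen positive or negative based on 2-of-3 concurrence among comparator NAATs. A minimal sketch of that rule follows; treating missing or non-reportable comparator results as non-evaluable is an assumption, since the document only describes the concurrence rule itself.

```python
from typing import Optional, Sequence

def composite_comparator(naat_results: Sequence[Optional[bool]]) -> str:
    """2-of-3 composite comparator: positive if at least two comparator NAATs are
    positive, negative if at least two are negative, otherwise non-evaluable."""
    valid = [r for r in naat_results if r is not None]
    positives = sum(valid)
    negatives = len(valid) - positives
    if positives >= 2:
        return "CCA_POSITIVE"
    if negatives >= 2:
        return "CCA_NEGATIVE"
    return "NON_EVALUABLE"

# Example: two of three comparator assays positive -> specimen counted as CCA positive.
print(composite_comparator([True, True, False]))  # CCA_POSITIVE
```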
(178 days)
The BDC Dental Unit is intended to supply power to and serve as a base for dental devices and accessories. This device includes a dental chair and is intended for use in the dental clinic environment and is designed for use by trained dental professionals, dentists and/or dental assistants.
The BDC dental unit is intended to supply air, water, vacuum, and electrical power, providing the dental practitioner with an intuitive control center for all common and normal patient treatment procedures performed in the dental operatory. The BDC dental unit consists of a patient chair (in which the patient sits during dental diagnosis, treatment, and/or operation), a dentist element (tray table), an assistant element (3-way syringe, high-volume evacuator, and saliva ejector), a side cabinet support center, a foot control, a cuspidor, and a dental operating light.
The provided text appears to be an FDA 510(k) clearance letter and summary for a dental unit. It describes a medical device (BDC Dental Unit) that is essentially a dental chair with integrated controls for air, water, vacuum, and electrical power for dental devices. The device's primary function is to serve as a base and power supply for other dental instruments used by trained dental professionals.
Based on the provided document, there is NO information about an AI/ML component or study involving acceptance criteria related to AI/ML device performance. The document focuses on establishing substantial equivalence to a predicate device (A-dec 500) through non-clinical testing of mechanical, electrical, software (non-AI), and biocompatibility aspects.
Therefore, I cannot fulfill your request for:
- A table of acceptance criteria and reported device performance for an AI/ML component.
- Sample size and data provenance for an AI/ML test set.
- Number and qualifications of experts for AI/ML ground truth.
- Adjudication method for an AI/ML test set.
- Multi-Reader Multi-Case (MRMC) comparative effectiveness study for AI assistance.
- Standalone (algorithm only without human-in-the-loop performance) study for an AI/ML component.
- Type of ground truth for an AI/ML component.
- Sample size for the training set (for AI/ML).
- How ground truth for the training set was established (for AI/ML).
The document explicitly states: "Clinical data is not needed to characterize performance and establish substantial equivalence. The non-clinical test data characterize all performance aspects of the device based on well-established scientific and engineering principles. Clinical testing has not been conducted on this product." This further reinforces that there was no study like the one you are asking for, especially not one involving human readers or AI performance metrics.
The "Software Verification and Validation Testing" section mentions: "The software for this device was considered as a 'BASIC' level of concern, since a failure or latent flaw in the software would not directly result in serious injury or death to the patient or operator." This implies standard software validation, not necessarily AI/ML model validation.
In summary, the provided text does not contain the information required to answer your prompt regarding acceptance criteria and studies for an AI/ML medical device. The device described is a conventional dental unit, and its clearance is based on substantial equivalence to a predicate device through standard non-clinical performance and safety testing.
(29 days)
The Denali™ Filter is indicated for use in the prevention of recurrent pulmonary embolism via placement in the vena cava in the following situations:
-Pulmonary thromboembolism when anticoagulants are contraindicated
-Failure of anticoagulant therapy for thromboembolic disease
-Emergency treatment following massive pulmonary embolism where anticipated benefits of conventional therapy are reduced
-Chronic, recurrent pulmonary embolism where anticoagulant therapy has failed or is contraindicated.
The Denali™ Filter may be removed according to the instructions supplied under the Section labeled: Optional Procedure for Filter Removal.
The Denali™ Vena Cava Filter is a venous interruption device designed to prevent pulmonary embolism. The Denali™ Filter can be delivered via the femoral and jugular/subclavian approaches. A separate delivery system is available for each approach. The Denali™ Filter is designed to act as a permanent filter. When clinically indicated, the Denali™ Filter may be percutaneously removed after implantation according to the instructions provided under the "Optional Procedure for Filter Removal" section in the Instructions for Use.
The Denali™ Filter consists of twelve shape-memory laser-cut nickel-titanium appendages. These twelve appendages form two levels of filtration with the legs providing the lower level of filtration and the arms providing the upper level of filtration. The Denali™ Filter is intended to be used in the inferior vena cava (IVC) with a diameter less than or equal to 28mm.
The Denali™ Vena Cava Filter System consists of an introducer sheath, an 8 French dilator, and a preloaded Denali™ Filter in a storage tube with a pusher. The 8 French dilator accepts a 0.035" guidewire and allows for an 800 psi maximum pressure contrast power injection. Radiopaque marker bands on the end of the dilator aid in measuring the maximum indicated IVC diameter. They are spaced at a distance of 28mm (outer-to-outer). The 55cm, 8.4 French I.D. introducer sheath contains a radiopaque marker and hemostasis valve with a side port. The pusher advances the filter through the introducer sheath to the predeployment mark and is then used to fix the filter in place while the filter is unsheathed.
This document, K240257, pertains to a 510(k) premarket notification for the "Denali™ Vena Cava Filter System – Femoral and Jugular/Subclavian Delivery Kit." It explicitly states that no changes have been made to the subject Denali™ Filter or Delivery System and that no performance testing was performed. The purpose of the submission is solely to update the Instructions for Use (IFU) for inclusion of the PRESERVE study summary data.
Therefore, there is no information in this document regarding new acceptance criteria for the device or a study performed to prove the device meets new acceptance criteria. The approval is based on the substantial equivalence to a predicate device (K160866) and the clinical testing previously provided to the FDA under an Investigational Device Exemption.
As the request asks to describe the acceptance criteria and the study that proves the device meets the criteria, and this document states no new testing was performed, I cannot provide the requested information from this document.
Summary of available information:
- No new acceptance criteria or device performance reported in this document. The submission focuses on updating the Instructions for Use with summary data from the PRESERVE study, which was previously conducted for the predicate device.
- No new studies were conducted for this submission. The document explicitly states: "Therefore, no performance testing was performed." and "The clinical testing was previously provided to FDA under the Investigational Device Exemption."
All other questions regarding sample size, data provenance, number of experts, adjudication method, MRMC study, standalone study, type of ground truth, and training set details are not applicable or cannot be answered from the provided text, as no new studies or performance evaluations were conducted for this 510(k) submission.
(116 days)
The BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasal and nasopharyngeal swabs of symptomatic patients. The BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit (also referred to as the BD Veritor System and BD Veritor System Flu A+B) is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive, and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.
Performance characteristics for influenza A and B were established during January through March of 2011 when influenza viruses A/2009 H1N1, A/H3N2, B/Victoria lineage, and B/Yamagata lineage were the predominant influenza viruses in circulation according to the Morbidity and Mortality Weekly Report from the CDC entitled "Update: Influenza Activity-United States, 2010-2011 Season, and Composition of the 2011-2012 Influenza Vaccine." Performance characteristics may vary against other emerging influenza viruses.
If infection with a novel influenza virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to the state or local health department for testing. Virus culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.
The BD Veritor™ System for Rapid Detection of Flu A+B CLIA Waived Kit is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral antigens from nasopharyngeal and nasal swabs of symptomatic patients. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. It is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single test device. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other management decisions. All negative test results should be confirmed by another methodology, such as a nucleic acid-based method.
BD Veritor™ System Flu A+B test devices are interpreted by a BD Veritor™ Plus Analyzer. When using the BD Veritor™ Plus Analyzer, workflow steps depend on the selected operational mode and the Analyzer configuration settings. In Analyze Now mode, the instrument evaluates assay devices after manual timing of their development. In Walk Away mode, devices are inserted immediately after application of the specimen, and timing of assay development and analysis is automated.
The BD Veritor™ System Flu A+B CLIA-Waived Kit is an immuno-chromatographic assay for detection of influenza A and B viral antigens in samples processed from respiratory specimens. The viral antigens detected by the BD Flu A+B test are nucleoprotein, not hemagglutinin (HA) or neuraminidase (NA) proteins. Flu viruses are prone to minor point mutations (i.e., antigenic drift) in either one or both of the surface proteins (i.e., HA or NA). The BD Flu A+B test is not affected by antigenic drift or shift because it detects the highly conserved nucleoprotein of the influenza viruses. To perform the test, the patient specimen swab is treated in a supplied reaction tube prefilled with a lysing agent that serves to expose the target viral antigens, and then expressed through a filter tip into the sample well on a BD Veritor™ Flu A+B test device. Any influenza A or influenza B viral antigens present in the specimen bind to anti-influenza antibodies conjugated to colloidal gold micro-particles on the BD Veritor™ Flu A+B test strip. The antigen-conjugate complex then migrates across the test strip to the capture zone and reacts with either Anti-Flu A or Anti-Flu B antibodies that are immobilized on the two test lines on the membrane.
The BD Flu A+B test device shown in Figure 1 is designed with five spatially distinct zones including positive and negative control line positions, separate test line positions for the target analytes, and a background zone. The test lines for the target analytes are labeled on the test device as 'A' for flu A position, and 'B' for flu B position. The onboard positive control ensures the sample has flowed correctly and is indicated on the test device as 'C'. Two of the five distinct zones on the test device are not labeled. These two zones are an onboard negative control line and an assay background zone. The active negative control feature in each test identifies and compensates for specimen-related, nonspecific signal generation. The remaining zone is used to measure the assay background.
The BD Veritor™ Plus Analyzer is a digital immunoassay instrument that uses a reflectance-based measurement method and applies assay specific algorithms to determine the presence or absence of the target analyte. The Analyzer supports the use of different assays by reading an assay-specific barcode on the test device. Depending on the configuration chosen by the operator, the instrument communicates status and results to the operator via a liquid crystal display (LCD) on the instrument, a connected printer, or through a secure connection to the facility's information system.
In the case of the Flu A + B test, the BD Veritor™ Plus Analyzer subtracts nonspecific signal at the negative control line from the signal present at both the Flu A and Flu B test lines. If the resultant line signal is above a pre-selected assay cutoff, the specimen scores as positive. If the resultant line signal is below the cutoff, the specimen scores as negative. Use of the active negative control feature allows the BD Veritor™ Plus Analyzer to correctly interpret test results that cannot be scored visually because the human eye is unable to accurately perform the subtraction of the nonspecific signal. The measurement of the assay background zone is an important factor during test interpretation as the reflectance is compared to that of the control and test zones. A background area that is white to light pink indicates the device has performed correctly.
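The subtraction-and-cutoff interpretation described above can be summarized in a short sketch. Reflectance units, cutoff values, and the exact validity checks (positive control and background zone) are assumptions for illustration; BD's actual analyzer algorithms are proprietary and not specified in this summary.

```python
def interpret_line(test_signal: float, negative_control_signal: float, cutoff: float) -> str:
    """Subtract nonspecific signal measured at the onboard negative-control line
    from the analyte line, then compare the corrected signal to the assay cutoff."""
    corrected = test_signal - negative_control_signal
    return "POSITIVE" if corrected > cutoff else "NEGATIVE"

def interpret_flu_device(flu_a_signal: float, flu_b_signal: float,
                         negative_control_signal: float,
                         positive_control_ok: bool, background_ok: bool,
                         cutoff_a: float, cutoff_b: float) -> dict:
    """Device-level result: only reported when the onboard positive control shows
    correct sample flow and the background zone is within the expected range."""
    if not (positive_control_ok and background_ok):
        return {"Flu A": "INVALID", "Flu B": "INVALID"}
    return {
        "Flu A": interpret_line(flu_a_signal, negative_control_signal, cutoff_a),
        "Flu B": interpret_line(flu_b_signal, negative_control_signal, cutoff_b),
    }
```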
The provided text describes a 510(k) premarket notification for modifications to the BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit. The current submission (K232434) focuses on changes to the BD Veritor™ Plus Analyzer, not the assay itself. Therefore, the details about acceptance criteria and clinical performance studies relate to the predicate device (K223016) and the assay's original clearance (K180438), as the modifications in K232434 do not impact the assay's analytical or clinical performance.
Here's a breakdown based on the provided input:
1. Table of Acceptance Criteria and Reported Device Performance
The current submission (K232434) is for modifications to the analyzer, not a new or modified assay. Therefore, it does not present new acceptance criteria or device performance data for the assay's diagnostic accuracy. Instead, it relies on the previously established performance of the predicate device (K223016) which itself relies on the performance established in K180438. The acceptance criteria for the current submission are related to the safety and electromagnetic compatibility of the modified analyzer.
| Acceptance Criteria (for Analyzer Modifications in K232434) | Reported Device Performance (for Analyzer Modifications in K232434) |
|---|---|
| Compliance with Safety Requirements for Electrical Equipment (IEC 61010-1:2010, IEC 61010-1:2010/AMD 1:2016, IEC 61010-2-101:2018) | Demonstrated compliance with specified standards |
| Compliance with Electromagnetic Compatibility and Electrical Safety (EN IEC 61326-1:2020, EN IEC 61326-2-6:2021, EN 60601-1-2:2015 + A1: 2021 [equivalent of ANSI AAMI IEC 60601-1-2:2014 including AMD 1:2021]) | Demonstrated compliance with specified standards; no EMI or ESD susceptibility was observed during compliance testing. Analyzer functionalities remained the same, and operations and performance were not impacted. |
Performance Characteristics for Influenza A and B (as established for K180438 / K223016 and referenced here):
The document mentions that performance characteristics for influenza A and B were established during January through March of 2011. While specific sensitivity and specificity values are not provided in this document excerpt, the "Indications for Use" and "Intended Use" sections clearly state that the device is for "direct and qualitative detection of influenza A and B viral nucleoprotein antigens."
2. Sample Size Used for the Test Set and Data Provenance
The provided text does not include the specific sample size for the test set used to establish the clinical performance of the Flu A+B assay, nor does it explicitly state the data provenance (e.g., retrospective or prospective studies) in detail, beyond mentioning:
- "Performance characteristics for influenza A and B were established during January through March of 2011"
- This period corresponded to when "influenza viruses A/2009 H1N1, A/H3N2, B/Victoria lineage, and B/Yamagata lineage were the predominant influenza viruses in circulation" in the United States, according to CDC reports. This implies real-world, clinical data from symptomatic patients in the US during that influenza season was used.
Since the current submission is for analyzer modifications and explicitly states, "Clinical Performance: Clinical performance testing was not required because the changes made to the Analyzer do not have an impact on the assay-specific clinical performance," this information would be found in the original submission (K180438) that cleared the assay.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts
This information is not provided in the given text. It would be part of the detailed clinical study report from the original assay clearance (K180438).
4. Adjudication Method for the Test Set
This information is not provided in the given text. It would be part of the detailed clinical study report from the original assay clearance (K180438).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
This is not applicable to the BD Veritor System. This device is a rapid chromatographic immunoassay read by a digital analyzer (BD Veritor™ Plus Analyzer), not an AI system designed to assist human readers in interpreting complex images or data. The analyzer automatically interprets the test results based on its algorithms and provides a positive, negative, or invalid result, rather than providing input to a human reader for their interpretation.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done
Yes, in essence, the BD Veritor™ Plus Analyzer operates as a standalone algorithm without human-in-the-loop performance for result interpretation. The text states:
- "The Analyzer is a digital immunoassay instrument that uses a reflectance-based measurement method and applies assay specific algorithms to determine the presence or absence of the target analyte."
- "The BD Veritor™ Plus Analyzer subtracts nonspecific signal at the negative control line from the signal present at both the Flu A and Flu B test lines. If the resultant line signal is above a pre-selected assay cutoff, the specimen scores as positive. If the resultant line signal is below the cutoff, the specimen scores as negative."
- "Use of the active negative control feature allows the BD Veritor™ Plus Analyzer to correctly interpret test results that cannot be scored visually because the human eye is unable to accurately perform the subtraction of the nonspecific signal."
This clearly describes an automated interpretation process by the device's algorithms.
7. The Type of Ground Truth Used (expert consensus, pathology, outcomes data, etc.)
The document states, "A negative test is presumptive, and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay." This implies that viral culture or an FDA-cleared influenza A and B molecular assay served as the reference standard (ground truth) for establishing performance characteristics during the original clinical studies.
8. The Sample Size for the Training Set
This information is not provided in the given text, and it's less relevant for an immunoassay where "training data" for a machine learning model might not be explicitly defined in the same way as for complex AI algorithms. For immunoassays, the "training" involves optimizing the assay reagents and conditions, and setting cutoff values, which are then validated with clinical samples.
9. How the Ground Truth for the Training Set Was Established
This information is not provided in the given text. As mentioned above, the concept of a "training set ground truth" might not apply directly in the traditional sense for this type of immunoassay. Instead, the ground truth for establishing performance (and implicitly for setting cutoffs) would be established using the reference methods mentioned in point 7.
(118 days)
BD Respiratory Viral Panel for BD MAX™ System is an automated multiplexed real-time reverse transcriptase polymerase chain reaction (RT-PCR) test intended for the simultaneous, qualitative detection and differentiation of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), influenza A, influenza B, and/or respiratory syncytial virus (RSV) nucleic acid in nasopharyngeal swab (NPS) and anterior nasal swab (ANS) specimens from individuals with signs and symptoms of respiratory tract infection. Clinical signs and symptoms of respiratory tract infection due to SARS-CoV-2, influenza, and RSV can be similar.
BD Respiratory Viral Panel for BD MAX™ System is intended for use as an aid in the differential diagnosis of SARS-CoV-2, influenza A, influenza B, and/or RSV infection if used in conjunction with other clinical and epidemiological information, and laboratory findings. SARS-CoV-2, influenza A, influenza B, and RSV viral nucleic acid are generally detectable in NPS and ANS specimens during the acute phase of infection.
Positive results do not rule out co-infection with other organisms. The agent(s) detected by the BD Respiratory Viral Panel for BD MAX™ System may not be the definitive cause of disease.
Negative results do not preclude SARS-CoV-2, influenza A, influenza B, and/or RSV infection.
The results of this test should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
BD Respiratory Viral Panel-SCV2 for BD MAX™ System:
BD Respiratory Viral Panel-SCV2 for BD MAX™ System is an automated multiplexed real-time reverse transcriptase polymerase chain reaction (RT-PCR) test intended for the simultaneous, qualitative detection of severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) viral nucleic acid in nasopharyngeal swab (NPS) and anterior nasal swab (ANS) specimens from individuals with signs and symptoms of respiratory tract infection. SARS-CoV-2 viral RNA is generally detectable in NPS and ANS specimens during the acute phase of infection.
The BD Respiratory Viral Panel-SCV2 for BD MAX™ System is intended for use as an aid in the diagnosis of SARS-CoV-2 infection if used in conjunction with other clinical and epidemiological information, and laboratory findings.
Positive results do not rule out co-infection with other organisms. The agent detected by the BD Respiratory Viral Panel-SCV2 for BD MAX™ System may not be the definitive cause of disease.
Negative results do not preclude SARS-CoV-2 infection.
The results of this test should not be used as the sole basis for diagnosis, treatment, or other patient management decisions.
The BD Respiratory Viral Panel (BD RVP) and BD Respiratory Viral Panel-SCV2 (BD RVP-SCV2), along with the BD MAX™ System, consist of an instrument with associated hardware and accessories, disposable microfluidic cartridges, master mixes, unitized reagent strips, and extraction reagents. The instrument automates sample preparation, including target lysis, Total Nucleic Acid (TNA) extraction and concentration, reagent rehydration, and target nucleic acid amplification and detection using real-time PCR. The assay includes a Sample Processing Control (SPC) that is present in the Extraction Tube. The SPC monitors RNA extraction steps, thermal cycling steps, reagent integrity, and the presence of inhibitory substances. The BD MAX™ System software automatically interprets test results. For the BD Respiratory Viral Panel for BD MAX™ System and BD Respiratory Viral Panel-SCV2 for BD MAX™ System, a test result may be called as POS, NEG, or UNR (Unresolved) based on the amplification status of the targets and of the Sample Processing Control. IND (Indeterminate) or INC (Incomplete) results are due to BD MAX™ System failure.
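A minimal sketch of the per-analyte reporting categories described above follows. The exact conditions distinguishing IND from INC, and the use of the SPC to gate negative calls, are assumptions consistent with the description rather than BD's documented decision rules.

```python
def bd_max_call(target_amplified: bool, spc_amplified: bool,
                system_failure: bool = False, run_incomplete: bool = False) -> str:
    """Per-analyte qualitative call: POS, NEG, or UNR from target/SPC amplification;
    IND or INC reserved for BD MAX System failures."""
    if run_incomplete:
        return "INC"  # run did not finish because of a system failure
    if system_failure:
        return "IND"  # indeterminate because of a system failure
    if target_amplified:
        return "POS"
    if spc_amplified:
        return "NEG"  # SPC amplified, so the negative target call is reportable
    return "UNR"      # neither target nor SPC amplified: unresolved (possible inhibition)
```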
The provided text describes the analytical and clinical performance evaluation of two molecular diagnostic devices: the BD Respiratory Viral Panel (BD RVP) for BD MAX System and the BD Respiratory Viral Panel-SCV2 (BD RVP-SCV2) for BD MAX System. Both are real-time RT-PCR tests for detecting respiratory viruses in nasopharyngeal and anterior nasal swab specimens.
Here's an analysis of the acceptance criteria and study proving device performance, based on the provided text:
Important Note: The provided text is a summary of a 510(k) premarket notification. As such, it reports performance data but does not explicitly state "acceptance criteria" in the format of a pre-defined table with specific percentage goals for PPA and NPA. Instead, the reported performance itself serves as the demonstration that the device is "substantially equivalent" to predicate devices, implying that the observed performance would be considered acceptable by the FDA for the given intended use. For this response, I will interpret "acceptance criteria" as the performance metrics and results deemed sufficient for clearance.
1. Table of Acceptance Criteria (Interpreted) and Reported Device Performance
Given that this is a 510(k) submission, the "acceptance criteria" are not explicitly listed with numeric targets in the document. Instead, the reported performance is presented to demonstrate substantial equivalence to legally marketed predicate devices. The implicit acceptance criteria are the high percentages of Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) achieved by the device when compared to a composite reference method (ground truth).
The primary performance data comes from clinical evaluation studies for both the BD RVP and BD RVP-SCV2.
BD Respiratory Viral Panel (BD RVP) for BD MAX System - Clinical Performance Summary
| Analyte (Specimen Type) | Performance Metric | Reported Device Performance | Implicit Acceptance Criteria (based on reported) |
|---|---|---|---|
| SARS-CoV-2 (NP Swab) | Positive Percent Agreement (PPA) | Overall: 98.9% (517/523) | High PPA (e.g., >95%) |
| | Negative Percent Agreement (NPA) | Overall: 97.7% (999/1022) | High NPA (e.g., >95%) |
| SARS-CoV-2 (ANS Swab) | Positive Percent Agreement (PPA) | Overall: 98.4% (478/486) | High PPA (e.g., >95%) |
| | Negative Percent Agreement (NPA) | Overall: 97.7% (1050/1075) | High NPA (e.g., >95%) |
| Flu A (NP Swab) | Positive Percent Agreement (PPA) | Overall: 96.7% (59/61) | High PPA (e.g., >95%) |
| | Negative Percent Agreement (NPA) | Overall: 99.5% (1494/1501) | High NPA (e.g., >95%) |
| Flu A (ANS Swab) | Positive Percent Agreement (PPA) | Overall: 96.9% (62/64) | High PPA (e.g., >95%) |
| | Negative Percent Agreement (NPA) | Overall: 99.7% (1496/1500) | High NPA (e.g., >95%) |
| Flu B (NP Swab) | Positive Percent Agreement (PPA) | No data for PPA rate calculation (prospective); Retrospective: 100.0% (58/58) | High PPA (e.g., >95%) |
| | Negative Percent Agreement (NPA) | Prospective: 99.9% (1561/1562); Retrospective: 98.9% (180/182) | High NPA (e.g., >95%) |
| Flu B (ANS Swab) | Positive Percent Agreement (PPA) | No data for PPA rate calculation (prospective); Retrospective: 100.0% (12/12) | High PPA (e.g., >95%) |
| | Negative Percent Agreement (NPA) | Prospective: 100.0% (1564/1564); Retrospective: 98.9% (172/174) | High NPA (e.g., >95%) |
| RSV (NP Swab) | Positive Percent Agreement (PPA) | Overall: 100.0% (12/12) | High PPA (e.g., >95%) |
| | Negative Percent Agreement (NPA) | Overall: 100.0% (1550/1550) | High NPA (e.g., >95%) |
| RSV (ANS Swab) | Positive Percent Agreement (PPA) | Overall: 91.7% (11/12) | High PPA (e.g., >95%) |
| | Negative Percent Agreement (NPA) | Overall: 99.9% (1551/1552) | High NPA (e.g., >95%) |
BD Respiratory Viral Panel-SCV2 (BD RVP-SCV2) for BD MAX System - Clinical Performance Summary
For BD RVP-SCV2 clinical performance, the text refers to the same clinical study results as BD RVP, but only for SARS-CoV-2.
- SARS-CoV-2 (NP Swab): the 34.6% (541/1562) figure cited in the text appears to be a positivity (prevalence) rate rather than a percent agreement; NPA is not explicitly listed there. The clinical performance summary tables (Tables 28 and 29) for SARS-CoV-2 are the relevant ones:
- NP (Overall): PPA 98.9%, NPA 97.7%
- ANS (Overall): PPA 98.4%, NPA 97.7%
(These are the same as reported for the full BD RVP, which makes sense as SCV2 is a subset).
2. Sample Size Used for the Test Set and Data Provenance
BD Respiratory Viral Panel (BD RVP) for BD MAX System:
- Prospective Clinical Evaluation:
- Sample Size:
- 2,005 subjects enrolled.
- Test Set: 1,562 nasopharyngeal swabs (NPS) and 1,566 anterior nasal swabs (ANS).
- For performance calculations, slightly fewer compliant specimens were used:
- NPS: 1,545 for SARS-CoV-2, 1,562 for Flu A, Flu B, and RSV.
- ANS: 1,561 for SARS-CoV-2, 1,564 for Flu A, Flu B, and RSV.
- Data Provenance:
- Country of Origin: United States (six geographically distinct U.S. study sites) and Europe (two geographically distinct sites).
- Retrospective or Prospective: Primarily Prospective. Specimens were collected between January and August 2022.
- January to early April 2022: Prospective archived/frozen (Category II).
- Mid-April to August 2022: Prospective fresh (Category I).
- Retrospective Clinical Evaluation:
- Sample Size:
- NPS: 240 frozen retrospective nasopharyngeal swabs.
- Nasal Swabs: 187 frozen retrospective nasal swabs.
- Data Provenance:
- Country of Origin: Not explicitly stated within the retrospective section, but implied to be part of the broader US/Europe context from the prospective study.
- Retrospective or Prospective: Retrospective. Specimens collected between December 2019 and January 2022 (for NPS) and February 2021 and February 2023 (for nasal swabs) from external sources. These were "archived, frozen specimens".
BD Respiratory Viral Panel-SCV2 for BD MAX System:
- The text states: "In the BD Respiratory Viral Panel-SCV2 for BD MAX™ System clinical study, reportable results from specimens compliant at the specimen and PCR levels were obtained from 8 geographically diverse sites. Nasopharyngeal and nasal specimens totaled 1,566 respectively." This refers to the same clinical study data as the broader BD RVP, specifically focusing on SARS-CoV-2 results from that dataset.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The document does not specify the number of experts or their qualifications for establishing ground truth.
Instead, the ground truth for the clinical evaluation (test set) was established by comparison to highly sensitive molecular assays (NAATs).
- For SARS-CoV-2: A composite method of "two out of three highly sensitive molecular assays (NAATS) that are FDA authorized under EUA for SARS-CoV-2."
- For influenza B and RSV: "an FDA-cleared high sensitivity RT-PCR assay."
This indicates an analytical or "reference method" based ground truth rather than a human expert consensus.
4. Adjudication Method for the Test Set
The adjudication method relies on a "two out of three" rule for SARS-CoV-2 and a single FDA-cleared RT-PCR assay for influenza B and RSV.
- SARS-CoV-2: "Any specimen that tested positive by two EUA assays was considered positive for SARS-CoV-2, whereas any specimen that tested negative by two EUA assays was considered negative."
- Influenza B and RSV: Comparison to a single "FDA-cleared high sensitivity RT-PCR assay."
No explicit 2+1 or 3+1 human expert adjudication method is mentioned, as the ground truth derivation is assay-based.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done
No, an MRMC comparative effectiveness study was not done. This device is a diagnostic assay (RT-PCR test), not an AI-assisted imaging product. Therefore, the concept of "human readers improve with AI vs without AI assistance" does not apply. The performance is assessed on the device's ability to accurately detect specific viral nucleic acids.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
While the term "algorithm" is used to describe the software interpreting PCR data, the entire device (BD MAX System with the viral panel) is an automated system. The performance data presented (PPA, NPA) is the standalone performance of the device without human interpretation of the raw PCR signals for diagnosis. The BD MAX™ System software "automatically interprets test results" (page 7). Therefore, the clinical performance described is the standalone performance.
7. The Type of Ground Truth Used
The ground truth used was reference method comparison / composite molecular assay results:
- SARS-CoV-2: Composite reference method using results from "two out of three highly sensitive molecular assays (NAATs) that are FDA authorized under EUA for SARS-CoV-2." (Page 42)
- Influenza A, Influenza B, and RSV: An "FDA-cleared high sensitivity RT-PCR assay." (Page 42)
This is a form of "analytical ground truth" established by other validated diagnostic tests, rather than pathology, expert consensus on images, or long-term clinical outcomes.
8. The Sample Size for the Training Set
The document does not explicitly state a separate "training set" sample size for the assay's development in the context of a statistical machine learning model.
However, it mentions: "As the product undergoes product development, the data is supplemented, and the algorithm is adjusted ('trained') using viral cultures spiked into clinical background matrices at levels surrounding the limit of detection and expected clinical range." (Page 30, and repeated on Page 40 for SCV2). This refers to internal development and optimization using contrived samples, not a distinct, pre-defined "training set" of patient data for a typical AI/ML model for regulatory submission.
Performance data for the submission is based on the "test set" (clinical evaluation studies described in section 2) and analytical studies.
9. How the Ground Truth for the Training Set Was Established
As noted above, there isn't a "training set" ground truth in the sense of independently verified patient data used to directly train a submitted AI model. Instead, the "algorithm" (how the BD MAX software interprets PCR signals to call positive/negative) was "adjusted ('trained')" using:
- Viral cultures spiked into clinical background matrices at various concentrations (e.g., around the Limit of Detection and expected clinical range).
- These are contrived samples with a known (positive or negative) status and precise viral concentrations.
The ground truth for these internal development/training stages would be the known concentration of spiked viral material.
(580 days)
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application is an in-vitro diagnostic software program that requires the BD Kiestra™ Laboratory Automation Solution in order to operate.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application is applied to digital images of BD BBL™ CHROMagar™ MRSA II culture plates inoculated with anterior nares samples.
Algorithms are applied to digital images to provide a qualitative assessment of colony growth and colorimetric detection of target colonies for the detection of nasal colonization by MRSA and to serve as an aid in the prevention and control of MRSA infection. Applied algorithms provide the following results:
- "No growth", which will be manually released individually or as a batch (with other no growth samples) by . a trained microbiologist upon review of the digital plate images.
- . "Growth - other" (growth without mauve color), which digital plate images will be manually reviewed by a trained microbiologist.
- "Growth MRSA Mauve" (growth with mauve color), which digital plate images will be manually reviewed ● by a trained microbiologist.
The assay is not intended to guide, diagnose, or monitor treatment for MRSA infections. It is not intended to provide results of susceptibility to oxacillin/methicillin.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application is indicated for use in the clinical laboratory.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application will be optional for the BD Kiestra™ Laboratory Automation Solution and will support laboratory technologists in batching no growth on the BD BBL™ CHROMagar™ MRSA II, growth with no key colony color detected for MRSA ("Growth – other"), and growth with key colony color detected for MRSA ("Growth MRSA Mauve"). These classifications will be characterized as "no growth" and "growth with mauve color" from BD BBL™ CHROMagar™ MRSA II media, from anterior nares samples.
The technologist has the ability to create work lists in BD Synapsys™ informatics solution based on the classifications (growth, no growth or growth with mauve color). These work lists will be used for followup work and batching of results, at the sample level.
The BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application will apply Image Algorithms to the digital images to determine whether the plate contains "growth" or "no growth". At the individual plate level, when the Image Algorithms detect colony growth and potential mauve color, the classification will be "growth with mauve color".
When the BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application is not capable of automatically generating the outputs (visual attributes: growth with or without mauve color/no growth), the laboratory technologist will be required to read the digital image of the plate on the computer screen and decide on follow-up action as is the current standard laboratory practice.
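The decision flow described in the two paragraphs above can be sketched as follows. The function and its boolean inputs are hypothetical simplifications (the real application operates directly on digital plate images); the sketch only mirrors the stated branching into the three classifications plus the manual-read fallback.

```python
from enum import Enum

class PlateClass(Enum):
    NO_GROWTH = "No growth"
    GROWTH_OTHER = "Growth - other"
    GROWTH_MRSA_MAUVE = "Growth MRSA Mauve"
    MANUAL_READ = "Read digital image manually"

def classify_plate(colonies_detected: bool, mauve_color_detected: bool,
                   algorithm_output_available: bool = True) -> PlateClass:
    """Mirror of the stated workflow: growth vs. no growth first, then the
    colorimetric check for mauve colonies; manual reading when no output is produced."""
    if not algorithm_output_available:
        return PlateClass.MANUAL_READ        # technologist reads the digital image
    if not colonies_detected:
        return PlateClass.NO_GROWTH          # eligible for batch release after review
    if mauve_color_detected:
        return PlateClass.GROWTH_MRSA_MAUVE  # image reviewed by a trained microbiologist
    return PlateClass.GROWTH_OTHER           # growth without the key MRSA color
```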
Here's a summary of the acceptance criteria and the study details for the BD Kiestra™ Methicillin-resistant Staphylococcus aureus (MRSA) Application, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The acceptance criteria are not explicitly stated as distinct numerical targets in the provided document. However, the studies demonstrate performance metrics related to agreement with manual interpretation and reproducibility.
| Performance Metric | Acceptance Criteria (Implied by study results being presented) | Reported Device Performance (Combined) |
|---|---|---|
| Digital Quality Image Study | | |
| Agreement: No Growth | High percentage agreement with manual reading | 93.1% (148/159) |
| Agreement: Non-Mauve Growth | High percentage agreement with manual reading | 98.3% (169/172) |
| Agreement: Mauve Growth | High percentage agreement with manual reading | 98.9% (188/190) |
| Digital Image Reproducibility Study | | |
| Reproducibility: No Growth | High percentage agreement among microbiologists | 98.0% (150/153) |
| Reproducibility: Non-Mauve Growth | High percentage agreement among microbiologists | 100.0% (175/175) |
| Reproducibility: Mauve Growth | High percentage agreement among microbiologists | 98.9% (188/190) |
| Reproducibility Study (Seeded Samples) | | |
| Combined Growth (Saline) | High percentage detection of no growth | 99.7% (2082/2089) (99.3%, 99.8% CI) |
| Combined Color (Saline) | High percentage detection of no growth | 99.7% (2082/2089) (99.3%, 99.8% CI) |
| Combined Growth (MRSA strains) | 100% detection of growth for most dilutions | 100% (most dilutions) |
| Combined Color (MRSA strains) | 100% detection of mauve color for most dilutions | 100% (most dilutions) |
| Combined Growth (S. haemolyticus (non-mauve)) | High percentage detection of growth (without mauve) | 94.4% - 100% |
| Combined Color (S. haemolyticus (non-mauve)) | High percentage detection of growth (without mauve) | 94.4% - 100% |
| Clinical Performance Studies (Against Manual Read at Clinical Sites) | | |
| No Growth Percent Agreement | High percentage agreement with manual reading | 75.6% (773/1023) |
| Non-Mauve Percent Agreement | High percentage agreement with manual reading | 84.5% (207/245) |
| Mauve Percent Agreement | High percentage agreement with manual reading | 98.2% (319/325) |
2. Sample sizes used for the test set and the data provenance
- Digital Quality Image Study:
- Sample Size: 521 plate images (across 3 microbiologists, so 174, 172, and 175 plates respectively for each microbiologist's manual review).
- Data Provenance: Internal digital image quality study, using simulated surveillance samples (MRSA, non-MRSA, saline controls). Implies data was generated specifically for this study. The location or country of origin is not explicitly stated, but it's an "internal" study. The nature (simulated surveillance samples) suggests it was a prospective generation of samples for the study.
- Digital Image Reproducibility Study:
- Sample Size: 518 plate images (3 images excluded due to invalid results from at least one microbiologist).
- Data Provenance: Same as the Digital Quality Image Study, as it re-analyzed the results from that study. "Internal" study, likely newly generated for this purpose.
- Reproducibility Study (Seeded Samples):
- Sample Size: Variable, ranging from 55 to 1056 individual observations per dilution/organism combination across two sites. Total observations are in the thousands (e.g., 2089 for saline controls).
- Data Provenance: Internal reproducibility study using seeded samples (bacterial strains grown in saline). Conducted at two internal sites (BD Sparks, MD location). This is also prospective generation of data.
- Clinical Performance Studies:
- Sample Size: Approximately 1,800 clinical anterior nares specimens.
- Data Provenance: Clinical anterior nares specimens. Collected at "three clinical sites." The text does not specify the country of origin, but "clinical sites" generally refer to real-world healthcare settings. This is a prospective collection of real patient samples.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Digital Quality Image Study & Digital Image Reproducibility Study:
- Number of Experts: 3.
- Qualifications: "Three trained clinical microbiologists." Specific years of experience or board certifications are not provided.
- Reproducibility Study (Seeded Samples):
- The ground truth for seeded samples is inherently defined by the known organisms and dilutions used to create the samples. The study then assesses the automated system's agreement with this known "ground truth" for growth and color. Human experts here are likely involved in verifying the initial seeding and later in the interpretation of the results, but the definition of "mauve" or "non-mauve" for specific strains is pre-defined.
- Clinical Performance Studies:
- Number of Experts: Not explicitly stated as a fixed number, but the images were "manually read by trained microbiologists at those sites." This implies multiple microbiologists across the three clinical sites.
- Qualifications: "Trained microbiologists." Specific years of experience or board certifications are not provided.
4. Adjudication method for the test set
- Digital Image Reproducibility Study: The "final digital image result" was "determined by 2/3 majority microbiologist result." This indicates a form of consensus-based adjudication, specifically a majority vote among the three microbiologists.
- Other studies (Digital Quality Image, Reproducibility with Seeded Samples, Clinical Performance): The primary comparison for these studies appears to be between the device's output and individual manual reads by microbiologists or known characteristics of seeded samples. Explicit adjudication methods for generating a single "ground truth" reference for these specific studies are not detailed, though for the clinical study, the manual read at each site serves as the reference.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, what was the effect size of how much human readers improve with AI vs. without AI assistance
- A MRMC comparative effectiveness study, where human readers' performance with and without AI assistance is quantitatively measured and compared, is not explicitly described in the provided text.
- The studies focus on the agreement of the device with manual reads (Digital Quality Image, Clinical Performance) and the reproducibility of human reads of digital images (Digital Image Reproducibility). While these demonstrate the device's capability to produce similar results to human reads, they don't directly quantify the improvement of human readers due to AI assistance. The device's stated purpose is to "aid in the prevention and control of MRSA infection" and to support laboratory technologists in "batching" and "streamline and optimize the reading workflow," suggesting an assistive role. However, the magnitude of this assistance in terms of effect size on human reader performance is not presented.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Yes, a standalone performance analysis was conducted. The tables in the "Analytical Performance" and "Clinical Performance Studies" sections (e.g., Table 1, Table 6, and the final clinical performance table on page 17) directly compare the BD Kiestra™ MRSA App's output to the manual plate reads or ground truth. The "Percent Agreement" values (e.g., "No Growth Percent Agreement 75.6% (773/1023)") represent the algorithm's performance against those references (see the sketch after this list).
- It's important to note that even when the app provides results, the indications for use state that "No growth", "Growth - other", and "Growth MRSA Mauve" classifications "will be manually reviewed by a trained microbiologist." This means that while the algorithm provides a classification, a human-in-the-loop is always part of the final release process. However, the data presented directly assesses the algorithm's standalone accuracy in classifying the images.
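As a minimal illustration of how the percent-agreement figures above are computed, the sketch below divides concordant calls by total calls; only the 773/1023 figure is taken from the summary, and the function name is ours.

```python
def percent_agreement(concordant: int, total: int) -> float:
    """Fraction of images where the app's classification matched the manual reference read."""
    return concordant / total

# "No Growth Percent Agreement 75.6% (773/1023)" quoted above
print(f"{percent_agreement(773, 1023):.1%}")  # -> 75.6%
```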
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Expert Consensus: Used in the "Digital Image Reproducibility Study," where a 2/3 majority microbiologist result formed the ground truth for comparing individual microbiologist reproducibility.
- Manual Expert Reading: For the "Digital Quality Image Study" and the "Clinical Performance Studies," the ground truth was established by the manual interpretation of the plates (or digital images of plates) by trained microbiologists. This acts as the "gold standard" for comparison.
- Known Sample Characteristics: For the "Reproducibility Study (Seeded Samples)," the ground truth was defined by the known bacterial strains, dilutions, and expected growth/color characteristics of the seeded samples.
8. The sample size for the training set
- The document does not provide information on the sample size used for the training set. It focuses solely on the performance of the already-trained and developed algorithm.
9. How the ground truth for the training set was established
- The document does not describe how the ground truth for the training set was established. Information regarding the training data, annotation process, or expert involvement in labeling training data is not included in the provided text.
(120 days)
The BD Veritor System for Rapid Detection of Flu A+B CLIA waived assay is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasal and nasopharyngeal swabs of symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B (also referred to as the BD Veritor System and BD Veritor System Flu A+B) is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.
The BD Veritor™ System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral antigens from nasopharyngeal and nasal swabs of symptomatic patients. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. It is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single test device. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other management decisions. All negative test results should be confirmed by another methodology, such as a nucleic acid-based method.
BD Veritor™ System Flu A+B test devices are interpreted by a BD Veritor™ Plus Analyzer. When using the BD Veritor™ Plus Analyzer, workflow steps depend on the selected operational mode and the Analyzer configuration settings. In Analyze Now mode, the instrument evaluates assay devices after manual timing of their development. In Walk Away mode, devices are inserted immediately after application of the specimen, and timing of assay development and analysis is automated. Depending on the configuration chosen by the operator, the instrument communicates status and results to the operator via a liquid crystal display (LCD) on the instrument, a connected printer, or through a secure connection to the facility's information system.
This document (K223016) is a 510(k) Premarket Notification for a modified version of the BD Veritor System for Rapid Detection of Flu A+B CLIA-Waived Kit. The core assay (the rapid immunoassay for Influenza A and B detection) remains unchanged from previous clearances (K180438 and earlier). The modifications focus on the accompanying instrument, the BD Veritor™ Plus Analyzer.
Therefore, the document explicitly states "There have been no changes to the analytical performance of the BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit since the assay was last cleared in K180438. The modifications to the Analyzer do not have an impact on the assay-specific analytical performance. " and "There have been no changes to the clinical performance of the BD Veritor™ System for Rapid Detection of Flu A+B CLIA-Waived Kit since the assay was last cleared in K180438. The modifications to the Analyzer do not have an impact on the assay-specific clinical performance."
This means that no new performance studies (analytical or clinical) were conducted for the assay itself as part of this specific 510(k) submission. The acceptance criteria and performance data for the assay would refer to the studies presented in the previous 510(k)s, most recently K180438.
The testing performed for this K223016 submission focuses solely on the modifications made to the BD Veritor™ Plus Analyzer:
- Overvoltage Protection Circuitry: Testing confirms that the added circuitry does not affect the function of the trigger board (recognition of a cartridge and USB connection).
- Extended Lifetime: Verification that the Analyzer performs up to 10,000 cycles (tests) within current specifications (increased from 3,500 tests).
- InfoWiFi Module Functionality: Verification of the general functionalities of the new InfoWiFi module, which provides wireless communication capabilities.
Since the provided document does not contain the details of the analytical and clinical performance studies for the assay, and instead refers to previous submissions, I cannot extract the specific acceptance criteria and detailed study data for the assay itself from this text.
Assuming the request is for the acceptance criteria and study proving the changes to the analyzer meet the criteria, the following applies:
1. A table of acceptance criteria and the reported device performance:
| Feature Modified | Acceptance Criteria (Implicit from testing purpose) | Reported Device Performance (as stated in document) |
|---|---|---|
| Analyzer Trigger Board (Overvoltage Protection) | The added overvoltage protection circuitry must not affect the function of the trigger board, meaning it must still properly recognize an inserted cartridge and maintain USB connection functionality. | Testing was performed to confirm that the added overvoltage protection circuitry does not affect the function of the trigger board (recognition of a cartridge and the USB connection). |
| Analyzer Lifetime | The Analyzer must perform up to 10,000 cycles (tests) while maintaining current specifications (an increase from the previous 3,500 test specified lifetime). | Testing was performed to confirm that the Analyzer performs up to 10,000 cycles within the current specifications. |
| InfoWiFi Module | The InfoWiFi module must operate according to its intended design, providing general functionalities such as wireless communication capability (same functional features as InfoScan, plus wireless). | Testing was performed to verify general functionalities of the InfoWiFi module. |
2. Sample size used for the test set and the data provenance:
- Sample Size: Not explicitly stated in the document for any of the new tests. The document refers to "testing was performed."
- Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). This would typically be non-clinical, in-house verification testing by the manufacturer.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not applicable as this is a device modification verification, not a clinical study requiring expert ground truth for diagnostic accuracy. The "truth" here is engineering functionality.
4. Adjudication method for the test set:
- Not applicable. This is not a study requiring adjudication of diagnostic results.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- No. This type of study is for evaluating human reader performance with or without AI assistance. The modifications here are to the hardware of an automated test reader, not an AI diagnostic algorithm.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The BD Veritor System itself operates as a standalone diagnostic device interpreted by the analyzer. However, the algorithm for interpreting the test results (analyzing line intensity) was established in previous submissions (K180438) and is stated as "Same" and "Original" in the comparison table, meaning it was not changed or re-evaluated in this submission. The testing done for this 510(k) relates to the hardware modifications of the analyzer, not a new or modified interpretation algorithm.
7. The type of ground truth used:
- For the hardware modifications, the "ground truth" is defined by engineering specifications and expected functionality. For example, for "overvoltage protection," the ground truth is "does the circuit protect against overvoltage without impairing core function?" For "lifetime," the ground truth is "does the device successfully complete 10,000 tests?" For "InfoWiFi," the ground truth is "does it perform the specified wireless functionalities?"
8. The sample size for the training set:
- Not applicable. This is hardware verification, not a machine learning model.
9. How the ground truth for the training set was established:
- Not applicable.
(144 days)
BD Surgiphor™ Antimicrobial Irrigation System is intended to mechanically loosen and remove debris, and foreign materials, including microorganisms, from wounds.
The subject BD Surgiphor™ Antimicrobial Irrigation System is a terminally sterilized 450 mL aqueous solution for irrigation and debridement of wounds. The device includes one bottle of Surgiphor™ solution (0.5% Povidone Iodine) which is used to loosen and remove wound debris. The mechanical action of fluid moving across the wound provides the mechanism of action and aids in the loosening and removal of debris, and foreign materials, including microorganisms, from wounds. The povidone iodine in the Surgiphor™ solution serves as a preservative to ensure that no unwanted microbial growth occurs in the solution after the bottle is open.
The provided text describes a 510(k) premarket notification for a medical device called the "BD Surgiphor™ Antimicrobial Irrigation System." This submission is based on demonstrating substantial equivalence to a previously cleared predicate device (K213616).
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance:
The core of this 510(k) submission is to demonstrate that a modified device is substantially equivalent to a predicate device, not to establish new performance criteria for a novel device. Therefore, the "acceptance criteria" here are primarily tied to demonstrating that the changes made do not negatively impact the safety or effectiveness of the device compared to the predicate.
The main change in the subject device (K221504) from its predicate (K213616) is the removal of the SurgiRinse™ solution bottle. The device still includes one bottle of Surgiphor™ solution. The manufacturer asserts that the fundamental "mechanism of action" (mechanical loosening and removal of debris and foreign materials, including microorganisms, from wounds through fluid pressure) remains unchanged. The povidone iodine in the Surgiphor™ solution continues to act as a preservative.
Given this context, the acceptance criteria are implicitly met by demonstrating that the modified device performs comparably to the predicate device, specifically by showing that removing the rinse bottle does not introduce new safety or efficacy concerns.
1. Table of Acceptance Criteria and Reported Device Performance:
Since this is a 510(k) for a modified device, the "acceptance criteria" are not reported as specific performance metrics like sensitivity, specificity, or accuracy (as one might see for an AI/ML device). Instead, the acceptance criteria are met by demonstrating that the changes do not degrade performance or safety.
| Acceptance Criteria Category | Specific Criteria (Implicitly Met) | Reported Device Performance/Evidence Provided |
|---|---|---|
| Intended Use Equivalence | The modified device maintains the same intended use as the predicate. | "The BD Surgiphor™ Antimicrobial Irrigation System is unchanged from the legally marketed predicate BD Surgiphor™ Antimicrobial Irrigation System (K213616) in its intended use... specifically, to mechanically loosen and remove debris, and foreign materials, including microorganisms, from wounds." (Page 4, Comparison of Technological Characteristics) |
| Mechanism of Action | The modified device operates via the same mechanism of action as the predicate. | "The mechanism of action is defined by the fluid pressure of the solution dispensed upon a wound." (Page 4, Comparison of Technological Characteristics) This is consistent with the predicate. |
| Solution Composition | The primary active solution (Surgiphor™) remains chemically identical to that in the predicate. | "There is no change to the solution composition from the predicate to the subject Surgiphor™ solution." (Page 4, Comparison of Technological Characteristics) |
| Safety | The removal of the SurgiRinse™ bottle does not introduce new safety concerns (e.g., related to sterility, packaging integrity, or material compatibility). | This is addressed through the verification and validation testing, particularly in the areas of Sterilization, Packaging and Shelf-Life, and the statement that "the change does not raise new safety and effectiveness concerns." (Page 4, "Substantial equivalence has been demonstrated through standards compliance and design verification and validation testing.") |
| Effectiveness | The mechanical action for debris removal is not compromised by the absence of the separate rinse bottle, as the user is still instructed to use sterile saline for rinsing (which is "readily available"). The preservative function of PVP-I is maintained. | "Users are still instructed to use sterile saline to rinse the Surgiphor™ solution immediately after irrigation." (Page 4, Device Description) The effectiveness of the Surgiphor solution itself in loosening debris is inherent to the predicate and is stated to be unchanged. The lack of change to the solution composition and mechanism of action implies no change in effectiveness for the primary function. |
| Compliance with Standards | The manufacturing process and device characteristics continue to conform to relevant recognized standards for medical devices. | "Substantial equivalence has been demonstrated through standards compliance and design verification and validation testing." Specific standards listed include those for Sterilization (ANSI/AAMI/ISO 11137 series, 11737-1, TIR13004) and Packaging and Shelf-Life (ISO 11607-1, ASTM F1980, F2096, D4169, F2825). (Page 6) |
2. Sample Size Used for the Test Set and Data Provenance:
The document does not describe a "test set" in the context of an AI/ML algorithm or a clinical trial with human subjects. This 510(k) relies on design verification and validation testing and standards compliance to demonstrate substantial equivalence to a predicate device, given a minor change (removal of one component from a kit).
Therefore, there is no mention of data provenance (country of origin, retrospective/prospective) because the studies are primarily engineering and quality control tests (sterilization, packaging, shelf-life) rather than clinical performance studies on patient data.
3. Number of Experts Used to Establish Ground Truth and Qualifications:
This information is not applicable to this 510(k) submission. Ground truth establishment by experts (e.g., radiologists) is typically relevant for diagnostic AI/ML devices where a clinical reference standard is needed. This device is a physical irrigation system, and its "ground truth" for substantial equivalence is derived from a combination of:
- The established performance and safety of its predicate device.
- Laboratory testing (sterilization, packaging) against recognized standards.
- Engineering assessment that a structural change (removing a bottle) does not alter intended use or introduce new risks.
4. Adjudication Method for the Test Set:
This information is not applicable for the same reasons as #3. Clinical adjudication methods are not relevant for the type of testing described.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
This information is not applicable. An MRMC study is relevant for evaluating the performance of diagnostic aids, particularly AI/ML algorithms, in how they affect human reader performance. This device is an irrigation system, not a diagnostic tool, and the submission is focused on physical and chemical equivalence and safety, not on human interpretation of outputs.
6. Standalone (Algorithm Only) Performance:
This information is not applicable. This submission is for a physical medical device, not an algorithm.
7. Type of Ground Truth Used:
The "ground truth" in this context is established through:
- Engineering specifications and design verification: Ensuring the physical and chemical properties of the device (solution composition, packaging, sterility) remain consistent with safe and effective operation as defined by standards.
- Predicate device's established safety and effectiveness: The fundamental "truth" is that the predicate device was already deemed safe and effective for its intended use, and the current submission argues that the modified device maintains this "truth" despite the change.
8. Sample Size for the Training Set:
This information is not applicable. There is no "training set" as this device does not involve an AI/ML component.
9. How the Ground Truth for the Training Set Was Established:
This information is not applicable. Since there is no training set, there is no ground truth established for it.
In summary:
The provided FDA letter and 510(k) summary pertain to a physical medical device (an irrigation system) undergoing a minor modification. The "acceptance criteria" and "study" described are primarily related to engineering validation, quality control testing (e.g., sterility, shelf-life), and a comparison to a legally marketed predicate device to demonstrate substantial equivalence. It does not involve the types of studies (e.g., clinical trials, AI/ML performance evaluations) that would typically require the detailed information on test sets, expert readers, or ground truth methodologies for diagnostic or AI-powered devices. The crucial point of this submission is the statement: "The changes do not impact the safety or effectiveness of the subject device [compared to the predicate device]."
(105 days)
The surgical masks are intended to be worn by personnel during medical and surgical procedures to protect both the patient and the operating personnel from transfer of microorganisms, body fluids and particulate material. The mask is a single use, disposable device, provided non-sterile.
The Surgical Masks, Model IIR, are non-sterile, single-use, three-layer, flat-pleated style with ear loops and a nose piece.
- The inner and outer layers of the Surgical Mask are made of Non-woven Spunbond Polypropylene for protection against fluid penetration and will not lint, tear or shred.
- The middle layer is made of highest quality Melt Blown Polypropylene Filter for optimal filtration and breathability, meeting ASTM Level 3 performance requirements.
- The sonically sealed ear loops are made of Polyester and Spandex to secure the mask over the user's face and mouth. They fit loosely and are attached to the mask to eliminate irritation.
- The adjustable nose piece is made of Aluminum and forms a strong seal for protection.
- The Surgical Masks will be provided in Blue. The device is not made from any natural rubber latex.
The document is a 510(k) premarket notification for a Surgical Mask (K203425). It describes the device's acceptance criteria and the studies performed to demonstrate its performance.
1. Table of acceptance criteria and the reported device performance
| Performance Metric | Acceptance Criteria (ASTM Level 3) | Reported Device Performance (Average) | Result |
|---|---|---|---|
| Bacterial Filtration Efficiency (BFE) | ≥98% | 99.9% | Pass |
| Sub-micron Particulate Efficiency (PFE) at 0.1 µm | ≥98% | 99.74% - 99.86% | Pass |
| Resistance to Penetration by Synthetic Blood | Pass at 160 mm Hg (ASTM F1862) | No penetration at 160 mm Hg | Pass |
| Differential Pressure (ΔP) | < 6.0 mm H2O/cm² | 2.63 - 3.28 mm H2O/cm² | Pass |
| Flame Spread | Class 1 | DNI (Did Not Ignite) | Pass |
| Cytotoxicity | Non-Cytotoxic | Non-Cytotoxic | Pass |
| Irritation | Non-Irritating | Non-Irritating | Pass |
| Sensitization | Non-Sensitizing | Non-Sensitizing | Pass |
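As an illustration only, the sketch below encodes the ASTM Level 3 thresholds from the table above as simple checks; the parameter names and sample values are our placeholders, since the summary reports only lot-level averages and pass/fail results.

```python
# Thresholds taken from the ASTM Level 3 column of the table above.
def meets_level_3(bfe_pct: float, pfe_pct: float, delta_p: float,
                  blood_penetration_at_160mmHg: bool) -> bool:
    """True only if every reported metric satisfies its Level 3 criterion."""
    return (bfe_pct >= 98.0                        # Bacterial Filtration Efficiency
            and pfe_pct >= 98.0                    # Sub-micron Particulate Efficiency at 0.1 µm
            and delta_p < 6.0                      # Differential pressure, mm H2O/cm²
            and not blood_penetration_at_160mmHg)  # Synthetic blood resistance

# Placeholder values in the range of the reported lot averages
print(meets_level_3(bfe_pct=99.9, pfe_pct=99.8, delta_p=3.0,
                    blood_penetration_at_160mmHg=False))  # -> True
```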
2. Sample size used for the test set and the data provenance
For the performance tests (BFE, PFE, resistance to penetration by synthetic blood, differential pressure, and flame spread), each lot tested (KZ200708005, KZ200801002, KZ200905006) had 32/32 passing results. The exact sample size per test within those 32 samples is not explicitly stated, but the reported results indicate that all samples tested met the criteria. The data provenance is not explicitly stated in terms of country of origin, but the applicant is BDC Dental Corporation Ltd. in Guangzhou, Guangdong, China. The studies are non-clinical (bench testing).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
This information is not applicable as the studies are non-clinical bench tests of a physical device (surgical mask) and do not involve human interpretation or expert ground truth. The "ground truth" is established by the standardized test methods themselves.
4. Adjudication method for the test set
This information is not applicable as there is no human interpretation or subjective assessment that would require adjudication. The results are quantitative measurements against predefined criteria.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
This information is not applicable. The device is a surgical mask, not an AI-powered diagnostic or assistive tool. Therefore, no MRMC study or AI assistance evaluation was performed.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
This information is not applicable. The device is a surgical mask and does not involve an algorithm.
7. The type of ground truth used
The "ground truth" for the non-clinical tests is established by standardized test methods (e.g., ASTM F2101-19 for BFE, ASTM F2299 for PFE, ASTM F1862 for synthetic blood resistance, EN 14683:2019+AC:2019 for Differential Pressure, 16 CFR part 1610(a) for Flame Spread, and ISO 10993 series for biocompatibility). These standards define the procedures and criteria for evaluating the physical and biological properties of the device.
8. The sample size for the training set
This information is not applicable. The device is a physical product (surgical mask) and does not involve a training set for an algorithm.
9. How the ground truth for the training set was established
This information is not applicable as there is no training set for an algorithm.
(28 days)
The BD Veritor System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral nucleoprotein antigens from nasal and nasopharyngeal swabs of symptomatic patients. The BD Veritor System for Rapid Detection of Flu A+B (also referred to as the BD Veritor System and BD Veritor System Flu A+B) is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single device. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. A negative test is presumptive and it is recommended that these results be confirmed by viral culture or an FDA-cleared influenza A and B molecular assay. Outside the U.S., a negative test is presumptive and it is recommended that these results be confirmed by viral culture or a molecular assay cleared for diagnostic use in the country of use. FDA has not cleared this device for use outside of the U.S. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other patient management decisions. The test is not intended to detect influenza C antigens.
The BD Veritor System for Rapid Detection of Flu A+B is a rapid chromatographic immunoassay for the direct and qualitative detection of influenza A and B viral antigens from nasopharyngeal and nasal swabs of symptomatic patients. The test is to be used as an aid in the diagnosis of influenza A and B viral infections. It is a differentiated test, such that influenza A viral antigens can be distinguished from influenza B viral antigens from a single processed sample using a single test device. Negative test results do not preclude influenza viral infection and should not be used as the sole basis for treatment or other management decisions. All negative test results should be confirmed by another methodology, such as a nucleic acid based method. All BD Veritor System Flu A+B test devices are interpreted by a BD Veritor System Instrument, either a BD Veritor Reader or BD Veritor Plus Analyzer.
The BD Veritor Flu A+B test is an immuno-chromatographic assay for detection of influenza A and B viral antigens in samples processed from respiratory specimens. The viral antigens detected by the BD Flu A+B test are nucleoprotein, not hemagglutinin (HA) or neuraminidase (NA) proteins. Flu viruses are prone to minor point mutations (i.e., antigenic drift) in either one or both of the surface proteins (i.e., HA or NA). The BD Flu A+B test is not affected by antigenic drift or shift because it detects the highly conserved nucleoprotein of the influenza viruses. To perform the test, the patient specimen swab is treated in a supplied reaction tube prefilled with a lysing agent that serves to expose the target viral antigens, and then expressed through a filter tip into the sample well on a BD Veritor Flu A+B test device. Any influenza A or influenza B viral antigens present in the specimen bind to anti-influenza antibodies conjugated to colloidal gold micro-particles on the Veritor Flu A+B test strip. The antigen-conjugate complex then migrates across the test strip to the capture zone and reacts with either Anti-Flu A or Anti-Flu B antibodies that are immobilized on the two test lines on the membrane.
The BD Flu A+B test device shown in Figure 1 is designed with five spatially-distinct zones including positive and negative control line positions, separate test line positions for the target analytes, and a background zone. The test lines for the target analytes are labeled on the test device as 'A' for flu A position, and 'B' for flu B position. The onboard positive control ensures the sample has flowed correctly and is indicated on the test device as 'C'. Two of the five distinct zones on the test device are not labeled. These two zones are an onboard negative control line and an assay background zone. The active negative control feature in each test identifies and compensates for specimen-related, nonspecific signal generation. The remaining zone is used to measure the assay background.
The Veritor System is made up of assay kits with analyte specific reagents and an optoelectronic interpretation instrument.
The BD Veritor System instruments use a reflectance-based measurement method and apply assay-specific algorithms to determine the presence or absence of the target analyte. In the case of the Flu A+B test, the BD Veritor System instruments subtract nonspecific signal at the negative control line from the signal present at both the Flu A and Flu B test lines. If the resultant line signal is above a pre-selected assay cutoff, the specimen scores as positive. If the resultant line signal is below the cutoff, the specimen scores as negative. Use of the active negative control feature allows the BD Veritor System instruments to correctly interpret test results that cannot be scored visually because the human eye is unable to accurately perform the subtraction of the nonspecific signal. The measurement of the assay background zone is an important factor during test interpretation, as its reflectance is compared to that of the control and test zones. A background area that is white to light pink indicates the device has performed correctly. Sample preparation is the same for use with both instruments, and both can utilize the same kit components. Neither instrument requires calibration.
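The subtraction-and-cutoff logic described in the paragraph above can be summarized as follows; the cutoff value, variable names, and reflectance numbers are illustrative assumptions, not the instrument's actual firmware constants.

```python
def score_test_line(test_signal: float, negative_control_signal: float,
                    cutoff: float) -> str:
    """Subtract the nonspecific signal measured at the negative control line from the
    test-line signal and compare the result to a pre-selected assay cutoff."""
    corrected = test_signal - negative_control_signal
    return "positive" if corrected > cutoff else "negative"

CUTOFF = 0.15  # placeholder; real cutoffs are assay-specific and fixed in firmware
flu_a = score_test_line(test_signal=0.42, negative_control_signal=0.10, cutoff=CUTOFF)
flu_b = score_test_line(test_signal=0.12, negative_control_signal=0.10, cutoff=CUTOFF)
print(flu_a, flu_b)  # -> positive negative
```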
The Veritor Reader and the Veritor Plus Analyzer use the same functional components and decision algorithm in the firmware. The BD Veritor Plus Analyzer has the flexibility of an optional bar code scanning module and cellular connectivity designed to facilitate record keeping, as well as the addition of a "Walk Away" workflow mode. Depending on the configuration chosen by the operator, the Veritor Plus Analyzer communicates status and results to the operator via a liquid crystal display (LCD) on the instrument, a connected printer, or through a secure connection to the facility's information system.
Here's a breakdown of the acceptance criteria and study information for the BD Veritor System for Rapid Detection of Flu A + B CLIA Waived Kit, based on the provided document:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state "acceptance criteria" numerical targets. Instead, it presents performance data compared to a reference method (PCR) which implies these are the achieved performance metrics considered acceptable for substantial equivalence. The key performance indicators are Positive Percent Agreement (PPA) and Negative Percent Agreement (NPA) for influenza A and B.
| Performance Metric | Acceptance Criteria (Implied) | Reported Performance (All Swabs - All Sites) |
|---|---|---|
| Influenza A | ||
| PPA | Not explicitly stated | 83.6% (95% CI: 76.1%, 89.1%) |
| NPA | Not explicitly stated | 97.5% (95% CI: 95.7%, 98.5%) |
| Influenza B | ||
| PPA | Not explicitly stated | 81.3% (95% CI: 71.1%, 88.5%) |
| NPA | Not explicitly stated | 98.2% (95% CI: 95.7%, 99.3%) |
2. Sample Size and Data Provenance for the Test Set
The reported performance data in the table (PPA and NPA) are derived from a test set with the following characteristics:
- Sample Size for Influenza A: 736 total samples (226 PCR positive, 510 PCR negative).
- Sample Size for Influenza B: 736 total samples (171 PCR positive, 565 PCR negative).
- Data Provenance: The document states the performance characteristics were established "during January through March of 2011" and summarizes data "across all age groups, clinical testing sites and sample types." This indicates a prospective clinical study involving collection of symptomatic patient samples. The country of origin is not explicitly stated for the "All Sites" data, but given it's an FDA submission, it's highly likely to include data from the United States. It is a prospective study during the influenza season.
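For orientation, the sketch below shows how PPA/NPA point estimates and a 95% confidence interval can be computed from 2×2 counts like those above; the summary does not give the concordant numerators, so the count used here is hypothetical, and the Wilson score interval shown may differ from whatever method was used in the submission.

```python
import math

def proportion_with_ci(successes: int, total: int, z: float = 1.96):
    """Point estimate and Wilson score 95% CI for a binomial proportion (e.g., PPA or NPA)."""
    p = successes / total
    denom = 1 + z**2 / total
    center = (p + z**2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2)) / denom
    return p, center - half, center + half

# PPA = device positives among the 226 PCR-positive flu A samples; 189 is a hypothetical numerator.
ppa, low, high = proportion_with_ci(successes=189, total=226)
print(f"PPA = {ppa:.1%} (95% CI {low:.1%}, {high:.1%})")
```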
3. Number of Experts and Qualifications for Ground Truth
The document does not mention the use of human experts to establish ground truth for the primary clinical performance data. The reference method for ground truth was a Molecular Assay (PCR).
4. Adjudication Method for the Test Set
Not applicable, as the ground truth was established by a molecular assay (PCR), not expert consensus.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
The document does not describe an MRMC comparative effectiveness study involving human readers with and without AI assistance. The device is a rapid chromatographic immunoassay interpreted by an instrument (BD Veritor Reader or Veritor Plus Analyzer), not an AI-assisted diagnostic for human readers.
The device itself is an automated system for interpreting rapid tests, not a tool to assist human readers in interpreting complex images or data.
6. Standalone (Algorithm Only) Performance
Yes, the study focuses on the standalone performance of the BD Veritor System (the rapid immunoassay device interpreted by the Veritor Reader or Veritor Plus Analyzer). The reported PPA and NPA values represent the performance of the device itself against a reference standard (PCR) without human interpretation.
The "Principle of the Test" section explains: "All BD Veritor System Flu A+B test devices are interpreted by a BD Veritor System Instrument, either a BD Veritor Reader or BD Veritor Plus Analyzer." The instrument's algorithms make the determination.
7. Type of Ground Truth Used
The ground truth used for the clinical performance evaluation was PCR (Polymerase Chain Reaction), which is a molecular assay for detecting influenza A and B. It is referred to as "Reference PCR" in the performance tables.
8. Sample Size for the Training Set
The document does not provide specific details on the sample size used for the training set of the device's inherent algorithms or cutoff thresholds. It mentions that "performance characteristics for influenza A and B were established during January through March of 2011," implying a dataset used for development and validation. For the comparison between Veritor Reader and Veritor Plus Analyzer, the following samples were assessed:
- 102 Flu A-/B- samples
- 52 Flu A+ samples
- 52 Flu B+ samples
These samples were used to confirm equivalency between the interpreting instruments, not necessarily as a "training set" for the assay itself.
9. How the Ground Truth for the Training Set Was Established
The document does not explicitly describe how the ground truth for any "training set" was established. However, given the context, it's highly probable that if a training set was used for algorithm development, the ground truth would also have been established by a highly sensitive and specific reference method like PCR or viral culture, similar to how the ground truth for the test set was established.