510(k) Data Aggregation
(181 days)
Mesa Biotech, Inc.
The Accula™ Strep A Test performed on the Accula Dock is a molecular in vitro diagnostic test utilizing polymerase chain reaction (PCR) and lateral flow technologies for the qualitative, visual detection of Streptococcus pyogenes (Group A β-hemolytic Streptococcus, Strep A) bacterial nucleic acid. It is intended to aid in the rapid diagnosis of Group A Streptococcus bacterial infections from throat swabs of patients with signs and symptoms of pharyngitis.
All negative test results should be confirmed by bacterial culture because negative results do not preclude infection with Group A Streptococcus and should not be used as the sole basis for treatment.
The Accula™ Strep A Test is a semi-automated, colorimetric polymerase chain reaction (PCR) nucleic acid amplification test to qualitatively detect Streptococcus pyogenes (Group A β-hemolytic Streptococcus, Strep A) bacterial nucleic acid from unprocessed throat swabs that have not undergone prior nucleic acid extraction. The system integrates nucleic acid extraction, a novel Mesa Biotech PCR nucleic acid amplification technology named OscAR™, and hybridization-based visual detection into a completely self-contained and automated system. The Accula Strep A system consists of a small reusable Dock to drive the automated testing process, and a single-use disposable test cassette that contains all the enzymes and reagents.
The Mesa Biotech Accula Strep A Test is an in vitro diagnostic test for the qualitative, visual detection of Streptococcus pyogenes (Group A β-hemolytic Streptococcus, Strep A) bacterial nucleic acid from throat swabs. The device integrates nucleic acid extraction, OscAR™ PCR amplification technology, and hybridization-based visual detection.
Acceptance Criteria and Device Performance:
The primary performance metrics for the Accula Strep A Test were Sensitivity, Specificity, Positive Percent Agreement (PPA), and Negative Percent Agreement (NPA). These were evaluated against a reference bacterial culture and an FDA-cleared molecular comparator.
Metric | Acceptance Criteria (Implied) | Reported Device Performance (vs. Blood Agar Culture) | Reported Device Performance (vs. Molecular Comparator) |
---|---|---|---|
Sensitivity | High, expected to be comparable to or better than predicate | 96.2% (126/131) (95% CI: 91.4%-98.4%) | N/A (PPA used for molecular comparison) |
Specificity | High, expected to be comparable to or better than predicate | 97.5% (510/523) (95% CI: 95.8%-98.5%) | N/A (NPA used for molecular comparison) |
Positive Percent Agreement | High, expected to be comparable to or better than predicate | N/A | 93.8% (137/146) (95% CI: 88.7%-96.7%) |
Negative Percent Agreement | High, expected to be comparable to or better than predicate | N/A | 99.8% (501/502) (95% CI: 98.9%-100%) |
Reproducibility (Low Positive) | High agreement (e.g., >95%) across sites, operators, and days | 98.9% (89/90) (95% CI: 94.0%-99.8%) | N/A |
Reproducibility (Moderate Positive) | High agreement (e.g., >95%) across sites, operators, and days | 97.8% (87/89) (95% CI: 92.2%-99.4%) | N/A |
Reproducibility (Negative) | High agreement (e.g., >95%) across sites, operators, and days | 97.8% (88/90) (95% CI: 92.3%-99.4%) | N/A |
Limit of Detection | Expected to detect Strep A at low concentrations | BAA-946: 75 CFU/mL, ATCC 19615: 10 CFU/mL | N/A |
Analytical Reactivity | 100% detection of tested Strep A strains at appropriate levels | 100% detection for 3/4 strains at 1.5x LoD, 100% for all at 3.0x LoD | N/A |
Analytical Specificity | No cross-reactivity with common respiratory pathogens and flora | All 47 tested organisms showed 0/3 positive results when Strep A was absent and 3/3 positive results when Strep A was present, except for two cases that were subsequently resolved at a lower organism concentration | N/A |
Interfering Substances | No interference from common substances found in throat samples | 100% agreement with expected results for most tested substances at specified concentrations | N/A |
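As a numerical check, the sensitivity and specificity in the table above can be recomputed from the reported counts. The intervals match the Wilson score method; note that this is an inference from the numbers, since the summary does not name the CI method:

```python
from math import sqrt

def wilson_ci(x, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion x/n."""
    p = x / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Accula Strep A vs. blood agar culture (counts from the table above)
sens_lo, sens_hi = wilson_ci(126, 131)  # sensitivity: 126/131
spec_lo, spec_hi = wilson_ci(510, 523)  # specificity: 510/523
print(f"Sensitivity {126/131:.1%} (95% CI: {sens_lo:.1%}-{sens_hi:.1%})")
print(f"Specificity {510/523:.1%} (95% CI: {spec_lo:.1%}-{spec_hi:.1%})")
```

Running this reproduces the reported 96.2% (91.4%-98.4%) and 97.5% (95.8%-98.5%) figures.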
Study Details:
Sample Size and Data Provenance:
- Test Set (Clinical Study):
- Evaluable for Accula vs. Culture: 654 samples from 669 enrolled subjects.
- Evaluable for Accula vs. Molecular Comparator: 648 samples from 669 enrolled subjects.
- Provenance: Prospective clinical study conducted at nine U.S. sites from May 2019 to January 2020.
- Reproducibility/Near-Cutoff Study: 90 samples per condition (Low Positive, Moderate Positive, Negative) tested across three CLIA-waived sites. These were contrived throat swabs.
- Limit of Detection (LoD): 20 replicates per strain for confirmatory testing of two Strep A strains.
- Analytical Reactivity: 3 replicates per strain (4 strains total) at two concentrations.
- Analytical Specificity (Cross-Reactivity): 3 replicates per organism (47 organisms total), both in presence and absence of Strep A.
- Interfering Substances: 3 replicates per substance, positive and negative Strep A samples.
Number of Experts and Qualifications for Test Set Ground Truth:
- The document does not explicitly state the "number of experts" used to establish the ground truth for the clinical test set. However, for the reference methods:
- Bacterial Culture (Blood Agar Culture): Performed at a "central laboratory" according to "instructions from the reference laboratory." This implies trained laboratory personnel, but specific qualifications are not detailed.
- FDA-cleared molecular test (comparator): Performed at the central laboratory.
- Second FDA-cleared molecular test (discrepant analysis): Used for all discrepant results.
Adjudication Method for the Test Set:
- For the clinical study, a discrepant analysis method was used. "All specimens generating discrepant results between the Accula Strep A Test and Blood Agar Culture, or between Accula and the molecular comparator test, were tested with a second FDA-cleared molecular test." This effectively acts as a "tie-breaker" or confirmatory method for unusual results.
Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- No MRMC comparative effectiveness study was specifically described in terms of human readers' improvement with AI vs. without AI assistance.
- However, the Reproducibility/Near-Cutoff Study and the CLIA Waiver Studies involved "non-laboratory personnel" at "CLIA-waived sites" (Point of Care sites) to demonstrate that the device could be accurately used by intended users in the intended environment. This indirectly assesses the effectiveness of the device in the hands of typical users, rather than an AI assistance to human readers.
Standalone (Algorithm Only) Performance:
- Yes, the clinical performance described (Sensitivity, Specificity, PPA, NPA) represents the standalone performance of the Accula Strep A Test against established reference methods (Blood Agar Culture and FDA-cleared molecular tests). The device itself is a semi-automated system; these metrics evaluate its diagnostic accuracy independent of a human interpretation layer (beyond reading the visual result in the lateral flow).
Type of Ground Truth Used:
- For the clinical study, the primary ground truth was bacterial culture (Blood Agar Culture) for Streptococcus pyogenes.
- A second FDA-cleared molecular test was used as a confirmatory ground truth for discrepant results.
- Additionally, an FDA-cleared molecular comparator method served as another reference standard for direct comparison.
- For analytical studies (LoD, Reactivity, Specificity, Interfering Substances), the ground truth was established by contriving samples with known concentrations of specific organisms or substances.
Sample Size for the Training Set:
- The document describes performance studies (validation). It does not provide information on a "training set" in the context of machine learning, as this is a nucleic acid amplification test, not an AI-driven image analysis or algorithm that would typically require a training set. The device's components (enzymes, reagents, PCR technology) are developed and optimized rather than "trained."
How Ground Truth for the Training Set Was Established:
- Not applicable as described in item 7. The device operates on molecular principles and does not involve a machine learning training phase with a labeled dataset in the traditional sense.
(176 days)
Mesa Biotech, Inc.
The Accula RSV Test performed on the Accula Dock is a molecular in vitro diagnostic test utilizing polymerase chain reaction (PCR) and lateral flow technologies for the qualitative, visual detection of respiratory syncytial virus (RSV) viral RNA. The Accula RSV Test uses a nasal swab specimen collected from patients with signs and symptoms of respiratory infection. The Accula RSV Test is intended as an aid in the diagnosis of RSV infection in children and adults in conjunction with clinical and epidemiological risk factors.
Negative results do not preclude RSV virus infection and should not be used as the sole basis for treatment or other patient management decisions.
The Accula RSV Test is a semi-automated, colorimetric, multiplex reverse-transcription polymerase chain reaction (RT-PCR) nucleic acid amplification test to qualitatively detect respiratory syncytial virus (RSV) viral RNA from unprocessed nasal swabs that have not undergone prior nucleic acid extraction. The system integrates nucleic acid extraction, reverse transcription, a novel Mesa Biotech PCR nucleic acid amplification technology named OscAR™, and hybridization-based visual detection into a completely self-contained and automated system. The Accula RSV system consists of a small reusable Dock to drive the automated testing process, and a single-use disposable test cassette that contains all the enzymes and reagents.
Accula RSV Test - Acceptance Criteria and Performance Study Analysis
This document outlines the acceptance criteria and the performance of the Accula RSV Test based on the provided 510(k) summary (K181443).
1. Table of Acceptance Criteria and Reported Device Performance
The 510(k) summary primarily focuses on clinical performance characteristics and analytical performance rather than explicitly stating pre-defined "acceptance criteria" numerical targets in the same way a device specification might. However, based on the studies conducted, the implicit acceptance criteria are that the device demonstrates comparable or superior performance to the FDA-cleared predicate device and meets predefined thresholds for analytical validity.
Here's a summary of the observed performance:
Acceptance Criteria Category (Implicit) | Specific Performance Metric | Stated Acceptance Criteria (Implicit from Context) | Reported Device Performance | Study Type |
---|---|---|---|---|
Clinical Performance | Sensitivity (vs. FDA-cleared molecular comparator) | High sensitivity, comparable to predicate | 90.2% (95% CI: 84.2% - 94.1%) | Prospective Clinical Study |
Clinical Performance | Specificity (vs. FDA-cleared molecular comparator) | High specificity, comparable to predicate | 95.6% (95% CI: 93.6% - 97.1%) | Prospective Clinical Study |
Reproducibility | Inter-site, inter-operator, intra-run agreement for Low Positive, Moderate Positive, and Negative samples | High agreement (e.g., >95%) | 100% agreement across all sites, operators, and days for all sample types | Reproducibility Studies |
Limit of Detection (LoD) | Ability to consistently detect RSV at low concentrations | At least 19/20 positive results at the LoD level | 19/20 to 20/20 positive results for various RSV strains at specified LoD levels | Limit of Detection |
Analytical Reactivity (Inclusivity) | Detection of various RSV-A and RSV-B subtypes | 100% detection of tested RSV strains at 2X LoD | 100% detection (3/3) for all 9 tested RSV strains | Analytical Reactivity |
Analytical Specificity (Cross-Reactivity) | No false positives with common respiratory pathogens and flora | 0/3 RSV positive results for all tested non-RSV organisms | 0/3 RSV positive results for all 41 tested organisms | Analytical Specificity |
Interfering Substances | No negative impact from common interfering substances | 100% agreement with expected results in presence of interferents | 100% agreement with expected results for all tested interferents | Interfering Substances |
Performance Near Cut-off (Untrained Users) | Consistent detection of low positive samples by untrained users | High agreement (e.g., >95%) | 100% agreement for Low Positive and True Negative samples across all sites | Near-Cutoff Study (CLIA Waiver) |
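The LoD row above uses an "at least 19/20 positive" confirmation rule. Treating replicates as independent Bernoulli trials (standard binomial reasoning, not stated in the summary), one can quantify how stringent that rule is:

```python
from math import comb

def pass_probability(p, n=20, min_hits=19):
    """P(at least min_hits positives in n replicates), given true hit rate p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(min_hits, n + 1))

# A concentration with a true 95% detection rate passes the 19/20 rule
# only about 74% of the time, which is why LoD claims are confirmed
# with dedicated replicate testing rather than a single small run.
print(f"{pass_probability(0.95):.3f}")
```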
2. Sample Size Used for the Test Set and Data Provenance
Clinical Performance Test Set:
- Sample Size: 694 evaluable specimens (out of 749 subjects enrolled).
- Data Provenance: Prospective clinical study conducted in the U.S. during the 2017-2018 RSV season. Specimens were collected from patients presenting at ten investigational sites with RSV-like symptoms.
Reproducibility Test Set:
- Sample Size: For each sample type (RSV Negative, RSV Low Positive, RSV Moderate Positive), there were 30 observations per site (2 operators x 1 run x 3 swabs x 5 non-consecutive days). With 4 sites, this totals 120 observations per sample type.
- Data Provenance: Contrived nasal swabs tested at three CLIA-waived sites and one moderately complex site based in the United States.
Limit of Detection Test Set:
- Sample Size: 20 replicates for each virus strain and concentration tested.
- Data Provenance: Contrived samples prepared by spiking RSV strains into a pooled negative clinical matrix.
Analytical Reactivity Test Set:
- Sample Size: 3 replicates for each of the 9 RSV strains.
- Data Provenance: Contrived samples prepared by diluting virus in pooled clinical matrix and spiking onto a swab.
Analytical Specificity Test Set:
- Sample Size: 3 replicates for each of the 41 potentially cross-reacting organisms.
- Data Provenance: Contrived samples prepared by diluting organisms in a clinical matrix.
Interfering Substances Test Set:
- Sample Size: 3 replicates for two RSV strains (RSV A2, RSV B1) with each of the 13 interfering substances, plus negative controls.
- Data Provenance: Contrived samples with virus diluted into pooled negative clinical matrix and interfering substances added.
Near-Cutoff Study Test Set (CLIA Waiver):
- Sample Size: 60 samples per site (30 replicates of RSV Low Positive, 30 replicates of True Negative). With 3 sites, total of 180 samples.
- Data Provenance: Contrived samples (RSV Low Positive spiked into negative clinical matrix; True Negative from negative clinical matrix with no RSV virus) handled by untrained intended operators at three CLIA-waived sites in the U.S.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
The 510(k) summary does not explicitly state the number or specific qualifications of experts used to establish the ground truth.
For the primary clinical performance comparison, the ground truth was established by:
- An FDA-cleared molecular RSV assay as the initial comparator method.
- An alternative FDA-cleared molecular RSV assay used to resolve all discrepant results (e.g., false positives and false negatives from the initial comparison).
This approach relies on the established accuracy of commercially available and FDA-cleared molecular assays as the "expert" or gold standard for diagnostic truth.
4. Adjudication Method for the Test Set
For the prospective clinical study:
- The Accula RSV Test results were compared to a primary FDA-cleared molecular comparator.
- All discrepant results between the Accula RSV Test and the primary comparator were adjudicated using an alternative FDA-cleared molecular RSV assay at a reference laboratory. This serves as a 2+1 adjudication method, where the two molecular assays determine the final truth.
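The 2+1 resolution described above can be sketched as a simple decision rule. This is a conceptual sketch only; the function and result labels are illustrative and do not appear in the submission:

```python
def adjudicated_reference(comparator: str, device: str, arbiter: str) -> str:
    """Final reference result under discrepant analysis:
    the primary comparator stands when the device agrees with it;
    otherwise the alternative FDA-cleared assay arbitrates."""
    if device == comparator:
        return comparator  # concordant: no adjudication needed
    return arbiter         # discrepant: second molecular assay decides

# Concordant result: the arbiter is never consulted
assert adjudicated_reference("RSV+", "RSV+", arbiter="RSV-") == "RSV+"
# Discrepant result: the alternative assay resolves the call
assert adjudicated_reference("RSV-", "RSV+", arbiter="RSV+") == "RSV+"
```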
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done.
This device, the Accula RSV Test, is a semi-automated, colorimetric nucleic acid amplification test for the qualitative detection of RSV viral RNA. The result (presence or absence of colored lines on a test strip) is interpreted visually, but the reading is of a molecular reaction, not an image requiring nuanced expert interpretation. The studies provided focus on the analytical and clinical accuracy of the device itself, its agreement with established molecular methods, and its reproducibility across users and sites (including untrained users in the CLIA waiver study). There is no AI-assistance component: the output is a direct visual reading of the test strip after the reaction, not an AI-aided interpretation of complex data or images.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
Yes, the primary clinical and analytical performance studies can be considered standalone performance for the device's diagnostic capability.
While the final interpretation of the colored lines is visual (human-in-the-loop for reading the result), the underlying molecular amplification and detection mechanism (OscAR™ technology, hybridization-based visual detection) operates as a standalone algorithm/system to determine the presence or absence of RSV RNA. The "Performance Near the Cut-off" study with untrained users further investigates the robustness of this standalone performance even when subjected to diverse human operation, demonstrating that the device consistently performs at the Limit of Detection. The analytical studies (LoD, inclusivity, cross-reactivity, interfering substances) also represent standalone performance of the test's ability to detect or not detect specific targets under controlled conditions.
7. The Type of Ground Truth Used
The ground truth for the clinical performance evaluation was established using FDA-cleared molecular RSV assays. Specifically:
- An initial FDA-cleared molecular RSV assay as the primary comparator.
- An alternative FDA-cleared molecular RSV assay for adjudication of discrepant results.
For analytical studies (LoD, inclusivity, cross-reactivity, interfering substances, reproducibility, near-cutoff), the ground truth was based on known spiked concentrations of purified RSV virus strains or other organisms in a negative clinical matrix.
8. The Sample Size for the Training Set
The 510(k) summary does not explicitly describe a separate "training set" in the context of machine learning model development. This is typical for traditional in vitro diagnostic devices like the Accula RSV Test, which rely on established biochemical and molecular principles rather than AI/ML algorithms that require large training datasets.
The development and optimization of the Accula RSV Test (e.g., reagent concentrations, reaction conditions, LoD determination) would have involved extensive laboratory experimentation and optimization, which could be conceptually seen as internal "training" or development phases, but these are not datasets reported in the same way as a machine learning training set for regulatory submission.
9. How the Ground Truth for the Training Set Was Established
As noted above, a distinct "training set" for an AI/ML model is not described in this 510(k) summary. The ground truth for the analytical and clinical validation studies (which support the device's claims) was established as described in section 7:
- For clinical performance: FDA-cleared molecular RSV assays.
- For analytical performance: known spiked concentrations of purified virus/organisms.
(249 days)
Mesa Biotech, Inc.
The Accula Flu A/Flu B Test performed on the Accula Dock is a molecular in vitro diagnostic test utilizing polymerase chain reaction (PCR) and lateral flow technologies for the qualitative, visual detection and differentiation of influenza A and influenza B viral RNA. The Accula Flu A/Flu B Test uses a nasal swab specimen collected from patients with signs and symptoms of respiratory infection. The Accula Flu A/Flu B assay is intended as an aid in the diagnosis of influenza infection in conjunction with clinical and epidemiological risk factors. The assay is not intended to detect the presence of influenza C virus.
Negative results do not preclude influenza virus infection and should not be used as the sole basis for treatment or other patient management decisions.
Performance characteristics for influenza A were established during the 2016-2017 influenza season. When other influenza A viruses are emerging, performance characteristics may vary.
If infection with a novel influenza A virus is suspected based on current clinical and epidemiological screening criteria recommended by public health authorities, specimens should be collected with appropriate infection control precautions for novel virulent influenza viruses and sent to state or local public health department for testing. Viral culture should not be attempted in these cases unless a BSL 3+ facility is available to receive and culture specimens.
The Accula Flu A/Flu B Test is a semi-automated, colorimetric, multiplex reverse-transcription polymerase chain reaction (RT-PCR) nucleic acid amplification test to qualitatively detect influenza A and B viral RNA from unprocessed nasal swabs that have not undergone prior nucleic acid extraction. The system integrates nucleic acid extraction, reverse transcription, a novel Mesa Biotech PCR nucleic acid amplification technology named OscAR™, and hybridization-based visual detection into a completely self-contained and automated system. The Accula Flu A/Flu B system consists of a small reusable Dock to drive the automated testing process, and a single-use disposable test cassette that contains all the enzymes and reagents.
The Accula Dock is an electronic module which executes in vitro diagnostic tests on compatible Mesa Biotech Test Cassettes. It consists of an electro-mechanical interface to a single Test Cassette. The Dock contains all electrical systems, controls and logic necessary to orchestrate in vitro diagnostic tests within the inserted Test Cassette.
Upon insertion of a Test Cassette, the Dock will detect and identify the Cassette type. After the user transfers a clinical test sample into the Cassette and closes the Dock lid, embedded firmware in the Dock will control fluid flow of the sample into the various chambers of the Cassette, apply controlled voltage signals to the various Cassette heaters (monitored by sensors within the Dock), and provide visual status to the user with critical information such as estimated time to read, and various error states, should they be encountered.
Here's a breakdown of the acceptance criteria and study details for the Accula Flu A/Flu B Test, based on the provided document:
Acceptance Criteria and Device Performance
Acceptance Criteria Category | Specific Metric | Acceptance Criteria (Implicit) | Reported Device Performance (Accula Flu A/Flu B Test) |
---|---|---|---|
CLIA-Waived Clinical Study | Influenza A Sensitivity | Not explicitly stated as a numerical acceptance criterion; results compared to an FDA-cleared molecular comparator | 97% (95% CI: 94.4% - 98.4%) against comparator |
CLIA-Waived Clinical Study | Influenza A Specificity | Not explicitly stated as a numerical acceptance criterion; results compared to an FDA-cleared molecular comparator | 94% (95% CI: 92.0% - 95.1%) against comparator |
CLIA-Waived Clinical Study | Influenza B Sensitivity | Not explicitly stated as a numerical acceptance criterion; results compared to an FDA-cleared molecular comparator | 94% (95% CI: 88.7% - 97.0%) against comparator |
CLIA-Waived Clinical Study | Influenza B Specificity | Not explicitly stated as a numerical acceptance criterion; results compared to an FDA-cleared molecular comparator | 99% (95% CI: 97.9% - 99.3%) against comparator |
Reproducibility | Percent Agreement (Overall) | "Agreement was 100% across all sites, operators and days" for the pre-defined panel samples | 100% (Flu A Low/Moderate Positive, Flu B Low/Moderate Positive, True Negative) |
Limit of Detection (LoD) | Detection Rate | ≥95% detection at the determined LoD concentration | Confirmed for Influenza A/California/07/2009 (H1N1), A/Texas/50/2012 (H3N2), B/Nevada/3/2011 (Victoria), B/Massachusetts/2/2012 (Yamagata) |
Analytical Reactivity (Inclusivity) | Detection Rate | 100% detection at approx. 2x LoD for all tested strains | 100% (all 23 influenza strains detected) |
Analytical Specificity (Cross-Reactivity) | No false positives | No false positives at tested concentrations | 100% (all 33 exclusivity organisms negative) |
Interfering Substances | No negative effect on performance | 100% agreement with expected results in presence of interfering substances | 100% agreement (3/3) for all tested targets and interfering substances |
Performance Near Cut-Off (CLIA-Waived Study) | Agreement (Low Positive & Negative) | Not explicitly stated; high agreement expected for untrained users | Low A Positive: 97% (58/60); Low B Positive: 97% (58/60); Negative: 100% (59/59) |
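The near-cutoff agreement figures can be verified directly from the reported counts. This is trivial arithmetic, shown only as a consistency check against the whole-percent rounding used in the summary:

```python
def percent_agreement(hits, total):
    """Percent agreement, rounded to the whole percent used in the summary."""
    return round(100 * hits / total)

# CLIA-waived near-cutoff results with untrained operators
assert percent_agreement(58, 60) == 97   # Low A Positive and Low B Positive
assert percent_agreement(59, 59) == 100  # Negative
```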
Study Details
Sample size used for the test set and the data provenance:
- Clinical Test Set: 1258 evaluable specimens (out of 1331 enrolled subjects).
- Provenance: Prospective clinical study conducted during the 2016-2017 flu season in the U.S. Nasal swabs were collected from patients presenting with flu-like symptoms.
Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document does not specify the number of experts or their qualifications for establishing the initial ground truth.
- The comparator method used for the clinical study was an "FDA-cleared molecular influenza assay" performed at two central laboratories.
- For discrepant results, an "alternative FDA-cleared molecular assay" was used at the reference laboratory for analysis. This implies the ground truth for clinical performance was established using the results of these molecular assays, rather than expert consensus on individual cases.
Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- The method described is: "All discrepant results were analyzed using an alternative FDA-cleared molecular assay at the reference laboratory." This is a form of discrepant analysis where a third, presumably more definitive, method is used to resolve conflicts between the investigational device and the primary comparator.
If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance:
- No, an MRMC comparative effectiveness study involving human readers and AI assistance was not done.
- This device is an in vitro diagnostic test (a molecular assay) and does not involve human interpretation assisted by AI. The product itself processes samples and provides a qualitative, visual result. The "CLIA Waiver Studies" section refers to performance by "untrained intended operators" but this is to assess the usability and accuracy of the device itself by non-laboratory personnel, not human readers interpreting images with AI.
If a standalone (i.e. algorithm only without human-in-the loop performance) was done:
- Yes, the primary clinical performance and analytical studies (LoD, inclusivity, cross-reactivity, interfering substances) effectively represent "standalone" performance of the device's diagnostic capability, as it's a fully automated molecular test providing a visual result. The "CLIA Waiver Studies" assessed performance of the device when used by non-laboratory personnel, but the output is still generated by the device's inherent algorithms and reagents.
The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the clinical performance evaluation: The ground truth was established by an FDA-cleared molecular influenza assay, with discrepant results being resolved by an "alternative FDA-cleared molecular assay."
- For analytical studies (LoD, inclusivity, cross-reactivity, interfering substances): Ground truth was established by known concentrations of spiked viral strains or organisms into negative clinical matrix or buffer.
The sample size for the training set:
- The document does not provide information on a training set. For in vitro diagnostic devices like this one, development and validation typically involves analytical studies and clinical performance studies, rather than a machine learning "training set" in the conventional sense. The "training" for such a device is in its design and optimization of reagents and protocols.
How the ground truth for the training set was established:
- As no training set is mentioned or implied for this type of device, this question is not applicable based on the provided document. The device's "training" equivalent would be the R&D and optimization process, where ground truth would be based on well-characterized viral samples and synthetic controls.