Ascent Cardiorespiratory Diagnostic Software is intended to be used for measurements, data collection, and analysis of lung function (PFT) parameters and cardiopulmonary exercise testing (CPET) parameters, aiding in the diagnosis of related conditions.
All the measurements are performed via a mouthpiece or a mask.
The results of the test can be viewed in real time on a computer screen and can be printed after the test. The test results can be saved for later reference or report generation.
For use of the Bronchial Challenge option, the medical director of the laboratory, physician, or person appropriately trained to treat acute bronchoconstriction, including appropriate use of resuscitation equipment, must be close enough to respond quickly to an emergency.
The product can be used for patients 4 years of age and older, provided they can cooperate with test performance; there is no specific limit on patient sex or height.
Measurements will be performed under the direction of a physician in a hospital environment, physician's office or similar settings.
Ascent™ Cardiorespiratory Diagnostic Software ("Ascent") is a stand-alone software application which can be used with several hardware devices in the Medical Graphics Corporation product line.
The core purpose of the software is the measurement, data collection, and analysis of testing in patients who may be suffering from pulmonary illnesses such as chronic obstructive pulmonary disease (COPD), asthma, exercise intolerance, heart failure, and/or other cardiorespiratory concerns where a diagnosis and prognosis need to be determined.
In conjunction with diagnostic hardware, Ascent is used to collect data pertaining to the patient's degree of obstruction, lung volumes, and diffusing capacity. It is also used to present the collected lung diagnostic information so that it can be checked for quality and interpreted by a qualified physician, often a pulmonologist or cardiologist.
All the measurements are performed via a mouthpiece or a face mask.
The provided text is a 510(k) summary from the FDA, which outlines the substantial equivalence determination for a medical device. This type of document focuses on comparing a new device to existing legally marketed predicate devices rather than providing detailed acceptance criteria and the results of a specific clinical study with granular performance metrics. As such, the document does not contain the specific information needed to fulfill all aspects of your request, particularly regarding detailed performance metrics, sample sizes for test sets (beyond general validation statements), expert qualifications, ground truth establishment methods for test sets, MRMC studies, or training set details.
However, I can extract the information that is present and highlight what is missing based on your request.
Missing Information:
- Detailed Acceptance Criteria Table with Specific Performance Metrics: The document states that the software was "extensively validated per medical device software standards and guidance" and that "Testing results support that Ascent fulfills its intended use/indications for use." It also mentions "Performance tests included FEV1, MVV, FRC, SVC, DLCO, VA, TGV, VO2, VCO2, and VE." However, it does not provide specific quantitative acceptance criteria (e.g., "FEV1 must be within X% of ground truth") or the reported performance for these metrics.
- Sample Size for Test Set and Data Provenance: The document does not specify the sample size for the test data used for performance validation, nor does it detail the provenance (country, retrospective/prospective) of this data.
- Number of Experts and Qualifications for Ground Truth: The document does not describe how ground truth for the test set was established, including the number or specific qualifications of experts involved.
- Adjudication Method for Test Set: No information is provided regarding adjudication methods (e.g., 2+1, 3+1).
- Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study: The document does not indicate that an MRMC study was performed or provide any effect size for human reader improvement with AI assistance.
- Standalone Performance (Algorithm Only): While the document states "Ascent Cardiorespiratory Diagnostic Software is a stand-alone software application," it describes validation as "Performance validation testing was done with the subject software device and recommended hardware devices working together." It does not provide specific performance metrics for the algorithm only without human interaction in a diagnostic capacity beyond its intended function of measuring, collecting, and analyzing parameters. The software aids diagnosis, implying human interpretation.
- Type of Ground Truth Used (for Test Set): The document implicitly refers to "performance tests" for various physiological parameters (FEV1, DLCO, etc.), suggesting comparison to a reference standard for these measurements. However, it does not explicitly state the nature of this "ground truth" (e.g., expert consensus, pathology, outcome data) beyond reference to ATS/ERS guidelines for standardization.
- Sample Size for Training Set: No information on training data or its size is provided. This is typical for a 510(k) for software that calculates and analyzes data from hardware, rather than an AI/ML model that learns from large datasets.
- How Ground Truth for Training Set was Established: Not applicable as training data details are not provided.
Information that can be extracted from the document:
The provided document is a 510(k) summary for the "Ascent Cardiorespiratory Diagnostic Software" (K242809). It details the device's substantial equivalence to predicate devices, focusing on its intended use, technological characteristics, and conformity to relevant standards and guidelines.
1. A table of acceptance criteria and the reported device performance:
As noted above, specific quantitative acceptance criteria and reported performance metrics are NOT provided in this document. The document generally states that "Ascent Cardiorespiratory Diagnostic Software was extensively validated per medical device software standards and guidance. Testing results support that Ascent fulfills its intended use/indications for use..."
It mentions that "Performance tests included FEV1, MVV, FRC, SVC, DLCO, VA, TGV, VO2, VCO2, and VE." However, no numerical results or thresholds are given. The validation was done referencing the following guidelines/standards for "acceptability and repeatability":
- ATS/ERS Standardisation of Spirometry (2019)
- ERS/ATS Standardisation of the Measurements of Lung Volumes (2023)
- 2017 ERS/ATS Standards for Single-Breath Carbon Monoxide Uptake in the Lung
- ERS Technical Standard on Bronchial Challenge Testing (2017)
- 2017 ATS Guidelines for a Standardized PF Report
- ATS/ACCP Statement on Cardiopulmonary Exercise Testing (2003)
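The spirometry standard above defines concrete acceptability and repeatability rules; for instance, ATS/ERS 2019 requires the two largest FEV1 (and FVC) values from acceptable maneuvers to agree within 0.150 L (0.100 L when FVC is 1.0 L or less). As a minimal sketch of that repeatability rule (the function name is hypothetical, not taken from the Ascent software):

```python
def repeatable(values_l, fvc_l=None):
    """ATS/ERS 2019 repeatability criterion: the two largest values
    from acceptable maneuvers must agree within 0.150 L
    (0.100 L when FVC <= 1.0 L)."""
    if len(values_l) < 2:
        return False
    top_two = sorted(values_l, reverse=True)[:2]
    tol = 0.100 if (fvc_l is not None and fvc_l <= 1.0) else 0.150
    return (top_two[0] - top_two[1]) <= tol

# Three acceptable FEV1 maneuvers, in litres
print(repeatable([3.42, 3.40, 3.31]))  # True  (difference 0.02 L)
print(repeatable([3.42, 3.20, 3.10]))  # False (difference 0.22 L)
```

A real implementation would also apply the per-maneuver acceptability checks (back-extrapolated volume, end-of-forced-expiration criteria) that the standard specifies before grading repeatability.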
Summary of available information for a table format (conceptual, as specific numerical data is missing):
| Performance Measure | Acceptance Criteria (stated as conformance to standards) | Reported Device Performance (general statement) |
|---|---|---|
| FEV1 | Conforms to ATS/ERS Standardisation of Spirometry (2019) requirements for acceptability and repeatability. | Testing results support intended use. |
| MVV | Conforms to ATS/ERS Standardisation of Spirometry (2019) requirements for acceptability and repeatability. | Testing results support intended use. |
| FRC | Conforms to ERS/ATS Standardisation of the Measurements of Lung Volumes (2023) requirements for acceptability and repeatability. | Testing results support intended use. |
| SVC | Conforms to ATS/ERS Standardisation of Spirometry (2019) requirements for acceptability and repeatability. | Testing results support intended use. |
| DLCO | Conforms to 2017 ERS/ATS Standards for Single-Breath Carbon Monoxide Uptake in the Lung for acceptability and repeatability. | Testing results support intended use. |
| VA | Conforms to 2017 ERS/ATS Standards for Single-Breath Carbon Monoxide Uptake in the Lung for acceptability and repeatability. | Testing results support intended use. |
| TGV | Conforms to ERS/ATS Standardisation of the Measurements of Lung Volumes (2023) requirements for acceptability and repeatability. | Testing results support intended use. |
| VO2 | Conforms to ATS/ACCP Statement on Cardiopulmonary Exercise Testing (2003) guidelines. | Testing results support intended use. |
| VCO2 | Conforms to ATS/ACCP Statement on Cardiopulmonary Exercise Testing (2003) guidelines. | Testing results support intended use. |
| VE | Conforms to ATS/ACCP Statement on Cardiopulmonary Exercise Testing (2003) guidelines. | Testing results support intended use. |
| Cybersecurity | Addressed per FDA Guidance - Cybersecurity in Medical Devices. | Not specified beyond "addressed." |
| Risk Management | Conforms to ISO 14971. | Not specified explicitly. |
| Software Life Cycle | Conforms to IEC 62304. | Not specified explicitly. |
2. Sample size used for the test set and the data provenance:
- Sample Size: Not specified. The document only states "Software validation testing involved system level tests, performance tests and safety testing based on hazard analysis. Performance validation testing was done with the subject software device and recommended hardware devices working together."
- Data Provenance: Not specified (e.g., country of origin, retrospective or prospective).
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- This information is not provided in the document.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC study is mentioned or implied. The software is described as "aiding in the diagnosis of related conditions" and presenting "diagnostic information so that it can be checked for quality and interpreted by a qualified physician." This device is a "Predictive Pulmonary-Function Value Calculator" that performs measurements and analysis; it is not described as an AI system assisting human readers in image interpretation or diagnosis in a comparative-effectiveness context.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance assessment was done:
- The device is indeed described as a "stand-alone software application." However, the performance validation was done "with the subject software device and recommended hardware devices working together." Its output (measurements and analysis) is intended to be "displayed to the user" and "interpreted by a qualified physician."
- The document does not provide performance metrics for the algorithm alone in a way that suggests a diagnostic output without human interpretation or hardware interaction. It is software that processes data from hardware to provide measurements and analysis, not, for example, a diagnostic image-analysis AI.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The document implies that the ground truth for the performance tests (FEV1, DLCO, etc.) would be established by the standardized measurement techniques defined by governing bodies like ATS/ERS/ACCP. These typically involve comparing the device's calculated values against accepted reference methods for deriving those physiological parameters, often involving highly calibrated equipment and expert technicians following strict protocols. However, the exact nature of this "ground truth" (e.g., what gold standard was used for comparison) is not explicitly detailed beyond referencing the standards themselves.
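As an illustration of the kind of reference method those standards describe, spirometer accuracy is typically verified against a calibrated 3-L syringe before testing. A hypothetical sketch (the tolerance is a parameter here and must be taken from the applicable standard, not from this code):

```python
def calibration_ok(measured_l, nominal_l=3.0, tol_frac=0.03):
    """Verify a calibration-syringe check: the measured volume must
    fall within tol_frac of the nominal syringe volume.
    tol_frac = 0.03 is an assumption for illustration; use the
    tolerance the applicable standard specifies."""
    return abs(measured_l - nominal_l) <= tol_frac * nominal_l

print(calibration_ok(3.05))  # True:  0.05 L is within 0.09 L
print(calibration_ok(2.85))  # False: 0.15 L exceeds 0.09 L
```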
8. The sample size for the training set:
- This information is not provided in the document. This type of device (a calculator/analyzer) is typically engineered based on established physiological formulas and algorithms, rather than being "trained" on large datasets in the way a deep learning AI model would be.
9. How the ground truth for the training set was established:
- Not applicable, as training set details are not provided and the device functions as a calculator based on established science, not a machine learning model that learns from a labeled training set.
§ 868.1890 Predictive pulmonary-function value calculator.
(a) Identification. A predictive pulmonary-function value calculator is a device used to calculate normal pulmonary-function values based on empirical equations.
(b) Classification. Class II (performance standards).
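The "empirical equations" the regulation refers to are published reference equations that predict normal values from patient demographics. A hypothetical sketch using the widely cited ECSC/ERS 1993 equation for FEV1 in adult males (FEV1 = 4.30 x height(m) - 0.029 x age - 2.49); this single equation is shown only to illustrate the approach, and real products implement full published equation sets (e.g., GLI-2012) covering all sexes, ages, and parameters:

```python
def predicted_fev1_male(height_m, age_yr):
    """Predicted FEV1 in litres for adult males, per the ECSC/ERS
    1993 reference equation (illustrative only)."""
    return 4.30 * height_m - 0.029 * age_yr - 2.49

def percent_predicted(measured_l, predicted_l):
    """Express a measured value as a percentage of predicted."""
    return 100.0 * measured_l / predicted_l

# A 40-year-old male, 1.80 m tall, with a measured FEV1 of 3.40 L
pred = predicted_fev1_male(height_m=1.80, age_yr=40)
print(round(pred, 2))                           # 4.09
print(round(percent_predicted(3.40, pred), 1))  # 83.1
```

Percent-of-predicted values like this are what a pulmonary-function report compares against interpretation thresholds.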