Sentry WEB SmartInterp is medical software intended to be used as an aid in the evaluation and diagnosis of already measured cardiopulmonary data. Access to data is realized via a network or the internet with assigned access rights. A patient population is not specified, as it is defined by the measuring devices themselves.
Sentry WEB SmartInterp is a web application that supports post-measurement clinical tasks such as re-evaluation, quality grading, and interpretation of medical readings. It does not primarily rely on electronic document formats such as PDF reports, but uses modern web technologies to create a rich web-based user experience. Because of this general approach, Sentry WEB SmartInterp can serve as 'the post-measurement' solution in many environments, whether customer-owned or as a cloud-based software service. On one hand, Sentry WEB SmartInterp extends stand-alone diagnostic systems by running on the measurement system as a local post-measurement component. For small labs, it enables the attending physician to supervise several measurement units from his or her office. In mid-sized cardiopulmonary labs, Sentry WEB SmartInterp introduces optimized post-measurement workflow capabilities. Finally, in sophisticated multi-site setups, it supports the channeling of data and creates the throughput required for large clinical teams.
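To make the access model concrete, here is a minimal TypeScript sketch of how assigned access rights over a network might be enforced. All names (`Role`, `MeasurementStudy`, `canAccess`) and the site-assignment rule are hypothetical assumptions for illustration, not identifiers or logic from the actual product.

```typescript
// Hypothetical sketch: role-based access to measurement data in a web backend.
// Nothing here is taken from the Sentry WEB SmartInterp submission.

type Role = "physician" | "technician" | "auditor";

interface User {
  id: string;
  roles: Role[];
  siteIds: string[]; // sites this user is assigned to
}

interface MeasurementStudy {
  id: string;
  siteId: string;
  patientId: string;
}

// Access is granted only when the user holds a qualifying role
// and is assigned to the site where the study was acquired.
function canAccess(user: User, study: MeasurementStudy): boolean {
  const hasRole =
    user.roles.includes("physician") || user.roles.includes("technician");
  const assignedToSite = user.siteIds.includes(study.siteId);
  return hasRole && assignedToSite;
}

// Example: a physician assigned to site "lab-1" may open a study from that site.
const reviewer: User = { id: "u1", roles: ["physician"], siteIds: ["lab-1"] };
const study: MeasurementStudy = { id: "s1", siteId: "lab-1", patientId: "p1" };
console.log(canAccess(reviewer, study)); // true
```

This kind of per-site, per-role check is one plausible way to realize "assigned access rights" for a physician supervising several measurement units; the submission does not specify the actual mechanism.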
The provided 510(k) summary for the Sentry WEB SmartInterp device does not include specific acceptance criteria or a detailed study proving its performance against such criteria in the way typically expected for a diagnostic AI device assessing specific conditions.
Instead, the submission focuses on demonstrating substantial equivalence to predicate devices. This means the manufacturer is asserting that their new device is as safe and effective as a legally marketed device and does not raise new questions of safety or effectiveness. The "acceptance criteria" here are implicitly tied to the performance and safety profiles of the predicate devices.
The study described is primarily a non-clinical performance evaluation focused on software development and safety standards rather than a clinical accuracy study for specific diagnostic outcomes.
Here's a breakdown of the information provided:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria for diagnostic performance (e.g., sensitivity, specificity, AUC values) and consequently does not report device performance against such metrics. The "device performance" in this context refers to its ability to function according to its design and meet various software and safety standards.
| Characteristic | Acceptance Criterion (Implicitly Based on Predicate Equivalence) | Reported Device Performance |
|---|---|---|
| Risk Management | Compliance with ISO 14971 | Passed applicable tests and standards |
| Usability | Compliance with EN 62366 | Passed applicable tests and standards |
| Software Life Cycle | Compliance with IEC 62304 | Passed applicable tests and standards |
| Accuracy Testing | Accuracy of evaluated data (against predicate functionality) | Passed applicable tests and standards |
| Functional Claims | Meets intended use as described in product labeling | Meets functional claims and intended use |
| Equivalence | Substantially equivalent to predicate devices | Substantially equivalent to predicate devices |
2. Sample Size Used for the Test Set and Data Provenance
The document does not specify a "test set" in the context of a diagnostic performance study (e.g., a set of medical images or patient records used to evaluate diagnostic accuracy). The testing performed was primarily non-clinical verification and validation of the software. Therefore, there's no information on sample size or data provenance related to diagnostic performance.
3. Number of Experts Used to Establish Ground Truth and Qualifications
As there was no "test set" for diagnostic performance, there were no experts used to establish ground truth for disease diagnoses. The "ground truth" for the non-clinical testing would refer to the expected behavior and outputs of the software based on its design specifications and standard requirements.
4. Adjudication Method
Not applicable, as no diagnostic performance study involving human interpretation and ground truth adjudication was described.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No MRMC study was performed or described. The device is presented as an "aid in the evaluation and diagnosis," similar in function to existing predicate software, rather than a system designed to improve human reader effectiveness in a comparative study.
6. Standalone Performance
A standalone (algorithm only) performance evaluation was implicitly conducted as part of the "Accuracy Testing" and "Summary Discussion of Bench Performance Data." The device (software) was tested to ensure it accurately evaluated data and met design specifications.
The statement "The validation and verification testing confirmed this device meets user needs and design inputs for PFT and CPET", together with "Accuracy of evaluated Data" being listed under "Non-clinical tests", implies standalone functional testing. However, this is not a standalone diagnostic performance study against clinical ground truth.
7. Type of Ground Truth Used
For the non-clinical tests described, the "ground truth" was internal to the development process:
- Design specifications and established standards: for risk management (ISO 14971), usability (EN 62366), and the software life cycle (IEC 62304).
- Expected data evaluation outputs: for "Accuracy of evaluated Data," implying that the software's calculations and interpretations were compared against expected correct outputs for various cardiopulmonary data. This is a functional and computational accuracy check rather than a clinical diagnostic ground truth (see the sketch after this list).
There was no clinical ground truth (e.g., pathology, clinical outcomes, expert consensus on diagnoses) used in the reported testing.
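As a concrete illustration of such a functional accuracy check, here is a hedged TypeScript sketch that compares a computed spirometry index against expected reference outputs within a tolerance. The index (FEV1/FVC), the tolerance, and all names are assumptions for illustration, not the submission's actual test cases.

```typescript
// Hypothetical sketch of a computational accuracy check: computed indices are
// compared against expected reference outputs within an absolute tolerance.

interface SpirometryCase {
  fev1L: number;         // measured FEV1 in liters
  fvcL: number;          // measured FVC in liters
  expectedRatio: number; // expected FEV1/FVC from a reference implementation
}

function fev1FvcRatio(fev1L: number, fvcL: number): number {
  if (fvcL <= 0) throw new Error("FVC must be positive");
  return fev1L / fvcL;
}

// Pass only if every computed value matches its expected output within `tol`.
function runAccuracyCheck(cases: SpirometryCase[], tol = 1e-3): boolean {
  return cases.every(
    (c) => Math.abs(fev1FvcRatio(c.fev1L, c.fvcL) - c.expectedRatio) <= tol
  );
}

const cases: SpirometryCase[] = [
  { fev1L: 3.2, fvcL: 4.0, expectedRatio: 0.8 },
  { fev1L: 1.8, fvcL: 3.0, expectedRatio: 0.6 },
];
console.log(runAccuracyCheck(cases)); // true
```

Verification of this kind establishes that the software computes what its design specifications say it should, which is a different question from whether those outputs are diagnostically correct in a clinical population.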
8. Sample Size for the Training Set
The document does not mention any "training set," which is typically associated with machine learning or AI models. Given the device's description as evaluation software that relies on web technologies and mirrors the functionality of the predicate devices, it is unlikely to be a machine learning model requiring a distinct training set in the conventional sense. It appears to be a rule-based or algorithmic system that processes and displays data.
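To illustrate what a rule-based interpretation step might look like, here is a purely hypothetical TypeScript sketch. The fixed FEV1/FVC threshold of 0.70 is a widely published spirometry convention used here only for illustration; the submission does not disclose the device's actual rules.

```typescript
// Hypothetical rule-based interpretation of a spirometry result.
// Thresholds follow common published conventions, not the device's logic.

interface SpirometryResult {
  fev1L: number;
  fvcL: number;
  fvcPercentPredicted: number;
}

function interpret(r: SpirometryResult): string {
  const ratio = r.fev1L / r.fvcL;
  if (ratio < 0.7) {
    return "Pattern consistent with airflow obstruction";
  }
  if (r.fvcPercentPredicted < 80) {
    return "Reduced FVC; restriction possible, confirm with lung volumes";
  }
  return "Spirometry within normal limits";
}

console.log(interpret({ fev1L: 1.9, fvcL: 3.1, fvcPercentPredicted: 85 }));
// -> "Pattern consistent with airflow obstruction"
```

Unlike a trained model, such deterministic rules have no training set: their "ground truth" is the specification of the rules themselves, which is consistent with the verification-oriented testing described in this submission.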
9. How the Ground Truth for the Training Set Was Established
Not applicable, as no training set was mentioned.
§ 870.1425 Programmable diagnostic computer.
(a) Identification. A programmable diagnostic computer is a device that can be programmed to compute various physiologic or blood flow parameters based on the output from one or more electrodes, transducers, or measuring devices; this device includes any associated commercially supplied programs.
(b) Classification. Class II (performance standards).
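As a simple illustration of the kind of computation this classification covers, here is a hypothetical TypeScript sketch that derives one physiologic parameter (mean heart rate) from the output of a measuring device (beat timestamps). The function and data are invented for illustration and are not from the regulation or the submission.

```typescript
// Hypothetical sketch: computing a physiologic parameter from device output,
// here mean heart rate (bpm) from detected beat timestamps in milliseconds.

function heartRateBpm(beatTimesMs: number[]): number {
  if (beatTimesMs.length < 2) throw new Error("Need at least two beats");
  const intervals: number[] = [];
  for (let i = 1; i < beatTimesMs.length; i++) {
    intervals.push(beatTimesMs[i] - beatTimesMs[i - 1]);
  }
  const meanIntervalMs =
    intervals.reduce((a, b) => a + b, 0) / intervals.length;
  return 60000 / meanIntervalMs; // 60,000 ms per minute / mean beat interval
}

console.log(heartRateBpm([0, 800, 1600, 2400])); // 75 bpm (800 ms intervals)
```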