TRAQS is a database management system designed specifically to collect, organize, and store Thrombolytic Assessment System (TAS) analyzer records.
TRAQS is a software package designed for computers running the Windows 95 operating system. TRAQS provides features that allow the user to collect, view, and sort patient and control results from Thrombolytic Assessment System (TAS) analyzers. TRAQS also provides a method for the user to enter normal ranges and then compare the stored data set against those ranges. Stored results that fall outside the appropriate range are flagged on screen and in printouts as out of range. Sets of stored TAS results displayed on the screen can be saved to archive files for later viewing or printing. TRAQS also keeps track of the number of results collected from TAS analyzers and can display this data by TAS serial number. No capability is provided to alter the data associated with TAS results. A comprehensive user's manual is supplied with the software.
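The range-comparison behavior described above can be sketched as follows. This is a minimal illustration only: the record fields, test names, and range values are invented placeholders, not taken from the actual TRAQS software.

```python
# Hypothetical sketch of TRAQS-style range flagging.
# All names and values here are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class TasResult:
    patient_id: str
    test_type: str   # e.g. "PT" (prothrombin time), "APTT"
    value: float     # clotting time in seconds

# User-entered normal ranges, keyed by test type: (low, high)
NORMAL_RANGES = {"PT": (11.0, 13.5), "APTT": (25.0, 35.0)}

def flag_out_of_range(result: TasResult) -> bool:
    """Return True when a stored result falls outside its normal range."""
    low, high = NORMAL_RANGES[result.test_type]
    return not (low <= result.value <= high)

print(flag_out_of_range(TasResult("P001", "PT", 15.2)))    # True (flagged)
print(flag_out_of_range(TasResult("P002", "APTT", 30.0)))  # False (in range)
```

Note the comparison is read-only, consistent with the stated design: results are flagged for display, never altered.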
The provided text describes the "TAS Result Acquisition System (TRAQS)" software, its intended use, development, and testing. However, it does not contain specific quantitative acceptance criteria or a study demonstrating device performance against such criteria in the way typically expected for clinical devices (e.g., sensitivity, specificity, accuracy).
Instead, the document details a software validation process focused on meeting functional specifications and requirements.
Here's a breakdown of the requested information based on the provided text:
1. A table of acceptance criteria and the reported device performance
| Acceptance Criteria (Stated as Pass/Fail Criteria) | Reported Device Performance (Summary of Test Results) |
|---|---|
| "All test scripts are completed successfully by the version of software under test" (this implies that all stated functional requirements are met). | **Testing of TRAQS features:** viewing, sorting, and archiving records were methodically tested using modified TRAQS database files with known record sets. |
| | **Handling of TAS analyzer records:** TRAQS was tested to ensure proper handling of all combinations of data (test type, sample type, error condition, patient ID, length, etc.) that could be received from a TAS analyzer, including viewing, sorting, printing, and archiving. |
| | **Year 2000 (Y2K) compliance:** a subset of records specifically tested the software's ability to handle Y2K dates. |

The overall implication is that these tests were successful, as the device received 510(k) clearance.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Sample Size: The text does not provide a specific number for the sample size of the test set. It refers to "known record sets" and "a special set of test records" representing "every possible combination of test type, sample type, error condition, patient id, length, etc." This indicates a comprehensive, but unquantified, test set.
- Data Provenance: The data was synthetically generated or modified for the purpose of software testing.
- "modified TRAQS database files" (for known record sets)
- "TAS records received from a TAS analyzer running with modified software. The modification to the TAS analyzer created a function which filled the TAS analyzer's memory with a special set of test records."
It does not appear to be real patient data, nor is there information on the country of origin. This was a simulated/generated dataset for software validation.
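The "every possible combination" test-record design described above can be sketched as an enumeration over record attributes. This is an illustrative sketch only: the attribute names and values below are invented placeholders, not the actual TAS record fields.

```python
# Illustrative sketch of combinatorial test-record generation.
# Attribute values are assumptions, not the real TAS analyzer fields.
import itertools

test_types = ["PT", "APTT", "TT"]
sample_types = ["patient", "control"]
error_conditions = [None, "error"]
patient_id_lengths = [0, 8, 16]  # e.g. empty, typical, maximum

# Cartesian product yields one test record per combination of attributes.
test_records = [
    {"test_type": t, "sample_type": s, "error": e, "id_len": n}
    for t, s, e, n in itertools.product(
        test_types, sample_types, error_conditions, patient_id_lengths
    )
]

print(len(test_records))  # 3 * 2 * 2 * 3 = 36 combinations
```

This style of exhaustive enumeration is why the test set is comprehensive even though the document never states its size as a single number.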
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not applicable. The ground truth for this software validation was the expected output or behavior defined by the functional specifications. The "known record sets" and the "special set of test records" were specifically designed with expected outcomes against which the software's performance was compared. This is a characteristic of functional software testing, not clinical validation requiring expert review of primary data like medical images or pathology slides.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not applicable. As described above, the testing involved comparing the software's output against predefined expected behaviors from the "Functional Specification" and the design of the "known record sets" and "special set of test records." There was no human adjudication process involved in establishing ground truth for individual cases.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done; if so, the effect size of how much human readers improve with AI vs. without AI assistance
- No. This device is a software system for collecting, viewing, and organizing results from TAS analyzers, and indicating if results are within user-defined ranges. It is not an AI-assisted diagnostic tool or an imaging modality that would involve human readers making interpretations.
6. If a standalone (i.e. algorithm-only, without human-in-the-loop performance) study was done
- Yes, in essence. The validation described is of the standalone software functionality. The tests verified that the TRAQS software independently collected, sorted, viewed, archived data, and correctly flagged out-of-range results as per its defined specifications. There is no "human-in-the-loop" aspect to its core functional validation; it's about the software performing its programmed tasks correctly.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The ground truth was established by the Functional Specification documents and the design of the test data sets. The "known record sets" and the "special set of test records" were created with predefined expected results, allowing for direct comparison against the software's output. This is typical for rigorous software validation.
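This specification-based ("golden output") testing can be sketched as follows. The function under test and the record contents are assumptions chosen to mirror the document's description of sorting known record sets with predefined expected outcomes, including Y2K-era dates.

```python
# Minimal sketch of specification-based testing: expected outputs are
# fixed in advance by the test design, then compared against the
# software's actual output. Names and data are illustrative assumptions.
def sort_records_by_date(records):
    # Function under test: sort records by an ISO-8601 date string.
    return sorted(records, key=lambda r: r["date"])

known_record_set = [
    {"id": "B", "date": "2000-01-02"},  # Y2K-era date included on purpose
    {"id": "A", "date": "1999-12-31"},
]
expected_order = ["A", "B"]  # predefined outcome from the test design

actual_order = [r["id"] for r in sort_records_by_date(known_record_set)]
assert actual_order == expected_order, "sort does not match specification"
print("PASS")
```

The "ground truth" here is simply the `expected_order` list, written before the software runs, which is the pattern the 510(k) summary describes.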
8. The sample size for the training set
- Not applicable. This is a rule-based software for data management, not a machine learning or AI algorithm that requires a "training set" in the conventional sense. The software's functionality is derived from its programming according to specifications, not from learning from data.
9. How the ground truth for the training set was established
- Not applicable, as there was no training set for a machine learning model.
§ 864.5400 Coagulation instrument.
(a) Identification. A coagulation instrument is an automated or semiautomated device used to determine the onset of clot formation for in vitro coagulation studies.
(b)
Classification. Class II (special controls). A fibrometer or coagulation timer intended for use with a coagulation instrument is exempt from the premarket notification procedures in subpart E of part 807 of this chapter subject to the limitations in § 864.9.