The Spectra Optia Apheresis System, a blood component separator, may be used to perform therapeutic plasma exchange.
The Spectra Optia Apheresis System, a blood component separator, may be used to perform Red Blood Cell Exchange (RBCX) procedures for the transfusion management of Sickle Cell Disease in adults and children.
The Spectra Optia Apheresis System comprises three subsystems: the apheresis machine (or equipment), embedded software, and a single-use disposable blood tubing set. The modifications described in this submission affect the embedded software.
Spectra Optia Machine and Embedded Software: As described previously (K071079, BK140191, K151368), the Spectra Optia Apheresis System is an automated, centrifugal blood component separation device that uses pumps, valves, and sensors to control and monitor a disposable plastic extracorporeal circuit during therapeutic apheresis procedures. The system's embedded software controls pump flow rates and centrifuge speed to establish and maintain the required plasma/cellular interface and to ensure patient safety.
The provided text describes a 510(k) premarket notification for the Spectra Optia Apheresis System, specifically a minor software update (Version 11.3). The document focuses on demonstrating that this software update does not impact the device's fundamental scientific technology or principle of operation and that it has been adequately verified and validated.
Here's an analysis of the acceptance criteria and study information, based on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly linked to the successful completion of various verification and validation tests, ensuring the software update addresses its intended purpose (mitigating use-errors related to patient height and weight entry) without introducing new safety concerns or altering the device's fundamental function. The reported device performance is that all tests passed.
| Acceptance Criteria Category | Reported Device Performance |
|---|---|
| New / Updated Requirements | 110 out of 110 passed |
| Safety Regression Tests | 13 out of 13 passed |
| Compatibility (upgrade) | 1 out of 1 passed |
| Exploratory Tests | 4 out of 4 passed |
| Internal Usability | 2 out of 2 passed |
| Reliability | 10 out of 10 passed |
| Human Factors (Summative Study) | All 23 subjects successfully completed critical tasks; no performance failures observed. |
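The use-error mitigation at the center of this update concerns patient height and weight entry with correct units. The document does not describe the device's actual checks, so the following is a purely hypothetical sketch of how unit-aware entry validation of this kind is typically structured: explicit unit selection combined with per-unit plausibility ranges. All range values and unit names here are illustrative assumptions, not the Spectra Optia software's actual limits.

```python
# Hypothetical sketch of unit-aware patient-parameter validation.
# Ranges and unit names are illustrative assumptions, NOT the
# Spectra Optia software's actual limits or interface.

PLAUSIBLE_RANGES = {
    ("height", "cm"): (30.0, 250.0),
    ("height", "in"): (12.0, 98.0),
    ("weight", "kg"): (2.0, 250.0),
    ("weight", "lb"): (4.4, 550.0),
}

def validate_entry(field: str, value: float, unit: str) -> float:
    """Return the value if it is plausible for the given field/unit pair;
    otherwise raise so the operator must re-enter and confirm."""
    try:
        low, high = PLAUSIBLE_RANGES[(field, unit)]
    except KeyError:
        raise ValueError(f"unknown field/unit combination: {field} [{unit}]")
    if not (low <= value <= high):
        raise ValueError(
            f"{field} = {value} {unit} is outside the plausible range "
            f"[{low}, {high}] -- possible unit mix-up"
        )
    return value
```

Under these example ranges, entering a height of 180 with the unit set to inches (a likely centimeters-for-inches mix-up) would be rejected, while 180 cm passes.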
1. Sample Size for Test Set and Data Provenance
- Software Verification Type testing: The "Number of Verifications" column in Table 6-1 indicates the sample size for these tests (e.g., 110 for "New / Updated Requirements"). Data provenance is not specified but appears to be internal testing by Terumo BCT.
- Human Factors Summative Study:
- Sample size: 23 active Spectra Optia users.
- Data Provenance: Not explicitly stated, but the users are described as "active Spectra Optia users," suggesting they are likely from real-world clinical or laboratory settings, implying prospective data collection during the study. The study was conducted on a "software simulator," not directly on patients.
2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications
- Software Verification Type testing: The document does not specify the number or qualifications of experts for establishing ground truth for the software verification tests. These likely involved internal engineering and quality assurance personnel.
- Human Factors Summative Study: The "ground truth" for this study was the successful completion of tasks related to patient height, weight, and TBV entry with correct units and no performance failures. This "ground truth" was established based on the intended correct usage of the software. The study observed user performance to confirm this. No external experts beyond the study design team are explicitly mentioned for establishing this truth.
3. Adjudication Method for the Test Set
- Software Verification Type testing: The document does not describe an adjudication method beyond the pass/fail results for each test. This suggests that the test outcomes were directly assessed against pre-defined criteria without further expert adjudication post-test.
- Human Factors Summative Study: The study observed subjects' ability to use correct units and verify entered data, with "no performance failures observed." This implies direct observation against predefined success criteria, rather than a multi-expert adjudication of ambiguous cases.
4. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, What Was the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance?
- No, an MRMC comparative effectiveness study was not done. This document describes a software update for an apheresis system, a medical device for blood component separation, not an AI-assisted diagnostic or image interpretation tool. The device facilitates a physical procedure rather than providing diagnostic interpretations involving human "readers" or "AI assistance." Therefore, this type of study and its effect size are not applicable.
5. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done?
- Partially, yes. The "Software Verification Type testing" (Table 6-1) largely represents standalone algorithm testing, particularly the "New / Updated Requirements," "Safety Regression Tests," "Compatibility (upgrade)," and "Reliability" categories. These tests assess the software's inherent function without requiring a human to actively operate it in a simulated or real patient scenario for every verification; for instance, safety regression tests would likely involve automated checks against known hazardous conditions.
- However, "Internal Usability" and the "Human Factors (Summative) Study" did involve humans in the loop to assess the human-device interface and mitigate use-errors by operators.
6. The Type of Ground Truth Used
- Pre-defined Engineering/Software Requirements and Safety Criteria: For the "Software Verification Type testing," the ground truth was derived from the established design specifications, functional requirements, safety criteria, and compatibility requirements for the software. A "pass" indicates the software met these predefined criteria.
- Intended Correct User Performance: For the "Human Factors (Summative) Study," the ground truth was the correct execution of critical tasks (entering patient height, weight, and TBV with correct units and verification). The study assessed whether users could achieve this intended correct performance.
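The document does not state how TBV (total blood volume) is obtained from the entered height and weight. In apheresis practice, TBV is commonly estimated via Nadler's equations; the sketch below shows that standard formula as a hedged illustration only, not the Spectra Optia software's documented calculation.

```python
def nadler_tbv_liters(height_m: float, weight_kg: float, sex: str) -> float:
    """Estimate total blood volume (liters) from height (meters) and
    weight (kilograms) using Nadler's equations. Illustrative only --
    not the Spectra Optia software's documented method."""
    if sex == "male":
        return 0.3669 * height_m ** 3 + 0.03219 * weight_kg + 0.6041
    if sex == "female":
        return 0.3561 * height_m ** 3 + 0.03308 * weight_kg + 0.1833
    raise ValueError("sex must be 'male' or 'female'")
```

For example, a 1.80 m, 80 kg male yields an estimate of roughly 5.3 L, which shows why unit mix-ups in height or weight entry (the use error this update targets) propagate directly into the procedure's volume calculations.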
7. The Sample Size for the Training Set
- The document does not mention a training set. This is because the device described is an apheresis system with embedded software, not a machine learning or AI-driven system that typically requires a distinct training dataset. The software update is a traditional, rule-based or algorithmic software modification.
8. How the Ground Truth for the Training Set Was Established
- As a training set is not mentioned, the method for establishing its ground truth is not applicable/not provided.