CommandEP is intended for use as a medical imaging system that allows the review, analysis, communication, and media interchange of multi-dimensional digital images. It is also intended for intraprocedural use. CommandEP is designed as an additional visualization modality to assist the clinician. CommandEP is indicated for use in electrophysiology (EP) procedures to assist the clinician in visualization of cardiac electroanatomic data.
The SentiAR CommandEP device is a medical imaging system that allows the review, analysis, communication, and media interchange of multi-dimensional digital images. The CommandEP HMD provides a real-time, three-dimensional (3D) visualization of electroanatomic mapping system (EAMS) data (the Mixed Reality (MXR) EAMS Visualization). CommandEP is intended to be used as an adjunct device to assist the clinician with visualization during cardiac electrophysiology procedures.
Real-time multi-dimensional images are received from an Electroanatomic Mapping System (EAMS) by the touchscreen, all-in-one computer (the Model DPC02 Data Manager PC) and wirelessly transmitted to up to five (5) Head Mounted Displays (the Model HMD02 CommandEP HMD). Additional HMDs may be allocated to facilitate charging and device management. The EAMS is directly connected to the CommandEP by a wired network cable. The CommandEP Data Manager PC communicates with the CommandEP HMDs through a dedicated, encrypted Wi-Fi network provided by the Data Manager PC.
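As a rough illustration of this data path, the sketch below models the wired EAMS-to-Data-Manager link and the limit of five simultaneously streaming HMDs. The class names, fields, and the `MAX_STREAMING_HMDS` constant are illustrative assumptions for this sketch, not terms or logic taken from the 510(k) summary.

```python
from dataclasses import dataclass, field
from typing import List

MAX_STREAMING_HMDS = 5  # the summary states up to five HMDs receive the live visualization


@dataclass
class HeadMountedDisplay:
    serial: str
    streaming: bool = True  # non-streaming units may be allocated for charging/management


@dataclass
class DataManagerPC:
    """Hypothetical model of the DPC02 role: receive EAMS frames over the wired
    link and rebroadcast them to HMDs over the PC's dedicated, encrypted Wi-Fi."""
    hmds: List[HeadMountedDisplay] = field(default_factory=list)

    def register(self, hmd: HeadMountedDisplay) -> None:
        if hmd.streaming and sum(h.streaming for h in self.hmds) >= MAX_STREAMING_HMDS:
            raise ValueError("at most five HMDs may stream the live visualization")
        self.hmds.append(hmd)

    def broadcast(self, eams_frame: bytes) -> None:
        # Placeholder for the encrypted Wi-Fi transport described in the summary.
        for hmd in self.hmds:
            if hmd.streaming:
                pass  # send eams_frame to this HMD
```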
Clinician users view the MXR EAMS Visualization in stereoscopic 3D using optical see-through (OST) HMDs and manipulate the MXR EAMS Visualization using hands-free gaze-dwell controls. The OST display enables clinicians to view both the conventional EAMS display and the CommandEP MXR EAMS Visualization during the procedure. The hands-free controls enable clinicians to control the device without breaking sterility and may also reduce the need to verbalize commands to a non-sterile EAMS technician.
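To make the gaze-dwell interaction concrete, here is a minimal sketch of a common dwell-selection pattern: a control activates once the gaze point has stayed inside its bounds for a fixed dwell time. The 1.0 s threshold and all names are assumptions for illustration; the summary does not describe CommandEP's actual control logic or parameters.

```python
import time

DWELL_SECONDS = 1.0  # illustrative threshold; the actual CommandEP value is not stated


class GazeDwellButton:
    """Minimal gaze-dwell selection: the control fires once the user's gaze has
    remained inside its bounds for DWELL_SECONDS without interruption."""

    def __init__(self, bounds):
        self.bounds = bounds        # (x_min, y_min, x_max, y_max) in display coordinates
        self.dwell_start = None     # time the gaze first entered the bounds

    def _contains(self, gaze_point):
        x, y = gaze_point
        x0, y0, x1, y1 = self.bounds
        return x0 <= x <= x1 and y0 <= y <= y1

    def update(self, gaze_point, now=None):
        """Call once per gaze sample; returns True on the sample where the dwell completes."""
        now = time.monotonic() if now is None else now
        if not self._contains(gaze_point):
            self.dwell_start = None  # gaze left the control: reset the timer
            return False
        if self.dwell_start is None:
            self.dwell_start = now
        if now - self.dwell_start >= DWELL_SECONDS:
            self.dwell_start = None  # re-arm after firing
            return True
        return False
```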
CommandEP allows a clinician user to modify the personal view of data, but does not deliver therapy, intervene with therapy, assist the clinician with therapeutic decisions, or otherwise affect the performance of any other medical device.
CommandEP also provides a shared view function which allows observers or supporting staff to view the cardiac electroanatomic data, with the notated perspective of a selected HMD user, on either 1) an HMD or 2) a conventional PC display provided by the Data Manager PC (the Spectator Display) to facilitate team communication.
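Below is a minimal sketch of the shared-view idea, under the assumption that the Spectator Display simply mirrors the viewing perspective of one selected HMD user. The `ViewPose` and `SpectatorDisplay` names are hypothetical and not taken from the submission.

```python
from dataclasses import dataclass
from typing import Dict, Optional, Tuple


@dataclass
class ViewPose:
    """One HMD user's viewing perspective on the electroanatomic model."""
    position: Tuple[float, float, float]
    rotation: Tuple[float, float, float, float]  # quaternion


class SpectatorDisplay:
    """Hypothetical mirror of a selected HMD user's view for observers,
    shown on another HMD or on the Data Manager PC's conventional display."""

    def __init__(self) -> None:
        self.source_hmd: Optional[str] = None  # identifier of the followed HMD

    def follow(self, hmd_id: str) -> None:
        self.source_hmd = hmd_id

    def current_view(self, poses: Dict[str, ViewPose]) -> Optional[ViewPose]:
        # A real implementation would also carry the followed user's notations.
        return poses.get(self.source_hmd) if self.source_hmd else None
```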
The intended physician user of the CommandEP device is an electrophysiologist. The electrophysiologist performs procedures in a cardiac electrophysiology laboratory, which is a sterile professional healthcare environment. All components of the CommandEP device are non-patient-contacting and are provided non-sterile.
The provided text does not contain information about specific acceptance criteria or the details of a study proving the device meets those criteria. It is a 510(k) summary from the FDA, which focuses on demonstrating substantial equivalence to a legally marketed predicate device rather than presenting detailed performance study results against specific acceptance criteria.
Therefore, I cannot populate the requested table or answer most of the questions. However, based on the provided text, I can infer some general information:
- No clinical testing was required: This indicates that the substantial equivalence was based on non-clinical performance and technological comparisons, not a clinical study involving human subjects or AI-assisted human reader performance.
- Focus on Substantial Equivalence: The document repeatedly emphasizes that the device is "as safe and effective as the legally marketed predicate device" and "raises no new questions of safety or effectiveness." This is the primary "proof" for 510(k) clearance.
- Acceptance Criteria for Non-Clinical Tests: The document states that "The test methods and acceptance criteria were equivalent to the predicate device in support of the intended use." While the specific criteria aren't detailed, it implies that the device had to perform comparably to the predicate in areas like biocompatibility, electromagnetic compatibility, wireless capability, electrical, mechanical, and thermal safety, and software specifications.
Here's what can be extracted directly or indirectly from the provided text, and what cannot:
1. A table of acceptance criteria and the reported device performance:
| Acceptance Criteria (Implied) | Reported Device Performance (Implied) |
|---|---|
| Biocompatibility compliant to ISO 10993-1 | Demonstrated compliance |
| Electromagnetic compatibility compliant to IEC 60601-1-2 | Demonstrated compliance |
| Wireless capability compliant to relevant standards | Demonstrated compliance |
| Electrical, mechanical, and thermal safety compliant to ANSI AAMI ES60601-1 | Demonstrated compliance |
| Software life cycle processes compliant to IEC 62304 | Demonstrated compliance |
| Optical properties compliant to IEC 63145-20-10 | Demonstrated compliance |
| Image quality compliant to IEC 63145-20-20 | Demonstrated compliance |
| Safety and effectiveness equivalent to predicate device (K192890) | Concluded to be as safe and effective as the predicate, raising no new questions of safety or effectiveness |
(Note: The document states "test methods and acceptance criteria were equivalent to the predicate device," but does not list the specific numerical or qualitative acceptance criteria themselves. The "reported device performance" is a general statement of compliance rather than specific metrics.)
2. Sample size used for the test set and the data provenance:
- The document does not mention "test sets" in the context of clinical data. The testing described is non-clinical performance verification and validation. No sample sizes for data sets are provided.
- Data provenance is not applicable here as no patient data or clinical study data is referenced.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):
- Not applicable. No clinical test set or ground truth established by experts is mentioned for this 510(k) submission.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable. No clinical test set or adjudication process is mentioned.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:
- No MRMC comparative effectiveness study was done. The document explicitly states: "No clinical testing was required to develop evidence of substantial equivalence to the predicate device." The device is intended as an "additional visualization modality to assist the clinician," but its impact on human reader performance was not assessed in a clinical study for this submission.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done:
- Yes, in a sense. The "performance testing" mentioned (e.g., software, electrical, optical properties) would be standalone testing of the device's functionality. However, it's not "algorithm only" in the context of diagnostic performance as would be seen for an AI diagnostic device. The device is a "medical imaging system" for visualization rather than an autonomous diagnostic algorithm.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Not applicable in the context of clinical evaluation or performance against a diagnostic gold standard. The "ground truth" for the non-clinical tests would be the established engineering and safety standards (e.g., ISO, IEC, ANSI AAMI) and the performance characteristics of the predicate device.
8. The sample size for the training set:
- Not applicable. This device is not described as an AI/ML device that requires a "training set" in the conventional sense of machine learning for diagnostic tasks. Its function is to visualize data from an EAMS.
9. How the ground truth for the training set was established:
- Not applicable, as there is no mention of a training set for an AI/ML model.
§ 892.2050 Medical image management and processing system.
(a)
Identification. A medical image management and processing system is a device that provides one or more capabilities relating to the review and digital processing of medical images for the purposes of interpretation by a trained practitioner of disease detection, diagnosis, or patient management. The software components may provide advanced or complex image processing functions for image manipulation, enhancement, or quantification that are intended for use in the interpretation and analysis of medical images. Advanced image manipulation functions may include image segmentation, multimodality image registration, or 3D visualization. Complex quantitative functions may include semi-automated measurements or time-series measurements.
(b)
Classification. Class II (special controls; voluntary standards—Digital Imaging and Communications in Medicine (DICOM) Std., Joint Photographic Experts Group (JPEG) Std., Society of Motion Picture and Television Engineers (SMPTE) Test Pattern).