510(k) Data Aggregation
(549 days)
EPIC ClearView System
The ClearView™ System provides two sets of numbers under two different conditions: one with a capacitive barrier, which minimizes the effect of variables such as oils and perspiration on the image, and one without the capacitive barrier. The device provides numerical measures of electrophysiological signals emanating from the skin. The device is limited to use as a measurement tool and is not intended for diagnostic purposes or for influencing any clinical decisions. This device is only to be used to image and document electrophysiological signals emanating from the skin. Clinical management decisions should be made on the basis of routine clinical care and professional judgment in accordance with standard medical practice.
The ClearView System consists of the ClearView Device (hardware) attached to a computer/software system. The measurements are digital photographs acquired when placing a fingertip in contact with a glass electrode. A series of electrical impulses are applied to the glass electrode generating a localized electromagnetic field around the fingertip. Under the influence of this field, an image is generated. A software analysis of the images of the 10 fingers (including the thumbs) provides the inputs for an algorithm-driven Response Scale Report. The ClearView System provides numerical electrophysiological data to the healthcare professional. Any interpretation of this information is the responsibility of the healthcare professional; the device is limited to use as a measurement tool and is not intended for diagnostic purposes or for influencing any clinical decisions.
The ClearView™ System is intended as a non-invasive measurement tool for detecting electrophysiological signals from the skin. It is explicitly stated that the device is not intended for diagnostic purposes or for influencing any clinical decisions, and the reported numbers have no clinical context. Therefore, the acceptance criteria and study design focus on the repeatability and reproducibility of its measurements, rather than clinical efficacy.
1. Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria Category | Specific Metric | Acceptance Threshold (Implicit) | Reported Device Performance |
|---|---|---|---|
| Reliability | Intraclass Correlation Coefficient (ICC), inter- and intra-operator | "Good reliability" (ICC between 0.60 and 0.74) and "excellent reliability" (ICC above 0.75), per Cicchetti (1994) and Cicchetti & Sparrow (1981) | Average inter-operator ICC: 0.74 (95% CI: 0.65, 0.83); average intra-operator ICC: 0.77 (95% CI: 0.68, 0.87) |
| Reliability | Intraclass Correlation Coefficient (ICC), inter- and intra-day | "Good reliability" (ICC between 0.60 and 0.74) and "excellent reliability" (ICC above 0.75) | Average inter-day ICC: 0.72 (95% CI: 0.62, 0.82); average intra-day ICC: 0.78 (95% CI: 0.71, 0.86) |
| Repeatability/Reproducibility | Coefficient of Variation (CV) | Relatively low coefficients of variation (implicitly indicating good repeatability and reproducibility) | Average CV by operator–scanner pair: 0.098 (95% CI: 0.061, 0.14); average CV by day: 0.096 (95% CI: 0.060, 0.13) |
Conclusion on Acceptance Criteria: The study results for ICCs generally fall within or exceed the "good reliability" and "excellent reliability" thresholds, and the coefficients of variation are reported as "relatively low," indicating the device meets its implicit acceptance criteria for repeatability and reproducibility.
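The statistics above can be made concrete with a short sketch. The following Python code is illustrative only (it is not part of the submission): `icc_2_1` computes a two-way random-effects, absolute-agreement, single-measurement ICC (Shrout–Fleiss ICC(2,1), a common choice for operator/scanner reliability designs, though the submission does not specify which ICC model was used), `cicchetti_band` maps an ICC value onto the Cicchetti (1994) bands cited in the table, and `coefficient_of_variation` is the sample standard deviation divided by the mean. All function names and the toy data are assumptions.

```python
import numpy as np

def icc_2_1(scores):
    """ICC(2,1): two-way random effects, absolute agreement, single measurement.

    scores: array of shape (n_subjects, k_raters), e.g. one ClearView-style
    numeric output per subject per operator/scanner (illustrative layout).
    """
    scores = np.asarray(scores, dtype=float)
    n, k = scores.shape
    grand = scores.mean()
    subj_means = scores.mean(axis=1)
    rater_means = scores.mean(axis=0)
    # Mean squares from the two-way ANOVA decomposition
    msr = k * np.sum((subj_means - grand) ** 2) / (n - 1)   # between subjects
    msc = n * np.sum((rater_means - grand) ** 2) / (k - 1)  # between raters
    sse = np.sum((scores - subj_means[:, None] - rater_means[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                          # residual error
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def cicchetti_band(icc):
    """Map an ICC value onto the Cicchetti (1994) qualitative bands."""
    if icc >= 0.75:
        return "excellent"
    if icc >= 0.60:
        return "good"
    if icc >= 0.40:
        return "fair"
    return "poor"

def coefficient_of_variation(x):
    """Sample standard deviation over the mean (dimensionless)."""
    x = np.asarray(x, dtype=float)
    return x.std(ddof=1) / x.mean()
```

Under this mapping, the reported average inter-operator ICC of 0.74 falls at the top of the "good" band and the average intra-operator ICC of 0.77 falls in the "excellent" band, matching the document's conclusion.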
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 18 subjects (9 males, 9 females).
- Data Provenance: Prospectively enrolled, single-center study. Subjects were recruited internally from the study site (family and friends of staff members, excluding employees directly involved with the study). This suggests a limited geographic origin, likely the country where EPIC™ Research & Diagnostics is located (Scottsdale, AZ, USA).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
This information is not applicable as the study did not involve establishing a diagnostic ground truth or expert interpretation of the electrophysiological signals. The "ground truth" for this study was the raw measurement itself, and the goal was to assess its consistency.
4. Adjudication Method (for the test set)
This information is not applicable. The study focused on the technical performance (repeatability and reproducibility) of the device's measurements, not on clinical interpretations that would require adjudication.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study was done
No, a Multi Reader Multi Case (MRMC) comparative effectiveness study was not done. The study design involved three trained and certified operators using three different ClearView Scanners to assess the consistency of the device's measurements, not to compare human reader performance with and without AI assistance. The device is explicitly not intended for diagnostic purposes or for influencing clinical decisions.
6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was done
The study primarily assessed the system's standalone performance in terms of repeatability and reproducibility. While operators initiated scans, the analysis and generation of "numerical measures of electrophysiological signals" are done by the device's software. The study's focus on Coefficient of Variation and ICCs for these numerical outputs directly evaluates the consistency of the algorithm's output under different operators, scanners, and days. The "device is limited to use as a measurement tool and is not intended for diagnostic purposes or for influencing any clinical decisions," implying its output stands on its own as a measurement.
7. The Type of Ground Truth Used
The "ground truth" for this study was the device's own measurements of electrophysiological signals. The study aimed to assess the consistency and reliability of these measurements across different variables (operators, scanners, days), rather than comparing them to an external, independently established clinical ground truth (e.g., pathology, clinical outcome, or expert consensus on diagnosis).
8. The Sample Size for the Training Set
This information is not provided in the document. The document describes a clinical study to assess the performance of the device, but it does not detail any internal training data used for the device's software algorithm development.
9. How the Ground Truth for the Training Set Was Established
This information is not provided in the document. As this document describes the regulatory submission for the device and a study on its reliability, details about the development and training of the internal algorithm are not included.