Notal Vision Home Optical Coherence Tomography (OCT) System
The Notal Vision Home Optical Coherence Tomography (OCT) System is an Artificial Intelligence (AI)-based Home Use device indicated for visualization of intraretinal and subretinal hypo-reflective spaces in a 10 by 10-degree area centered on the point of fixation of eyes diagnosed with neovascular age-related macular degeneration (NV-AMD). In addition, it provides segmentation and an estimation of the volume of hypo-reflective spaces. The Notal Home OCT device is intended for imaging at home between regularly scheduled clinic assessments; it is not intended to be used to make treatment decisions or to replace standard-of-care regularly scheduled examinations and clinical testing as needed, including in-office imaging and assessments for changes in vision, by an ophthalmologist.
The NVHO System consists of two elements:
- Notal Home OCT device: patients use this to self-image their eyes using Spectral-Domain OCT. At the end of each scanning session, the data are transmitted via a secured wireless connection to the Notal Health Cloud.
- Notal Health Cloud: a cluster of servers and analysis units on which the Notal OCT Analyzer (NOA) analyzes the scans received from the Notal Home OCT device. Processed data are presented in a dedicated interactive web application, the Notal Home OCT Web Viewer.
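To make the two-element data flow concrete (device capture, secure upload, cloud-side analysis, web presentation), here is a minimal Python sketch. It is purely illustrative: every class, field, and method name is hypothetical, and none of it reflects Notal Vision's actual software.

```python
from dataclasses import dataclass, field

@dataclass
class OCTScan:
    """One self-imaging session's SD-OCT volume (hypothetical structure)."""
    patient_id: str
    eye: str                                     # "OD" or "OS"
    bscans: list = field(default_factory=list)   # placeholder raw data

@dataclass
class CloudStub:
    """Illustrative stand-in for the Notal Health Cloud: receives scans
    and runs an analyzer step (standing in for the NOA), whose output a
    web viewer would then present to the physician."""
    results: list = field(default_factory=list)

    def receive(self, scan: OCTScan) -> None:
        # In the real system, scans arrive over a secured wireless link.
        self.results.append(self._analyze(scan))

    def _analyze(self, scan: OCTScan) -> dict:
        # Placeholder for segmentation and volume estimation of
        # hypo-reflective spaces; returns no real measurement.
        return {"patient_id": scan.patient_id, "eye": scan.eye,
                "tro_volume_estimate": None}

cloud = CloudStub()
cloud.receive(OCTScan("P001", "OD"))
print(cloud.results)
```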
Here's a breakdown of the acceptance criteria and the studies demonstrating that the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
The core acceptance criteria for the Notal Vision Home OCT System revolve around its ability to:
- Accurately visualize intraretinal (IRO) and subretinal (SRO) hypo-reflective spaces (HRS), often referred to as intraretinal fluid (IRF) and subretinal fluid (SRF) in the clinical studies.
- Provide segmentation and estimation of the volume of these hypo-reflective spaces (TRO).
- Perform as intended in a home-use setting, including successful self-imaging.
The studies focused on the agreement in visualization of these spaces and the agreement/precision of volume quantification.
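For reference, the positive and negative percent agreement (PPA, NPA) figures reported in the table below are the standard agreement measures against the reading-center grading:

$$
\mathrm{PPA} = \frac{TP}{TP + FN}, \qquad \mathrm{NPA} = \frac{TN}{TN + FP}
$$

where TP, FN, TN, and FP count cases in which the device and the reading-center grading agree or disagree on the presence of fluid. Because the comparator is expert grading rather than an independent gold standard, these are agreement measures rather than sensitivity and specificity.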
Table of Acceptance Criteria and Reported Device Performance
| Acceptance Criteria (from Special Controls & Study Endpoints) | Reported Device Performance (from 001 and 006 Clinical Studies) |
| --- | --- |
| **Clinical Performance (001 Study):** Evaluate agreement between NVHO and RC-graded in-office OCT macular scans in the visualization of retinal fluid (TRO, SRO, IRO) in the central 3×3-mm area; evaluate participants' ability to self-image successfully with the NVHO device (setup success rate, self-imaging success rate). | **001 Study (Visualization Agreement):** PPA for TRO: 0.864 (95% CI 0.802, 0.926; p=0.043 for H0: PPA ≤ 0.8), meeting the statistical threshold. NPA for TRO: 0.849 (95% CI 0.792, 0.907; p=0.094 for H0: NPA ≤ 0.8), not meeting the statistical threshold (p > 0.05). Initial NVHO setup success rate: 86.7% (95% CI 80.8%–91.3%; 156/180). NVHO self-imaging success rate: 96.1% (95% CI 92.2%–98.4%). Successful transmission to the Cloud (any self-imaging): study eyes 97.2% (95% CI 93.6%–99.1%); fellow eyes 94.9% (95% CI 89.8%–97.9%). |
| **Clinical Performance (006 Study):** Evaluate agreement in estimated retinal fluid volume between manually segmented CIRRUS HD-OCT scans and the NOA algorithm analyzing NVHO scans (TRO volume); estimate repeatability and reproducibility of the TRO parameter; evaluate segmentation overlap of IRF and SRF between the NOA and manual graders (Dice coefficient and NPA); patient visual acuity of 20/320 or better. | **006 Study (Volume Quantification & Segmentation):** Repeatability %CV for TRO (10 VU): 5.9% to 25.0%. Reproducibility %CV for TRO (10 VU): 11.4% to 33.4%. Agreement (Bland–Altman limits of agreement) and Deming regression analyses were conducted for volume estimates, but specific numerical agreement results with CIRRUS are not detailed in the provided text beyond the analysis methods. Segmentation overlap (Dice coefficient, mean ± SD), NOA vs. graders: Grader 1 TRO 0.5819 ± 0.2958; Grader 2 TRO 0.5655 ± 0.3203; Grader 3 TRO 0.5196 ± 0.3182 (for comparison, grader-to-grader TRO Dice varied, e.g., Grader 1 vs. Grader 2: 0.6222 ± 0.2754). Segmentation NPA, NOA vs. graders: TRO 0.734–0.787 (e.g., vs. Grader 1: 0.734, 58/79); SRO 0.764–0.856 (e.g., vs. Grader 1: 0.764, 123/161); IRO 0.946–0.966 (e.g., vs. Grader 1: 0.946, 176/186). Patient visual acuity: 001 study mean logMAR 0.281 (Snellen 20/38.2), maximum 1.04 (20/219.3); 006 study mean logMAR 0.350 (20/44.8), maximum 1.20 (20/320.0); the study populations included VA up to 20/320, consistent with the indication-for-use limit (see the arithmetic sketch following this table). |
| **Non-clinical Performance (Bench Testing):** Verify technical specifications: spatial characteristics, sensitivity, diopter range, image quality, field of view, resolution. | **Bench Performance Verification (Table 2):** Passed all tests, including axial resolution, lateral resolution, axial range, lateral range, device sensitivity, and diopter range. Specific numerical criteria were redacted ((b)(4)), but all results are reported as "Passed". |
| **Human Factors Validation:** Demonstrate that intended users can correctly use the device (patient and HCP interfaces). | **Human Factors Studies:** Two summative studies were conducted, one for the patient interface and one for the HCP interface. The patient interface (NVHO device, labeling) and the physician interface (Web Viewer, labeling) were found safe and effective for the intended users, uses, and environments; no use errors were observed on critical tasks. |
| **Biocompatibility:** Patient-contacting components are biocompatible. | **Biocompatibility Evaluation:** Conducted for five patient-contacting components according to ISO 10993-1; found acceptable. |
| **Software Verification, Validation & Hazard Analysis:** Software operates per its specifications, is protected from cybersecurity threats, and addresses identified risks. | **Software Documentation & Testing:** Detailed descriptions of inputs/outputs, modules, and interactions were provided; all device components are controlled by software. Training and testing data were described. Verification and validation testing addressed the identified hazards with satisfactory results, and the cybersecurity information demonstrated protection from cyber vulnerability threats. The documentation was adequate and met the applicable standards. |
| **Optical Radiation Safety:** Provide acceptable light-hazard protection. | **Optical Radiation Safety Testing:** Conducted in accordance with ANSI Z80.36:2021. The test report, descriptions, measurement procedures, and justification for worst-case scenarios were found acceptable. |
| **EMC, Wireless Coexistence, Electrical Safety:** Performance testing demonstrates safety. | **EMC & Electrical Safety Testing:** Performed per IEC 60601-1, IEC 60601-1-11, and IEC 60601-1-2; results support electrical safety and electromagnetic compatibility. |
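As a quick sanity check on two kinds of arithmetic reported in the table, a minimal Python sketch follows: it computes a coefficient of variation (the %CV precision metric) and converts logMAR acuity to a Snellen denominator. The repeated-measurement values are invented for illustration and are not study data.

```python
import statistics

def percent_cv(measurements: list[float]) -> float:
    """Coefficient of variation, %CV = (SD / mean) * 100, the precision
    metric behind the repeatability/reproducibility figures above."""
    return statistics.stdev(measurements) / statistics.mean(measurements) * 100

def logmar_to_snellen_denominator(logmar: float) -> float:
    """Convert logMAR acuity to the denominator x of a Snellen fraction
    20/x, using x = 20 * 10**logMAR."""
    return 20 * 10 ** logmar

# Invented repeated TRO volume estimates (illustration only, not study data):
print(round(percent_cv([9.5, 10.2, 10.8, 9.9]), 1))    # 5.4 (%CV)

# Reproduces the table's logMAR-to-Snellen conversions:
print(round(logmar_to_snellen_denominator(0.281), 1))  # 38.2, i.e. 20/38.2
print(round(logmar_to_snellen_denominator(1.04), 1))   # 219.3, i.e. 20/219.3
```

(The 006 study's maximum logMAR of 1.20 computes to roughly 20/317; the summary reports it as 20/320.0.)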
Study Details:
1. Sample Sizes for Test Sets and Data Provenance:
- 001 Study (Visualization Agreement):
- Test Set (mVAP cohort): 160 participants.
- Data Provenance: Prospective, longitudinal study conducted at seven sites in the United States.
- 006 Study (Volume Quantification & Segmentation):
- Test Set (Fluid Precision Analysis Population, Fluid Agreement Analysis Population): 331 participants.
- Test Set (Dice Analysis Population): 336 participants.
- Data Provenance: Prospective, cross-sectional, observational, single-visit study conducted at six sites in the United States.
2. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- 001 Study: Ground truth was established by expert graders at a third-party reading center (RC). The specific number and qualifications of these experts are not explicitly stated beyond "expert graders."
- 006 Study: Ground truth (manual segmentation of hypo-reflective spaces) was performed by three independent, masked graders from a third-party reading center (RC). Their specific qualifications are not explicitly stated beyond being "independent, masked graders."
3. Adjudication Method for the Test Set:
- 001 Study: For visualization of retinal fluid, "Disagreements between graders were adjudicated." The specific method (e.g., 2+1, 3+1) is not explicitly detailed, but resolution of disagreements by adjudication is stated.
- 006 Study: For manual segmentation, graders were masked to each other's determinations. The text does not state an adjudication method for segmentation disagreements among the three graders; instead, the NOA's performance is compared against each grader individually as a "reference standard," and grader-to-grader agreement is reported separately. For the primary outcome of agreement in estimated retinal fluid volume, the text notes only that "Independent, masked graders from a third-party reading center (RC) performed manual segmentation... Graders were masked to each others' determinations..." Whether discrepancies were resolved into a single ground truth before comparison with the AI is implied by the term "reference standard" but is not explicitly described (e.g., as 2+1); a generic example of such a scheme is sketched below.
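For context, here is a minimal sketch of one common reading-center scheme, "2+1" adjudication, in which a third senior grader resolves only discordant reads. This is purely illustrative: the summary does not state which scheme (if any) was used, and all names are hypothetical.

```python
def adjudicate_2_plus_1(grader1: bool, grader2: bool, adjudicator) -> bool:
    """Generic 2+1 adjudication: two primary graders read each scan,
    and a senior adjudicator resolves only the cases where they disagree.
    Hypothetical; the 510(k) summary does not specify the RC's scheme."""
    if grader1 == grader2:
        return grader1       # concordant reads stand as the ground truth
    return adjudicator()     # discordant reads go to the adjudicator

# Example: the graders disagree on fluid presence; the adjudicator decides.
label = adjudicate_2_plus_1(True, False, lambda: True)
print(label)  # True
```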
4. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:
- The provided text does not describe an MRMC comparative effectiveness study designed to evaluate how much human readers improve with AI vs. without AI assistance.
- The studies instead focus on the device's standalone performance (NOA algorithm) compared to expert human grading or clinic-based OCT, and the patient's ability to self-image using the device. The 001 study evaluated the agreement of the NVHO device's visualization with expert-graded in-office OCT, not an AI-assisted human reading task. The 006 study evaluated the agreement and precision of the automated volume quantification with expert manual segmentation.
5. Standalone Performance (Algorithm Only without Human-in-the-Loop):
- Yes, standalone performance was evaluated extensively.
- The Notal OCT Analyzer (NOA), which is the AI algorithmic module, operates on the Notal Health Cloud and processes the volume scans. Its purpose is to segment and calculate the volume of SRO and IRO.
- The 001 Study's primary effectiveness endpoints were the Positive and Negative Percent Agreements (PPA, NPA) of the NVHO's visualization (implying the NOA's interpretation of the scans) with RC-graded CIRRUS HD-OCT scans. This is a direct measure of the device's standalone performance in detecting fluid.
- The 006 Study directly evaluated the agreement between NOA-based volume estimates and manually segmented CIRRUS-based volume estimates, as well as the Dice coefficients (segmentation overlap) and NPAs between the NOA and individual graders. These are all measures of the standalone algorithmic performance.
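Since the 006 study's segmentation-overlap endpoint is the Dice coefficient, a minimal sketch of how Dice is computed on binary masks may help. It assumes nothing about the NOA's internals, and the toy masks are invented for illustration.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary segmentation masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy 2D masks standing in for algorithm and grader segmentations:
algo = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
grader = np.array([[0, 1, 1], [0, 0, 0], [0, 0, 0]])
print(round(dice_coefficient(algo, grader), 4))  # 0.8
```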
6. Type of Ground Truth Used:
- For the clinical studies (001 and 006), the ground truth for fluid presence/absence and volume/segmentation was established by expert consensus/grading from a third-party reading center based on in-office Spectral-Domain OCT images (CIRRUS HD-OCT).
- In the 001 study, "RC graders were masked to each other's determinations, to the source device, and to the participant ID number. Ordering of the scans was randomized. Disagreements between graders were adjudicated."
- In the 006 study, "Independent, masked graders from a third-party reading center (RC) performed manual segmentation of hypo-reflective spaces on the central 3×3-mm area of acceptable CIRRUS macular scans. Graders were masked to each others' determinations and to the participant ID number. Ordering of the scans was randomized."
7. Sample Size for the Training Set:
- The provided text states: "A detailed description of data used to train and test the algorithms was provided, including cases, sources, demographics and reference standards." However, the specific sample size for the training set is not explicitly listed in the provided document. The 001 and 006 studies are described as "pivotal clinical studies" for evaluating clinical performance, implying that they served as test sets rather than training sets.
8. How the Ground Truth for the Training Set Was Established:
- As with the training set size, the document confirms that a "description of the data used to train... algorithms" was provided by the sponsor, but the specific methods for establishing ground truth for the training set are not detailed in this public summary. It can be inferred that they likely involved expert-grading processes similar to those used for the test sets, but the document does not confirm this.