IDx-DR is indicated for use by healthcare providers to automatically detect more than mild diabetic retinopathy (mtmDR) in adults diagnosed with diabetes who have not been previously diagnosed with diabetic retinopathy. IDx-DR is indicated for use with the Topcon NW400.
The IDx-DR device consists of several components. A camera is attached to a computer on which the IDx-DR Client is installed. Guided by the Client, users acquire two fundus images per eye, which are sent to IDx-Service. IDx-Service is installed on a server hosted at a secure datacenter. From IDx-Service, the images are transferred to IDx-DR Analysis; no information other than the fundus images is required to perform the analysis. IDx-DR Analysis, which runs on dedicated servers hosted in the same secure datacenter as IDx-Service, processes the fundus images and returns information on the exam quality and the presence or absence of mtmDR to IDx-Service. IDx-Service then transmits the results to the IDx-DR Client, which displays them to the user.
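The description above amounts to a simple client/service/analysis pipeline. The following is a minimal, hypothetical Python sketch of that round trip; all class and function names (FundusExam, AnalysisResult, analyze, service_round_trip) and the placeholder checks are illustrative assumptions, not taken from the actual IDx-DR software.

```python
# Hypothetical sketch of the client -> service -> analysis round trip described
# above. All names are illustrative; none come from the actual IDx-DR software.
from dataclasses import dataclass
from typing import Optional


@dataclass
class FundusExam:
    """Two fundus images per eye, acquired under guidance of the Client."""
    left_eye: tuple[bytes, bytes]
    right_eye: tuple[bytes, bytes]


@dataclass
class AnalysisResult:
    """Returned by the analysis component to the service layer."""
    quality_sufficient: bool
    mtmdr_detected: Optional[bool]  # None when image quality is insufficient


def analyze(exam: FundusExam) -> AnalysisResult:
    """Stand-in for IDx-DR Analysis: grades exam quality, then mtmDR."""
    images = [*exam.left_eye, *exam.right_eye]
    if any(len(img) == 0 for img in images):  # placeholder quality check
        return AnalysisResult(quality_sufficient=False, mtmdr_detected=None)
    detected = False  # placeholder disease check
    return AnalysisResult(quality_sufficient=True, mtmdr_detected=detected)


def service_round_trip(exam: FundusExam) -> AnalysisResult:
    """Stand-in for IDx-Service: forwards images to analysis and relays the
    result back to the Client. Only the fundus images are transmitted; no
    other patient data is required."""
    return analyze(exam)


if __name__ == "__main__":
    exam = FundusExam(left_eye=(b"img1", b"img2"), right_eye=(b"img3", b"img4"))
    print(service_round_trip(exam))
```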
Here's an analysis of the acceptance criteria and study information for the IDx-DR device, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The provided 510(k) summary (K203629) states that the device modifications do not affect clinical performance and refers to the predicate device (DEN180001) for clinical trial details. The acceptance criteria and reported device performance are therefore identical to those of the predicate device; complete information would require reviewing the DEN180001 submission. Based solely on the provided K203629 document, the table looks like this:
| Acceptance Criterion | Reported Device Performance (from K203629) |
|---|---|
| Auto-detect more than mild diabetic retinopathy (mtmDR) | Not explicitly stated in K203629. K203629 states: "The device modifications do not affect clinical performance." Performance is considered "Equivalent" to predicate device DEN180001. |
| Refer to an eye care professional for mtmDR detected | Not explicitly stated in K203629. K203629 states: "The device modifications do not affect clinical performance." Performance is considered "Equivalent" to predicate device DEN180001. |
| Rescreen in 12 months for mtmDR not detected | Not explicitly stated in K203629. K203629 states: "The device modifications do not affect clinical performance." Performance is considered "Equivalent" to predicate device DEN180001. |
| Insufficient image quality identified | Implied as an output, but no performance metric given. K203629 states: "The device modifications do not affect clinical performance." Performance is considered "Equivalent" to predicate device DEN180001. |
Important Note: To obtain the actual numerical acceptance criteria (e.g., sensitivity and specificity thresholds) and the reported performance values, the DEN180001 submission would need to be reviewed; the K203629 document explicitly defers to the predicate rather than restating those details. (A worked example of these metrics, with confidence intervals, appears after the regulation text at the end of this document.)
2. Sample Size Used for the Test Set and Data Provenance
Since the current submission (K203629) states that "The determination of substantial equivalence is not based on an assessment of clinical performance data" and refers to DEN180001 for clinical trial details, this information is not available in the provided text.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
This information is not provided in the K203629 document. It would be found in the clinical trial details for the predicate device (DEN180001).
4. Adjudication Method for the Test Set
This information is not provided in the K203629 document. It would be found in the clinical trial details for the predicate device (DEN180001).
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
A Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance is not mentioned in the provided K203629 document. The document explicitly states that the substantial equivalence determination is not based on new clinical performance data and refers to the predicate device's clinical trial.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Study Was Done
The K203629 document describes the IDx-DR Analysis component as "Software that analyzes the patient's images and determines exam quality and the presence/absence of diabetic retinopathy." This implies a standalone algorithmic assessment. However, the performance metrics of this specific version of the standalone algorithm are not presented in this document, which relies on the predicate device's clinical performance. The "Outputs" section of Table 1 supports the standalone nature of the assessment, directly listing DR detection and referral decisions as device outputs.
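As a concrete illustration, the standalone output could map to the three user-facing results described in the table above as follows. This is a hypothetical Python sketch; the function name device_output and the message strings are assumptions made for illustration, not the device's actual interface.

```python
# Hypothetical mapping from the standalone algorithm's output to the three
# user-facing results (refer / rescreen / insufficient quality) described in
# the acceptance-criteria table above. Names and messages are illustrative.
from typing import Optional


def device_output(quality_sufficient: bool, mtmdr_detected: Optional[bool]) -> str:
    if not quality_sufficient:
        return "Insufficient exam quality: reacquire images."
    if mtmdr_detected:
        return ("More than mild diabetic retinopathy detected: "
                "refer to an eye care professional.")
    return ("Negative for more than mild diabetic retinopathy: "
            "rescreen in 12 months.")


assert "refer" in device_output(True, True)
assert "rescreen" in device_output(True, False)
assert "Insufficient" in device_output(False, None)
```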
7. The Type of Ground Truth Used
This information is not provided in the K203629 document. It would be found in the clinical trial details for the predicate device (DEN180001). Typically, for diabetic retinopathy, ground truth is established by a panel of expert ophthalmologists or retina specialists through consensus reading of images, potentially correlated with other clinical findings.
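For illustration, a common consensus procedure of this kind can be sketched as a majority vote over independent expert grades, with ties escalated to adjudication. The following is a generic Python example under that assumption, not the protocol used in DEN180001:

```python
# Generic sketch of reading-center consensus grading: each expert grades
# independently, the majority grade becomes ground truth, and cases with no
# majority are escalated to adjudication. Not the DEN180001 protocol.
from collections import Counter
from typing import Optional


def consensus_grade(grades: list[str]) -> Optional[str]:
    """Return the majority grade, or None if no majority (needs adjudication)."""
    counts = Counter(grades)
    grade, votes = counts.most_common(1)[0]
    return grade if votes > len(grades) / 2 else None


assert consensus_grade(["mtmDR", "mtmDR", "no mtmDR"]) == "mtmDR"
assert consensus_grade(["mtmDR", "no mtmDR"]) is None  # tie -> adjudicate
```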
8. The Sample Size for the Training Set
The document does not specify the sample size for the training set. It mentions "Future algorithm improvements will be made under a consistent medically relevant framework" and "A protocol was provided to mitigate the risk of algorithm changes," but no details on training data for the current or previous versions are given.
9. How the Ground Truth for the Training Set Was Established
The document does not provide details on how the ground truth for the training set was established.
§ 886.1100 Retinal diagnostic software device.
(a) Identification. A retinal diagnostic software device is a prescription software device that incorporates an adaptive algorithm to evaluate ophthalmic images for diagnostic screening to identify retinal diseases or conditions.
(b) Classification. Class II (special controls). The special controls for this device are:
(1) Software verification and validation documentation, based on a comprehensive hazard analysis, must fulfill the following:
(i) Software documentation must provide a full characterization of technical parameters of the software, including algorithm(s).
(ii) Software documentation must describe the expected impact of applicable image acquisition hardware characteristics on performance and associated minimum specifications.
(iii) Software documentation must include a cybersecurity vulnerability and management process to assure software functionality.
(iv) Software documentation must include mitigation measures to manage failure of any subsystem components with respect to incorrect patient reports and operator failures.
(2) Clinical performance data supporting the indications for use must be provided, including the following:
(i) Clinical performance testing must evaluate sensitivity, specificity, positive predictive value, and negative predictive value for each endpoint reported for the indicated disease or condition across the range of available device outcomes.
(ii) Clinical performance testing must evaluate performance under anticipated conditions of use.
(iii) Statistical methods must include the following:
(A) Where multiple samples from the same patient are used, statistical analysis must not assume statistical independence without adequate justification.
(B) Statistical analysis must provide confidence intervals for each performance metric.
(iv) Clinical data must evaluate the variability in output performance due to both the user and the image acquisition device used.
(3) A training program with instructions on how to acquire and process quality images must be provided.
(4) Human factors validation testing that evaluates the effect of the training program on user performance must be provided.
(5) A protocol must be developed that describes the level of change in device technical specifications that could significantly affect the safety or effectiveness of the device.
(6) Labeling must include:
(i) Instructions for use, including a description of how to obtain quality images and how device performance is affected by user interaction and user training;
(ii) The type of imaging data used, what the device outputs to the user, and whether the output is qualitative or quantitative;
(iii) Warnings regarding image acquisition factors that affect image quality;
(iv) Warnings regarding interpretation of the provided outcomes, including:
(A) A warning that the device is not to be used to screen for the presence of diseases or conditions beyond its indicated uses;
(B) A warning that the device provides a screening diagnosis only and that it is critical that the patient be advised to receive follow-up care; and
(C) A warning that the device does not treat the screened disease;
(v) A summary of the clinical performance of the device for each output, with confidence intervals; and
(vi) A summary of the clinical performance testing conducted with the device, including a description of the patient population and clinical environment under which it was evaluated.
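To make the metrics named in special control (2) concrete, here is a minimal Python sketch that computes sensitivity, specificity, PPV, and NPV from a 2x2 table, each with the 95% confidence interval required by (2)(iii)(B), using the Wilson score interval. The counts are invented purely for illustration and are not the IDx-DR trial results.

```python
# Sensitivity, specificity, PPV, and NPV from a 2x2 table, each with a 95%
# Wilson score confidence interval, as named in special control (2).
import math


def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score interval for a binomial proportion."""
    if n == 0:
        return (0.0, 1.0)
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return (center - half, center + half)


def report(name: str, successes: int, n: int) -> None:
    lo, hi = wilson_ci(successes, n)
    print(f"{name}: {successes / n:.3f} (95% CI {lo:.3f}-{hi:.3f}, n={n})")


# Invented 2x2 counts: tp/fn from diseased subjects, tn/fp from non-diseased.
tp, fn, tn, fp = 173, 25, 480, 64

report("Sensitivity", tp, tp + fn)
report("Specificity", tn, tn + fp)
report("PPV", tp, tp + fp)
report("NPV", tn, tn + fn)

# Note: per (2)(iii)(A), if both eyes of a patient contribute to the counts,
# a cluster-aware variance estimate (e.g., a patient-level bootstrap) would be
# needed instead of this simple binomial interval, which assumes independence.
```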