The MatchedFlicker® device is a software program that is intended for use by health care professionals to collect, store, and spatially calibrate (i.e. register and align) images of the posterior segment of the human eye.
EyeIC's MatchedFlicker® Device: Acceptance Criteria and Study Details
The provided 510(k) summary for K090266 describes the MatchedFlicker® device as digital imaging software intended for healthcare professionals to collect, store, and spatially calibrate images of the posterior segment of the human eye for monitoring disease progression. As a Class II device (21 CFR 892.2050, product code NFJ), it is regulated as a Picture Archiving and Communications System.
Based on the provided document, the device did not undergo performance testing with specific acceptance criteria that would typically involve numerical metrics (e.g., sensitivity, specificity, accuracy). Instead, its substantial equivalence was established by comparing its technological characteristics and intended use to legally marketed predicate devices.
The document does not describe a study that proves the device meets specific acceptance criteria in terms of clinical performance metrics. The regulatory clearance instead rests on demonstrating that the device is substantially equivalent to existing predicate devices for its stated intended use.
Here's an analysis of the provided information in the context of the requested details:
1. Table of Acceptance Criteria and Reported Device Performance
As discussed, the document does not lay out explicit acceptance criteria in the form of performance metrics (e.g., sensitivity, specificity, or alignment accuracy thresholds) that were then measured and reported for the MatchedFlicker® device itself.
Instead, the "performance" described is the device's functionality and its technological characteristics being similar to predicate devices. The implicit "acceptance criterion" from the FDA's perspective for a 510(k) submission based on substantial equivalence is that the new device is as safe and effective as a legally marketed predicate device.
| Acceptance Criterion (Implicit) | Reported Device Performance |
| --- | --- |
| Ability to collect images | MatchedFlicker® is designed to collect images. |
| Ability to store images | MatchedFlicker® is designed to store images. |
| Ability to spatially calibrate (register and align) images of the posterior segment of the human eye | MatchedFlicker® is designed to spatially calibrate (register and align) images of the posterior segment of the human eye. Its core feature is comparing time-series images through spatial calibration (the name "MatchedFlicker" implies visual comparison of aligned images). |
| Aids professionals to more easily compare and annotate time-series images | The technological characteristics section states the device "is software to aid professionals to more easily compare and annotate time-series images." |
| Substantial equivalence to predicate devices: NAVIS (K013694), Retasure (K071299), IMAGEnet (K082364) | The submission asserts, and the FDA's 510(k) clearance confirms, that MatchedFlicker® is substantially equivalent to these predicate devices based on its intended use and technological characteristics. |
Note: The document does not provide details on specific studies where numerical performance metrics were collected for these functionalities. The term "acceptance criteria" in the context of this 510(k) submission refers more to demonstrating functional equivalence and safety/effectiveness comparable to predicates, rather than meeting quantitative performance thresholds.
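The 510(k) summary does not disclose how MatchedFlicker® performs registration or presents aligned images, so the sketch below is only a generic illustration of what "spatially calibrate (register and align)" followed by flicker-style comparison can look like in software. It assumes OpenCV (ORB feature matching plus a RANSAC homography), and all file names are hypothetical; none of this is drawn from the submission.

```python
# Generic illustration only -- NOT the MatchedFlicker(R) algorithm, which the
# 510(k) summary does not describe. Registers a follow-up fundus image to a
# baseline image (ORB features + RANSAC homography), then alternates the
# aligned pair so that changes between visits appear to "flicker".
import cv2
import numpy as np


def register_to_baseline(baseline: np.ndarray, followup: np.ndarray) -> np.ndarray:
    """Warp `followup` into the coordinate frame of `baseline`."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(baseline, None)
    kp2, des2 = orb.detectAndCompute(followup, None)

    # Cross-checked Hamming matching; keep the best 200 correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:200]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = baseline.shape[:2]
    return cv2.warpPerspective(followup, H, (w, h))


if __name__ == "__main__":
    # Hypothetical file names for two visits of the same eye.
    baseline = cv2.imread("visit1_fundus.png", cv2.IMREAD_GRAYSCALE)
    followup = cv2.imread("visit2_fundus.png", cv2.IMREAD_GRAYSCALE)
    aligned = register_to_baseline(baseline, followup)

    # Flicker: alternate the aligned images. Stationary anatomy stays still
    # while any change between visits appears to blink. Press 'q' to quit.
    frames = [baseline, aligned]
    i = 0
    while True:
        cv2.imshow("flicker comparison", frames[i % 2])
        i += 1
        if cv2.waitKey(500) & 0xFF == ord("q"):
            break
```

A real PACS-class product would also handle image collection, storage, and annotation workflows, which this sketch deliberately omits.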
2. Sample size used for the test set and the data provenance
Not applicable/Not provided. The document does not describe a performance study with a dedicated "test set" of patient data for evaluating new performance metrics. The clearance is based on substantial equivalence, implying the device's functionality is inherently similar to that of existing predicate devices and therefore did not necessitate a novel clinical performance study with a specific test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable/Not provided. No test set requiring expert ground truth establishment for performance evaluation is described.
4. Adjudication method for the test set
Not applicable/Not provided. No test set or adjudication process for performance evaluation is described.
5. If a multi-reader, multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
No. The document does not describe a multi-reader, multi-case (MRMC) comparative effectiveness study or any study evaluating the improvement of human readers with or without AI assistance. The device is digital imaging software for image management and spatial calibration, not an AI-assisted diagnostic tool in the sense of generating interpretations or aiding detection beyond improved visualization.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
No. The device is described as "software to aid professionals," explicitly designed for human-in-the-loop use. There is no mention of a standalone algorithm performance evaluation.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
Not applicable/Not provided. No novel performance study requiring ground truth for clinical outcomes or diagnostic accuracy is described. The "ground truth" for the submission is the established safety and effectiveness of the predicate devices.
8. The sample size for the training set
Not applicable/Not provided. As a substantial equivalence submission for image management and calibration software, the document does not indicate the use of AI/machine learning models that would require a "training set" in the context of predictive or diagnostic performance. If any algorithms are involved (e.g., for image registration), their development would fall under standard software engineering practices rather than require a "training set" as understood in machine learning.
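As a concrete illustration of that distinction, a deterministic registration routine is verified the way any other software function is: for example, by confirming it recovers a known, synthetically applied shift. The sketch below is hypothetical; it uses OpenCV phase correlation rather than any algorithm named in the submission, and it reflects generic software verification practice, not the device's actual test plan.

```python
# Hypothetical verification-style check for a deterministic registration
# routine: apply a known translation to a synthetic image, estimate it back
# with phase correlation, and assert the estimate matches. This is ordinary
# software V&V -- no "training set" in the machine-learning sense is involved.
import cv2
import numpy as np


def test_registration_recovers_known_shift() -> None:
    rng = np.random.default_rng(seed=0)
    base = (rng.random((256, 256)) * 255).astype(np.uint8)
    base = cv2.GaussianBlur(base, (9, 9), 3)  # smooth noise -> trackable texture

    dx, dy = 7.0, -4.0  # known, synthetically applied shift
    M = np.float32([[1, 0, dx], [0, 1, dy]])
    shifted = cv2.warpAffine(base, M, (256, 256))

    # Hanning window suppresses edge effects in the spectral estimate.
    window = cv2.createHanningWindow((256, 256), cv2.CV_32F)
    (est_dx, est_dy), _response = cv2.phaseCorrelate(
        np.float32(base), np.float32(shifted), window
    )

    assert abs(est_dx - dx) < 0.5, f"x-shift estimate off: {est_dx}"
    assert abs(est_dy - dy) < 0.5, f"y-shift estimate off: {est_dy}"


if __name__ == "__main__":
    test_registration_recovers_known_shift()
    print("registration self-check passed")
```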
9. How the ground truth for the training set was established
Not applicable/Not provided. No training set requiring ground truth establishment is described.