510(k) Data Aggregation (90 days)
KOWA DR-1a
The KOWA DR-1a is an ocular surface interferometer, which is an ophthalmic imaging device that is intended for use by a physician in adult patients to observe and record a video image of specular (interferometric) observations of the tear film, which can be visually monitored and photographically documented.
The KOWA DR-1a is an ocular surface interferometer, which is an ophthalmic imaging device that is intended for use by a physician in adult patients to observe and record a video image of specular (interferometric) observations of the tear film, which can be visually monitored and photographically documented. The instrument is intended to observe and record a video of the interference image produced by illuminating the tear film layer with white light. Users can replay the recorded video on the instrument's monitor to observe the condition of the tear film layer. In addition, using the image currently being replayed, users can measure a duration by specifying two different points in time. The instrument also has a function that allows users to output a video, or a still image clipped from the video, to an external personal computer.
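For context on the duration-measurement function just described: given two user-selected points in the replayed video, the duration is simple arithmetic over the video timeline. Below is a minimal sketch, assuming a fixed frame rate and frame-indexed selection; the function and parameter names are hypothetical, since the submission does not describe the instrument's software internals.

```python
def duration_between_frames(frame_a: int, frame_b: int, fps: float = 30.0) -> float:
    """Duration in seconds between two user-specified frames of a replayed video.

    Hypothetical illustration of the 'measure a duration by specifying two
    points in time' function; the DR-1a's actual frame rate and user interface
    are not documented in the 510(k) summary.
    """
    if fps <= 0:
        raise ValueError("fps must be positive")
    return abs(frame_b - frame_a) / fps


# Example: frames 45 and 195 at an assumed 30 fps -> 5.0 seconds
print(duration_between_frames(45, 195))
```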
Here's an analysis of the acceptance criteria and the study demonstrating that the device meets them, based on the provided text and organized according to your requests.
Important Note: The provided document is a 510(k) summary, which focuses on demonstrating substantial equivalence to a predicate device for regulatory clearance. It is not a detailed clinical study report designed to prove the clinical accuracy or effectiveness of an AI algorithm in the specific ways you requested (e.g., MRMC study, standalone performance, expert consensus as ground truth for an AI model). The KOWA DR-1a is an ophthalmic camera, and the performance data presented primarily cover its physical and optical characteristics relative to the predicate, along with safety, electrical, and biocompatibility testing. There is no mention of an AI component in this device or its performance evaluation. Therefore, many of your questions related to AI performance, such as sensitivity, specificity, MRMC studies, or training/test set ground truth, cannot be answered from this document.
Acceptance Criteria and Device Performance (Based on Device Characteristics and Comparative Testing, Not AI Performance)
1. Table of acceptance criteria and the reported device performance
Since this is a substantial equivalence submission for an ophthalmic camera and not an AI algorithm, the "acceptance criteria" are generally that the new device is as safe and effective as the predicate, with no new safety or effectiveness concerns. The performance data presented focuses on physical and optical characteristics.
| Acceptance Criteria (Implied / Comparator) | Reported Device Performance (KOWA DR-1α) |
| --- | --- |
| Illumination area (similar to predicate) | Wide type: diameter 8.0 mm, height 7.2 mm. Narrow type: width 3.4 mm, height 2.5 mm. Test results indicated an illumination area similar to the LipiView. |
| Image resolution (at least comparable to predicate) | Narrow type: 45.3 line pairs/mm. Wide type: 18.0 line pairs/mm. Test results indicated higher image resolution than the LipiView. |
| Interference image (similar to predicate) | Test results indicated an interference image similar to the LipiView. |
| Repeatability of hue values (within acceptance range) | Measured hue values did not exceed the acceptance range (tested with three units and three examiners, 5 repetitions each). |
| Optical radiation safety (compliance with ANSI Z80.36-2016) | Classified in Group 1 of continuous-wave instruments; complies with ANSI Z80.36-2016. |
| EMC and electrical safety (compliance with IEC 60601-1-2:2007 and IEC 60601-1:2012) | Confirmed in accordance with the specified IEC standards. |
| Biocompatibility (compliance with ISO 10993-1:2009 and FDA guidance; no adverse reactions) | Cytotoxicity, sensitization, and irritation tests conducted on the forehead and chin rests according to ISO 10993-1:2009. |
| Software validation (compliance with FDA guidance) | Software validated according to FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices." |
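To put the resolution figures above in physical terms: one line pair comprises one light and one dark line, so a resolution of N line pairs/mm implies a smallest resolvable line width of 1/(2N) mm. A quick worked check of the table's values (standard optics arithmetic, not taken from the submission):

```python
# Smallest resolvable line width implied by a resolution in line pairs/mm:
# one line pair = one light + one dark line, so width = 1 / (2 * lp_per_mm) mm.
for label, lp_per_mm in [("Narrow type", 45.3), ("Wide type", 18.0)]:
    width_um = 1000.0 / (2.0 * lp_per_mm)  # convert mm to micrometres
    print(f"{label}: {lp_per_mm} lp/mm -> ~{width_um:.1f} um per line")
# Narrow type: ~11.0 um per line; Wide type: ~27.8 um per line
```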
2. Sample size used for the test set and the data provenance
- Sample Size for Comparative Testing: Not explicitly stated in terms of patient numbers. The performance comparison states only that a "Test was performed to evaluate performance of the KOWA DR-1a compared to the LipiView regarding Illumination area, Image resolution and Interference image." The document does not specify whether this involved human subjects or only bench measurements of the devices.
- Sample Size for Repeatability: "The repeatability was investigated by comparing the hue values measured with three units of DR-1α and by three examiners. Each measurement was repeated 5 times." This refers to device measurements, not a patient test set (see the sketch after this list).
- Data Provenance: The document is a 510(k) submission from Kowa Company, Ltd. in Japan. The testing described would typically be conducted by the manufacturer in a controlled environment. The document does not specify a country of origin for any "test set," and given that this is a device performance study for substantial equivalence, it is unlikely to have involved large-scale retrospective or prospective patient data in the way an AI algorithm study would.
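The repeatability protocol above (three units × three examiners × 5 repetitions, with hue values compared against an acceptance range) maps onto a simple acceptance check. Below is a minimal sketch, assuming scalar hue values and a min/max interval; the actual acceptance range and the way hue is computed are not stated in the summary.

```python
from itertools import product


def within_acceptance(measurements: dict[tuple[str, str], list[float]],
                      lo: float, hi: float) -> bool:
    """True if every repeated hue value, for every (unit, examiner) pair,
    lies inside the acceptance range [lo, hi].

    `measurements` maps (unit, examiner) -> repeated hue values. The interval
    [lo, hi] is hypothetical; the summary states only that measured values
    "did not exceed the acceptance range".
    """
    return all(lo <= v <= hi
               for values in measurements.values()
               for v in values)


# Example layout mirroring the study: 3 units x 3 examiners x 5 repetitions
units = ["unit1", "unit2", "unit3"]
examiners = ["A", "B", "C"]
data = {(u, e): [120.0, 121.5, 119.8, 120.7, 120.2]  # placeholder hue values
        for u, e in product(units, examiners)}
print(within_acceptance(data, lo=115.0, hi=125.0))  # True
```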
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This device is an ophthalmic camera. The performance tests described relate to its physical and optical characteristics (e.g., image resolution, illumination, repeatability of hue values) and safety. There is no mention of "ground truth" in the context of clinical interpretation by experts because the device itself does not provide a diagnosis or interpretation that would require clinical ground truth for validation in this submission.
- For the repeatability test, "three examiners" were used. Their qualifications are not specified, but they would likely be trained operators or engineers.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Adjudication methods like 2+1 or 3+1 are typically used in clinical studies where multiple human readers interpret medical images and their consensus or a tie-breaker establishes a "ground truth" for disease presence/absence or findings.
- Not applicable for the device performance tests described in this 510(k) summary, as it does not involve clinical interpretation or a "test set" requiring adjudication of clinical findings.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?
- An MRMC study is designed to evaluate the impact of a new technology (like AI) on physician performance.
- No MRMC study was performed or needed for this 510(k) submission, as the KOWA DR-1a is an ophthalmic camera and the document does not indicate it incorporates an AI component influencing clinical decision-making or diagnosis. The focus is on the device's ability to capture images.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
- This question pertains to the performance of an AI algorithm independent of human interaction (e.g., its sensitivity, specificity, AUC for a diagnostic task).
- Not applicable. The KOWA DR-1a is an ophthalmic camera. The document does not describe any standalone AI algorithm for interpretation or diagnosis.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- As explained above, the "ground truth" concept (e.g., for disease diagnosis) is not relevant to the performance tests reported here.
- The reported performance tests rely on technical measurements and compliance with established standards:
- Illumination area and image resolution are measured specifications.
- Repeatability is assessed against an "acceptance range" for hue values (an objective measurement).
- Safety checks (optical radiation, EMC, electrical, biocompatibility) are assessed against specific industry and regulatory standards.
8. The sample size for the training set
- This question applies to AI/machine learning models.
- Not applicable. This document is for the clearance of an ophthalmic camera and does not describe the development or validation of an AI algorithm, thus there is no "training set."
9. How the ground truth for the training set was established
- This question applies to AI/machine learning models.
- Not applicable. As there's no mention of an AI model or training set, this information is not provided.