Search Results
Found 5 results
510(k) Data Aggregation
(217 days)
Manufacturer: Luneau SAS; 510(k) exempt; Regulation Number: 886.1760; Product Code: HKO
The VX120 is a multi-function diagnostic device combining wavefront aberometer, corneal topographer, retro-illuminator, tonometer and pachymeter, indicated for:
Measuring the refraction of the eye giving both lower and higher order aberrations
Measuring the shape of the cornea
Retro-illumination imaging of the eye
Measuring the intraocular pressure without contacting the eye for glaucoma evaluation
Photographing and imaging the eye to evaluate the thickness of the cornea.
The VX120 is a multifunctional ophthalmic diagnostic device.
The VX120 combines a wavefront aberrometer, corneal topographer, retro-illumination device, Scheimpflug pachymeter, and non-contact tonometer in a single platform containing five different measurement units.
The wavefront aberrometer works on the Shack-Hartmann principle and is used as an advanced autorefractometer that measures both lower and higher order aberrations of the refraction of the eye.
Retro illumination is used to image ocular opacities.
The corneal topographer uses a Placido disk to measure keratometry and the detailed shape of the cornea.
The Scheimpflug pachymeter measures the thickness of the central cornea by illuminating it with a slit of light and photographing it using the Scheimpflug technique. An air puff non-contact tonometer is included for measurement of the intraocular pressure.
The device is fully automated and a number of different measurements can be performed by a single command including alignment and focusing.
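The Shack-Hartmann measurement described above can be sketched in a few lines: each lenslet focuses a patch of the incoming wavefront to a spot, and the spot's displacement from its reference position is proportional to the local wavefront slope (slope ≈ displacement / lenslet focal length). A minimal one-dimensional illustration with assumed geometry and invented displacements, not the VX120's actual reconstruction:

```python
# Illustrative Shack-Hartmann slope calculation (assumed geometry,
# not the VX120's actual algorithm).
F_LENSLET = 5e-3   # assumed lenslet focal length: 5 mm
PITCH = 0.2e-3     # assumed lenslet pitch: 0.2 mm

def local_slopes(displacements, f=F_LENSLET):
    """Convert spot displacements (m) to local wavefront slopes (rad):
    a spot shifted by dx behind a lenslet of focal length f implies
    a tilt of dx / f over that lenslet's aperture."""
    return [dx / f for dx in displacements]

def integrate_wavefront(slopes, pitch=PITCH):
    """Running-sum integration of slopes along one lenslet row,
    giving wavefront height (m) at each lenslet boundary."""
    heights = [0.0]
    for s in slopes:
        heights.append(heights[-1] + s * pitch)
    return heights

# Invented spot displacements across one row of lenslets (microns -> m):
dx = [d * 1e-6 for d in (0.0, 1.0, 2.0, 3.0, 4.0)]
slopes = local_slopes(dx)
wf = integrate_wavefront(slopes)
```

A linearly increasing slope integrates to a quadratic wavefront, i.e. pure defocus in this 1-D picture; a real aberrometer fits the full 2-D slope field to Zernike polynomials rather than using a running sum.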
The provided text describes the 510(k) summary for the VX120 Ophthalmic Diagnostic Device, focusing on its tonometry and pachymetry functions for substantial equivalence to predicate devices. It outlines performance data including bench and clinical testing.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based only on the provided text:
Acceptance Criteria and Reported Device Performance
The acceptance criteria are implicitly derived from the performance claims and the standards the device was tested against. The text explicitly states that the device meets the requirements of the specified standards.
| Acceptance Criterion (Implicit/Explicit from Text) | Reported Device Performance |
|---|---|
| Tonometry Function: | |
| Accuracy (against predicate/standard) | Equivalent to ±2mmHg or better (bench testing) |
| Repeatability (standard deviation) | ±1.2 mmHg or better (bench testing) |
| Compliance with ISO8612:2010 | Met all requirements of the standard when eyes with astigmatism > 3 D are excluded (clinical evaluation) |
| Compliance with ANSI Z80.10-2009 | Met all requirements of the standard when eyes with astigmatism > 3 D are excluded (clinical evaluation) |
| Pachymetry Function: | |
| No significant statistical difference in CCT measurements compared to Pentacam | No significant statistical difference between measurements of CCT with VX120 and the Pentacam (Comparison study). |
| General Device Performance: | |
| Electrical safety | Complies with IEC60601-1:2006 |
| EMC compatibility | Complies with IEC60601-1-2 |
| Software verification and validation | Done according to IEC62304 |
| Risk management | Evaluated according to ISO14971: 2009; all risks reduced to safe levels. |
| Ophthalmic product testing | Evaluated in accordance with ISO15004-1:2009 and ISO15004-2:2007; met all requirements. |
| Optical hazards | Evaluated in accordance with IEC60825-1: 2008; VX120 is laser class 1. |
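The tonometry limits in the table above (accuracy equivalent to ±2 mmHg, repeatability standard deviation ≤ 1.2 mmHg) can be expressed as a simple bench-test check. The thresholds come from the table; the sample readings are invented for illustration:

```python
import statistics

ACCURACY_LIMIT_MMHG = 2.0       # from the acceptance table: +/-2 mmHg
REPEATABILITY_SD_MMHG = 1.2     # from the acceptance table: SD <= 1.2 mmHg

def passes_bench_test(readings, reference):
    """Check repeated IOP readings (mmHg) against the stated limits:
    mean error within +/-2 mmHg of the reference value, and sample
    standard deviation no greater than 1.2 mmHg."""
    mean_error = statistics.mean(readings) - reference
    sd = statistics.stdev(readings)
    return abs(mean_error) <= ACCURACY_LIMIT_MMHG and sd <= REPEATABILITY_SD_MMHG

# Invented example: ten readings against a 20 mmHg reference eye model.
readings = [19.5, 20.1, 20.8, 19.9, 20.3, 19.7, 20.5, 20.0, 19.8, 20.2]
print(passes_bench_test(readings, reference=20.0))  # -> True
```

This is only a sketch of how such limits are checked; the submission does not describe the manufacturer's actual test protocol.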
Here's a breakdown of the study details based on the provided text:
2. Sample size used for the test set and the data provenance:
- Tonometry: The text mentions "clinical evaluation" and "bench testing." It does not specify the sample size for either.
- Pachymetry: The text mentions a "comparison study" with Pentacam. It does not specify the sample size for this study.
- Data Provenance: The text does not explicitly state the country of origin of the data or whether the studies were retrospective or prospective. The manufacturer, Luneau SAS, is based in France. The predicates are from the UK (Keeler) and Germany (Oculus), implying international standards and potentially international data, but this is not confirmed.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document describes performance testing against standards (ISO, ANSI) and predicate devices. It does not mention the use of human experts or human readers to establish ground truth for the test sets for tonometry or pachymetry performance. The "ground truth" for tonometry appears to be established by the performance of the predicate device and the specified accuracy and repeatability limits from the standards. For pachymetry, the ground truth is implicitly the measurements from the Pentacam predicate.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not applicable, as ground truth was not established by human experts requiring adjudication.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:
- No MRMC or human-in-the-loop study comparing human readers with and without AI assistance is described. The device is referred to as a "diagnostic device," but the performance studies focus on its intrinsic measurement accuracy and comparison to predicate devices, not its assistance to human interpretation or diagnosis.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- Yes, the performance data presented (bench testing, clinical evaluation against standards, and comparison studies for tonometry and pachymetry) appear to represent the standalone performance of the VX120 device/algorithm. The focus is on the device's output accuracy and consistency.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- Tonometry: The ground truth for the tonometry function appears to be established by:
- Benchmarking against industry standards: ISO8612:2010 and ANSI Z80.10-2009 for clinical evaluation, and internal specifications for accuracy (±2mmHg) and repeatability (±1.2mmHg) from bench testing.
- Comparison to predicate device performance: The "Pulsair tonometer" is also stated to have an accuracy of ±2 mmHg.
- Pachymetry: The ground truth for the pachymetry function is established by comparison to a legally marketed predicate device, the Pentacam, noted as demonstrating "no significant statistical difference."
8. The sample size for the training set:
- The document is a 510(k) summary, not a detailed technical report on the algorithm development. It does not mention a "training set" or specify its size. The device relies on established physical measurement principles (Shack-Hartmann, Placido disk, Scheimpflug, air puff tonometry) rather than machine learning that typically requires training data.
9. How the ground truth for the training set was established:
- Not applicable, as a "training set" is not mentioned or implied for this type of device and its described validation. The device's operation is based on fundamental physics and established optical/measurement techniques rather than trained algorithms in the sense of AI/machine learning.
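The pachymetry claim above ("no significant statistical difference" in CCT between the VX120 and the Pentacam) is the kind of result typically backed by a paired t-test on the same eyes measured with both devices. A minimal sketch with invented paired CCT readings in microns; the data and the 5% critical value are illustrative only:

```python
import math
import statistics

def paired_t_statistic(a, b):
    """Paired t-statistic for two equal-length measurement series:
    t = mean(differences) / (sd(differences) / sqrt(n))."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)
    return mean_d / (sd_d / math.sqrt(len(diffs)))

# Invented central corneal thickness readings (microns), same 8 eyes:
vx120    = [541, 552, 548, 560, 537, 555, 549, 544]
pentacam = [543, 550, 549, 558, 539, 554, 551, 543]

t = paired_t_statistic(vx120, pentacam)
# Two-sided 5% critical value for 7 degrees of freedom is about 2.365;
# |t| below that would be read as "no significant difference".
print(round(t, 3), abs(t) < 2.365)
```

The actual study's sample size and statistical method are not given in the summary, so this illustrates the concept only.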
(41 days)
Re: K062930
Trade/Device Name: LADARWave® CustomCornea® Wavefront System
Regulation Number: 21 CFR 886.1760
The LADARWave® CustomCornea® Wavefront System is used for measuring, recording, and analyzing visual aberrations (such as myopia, astigmatism, coma and spherical aberration) and for displaying refractive error maps of the eye to assist in prescribing refractive corrections.
This device is enabled to export wavefront data and associated anatomical registration information to a compatible treatment laser where a wavefront-guided treatment is indicated.
This device can also export centration data and associated registration information for conventional phoropter-based refractive surgery with the LADAR6000™ System.
The LADARWave® CustomCornea® Wavefront System is an aberrometer, utilizing Hartmann-Shack wavefront sensing to measure the aberrations in the human eye. The device contains four major optical subsystems used in the clinical wavefront examination.
- A fixation subsystem provides the patient with an unambiguous point of fixation. Optics in this path automatically adjust to correct for the patient's spherocylindrical error so that the target is clearly observed.
- A video subsystem provides the device operator with a view of the eye at the measurement plane. The operator uses the video imagery to position the eye for the measurement and to record the geometry of the wavefront relative to anatomical features.
- A probe beam subsystem directs a narrow beam of eye-safe infrared radiation into the eye to generate the re-emitted wavefront.
- A wavefront detection subsystem images the re-emitted wavefront onto the entrance face of the Hartmann-Shack wavefront sensor.
These subsystems are all under control of the device software.
Once the wavefront examination is complete, the operator may export the exam data to removable media. The exported electronic file contains all information necessary to perform customized ablative surgery using either the LADARVision®4000 or LADAR6000™ System. The information includes the wavefront measurement, essential patient identification information, and geometric registration data. The electronic file is in a proprietary format and is encrypted so that wavefront data cannot be exported for use by an incompatible treatment device.
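The export step described above (a proprietary, encrypted file tying wavefront data to patient and registration information so an incompatible device cannot use it) can be illustrated in spirit with a much simpler keyed-digest scheme. This demonstrates the compatibility/integrity idea only; the field names, key, and HMAC approach are hypothetical, not Alcon's actual format:

```python
import hashlib
import hmac
import json

SHARED_KEY = b"demo-key"  # hypothetical key shared with the treatment laser

def pack_exam(wavefront, patient_id, registration):
    """Serialize an exam record and append a keyed digest so a receiving
    device can reject files not produced by a compatible system."""
    payload = json.dumps(
        {"wavefront": wavefront, "patient": patient_id, "registration": registration},
        sort_keys=True,
    ).encode()
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return payload, tag

def verify_exam(payload, tag):
    """Recompute the digest and compare in constant time."""
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

payload, tag = pack_exam([0.1, -0.05, 0.02], "P-0001",
                         {"x": 0.2, "y": -0.1, "torsion": 1.5})
print(verify_exam(payload, tag))         # True
print(verify_exam(payload + b" ", tag))  # False: altered payload rejected
```

A real system would encrypt the payload as well; authentication alone is shown here for brevity.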
Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided text:
Acceptance Criteria and Device Performance
| Acceptance Criteria | Reported Device Performance |
|---|---|
| The accuracy in translational (x,y) and cyclotorsional registration for the "overlay" method must be at least as good as with the current "sputnik" method to establish equivalence of the new automated registration process. | The results demonstrate that the "overlay" method is as good or better than the "sputnik" method in offset registration and rotational registration at all values of registration error (i.e. at all x-axis values). Additionally, system level software verification and validation were successfully completed in accordance with General Principles of Software Validation; Final Guidance for Industry and FDA Staff dated January 11, 2002. |
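The equivalence claim in the table ("as good or better ... at all values of registration error") amounts to a pointwise comparison of the two methods' error curves. A sketch of that acceptance check with invented data; the error values and tolerance are illustrative, not the submission's:

```python
def at_least_as_good(overlay_err, sputnik_err, tolerance=0.0):
    """True if the overlay method's registration error is no worse than
    the sputnik method's at every tested nominal-error level."""
    return all(o <= s + tolerance for o, s in zip(overlay_err, sputnik_err))

# Invented mean absolute registration errors (mm) at five nominal offsets:
nominal_offsets = [0.0, 0.25, 0.5, 0.75, 1.0]
sputnik = [0.08, 0.10, 0.12, 0.15, 0.18]
overlay = [0.06, 0.09, 0.11, 0.13, 0.16]

print(at_least_as_good(overlay, sputnik))  # -> True
```

The same check would be run separately for translational and cyclotorsional error.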
Study Details:
2. Sample size used for the test set and the data provenance:
- The text states, "For each eye, one LADARWave wavefront session (centration photo and five wavefront measurements with associated photos) with eye marks and a session without eye marks were taken for the testing." This implies that multiple eyes were used, but a specific number is not provided.
- Data Provenance: Not explicitly mentioned (e.g., country of origin). The study appears to be retrospective or a controlled prospective study specifically for this submission, as it compares two methods ("sputnik" and "overlay") on the same eyes.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- This information is not provided in the document. The method of "comparing" the two registration methods ("sputnik" and "overlay") suggests a quantitative assessment rather than expert consensus for ground truth on registration accuracy directly.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- This information is not provided in the document. The comparison seems to be based on direct measurement of registration accuracy between the two methods rather than an adjudication process involving human reviewers.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance:
- No, an MRMC comparative effectiveness study was not explicitly mentioned or described. The study focused on comparing two registration methods (manual "Sputnik" vs. automated "Overlay") for the device itself, rather than evaluating human reader performance with or without AI assistance. The "overlay" method is described as an "assisted" registration method, suggesting it might involve some automation, but the study design is not an MRMC study comparing human performance.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The "overlay" method is described as an "assisted" registration method, implying some level of automation. The testing compared this "overlay" method to the "current manual 'Sputnik' method." The study's focus was on establishing the equivalence of the automated/assisted registration process, which is a standalone function of the software, to the existing manual method. Therefore, a standalone performance assessment of the "overlay" method's accuracy in registration, compared to a manual method, was performed.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- The ground truth for registration accuracy appears to be derived from direct comparisons between the outcomes of the "sputnik" and "overlay" methods. The statement "The primary test was to establish equivalence of the two methods of registration by testing that the accuracy in translational (x,y) and cyclotorsional registration for the 'overlay' method are at least as good as with the current 'sputnik' method" suggests that the "sputnik" method provides a baseline or reference against which the "overlay" method's accuracy is measured. This implies a comparative ground truth derived from a reference method rather than an independent expert consensus, pathology, or outcomes data.
8. The sample size for the training set:
- This information is not provided in the document. The document describes testing the automated registration process, but does not detail the development or training of the "overlay" software.
9. How the ground truth for the training set was established:
- This information is not provided in the document. Since the training set size is not mentioned, the method for establishing its ground truth is also absent.
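The quantities compared in this section — translational (x, y) offset and cyclotorsional (rotational) registration error between eye-mark positions — can be estimated from matched point pairs with a standard 2-D rigid fit: the centroids give the translation, and the rotation comes from the summed cross and dot products of the centered points. This is a generic Procrustes/Kabsch-style method, not the LADARWave implementation:

```python
import math

def rigid_fit_2d(src, dst):
    """Least-squares translation (tx, ty) and rotation (degrees)
    mapping matched 2-D points src -> dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy      # centered source point
        bx, by = dx - cdx, dy - cdy      # centered destination point
        num += ax * by - ay * bx         # cross terms -> sin(theta)
        den += ax * bx + ay * by         # dot terms  -> cos(theta)
    theta = math.atan2(num, den)
    # Translation maps the rotated source centroid onto the destination centroid.
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return tx, ty, math.degrees(theta)

# Invented eye-mark positions (mm): destination is the source rotated
# 2 degrees about the origin, then shifted by (0.3, -0.1).
ang = math.radians(2.0)
src = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
dst = [(x * math.cos(ang) - y * math.sin(ang) + 0.3,
        x * math.sin(ang) + y * math.cos(ang) - 0.1) for x, y in src]
tx, ty, deg = rigid_fit_2d(src, dst)
print(round(tx, 6), round(ty, 6), round(deg, 6))  # ~0.3, ~-0.1, ~2.0
```

Comparing the (tx, ty, rotation) recovered by each registration method against the known applied offset is one way the "accuracy at each nominal error" comparison could be quantified.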
(195 days)
Classification: Class I, 886.1760
Lake Forest, CA 92630
Re: K050336
Trade/Device Name: OPD-Station Software
Regulation Number: 21 CFR 886.1760
The OPD-Station software is indicated for use in analyzing the corneal shape and refractive powers measured by the OPD-Scan Models ARK-9000 or ARK-10000, and to display the data in the form of maps, and manage the data.
Nidek has developed a stand-alone software option for users of the OPD-Scan™ device called OPD-Station, which will run on an independent PC (i.e., separate from the OPD-Scan™ device). The OPD-Station software is able to access data measured by the OPD-Scan™ device via a separate Nidek data management software package called NAVIS.
The OPD-Station uses the same software as that used for the OPD-Scan device so that a physician can view OPD-Scan data on their PC of choice. However, the OPD-Station software has the following new functions:
- Maps of Point Spread Function (PSF), Modulation Transfer Function (MTF), MTF graphing, and Visual Acuity mapping
- Improved color mapping
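The relationship behind the new PSF and MTF maps: the MTF is the modulus of the Fourier transform of the PSF, normalized to 1 at zero spatial frequency. A 1-D sketch with a hand-rolled DFT and invented sampling, not OPD-Station's actual computation:

```python
import cmath

def mtf_from_psf(psf):
    """MTF = |DFT(psf)|, normalized so MTF at zero frequency equals 1."""
    n = len(psf)
    spectrum = []
    for k in range(n):
        acc = sum(p * cmath.exp(-2j * cmath.pi * k * i / n)
                  for i, p in enumerate(psf))
        spectrum.append(abs(acc))
    dc = spectrum[0]
    return [s / dc for s in spectrum]

# A wider PSF (more blur) should roll off faster in the MTF.
narrow = [0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # near-ideal spot
wide   = [0.0, 0.5, 1.0, 0.5, 0.0, 0.0, 0.0, 0.0]   # blurred spot
mtf_narrow = mtf_from_psf(narrow)
mtf_wide = mtf_from_psf(wide)
print(mtf_narrow[1] > mtf_wide[1])  # -> True: blur lowers contrast transfer
```

Real implementations work on 2-D PSFs and use an FFT, but the normalization and the blur-versus-contrast relationship are the same.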
The provided text describes the Nidek Incorporated OPD-Station software, which is a standalone software option for users of the OPD-Scan™ device. The software analyzes corneal shape and refractive powers measured by the OPD-Scan Models ARK-9000 or ARK-10000, displaying the data in various maps and managing the data.
Here's an analysis of the acceptance criteria and the study that proves the device meets them, based on the provided text:
1. A table of acceptance criteria and the reported device performance
The provided document is a 510(k) summary, which focuses on demonstrating substantial equivalence to predicate devices rather than setting specific performance acceptance criteria like sensitivity, specificity, or accuracy metrics. The primary "acceptance criterion" implied throughout this document is substantial equivalence to the predicate devices.
| Acceptance Criterion | Reported Device Performance |
|---|---|
| Substantial Equivalence to Predicate Devices | The OPD-Station software uses the same software as the OPD-Scan and has new functions (PSF, MTF maps, improved color mapping). A comparison of technological characteristics was performed, demonstrating equivalence to marketed predicate devices. The performance data indicate the OPD-Station software meets all specified requirements and is substantially equivalent. |
2. Sample size used for the test set and the data provenance
The document does not specify any sample size used for a test set (clinical or otherwise) or the data provenance (e.g., country of origin, retrospective/prospective). The performance data mentioned is described very generally as indicating "all specified requirements" without detailing the nature of this data or how it was gathered.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The document does not mention any experts used to establish ground truth or their qualifications. Given that this is a 510(k) for software intended to display and manage existing data from an already cleared device (OPD-Scan), the focus is on the software's functionality and its output being consistent with the predicate device's capabilities, rather than a clinical accuracy study requiring expert adjudication of a test set.
4. Adjudication method for the test set
The document does not describe any adjudication method.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
An MRMC comparative effectiveness study was not conducted or mentioned. The OPD-Station software is not described as an AI-assisted device directly improving human reader performance but rather as a tool for analyzing and displaying existing data from another ophthalmic device.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
The document is about a standalone software ("OPD-Station software") that operates independently on a PC to analyze and display data from the OPD-Scan device. While it's standalone software, the performance described is its ability to access and process data from the OPD-Scan and display it; it's not an "algorithm only" performance in the sense of an AI algorithm making diagnostic decisions without human involvement. The software itself is the "device" in question operating independently.
7. The type of ground truth used
The document does not specify the type of ground truth used. The verification process appears to rely on comparing the new software's functionality and output with that of the predicate devices. This implies that the "ground truth" for demonstrating equivalence would be the established functionality and output of the cleared predicate devices.
8. The sample size for the training set
The document does not mention a training set or its sample size. This suggests that the software development did not involve machine learning or AI models that require training sets. The software's design likely follows deterministic algorithms based on established ophthalmic principles and data processing techniques from the original OPD-Scan.
9. How the ground truth for the training set was established
As there is no mention of a training set, there is no information on how its ground truth would have been established.
(189 days)
Regulation Number: 21 CFR 886.1760
Trade/Device Name: Topcon Model BV-1000 Automated Subjective Refraction System
The Topcon Model BV-1000 Automated Subjective Refraction System provides sphere, cylinder, and axis measurements of the eye. The BV-1000 assists the eyecare professional in evaluating pre and post operative eye procedures and is used as an aid in prescribing eyeglasses and contact lenses.
The Topcon Model BV-1000 is a safe and effective instrument. In essence, it is a combination of three Class I devices:
- Ophthalmic Refractometer ... an AC-powered device that consists of a fixation system, a measurement and recording system, and an alignment system.
- Visual Acuity Chart ... a device with a Landolt "C" chart in graduated sizes to test visual acuity.
- Ophthalmic Motorized Refractor ... a device that incorporates a set of lenses of various dioptric powers intended to measure the refractive power of the eye.
The BV-1000 is designed to perform binocular, simultaneous auto-refraction. It incorporates subjective refinement steps after the objective measurements have been obtained. The BV-1000 reduces the amount of time that eyecare professionals need to spend in refracting their patients as a substantial portion of the traditional refraction can be accomplished in the "pre test" room.
The provided document is a 510(k) summary for the Topcon Model BV-1000 Automated Subjective Refraction System. Based on the information available, a detailed description of acceptance criteria and the study proving it is not present in the typical format of an AI/ML device study.
This document describes a medical device from 2003, which predates the widespread regulatory framework for AI/ML medical devices. Therefore, the information requested, particularly regarding AI-specific criteria like training sets, ground truth establishment for AI, MRMC studies for AI assistance, and standalone AI performance, will not be found in this document.
However, I can extract information related to the device's performance specifications and how it was compared to predicate devices, which serves as a form of "acceptance criteria" and "study" in the context of a 510(k) submission from that era.
Here's the breakdown of what can be inferred and what is not available based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document doesn't explicitly state "acceptance criteria" as pass/fail thresholds against which the device was tested. Instead, it demonstrates substantial equivalence by comparing the BV-1000's performance specifications, particularly its measurement ranges, with those of legally marketed predicate devices. The implicit acceptance criteria are that the device's performance falls within a comparable and safe range to these predicates.
| Parameter | Acceptance Criteria (Implied by Predicate Devices) | Reported Device Performance (Topcon BV-1000) |
|---|---|---|
| Sphere (S) | Predicate device ranges vary from -28.00D to +20.00D down to -12.00D to +20.00D. | Objective Mode: -25.00D to +22.00D; Subjective Mode: -18.00D to +18.00D |
| Cylinder (C) | Predicate devices range from 0 to -7.75D, 0 to ±8.00D, 0 to ±6.00D, 0 to ±7D. | Objective Mode: 0.00D to -8.00D; Subjective Mode: 0.00D to -8.00D |
| Axis (A) | Predicate devices all state 0° to 180°. | Objective Mode: 1° to 180°; Subjective Mode: 1° to 180° |
| Refraction Method | Predicate devices use various methods like Manual Retinoscopy, Built-In Rotary Prism, Built-In Continuously Variable Sphere & Cylinder. | Objective Refraction: Built-In Rotary Prism; Subjective Refraction: Landolt Charts, Jackson Cross Cylinder (after objective measurements have been obtained) |
| Illumination | Predicate devices use Halogen, 680nm LED, Tungsten. | Objective Refraction: 680nm LED; Subjective Refraction: Tungsten |
| Test Types | Predicate devices mention Snellen Charts, Jackson Cross Cylinder, Simulcross Cross Cylinder, Presbyopic Charts. | Subjective Refraction: Landolt Charts; Jackson Cross Cylinder |
| Type of Refraction | Predicate devices offer Objective and Subjective Refraction. | Performs binocular, simultaneous auto-refraction and incorporates subjective refinement steps after objective measurements. |
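The measurement ranges in the table can be read as simple validity bounds on a refraction result. A sketch using the BV-1000's objective-mode ranges as stated above (the helper function itself is hypothetical):

```python
# Objective-mode measurement ranges taken from the comparison table.
SPHERE_RANGE = (-25.00, 22.00)   # diopters
CYLINDER_RANGE = (-8.00, 0.00)   # diopters (minus-cylinder convention)
AXIS_RANGE = (1, 180)            # degrees

def in_objective_range(sphere, cylinder, axis):
    """True if a sphere/cylinder/axis triple falls inside the BV-1000's
    stated objective measurement ranges."""
    return (SPHERE_RANGE[0] <= sphere <= SPHERE_RANGE[1]
            and CYLINDER_RANGE[0] <= cylinder <= CYLINDER_RANGE[1]
            and AXIS_RANGE[0] <= axis <= AXIS_RANGE[1])

print(in_objective_range(-3.25, -0.75, 90))   # -> True: typical myopic astigmat
print(in_objective_range(-27.00, -0.75, 90))  # -> False: sphere out of range
```

Substantial equivalence in the 510(k) rests on these ranges being comparable to the predicates', not on any pass/fail check like this one.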
2. Sample size used for the test set and the data provenance
- Sample Size: Not explicitly stated. The document focuses on performance specifications and comparison to predicates, not a clinical trial with a specific patient sample size for testing.
- Data Provenance: Not specified. Given it's a 510(k) from 2003, such details were often less rigorously documented in the summary unless critical for equivalence demonstrations (e.g., specific clinical study data if equivalence was not clear from technological comparison). It's likely based on internal testing and engineering assessments rather than a large clinical test set described in this summary.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- This information is not available and is not relevant for this type of device and submission from this era. "Ground truth" in the context of refractive measurements is often established by established clinical methods (e.g., best-corrected visual acuity determined by an optometrist/ophthalmologist) or comparison to existing gold-standard devices. Experts would be involved in designing and interpreting the performance data, but not typically in the "ground truth" establishment as understood in AI/ML validation studies.
4. Adjudication method for the test set
- Not applicable/Not available. Adjudication methods are typically associated with resolving discrepancies in expert labeling or diagnoses, especially in AI/ML studies. This document doesn't describe a study design that would necessitate such a method.
5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of human reader improvement with AI vs. without AI assistance
- No, this device is not an AI-assisted device. Therefore, an MRMC study comparing human readers with and without AI assistance was not conducted and is not applicable. The device's purpose is to assist eyecare professionals by providing objective measurements and streamlining subjective refinement, not to provide AI diagnostics or interpretations.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not applicable. This device is a piece of hardware that takes measurements. It is not an algorithm designed to perform diagnostics "stand-alone." Its output (sphere, cylinder, axis measurements) still requires interpretation and use by an eyecare professional. The "Automated Subjective Refraction System" name implies it automates parts of the subjective refraction process, but the human is still in the loop for the overall prescription.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- For a device like this, ground truth would implicitly be established through comparison to established clinical methods and predicate devices. For example, the accuracy of its refractive measurements would be compared against the results obtained by experienced clinicians using traditional phoropters or other auto-refractors considered gold standards at the time. The document doesn't detail the specific ground truth process but relies on the device producing measurements within expected clinical ranges, comparable to predicates.
8. The sample size for the training set
- Not applicable/Not available. This is not an AI/ML device, so there is no "training set" in the sense of data used to train a machine learning model. The device's "training" would be its engineering design, calibration, and validation against physical measurement standards and clinical performance expectations.
9. How the ground truth for the training set was established
- Not applicable. As a non-AI/ML device, there is no "training set ground truth" as understood in current AI/ML terminology.
(18 days)
Primary Classification Name: Ophthalmic Diagnostic Device (Refractometer), Class I, 886.1760
Some such devices measure the refractive power of the eye by measuring light reflexes from the retina (product code HKO, 886.1760).
K023249
Trade/Device Name: Alcon LADARWave™ CustomCornea® Wavefront System
Regulation Number: 21 CFR 886.1760
The LADARWave™ CustomCornea® Wavefront System is used for measuring, recording, and analyzing visual aberrations (such as myopia, hyperopia, astigmatism, coma and spherical aberration) and for displaying refractive error maps of the eye to assist in prescribing refractive corrections. This device is enabled to export wavefront data and associated anatomical registration information to a compatible treatment laser with an indication for wavefront-guided refractive surgery.
The LADARWave™ CustomCornea® Wavefront System is an aberrometer, utilizing Hartmann-Shack wavefront sensing to measure the aberrations in the human eye. The device contains four major optical subsystems used in the clinical wavefront examination: a fixation subsystem, a video subsystem, a probe beam subsystem, and a wavefront detection subsystem. These subsystems are all under control of the device software.
The provided document is a 510(k) summary for the Alcon LADARWave™ CustomCornea® Wavefront System. This type of regulatory submission primarily focuses on demonstrating substantial equivalence to a predicate device rather than providing detailed clinical study results or performance against specific acceptance criteria in the manner one might find for novel or high-risk devices.
Therefore, much of the requested information regarding acceptance criteria, study details, sample sizes, and ground truth establishment is not available in this document. The document confirms that the device is an aberrometer used for measuring, recording, and analyzing visual aberrations and displaying refractive error maps to assist in prescribing refractive corrections. It also states that the device can export data to a compatible treatment laser for wavefront-guided refractive surgery.
Here's a breakdown of the available information based on your request:
Acceptance Criteria and Reported Device Performance
The document does not explicitly state quantitative acceptance criteria or detailed performance metrics from a study designed to prove the device meets these criteria. Instead, it relies on demonstrating substantial equivalence to predicate devices.
Table of Acceptance Criteria and Reported Device Performance:
| Acceptance Criteria | Reported Device Performance |
|---|---|
| Not explicitly defined in the document. The submission focuses on substantial equivalence to predicate devices rather than specific performance metrics. | Not explicitly reported in the document in terms of quantitative performance metrics against acceptance criteria. |
Study Details
The document does not describe a specific clinical study with test sets, ground truth, or statistical analysis in the way modern AI/ML device submissions would. It refers to the device's characteristics and its equivalence to other diagnostic devices.
2. Sample size used for the test set and the data provenance:
- Not available. The document does not describe a test set or its provenance.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- Not available. Ground truth establishment is not discussed.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:
- Not available. Adjudication methods are not discussed.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done:
- Not available. No MRMC study is mentioned.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:
- The device is a diagnostic tool designed to measure and analyze. Its function is inherently standalone in gathering the measurements, but the application of its output (prescribing corrections, guiding surgery) involves a human in the loop. The document doesn't detail a specific "standalone performance" study as would be expected for an AI algorithm. Its performance is based on the accuracy and reliability of its measurements compared to established methods.
7. The type of ground truth used:
- Not explicitly stated in the context of a "study" for acceptance. Given the nature of a refractometer, the "ground truth" for its measurements would typically be established through comparison with other accepted methods of refractive error measurement (e.g., subjective refraction, retinoscopy) or physical model eyes, which the document alludes to by comparing it to predicate devices that measure refractive characteristics.
8. The sample size for the training set:
- Not applicable/Not available. This device predates the widespread use of large-scale machine learning and "training sets" in the modern sense. Its design and "knowledge" are based on optical physics and engineering principles, not statistical learning from a dataset.
9. How the ground truth for the training set was established:
- Not applicable/Not available. See point 8.