
510(k) Data Aggregation

    K Number: K073508
    Device Name: PARKONE
    Date Cleared: 2008-09-11 (273 days)
    Product Code:
    Regulation Number: 886.1850
    Reference & Predicate Devices: N/A
    Intended Use

    The PARK 1 is designed to photograph the eye and take Scheimpflug images of the anterior segment to evaluate the thickness of the cornea. The integrated keratometer measures the central radii of the cornea. The integrated Ophthalmic Refractometer measures the refractive power of the eye.

    Device Description

    The PARK 1 is a non-invasive diagnostic system created to:

    • take photos of the anterior segment of the eye
    • measure the refractive power of the eye
    • measure the central corneal K-values.

    The device is stationary and AC-powered. The PARK 1:

    • is based on the Scheimpflug principle for slit-image photography. The measuring system uses UV-free blue light directed through a slit to illuminate the eye, and a CCD camera for photography. The device takes a series of images of the anterior segment of the eye from one fixed location (180°) and analyses one image, selected by software
    • has a real keratometer to measure the central keratometer values directly, as defined in the 3.1 mm ring
    • includes an Ophthalmic Refractometer to measure the refractive power of the eye (21 CFR 886.1760).

    The device consists of a measurement unit, a built-in CPU, a head and chin rest, and an external power supply.
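The keratometer values mentioned above (K1/K2) are conventionally expressed in diopters derived from the measured central corneal radii. As a hypothetical illustration only, the sketch below applies the standard keratometric conversion using the conventional keratometric refractive index of 1.3375; the PARK 1's actual internal calibration is not disclosed in this summary.

```python
# Hypothetical sketch: converting a central corneal radius (mm) to a
# keratometric K-value in diopters. The keratometric index 1.3375 is the
# conventional assumption used by most keratometers, not a value stated
# in this 510(k) summary.

def radius_to_diopters(radius_mm: float, n_keratometric: float = 1.3375) -> float:
    """K (dpt) = (n - 1) * 1000 / r, with the radius r in millimetres."""
    return (n_keratometric - 1.0) * 1000.0 / radius_mm

# Example: a 7.8 mm central radius corresponds to roughly 43.3 dpt.
print(round(radius_to_diopters(7.8), 2))
```

Under this convention, the reported K1/K2 repeatability of about 0.1 dpt corresponds to a very small variation in the measured corneal radius.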
    AI/ML Overview

    This application for the PARK 1 device does not explicitly define "acceptance criteria" as a separate section with specific thresholds. Instead, the "Brief summary of nonclinical tests and results" section presents repeatability data, which implies that the device's performance is deemed acceptable if it demonstrates a certain level of precision for pachymetry and keratometry measurements. The substantial equivalence claim is based on similarity to predicate devices rather than meeting a predefined performance standard.

    Here's a breakdown of the requested information based on the provided text:

    Acceptance Criteria and Reported Device Performance

    Given the lack of explicit acceptance criteria, the table below presents the reported repeatability measurements, which are the primary performance metrics provided for the device. The "Acceptance Criteria" column is inferred as the reported repeatability values themselves, implying that these values were considered appropriate for demonstrating substantial equivalence.

    Measurement Category | Measurement      | Acceptance Criteria (Implied) | Reported Device Performance (Repeatability)
    Pachymetry           | Apical Thickness | ≤ 4.79 µm                     | 4.79 µm
    Pachymetry           | Min. Thickness   | ≤ 5.51 µm                     | 5.51 µm
    Keratometry          | K1               | ≤ 0.11 dpt                    | 0.11 dpt
    Keratometry          | K2               | ≤ 0.12 dpt                    | 0.12 dpt
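A repeatability figure like those in the table is typically the standard deviation of repeated measurements of the same eye by the same operator. The sketch below shows one common way to compute such a figure; the measurement values are fabricated for illustration, and the applicant's actual protocol and statistics are not described in this summary.

```python
# Hypothetical sketch: within-eye repeatability as the sample standard
# deviation of repeated apical pachymetry readings. All values are made-up
# example data, not measurements from the PARK 1 study.
import statistics

repeated_apical_um = [545.0, 549.8, 541.2, 547.5, 544.1]  # fabricated readings (µm)

repeatability_sd = statistics.stdev(repeated_apical_um)
print(f"within-eye repeatability SD: {repeatability_sd:.2f} µm")
```

With repeated measurements on many eyes, a pooled within-eye standard deviation would give a single repeatability estimate comparable to the values in the table.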

    Detailed Study Information

    1. Sample size used for the test set and the data provenance:

      • Sample Size: 46 subjects (92 eyes).
      • Data Provenance: The study was "internally performed" by OCULUS Optikgeräte GmbH. It is retrospective, as the measurements were taken and then analyzed. The country of origin is not explicitly stated for the subjects, but the applicant is based in Germany.
    2. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

      • Number of Experts: Not applicable. The study measured the device's repeatability rather than comparing its measurements to a "ground truth" established by experts. The measurements were taken by "the same operator" to determine how consistently the device performs.
      • Qualifications of Experts: Not applicable, as there were no experts establishing ground truth in this study design.
    3. Adjudication method for the test set:

      • Adjudication Method: Not applicable. There was no ground truth that required adjudication. The study focused on the variability of the device's own measurements.
    4. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, the effect size of how much human readers improve with AI vs. without AI assistance:

      • MRMC Study: No. This study was a device repeatability study, not a comparative effectiveness study involving human readers or AI assistance. The PARK 1 is a diagnostic measurement device, not an AI-assisted diagnostic tool.
      • Effect Size: Not applicable.
    5. If a standalone study (i.e., algorithm-only performance without a human in the loop) was done:

      • Standalone Performance: Yes, in a sense. The study evaluated the device's intrinsic repeatability when operated by a single individual, focusing on the consistency of the device's measurements. The "algorithm" here refers to the device's measurement system. However, it's not a standalone AI algorithm.
    6. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

      • Type of Ground Truth: Not applicable. The study aimed to determine the repeatability of the device's measurements, not its accuracy against an external gold standard or ground truth. Each measurement performed by the device on an eye served as its own data point for assessing consistency.
    7. The sample size for the training set:

      • Training Set Sample Size: Not applicable. This device is not an AI/ML algorithm that requires a "training set" in the conventional sense. It's an optical measurement device. Its internal algorithms are part of its fixed design, not learned from data.
    8. How the ground truth for the training set was established:

      • Ground Truth for Training Set: Not applicable. As it's not an AI/ML device with a training set, the concept of establishing ground truth for a training set does not apply.