
510(k) Data Aggregation

    K Number
    K243878
    Device Name
    CLARUS (700)
    Date Cleared
    2025-04-17

    (120 days)

    Product Code
    Regulation Number
    886.1120
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The CLARUS 700 ophthalmic camera is indicated to capture, display, annotate and store images to aid in the diagnosis and monitoring of diseases and disorders occurring in the retina, ocular surface and visible adnexa. It provides true color and autofluorescence imaging modes for stereo, widefield, ultra-widefield, and montage fields of view.

    The CLARUS 700 angiography is indicated as an aid in the visualization of vascular structures of the retina and the choroid.

    Device Description

    The CLARUS 700 is an active, software-controlled, high-resolution ophthalmic imaging device for in-vivo imaging of the human eye. Imaging modes include True color, Fundus Auto-fluorescence with green excitation, Fundus Auto-fluorescence with blue excitation, Fluorescein Angiography, Stereo, External eye and Fluorescein Angiography-Indocyanine green angiography (FA-ICGA). All true color images can be separated into red, green and blue channel images to help enhance visual contrast of details in certain layers of the retina.
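
As a rough illustration of the channel-separation idea described above (not ZEISS's implementation), a true-color fundus image stored as an H×W×3 array can be split into its red, green and blue channel images with a few lines of NumPy; the image array here is hypothetical:

```python
import numpy as np

# Hypothetical 1024x1024 true-color fundus image (R, G, B channels), uint8
rgb = np.random.randint(0, 256, size=(1024, 1024, 3), dtype=np.uint8)

# Separate into three single-channel images; each can then be viewed as a
# grayscale image to enhance contrast in a particular retinal layer
red, green, blue = rgb[..., 0], rgb[..., 1], rgb[..., 2]

print(red.shape, green.shape, blue.shape)
```

In fundus photography the green channel typically gives the strongest vascular contrast, which is one reason channel separation is clinically useful.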

    The CLARUS 700 angiography imaging aids in the visualization of the vascular structures of the retina and the choroid. With a single capture, CLARUS 700 produces a 90° high definition widefield image. Widefield images are automatically merged to achieve a 135° ultra-widefield view. The CLARUS 700 makes use of a deep learning algorithm for Optic Nerve Head (ONH) detection. The ultra-widefield montage on CLARUS 700 no longer depends solely on the patient accurately fixating their gaze on the internal fixation target. With ONH detection, the software finds the optic nerve and determines, based on the image(s) captured, where the patient was gazing at the point of capture. The CLARUS 700 device allows clinicians to easily review and compare high-quality images captured during a single exam while providing annotation and caliper measurement tools that allow in-depth analysis of eye health. CLARUS 700 is designed to optimize each patient's experience by providing a simple head and chin rest that allows the patient to maintain a stable, neutral position while the operator brings the optics to the patient, facilitating a more comfortable imaging experience. The ability to swivel the device between the right and left eye helps technicians capture an image without realigning the patient. Live IR Preview allows the technician to confirm image quality and screen for lid and lash obstructions prior to imaging, ensuring fewer image recaptures.

    The CLARUS 700 device's principle of operation is Slit Scanning Ophthalmic Camera also referred to as Broad Line Fundus Imaging (BLFI). During image capture, a line of illumination passes through the slit and scans across the retina. A 2D monochromatic camera captures the returned light to image the retina. A single sweep of the illumination is used to illuminate the retina for image capture. Repeated sweeps of near infrared light are used for a live retina view for alignment. Red, green and blue LEDs sequentially illuminate to generate true color images. Blue and green LED illumination enables Fundus Autofluorescence (FAF) imaging. Fluorescein Angiography images are captured with green LED illumination at a wavelength that stimulates fluorescence of the injected sodium fluorescein dye. The principle of operation of CLARUS 700 has not changed since the previous clearance, K191194.

    The CLARUS 700 system is mainly comprised of an acquisition device, all-in-one PC, keyboard, mouse, instrument lift table and external power supply.

    The device hardware is based on the predicate CLARUS 700 (K191194) hardware. The new ICGA imaging mode on the device required the following hardware changes as stated in the summary above:

    • Lightbox for Infrared (IR) Laser
    • Modified Slit filter – FA/ICG Slit Excitation Filter – new coating, no change to FA
    • Modified Turret Filter 1- FA/ICG Dual Band barrier filter – new coating, no change to FA.
    • Added Turret Filter 2 – second filter with the same coating as Turret Filter 1 but a different shape, to eliminate the corneal reflex band in ICG images
    • Added Large Alignment Tool (LAT)
    • Added ICG Power Meter Tool

    The CLARUS software provides the user the capability to align, capture, review and annotate images. The software has two installation configurations: Software installed on the Instrument (Acquisition & Review) as well as Software installed on a separate 'Review Station' (Laptop or Computer) (only Review).

    The DEVICE software version 1.2 is based on the predicate CLARUS 700 software version 1.0 (K191194).

    Added capabilities in DEVICE software version 1.2 include:

    • Simultaneous capture of Fluorescein Angiography (FA) + Indocyanine Green Angiography (ICGA)
    • Angiography Movie: Capture of multiple pictures in sequence, after a single press of a button. Available for FA, ICGA and Simultaneous FA+ICGA.
    • Early Treatment Diabetic Retinopathy Study (ETDRS) – Manual placement of ETDRS grids (7 field ETDRS and Macula ETDRS) over the pictures:
      • The ETDRS 7-fields grid in CLARUS displays the standard 7 fields of Color Fundus Photography used to determine an ETDRS (Early Treatment Diabetic Retinopathy Study) level for patients with Diabetic Retinopathy. These 7 fields in and around the macular region are displayed in one single widefield image, following the definitions used for the gold-standard 7-field images captured with narrow-field fundus cameras.
      • The Macular ETDRS grids display assists in the identification of an ETDRS level in nine subfields centered around the fovea.
    • ICGA Boost Mode: a user-selectable option for ICGA capture that increases the illumination to obtain better images in the later phases.
    • 8-up view: adds the ability to view eight images side by side (previously only 1, 2, 4, or 16 images could be viewed)

    The CLARUS 700 device meets the requirements of ISO 10940:2009 standard. The device technical specifications are identical to the predicate device.

    AI/ML Overview

    The provided text is a 510(k) clearance letter and summary for the CLARUS 700 ophthalmic camera, particularly focusing on the new v1.2 software update. While it discusses the device's intended use, technical characteristics, and various tests performed, it does not contain detailed acceptance criteria or the specific results of a comprehensive clinical study in the format of "acceptance criteria vs. reported device performance."

    The document mentions "clinical testing aimed at demonstrating the ability of the new model of CLARUS 700 to image a variety of retinal and choroidal conditions using simultaneous FA and simultaneous ICGA and standalone ICGA." It states that "Our analysis of the grading of angiography images showed that the quality of the images captured by the CLARUS 700 simultaneous FA, simultaneous ICGA, and standalone ICGA were clinically acceptable by three independent graders." However, this is a qualitative statement rather than quantitative acceptance criteria with specific performance metrics.

    Therefore, I cannot populate a table of acceptance criteria and reported device performance from the provided text. I can, however, extract related information from the "Clinical Data" section:


    Acceptance Criteria and Study Details (Based on provided text)

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Acceptance Criteria (Quantitative/Specific) | Reported Device Performance (Quantitative/Specific) |
    |---|---|
    | Not explicitly defined in the provided document. | Not explicitly defined in the provided document beyond a qualitative statement. |
    | Example of typical criteria (not from text): minimum percentage of images graded as "clinically acceptable" | Reported: "Our analysis of the grading of angiography images showed that the quality of the images captured by the CLARUS 700 simultaneous FA, simultaneous ICGA, and standalone ICGA were clinically acceptable by three independent graders." |

    Explanation: The document states that the "quality of the images... were clinically acceptable by three independent graders." This implies an implicit acceptance criterion that images must be "clinically acceptable." However, no quantitative threshold (e.g., "90% of images must be clinically acceptable") is provided, nor are specific quantitative performance metrics (e.g., actual percentage of acceptable images).

    2. Sample Size Used for the Test Set and Data Provenance:

    • Test Set Sample Size: Not specified in the provided text.
    • Data Provenance: Not specified (e.g., country of origin, retrospective/prospective). The text only states "ZEISS conducted clinical testing."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts:

    • Number of Experts: Three independent graders.
    • Qualifications of Experts: Not specified (e.g., "radiologist with 10 years of experience").

    4. Adjudication Method for the Test Set:

    • Adjudication Method: Not specified beyond "three independent graders." It does not mention if consensus, majority rule (e.g., 2+1), or another method was used for discordant readings.
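
To make the "2+1" / majority-rule idea concrete, here is a minimal sketch of how discordant readings from three independent graders could be adjudicated. This is purely illustrative: the 510(k) summary does not state which method, if any, was used, and the grade labels are hypothetical.

```python
from collections import Counter

def adjudicate(grades):
    """Hypothetical majority-rule adjudication across independent graders.

    Returns the grade held by a strict majority; with no majority (all
    graders disagree), the case is escalated for further review.
    """
    grade, votes = Counter(grades).most_common(1)[0]
    return grade if votes > len(grades) / 2 else "escalate"

print(adjudicate(["acceptable", "acceptable", "not acceptable"]))  # acceptable
```

Under a 2+1 scheme the "escalate" branch would instead send the case to a designated senior adjudicator whose reading is final.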

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done and the Effect Size of how much human readers improve with AI vs without AI assistance:

    • MRMC Study: No, an MRMC comparative effectiveness study was not described. The study focused on the image quality produced by the device as assessed by human graders, not on the improvement of human readers' performance with AI assistance.
    • Effect Size: Not applicable, as no MRMC study comparing human readers with/without AI assistance was conducted or reported. The device's deep learning algorithm for ONH detection is noted, but its specific impact on reader performance or an MRMC study related to it is not detailed.

    6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    • Standalone Performance: Not explicitly detailed or quantified. The document notes that "The CLARUS 700 makes use of a deep learning algorithm for Optic Nerve Head (ONH) detection." However, no standalone performance metrics (e.g., specificity, sensitivity, accuracy) for this algorithm are provided in the clinical data summary.

    7. The Type of Ground Truth Used:

    • Type of Ground Truth: Expert grading/consensus from "three independent graders" on "clinical acceptability" of angiography images. It is not stated if this was against a pathology or outcomes data gold standard.

    8. The Sample Size for the Training Set:

    • Training Set Sample Size: Not specified in the provided text. The document refers to a "deep learning algorithm for Optic Nerve Head (ONH) detection." While this implies a training set was used, its size is not disclosed.

    9. How the Ground Truth for the Training Set Was Established:

    • Ground Truth for Training Set: Not specified.

    K Number
    K191194
    Device Name
    CLARUS
    Date Cleared
    2019-06-25

    (53 days)

    Product Code
    Regulation Number
    886.1120
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The CLARUS 700 ophthalmic camera is indicated to capture, display, annotate and store images to aid in the diagnosis and monitoring of diseases and disorders occurring in the retina, ocular surface and visible adnexa. It provides true color and autofluorescence imaging modes for stereo, widefield, ultra-widefield, and montage fields of view. The CLARUS 700 angiography is indicated as an aid in the visualization of vascular structures of the retina and the choroid.

    Device Description

    The CLARUS™ model 700 is a new addition to the CLARUS product family consisting of the existing model 500 (K181444). The CLARUS 700 is an active, software-controlled, high-resolution ophthalmic imaging device for in-vivo imaging of the human eye. Imaging modes include True color, Fundus Auto-fluorescence with green excitation, Fundus Auto-fluorescence with blue excitation, Fluorescein Angiography, Stereo and External eye. All true color images can be separated into red, green and blue channel images to help enhance visual contrast of details in certain layers of the retina. The CLARUS 700 angiography imaging aids in the visualization of the vascular structures of the retina and the choroid. With a single capture, CLARUS 700 produces a 90° high definition widefield image. Widefield images are automatically merged to achieve a 135° ultra-widefield view. The CLARUS 700 makes use of a deep learning algorithm for Optic Nerve Head (ONH) detection. The ultra-widefield montage on CLARUS 700 no longer depends solely on the patient accurately fixating their gaze on the internal fixation target. With ONH detection, the software finds the optic nerve and determines, based on the image(s) captured, where the patient was gazing at the point of capture. The CLARUS 700 device allows clinicians to easily review and compare high-quality images captured during a single exam while providing annotation and caliper measurement tools that allow in-depth analysis of eye health. CLARUS 700 is designed to optimize each patient's experience by providing a simple head and chin rest that allows the patient to maintain a stable, neutral position while the operator brings the optics to the patient, facilitating a more comfortable imaging experience. The ability to swivel the device between the right and left eye helps technicians capture an image without realigning the patient.
Live IR Preview allows the technician to confirm image quality and screen for lid and lash obstructions, prior to imaging, ensuring fewer image recaptures.

    The CLARUS 700 device's principle of operation is Slit Scanning Ophthalmic Camera also referred to as Broad Line Fundus Imaging (BLFI), same as the predicate CLARUS 500 (K181444). During image capture, a line of illumination passes through the slit and scans across the retina. A 2D monochromatic camera captures the returned light to image the retina. A single sweep of the illumination is used to illuminate the retina for image capture. Repeated sweeps of near infrared light are used for a live retina view for alignment. Red, green and blue LEDs sequentially illuminate to generate true color images. Blue and green LED illumination enables Fundus Autofluorescence (FAF) imaging. Fluorescein Angiography images are captured with green LED illumination at a wavelength that stimulates fluorescence of the injected sodium fluorescein dye.

    The CLARUS 700 system is mainly comprised of an acquisition device, all-in-one PC, keyboard, mouse, instrument lift table and external power supply.

    The CLARUS 700 hardware is based on the predicate CLARUS 500 (K181444) hardware. The new FA imaging mode on the CLARUS 700 required the following hardware changes:

    • Added filters to support FA imaging mode
    • Updated slit turret and motor with new positions for reliability, angiography filters and FPGA code
    • Updated calibration tool for new turret positions and differentiation
    • Changed lightbox board for reliability and to support a higher duty cycle for FA imaging
    • Updated Onyx all-in-one computer to 32 GB RAM and 2 TB HDD storage
    • Updated belt-driven slit for reliability and to support FA imaging mode
    • Updated camera to support FA imaging mode

    The CLARUS software provides the user the capability to align, capture, review and annotate images. The software has two installation configurations: Software installed on the Instrument (Acquisition & Review) as well as Software installed on a separate 'Review Station' (Laptop or Computer) (only Review).

    The CLARUS software version 1.1 is based on the predicate CLARUS software version 1.0 (K181444). The added image capture modality is Fluorescein Angiography. Other changes implemented in software version 1.1 include:

    • Automated Optic Nerve Head (ONH) detection for montaging
    • Smart (Region of Interest) Focus
    • Auto brightness for FA image series
    • Calibration software update for DEVICE hardware changes
    • FORUM/other EMR connectivity updates for the new FA imaging mode

    The CLARUS 700 device meets the requirements of ISO 10940:2009 standard. The device technical specifications are identical to the predicate device. The performance specifications relevant to the user are summarized in the Table 1 below.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary for the CLARUS 700:

    1. Table of Acceptance Criteria and Reported Device Performance:

    | Criteria # | Criteria Description | Acceptance Criteria | Reported Device Performance |
    |---|---|---|---|
    | 1 | Area and lesion of interest is visible on the angiogram | N/A (implied high visibility) | 17/20 (85%) |
    | 2 | Clinically useful image. Image appearance is consistent with the disease and transit phase of dye | N/A (implied high clinical utility) | 19/20 (95%) |
    | 3 | Artifacts, if any, do not interfere with ability to interpret image | N/A (implied minimal interference) | 19/20 (95%) |

    Note: The document only provides the reported device performance as "passing rates" for the Fluorescein Angiography (FA) imaging mode, without explicitly stating numerical acceptance criteria for each point. The acceptance criteria are implied to be high percentages, demonstrating good clinical utility and image interpretability.
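
Because the summary reports only point passing rates from a small sample, it can help to see how wide the statistical uncertainty is at n = 20. The sketch below (not from the document) attaches a 95% Wilson score interval to each reported rate; the labels are shorthand for the three criteria above:

```python
from math import sqrt

def wilson_interval(successes, n, z=1.96):
    """95% Wilson score interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

for passed, label in [(17, "lesion visibility"),
                      (19, "clinical utility"),
                      (19, "artifact non-interference")]:
    lo, hi = wilson_interval(passed, 20)
    print(f"{label}: {passed}/20 = {passed/20:.0%}, 95% CI ({lo:.0%}, {hi:.0%})")
```

For 17/20 the interval spans roughly the mid-60s to mid-90s percent, which underscores why the qualitative "passing rate" framing, without predefined thresholds, is a limited basis for acceptance criteria.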

    2. Sample Size Used for the Test Set and Data Provenance:

    • Sample Size: 20 eyes from 13 subjects (11 male, 2 female)
    • Data Provenance: The document does not explicitly state the country of origin or whether the study was retrospective or prospective. However, based on the nature of a "clinical study to support indications for use," it is highly probable that it was a prospective study designed for regulatory submission. The location of the manufacturer (Dublin, California, USA) suggests the study might have been conducted in the US, but this is not explicitly stated for the clinical data itself.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications:

    The document does not explicitly state the number of experts used or their specific qualifications for establishing the ground truth of the test set images. It mentions "clinically useful image" and "ability to interpret image," implying expert evaluation, but the specifics are not provided in this summary.

    4. Adjudication Method for the Test Set:

    The document does not mention any specific adjudication method (e.g., 2+1, 3+1). Expert consensus or independent review by a single expert is implied by the evaluation of "clinical utility" and "interpretability," but the process is not detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study:

    • Was an MRMC study done? No, a traditional MRMC comparative effectiveness study involving human readers with and without AI assistance was not explicitly conducted or reported for the performance of AI on human readers.
    • Effect Size: Therefore, no effect size for human reader improvement with AI assistance is reported.

    Note on AI: The device does make use of a deep learning algorithm for Optic Nerve Head (ONH) detection to improve montage creation, but the clinical study described focuses on the overall performance of the Fluorescein Angiography imaging mode, not specifically on the impact of ONH detection AI on reader performance.

    6. Standalone (Algorithm Only) Performance:

    The document does not present separate standalone (algorithm-only) performance metrics for the deep learning algorithm (ONH detection). The clinical study evaluates the device's ability to capture useful images, which would indirectly incorporate the functionality of the device's software, but it's not a standalone performance evaluation of the AI component in isolation.

    7. Type of Ground Truth Used:

    The ground truth used for evaluating the clinical utility of the Fluorescein Angiography images appears to be expert clinical judgment/interpretation of the images. The criteria like "Area and lesion of interest is visible on the angiogram" and "Clinically useful image" strongly suggest evaluation by a medical professional or panel thereof. The study's objective was to demonstrate the device's ability to capture images useful for "diagnosis and monitoring of diseases and disorders," further supporting expert clinical judgment as the ground truth.

    8. Sample Size for the Training Set:

    The sample size for the training set of the deep learning algorithm (ONH detection) is not specified in the provided document.

    9. How the Ground Truth for the Training Set Was Established:

    The document briefly mentions "deep learning algorithm for Optic Nerve Head (ONH) detection" but does not detail how the ground truth for training this algorithm was established. It can be inferred that it would involve expertly annotated images for ONH location, but the specifics are not provided.


    K Number
    K181534
    Device Name
    CIRRUS HD-OCT
    Date Cleared
    2019-02-15

    (249 days)

    Product Code
    Regulation Number
    886.1570
    Reference & Predicate Devices
    Predicate For
    Intended Use

    CIRRUS™ HD-OCT is a non-contact, high resolution tomographic and biomicroscopic imaging device intended for in-vivo viewing, axial cross-sectional, and three-dimensional imaging of anterior ocular structures. The device is indicated for visualizing and measuring anterior and posterior ocular structures, including corneal epithelium, retinal nerve fiber layer, ganglion cell plus inner plexiform layer, macula, and optic nerve head. The CIRRUS normative databases are quantitative tools indicated for the comparison of retinal nerve fiber layer thickness, macular thickness, ganglion cell plus inner plexiform layer thickness, and optic nerve head measurements to a database of normal subjects.

    CIRRUS' AngioPlex OCT Angiography is indicated as an aid in the visualization of vascular structures of the retina and choroid. (Model 5000 only.)

    CIRRUS HD-OCT is indicated as a diagnostic device to aid in the detection and management of ocular diseases including, but not limited to, macular holes, cystoid macular edema, diabetic retinopathy, age-related macular degeneration, and glaucoma.

    Device Description

    The CIRRUS™ HD-OCT is a computerized instrument that acquires and analyzes cross-sectional tomograms of anterior and posterior ocular structures (including cornea, retinal nerve fiber layer, macula, and optic disc). It employs non-invasive, non-contact, low-coherence interferometry to obtain these high-resolution images. Using this non-invasive optical technique, CIRRUS HD-OCT produces high-resolution cross-sectional tomograms of the eye without contacting the eye. It also produces images of the retina and layers of the retina from an en face perspective (i.e., as if looking directly in the eye).

    The CIRRUS HD-OCT is offered in two models, Model 5000 and 500. In the CIRRUS HD-OCT Model 5000, the fundus camera is a line scanning ophthalmoscope. The CIRRUS HD-OCT Model 500 is similar to the Model 5000 except that it provides the fundus image using the OCT scanner only.

    The acquired imaging data can be analyzed to provide thickness and area measurements of regions of interest to the clinician. The system uses acquired data to determine the fovea location or the optic disc location. Measurements can then be oriented using the fovea and/or optic disc locations. The patient's results can be compared to subjects without disease for measurements of RNFL thickness, neuroretinal rim area, average and vertical cup-to-disc area ratio, cup volume, macular thickness and ganglion cell plus inner plexiform layer thickness.

    In addition to macular and optic disc cube scans, the CIRRUS HD-OCT 5000 also offers scans for OCT angiography imaging, a non-invasive approach with depth sectioning capability to visualize microvascular structures of the eye.

    Anterior segment scans enable analysis of the anterior segment including Anterior Chamber Depth, Angle-to-Angle and automated measurement of the thickness of the cornea with the Pachymetry scan.

    AI/ML Overview

    The provided text describes the 510(k) summary for the Carl Zeiss Meditec, Inc. CIRRUS HD-OCT with Software Version 10. The main study detailed is for corneal epithelial thickness measurements.

    Here's a breakdown of the acceptance criteria and study information:

    1. Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implied by the statistical analyses performed, primarily focusing on repeatability, reproducibility, and agreement with manual measurements. The performance is reported in terms of these statistical metrics rather than predefined thresholds for acceptance.

    Corneal Epithelial Thickness Measurements (Pachymetry Scans)

    | Metric | Acceptance Criteria (Implied) | Reported Performance (Normal Eyes, Central Sector) | Reported Performance (Pathology Eyes, Central Sector) |
    |---|---|---|---|
    | Repeatability SD | Low standard deviation (SD) for repeated measurements | 0.8 µm | 1.4 µm |
    | Repeatability Limit | Low limit (2.8 × repeatability SD) | 2.2 µm | 4.0 µm |
    | Repeatability CV% | Low coefficient of variation | 1.6% | 3.0% |
    | Reproducibility SD | Low SD across different operators/devices | 1.1 µm | 1.8 µm |
    | Reproducibility Limit | Low limit (2.8 × reproducibility SD) | 3.2 µm | 5.1 µm |
    | Reproducibility CV% | Low coefficient of variation | 2.3% | 3.8% |
    | Automated vs. manual agreement (Deming regression) | Slope close to 1, intercept close to 0 | Slope: 0.88 (95% CI: 0.71, 1.04); intercept: 4.69 (95% CI: −3.65, 13.02) | Slope: 1.03 (95% CI: 0.95, 1.10); intercept: −2.46 (95% CI: −6.35, 1.43) |
    | Automated vs. manual agreement (Bland-Altman limits of agreement) | Narrow limits of agreement around a mean difference close to 0 | Mean difference: −1.59 µm (SD 1.77); LoA: −5.05 to 1.88 µm | Mean difference: −1.17 µm (SD 2.98); LoA: −7.00 to 4.67 µm |

    Note: The table provides data for the "Central" sector as an example. The document provides detailed results for 25 different sectors.
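
The statistics in the table follow standard definitions; the sketch below (illustrative only, not the study's analysis code) shows how the repeatability rows and Bland-Altman limits are computed. The factor 2.8 is the conventional repeatability coefficient, approximately 1.96 × √2:

```python
import numpy as np

def repeatability(measurements_um):
    """Repeatability statistics for repeated measurements of the same eye:
    sample SD, repeatability limit (2.8 * SD), and coefficient of variation."""
    m = np.asarray(measurements_um, dtype=float)
    sd = m.std(ddof=1)
    return {"sd": sd, "limit": 2.8 * sd, "cv_pct": 100 * sd / m.mean()}

def bland_altman(auto_um, manual_um):
    """Mean difference and 95% limits of agreement (mean ± 1.96 * SD of the
    paired differences) between automated and manual measurements."""
    d = np.asarray(auto_um, float) - np.asarray(manual_um, float)
    mean, sd = d.mean(), d.std(ddof=1)
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)
```

Sanity check against the table: with the reported normal-eye central-sector repeatability SD of 0.8 µm, 2.8 × 0.8 ≈ 2.2 µm, matching the reported repeatability limit.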

    2. Sample Size and Data Provenance

    • Test Set Sample Size:
      • Normal Corneas (Group 1): 11 adult participants (one eligible eye per participant).
      • Keratoconus/Post-LASIK (Group 2): 12 participants (one eligible eye per participant).
    • Data Provenance: The document does not explicitly state the country of origin. It is a prospective clinical study specifically conducted to determine repeatability, reproducibility, and agreement.

    3. Number of Experts and Qualifications for Ground Truth

    • Number of Experts: Three masked graders.
    • Qualifications of Experts: Not explicitly stated beyond "masked graders."

    4. Adjudication Method for the Test Set

    The document states: "To generate manually marked corneal epithelial thickness measurements, three masked graders reviewed images and manually performed measurements in the 25 sectors." It does not specify an adjudication method like 2+1 or 3+1; it simply mentions that three graders performed the measurements. The comparison is made between the automated measurement and these manual measurements.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    There is no mention of a multi-reader multi-case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance. The study focuses on the agreement between the device's automated measurements and manual measurements.

    6. Standalone (Algorithm Only) Performance

    Yes, a standalone performance assessment was done. The study specifically evaluated the "automated corneal epithelial thickness measurements" generated by the device's software. The comparison to manual measurements serves as a validation of this standalone algorithm's performance against human expert measurements.

    7. Type of Ground Truth Used

    The ground truth for the corneal epithelial thickness measurements was established by expert manual measurements performed by three masked graders. This serves as a reference standard to which the automated measurements were compared.

    8. Sample Size for the Training Set

    The document does not provide information about the sample size used for the training set of the algorithm. This study focuses on the clinical evaluation of the device in its final form.

    9. How the Ground Truth for the Training Set Was Established

    The document does not provide information on how the ground truth for the training set (if any) was established. The clinical evaluation described pertains to the performance validation of the already developed algorithm.


    K Number
    K181444
    Device Name
    CLARUS
    Date Cleared
    2019-01-10

    (223 days)

    Product Code
    Regulation Number
    886.1120
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The CLARUS 500 ophthalmic camera is indicated to capture, display, annotate and store images to aid in the diagnosis and monitoring of diseases and disorders occurring in the retina, ocular surface and visible adnexa. It provides true color and autofluorescence imaging modes for stereo, widefield, ultra-widefield, and montage fields of view.

    Device Description

    The CLARUS™ 500 is an active, software controlled, high-resolution ophthalmic imaging device for In-vivo imaging of the human eye. Imaging modes include True color, Fundus Autofluorescence with green excitation, Fundus Auto-fluorescence with blue excitation, Stereo and External eye. All true color images can be separated into red, green and blue channel images to help enhance visual contrast of details in certain layers of the retina. With a single capture, CLARUS 500 produces a 90° high definition widefield image. Widefield images are automatically merged to achieve a 135° ultra-widefield view. The technology allows clinicians to easily review and compare high-quality images captured during a single exam while providing annotation and caliper measurement tools that allow analysis of eye health. CLARUS 500 is designed to optimize each patient's experience by providing a simple head and chin rest that allows the patient to maintain a stable, neutral position while the operator brings the optics to the patient, facilitating a more comfortable imaging experience. The ability to swivel the device between the right and left eye helps technicians capture an image without realigning the patient. Live Infrared Preview allows the technician to confirm image quality and screen for lid and lash obstructions, prior to imaging, ensuring fewer image recaptures.

    The CLARUS 500 device's principle of operation is based on Slit Scanning Ophthalmoscope also referred to as Broad Line Fundus Imaging (BLFI). During image capture, a broad line of illumination is scanned across the retina. A monochromatic camera captures the returned light to image the retina. A single sweep of the illumination is used to illuminate the retina for image capture. Repeated sweeps of near infrared light are used for a live retina view for alignment. Red, green and blue LEDs sequentially illuminate to generate true color images. Blue and green LED illumination enables Fundus Autofluorescence (FAF) imaging.

    The CLARUS 500 system mainly comprises an acquisition device, an all-in-one PC, a keyboard, a mouse, an instrument lift table and an external power supply.

    The CLARUS software gives the user the capability to align, capture, review and annotate images. The software has two installation configurations: installed on the instrument (acquisition and review), or installed on a separate review station, i.e., a laptop or computer (review only).

    The CLARUS 500 technical features relevant to the user are Field of View (FoV), image resolution, pixel pitch and focusing range. The device meets the requirements of the ISO 10940:2009 standard. The performance specifications are summarized in Table 1 below.

    AI/ML Overview

    The provided document describes the Carl Zeiss Meditec CLARUS 500 ophthalmic camera. However, it does not explicitly state acceptance criteria or a detailed study proving the device meets specific performance criteria in the format requested. The document focuses on demonstrating substantial equivalence to predicate devices for FDA clearance.

    Despite this, I can extract information related to performance and testing:

    1. A table of acceptance criteria and the reported device performance:

    The document doesn't provide a formal "acceptance criteria" table like one might find in a clinical trial protocol for an AI device. Instead, it lists technical specifications and states that the device meets an ISO standard and passed various verification and validation tests.

    Feature | Specification (Acceptance Criterion, implied) | Reported Performance / Verification Method
    --- | --- | ---
    Technical specifications (from Table 1) | |
    FoV – Widefield (single capture) | 90° | Verified through bench testing using a test eye.
    FoV – Ultra-widefield (montage) | 135° | Verified through software algorithm verification.
    Image Resolution | 60 lp/mm at central field (0°), 40 lp/mm at 23° FoV, 25 lp/mm at 45° FoV | Not explicitly stated; the device "meets the requirements of ISO 10940:2009 standard," which covers resolution.
    Sensors | 12-megapixel monochrome | Design characteristic; not explicitly tested as a performance criterion.
    Sensor Resolution | 3000 x 3000 pixels | Design characteristic; not explicitly tested as a performance criterion.
    Focusing Range | +20 D to -24 D | Not explicitly stated; covered by conformance to ISO 10940:2009.
    Pixel Pitch on the Fundus | 7.3 µm/pixel | Design characteristic; not explicitly tested as a performance criterion.
    General performance/safety (implied compliance criteria) | |
    Design Requirements | Satisfy established system requirements | Design verification testing demonstrated compliance.
    Customer Acceptance | Meet Product Requirements Specifications and user-experience acceptance criteria | Design validation testing demonstrated these were met.
    Consensus Standards Compliance | Conformity to multiple industry standards | R&D evaluation documented compliance with ISO 10940:2009 (fundus cameras), ANSI AAMI 60601-1:2005/(R)2012 and A1:2012 (Ed 3.1) (electrical safety), IEC 60601-1-2:2014 Ed 4.0 (EMC), ANSI Z80.36-2016 and ISO 15004-2:2007 (optical safety), IEC 60825-1:2007 (laser safety), ISO 15004-1:2009 (environmental conditions), and NEMA PS 3.1-3.20 (2016) (DICOM).
    Software Performance | Comply with FDA's "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices" | Software verification testing was conducted and documented.
    Biocompatibility | Comply with ISO 10993-1:2009 for patient-contact components | Materials for the patient chin rest and forehead rest were evaluated and comply.
    Clinical Feature Resolution | Similar amount of clinical features resolved compared to the reference device | "Study results concluded that similar amount of clinical features can be resolved on CLARUS 500 images as the images from the reference device in almost all cases."
    FAF Imaging Performance | Comparable to the FAF imaging mode of the reference device (CIRRUS photo) | A clinical study demonstrated the performance of the FAF-B and FAF-G imaging modes as compared to the FAF imaging mode of the reference device CIRRUS photo.
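As a rough consistency check on the tabled optics figures (an illustrative calculation, not part of the submission), multiplying the 3000-pixel sensor dimension by the 7.3 µm/pixel fundus pitch gives the retinal extent spanned by a single widefield capture:

```python
sensor_pixels = 3000        # sensor resolution per axis (3000 x 3000)
pixel_pitch_um = 7.3        # pixel pitch on the fundus, in µm/pixel
field_extent_mm = sensor_pixels * pixel_pitch_um / 1000.0
print(field_extent_mm)      # about 21.9 mm of retina per 90° widefield capture
```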

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

    The document mentions "A clinical study was conducted" for both general imaging modes and FAF imaging modes.

    • Sample Size: Not specified.
    • Data Provenance: Not specified (country/region, retrospective/prospective). It simply states "A clinical study was conducted."

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

    Not specified. The document only mentions that the study "concluded that similar amount of clinical features can be resolved." There is no detail on how this "ground truth" or comparison was established by experts.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    Not specified.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    No, this is not an AI device, and therefore no MRMC comparative effectiveness study involving AI assistance for human readers was done or described. The clinical study mentioned compares the device's imaging modes to a reference device. The focus is on the performance of the imaging capture, not an AI interpretation aid.

    6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:

    The CLARUS 500 is an imaging device, not an AI algorithm for interpretation. Its performance is inherent in the quality of the image capture. The "standalone" performance would be the image quality itself, which is verified through technical specifications and ISO compliance. The clinical study compares the "performance of the CLARUS 500 imaging modes" (standalone imaging output) to a reference device.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    For the clinical study on imaging modes, the "ground truth" seems to be the ability to resolve "clinical features" when compared to images from a reference device. This implies a qualitative assessment, likely by clinicians, but the specific method or standard for "ground truth" (e.g., expert consensus on feature visibility, comparison to an actual disease state) is not detailed.

    8. The sample size for the training set:

    Not applicable, as this is an imaging device, not a machine learning algorithm that requires a training set in the typical sense. The software verification would involve testing against requirements, not "training data."

    9. How the ground truth for the training set was established:

    Not applicable, as it's not a machine learning algorithm with a training set.


    K Number
    K182318
    Device Name
    Retina Workplace
    Date Cleared
    2018-10-24

    (58 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Retina Workplace is a FORUM application intended for processing and displaying fundus image and optical coherence tomography data. It is also intended for generating reports that contain results from optical coherence tomography and fundus photography.

    The Retina Workplace uses CIRRUS algorithms and normative databases as a quantitative tool for the comparison of macular thickness data to a database of normal subjects. It supports the processing and displaying of CIRRUS OCT-Angiography data, which is indicated as an aid in the visualization of vascular structures of the retina and choroid.

    The Retina Workplace is intended to aid trained healthcare professionals in the detection, monitoring and management of ocular diseases including, but not limited to, macular edema, diabetic retinopathy and age-related macular degeneration.

    Device Description

    The Retina Workplace is designed in conjunction with the FORUM PACS system, which supports clinically focused client workplaces to aid optometric and ophthalmology clinicians with the processing, display, review, management and storage of digital data contained in patient records. The Retina Workplace is a software application of the FORUM Archive and Viewer PACS workplace. FORUM is a software system designed for the storage, processing and review of images, videos and reports originating from computerized diagnostic instruments or electronic documentation systems over a network. The Retina Workplace is connected to the FORUM server via an internal interface and retrieves CIRRUS OCT exam data and fundus images from it. The Retina Workplace is intended to support the physician with a clinically focused workplace for the retina by using the imported OCT exam data from CIRRUS HD-OCT and CIRRUS Photo to display and report the results.

    The Retina Workplace (version 2.5) is the latest generation in the Retina Workplace series. Version 2.5, the subject of this submission, is a modified version of the Retina Workplace (version 2.0) cleared under K170638.

    Like its predecessors, Retina Workplace version 2.5 is designed to process and display CIRRUS OCT exams by using the algorithms and databases that are currently in use on the FDA cleared CIRRUS HD-OCT version 8 (K150977) and CIRRUS Photo (K133217). The CIRRUS HD-OCT OCT-Angiography was also cleared in K150977.

    AI/ML Overview

    The provided text describes the Retina Workplace device and its substantial equivalence to predicate devices, but it does not contain the specific details required to answer all parts of your request regarding acceptance criteria and a study proving the device meets those criteria.

    The document is a 510(k) summary for a medical device cleared by the FDA, which generally focuses on demonstrating substantial equivalence to existing devices rather than presenting the full details of a clinical performance study with specific acceptance criteria, sample sizes, and expert adjudication as you've requested.

    Here's what can be extracted and what is missing:

    What is present/can be inferred:

    • Device Performance: The document generally states that "All criteria for the verification and validation testing were met; the results demonstrate that the Retina Workplace meets all performance specifications and requirements." and "Retina Workplace v2.5 performs as well as the predicate devices." However, it does not specify what those performance specifications and requirements (acceptance criteria) were.
    • Study type: "Software testing was conducted to establish the ability of the subject Retina Workplace (version 2.5 - K182318) to meet design and customer requirements. Verification and validation activities for the Retina Workplace were conducted... The system verification test report provides the test cases, expected results for each test case and the actual results obtained. Validation testing was conducted to ensure that the device meets the customer's requirements with respect to performance." This indicates that verification and validation (V&V) testing was performed, typical for software devices, rather than a prospective clinical trial. It sounds more like functional and performance testing against internal specifications.
    • Human-in-the-loop/Standalone: The device is intended "to aid trained healthcare professionals," suggesting a human-in-the-loop context. No information on standalone algorithm performance is provided.
    • Ground Truth: The document states "The Retina Workplace uses CIRRUS algorithms and normative databases as a quantitative tool for the comparison of macular thickness data to a database of normal subjects." This implies that existing, cleared algorithms and normative databases from the predicate devices (CIRRUS HD-OCT) are leveraged as the "ground truth" or reference for quantitative measurements.
    • Training Set (Inferred): Since the device uses "CIRRUS algorithms and reference databases" that are already "currently in use on the FDA cleared CIRRUS HD-OCT version 8 (K150977) and CIRRUS Photo (K133217)", it suggests that the training of these core algorithms would have occurred as part of the predicate device development. No specific "training set" for the Retina Workplace v2.5 itself (as a new algorithm being trained) is mentioned, as its primary function is processing and displaying data using existing cleared algorithms.
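The normative-database comparison inherited from CIRRUS works by placing a patient's measured macular thickness against percentile bands derived from normal subjects. A hypothetical sketch of that kind of banding (the percentile cut-offs and function name are illustrative inputs, not values or code from the CIRRUS database):

```python
def classify_thickness(value_um, p1, p5, p95, p99):
    """Band a macular thickness measurement against normative percentiles."""
    if value_um < p1:
        return "below 1st percentile"
    if value_um < p5:
        return "below 5th percentile"
    if value_um > p99:
        return "above 99th percentile"
    if value_um > p95:
        return "above 95th percentile"
    return "within normal limits"

# Example with made-up percentile bands (µm):
print(classify_thickness(310, p1=240, p5=255, p95=300, p99=315))
```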

    What is missing from the provided text:

    1. A table of acceptance criteria and the reported device performance: The specific criteria (e.g., sensitivity, specificity, accuracy targets, imaging quality metrics) are not listed. Only a general statement that "all criteria... were met" is provided.
    2. Sample size used for the test set and the data provenance: No details on the number of cases or patients used in the verification and validation (V&V) testing, nor the origin (country, retrospective/prospective) of the data.
    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts: As this appears to be software/functional V&V rather than a clinical reader study, expert ground truth establishment for a test set in the way you describe is not explicitly mentioned. If the V&V largely relies on output comparison to predicate devices or established algorithms, then the "ground truth" might be the output of those validated predicate systems.
    4. Adjudication method: Not mentioned.
    5. Multi-Reader Multi-Case (MRMC) comparative effectiveness study: Not mentioned. The focus is on the device using existing, cleared algorithms, not on how human readers perform with or without its assistance.
    6. Standalone performance: While the device leverages algorithms, no specific standalone performance metrics (e.g., diagnostic accuracy of an automated detection) for the Retina Workplace itself are presented, implying it's an "aid" rather than a fully autonomous diagnostic tool.
    7. The sample size for the training set: Not applicable and not mentioned, as the device leverages existing, cleared algorithms.
    8. How the ground truth for the training set was established: Not applicable and not mentioned, as the device leverages existing, cleared algorithms.

    Conclusion based on the provided text:

    The document describes the Retina Workplace (version 2.5) as a software application that integrates and processes data using algorithms and databases already cleared and present in predicate devices (CIRRUS HD-OCT and CIRRUS Photo). The "study" proving it meets acceptance criteria appears to be a software verification and validation (V&V) process, ensuring that the new version correctly implements and displays the functionalities of the existing, cleared algorithms and processes data as intended. It does not appear to involve a new clinical performance study with human readers, novel algorithm training, or the establishment of new, independent ground truth for a diagnostic AI. The acceptance criteria were internal performance specifications for the software, which were reportedly met.

    To get the detailed information you're asking for, one would need to refer to the full 510(k) submission (if publicly available beyond this summary) or documentation from the predicate devices (K150977 for CIRRUS HD-OCT version 8 and K133217 for CIRRUS Photo).


    K Number
    K173371
    Date Cleared
    2018-04-13

    (168 days)

    Product Code
    Regulation Number
    886.4370
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The VisuMax Femtosecond Laser is cleared for the following indications for use:

    · In the creation of a corneal flap in patients undergoing LASIK surgery or other treatment requiring initial lamellar resection of the cornea;

    · In patients undergoing surgery or other treatment requiring initial lamellar resection of the cornea;

    · In the creation of a lamellar cut/resection of the cornea for lamellar keratoplasty;

    · In the creation of a cut/incision for penetrating keratoplasty and corneal harvesting;

    · In patients undergoing surgery or other treatment requiring initial lamellar resection of the cornea to create tunnels for placement of corneal ring segments.

    Device Description

    The VisuMax Femtosecond Laser is an ophthalmic surgical femtosecond laser intended for use in patients requiring corneal incisions. The cutting action of the VisuMax laser is achieved through precise individual micro-photodisruptions of tissue, created by tightly focused ultrashort pulses which are delivered through a disposable applanation lens while fixating the eye under very low vacuum.

    AI/ML Overview

    The provided text describes the 510(k) premarket notification for the Carl Zeiss Meditec VisuMax Femtosecond Laser (K173371). The focus of this submission is on software modifications to enable an additional indication for use: "creation of tunnels for placement of corneal ring segments."

    The submission concludes that the device is substantially equivalent to predicate devices, but it does not contain specific acceptance criteria, comprehensive study designs, or detailed results typically found in a clinical study report. Instead, it provides a summary of the performance testing.

    Here's an attempt to extract the requested information based only on the provided text, with acknowledgments for missing data:


    1. Table of Acceptance Criteria and Reported Device Performance

    Acceptance Criteria (Stated or Implied) | Reported Device Performance
    --- | ---
    For tunnel cuts (expanded indication): |
    Accuracy of corneal tunnel lateral dimensions (implied quantitative range) | "Test acceptance criteria for all cut length dimension and cut angular dimensions were met."
    Repeatability of corneal tunnel lateral dimensions (implied quantitative range) | "Test acceptance criteria for all cut length dimension and cut angular dimensions were met."
    Accuracy of corneal tunnel depth dimensions (implied quantitative range) | "Test acceptance criteria for all cut depth dimensions were met."
    Repeatability of corneal tunnel depth dimensions (implied quantitative range) | "Test acceptance criteria for all cut depth dimensions were met."
    Quality of tunnel cuts (ease of tissue separation) | "All corneas tested in this manner were judged to be of good cut quality, meeting the performance test acceptance criteria."
    For software (general): |
    Performance, accuracy, functionality and safety of software modifications (implied comprehensive criteria) | "The software verification and validation testing results demonstrate that the VisuMax Femtosecond Laser meets all requirements for performance, accuracy, functionality and safety for the modifications proposed in this 510(k) premarket notification."

    Note: The document states that "Test acceptance criteria...were met" but does not explicitly define the quantitative metrics or thresholds for these criteria. The acceptance criteria above are inferred from the description of the tests.
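Accuracy and repeatability criteria of this kind are typically evaluated as the mean deviation from the programmed target and the sample standard deviation of repeated cuts. An illustrative calculation (the measurements are hypothetical, not data from the submission):

```python
import statistics

def accuracy_and_repeatability(measurements_um, target_um):
    """Accuracy = mean deviation from target; repeatability = sample std dev."""
    accuracy = statistics.mean(measurements_um) - target_um
    repeatability = statistics.stdev(measurements_um)
    return accuracy, repeatability

# Hypothetical tunnel-depth measurements (µm) against a 200 µm target:
acc, rep = accuracy_and_repeatability([198, 202, 201, 199], target_um=200)
```

A quantitative criterion would then bound both numbers, e.g. accuracy within some tolerance of zero and repeatability below a specified spread.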

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size for Test Set: The document mentions "a series of corneal models" and "all corneas tested in this manner," but does not specify the exact number (sample size) of ex vivo corneas or corneal models used for performance testing.
    • Data Provenance: The testing involved "ex vivo corneas" and "corneal model material." The country of origin is not specified, and the data is retrospective in the sense that it's laboratory/bench testing and not a prospective clinical trial on human subjects for this specific claim.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    • The document mentions "ex vivo corneas" and "corneal model material" for performance testing. The evaluation of cut quality mentions "judged to be of good cut quality," implying subjective assessment.
    • However, the number of experts and their qualifications (e.g., radiologists with X years of experience) used to establish ground truth or evaluate these ex vivo tests are not specified in the provided text.

    4. Adjudication Method for the Test Set

    • The document implies subjective judgment for cut quality ("judged to be of good cut quality").
    • However, a formal adjudication method (e.g., 2+1, 3+1) is not described for any aspect of the performance testing in the provided text. The non-contact optical techniques used for dimensional measurements suggest objective assessment rather than adjudication by human experts.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • A Multi-Reader Multi-Case (MRMC) comparative effectiveness study to assess improvement in human readers with/without AI assistance was not done/described in this submission. This device is a surgical laser, not an AI diagnostic tool that assists human readers.

    6. Standalone Performance Study (Algorithm Only)

    • A standalone performance study focused on the algorithm's performance without human-in-the-loop was not explicitly described in the conventional sense of AI diagnostic devices. The performance data presented focuses on the physical output of the laser system (tunnel cuts) after software modifications. The software itself was verified and validated against functional requirements.

    7. Type of Ground Truth Used

    • For the physical performance tests related to tunnel cuts, the "ground truth" was established based on physical measurements using non-contact optical techniques in ex vivo corneas and corneal model materials. Cut quality was assessed subjectively against "performance test acceptance criteria."
    • For the software, the "ground truth" was its adherence to specified performance, accuracy, functionality, and safety requirements during verification and validation (V&V) testing.

    8. Sample Size for the Training Set

    • The document describes software modifications to an existing device and performance testing, not the development or training of a machine learning model.
    • Therefore, a "training set" for an AI algorithm is not applicable/not mentioned in this submission.

    9. How Ground Truth for the Training Set Was Established

    • As a "training set" for an AI algorithm is not applicable, the method for establishing its ground truth is not mentioned. The software verification and validation process involved testing the modified software against predefined functional specifications and safety requirements.

    K Number
    K161194
    Date Cleared
    2016-10-26

    (182 days)

    Product Code
    Regulation Number
    886.1570
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The PLEX™ Elite 9000 Swept-Source OCT [SS-OCT] is a non-contact, high resolution, wide field of view tomographic and biomicroscopic imaging device intended for in-vivo viewing, axial cross-sectional and three-dimensional imaging of posterior ocular structures. The device is indicated for visualizing posterior ocular structures including, but not limited to, retinal nerve fiber layer, ganglion cell plus inner plexiform layer, macula, optic nerve head, vitreous and choroid.

    The PLEX™ Elite SS-OCT angiography is indicated as an aid in the visualization of vascular structures of the retina and choroid.

    Device Description

    The PLEX™ Elite 9000 SS-OCT is a computerized instrument that acquires cross-sectional tomograms of the posterior ocular structures (including cornea, retinal nerve fiber layer, macula, and optic disc). It employs non-invasive, non-contact, low-coherence interferometry to obtain these high-resolution images. Using this non-invasive optical technique, the PLEX Elite SS-OCT produces high-resolution cross-sectional tomograms of the eye without contacting the eye. It also produces images of the retina and layers of the retina from an en face perspective (i.e., as if looking directly in the eye) and non-contrast angiographic imaging of the retinal microvasculature.

    The PLEX Elite 9000 SS-OCT is offered in one model, the Elite 9000, in a new compact desktop system. The PLEX Elite SS-OCT contains a swept-source, Class 1 laser system operating at 1060 nm and includes a new system computer and archive with up to 24 TB storage capacity. The PLEX Elite also contains an iris viewer and a fixation system, and its fundus camera is a line-scanning ophthalmoscope (LSO) similar to that used on the CIRRUS HD-OCT system, model 4000.

    AI/ML Overview

    The provided text describes the Carl Zeiss Meditec, Inc. K161194 submission for the PLEX Elite 9000 SS-OCT device. However, it does not explicitly detail a standalone study with acceptance criteria and reported device performance in the format requested. Instead, it focuses on demonstrating substantial equivalence to predicate devices through technological comparisons, bench testing, non-clinical tests, and a clinical case series.

    Here's an attempt to extract and synthesize the information based on the provided document, acknowledging the limitations regarding the specific details of a formal "acceptance criteria and study" as might be found in a performance study report:


    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not present a formal table of acceptance criteria with specific quantitative targets and corresponding reported performance for a standalone clinical study. Instead, it describes general claims of improved performance compared to the predicate device.

    Performance Characteristic | Acceptance Criteria (Implicit) | Reported Device Performance (PLEX Elite 9000 SS-OCT)
    --- | --- | ---
    Image Quality | Images of posterior ocular structures should be "high quality" and visualize structures as well as or better than the predicate device. | "Produce high quality images of the retina and choroid."
    Signal-to-Noise Ratio | Improved signal-to-noise ratio compared to the predicate device (CIRRUS HD-OCT). | "Higher signal-to-noise ratio and an increased depth of penetration as compared to the CIRRUS OCT angiography with intensity only processing."
    Depth Penetration | Increased depth penetration compared to the predicate device (CIRRUS HD-OCT). | "Higher signal-to-noise ratio and an increased depth of penetration as compared to the CIRRUS OCT angiography with intensity only processing." Axial scan depth is 3.0 mm (in tissue) for the PLEX Elite 9000 vs. 2.0 mm for the predicate.
    Field of View | Wider field of view for imaging posterior ocular structures compared to the predicate device. | "Wider field of view, increased depth penetration and with a higher signal-to-noise ratio as compared to the CIRRUS HD-OCT system." Transverse scan range up to 42° x 42° (cube) and 56° x 0° (max line scan), compared to 31° x 31° (max) for the predicate.
    Visualization of Vascular Structures | Ability to image vascular structures of the retina and choroid as well as the predicate device. | "The PLEX Elite 9000 angiography scans were shown to image the structures of the retina and choroid as well as the predicate CIRRUS HD-OCT device."
    Scan Speed (OCT) | Higher scan speed than the predicate device. | 100,000 A-scan points per second for all scan types, compared to 27,000-68,000 A-scans/sec for the predicate.
    Axial Resolution | Similar axial resolution to the predicate. | 5.5 µm (in tissue) vs. 5 µm (in tissue) for the predicate; described as "similar resolution, change due to wavelength of imaging beam."
    Transverse Resolution | Similar transverse resolution to the predicate. | ≤ 20 µm (in tissue) vs. ≤ 15 µm (in tissue) for the predicate; described as "similar resolution, change in technology."
    Safety | Compliance with relevant international standards and risk mitigation. | Risk management is ensured via a risk analysis; potential hazards are controlled by software means, user instructions, verification of requirements and validation of the clinical workflow. To minimize electrical, mechanical and radiation hazards, ZEISS adheres to recognized industry practice and relevant international standards. The laser is Class 1, with optical power controlled to < 5.4 mW at the cornea and an electronic safety interlock.
    2. Sample Size Used for the Test Set and Data Provenance

    The document mentions a "case series study" as part of the clinical report. However, it does not specify the exact sample size of patients or images used for this case series. It only states that the study included "subjects with varying retinal pathologies."

    The data provenance is implicit: the clinical imaging was "performed on human eyes," indicating prospective data collection for the purpose of this submission, likely from clinical sites where the device was being evaluated. The document does not specify the country of origin of the data.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications

    The document does not explicitly state the number of experts used to establish ground truth or their specific qualifications (e.g., "radiologist with 10 years of experience"). It mentions that a "clinical report is provided that includes clinical images" and compares PLEX Elite 9000 OCT angiography to "FA, ICGA, CIRRUS HD-OCT angiography, and color fundus photography." This implies that existing clinical imaging modalities and potentially expert interpretations of these modalities were used as a reference, but the process for establishing ground truth for the test set images is not detailed.

    4. Adjudication Method for the Test Set

    The document does not describe any specific adjudication method (e.g., 2+1, 3+1, none) for images or findings within the clinical report/case series. The comparison is made against other imaging modalities like FA and ICGA, suggesting these served as reference standards, but a formal adjudication process for a "test set" in the context of an algorithm's performance is not described.

    5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was Done

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study, where human readers analyze images with and without AI assistance to quantify the effect size of AI improvement, is not described in this document. The clinical evaluation focuses on demonstrating the device's ability to produce high-quality images and visualize structures similarly to or better than the predicate, mostly through direct comparison of image characteristics and content.

    6. If a Standalone (i.e. algorithm only without human-in-the-loop performance) was Done

    While the device itself is a standalone imaging system (producing images without human real-time intervention for image generation), the submission does not describe a standalone algorithm performance study. The context is that of an imaging device. The document does refer to "En Face Algorithm," "Angiography OMAG Complex and a structure cube Algorithm," and algorithms for retinal tracking, but there are no standalone performance metrics specifically for these algorithms presented in this summary beyond their contribution to image quality.

    7. The Type of Ground Truth Used

    The ground truth implicitly used for the clinical report/case series appears to be a combination of established clinical imaging modalities (Fluorescein Angiography (FA), Indocyanine Green Angiography (ICGA), CIRRUS HD-OCT angiography, and color fundus photography) used to validate the visualization capabilities of the new SS-OCT angiography. For the general imaging of posterior ocular structures, the ground truth is the ability to visualize those structures, as demonstrated by the images themselves and by comparison to the predicate. No specific expert consensus, pathology, or outcomes data are explicitly stated as ground truth for the clinical evaluation in this summary, although ICGA and FA are considered gold standards for certain vascular pathologies.

    8. The Sample Size for the Training Set

    The document does not mention a training set or its sample size. This submission is for an imaging device, not explicitly for an AI/CADe/CADx algorithm that requires a distinct training set in the way a deep learning model would. While the device utilizes "proprietary algorithms" for features like FastTrac™ 2.0 Retinal Tracking Technology and OCT angiography processing, the document does not elaborate on their development or training methodology.

    9. How the Ground Truth for the Training Set Was Established

    Since a training set and its sample size are not mentioned, the method for establishing its ground truth is also not provided in this document.

    K Number
    K143275
    Device Name
    IOLMaster700
    Date Cleared
    2015-07-10

    (238 days)

    Product Code
    Regulation Number
    886.1850
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The IOLMaster 700 is intended for biometric measurements and visualization of ocular structures. The measurements and visualization assist in the determination of the appropriate power and type of intraocular lens. The IOLMaster 700 measures:

    • Lens thickness
    • Corneal curvature and thickness
    • Axial length
    • Anterior chamber depth
    • Pupil diameter
    • White-to-white distance (WTW)

    For visualization, the IOLMaster 700 employs optical coherence tomography (OCT) to obtain two-dimensional images of ocular structures of the anterior and posterior segments of the eye.

    The Reference Image functionality is intended for use as a preoperative and postoperative image capture tool.

    Device Description

    The IOLMaster 700 device is a computerized biometry device consisting of an OCT system, a Keratometer system, and a camera for the purposes of:

    • measuring distances within the human eye along the visual axis (e.g. axial length, lens thickness, anterior chamber depth),
    • measuring the corneal surface with a keratometer,
    • measuring distances at the front of the eye with a camera (e.g. white-to-white distance).

    The IOLMaster 700 is used for visualization and measurement of ocular structures mainly required for the preparation of cataract surgeries to calculate the refractive power of the intraocular lens (IOL) to be implanted. The IOLMaster 700 device includes a swept source frequency domain optical coherence tomography (OCT) module capable of acquiring tomograms of the eye. The axial measurements are based on those tomograms.

    The IOLMaster 700 device is operated via multi touch monitor and alternatively with computer mouse and keyboard. A joystick on the measuring head is used for manual alignment of the device to the patient's eye.

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the IOLMaster 700 device, based on the provided document:

    1. Table of Acceptance Criteria and Reported Device Performance

    The document does not explicitly state quantitative acceptance criteria (e.g., "AL difference must be less than 0.05 mm"). Instead, it focuses on demonstrating comparability to the predicate devices and showing good repeatability and reproducibility. The limits of agreement from the Bland-Altman analysis (implied by the reporting of a mean difference together with limits of agreement) can be read as the implicit acceptance ranges for comparability.

    Measured Parameter | vs. IOLMaster 500: Mean Difference (SD) | Limits of Agreement (Lower, Upper) | vs. Lenstar LS 900: Mean Difference (SD) | Limits of Agreement (Lower, Upper) | Repeatability (SD) | Repeatability (%CV)
    Biometry Mode
    Axial Length (AL) [mm] | 0.004 (0.025) | -0.045, 0.053 | N/A | N/A | 0.009 | 0.037%
    Anterior Chamber Depth (ACD) [mm] | 0.017 (0.121) | -0.221, 0.254 | N/A | N/A | 0.010 | 0.314%
    Lens Thickness (LT) [mm] | N/A | N/A | 0.020 (0.120) | -0.246, 0.256 | 0.019 | 0.410%
    Central Corneal Thickness (CCT) [µm] | N/A | N/A | 0.116 (4.492) | -8.688, 8.920 | 2.271 | 0.414%
    Keratometry Mode
    R1, Radius in Flat Meridian [mm] | 0.001 (0.044) | -0.086, 0.087 | N/A | N/A | 0.026 | 0.334%
    R2, Radius in Steep Meridian [mm] | -0.002 (0.046) | -0.093, 0.088 | N/A | N/A | 0.024 | 0.316%
    Spherical Equivalent (SE) [D] | -0.001 (0.190) | -0.374, 0.371 | N/A | N/A | 0.100 | 0.231%
    Cylinder (Cyl) [D] | 0.013 (0.318) | -0.610, 0.636 | N/A | N/A | 0.191 | 20.449%
    Axis (A) [°] | 1.407 (11.388) | -20.91, 23.73 | N/A | N/A | 5.633 | 6.93%
    WTW Mode
    White-to-White (WTW) [mm] | -0.125 (0.167) | -0.452, 0.203 | N/A | N/A | 0.090 | 0.755%

    Acceptance Criteria Implied: The study's conclusions that "The results of the IOLMaster 700 measurements are comparable to those of the predicate devices" and that it "demonstrated good repeatability and reproducibility for all parameters" indicate that the observed differences and variability fall within clinically acceptable ranges, supporting a finding of substantial equivalence. The p-values for the paired t-tests (Table 2) are consistently above 0.05 (except for WTW), further supporting comparable means, although differences in variability exist.
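
    The limits of agreement reported in the table follow the standard Bland-Altman construction (mean difference ± 1.96 × SD of the paired differences), and the %CV column expresses the repeatability SD relative to the parameter's mean. A minimal sketch, using the Axial Length row; the ~24 mm mean AL used for the %CV illustration is a hypothetical typical value, since the document does not report the study's mean AL:

```python
# Bland-Altman 95% limits of agreement and repeatability %CV, as used in the
# table above. The mean axial length (~24 mm) below is a hypothetical typical
# value for illustration; the document does not report the study's mean AL.

def limits_of_agreement(mean_diff, sd):
    """95% limits of agreement: mean difference +/- 1.96 * SD of differences."""
    half_width = 1.96 * sd
    return mean_diff - half_width, mean_diff + half_width

def percent_cv(sd, mean_value):
    """Repeatability coefficient of variation, expressed as a percentage."""
    return 100.0 * sd / mean_value

# Axial Length row: mean difference 0.004 mm, SD of differences 0.025 mm
lower, upper = limits_of_agreement(0.004, 0.025)
print(round(lower, 3), round(upper, 3))  # -0.045 0.053, matching the AL row

# Repeatability: SD 0.009 mm against a hypothetical ~24 mm mean AL
print(round(percent_cv(0.009, 24.0), 2))  # roughly 0.04%, near the table's 0.037%
```

    Running the same construction over the other rows reproduces the table's limits of agreement from each mean difference and SD.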

    2. Sample Size and Data Provenance

    • Test Set Sample Size: A total of 120 eyes were enrolled in the study. Only one eye of each subject was designated as the study eye.
    • Data Provenance: The study was a prospective non-significant risk clinical study conducted at three sites. The document does not specify the countries/regions of these sites, but it is a US FDA submission, implying compliance with US regulations.

    3. Number of Experts and Qualifications for Ground Truth for Test Set

    The document does not mention the use of experts to establish ground truth for the test set. The study directly compares the IOLMaster 700 measurements against the measurements obtained by the predicate devices (IOLMaster 500 and Lenstar LS 900) as the reference for comparability. For repeatability and reproducibility, the device's own measurements are compared against each other.

    4. Adjudication Method for the Test Set

    There is no mention of an adjudication method in the context of expert review for ground truth, as ground truth was not established by experts. For data quality during analysis, "scans were reviewed using the same quality criteria as described in the User Manual, Software Description." The specific criteria included issues like incorrect caliper placement, blurred images, closed eyelids, and distorted reflections.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • No, an MRMC comparative effectiveness study was not done. This study focuses on device biometry measurement accuracy, comparability to predicate devices, and repeatability/reproducibility, not on human reader performance with or without AI assistance. The device is a biometer, not an AI-assisted diagnostic tool for interpretation.

    6. Standalone Performance Study

    • Yes, a standalone study was performed in the sense that the IOLMaster 700's own measurements were evaluated for repeatability and reproducibility, independent of human interpretation or "human-in-the-loop" performance. The device is intended to provide objective biometric measurements. The comparison to predicate devices also serves as a standalone performance assessment against established benchmarks.

    7. Type of Ground Truth Used

    The "ground truth" for the comparative study was the measurements obtained from the predicate devices:

    • IOLMaster 500 (for AL, ACD, R1, R2, SE, Cyl, A, WTW)
    • Lenstar LS 900 (for LT, CCT)

    For repeatability and reproducibility, the device's own repeated measurements served as the basis for assessing consistency.

    8. Sample Size for the Training Set

    The document does not describe a "training set" in the context of an algorithm or AI development. This device is a measurement instrument based on optical principles, not a machine learning model that requires a separate training data set for its core functionality of measurement.

    9. How the Ground Truth for the Training Set was Established

    As there is no mention of a training set for an algorithm, this question is not applicable based on the provided document. The device's measurement algorithms are likely based on physical principles (e.g., optical coherence tomography) rather than being trained on a large dataset with established ground truth in the AI/ML sense.

    K Number
    K121653
    Date Cleared
    2012-12-27

    (205 days)

    Product Code
    Regulation Number
    892.5900
    Reference & Predicate Devices
    Predicate For
    Intended Use

    The INTRABEAM® System is indicated for radiation therapy treatments. The INTRABEAM® Spherical Applicators are indicated for use with the INTRABEAM® System to deliver a prescribed dose of radiation to the treatment margin or tumor bed during intracavitary and intraoperative radiotherapy treatments.

    The INTRABEAM® Spherical Applicators used with the INTRABEAM System are able to deliver a prescribed dose of intraoperative radiation in conjunction with whole breast irradiation, based upon the medical judgment of the physician. The safety and effectiveness of the INTRABEAM System as a replacement for whole breast irradiation in the treatment of breast cancer has not been established.

    Device Description

    The INTRABEAM System is a miniature, high-dose rate, low energy X-ray source that emits Xray radiation intraoperatively for the treatment of cancer at the tumor cavity. The INTRABEAM Spherical applicators are accessories to the INTRABEAM System. The INTRABEAM Spherical Applicators received 510(k) clearance in K992577. There are eight sizes of applicators in a set ranging from 1.5 cm to 5.0 cm in diameter. The INTRABEAM Spherical Applicators have not changed in design or technological characteristics as described in K992577.

    AI/ML Overview

    Here's an analysis of the provided text regarding the INTRABEAM® System with INTRABEAM® Spherical Applicators, focusing on acceptance criteria and the supporting study:

    1. Table of Acceptance Criteria and Reported Device Performance

    The provided text does not explicitly state quantitative "acceptance criteria" in the typical sense (e.g., target accuracy, sensitivity, specificity thresholds) for a diagnostic AI device. Instead, the "acceptance criteria" appear to be based on a non-inferiority finding from a clinical trial, comparing the device's outcome to standard treatment.

    Therefore, the table below reflects the primary clinical outcome used to support the device's expanded indication, interpreted as the "performance" that met the "acceptance" for the new indication.

    Acceptance Criterion (Implicit) | Reported Device Performance (INTRABEAM + Whole Breast Irradiation) | Comparator Performance (Whole Breast Irradiation Alone)
    Non-inferior local control rate of breast cancer when IORT is used in conjunction with whole breast irradiation. | 1.2% recurrence rate | 0.95% recurrence rate
    Statistical Significance | p=0.41 (non-significant difference) | N/A
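
    The comparison above is a test of two recurrence proportions. As a rough, illustrative sketch only: the per-arm event counts are not given in this summary, so the counts below are hypothetical, chosen to approximate the reported 1.2% vs. 0.95% rates, and this pooled two-proportion z-test does not reproduce the trial's own time-to-event analysis or its p=0.41:

```python
# Illustrative pooled two-proportion z-test for comparing recurrence rates.
# The per-arm counts are hypothetical (the summary reports only N=2,232 total);
# this does NOT reproduce the TARGIT-A trial's actual analysis or its p=0.41.
import math

def two_proportion_z(x1, n1, x2, n2):
    """Pooled z statistic and two-sided p-value for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical counts at roughly the reported rates (1.2% vs 0.95%):
z, p = two_proportion_z(13, 1116, 11, 1116)
print(round(z, 2), round(p, 2))  # small z, p well above 0.05
```

    At these event rates the difference is far from significance, which is consistent in spirit with the trial's non-inferiority conclusion.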

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (TARGIT-A Trial): N=2,232
    • Data Provenance: "international, prospective, randomized" study. This indicates data was collected from various countries and in a forward-looking manner, specifically for the trial.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications

    The document does not specify the number of experts or their qualifications for establishing the "ground truth" (e.g., recurrence diagnosis) within the TARGIT-A trial. However, as a large, international Phase 3 non-inferiority clinical trial on cancer treatment, it can be reasonably inferred that:

    • Diagnosis of recurrence would have been established by a multidisciplinary team of qualified medical professionals, including oncologists, pathologists, and radiologists, adhering to strict clinical trial protocols.
    • The investigators themselves were "physicians around the world".

    4. Adjudication Method for the Test Set

    The document does not explicitly state an adjudication method (like 2+1, 3+1). As a large-scale clinical trial studying recurrence rates, the "ground truth" for local control (recurrence) would typically be determined by clinical follow-up and confirmed diagnostic procedures as part of the trial's defined endpoints, rather than a separate expert review panel for each case. Any ambiguous cases would likely follow the trial's pre-defined adjudication process, often involving an independent endpoint committee, though this is not detailed.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No, a Multi-Reader Multi-Case (MRMC) comparative effectiveness study was not done in the context of this 510(k). The study described (TARGIT-A trial) is a clinical trial comparing treatment modalities (IORT vs. whole breast irradiation), not a study evaluating human readers' diagnostic performance with or without AI assistance. The INTRABEAM System is a device for delivering radiation therapy, not a diagnostic AI tool for image interpretation.

    6. Standalone (Algorithm Only Without Human-in-the-Loop) Performance Study

    Yes, in a sense, the TARGIT-A trial can be considered a standalone performance study of the treatment modality. The trial compared the clinical outcome (local recurrence rate) of patients receiving INTRABEAM IORT (in conjunction with whole breast irradiation) versus those receiving whole breast irradiation alone. The device's "performance" in this context is its ability to achieve comparable clinical outcomes regarding breast cancer recurrence. The "algorithm" here is the treatment protocol involving the INTRABEAM system.

    7. Type of Ground Truth Used

    The ground truth used in the TARGIT-A trial was clinical outcomes data, specifically:

    • Local control of breast cancer (i.e., presence or absence of recurrence).

    This would be determined through patient follow-up, physical examinations, imaging studies, and potentially biopsy/pathology to confirm recurrence.

    8. Sample Size for the Training Set

    This information is not applicable as the INTRABEAM System is a medical device for radiation therapy delivery, not an AI algorithm that requires a training set in the typical machine learning sense. The "training" for practitioners would involve learning how to operate the device and apply the treatment based on clinical guidelines and the evidence from trials like TARGIT-A.

    9. How the Ground Truth for the Training Set Was Established

    This information is not applicable for the same reasons as point 8. The device's efficacy is established through clinical trials, not through training on labeled datasets.

    K Number
    K090439
    Device Name
    FORUM
    Date Cleared
    2009-03-25

    (33 days)

    Product Code
    Regulation Number
    892.2050
    Reference & Predicate Devices
    Intended Use

    FORUM is a software system application intended for use in storing, managing, and displaying patient data, diagnostic data, videos and images from computerized diagnostic instruments or video documentation systems through networks.

    Device Description

    FORUM is a personal computer software system designed for storage, retrieval, and review of DICOM images, videos and reports originating from ophthalmic instruments and surgical microscopy. FORUM consists of two components: the FORUM Archive and the FORUM Viewer. The FORUM Archive, which contains both a server and client application, provides an archive for storage and administration of medical documents and patient data. The FORUM Viewer is an additional module to the client application which allows images, reports and videos stored in the archive to be reviewed. The FORUM Viewer also includes a modality worklist scheduling function.

    When utilized together, the FORUM Archive and Viewer provide a complete workflow cycle from administering patient information via scheduling patients for examinations on connected instruments, through archiving the results of the examinations to retrieval and review of examination data.

    AI/ML Overview

    Here's an analysis of the provided text regarding the FORUM™ device's acceptance criteria and study information:

    The supplied documentation (510(k) summary) for the FORUM™ device does not contain acceptance criteria or detailed study results proving the device meets specific performance criteria. Instead, it focuses on demonstrating substantial equivalence to a predicate device (Nidek Advanced Vision Information System (NAVIS)).

    The relevant sections state:

    • "Evaluation performed on FORUM supports the indications for use statement, demonstrates that the device is substantially equivalent to the predicate device and does not raise new questions regarding safety and effectiveness."
    • "Performance testing was conducted on FORUM and was found to perform as intended. FORUM is DICOM compliant according to its DICOM conformance statement."
    • "As described in this 510(k) Summary, all testing deemed necessary was conducted on FORUM to ensure that the device is safe and effective for its intended use when used in accordance with its Instructions for Use."

    This indicates that internal performance testing was conducted to ensure the device functions as intended and is DICOM compliant, but specific quantitative acceptance criteria and the results of those tests are not disclosed in this summary. The primary "proof" of meeting requirements is established through the argument of substantial equivalence.

    Therefore, many of the requested items cannot be extracted from the provided text.

    Here is a summary of what can be inferred or explicitly stated based on the provided text, and what is missing:


    1. Acceptance Criteria and Device Performance

    The core assertion in the document is that the FORUM™ device performs "as intended" and is "substantially equivalent" to the predicate device. Specific quantitative acceptance criteria (e.g., minimum accuracy, sensitivity, specificity, or system response times) and their corresponding reported device performance values are not provided.

    Acceptance Criteria (Not Explicitly Stated/Quantified in Document) | Reported Device Performance (as stated in document)
    Functional equivalence to predicate device (NAVIS) | FORUM™ is "functionally equivalent" to NAVIS.
    Perform as intended for storing, managing, and displaying data | "Performance testing was conducted on FORUM and was found to perform as intended."
    DICOM compliance | "FORUM is DICOM compliant according to its DICOM conformance statement."
    Safety and Effectiveness (no new questions) | "does not raise new questions regarding safety and effectiveness."

    2. Sample size used for the test set and the data provenance

    • Sample size for test set: Not specified.
    • Data provenance: Not specified (e.g., country of origin, retrospective or prospective).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • Number of experts: Not applicable, as detailed test set and ground truth establishment are not described. The document pertains to a Picture Archiving and Communications System (PACS) rather than a diagnostic AI algorithm that would typically require expert-established ground truth for performance metrics like accuracy.
    • Qualifications of experts: Not applicable.

    4. Adjudication method for the test set

    • Adjudication method: Not applicable, as detailed test set and ground truth establishment are not described.

    5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance

    • MRMC study: No. This device is a PACS system, not an AI-powered diagnostic tool. The document does not describe any MRMC study or AI assistance.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done

    • Standalone study: Not applicable in the context of an AI algorithm. The device itself is a standalone software system for managing and displaying data. The "performance testing" mentioned refers to its functionality as a PACS, not an AI algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • Type of ground truth: Not applicable. For a PACS system, "ground truth" would typically relate to successful storage, retrieval, display of data, and DICOM compliance, generally verified through functional testing and adherence to standards rather than expert clinical diagnoses or pathology.

    8. The sample size for the training set

    • Sample size for training set: Not applicable. The device is a PACS system, not a machine learning or AI model that requires a training set.

    9. How the ground truth for the training set was established

    • Ground truth for training set: Not applicable, as the device is not an AI model.
