Search Results

Found 3 results

510(k) Data Aggregation

    K Number: K240890
    Date Cleared: 2024-12-23 (266 days)
    Regulation Number: 870.2785
    Reference & Predicate Devices: N/A
    Why did this record match? Device Name:

    PanopticAI Vital Signs

    Intended Use

    The PanopticAI Vital Signs device is intended for noninvasive spot measurement of pulse rate when the subject is still. It is software that assesses a facial video stream captured from a specified smartphone or tablet camera.

    The PanopticAI Vital Signs device is intended for use by healthcare professionals. The device is only intended to be used in healthy subjects.

    The PanopticAI Vital Signs device is indicated for use on humans 18 to 60 years of age who do not require critical care or continuous monitoring.

    The PanopticAI Vital Signs device is not intended to be the sole method to assess a subject's physical health condition. The pulse rate measurements it provides should complement, not replace, professional medical care and/or medication.

    Device Description

    PanopticAI Vital Signs is a medical software device that uses remote photoplethysmography (rPPG) to measure a person's pulse rate. The app uses ambient light as the light source and works by capturing and measuring the subtle color changes on the skin caused by light absorption and reflection by the blood vessels beneath it. The app uses the front camera of an iPhone or iPad to capture video of the subject. The algorithm in the app then detects and tracks the subject's face to capture the subtle light changes reflected in the changes in RGB pixel values. This information is sent to PanopticAI's cloud server for further processing to calculate the pulse rate. The pulse rate value is then returned to the PanopticAI Vital Signs app, and the result is displayed in the app.
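    The description above outlines a standard rPPG pipeline: spatially average the face region in each frame, extract the pulsatile color signal, and read the pulse rate off its dominant frequency. The sketch below illustrates that generic pipeline; it is not PanopticAI's actual algorithm (which runs partly on a cloud server and is not disclosed), and the green-channel band-pass plus FFT approach, the function name, and the assumption of a pre-cropped face region are illustrative choices.

```python
# Minimal sketch of a generic rPPG pulse-rate estimator of the kind described
# above, assuming the face region has already been detected and cropped in
# every frame. This is NOT PanopticAI's algorithm; the green-channel +
# band-pass + FFT choices and the function name are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt


def estimate_pulse_rate_bpm(face_roi_frames, fps):
    """face_roi_frames: sequence of HxWx3 RGB arrays; fps: video frame rate."""
    # 1. Spatially average the skin pixels in each frame -> one RGB sample per frame.
    rgb = np.array([frame.reshape(-1, 3).mean(axis=0) for frame in face_roi_frames])

    # 2. Use the green channel, which typically carries the strongest pulsatile signal.
    signal = rgb[:, 1] - rgb[:, 1].mean()

    # 3. Band-pass to the physiologic pulse range (0.7-3.5 Hz, i.e. 42-210 BPM).
    nyquist = fps / 2.0
    b, a = butter(3, [0.7 / nyquist, 3.5 / nyquist], btype="band")
    filtered = filtfilt(b, a, signal)

    # 4. The dominant in-band frequency of the filtered signal is taken as the pulse rate.
    spectrum = np.abs(np.fft.rfft(filtered))
    freqs = np.fft.rfftfreq(filtered.size, d=1.0 / fps)
    in_band = (freqs >= 0.7) & (freqs <= 3.5)
    peak_hz = freqs[in_band][np.argmax(spectrum[in_band])]
    return peak_hz * 60.0  # beats per minute
```

    In the cleared device the signal processing runs on PanopticAI's cloud server rather than on the phone; the sketch keeps everything local only for readability.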

    AI/ML Overview

    Here's a breakdown of the acceptance criteria and study details for the PanopticAI Vital Signs device, based on the provided text:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criterion (Endpoint) | Target | Reported Device Performance (iPhone) | Reported Device Performance (iPad) |
    |---|---|---|---|
    | Accuracy (overall RMSE) | Below 3 BPM | Demonstrated to be substantially equivalent to predicate (RMSE) | Demonstrated to be substantially equivalent to predicate (RMSE) |
    | Overall Mean Bias | Below 3 BPM | Below 3 BPM | Below 3 BPM |
    | 95% CI of Upper Limit of Agreement (LoA) | Within ±5 BPM (inclusive) | (2.3094 to 3.0645), within criteria | (2.5285 to 3.4571), within criteria |
    | 95% CI of Lower Limit of Agreement (LoA) | Within ±5 BPM (inclusive) | (-2.1954 to -1.4402), within criteria | (-3.0104 to -2.0818), within criteria |
    | Correlation coefficient | Not explicitly stated as a numerical target, but "met" | "Met" | "Met" |
    | Intercept of zero within 95% CI | Contained within 95% CIs | "Met" | "Met" |
    | Slope of one within 95% CI | Contained within 95% CIs | "Met" | "Met" |
    | Subgroup analysis (bias, LoA within ±5 BPM) | Expected to be met across subgroups | Majority of 95% CIs of LoA within ±5 BPM | Majority of 95% CIs of LoA within ±5 BPM (exceptions for the Heart Shape and History of Hypertension subgroups in iPad subjects, but absolute bias still within 3 BPM) |
    | Performance with glasses | No significant difference | No significant difference | No significant difference |
    | Performance with extreme heart rate (50-60 BPM) | No significant difference | No significant difference | No significant difference |
    | Performance with extreme heart rate (100-130 BPM) | No significant difference | No significant difference | No significant difference |
    | Performance with make-up | No significant difference | No significant difference | No significant difference |
    | Performance at distance (0.4 m and 0.6 m) | No significant difference | No significant difference | No significant difference |
    | Performance with facial hair | No significant difference | No significant difference | No significant difference |
    | Performance at luminosity (100 lux and 500 lux) | No significant difference | No significant difference | No significant difference |
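    The accuracy, bias, and limits-of-agreement endpoints in this table are standard Bland-Altman-style agreement statistics for paired device-versus-reference readings. The snippet below shows, under that assumption, how RMSE, mean bias, and approximate 95% CIs for the limits of agreement are typically computed; it is a generic illustration using the common large-sample approximation, not the sponsor's statistical analysis.

```python
# Illustrative computation of the agreement endpoints in the table above
# (RMSE, mean bias, and 95% CIs of the Bland-Altman limits of agreement),
# assuming one paired reading per subject. Generic sketch, not the sponsor's
# actual analysis.
import numpy as np


def agreement_stats(device_bpm, reference_bpm):
    device_bpm = np.asarray(device_bpm, dtype=float)
    reference_bpm = np.asarray(reference_bpm, dtype=float)
    diff = device_bpm - reference_bpm        # device minus clinician reference
    n = diff.size

    rmse = float(np.sqrt(np.mean(diff ** 2)))
    bias = float(diff.mean())                # overall mean bias
    sd = float(diff.std(ddof=1))

    # Bland-Altman limits of agreement and the usual approximation of their SE.
    upper_loa = bias + 1.96 * sd
    lower_loa = bias - 1.96 * sd
    se_loa = sd * np.sqrt(1.0 / n + 1.96 ** 2 / (2.0 * (n - 1)))

    return {
        "rmse": rmse,
        "mean_bias": bias,
        "upper_loa_95ci": (upper_loa - 1.96 * se_loa, upper_loa + 1.96 * se_loa),
        "lower_loa_95ci": (lower_loa - 1.96 * se_loa, lower_loa + 1.96 * se_loa),
    }
```

    Under the table's criteria, both limit-of-agreement confidence intervals must fall entirely within ±5 BPM, while the overall mean bias and RMSE targets are below 3 BPM.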

    2. Sample Size Used for the Test Set and Data Provenance

    • Sample Size (Clinical Validation Study): N=107
    • Data Provenance: The document does not explicitly state the country of origin. It describes participants based on gender, age, BMI, facial shape, history of hypertension, race/ethnicity (Asian, Black, Hispanic, White), and Fitzpatrick Skin Type Scale. The study appears to be prospective since it's a clinical validation study aiming to assess agreement between the device and a "standard pulse rate measurement by a clinician."

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications

    The document mentions "standard pulse rate measurement by a clinician" as the ground truth. It does not specify the number of clinicians or their specific qualifications (e.g., years of experience).

    4. Adjudication Method for the Test Set

    The document does not describe an adjudication method for establishing ground truth. The ground truth was based on "standard pulse rate measurement by a clinician."

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    No. The document describes a clinical validation study comparing the device's performance to a clinician's standard measurement. It does not mention an MRMC study or the effect size of human readers improving with AI assistance. The device is a standalone measurement tool.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    Yes, a standalone study was performed. The clinical validation study directly assesses the PanopticAI Vital Signs app's accuracy against a clinical standard, demonstrating its performance without human interpretation of its outputs beyond what is displayed by the app itself. The device is intended for "non-invasive spot measurement of pulse rate," implying a direct output from the algorithm.

    7. Type of Ground Truth Used

    The ground truth was a clinical reference measurement, described only as "standard pulse rate measurement by a clinician." No expert adjudication, pathology, or outcomes data is described.

    8. Sample Size for the Training Set

    The document does not provide information regarding the sample size for the training set. It focuses solely on the clinical validation (test) set.

    9. How the Ground Truth for the Training Set Was Established

    Since the training set sample size is not provided, how its ground truth was established is also not described in this document.


    K Number: K223381
    Date Cleared: 2023-03-15 (128 days)
    Regulation Number: 886.1120
    Why did this record match? Device Name:

    iExaminer System with PanOptic Plus

    Intended Use

    The iExaminer system with PanOptic Plus, consisting of PanOptic Plus, SmartClip, iExaminer application, and one of the following: iPhone 11 Pro, iPhone 11 Pro Max, iPhone 12 Pro, is intended to be used to capture images as an aid to clinicians in the evaluation and documentation of ocular health. The images from the iExaminer System with PanOptic Plus are not intended to be used as a sole means of diagnosis.

    Device Description

    The iExaminer System with PanOptic Plus is a medical device that allows the user to capture images through the use of a PanOptic Plus ophthalmoscope and a smart device. The iExaminer System with PanOptic Plus consists of (also see Figure 1):

    1. PanOptic Plus Ophthalmoscope:
      a. Ophthalmoscope Head
      b. Compatible energy sources (i.e. battery handles or wall units)
      c. Optional Patient Eyecup
    2. Smart device attachment instrument (made of SmartBracket and SmartClip)
    3. Compatible smart device (iPhone X, iPhone 11 Pro, iPhone 11 Pro Max, iPhone 12 Pro)
    4. iExaminer Pro Software Application

    The iExaminer system with PanOptic Plus is intended to take photographs of the eye and surrounding area.
    AI/ML Overview

    Here's a breakdown of the acceptance criteria and the study that proves the device meets them, based on the provided FDA 510(k) summary:

    Acceptance Criteria and Device Performance

    The document states that a non-clinical and clinical (image comparison) study was performed to demonstrate that the iExaminer System with PanOptic Plus images are substantially equivalent to the predicate device images in their usefulness for documentation and clinical referrals. The acceptance criteria themselves are not explicitly detailed in a table with specific numerical thresholds for metrics like sensitivity, specificity, or accuracy. Instead, the general acceptance criterion for the clinical study was that the new device's images are "substantially equivalent" in usefulness for documentation and clinical referrals compared to the predicate device.

    Given that no specific performance metrics like sensitivity or specificity were reported for the device itself against a ground truth, it's not possible to create a table of acceptance criteria and reported device performance in those terms. The study focuses on comparative usefulness.

    However, based on the conclusion, the device did meet the established acceptance criteria:
    "The results of the image comparison study demonstrate that the iExaminer System with PanOptic Plus has passed all established acceptance criteria and is as safe and effective as the predicate device for its intended use."

    Study Details:

    1. A table of acceptance criteria and the reported device performance
      As noted above, explicit numerical acceptance criteria for performance metrics (sensitivity, specificity) are not provided. The acceptance criterion was "substantially equivalent" usefulness of images for documentation and clinical referrals compared to the predicate device. The study concluded this criterion was met.

    2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
      The document does not specify the sample size for the test set (number of images or patients). It also does not mention the country of origin of the data or whether it was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
      The document does not describe the establishment of a "ground truth" by experts for specific pathologies. Instead, the study was an "image comparison study" where the new device's images were compared to the predicate device's images for "usefulness for documentation and clinical referrals." The number and qualifications of experts (if any specific graders were involved in comparing image usefulness) are not detailed.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
      The document does not provide details on any adjudication method used for comparing images or establishing usefulness.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance
      No, a multi-reader multi-case (MRMC) comparative effectiveness study focusing on human reader improvement with or without AI assistance was not conducted or reported. The study was an "image comparison study" between two devices to show substantial equivalence. The device itself is an image capture system, not an AI-powered diagnostic tool.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done
      No, a standalone algorithm performance study was not done. The device is a system for capturing images for human clinicians to evaluate, not an automated diagnostic algorithm.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
      The study did not use a traditional "ground truth" in the sense of confirmed diseases (e.g., pathology, outcomes data, or expert consensus on diagnosis). Instead, it focused on the "usefulness for documentation and clinical referrals" of images captured by the proposed device compared to images from the predicate device. The predicate device's images are assumed to be a sufficient and already accepted reference point for "usefulness."

    8. The sample size for the training set
      This information is not applicable. The device is an image capture system, not an AI model that requires a training set.

    9. How the ground truth for the training set was established
      This information is not applicable, as there is no training set for an AI model.


    K Number: K121405
    Device Name: PANOPTIC
    Date Cleared: 2012-12-20 (224 days)
    Regulation Number: 886.1120
    Why did this record match? Device Name:

    PANOPTIC

    Intended Use

    The iExaminer is an attachment and software used only with the iPhone 4 and iPhone 4S in conjunction with the Welch Allyn PanOptic Ophthalmoscope to allow users to capture, send, store and retrieve images of the eye. The device is intended to be used by trained personnel within a medical or school environment.

    Device Description

    The Welch Allyn iExaminer is comprised of the adapter, the iPhone and the software application. The adapter is specifically designed to hold the iPhone 4 and iPhone 4S in a fixed position in order to align the camera in the iPhone with the optics of the Welch Allyn PanOptic. The software application allows the user to capture, store, send, and retrieve images of the eye as seen through the PanOptic.

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Welch Allyn iExaminer, formatted as requested:

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are primarily derived from the comparative technical specifications and the "Summary of ISO 10940 performance" table. The device's performance is listed for the iPhone 4 standard res, iPhone 4 high res, iPhone 4S standard res, and iPhone 4S high res. Since the document states "The transferred images were determined to be at least as accurate and adequate (carry sufficient imaging details) to discern important clinical information as the predicate Optomed device, and thus substantially equivalent," the implicit acceptance criterion is 'performs at least as well as the predicate device (Optomed Smartscope M5 EY3) for clinical utility.'

    | Acceptance Criterion (Predicate: Optomed Smartscope M5 EY3 / General Standards) | Reported iExaminer Performance (iPhone 4 / 4S) | Notes |
    |---|---|---|
    | Resolving power on fundus camera: Centre ≥ 80 lp/mm; Middle (r/2) ≥ 60 lp/mm; Periphery (r) ≥ 40 lp/mm | iPhone 4 high res: Centre 82.94 lp/mm; Middle (r/2) 74.12 lp/mm; Periphery (r) 58.82 lp/mm | iExaminer (iPhone 4 high res) meets or exceeds all resolving-power criteria. Predicate: "Not possible to observe" for these. |
    | Tolerance of angular field of view: ±5% (against 25° claim) | -1.16% (measured 24.71° against the 25° claim) | iExaminer (all iPhone versions) meets this criterion. Predicate (+10.5%) does not meet this specific tolerance. |
    | Tolerance of pixel pitch on fundus: ±7% (implicit from predicate comparison) | iPhone 4: 16.25 µm/pixel; iPhone 4S: 4.25 µm/pixel (predicate: 8.77 µm/pixel) | The document states "N.A. (Fundus camera on a digital sensor)" for magnification tolerance and instead introduces "Tolerance of pixel pitch on fundus" with a ±7% criterion. The iPhone models show different pixel pitch values; to meet the tolerance, the measured pixel pitch should be within ±7% of an expected value, or be comparable to the predicate. With the predicate at 8.77 µm/pixel and the iExaminer at 5.37 µm/pixel (iPhone 4 high res) and 4.25 µm/pixel (iPhone 4S high res), the values differ but are not necessarily failing; the raw pixel pitch values alone do not indicate compliance without a target pixel pitch. The document's structure implies these are presented for comparison and general compliance. |
    | Range of diopter adjustment of the optical finder: -5D to +5D (general standard) | -20D to +20D | iExaminer meets or exceeds the range. Predicate: "At least -20D to +20D." |
    | Range of focus adjustment for compensation of the patient's refractive error: -15D to +15D (general standard) | -20D to +20D | iExaminer meets or exceeds the range. Predicate: "At least -20D to +20D." |
    | Image quality for clinical information: "at least as accurate and adequate (carry sufficient imaging details) to discern important clinical information as the predicate Optomed device" | "images... are accurate enough and carry sufficient imaging details to discern important clinical information" | Met, based on the conclusion of the clinical trial. |
    | Compliance with applicable standards: ISO 14971, Guidance on OTS Software, Guidance for Premarket Submission for Software, ISO 10940, ISO 15004-1, ISO 15004-2, IEC 60601-1, IEC 60601-1-2 | "tested and found to be in compliance to applicable industry standards" | Stated as met, based on non-clinical tests. |
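    The tolerance rows in this table are simple relative-error checks against a nominal claim. The short sketch below reproduces the -1.16% field-of-view figure from the table's own numbers; the pixel-pitch check is only sketched in form, because the summary does not state a nominal pitch, and the helper name is illustrative.

```python
# Worked check of the relative-error tolerances quoted in the table above.
# The field-of-view numbers (24.71 degrees measured vs. a 25 degree claim)
# come from the table; a nominal pixel-pitch target is NOT given in the
# excerpt, so that check is shown only in form.

def relative_error_pct(measured, nominal):
    """Signed relative error of a measurement against its nominal value, in percent."""
    return (measured - nominal) / nominal * 100.0


fov_error = relative_error_pct(24.71, 25.0)
print(f"Angular field of view error: {fov_error:+.2f}%")     # -> -1.16%
print("Within the +/-5% tolerance:", abs(fov_error) <= 5.0)  # -> True

# The +/-7% pixel-pitch criterion would be applied the same way once a nominal
# pitch were defined, e.g. abs(relative_error_pct(measured_um, nominal_um)) <= 7.0,
# but the summary does not state that nominal value.
```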

    2. Sample Size for the Test Set and Data Provenance

    • Sample Size for Test Set: Not explicitly stated as a number of patients or images for the clinical trial. The text says "Clinical data were collected" and refers to "the clinical trial data," but no specific sample size in terms of cases or participants is provided.
    • Data Provenance: Not explicitly stated (e.g., country of origin, retrospective/prospective). The study is referred to as "the clinical trial," suggesting it was a prospective study conducted for the purpose of this submission, but its geographical location is not mentioned.

    3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of those Experts

    This information is not provided in the document. The document states that testing established that the images had "sufficient detail to allow a trained professional to discern clinically important information," but it does not detail how many professionals were involved or their qualifications.

    4. Adjudication Method for the Test Set

    This information is not provided in the document.

    5. Whether a multi-reader multi-case (MRMC) comparative effectiveness study was done and, if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • MRMC Study: No, this does not appear to be an MRMC comparative effectiveness study involving AI assistance for human readers. This device is an ophthalmic camera attachment and software. Its primary function is to capture and transmit images, not to provide AI interpretation or assist human readers with diagnosis in a comparative effectiveness setting.
    • Effect Size: Not applicable as it's not an AI-assisted diagnostic device in the context of improving human reader performance.

    6. Whether a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done

    • Standalone Performance: Not applicable in the traditional sense of an AI algorithm providing diagnostic output. The device itself (iExaminer) is a standalone imaging and image management system. The "clinical study" confirmed the image quality was sufficient for a trained professional to discern clinical information. It does not suggest an AI algorithm was evaluating images without human-in-the-loop.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    The type of ground truth is indirectly implied as expert assessment of image quality for clinical utility. The clinical trial aimed to confirm that "images... are accurate enough and carry sufficient imaging details to discern important clinical information." This suggests that human experts, likely clinicians (ophthalmologists or similar specialists), evaluated the captured images to determine if they contained enough detail for clinical decision-making. There is no mention of pathology or outcomes data as ground truth.

    8. The Sample Size for the Training Set

    This information is not applicable/not provided for a traditional training set as this is not an AI/Machine Learning diagnostic device at the time of FDA submission. The software's function is image capture, storage, and transfer, not automated interpretation.

    9. How the Ground Truth for the Training Set was Established

    This information is not applicable/not provided for a traditional training set as this is not an AI/Machine Learning diagnostic device.

