Search Results

Found 4 results

510(k) Data Aggregation

    K Number
    K160499
    Manufacturer
    Date Cleared
    2017-04-24

    (426 days)

    Product Code
    Regulation Number
    868.2375
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    The Apnea Risk Evaluation System (ARES), Model 620 is indicated for use in the diagnostic evaluation by a physician of adult patients with possible sleep apnea. The ARES can record and score respiratory events during sleep (e.g., apneas, hypopneas, mixed apneas and flow limiting events). The device is designed for prescription use in the patient's home to aid a physician in diagnosing adults with possible sleep-related breathing disorders.

    Device Description

    The Apnea Risk Evaluation System (ARES™) includes a battery-powered, patient-worn device called a Unicorder (Model 620). The Unicorder is worn by a patient for one to three nights, each night recording up to 7 hours of data. Data recorded includes oxygen saturation, snoring level, head movement, head position, and airflow. Additionally, the Unicorder 620 allows collection of data from ARES-compatible peripheral devices. The device monitors signal quality during acquisition and notifies the user via voice messages when adjustments are required. A standard USB cable connects the Unicorder to a USB port on a host computer when patient data is to be uploaded or downloaded. The USB cable provides power to the Unicorder during recharging from the host computer or from a USB wall charger. The Unicorder cannot record, nor can it be worn by the patient, when connected to the host computer or the wall charger. Software, residing on a local PC or a physical or virtual server, controls the uploading and downloading of data to the Unicorder, processes the sleep study data, and generates a sleep study report. The ARES™ can auto-detect positional and non-positional obstructive and mixed apneas and hypopneas similarly to polysomnography. It can detect sleep/wake states as well as REM and non-REM sleep. After the sleep study has been completed, data is transferred off the Unicorder and the Unicorder is prepared for the next study. The downloaded sleep study record is then processed with the ARES™ Insight software to transform the raw signals and derive and assess changes in oxygen saturation (SpO2), head movement, head position, snoring sounds, airflow, and EEG or respiratory effort. The red and IR signals are used to calculate the SpO₂. The actigraphy signals are transformed to obtain head movement and head position. A clinician can convert an auto-detected obstructive apnea to a central apnea based on visual inspection of the waveforms.
The ARES™ Screener can predict the pre-test probability of obstructive sleep apnea (OSA). The ARES™ data can also assist the physician in identifying patients who will likely have a successful OSA treatment outcome, including CPAP and oral appliance therapies. The ARES™ can help identify patients who would benefit from a laboratory PAP titration.
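The description notes that red and IR signals are used to calculate SpO₂. As background only, the standard "ratio of ratios" pulse-oximetry calculation can be sketched as follows; the function name and the linear calibration constants are illustrative textbook values, and the actual ARES algorithm is not disclosed in the document.

```python
import numpy as np

def spo2_from_ratio(red_ac, red_dc, ir_ac, ir_dc):
    """Estimate SpO2 (%) from red/IR AC and DC photoplethysmogram
    components using the common empirical linear calibration
    SpO2 ~ 110 - 25*R. Constants are illustrative, not device-specific."""
    r = (red_ac / red_dc) / (ir_ac / ir_dc)  # "ratio of ratios"
    return float(np.clip(110.0 - 25.0 * r, 0.0, 100.0))
```

For example, equal normalized red and IR pulsatile amplitudes (R = 1.0) map to roughly 85% SpO₂ under this calibration.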

    AI/ML Overview

    The provided text does not contain detailed information about a study proving the device meets acceptance criteria. Instead, it focuses on demonstrating substantial equivalence to a predicate device through a comparison of specifications and non-clinical testing. Therefore, I cannot fully answer all aspects of your request as the specific study details, sample sizes, expert qualifications, and ground truth methodologies for performance evaluation are not present.

    However, I can extract the available information regarding acceptance criteria and reported device performance from the comparison table.

    1. Table of Acceptance Criteria and Reported Device Performance

    The acceptance criteria are implicitly defined by the "Performance" section of the comparison table between the predicate device (ARES Model 610) and the proposed device (ARES Model 620). For most parameters, the goal is "Identical" or "Equivalent," meaning the new device should perform at least as well as the predicate.

    | Acceptance Criteria (Predicate ARES Model 610) | Reported Performance (Proposed ARES Model 620) | Non-Clinical Testing Conclusion | Discussion of Differences (if any) |
    |---|---|---|---|
    | SpO2 accuracy: 70–100% SpO2 range, error ±1 SD | SpO2 accuracy: 70–100% SpO2, ±2% | Identical | Equivalent (the ±2% likely represents a standard accuracy specification for this range) |
    | Airflow via nasal pressure: range ±0.55 cm H₂O, accuracy ±2% | Airflow via nasal pressure: range ±0.55 cm H₂O, accuracy ±2% | Identical | Identical |
    | Head position via accelerometers: position accuracy 3° @ 30°C | Head position via accelerometers: position accuracy 3° @ 30°C | Identical | Identical |
    | Snoring level from microphone: 40 dB (min) to 70 dB (max) | Snoring level from microphone: 20 dB (min) to 70 dB (max) | Identical (despite the stated difference in range) | Equivalent; extended lower range available (down to 20 dB vs. 40 dB for the predicate) |
    | Sleep/wake signal, optional EEG sensor: ±1000 μV @ 256 samples/sec | Sleep/wake signal, optional EEG sensor: ±1000 μV @ 240 samples/sec | Identical (despite the stated difference in sample rate) | Equivalent; no impact on use |
    | EEG | EEG | Identical | Identical |
    | Respiration | Respiration | Identical | Identical |

    2. Sample size used for the test set and the data provenance:

    The document describes comparative testing between the predicate device, ARES Model 610 (K111194, cleared 07/07/2011), and the proposed ARES Model 620 to demonstrate substantial equivalence. However, it does not specify the sample size (number of patients or recordings) used for this comparative testing or the data provenance (e.g., country of origin, retrospective or prospective nature).

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:

    This information is not provided in the document. The document refers to "nonclinical and clinical tests" but does not detail how ground truth was established for these tests, nor the involvement or qualifications of any experts.

    4. Adjudication method (e.g., 2+1, 3+1, none) for the test set:

    This information is not provided in the document.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    The document does not describe an MRMC comparative effectiveness study involving human readers or AI assistance effect size. The comparison is between two devices, not human performance with and without AI.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done:

    The document states, "The ARES™ can auto-detect positional and non-positional obstructive and mixed apneas and hypopneas similarly to polysomnography. It can detect sleep/wake and REM and non-REM." This implies a standalone algorithmic performance for detecting and scoring respiratory events. However, no specific standalone performance metrics (e.g., sensitivity, specificity, accuracy) are reported for this automated detection/scoring. The "SpO2" and other sensor accuracy values are "device performance" but not necessarily "standalone algorithm performance" in the context of diagnostic output.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    This information is not explicitly stated. Given that the device "can auto-detect positional and non-positional obstructive and mixed apneas and hypopneas similarly to polysomnography," it is highly probable that Polysomnography (PSG) was used as the ground truth. However, the document does not explicitly confirm this or specify how the PSG data was analyzed to establish ground truth (e.g., by experts, automated scoring, etc.).
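For context on what PSG-style respiratory-event scoring involves, the AASM-style rule of thumb (an apnea is scored when airflow amplitude drops by roughly 90% or more of baseline for at least 10 seconds) can be sketched as below. This is a deliberately simplified illustration with hypothetical function and parameter names, not the ARES scoring algorithm.

```python
import numpy as np

def detect_apneas(airflow, fs, drop_frac=0.9, min_dur_s=10.0):
    """Flag intervals where airflow amplitude falls at least drop_frac
    below a baseline for at least min_dur_s seconds (AASM-style rule,
    heavily simplified: real scorers use rolling baselines and filtering).

    Returns a list of (start_s, end_s) event tuples."""
    env = np.abs(airflow)
    # Crude fixed baseline: median of the nonzero amplitude envelope.
    baseline = np.median(env[env > 0]) if np.any(env > 0) else 0.0
    low = env < (1.0 - drop_frac) * baseline
    events, start = [], None
    for i, flag in enumerate(np.append(low, False)):  # sentinel closes trailing run
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            if (i - start) / fs >= min_dur_s:
                events.append((start / fs, i / fs))
            start = None
    return events
```

A 15-second near-zero-flow segment in an otherwise normal signal would be returned as a single event, while drops shorter than 10 seconds are ignored.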

    8. The sample size for the training set:

    The document describes comparative testing and verification/validation but does not mention a "training set" or its size, which would typically be associated with machine learning model development. This implies the comparison is more about hardware and firmware functionality and existing algorithms rather than the development of a new AI model requiring a separate training set.

    9. How the ground truth for the training set was established:

    Since no training set is mentioned (see point 8), there is no information on how its ground truth was established.


    K Number
    K120320
    Date Cleared
    2012-08-14

    (194 days)

    Product Code
    Regulation Number
    870.1130
    Reference & Predicate Devices
    Predicate For
    Intended Use

    Clinical Application's intended use is to retrospectively receive, display and store monitored vital signs parameters and related data. Additionally, it can send configuration information to Watermark home monitoring devices. Watermark devices include the Connected Care Mobile Application and MiPal. The configuration information may include a patient's vitals collection schedule and parameters to be collected. The Clinical Application displays the data and system alerts for review and interpretation by a healthcare professional. The Clinical Application is not intended for emergency use or real-time monitoring.

    Device Description

    The Connected Care Clinical Application is a cloud based, web software system. It is accessed from commercially available PC systems with a web browser and minimum performance specifications consistent with typical PC hardware and equipment specifications. The Clinical Application accepts data from Watermark Patient Monitors.

    The Connected Care Clinical Application is a medical device data system that receives, stores, and displays data received from Watermark home monitoring devices. Additionally, it can send configuration information to Watermark home monitoring devices. Watermark devices include the Mobile Application and MiPal. The configuration information may include a patient's vitals collection schedule and parameters to be collected.
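As an illustration, configuration information of the kind described (a patient's vitals collection schedule plus the parameters to be collected) might look like the following. All field names and values are hypothetical; the actual Watermark payload format is not disclosed in the document.

```python
# Hypothetical configuration payload; every field name here is
# illustrative only, not the actual Watermark format.
config = {
    "patient_id": "example-001",
    "collection_schedule": {"time": "08:00", "days": ["Mon", "Wed", "Fri"]},
    "parameters": ["nibp", "pulse_rate", "weight"],
}
```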

    AI/ML Overview

    Here's an analysis of the provided text regarding the acceptance criteria and study for the Watermark Medical Connected Care Clinical Application (K120320):

    The provided document describes a Medical Device Data System (MDDS). For such systems, the "acceptance criteria" are not typically framed in terms of clinical performance metrics like sensitivity, specificity, or accuracy compared to a ground truth label. Instead, the acceptance criteria revolve around software validation and functional requirements. The "study" that proves the device meets these criteria is the software validation process itself.

    Based on the provided text, here's the information categorized:

    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria (implied from text) | Reported Device Performance |
    |---|---|
    | Functional requirements: receives, stores, and displays vital signs parameters and related data; sends configuration information (patient vitals collection schedule and parameters) to Watermark home monitoring devices (Mobile Application and MiPal); displays data and system alerts for review and interpretation by a healthcare professional | The software validation results demonstrated that the Clinical Application performed within its specifications and functional requirements for software. |
    | Compliance with guidelines and standards: adherence to the FDA reviewer's guides for medical device software | The software validation results demonstrated that the Clinical Application was in compliance with the guidelines and standards referenced in the FDA reviewer's guides. |
    | Intended use: retrospective review only; not for emergency use; not for real-time monitoring | The device's performance aligned with its stated intended use: retrospectively receiving, displaying, and storing monitored vital signs and related data for review and interpretation by a healthcare professional, and sending configuration information. |

    2. Sample size used for the test set and the data provenance

    • Test Set Sample Size: Not explicitly stated. The document refers to "software validation results," which implies a series of tests, but not a specific sample size of medical cases or data points.
    • Data Provenance: Not explicitly stated. Given the device's function (receiving data from Watermark home monitoring devices), the data would originate from these devices. The document does not specify country of origin or whether the data used for validation was retrospective or prospective.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts

    • This information is not applicable in the traditional sense for this type of device (MDDS). The "ground truth" for an MDDS primarily relates to whether the software correctly receives, stores, displays, and transmits data as per its specifications, not whether it correctly labels or diagnoses a medical condition. The validation would involve comparing the displayed data against the received data, and the transmitted configuration against the entered configuration. This typically involves software testers or quality assurance personnel verifying data integrity and functionality.

    4. Adjudication method for the test set

    • Not applicable in the traditional sense. Since the validation is software-centric (data integrity and functionality), adjudication by medical experts for discrepant interpretations wouldn't be relevant. Software testing typically involves predefined test cases with expected outcomes. Any discrepancies would be bugs to be fixed and re-tested, not adjudicated.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance

    • No, an MRMC comparative effectiveness study was not done. This type of study is relevant for devices that assist in diagnosis or interpretation (e.g., AI for radiology). The Connected Care Clinical Application is an MDDS that primarily handles data management and display; it does not involve AI for interpretation or diagnosis. Therefore, there is no effect size related to human reader improvement with or without AI.

    6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done

    • Yes, in essence, the software validation for functional correctness is a standalone evaluation. The device itself is "software only" in its function, receiving and displaying data. Its performance is judged on whether it correctly executes its specified functions (receiving, storing, displaying, transmitting data) independent of human interpretation of clinical outcomes. The "algorithm" here refers to the software's logic for handling data.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)

    • The "ground truth" for this MDDS would be the expected output or behavior of the software, based on its design specifications. This means:
      • Data Integrity: The data received matches the data sent from the monitoring devices.
      • Data Storage: The stored data accurately reflects the received data.
      • Data Display: The displayed data accurately reflects the stored data according to display specifications.
      • Configuration Transmission: The configuration sent to the devices matches the configuration entered into the system.
    • This ground truth is established by software requirements specifications and design documents, against which the validated system's performance is measured.
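A minimal sketch of the kind of data-integrity check such a specification-based validation might include is shown below: a stored record is compared field by field against the record the monitoring device sent, and any mismatching fields are reported. The function and field names are hypothetical, not drawn from the Watermark documentation.

```python
def verify_record_integrity(sent: dict, stored: dict,
                            fields=("systolic", "diastolic", "pulse", "weight")):
    """Return the fields whose stored value differs from the sent value.

    An empty result means the record round-tripped intact, i.e. the
    "data received matches data sent" criterion holds for this record.
    """
    return [f for f in fields if sent.get(f) != stored.get(f)]
```

In a validation suite, a test case would send a known record through the system and assert that this check returns an empty list.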

    8. The sample size for the training set

    • Not applicable. This device is an MDDS and does not employ machine learning or AI models that require a training set. Its functionality is based on deterministic software logic, not on learning from data.

    9. How the ground truth for the training set was established

    • Not applicable, as there is no training set for this device.

    K Number
    K120325
    Manufacturer
    Date Cleared
    2012-07-18

    (167 days)

    Product Code
    Regulation Number
    870.2910
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    Mobile Application allows the user to collect vital signs data (including noninvasive blood pressure, pulse rate, weight and other data from optional add-on devices). The user can then transmit the data to a central database via a communication network. Use of the system allows retrospective review of certain physiological functions by qualified health care professionals. The Mobile Application is intended for use with adult and pediatric patients over twelve years of age.

    Device Description

    The Connected Care Mobile Application is intended to receive, display and transmit patient information on a retrospective basis. The device is not intended for real-time monitoring or emergency use by patients or caregivers.

    The mobile application is designed to operate on various platforms including tablet computers and smart phones, guiding a user through the vitals acquisition process via Bluetooth medical peripherals. Peripherals will include:

    • Scale
    • Glucose meter
    • NiBP
    • SpO2
    AI/ML Overview

    The provided text details the 510(k) summary for the Watermark Medical Connected Care Mobile Application. However, it does not include specific acceptance criteria, a detailed study proving performance against those criteria, or the granular information about sample sizes, ground truth establishment, or expert qualifications that you requested.

    The document primarily focuses on the regulatory submission, device description, intended use, and substantial equivalence to a predicate device. The "Performance Data" section is very brief and general.

    Here's a breakdown of what can be extracted and what is missing based on your request:


    1. Table of Acceptance Criteria and Reported Device Performance

    | Acceptance Criteria | Reported Device Performance |
    |---|---|
    | Not explicitly stated | "The software validation results demonstrated that the Mobile Application was in compliance with the guidelines and standards referenced in the FDA reviewer's guides and that it performed within its specifications and functional requirements for software." |
    | (Implied criteria based on device functionality) | (The device is intended to receive, display, and transmit patient information, specifically vital signs data from connected peripherals.) |
    | Accuracy of data display/transmission | Not explicitly stated, but implicitly validated as part of "specifications and functional requirements." |
    | Data integrity during transmission | Not explicitly stated, but implicitly validated as part of "specifications and functional requirements." |
    | Compatibility with specified peripherals | Not explicitly stated, but implicitly validated as part of "specifications and functional requirements." |

    Explanation:
    The document states that the software validation demonstrated compliance with guidelines and standards, and that it performed within its specifications and functional requirements. However, it does not list these specific specifications or functional requirements as acceptance criteria in measurable terms (e.g., "data transmission success rate > 99%," "display accuracy within X% of source"). Therefore, a detailed table with explicit acceptance criteria and corresponding performance metrics cannot be constructed from the provided text.


    2. Sample Size Used for the Test Set and Data Provenance

    • Test Set Sample Size: Not mentioned in the provided text.
    • Data Provenance (e.g., country of origin, retrospective/prospective): Not mentioned in the provided text. The device is intended for "personal use" and collects data for "retrospective review," implying the data, once collected, is historical. However, details about the origin of data used for testing are absent.

    3. Number of Experts Used to Establish Ground Truth for the Test Set and Their Qualifications

    • Number of Experts: Not mentioned in the provided text.
    • Qualifications of Experts: Not mentioned in the provided text.

    4. Adjudication Method for the Test Set

    • Adjudication Method: Not mentioned in the provided text.

    5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study

    • Was an MRMC study done? No, an MRMC comparative effectiveness study is not mentioned or implied in the provided text. This device is a data collection and display application, not an AI interpretation tool for medical images, which are typically the subject of MRMC studies.
    • Effect size of human readers with/without AI assistance: Not applicable, as no MRMC study was mentioned and the device is not an AI interpretation tool.

    6. Standalone (Algorithm Only Without Human-in-the-Loop Performance) Study

    • Was a standalone study done? The "Performance Data" section refers to "software validation results" and performance "within its specifications and functional requirements." This implies testing of the software's functionality in isolation (standalone), but no specifics of such a study are provided (e.g., methodology, metrics, results beyond a general statement of compliance).

    7. Type of Ground Truth Used

    • Type of Ground Truth: Not explicitly stated. For a device like this, ground truth would likely involve verifying the accuracy of displayed and transmitted data against the raw data received from the connected physiological sensors. However, the document doesn't detail how this was established.

    8. Sample Size for the Training Set

    • Training Set Sample Size: Not mentioned in the provided text. Given this is a mobile application for data collection and display, it's unlikely to have a "training set" in the sense of machine learning algorithms. The "training" would be more akin to software development and debugging, not data-driven model training.

    9. How the Ground Truth for the Training Set Was Established

    • Ground Truth Establishment for Training Set: Not applicable in the context of machine learning. If "training set" refers to data used during software development and testing, ground truth would be established by verifying the software's output against the expected correct output for given inputs, likely through various testing methodologies (unit tests, integration tests, system tests). The document does not provide these details.

    K Number
    K120470
    Device Name
    MIPAL
    Manufacturer
    Date Cleared
    2012-06-08

    (113 days)

    Product Code
    Regulation Number
    870.2910
    Reference & Predicate Devices
    Predicate For
    N/A
    Intended Use

    MiPal allows the user to collect vital signs data (including noninvasive blood pressure, pulse rate, weight and other data from optional add-on devices). The user can then transmit the data to a central database via a communication network. Use of the system allows retrospective review of certain physiological functions by qualified health care professionals. It is not intended for real-time, emergency, or critical care monitoring of patient vital signs. The MiPal is intended for use with adult and pediatric patients over twelve years of age.

    SpO2 is to be used under the direction of licensed health care professionals, and is available only by or on the order of a physician.

    Device Description

    The MiPal is a communication hub intended to receive and transmit patient information on a retrospective basis. The device is not intended for real-time monitoring or emergency use by patients or caregivers.

    The MiPal is designed to optionally collect NiBP, pulse, and SpO2. It may also collect other vitals through the vitals acquisition process via Bluetooth medical peripherals. Peripherals will include:

    • Scale
    • Glucose meter
    AI/ML Overview

    The Watermark Medical MiPal is a communication hub intended to receive and transmit patient information on a retrospective basis. The device is not intended for real-time monitoring or emergency use by patients or caregivers. MiPal allows the user to collect vital signs data (including noninvasive blood pressure, pulse rate, weight and other data from optional add-on devices). The user can then transmit the data to a central database via a communication network. The system allows retrospective review of certain physiological functions by qualified health care professionals. The MiPal is intended for use with adult and pediatric patients over twelve years of age. SpO2 is to be used under the direction of licensed health care professionals, and is available only by or on the order of a physician.

    The provided text only briefly discusses performance data by stating: "The verification and validation results demonstrated that the MiPal was in compliance with the guidelines and standards referenced in the FDA reviewer's guides and that it performed within its specifications and functional requirements."

    Therefore, I cannot provide detailed information for all requested sections due to the limited information in the provided text.

    Here's what can be extracted and what cannot:

    1. Table of acceptance criteria and the reported device performance:

    The document does not provide a table of acceptance criteria or specific reported device performance metrics. It only makes a general statement about compliance.

    2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective):

    This information is not available in the provided text.

    3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience):

    This information is not available in the provided text.

    4. Adjudication method (e.g. 2+1, 3+1, none) for the test set:

    This information is not available in the provided text.

    5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance:

    This information is not available in the provided text. The device is described as a "communication hub" to collect and transmit data for retrospective review, not as an AI-assisted diagnostic tool for human readers.

    6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done:

    The device is not described as having an algorithm that performs interpretations or diagnostics independently. Its function is to collect and transmit data for review by healthcare professionals. Therefore, a standalone performance study in the sense of an algorithm's diagnostic accuracy is not relevant to the described function of this device, and no such study is mentioned.

    7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):

    This information is not available in the provided text.

    8. The sample size for the training set:

    This information is not available in the provided text. The device is a data collection and transmission hub, not a machine learning model that would require a training set in the conventional sense for its primary function.

    9. How the ground truth for the training set was established:

    This information is not available in the provided text for the reasons stated in point 8.

    Ask a Question

    Ask a specific question about this device
