Search Results
Found 3 results
510(k) Data Aggregation
(324 days)
MyCardio, LLC dba SleepImage.
The SleepImage System is Software as a Medical Device (SaMD) that establishes sleep quality. The SleepImage System analyzes, displays and summarizes Electrocardiogram (ECG) or Plethysmogram (PLETH) data, typically collected during sleep, that is intended for use by or on the order of a Healthcare Professional to aid in the evaluation of sleep disorders, where it may inform or drive clinical management for children, adolescents and adults.
The SleepImage Apnea Hypopnea Index (sAHI), presented when oximeter data is available, is intended to aid healthcare professionals in diagnosis and management of sleep disordered breathing.
The SleepImage System output is not intended to be interpreted or clinical action taken without consultation of a qualified healthcare professional.
The SleepImage System is a Class II Software as a Medical Device (SaMD), intended to aid in the evaluation of sleep disorders, where it may inform or drive clinical management.
The SleepImage System automatically analyzes and displays Electrocardiogram (ECG) and Plethysmogram (PLETH) data. When provided in addition to the ECG or PLETH data, the SleepImage System can optionally analyze and display accelerometer and oximeter data.
The results of the processed data are graphical and numerical presentations and reports of sleep latency, sleep duration, sleep quality and sleep pathology, for use by or on the order of physicians, trained technicians, or other healthcare professionals to evaluate sleep disorders, where it may inform or drive clinical management, taking into consideration other factors that are normally considered for clinical management of sleep disorders in children, adolescents and adults. When oximeter data is available, the SleepImage System will generate the SleepImage Apnea Hypopnea Index (sAHI) to aid healthcare professionals in diagnosis and management of sleep disordered breathing.
The SleepImage System reports results of the automated data analysis, including expected values for sleep quality, sleep duration and sleep pathology based on published peer-reviewed publications, and guidelines for sleep duration (National Sleep Foundation) and sleep apnea (American Academy of Sleep Medicine).
The clinician can view raw data for interpretation, adjust study duration, write clinical notes in the report, and make recommendations to the patient for further testing, referral to another clinician, and/or therapy.
The SleepImage System output is not intended to be interpreted or clinical action taken without consultation of a qualified healthcare professional. Due to the intra-night variability of sleep, it is recommended that patients track their sleep over time.
The SleepImage System is a sleep health evaluation application that is indicated for use on a general-purpose computing platform. Like the predicate device, it processes data typically recorded during sleep, using a cloud-based web application.
Here's a breakdown of the acceptance criteria and study information for the SleepImage System, based on the provided document:
1. Acceptance Criteria and Reported Device Performance
The document does not explicitly present a table of acceptance criteria with specific performance metrics (e.g., sensitivity, specificity, accuracy thresholds) for the modifications. Instead, it states that "All parameters tested exceeded the thresholds set for the tests."
However, we can infer the type of acceptance criteria based on the comparisons made:
Acceptance Criteria Type | Reported Device Performance / Assessment |
---|---|
Modification #1: PLETH vs. ECG input for CPC Analysis | |
Agreement between automated output from CPC analysis using PLETH input vs. ECG input (compared to predicate device). | "All parameters tested exceeded the thresholds set for the tests." The report references the average agreement for sleep stage scoring among expert scorers using PSG (82.6%) as a contextual benchmark for inter-scorer reliability. The clinical evaluation confirmed that CPC analysis with PLETH input is comparable to ECG input for clinical decisions. |
Modification #2: SleepImage Apnea Hypopnea Index (sAHI) vs. Manual AHI | |
Agreement between sAHI calculated by the device and manual human scoring of AHI using AASM criteria for mild, moderate, severe sleep apnea (pediatric & adult). | "All parameters tested exceeded the thresholds set for the tests." The clinical evaluation confirmed that sAHI auto-scoring algorithms generate comparable output to human manual scoring of AHI from PSG studies. |
Sensitivity and Positive Likelihood Ratio (LR+) of sAHI against pre-determined thresholds for Out Of Center (OOC) diagnostic devices (based on AASM guidelines). | "All parameters tested exceeded the thresholds set for the tests." The document states that the sAHI demonstrated agreement levels compared to manually scored AHI from PSG studies to be used to aid clinical diagnosis. |
No adverse impact on existing predicate device functionality due to software modifications. | "Validation and verification was performed to verify that the software modifications did not have any adverse impact on the functionality of the SleepImage System." "The verification and validation testing demonstrate that both new feature requirements have been satisfied and safety and effectiveness has not been inadvertently affected by modifications to the system." |
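The sensitivity and positive likelihood ratio (LR+) criteria referenced in the table are standard diagnostic-accuracy quantities. As an illustration only, and not the manufacturer's analysis, the sketch below shows how sensitivity and LR+ are typically computed when an automated index such as sAHI is dichotomized against manually scored AHI at a severity cutoff; the adult AASM cutoffs of 5, 15 and 30 events/hour and the paired data are assumptions, since the document does not list the numeric thresholds used.

```python
# Minimal sketch: sensitivity and positive likelihood ratio (LR+) of an
# automated index (sAHI) against manually scored AHI at a severity cutoff.
# Illustrative only; the cutoffs (5/15/30 events/hour, adult AASM convention)
# and the example data are assumptions, not values from the 510(k) summary.
import numpy as np

def sensitivity_and_lr_plus(auto_ahi, manual_ahi, cutoff):
    auto_pos = np.asarray(auto_ahi) >= cutoff      # device calls "condition present"
    manual_pos = np.asarray(manual_ahi) >= cutoff  # reference (manual scoring)

    tp = np.sum(auto_pos & manual_pos)
    fn = np.sum(~auto_pos & manual_pos)
    fp = np.sum(auto_pos & ~manual_pos)
    tn = np.sum(~auto_pos & ~manual_pos)

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_plus = sensitivity / (1.0 - specificity)    # LR+ = Se / (1 - Sp)
    return sensitivity, specificity, lr_plus

# Hypothetical paired scores (events/hour) for demonstration.
rng = np.random.default_rng(0)
manual = rng.uniform(0, 60, size=200)
auto = manual + rng.normal(0, 3, size=200)         # automated index tracks manual

for cutoff in (5, 15, 30):                         # assumed severity cutoffs
    se, sp, lr = sensitivity_and_lr_plus(auto, manual, cutoff)
    print(f"cutoff {cutoff:>2}: sensitivity={se:.2f}, specificity={sp:.2f}, LR+={lr:.1f}")
```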
Study Details:
2. Sample Size and Data Provenance
- Test Set Sample Size: Over 2,000 sleep studies.
- Children: 1334 studies (all based on PSG studies).
- Adults: 761 studies (189 from PSG, 572 from HSAT).
- Data Provenance: The records were "obtained from prospective clinical trials," indicating prospective collection. The country of origin is not specified.
3. Number of Experts and Qualifications for Ground Truth
- The document mentions "manual human scoring of Apnea Hypopnea Index (AHI) using American Academy of Sleep Medicine (AASM) scoring criteria" as a comparator for the sAHI. It refers to the "average agreement for sleep stage scoring among expert scorers in accredited sleep centers using PSG" (82.6%) for contextual comparison but does not state the number or specific qualifications of experts who performed the manual scoring for this specific test set's ground truth. It implies that the human scoring adhered to AASM guidelines, suggesting qualified sleep experts/technicians.
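For context on the 82.6% benchmark cited above, inter-scorer agreement for epoch-by-epoch sleep staging is usually reported as simple percent agreement, sometimes alongside Cohen's kappa to correct for chance. The sketch below illustrates both calculations on hypothetical scorer output; it is not derived from the submission.

```python
# Minimal sketch: percent agreement and Cohen's kappa between two scorers'
# epoch-by-epoch sleep stage labels. Hypothetical data; illustrative only.
from collections import Counter

def percent_agreement(a, b):
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    po = percent_agreement(a, b)                    # observed agreement
    ca, cb = Counter(a), Counter(b)
    n = len(a)
    pe = sum((ca[s] / n) * (cb[s] / n) for s in set(a) | set(b))  # chance agreement
    return (po - pe) / (1 - pe)

scorer_1 = ["W", "N1", "N2", "N2", "N3", "N3", "R", "R", "N2", "W"]
scorer_2 = ["W", "N2", "N2", "N2", "N3", "N2", "R", "R", "N2", "W"]

print(f"percent agreement: {percent_agreement(scorer_1, scorer_2):.1%}")
print(f"Cohen's kappa:     {cohens_kappa(scorer_1, scorer_2):.2f}")
```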
4. Adjudication Method for the Test Set
- The document does not explicitly state an adjudication method (e.g., 2+1, 3+1). It refers to "manual human scoring" which typically implies one qualified scorer, but does not detail if multiple scorers or an adjudication process were used for discrepancies in the AHI ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- No, a formal MRMC comparative effectiveness study involving human readers with and without AI assistance is not explicitly described. The study compared the device's automated output to human manual scoring (for sAHI) and to the predicate device's performance (for CPC with different input). It did not assess human reader improvement with AI assistance.
6. Standalone Performance Study
- Yes, a standalone (algorithm only) performance study was done for both modifications:
- Modification #1 (PLETH vs. ECG for CPC): The study compared the automated output of CPC analysis using PLETH input directly against the automated output using ECG input (which was the basis of the predicate device).
- Modification #2 (sAHI calculation): The study compared the device's automatically calculated sAHI against manually scored AHI and evaluated its sensitivity and LR+ against AASM guidelines for OOC devices. This is a clear standalone performance evaluation.
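The summary does not state which agreement statistics were used for the standalone sAHI evaluation. One common, illustrative way to summarize agreement between a continuous automated index and a manual reference is a Bland-Altman analysis (bias and 95% limits of agreement); the sketch below uses hypothetical paired values and is an assumption about method, not a description of the study.

```python
# Minimal sketch of a Bland-Altman-style agreement summary between an
# automated index (sAHI) and manually scored AHI. The 510(k) summary does not
# state which statistics were used, so this is illustrative only.
import numpy as np

def bland_altman(auto_ahi, manual_ahi):
    auto_ahi = np.asarray(auto_ahi, dtype=float)
    manual_ahi = np.asarray(manual_ahi, dtype=float)
    diff = auto_ahi - manual_ahi
    bias = diff.mean()                       # mean difference (bias)
    loa = 1.96 * diff.std(ddof=1)            # half-width of the 95% limits of agreement
    return bias, bias - loa, bias + loa

# Hypothetical paired values (events/hour) for demonstration.
rng = np.random.default_rng(1)
manual = rng.uniform(0, 50, size=100)
auto = manual + rng.normal(0.5, 2.5, size=100)

bias, lo, hi = bland_altman(auto, manual)
print(f"bias = {bias:.2f} events/h, 95% limits of agreement = [{lo:.2f}, {hi:.2f}]")
```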
7. Type of Ground Truth Used
- For Modification #1 (PLETH vs. ECG for CPC): The ground truth appears to be the output of the predicate device's ECG-based CPC analysis. The comparison was intended to show agreement between the two automated methods.
- For Modification #2 (sAHI): The ground truth was "manual human scoring of Apnea Hypopnea Index (AHI) using American Academy of Sleep Medicine (AASM) scoring criteria" from Polysomnography (PSG) studies.
8. Sample Size for the Training Set
- The document does not specify the sample size used for the training set. The "over 2,000 sleep studies" are mentioned in the context of "clinical evaluation" which typically refers to validation/test data, not training data.
9. How the Ground Truth for the Training Set Was Established
- Since the training set size is not provided, the method for establishing its ground truth is also not detailed. However, it can be inferred that if AHI was part of the training, it would likely follow AASM scoring criteria, similar to the test set.
(288 days)
MyCardio, LLC dba SleepImage.
The SleepImage System is medical software that establishes sleep quality. The SleepImage System analyzes, displays and summarizes ECG data, typically collected during sleep, that is intended for use by or on the order of a Health Care Professional to aid in the evaluation of sleep disorders, where it may inform or drive clinical management.
The SleepImage System consists of an operator-independent process that automatically analyzes Electrocardiography data using a general purpose computing platform. When provided in addition to the ECG data, the SleepImage System can optionally display accelerometer and oximeter data.
The results of the processed data are graphical and numerical presentations and reports of sleep latency, sleep duration, sleep quality and sleep pathology, for use by or on the order of physicians, trained technicians, or other healthcare professionals.
The data output establishes sleep quality and allows the healthcare professional to evaluate sleep disorders, where it may inform or drive clinical management taking into consideration other factors that normally are considered for clinical management of sleep disorders.
The clinician can create unique patient reports for the patient being evaluated, and configure parameters in the software.
The SleepImage System is a standalone, sleep health evaluation application that provides automated analysis of cardiovascular and respiratory waveforms. Like the predicate device, it processes information recorded during sleep. The SleepImage System is considered to be medical software.
The SleepImage System updates the predicate device, the M1 Sleep Data Recorder and CPC Application Software cleared under K092003, to operate independently of the dedicated M1 Sleep Data Recorder.
The provided text describes the 510(k) submission for the SleepImage System. While it states that a study was conducted, it does not provide detailed acceptance criteria and performance data in the structured format requested (e.g., specific metrics like sensitivity, specificity, or accuracy with threshold values). The document focuses on demonstrating substantial equivalence to a predicate device, rather than proving performance against predefined clinical criteria for a novel device.
Therefore, many of the requested sections regarding acceptance criteria and detailed study results cannot be fully extracted from the provided text. However, based on the information available, here's what can be provided:
Acceptance Criteria and Study to Prove Device Meets Acceptance Criteria
Summary: The SleepImage System is medical software that analyzes ECG data to establish sleep quality for the evaluation of sleep disorders. The primary "study" mentioned is a comparison study to demonstrate substantial equivalence to a predicate device, specifically the M1 Sleep Data Recorder and CPC Application Software (K092003). The core of the evidence relies on the claim that the SleepImage System software is "identical in specifications and performance" to the previously cleared CPC Application Software and that its analysis of ECG signals from alternate sources is "statistically equivalent" to the predicate device.
1. Table of Acceptance Criteria and Reported Device Performance:
Based on the provided text, the acceptance criteria are not explicitly defined with numerical targets (e.g., "sensitivity > X%"). Instead, the acceptance is based on demonstrating substantial equivalence to a predicate device.
Metric / Criterion | Acceptance Criteria (from text) | Reported Device Performance (from text) |
---|---|---|
Substantial Equivalence | The SleepImage System must be substantially equivalent to the predicate device (M1 Sleep Data Recorder and CPC Application Software) in technological characteristics, indications for use, basic design, materials used, where used, and standards met, and present no new questions of safety or effectiveness. | The SleepImage System software is identical in specifications and performance to the original cleared CPC Application Software (predicate, K092003). A study compared the ECG analysis and results of the SleepImage System based on simultaneous recordings with alternate ECG collection sources (a typical hospital PSG study and home-use ECG sensors) against the predicate M1 Sleep Data Recorder. The output from the ECG signals from the alternate recordings was "in every respect statistically equivalent" to the output from ECG signals recorded by the predicate M1 Sleep Data Recorder. |
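The claim that the outputs are "in every respect statistically equivalent" does not name the statistical method or margin used. As a hedged illustration only, the sketch below applies a paired two one-sided tests (TOST) equivalence procedure to hypothetical paired outputs; the equivalence margin, data, and variable names are assumptions rather than details from the submission.

```python
# Minimal sketch: paired TOST (two one-sided tests) equivalence check between
# outputs derived from an alternate ECG source and the predicate recorder.
# The equivalence margin and data are assumptions; the 510(k) summary does not
# describe the actual statistical method or margin used.
import numpy as np
from scipy import stats

def paired_tost(x, y, margin):
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    n = len(diff)
    se = diff.std(ddof=1) / np.sqrt(n)
    t_lower = (diff.mean() + margin) / se     # tests H0: mean diff <= -margin
    t_upper = (diff.mean() - margin) / se     # tests H0: mean diff >= +margin
    p_lower = 1 - stats.t.cdf(t_lower, df=n - 1)
    p_upper = stats.t.cdf(t_upper, df=n - 1)
    return max(p_lower, p_upper)              # equivalence if this p < alpha

# Hypothetical paired sleep-quality outputs from the two ECG sources.
rng = np.random.default_rng(2)
predicate_output = rng.normal(70, 10, size=60)
alternate_output = predicate_output + rng.normal(0, 1.5, size=60)

p = paired_tost(alternate_output, predicate_output, margin=3.0)  # assumed margin
print(f"TOST p-value = {p:.4f} (equivalent at alpha=0.05: {p < 0.05})")
```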
2. Sample Size Used for the Test Set and Data Provenance:
- Sample Size: The exact sample size for the comparison study is not specified in the provided text. It only states "A study was conducted to compare electrocardiography (ECG) analysis and results of the SleepImage System based on simultaneous recordings with alternate ECG collection sources...".
- Data Provenance: Not explicitly stated. The text mentions "typical hospital PSG study" which suggests clinical data, but details like country of origin or whether it was retrospective/prospective are not provided.
3. Number of Experts Used to Establish Ground Truth for the Test Set and Qualifications:
- This information is not provided in the document. The study described is a comparison of instrument outputs (ECG analysis results) for "statistical equivalence," rather than a clinical ground truth established by expert review.
4. Adjudication Method for the Test Set:
- Not applicable/Not provided. The study described compares computational outputs, not human interpretations requiring adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study was done:
- No. The document describes a study comparing the device's output across different ECG signal sources to a predicate device, not a human reader study (MRMC) evaluating AI assistance.
6. If a Standalone (algorithm only without human-in-the-loop performance) was done:
- Yes, indirectly. The study described evaluates the SleepImage System's standalone analytical performance by comparing its ECG analysis results from different sources to the predicate device's results. The device itself is described as "operator-independent" and providing "automated analysis."
7. The Type of Ground Truth Used:
- Comparison to a predicate device's output. The "ground truth" for this study is implicitly the output from the previously cleared M1 Sleep Data Recorder predicate device. The study aimed to show that the SleepImage System's outputs using various ECG sources were "statistically equivalent" to the predicate device's outputs. There is no mention of an independent clinical ground truth (e.g., pathology, outcomes data, or fully independent expert consensus) against which the device's diagnostic accuracy was measured.
8. The Sample Size for the Training Set:
- Not applicable/Not provided. The document describes the SleepImage System software as being "identical in specifications and performance" to the previously cleared CPC Application Software. This suggests it's an updated version of existing software, rather than a de novo AI model requiring a distinct training set. If it is an AI/ML model, the details of its training set are not disclosed.
9. How the Ground Truth for the Training Set was Established:
- Not applicable/Not provided. Similar to point 8, if this is an updated version of existing software, a "training set" might not be relevant in the same way it would be for a novel machine learning algorithm. If there was a training process for underlying algorithms, the ground truth establishment method is not described.
(90 days)
MyCardio, LLC
Sleep Data Recorder: The M1 sleep data recording device is intended for use by a physician or a trained technician for the collection of physiological (Actigraphy) and Electrocardiogram (ECG) recordings during sleep that will be used for screening different sleep associated disorders.
CPC WEB Application Software: The CPC application software (and associated modules) is intended for use by a physician or a trained technician for the analysis, manipulation and final presentation of physiological (Actigraphy) and Electrocardiogram (ECG) recordings during sleep.
Cardiopulmonary coupling (CPC) is currently a functional module included as one of many features in Embla's polysomnographic (PSG) presentation software of the predicate device. It can be invoked by the user to provide a CPC analysis from the EKG signals provided by the current multi-channel recorders in Embla's hardware product line. It has been decided to duplicate this presentation software module into its own product line as a separate web-based application using a dedicated recorder. Therefore, MyCardio and Embla have developed a single-channel (EKG) hardware recording device (M1 Sleep Data Recorder) and an associated web site (CPC Web Application Software) to present the data and provide graphs and reports for manual diagnosis.
The model M1 Sleep Data Recorder is intended to be used as a sleep quality screening device. The M1 is a small, palm-size data recorder used with two commercially available EKG patient electrodes. One electrode snaps directly onto the recorder body, while the second electrode snaps onto a short cable that in turn connects to a connector on the M1. The M1 is attached to the patient at home by the two electrodes and used to record multiple individual sleep periods. Following the home study, the M1 is returned to the clinic and the data is uploaded by the clinician to the CPC Web Application Software.
The CPC Application Software consists of three separate software programs working as a system. (1) CPC Console software - this software is similar to the predicate device presentation software CPC module and receives the data uploaded from the M1 recorder and performs the actual data graphing and reporting. (2) CPC Client software - this software manages the interaction between the user and the M1 device and uploads the analyzed M1 data from the CPC Console to the CPC Web software. (3) CPC Web software - this software creates the user interface screens, manages the data inputted by the user, keeps track of the studies and patient demographics, and provides charts, graphs and reports for manual evaluation of sleep quality screening.
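The three-component flow described above (Console analysis, Client upload, Web presentation) can be sketched as a simple pipeline. All names, data structures and functions in the sketch are hypothetical stand-ins chosen to illustrate the division of responsibilities; they are not the vendor's actual interfaces.

```python
# Purely illustrative sketch of the described data flow:
# M1 recording -> CPC Console analysis -> CPC Client upload -> CPC Web report.
# Every identifier here is hypothetical.
from dataclasses import dataclass

@dataclass
class SleepRecording:          # raw single-channel EKG data from the M1 recorder
    patient_id: str
    ekg_samples: list

@dataclass
class AnalyzedStudy:           # output of the CPC Console analysis step
    patient_id: str
    cpc_summary: dict

def console_analyze(recording: SleepRecording) -> AnalyzedStudy:
    """Stand-in for the CPC Console step: analysis/graphing of the uploaded data."""
    summary = {"recorded_samples": len(recording.ekg_samples)}   # placeholder metric
    return AnalyzedStudy(recording.patient_id, summary)

def client_upload(study: AnalyzedStudy, web_store: dict) -> None:
    """Stand-in for the CPC Client step: pushes analyzed data to the web layer."""
    web_store[study.patient_id] = study.cpc_summary

def web_report(web_store: dict, patient_id: str) -> str:
    """Stand-in for the CPC Web step: renders a report for manual review."""
    return f"Report for {patient_id}: {web_store[patient_id]}"

store: dict = {}
rec = SleepRecording("patient-001", ekg_samples=[0.12, 0.20, 0.15])
client_upload(console_analyze(rec), store)
print(web_report(store, "patient-001"))
```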
A trained physician would typically review and analyze the charts, graphs and reports created and presented by the CPC Web Application Software. After this screening evaluation, the patient may require further testing and examination for possible treatment.
The battery-operated M1 device will be capable of 72 hours of recording to an internal memory, representing approximately 10 overnight studies of roughly 7 hours each.
The general intended environment is the patient home but the device is capable of functioning in any environment where patients can sleep reasonably comfortably.
The users are the general public, trained physicians, trained sleep technicians (RPGST) or people working under the supervision of one of these professionals. The user may or may not possess knowledge of the physiological signals or test criteria.
The M1 Sleep Data Recorder and CPC Web Application Software do not provide any alarms and are not intended as a monitor.
The provided 510(k) summary does not contain details about acceptance criteria or a specific study proving the device meets them, in the format typically required for performance claims of AI/ML-driven devices related to diagnostic accuracy.
However, based on the information provided, I can infer the implied acceptance criteria and describe the comparative study that was partially described.
Here's an analysis based on your requirements, highlighting what's present and what's missing:
Acceptance Criteria and Device Performance Study
The M1 Sleep Data Recorder and CPC Application Software are designed to be "clinically equivalent" to a predicate device (Embletta Gold) regarding their ability to record and present EKG signals and subsequently analyze them for sleep quality screening.
1. Table of Acceptance Criteria and Reported Device Performance
Note: The document does not explicitly state quantitative acceptance criteria (e.g., specific accuracy metrics, sensitivity, specificity, AUC, or concordance rates with defined thresholds). It focuses on qualitative "clinical equivalence."
Acceptance Criteria Category | Acceptance Criteria (Implied) | Reported Device Performance (as stated in the conclusion) |
---|---|---|
EKG Signal Recording | The EKG signal recorded by the M1 Sleep Data Recorder should be "clinically equivalent" to the EKG signal recorded by the predicate Embletta Gold recorder in the context of screening. | "The EKG signal recorded by the M1 Sleep Data Recorder is in every respect clinically equivalent in the context of screening to the EKG signal currently recorded by the predicate device." |
CPC Presentation & Analysis | The presentation, analysis, and reports generated by the CPC Web Application Software for EKG signals should be "clinically equivalent" to those generated by the CPC module of the predicate device's PSG application software for EKG signals, for sleep quality screening. | "The presentation, analysis and reports generated by the CPC Web Application Software... were compared by qualified sleep technicians... The result of the comparison is that the presentation, analysis and reports generated by the CPC Web Application Software are in every respect clinically equivalent to the presentation, analysis and reports generated by the predicate device." |
2. Sample Size for Test Set and Data Provenance
- Sample Size: This information is not provided in the summary.
- Data Provenance: This information is not provided in the summary. It's unclear if the data was retrospective or prospective, or from what country/clinical setting it originated.
3. Number and Qualifications of Experts for Ground Truth
- Number of Experts: This information is not explicitly stated. The conclusion mentions "qualified sleep technicians" performing the comparison of the CPC analysis and reports. It doesn't specify if these technicians established a ground truth from an independent source or simply compared the outputs of the new device against the predicate.
- Qualifications of Experts: The experts are described as "qualified sleep technicians." No further details (e.g., years of experience, specific certifications like RPGST) are given for those involved in the comparison mentioned in the conclusion. For the intended use, "trained physicians" and "trained sleep technicians (RPGST)" are mentioned as users.
4. Adjudication Method for the Test Set
- Adjudication Method: This information is not provided. The summary describes a comparison, but not an adjudication process involving multiple experts resolving discrepancies. It implies a direct comparison or verification by the "qualified sleep technicians."
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
- Was an MRMC study done? No. The document does not describe a comparative effectiveness study involving human readers with and without AI assistance to quantify improvement. The study described is a comparison of the device's output to a predicate device's output, and the EKG signal itself.
- Effect Size: Not applicable, as an MRMC study was not described.
6. Standalone Performance Study
- Was a standalone study done? No, not in the traditional sense of measuring diagnostic accuracy against a clinical ground truth for only the algorithm. The device (M1 Recorder + CPC Software) was compared against a predicate system. The objective was to show clinical equivalence of the output, rather than the standalone diagnostic accuracy of the algorithm itself for a specific condition.
7. Type of Ground Truth Used for the Test Set
- Type of Ground Truth: The ground truth used was the output of the predicate device (Embletta Gold recorder and its CPC module software). The comparison was made against the EKG signal and the analysis/reports generated by the predicate, not against an independent clinical diagnosis, pathology, or direct patient outcomes. This implies a "reference standard" rather than a clinical ground truth for specific disease presence/absence.
8. Sample Size for the Training Set
- Sample Size: This information is not provided. The document makes no mention of a training set, which is typical for a device that is essentially porting an existing algorithm to a new platform/recorder.
9. How Ground Truth for Training Set Was Established
- How Ground Truth Was Established: This information is not provided as no training set or its associated ground truth establishment is described. The CPC algorithm itself appears to be a pre-existing module from the predicate device that was duplicated.
Summary of Gaps and Key Missing Information:
The provided 510(k) summary is typical for a predicate device comparison focusing on "clinical equivalence" of signal recording and output presentation rather than a de novo AI/ML diagnostic device with performance metrics based on diagnostic accuracy.
- No quantitative acceptance criteria are provided.
- No details on sample sizes (test or training sets) are given.
- No specific details on the number or detailed qualifications of experts involved in the comparison are provided beyond "qualified sleep technicians."
- No formal adjudication method is described.
- No MRMC or standalone diagnostic accuracy studies against a true clinical ground truth (e.g., sleep study diagnoses, pathology, outcomes) are described. The "ground truth" used for comparison was the predicate device's output.