Search Results
Found 3 results
510(k) Data Aggregation
(116 days)
Care Orchestrator is intended to support clinicians by tracking data on patients who are prescribed compatible therapy devices in accordance with the intended use of those therapy devices. Care Orchestrator provides remote patient data collection & viewing and is intended to be used by healthcare representatives (e.g., Physicians, Clinicians, Durable Medical Equipment providers) in conjunction with compatible non-life support therapy devices to adjust prescription and/or performance settings. In addition, Care Orchestrator can be used for analysis (automatic and manual scoring), display, retrieval, summarization, and report generation of data received from compatible monitoring devices used to categorize sleep-related events that help aid in the diagnosis of sleep-related disorders. The Home Sleep Testing function of Care Orchestrator is indicated for Adult use only. Care Orchestrator allows read-only access to patients. Care Orchestrator is intended to be used in hospital, institutional, provider, and home care settings.
Care Orchestrator is a cloud-based software platform that allows entities including physicians, other professional home and clinical staff, and durable medical equipment providers in a patient's therapy lifecycle the ability to manage patients and referrals, control access to patient information and therapy data, enhance patient compliance management workflow, and gain efficiencies in the overall patient therapy workflow. Care Orchestrator also provides a method for sleep data acquired from supported home sleep test (HST) devices to be imported, scored, and reviewed by a qualified user. The HST function of Care Orchestrator is for adult use only.
The intent of the Care Orchestrator sleep diagnostic functionality is to provide a capability that allows users to analyze, score, review, and generate clinical reports for HST acquisitions (i.e., sleep studies) from within a web browser. Users can upload acquisitions to Care Orchestrator and perform all of these actions from within the browser-based Care Orchestrator Client application.
Care Orchestrator software has undergone no significant changes since K181053. The Home Sleep Testing features, the subject of this submission, add a subset of Home Sleep Testing functionality.
The provided document is a 510(k) summary for the "Care Orchestrator with Home Sleep Testing" device. It outlines the device's indications for use, its comparison to a predicate device, and a brief statement about performance data. However, it does not contain a detailed study proving the device meets specific acceptance criteria, nor does it provide a table of acceptance criteria with reported device performance.
The document states:
- "Software verification and validation testing was conducted and documentation was provided as recommended by FDA's Guidance for Industry and FDA Staff, "Guidance for the Content of Premarket Submissions for Software Contained in Medical Devices."
- "Software verification and validation included software code reviews, automated testing, bench verification testing and labeling review."
This indicates that general software V&V activities were performed, but the specific details requested in your prompt (acceptance criteria table, sample sizes, ground truth establishment, MRMC studies, etc.) are not included in this summary.
Therefore, many of your questions cannot be answered from the provided text.
Here's a breakdown of what can be inferred or directly stated, and what information is missing:
1. A table of acceptance criteria and the reported device performance
- Missing from the document. The document states that software verification and validation testing was done, but it does not provide a table of specific acceptance criteria or quantitative performance metrics for those criteria.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
- Missing from the document. The summary mentions "bench verification testing" but does not detail the sample size of the test set, the nature of the data (e.g., patient data, synthetic data), its origin, or whether it was retrospective or prospective.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Missing from the document. The document does not specify if experts were used to establish ground truth for a test set, nor their number or qualifications.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Missing from the document. There is no information provided regarding any adjudication methods for a test set.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI vs. without AI assistance
- Missing from the document. The document does not mention any MRMC comparative effectiveness studies. The device's function involves automatic and manual scoring to aid in the diagnosis of sleep-related disorders, but how it impacts human reader performance through such studies is not discussed.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance evaluation was done
- Partially addressed. The device performs "analysis (automatic and manual scoring)". The "automatic scoring" component implies standalone algorithmic functionality. However, the document does not provide specific performance metrics for this standalone component.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Missing from the document. The type of ground truth used for any validation of the automatic scoring algorithm is not specified.
8. The sample size for the training set
- Missing from the document. This document focuses on the regulatory submission and does not disclose details about the training set size for any machine learning components.
9. How the ground truth for the training set was established
- Missing from the document. Similar to the test set, the method for establishing ground truth for any potential training set is not provided.
(110 days)
The REMbrandt software is intended for Polysomnography studies and allows recording, displaying, printing and storage of physiological signals to assist in the diagnosis of various sleep disorders and sleep related respiratory disorders. The REMbrandt software allows: Automated analysis of physiological signals that is intended for use only in adults; An optional audio/visual alert for user defined threshold on calibrated DC input. These alerts are not intended for use as life support such as vital signs monitoring or continuous medical surveillance in intensive care units. Sleep report templates which summarize recorded and scored sleep data using simple measures including count, average, maximum and minimum values as well as data ranges for trended values; The REMbrandt software does not provide any diagnostic conclusion about the patient's condition and is intended to be used only by qualified and trained medical practitioners, in research and clinical environments.
The REMbrandt software consists of three applications, DataLab, Analysis Manager and REMbrandt Manager, which run on a desktop or laptop computer and require no specialized hardware. They are Windows based applications used by trained medical professionals to investigate sleep disorders. The REMbrandt software collects and digitizes the electrical voltages of patient physiological signals. After collecting and saving the signals, it provides tools and detectors to analyze the signals, which aid in the interpretation of a sleep study. The software consists of four main functional areas: A. Data Acquisition & Display (REMbrandt DataLab), B. Scoring/Review & Analysis (REMbrandt Analysis Manager), C. Report Generation (REMbrandt Analysis Manager), D. Archiving & Data Management (REMbrandt Manager). The REMbrandt software contains eight (8) computer-assisted scoring analyzers. All automatic detection tools are provided as time saving aids to assist trained medical practitioners in the review and analysis of vast amounts of data. Each computer-assisted scoring analyzer runs a specific type of event marking or numeric value processing in the study and each can be enabled individually as needed at the discretion of the user. The scoring rule parameters used in the computer-assisted scoring analyzers depend on available input signals in the study as well as user defined settings. All output from computer assisted scoring analyzers require medical professional review and acceptance.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
Acceptance Criteria and Device Performance Study for REMbrandt
The REMbrandt software includes eight computer-assisted scoring analyzers. The provided study specifically evaluates the performance of the Respiratory, Limb Movement, and Snore Event Assisted-scoring Detectors.
1. Table of Acceptance Criteria and Reported Device Performance
The study evaluates the "Positive Percent Agreement (PPA)" and "False Detection Rate per Hour (FD/h)" for the assisted-scoring detectors. While explicit acceptance criteria (e.g., "PPA must be >X%") are not directly stated as numerical thresholds in the provided text, the objective is to establish that the REMbrandt performance is "equivalent to the performance of the predicate device" and that its performance is "comparable to the manual markings of expert reviewers" and "clinically equivalent to the Reference standard." We can infer that the reported values demonstrate this equivalence or comparability.
Table: REMbrandt Event Detection Assisted Scoring Detectors Performance
Event | PPA Mean | 95% CI (PPA) | FD/h Mean | 95% CI (FD/h) |
---|---|---|---|---|
Central Apnea | 99% | 98.3% to 99.4% | 0.7 | 0.4 to 1.5 |
Mixed Apnea | 99.5% | 98.6% to 99.8% | 0.3 | 0.1 to 0.7 |
Obstructive Apnea | 98% | 96.6% to 98.7% | 1.6 | 1.0 to 3.0 |
Hypopnea | 90.4% | 87.9% to 92.1% | 4.0 | 3.2 to 5.1 |
Arousal | 87.6% | 84.3% to 89.6% | 9.6 | 7.2 to 13.4 |
Limb Movement | 88.7% | 86.0% to 91.0% | 11.1 | 8.5 to 14.6 |
Snore | 87.1% | 84.1% to 89.5% | 12.3 | 9.5 to 16.2 |
The conclusion states that "Compared to the Reference standard, REMbrandt assisted-scoring detectors showed performance levels comparable to the manual markings of expert reviewers. The device performance is clinically equivalent to the Reference standard (majority rule) as constructed for this study, similar to results reported in the literature and to performance reported for other commercially available devices."
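As a rough illustration of how these two metrics relate, the sketch below computes PPA and FD/h from event sets. The epoch-index representation, the matching rule, and all numbers are assumptions for illustration only; the summary does not describe the exact event-matching procedure used in the study.

```python
# Illustrative sketch (not the submission's actual method): PPA is the share
# of reference-standard events the detector also marked; FD/h is the number
# of detector events with no matching reference event, per hour of recording.
# Events are modeled as sets of epoch indices, which is an assumption here.

def ppa_and_fdh(detected, reference, recording_hours):
    true_positives = len(detected & reference)     # events both agree on
    ppa = 100.0 * true_positives / len(reference)  # positive percent agreement
    false_detections = len(detected - reference)   # detector-only events
    fdh = false_detections / recording_hours       # false detections per hour
    return ppa, fdh

# Hypothetical detector output vs. expert-consensus reference
detected = {1, 2, 5, 7, 9}
reference = {1, 2, 5, 8}
ppa, fdh = ppa_and_fdh(detected, reference, recording_hours=8.0)
print(f"PPA = {ppa:.1f}%, FD/h = {fdh:.2f}")  # PPA = 75.0%, FD/h = 0.25
```

Note that PPA alone says nothing about over-calling; that is why the table pairs it with FD/h, which penalizes detections absent from the reference standard.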
2. Sample Size Used for the Test Set and Data Provenance
- Sample Size: 50 diagnostic PSG sleep studies (one study per subject) for each event type evaluated.
- Data Provenance: The text states, "All subjects involved in this study were adult (>18 years old) subjects with a clinical indication for a sleep study. The subject data were de-identified and applied as subject data to this study." The country of origin is not specified; the submission is to the US FDA, but that does not establish where the data were collected. The study is retrospective, as existing de-identified PSG studies were collected and then analyzed by the experts and the device.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
- Number of Experts: Three (3) experienced and certified PSG specialists.
- Qualifications of Experts: This group included "one board certified sleep specialist." The other two are described as "experienced and certified PSG specialists."
4. Adjudication Method for the Test Set
The adjudication method used to establish the "Reference standard" (ground truth) was a majority rule: "at least two out of three expert scorings (medical professionals certified on PSG recording and analysis) agree on the presence of an event within an epoch."
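The majority rule described above can be sketched as a small function. Each expert's markings are modeled as a set of epoch indices where that scorer marked the event present; this representation is an assumption for illustration only.

```python
from collections import Counter

# Sketch of the "2-of-3" majority rule used to build the Reference standard:
# an event is present in an epoch if at least two of the three expert
# scorings marked it there.

def majority_rule_reference(scorings, quorum=2):
    counts = Counter(epoch for marks in scorings for epoch in marks)
    return {epoch for epoch, n in counts.items() if n >= quorum}

# Hypothetical markings from the three experts for one event type
expert_a = {3, 4, 10, 11}
expert_b = {3, 4, 11, 20}
expert_c = {4, 10, 11}

reference = majority_rule_reference([expert_a, expert_b, expert_c])
print(sorted(reference))  # [3, 4, 10, 11]
```

Epoch 20 is dropped because only one expert marked it, which is exactly the behavior a 2-of-3 rule is meant to provide: no single scorer can add an event to the reference standard.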
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
No, a multi-reader multi-case (MRMC) comparative effectiveness study was not conducted to assess how much human readers improve with AI vs. without AI assistance. This study focused on the standalone performance of the REMbrandt assisted-scoring detectors compared to a human-established ground truth. The software's role is described as "time saving aids to assist trained medical practitioners," implying a human-in-the-loop workflow, but a comparative effectiveness study with and without AI assistance for human readers was not part of this submission's performance testing.
6. Standalone (Algorithm Only Without Human-in-the-Loop Performance)
Yes, a form of standalone performance was implicitly done. The REMbrandt assisted-scoring detectors were run independently ("Separate from the expert review, all PSG studies were also analyzed by REMbrandt assisted-scoring detectors at default values"). The output of these detectors (algorithm only) was then compared against the "Reference standard" (expert consensus). The device is described as providing "computer-assisted event marking analyzers" and that "All output from computer assisted scoring analyzers require medical professional review and acceptance," indicating it's an aid, not a diagnostic tool unto itself. However, the performance metrics (PPA and FD/h) were derived from the algorithm's output before human review.
7. Type of Ground Truth Used
The type of ground truth used was expert consensus, specifically defined by a "majority rule" (at least two out of three expert scorings agree) on the presence of an event within an epoch. The criteria for event scoring (Apnea, Hypopnea, Limb Movement, Snore, Arousals) were clearly defined and applied by the experts.
8. Sample Size for the Training Set
The document does not specify the sample size for the training set. The clinical study summary focuses exclusively on the validation/test dataset.
9. How the Ground Truth for the Training Set Was Established
The document does not provide information on how the ground truth for the training set (if any was used for algorithm development) was established. The "Clinical Study Summary" describes the ground truth process only for the test set.
(122 days)
The RemLogic software is intended for Polysomnography studies and allows recording, displaying, and storage of physiological signals to assist in the diagnosis of various sleep related respiratory disorders. The RemLogic software allows:
Automated analysis of physiological signals that is intended for use only in adults;
An optional audio/visual alert for user defined threshold on calibrated DC input. These alerts are not intended for use as life support such as vital signs monitoring or continuous medical surveillance in intensive care units.
Sleep report templates which summarize recorded and scored sleep data using simple measures including count, average, maximum and minimum values as well as data ranges for trended values;
The RemLogic software does not provide any diagnostic conclusion about the patient's condition and is intended to be used only by qualified and trained medical practitioners, in research and clinical environments.
The RemLogic Application is a software product that runs on a desktop or laptop computer and requires no specialized hardware. It is a Windows based application used by trained medical professionals to investigate sleep disorders. The RemLogic application collects and digitizes the electrical voltages of patient physiological signals. After collecting and saving the signals, it provides tools and analyzers to analyze the signals, which aid in the interpretation of a sleep study. The software consists of three main functional areas: Acquisition, Scoring & Review, and Reports. It also contains a number of computer-assisted scoring analyzers for various sleep events.
Here's a breakdown of the acceptance criteria and the study proving the device meets them, based on the provided text:
1. Table of Acceptance Criteria and Reported Device Performance
The document does not explicitly state pre-defined acceptance criteria for the PPA and FD/h metrics in a clear pass/fail format with specific numerical thresholds. Instead, it frames the conclusion as the device showing "performance levels comparable to the manual markings of expert reviewers" and being "clinically equivalent to the Reference standard."
However, we can infer the reported device performance from the "PPA and False Detection Rate Per Hour of RemLogic Event Detection Assisted Scoring Analyzers" table:
Event | Mean PPA | 95% CI PPA | Mean FD/h | 95% CI FD/h |
---|---|---|---|---|
Central Apnea | 98% | 96% to 99% | 2.1 | 1.2 to 4.1 |
Mixed Apnea | 98% | 96% to 99% | 2.4 | 1.3 to 5.1 |
Obstructive Apnea | 94% | 91% to 95% | 7.8 | 5.6 to 10.9 |
Hypopnea | 86% | 83% to 87% | 17.4 | 15.0 to 20.0 |
Arousal | 83% | 81% to 85% | 20.2 | 17.8 to 22.6 |
Limb Movement | 86% | 84% to 88% | 16.8 | 14.0 to 19.9 |
Snore | 85% | 83% to 88% | 17.8 | 14.9 to 20.9 |
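The document does not say how the 95% confidence intervals above were computed. One common choice for a binomial proportion such as PPA is the Wilson score interval, sketched here with made-up numbers; this is an assumed method for illustration, not the procedure the submitter necessarily used.

```python
import math

# Wilson score interval for a binomial proportion (an assumed CI method;
# the submission does not state which procedure was actually applied).

def wilson_interval(successes, n, z=1.96):
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical example: 430 of 500 reference events detected (point PPA 86%)
lo, hi = wilson_interval(430, 500)
print(f"95% CI: {100 * lo:.1f}% to {100 * hi:.1f}%")
```

Note the FD/h intervals in the table are asymmetric around the mean, which is consistent with a rate-based (rather than normal-approximation) interval, but again the actual method is not disclosed.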
2. Sample Size Used for the Test Set and Data Provenance
- Test Set Sample Size: 51 diagnostic PSG sleep studies (one study per subject) were used. This means 51 subjects were evaluated.
- Data Provenance: The document states that the data were collected from "diagnostic PSG sleep studies" and that "All subjects involved in this study were adult (>18 years old) subjects with a clinical indication for a sleep study." It does not explicitly state the country of origin, though the company (Embla Systems) is based in Canada. The study design is described as "clinical validation"; the studies were de-identified and randomized for the review process, and the evaluation appears to be a retrospective analysis of previously collected physiological data.
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Qualifications of Those Experts
- Number of Experts: Three experienced and certified PSG specialists.
- Qualifications of Experts: The experts included "one board certified sleep specialist" and two other "experienced and certified PSG specialists" who independently marked events.
4. Adjudication Method for the Test Set
- Adjudication Method: "Reference standard" was defined using "majority rule, that is, at least two out of three expert scorings (medical professionals certified on PSG recording and analysis) agree on the presence of an event within an epoch." This is a form of 2-out-of-3 (2+1) adjudication.
5. If a Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study Was Done, and the Effect Size of How Much Human Readers Improve with AI vs. Without AI Assistance
- No, a multi-reader multi-case (MRMC) comparative effectiveness study evaluating human reader improvement with AI assistance was not performed. The study aimed to establish the standalone performance of the RemLogic automated analysis against a consensus "Reference standard" derived from expert human scoring.
6. If a Standalone (i.e., Algorithm-Only, Without Human-in-the-Loop) Performance Evaluation Was Done
- Yes, a standalone performance evaluation of the RemLogic computer-assisted scoring analyzers was done. The results in the table ("RemLogic PPA and FD/h") represent the algorithm's performance compared to the established ground truth.
7. The Type of Ground Truth Used
- The ground truth used was expert consensus. Specifically, a "majority rule" (2 out of 3 experts agreeing) on the presence of an event within an epoch was used to define the "Reference standard." The criteria applied for scoring various events (Apnea, Hypopnea, Limb Movement, Snore, Arousals) were based on established clinical guidelines.
8. The Sample Size for the Training Set
- The document does not provide information regarding the sample size of any training set used for the RemLogic software's automated analysis algorithms. The study described is a validation study, focusing on the performance of the already-developed algorithms.
9. How the Ground Truth for the Training Set Was Established
- Since information on a training set is not provided, details on how its ground truth was established are also not available in this document. The document focuses solely on the clinical validation of the completed software.