510(k) Data Aggregation
(268 days)
HCC
(242 days)
HCC
Prism is a neurofeedback software device intended for relaxation and stress reduction through the use of EEG biofeedback. The device is indicated as an adjunctive treatment of symptoms associated with posttraumatic stress disorder (PTSD), to be used under the direction of a healthcare professional, together with other pharmacological and/or non-pharmacological interventions.
Prism is Software as a Medical Device (SaMD), prescribed by clinicians for the treatment of patients with PTSD as an adjunct to standard of care. Prism is a software device running on a laptop that uses EEG signal input from an EEG device (g.Nautilus PRO, K171669). Prism therapy consists of 15 thirty-minute sessions plus optional periodic refresher sessions. During a session, the patient is connected to 8 or more EEG channels and views an interactive audio/visual interface.
Acceptance Criteria and Device Performance Study for Prism Device
The Prism device is a neurofeedback software device intended for relaxation and stress reduction through the use of EEG biofeedback, specifically indicated as an adjunctive treatment for symptoms associated with Posttraumatic Stress Disorder (PTSD). The acceptance criteria for its effectiveness were defined by the primary performance measure of a prospective, single-arm, open-label, unblinded study.
The primary effectiveness hypothesis was that from baseline to the 3-month follow-up, at least 50% of study participants would experience a response to the treatment, defined as a 6-point reduction in the Clinician Administered PTSD Scale (CAPS-5) score from baseline.
1. Acceptance Criteria and Reported Device Performance
Table: Acceptance Criteria and Reported Device Performance

Acceptance Criteria (Primary Endpoint) | Reported Device Performance (Efficacy Analysis Set)
---|---
At least 50% of study participants will experience a 6-point (or more) reduction in CAPS-5 score from baseline to the 3-month follow-up visit. | 66.67% of participants showed a ≥6-point reduction in CAPS-5 at the 3-month follow-up.

Secondary performance measures (additional context, not primary acceptance criteria):
- 54.55% of participants showed a ≥10-point reduction in CAPS-5 at the 3-month follow-up.
- 50.00% of participants showed a ≥13-point reduction in CAPS-5 at the 3-month follow-up.
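As a purely illustrative sketch of the responder definition used above (a participant counts as a responder when their CAPS-5 score drops by at least the pre-specified number of points), the proportion meeting each threshold can be computed as follows. The scores and six-participant cohort are invented for illustration, not study data:

```python
# Illustrative sketch (not the sponsor's analysis code): classify
# responders by CAPS-5 change and report the proportion meeting each
# pre-specified reduction threshold.

def responder_rate(baseline, followup, min_reduction):
    """Fraction of participants whose CAPS-5 score dropped by at
    least `min_reduction` points from baseline to follow-up."""
    assert len(baseline) == len(followup)
    responders = sum(
        1 for b, f in zip(baseline, followup) if (b - f) >= min_reduction
    )
    return responders / len(baseline)

# Hypothetical CAPS-5 scores for a 6-participant cohort.
baseline = [45, 38, 52, 41, 36, 48]
followup = [30, 36, 40, 34, 35, 33]

for threshold in (6, 10, 13):
    rate = responder_rate(baseline, followup, threshold)
    print(f">= {threshold}-point reduction: {rate:.2%}")
```

The same thresholds (6, 10, 13 points) are the ones reported for the Prism efficacy analysis; applying the function to the real per-participant scores (not available in this summary) would reproduce the reported percentages.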
Safety Acceptance Criteria (Implicit):
The pre-specified safety goals of the study were met, indicating an acceptable safety profile. This would implicitly mean that the incidence and severity of adverse events were within acceptable clinical limits for the target population and device type.
2. Sample Size and Data Provenance
- Test Set (Clinical Study Participants):
  - Screened: 101 subjects
  - Full Analysis Dataset: 79 subjects
  - Efficacy Analysis Set: 66 subjects (the primary sample size for evaluating effectiveness against the acceptance criteria)
  - Per-Protocol Analysis Set: 63 subjects
- Data Provenance: The study was a prospective, single-arm, open-label, unblinded study conducted at four sites in Israel (outside the United States, OUS) and one US site, i.e., a combination of Israeli and US data.
3. Number of Experts and Qualifications for Ground Truth
The document does not explicitly state the number of experts used to establish ground truth for the test set. However, it mentions:
- Diagnosis of PTSD: Established according to the DSM-5 criteria and CAPS-5.
- Clinician assessments: Performed and documented by the investigator or qualified and trained designee.
This implies that the ground truth for PTSD diagnosis and symptom severity (CAPS-5 scores) was established by licensed healthcare professionals (investigators or their designees) who were qualified to administer and interpret DSM-5 and CAPS-5. While specific qualifications (e.g., "radiologist with 10 years of experience") are not provided, "qualified and trained designee" suggests adherence to clinical standards for diagnostic assessment.
4. Adjudication Method for the Test Set
The document does not describe a formal adjudication method (like 2+1 or 3+1 consensus) for the initial PTSD diagnosis or CAPS-5 scoring. The assessments were performed by the "investigator or qualified and trained designee." This implies that the initial assessment by a single qualified professional served as the ground truth.
5. Multi-Reader Multi-Case (MRMC) Comparative Effectiveness Study
There is no indication of a Multi-Reader Multi-Case (MRMC) comparative effectiveness study comparing human readers with and without AI assistance. The study described is a single-arm clinical study evaluating the device's adjunctive efficacy.
6. Standalone (Algorithm Only) Performance
The study evaluates the Prism software in conjunction with an EEG device (g.Nautilus PRO) as an adjunctive treatment for PTSD. Therefore, this is not a study of standalone algorithm performance without a human in the loop: the device provides visual feedback to the patient to aid in learning to control EEG activity, and it is used under the direction of a healthcare professional.
7. Type of Ground Truth Used
The ground truth for effectiveness was based on:
- Clinical Assessments: Clinician Administered PTSD Scale (CAPS-5) scores. CAPS-5 is a structured interview administered by a trained clinician and is considered the gold standard for PTSD diagnosis and severity assessment.
- Self-Report Questionnaires: (Secondary measures, not primary ground truth for the acceptance criteria) PTSD Checklist for DSM-5 (PCL-5), Emotion Regulation Questionnaire (ERQ), Patient Health Questionnaire (PHQ-9), Clinical Global Impression (CGI).
8. Sample Size for the Training Set
The document does not provide information regarding the training set size or how the device's algorithms were trained. The provided text describes the clinical study for device validation, not the development or training phase of the software.
9. How Ground Truth for Training Set was Established
As information on the training set is not provided, the method for establishing its ground truth is also not described in this document.
(213 days)
HCC
Freespira is intended for use as a relaxation treatment for the reduction of stress by leading the user through guided and monitored breathing exercises. The device is indicated as an adjunctive treatment of symptoms associated with panic disorder (PD) and/or posttraumatic stress disorder (PTSD), to be used under the direction of a healthcare professional, together with other pharmacological and/or non-pharmacological interventions.
Freespira® is intended for use as a relaxation treatment for the reduction of stress by leading the user through guided and monitored breathing exercises. The device is indicated as an adjunctive treatment of symptoms associated with panic disorder (PD) and/or posttraumatic stress disorder (PTSD), to be used under the direction of a healthcare professional, together with other pharmacological and/or non-pharmacological interventions. Freespira is authorized and overseen by a licensed healthcare provider. Patients are trained to use the Freespira Sensor and the Freespira Mobile App to measure and display their exhaled carbon dioxide (EtCO₂) level and respiration rate (RR), and to see how different breathing habits affect EtCO₂ levels.
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria (Primary Effectiveness Hypothesis) | Reported Device Performance (2-month post-treatment follow-up) |
---|---|
At least 50% of study participants will experience a response to the treatment (defined as a 6-point reduction in CAPS-5 score from baseline). | 93% of participants demonstrated a response (95% CI: 77-99%). |
2. Sample size used for the test set and the data provenance
The document refers to a "prospective, single arm, un-blinded investigation of Freespira" enrolling patients with a primary diagnosis of PTSD. A specific test-set sample size is not stated; results are reported only as percentages with confidence intervals. The interval itself is informative, however: a 93% response rate with a 95% CI of 77-99% is consistent with a cohort of roughly 30 participants (for example, 28 responders out of 30 is 93.3%, with an exact Clopper-Pearson 95% CI of approximately 78-99%).
The data provenance is prospective clinical data collected from patients with PTSD. The country of origin is not explicitly stated, but the company, Palo Alto Health Sciences, Inc., is based in Kirkland, WA, U.S.A., which suggests the study was likely conducted within the United States.
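The sample-size reasoning above can be checked numerically with an exact (Clopper-Pearson) binomial confidence interval. The sketch below is stdlib-only Python, and the 28-of-30 responder count is an assumption used for illustration, not a figure from the submission:

```python
# Sketch: check which cohort size is consistent with a 93% response
# rate and a reported exact 95% CI of roughly 77-99%.
from math import comb

def binom_tail_ge(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion,
    found by bisection on the binomial tail probability."""
    def bisect(too_low, lo, hi):
        for _ in range(60):
            mid = (lo + hi) / 2
            if too_low(mid):
                lo = mid
            else:
                hi = mid
        return (lo + hi) / 2
    lower = 0.0 if k == 0 else bisect(
        lambda p: binom_tail_ge(k, n, p) < alpha / 2, 0.0, 1.0)
    upper = 1.0 if k == n else bisect(
        lambda p: 1 - binom_tail_ge(k + 1, n, p) > alpha / 2, 0.0, 1.0)
    return lower, upper

# 28 responders out of 30 (an assumed cohort) gives 93.3% with an
# exact interval close to the reported 77-99%.
lo, hi = clopper_pearson(28, 30)
print(f"28/30 = {28/30:.1%}, exact 95% CI ({lo:.1%}, {hi:.1%})")
```

Scanning nearby cohort sizes the same way shows that substantially larger cohorts would produce a noticeably narrower interval, which is why the reported CI points to a small study.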
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
The ground truth was established using the Clinician Administered PTSD Scale (CAPS-5). This is a clinician-administered assessment, implying that a qualified clinician (an "expert") was involved in administering and scoring the scale. The document does not specify the exact number of clinicians involved in assessing each patient or their specific qualifications (e.g., years of experience, specific medical specialty), but it implies the use of healthcare professionals capable of administering and interpreting this validated psychiatric assessment tool.
4. Adjudication method for the test set
The document does not detail an adjudication method for the CAPS-5 scores (e.g., 2+1, 3+1). Since CAPS-5 is a standardized, clinician-administered scale, it's typically administered by a single trained clinician. If there were multiple clinicians involved in assessments over time, standard clinical practice would involve training and inter-rater reliability checks, but these are not explicitly described as formal adjudication steps for the ground truth in this submission.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with AI versus without AI assistance
No, an MRMC comparative effectiveness study involving human readers and AI assistance was not mentioned or presented in this document. The study evaluates the Freespira device as a standalone treatment, not as an assistive tool for human readers.
6. If a standalone (i.e. algorithm only, without human-in-the-loop) performance study was done
Yes, a standalone performance study was done in the context of the device's clinical effectiveness. The Freespira device, which guides users through breathing exercises and measures EtCO2 and RR, was evaluated in a clinical trial to determine its efficacy as an adjunctive treatment for PTSD symptoms. The primary endpoint related to the reduction in CAPS-5 scores directly measures the outcome of using the device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The primary ground truth used was clinically validated outcome data based on the Clinician Administered PTSD Scale (CAPS-5) scores. This scale is administered by clinicians and reflects a standardized assessment of PTSD symptoms and severity. Additionally, patient-reported questionnaires like PHQ-9, SF-36, and PDSS were used as secondary assessments.
8. The sample size for the training set
The document does not explicitly mention a training set in the context of AI/algorithm development. The clinical study described appears to be an evaluation of the device as a whole (hardware + software + therapeutic protocol) rather than a specific validation of an AI algorithm trained on a dataset. The device's function involves measuring EtCO2 and RR and providing biofeedback, which might rely on pre-programmed logic rather than a machine learning model that requires a distinct "training set."
9. How the ground truth for the training set was established
As no specific "training set" for an AI algorithm is mentioned, the method for establishing its ground truth is not applicable based on the provided text. The device primarily works by guiding users based on physiological measurements, rather than making diagnostic or predictive outputs that would require a ground truth established for an AI model.
(267 days)
HCC
The GrindCare System is indicated to aid in the evaluation and management of sleep bruxism by reducing the temporalis muscle EMG activity during sleep.
The GrindCare System is a portable electromyographic (EMG), electrical stimulation, and biofeedback device. It consists of a Sensor that is adhered to the skin over the temporalis muscle by means of an adhesive, disposable, single-use GelPad. The System also includes a Docking Station, USB Cable, and Power Adaptor. The Sensor and Docking Station record and store EMG activity data, which is transferred from the Docking Station to the GrindCare Mobile App, which allows the user to review grinding and stimulation data and enter diary notes. The Sensor uses EMG to sense contraction of the temporalis muscle that is associated with bruxing events. In response to the EMG-measured contraction, it delivers mild electrical stimulation that is intended to relax the muscle and inhibit the bruxing event.
The provided document describes the GrindCare System, a biofeedback device intended to aid in the evaluation and management of sleep bruxism by reducing temporalis muscle EMG activity during sleep. The substantial equivalence determination is based on a comparison to a predicate device (GrindCare, K092675), which is a previous version of the same system.
Based on the provided text, here's a breakdown of the requested information:
1. A table of acceptance criteria and the reported device performance
The document does not explicitly present a "table of acceptance criteria" in the typical sense with pass/fail thresholds. Instead, it describes various performance tests and compares the subject device's functionality to that of the predicate device. The underlying acceptance criterion for each aspect is essentially that the subject device's performance is equivalent to or better than the predicate device, or that it meets relevant safety and performance standards.
Acceptance Criteria (Implied) | Reported Device Performance and Evidence |
---|---|
Functional Equivalence to Predicate Device: | |
- Wireless Charging Functionality | Performance testing demonstrated the functionality of the wireless charging. (p. 6) |
- Communication Functionality (Sensor to Docking Station, Docking Station to App) | Performance testing demonstrated the functionality of communication between the Sensor and the Docking Station and between the Sensor and the GrindCare App. (p. 7) |
- Data Storage and Transfer Functionality | Performance testing demonstrated the functionality of the data storage and transfer. (p. 7) The Docking Station can store up to 5 years of data. (p. 7) |
- PCB Functionality (despite size reduction) | Performance testing demonstrated the functionality of the PCB. (p. 7) |
- Microprocessor Functionality (equivalent to predicate) | Performance testing demonstrated the functionality of the microprocessor. (p. 7) No differences impact functionality or performance. (p. 7) |
- GrindCare App Functionality (for data viewing) | Performance testing demonstrated the functionality of the GrindCare App. (p. 7) |
- Equivalent Grind Detection Algorithm Performance | Performance testing demonstrated equivalent performance of the grind detection algorithm to the predicate device. (p. 7) The device still provides the identical stimulation signal. (p. 7) |
Clinical Effectiveness: | |
- Reduction in EMG events per hour | The clinical study demonstrated that the modified detection algorithm decreased the number of EMG events per hour by 32.45% over baseline. (p. 8) The device demonstrated a statistically significant reduction on the number of EMG events per hour on a representative patient population. (p. 8) |
Safety and Compliance with Standards: | |
- Electrical Safety (IEC 60601-1, 60601-2-40) | Complies with IEC 60601-1: 2005 + CORR1:2006 + CORR.2:2007 + AM1:2012 and IEC 60601-2-40: 1998. (p. 7-8) |
- Usability (IEC 60601-1-6, 60601-1-11) | Complies with IEC 60601-1-6: 2010 + A1:2013 and IEC 60601-1-11: 2010. (p. 8) Usability testing demonstrated users and prescribers could perform tasks. (p. 8) |
- Electromagnetic Compatibility (IEC 60601-1-2) | Complies with IEC 60601-1-2 Edition 3: 2007-03. (p. 8) |
- Battery Safety and Certification (IEC 62133) | Complies with IEC 62133 Edition 2.0: 2012-12. (p. 8) |
- Wireless Coexistence | Testing demonstrated wireless coexistence with other RF transmitters common in the home environment. (p. 8) |
- Biocompatibility of GelPad (ISO 10993-5, 10993-10) | Complies with ISO 10993-5:2009 and ISO 10993-10:2010. (p. 8) |
- Shelf Life (GelPads, 24 months) | Shelf-life testing demonstrated the device continues to meet performance specifications for the 24-month shelf life of the GelPads. (p. 8) |
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Test Set Sample Size: The document states that the clinical data demonstrated a statistically significant reduction in EMG events per hour "on a representative patient population." However, the specific sample size for this clinical study (test set) is not provided in the document.
- Data Provenance: The document does not specify the country of origin of the data or whether the clinical study was retrospective or prospective. It is implied to be a conducted study for the purpose of this submission ("The clinical study demonstrated...").
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
This information is not provided in the document. The device automates the detection of EMG activity and applies stimulation. The "ground truth" for the clinical study appears to be the measured EMG activity itself, which the device processes. There is no mention of human experts interpreting raw EMG data or establishing a "ground truth" for bruxing events in the test set, beyond the device's own detection.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
This information is not applicable as the document does not describe a scenario involving multiple human readers or interpretations needing adjudication for the test set. The clinical study seems to be based on the device's ability to measure and respond to EMG activity.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with AI versus without AI assistance
- MRMC Study: No, an MRMC comparative effectiveness study was not explicitly described or referenced. The study described focuses on the device's direct effect on EMG activity reduction, not on how human readers (e.g., clinicians) perform better with or without AI assistance from this specific device. The device is a biofeedback system that directly interacts with the patient, not an AI diagnostic aid for human readers.
- Effect Size of Human Reader Improvement: This is not applicable as an MRMC study was not described. The provided effect size of 32.45% reduction is in EMG events due to the device's intervention, not an improvement in human reader performance.
6. If a standalone (i.e. algorithm only, without human-in-the-loop) performance study was done
Yes, in essence, the "clinical study" section (p. 8), which demonstrated the 32.45% reduction in EMG events, represents the standalone performance of the algorithm (as implemented in the device) in reducing temporalis muscle EMG activity. The device is designed to automatically detect and respond to bruxing events without continuous human intervention during the night.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The "ground truth" for the clinical study described appears to be the measured temporalis muscle EMG activity itself. The device's performance is measured by its ability to reduce this objectively measurable physiological activity. The "grind threshold" is automatically calculated based on background noise (p. 7), suggesting an automated, objective measure rather than subjective expert consensus or pathology results.
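The summary states that the grind threshold is automatically calculated from background noise (p. 7). As a purely hypothetical illustration of that kind of noise-referenced thresholding (the actual GrindCare detection algorithm is not disclosed in the document, and the constant `k` below is an invented tuning parameter), a generic EMG event counter might look like this:

```python
# Hypothetical illustration, NOT the GrindCare algorithm: derive a
# detection threshold from background EMG noise, then count
# supra-threshold bursts as candidate grinding events.
from statistics import mean, stdev

def noise_threshold(baseline_emg, k=4.0):
    """Threshold = noise mean + k standard deviations. `k` is an
    assumed tuning constant, not a documented device parameter."""
    return mean(baseline_emg) + k * stdev(baseline_emg)

def count_events(emg, threshold):
    """Count rising crossings of the threshold (one count per burst)."""
    events, above = 0, False
    for sample in emg:
        if sample > threshold and not above:
            events += 1
        above = sample > threshold
    return events

quiet = [1.0, 1.2, 0.9, 1.1, 1.0, 0.8, 1.1, 0.9]       # baseline noise (a.u.)
night = [1.0, 1.1, 6.0, 6.5, 1.0, 0.9, 7.2, 1.1, 1.0]  # two synthetic bursts

t = noise_threshold(quiet)
print(f"threshold={t:.2f}, events={count_events(night, t)}")
```

Dividing the event count by recording hours gives the "EMG events per hour" metric that the clinical study reports as reduced by 32.45% over baseline.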
8. The sample size for the training set
The document does not specify the sample size for any training set. Given the device is an evolution of a predicate with a "modified detection algorithm," it's possible such data was used, but it is not mentioned in this summary.
9. How the ground truth for the training set was established
This information is not provided in the document.
(246 days)
HCC
The Pacifier Activated Lullaby (PAL®) encourages and reinforces effective non-nutritive sucking of premature infants. This is accomplished by giving positive feedback to the infant in the form of music or a mother's voice as auditory input in direct response to sucking.
The Pacifier Activated Lullaby ("PAL®") has a player module, pacifier sensor module and power supply. The pacifier sensor module senses the strength and duration of an infant's sucking on an attached pacifier and responds with music or a recorded sound (i.e. mother's voice) contingent on the infant's sucking. The pacifier module consists of a wired transmitter with a built-in pressure transducer that connects to the pacifier, and a receiver. The receiver decodes the signal and plays music or a recorded sound for a predetermined length of time via a speaker to the infant, contingent on his/her sucking on the pacifier transmitter. This action occurs when the sucking strength and duration exceed preset values. The user can control the sensitivity of the transducer.
This application describes modifications to the FDA cleared K010388 P.A.L. System.
The provided document does not contain any information about acceptance criteria or a study that proves the device meets acceptance criteria in the way typically expected for clinical performance evaluation (e.g., sensitivity, specificity, accuracy against a ground truth).
This document is a 510(k) Premarket Notification for the Pacifier Activated Lullaby (PAL®). Its primary purpose is to demonstrate substantial equivalence to a previously legally marketed device (predicate device), not to prove clinical efficacy or accuracy against specific performance metrics with a clinical study.
The "Performance Testing" section (I) lists several types of engineering and compliance tests, but these are related to software, electrical safety, electromagnetic compatibility, and physical stability, rather than clinical performance for its stated indication. The document explicitly states: "Differences do not raise different issues of safety or performance, and issues of safety are assessed in the risk analysis. The Pacifier Activated Lullaby (PAL®) is expected to perform per its Indications for Use and is substantially equivalent to the predicate device."
Therefore, I cannot populate the table or answer the other questions as the information is not present in the provided text. The document focuses on regulatory compliance and substantial equivalence, not a clinical study proving specific performance criteria.
(273 days)
HCC
NFANT® Feeding Solution is intended to measure movement of the nipple during non-nutritive suck (NNS) or nutritive suck (NS).
NFANT® Feeding Solution is comprised of a portable multi-use electronics unit (NFANT SSB Sensor) powered by a non-rechargeable coin cell battery, a single-use bottle coupling (NFANT Coupling), and a mobile application which acts as a user interface and data display (NFANT App). Nipple movement data during non-nutritive suck or nutritive suck is transmitted to a mobile device and displayed for clinician interpretation. Data from past feedings can also be securely transferred, stored, and retrieved by authorized users from a cloud-based database (NFANT DBMS) for comparisons between feedings.
When in use, the NFANT Coupling has a commercially available or prescribed bottle or pacifier nipple placed on one end and a normal bottle on the other. The NFANT SSB Sensor is snapped in place to mate with the NFANT Coupling. The user "wakes up" (activates) the NFANT SSB Sensor for streaming and data collection. An infant then undergoes normal non-nutritive sucking with a pacifier (no liquid swallow) and/or nutritive sucking with a bottle nipple during normal feeding sessions (liquid swallow). Nipple movement data is gathered and displayed on the mobile application for clinical interpretation. Safe oral feeding requires coordination of sucking, swallowing and breathing, which involves integration, maturation and coordination of multiple sensorimotor systems. NFANT provides clinicians with objective data regarding nipple movement during non-nutritive sucking (NNS) and nutritive sucking (NS).
As the infant sucks, the NFANT Coupling allows bottle fluid (when applicable) to flow from the bottle to the nipple. The NFANT Coupling maintains a plug insert that provides a two compartment membrane sealing the NFANT Coupling and separating the bottle fluid from pressure sensors contained in the NFANT SSB Sensor housing. With assembly, ports on the NFANT SNAP housing engage each respective membrane creating a seal between the NFANT SSB Sensor and NFANT Coupling while also creating two sealed chambers between the membrane and a respective pressure sensor.
NFANT has one disposable component, the NFANT Coupling that comes in contact with the fluid. After a single use by a patient, the NFANT Coupling is disposed of in waste. The NFANT SSB Sensor does not come in contact with the feeding fluids and is reused specific to the infant with each NFANT Coupling usage. The NFANT App component is software only. NFANT is sold non-sterile.
The provided text is a 510(k) summary for the NFANT® Feeding Solution. It describes the device, its intended use, and its similarities and differences to a predicate device. However, it does not contain detailed information about specific acceptance criteria for device performance or a comprehensive study report proving the device meets those criteria in the way typically expected for a clinical trial or performance study focusing on diagnostic accuracy.
The document primarily focuses on demonstrating substantial equivalence to a predicate device by:
- Bench testing: Verifying seals, calibrating pressure inputs, measuring nipple dynamics, and validating accelerometer tilt readings.
- Biocompatibility testing: As per ISO 10993-1.
- Software verification and validation: For firmware, mobile app, and web portal.
The "Summary of Testing" section is very high-level and only states that "All of the testing results for the NFANT Feeding Solution passed within the acceptance parameters." It does not report specific quantitative acceptance criteria or the numerical performance results against those criteria. It also doesn't detail clinical performance metrics like sensitivity, specificity, or accuracy in relation to a specific clinical outcome or ground truth established by experts.
Therefore, many of the requested items (e.g., sample size for test set, number of experts for ground truth, adjudication method, MRMC study, standalone performance) are not explicitly mentioned or detailed in the provided text.
Here's a breakdown of what can be extracted and what is missing:
1. A table of acceptance criteria and the reported device performance
Acceptance Criteria | Reported Device Performance |
---|---|
Bench Testing: | |
Verify proper seals of NFANT SSB Sensors and NFANT Assembly to enable pressure measurements | Passed |
Calibrate known pressure inputs to NFANT SSB Sensor output | Passed |
Measure nipple dynamics for clinical interpretation | Passed |
Validate accelerometer tilt readings to known angular inputs to determine NFANT Assembly orientation | Passed |
Biocompatibility: | |
ISO 10993-1 requirements | Passed |
Shelf Life Testing (accelerated aging): | Passed (real-time aging to continue) |
Software Verification and Validation: (for embedded firmware, NFANT App, and Web Portal) | Passed |
Note: The document states that "All of the testing results for the NFANT Feeding Solution passed within the acceptance parameters" but does not provide the specific quantitative acceptance parameters themselves (e.g., "pressure measurements should be within X% of true pressure").
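One of the bench tests above, "calibrate known pressure inputs to NFANT SSB Sensor output", is generically a fit of raw sensor readings against reference pressures. As a hypothetical illustration only (the fitting method, units, and all numbers below are invented, not NFANT data), an ordinary least-squares calibration might look like this:

```python
# Illustrative only: linear least-squares calibration of raw sensor
# counts against known applied pressures - the generic form of a
# "known pressure inputs vs. sensor output" bench test.

def linear_fit(x, y):
    """Ordinary least squares for y = gain * x + offset."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    gain = sxy / sxx
    return gain, my - gain * mx

# Known pressures applied on a bench rig (cmH2O) vs. raw ADC counts.
pressure = [0.0, 5.0, 10.0, 15.0, 20.0]
counts   = [512, 762, 1012, 1262, 1512]   # perfectly linear here

gain, offset = linear_fit(counts, pressure)
print(f"pressure = {gain:.4f} * counts + {offset:.2f}")
```

An acceptance criterion for such a test would typically bound the residual error of the fitted line, which is exactly the kind of quantitative acceptance parameter the 510(k) summary omits.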
2. Sample size used for the test set and the data provenance
Sample Size for Test Set: Not specified. The document mentions "preliminary tests" and "bench testing" but does not provide the number of units or data points used in these tests.
Data Provenance: Not specified. These appear to be laboratory/bench tests, not clinical data from specific countries.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
Not applicable/Not specified. The ground truth for the bench tests appears to be established by known physical inputs (e.g., "known pressure inputs," "known angular inputs"), rather than expert interpretation of clinical data. There is no mention of experts establishing a clinical ground truth for device performance.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
Not applicable/Not specified. This pertains to clinical ground truth establishment, which isn't detailed in the provided bench testing summary.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of human reader improvement with AI versus without AI assistance
Not done / Not specified. This document describes a device that measures physical parameters (nipple movement via pressure sensors) for clinician interpretation, not an AI-assisted diagnostic tool that would typically undergo an MRMC study. The device provides "objective data regarding nipple movement" for clinicians to interpret, but it does not claim to offer AI-driven interpretation or diagnosis.
6. If a standalone (i.e. algorithm only, without human-in-the-loop) performance study was done
Not applicable. The NFANT Feeding Solution is described as a "passive measurement system" that displays data "for clinician interpretation." It's not an algorithm that produces a diagnostic output on its own; rather, it provides data to a human clinician for their assessment.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
For the bench testing, the ground truth was "known inputs" for pressure and angular measurements. For other aspects like biocompatibility and software, the ground truth would be compliance with established standards and specifications. There is no mention of a ground truth based on expert consensus, pathology, or outcomes data related to disease diagnosis or clinical prognosis.
8. The sample size for the training set
Not applicable/Not specified. This device is not described as a machine learning/AI algorithm that requires a training set. It is a sensor-based measurement device.
9. How the ground truth for the training set was established
Not applicable/Not specified. As it's not an AI/ML device with a training set, this information is not relevant.
(193 days)
HCC
The Canary Breathing™ System is intended for use as a relaxation treatment for the reduction of stress by leading the user through guided and monitored breathing exercises. The device is indicated as an adjunctive treatment of symptoms associated with panic disorder, to be used under the direction of a healthcare professional, together with other pharmacological and/or non-pharmacological interventions.
The CBS is a biofeedback device that provides the user with a series of tone-guided breathing exercises and an awareness of his or her physiological data. The CBS uses standard biofeedback concepts to teach the patient to regulate their end-tidal CO2 (EtCO2) and respiratory rate (RR). The user's physiological data display allows the patient to see 1) the actual rate of their breathing and 2) how changes in breathing mechanics (depth and volume) affect EtCO2 levels. The CBS consists of a biofeedback training software program (mobile app) and an EtCO2 sensor (capnometer) used with a nasal cannula. The mobile app guides the user through an exercise and displays physiological data, while the sensor collects physiological data and feeds it to the mobile application for biofeedback. The patient's EtCO2 levels and RR are relayed from the capnometer to the mobile application via Bluetooth and are displayed on a tablet device, through the mobile application.
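The two quantities the CBS feeds back, respiratory rate and EtCO2, can both be derived from a capnogram (the CO2-vs-time waveform a capnometer produces). The sketch below is an illustration only, not the CBS's actual firmware: the `analyze_capnogram` helper and its midline-crossing breath detection are assumptions. It counts breaths as upward crossings of the waveform's midline and takes EtCO2 as the per-breath CO2 peak.

```python
import numpy as np

def analyze_capnogram(co2, fs):
    """Estimate respiratory rate (breaths/min) and EtCO2 (mmHg)
    from a CO2 waveform sampled at fs Hz.

    Breaths are detected as upward crossings of the waveform's
    midline; EtCO2 is the peak CO2 within each breath cycle.
    """
    co2 = np.asarray(co2, dtype=float)
    midline = (co2.max() + co2.min()) / 2.0
    above = co2 > midline
    # Upward midline crossings mark the start of each exhalation.
    crossings = np.flatnonzero(~above[:-1] & above[1:])
    duration_min = len(co2) / fs / 60.0
    rr = len(crossings) / duration_min
    # EtCO2: mean of per-breath CO2 peaks between successive crossings.
    peaks = [co2[a:b].max() for a, b in zip(crossings[:-1], crossings[1:])]
    etco2 = float(np.mean(peaks)) if peaks else float(co2.max())
    return rr, etco2

# Simulated 60 s capnogram: 12 breaths/min, exhalation plateau near 40 mmHg.
fs = 50
t = np.arange(0, 60, 1 / fs)
co2 = 20 + 20 * (np.sin(2 * np.pi * 0.2 * t) > 0)  # square-ish breath cycles
rr, etco2 = analyze_capnogram(co2, fs)
```

On this idealized waveform the sketch recovers 12 breaths/min and an EtCO2 of 40 mmHg; real capnograms would need smoothing and artifact rejection before any such crossing logic.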
Here's an analysis of the acceptance criteria and supporting studies for the Canary Breathing™ System, based on the provided 510(k) summary:
1. Table of Acceptance Criteria and Reported Device Performance
Testing Type | Acceptance Criteria (Implied) | Reported Device Performance |
---|---|---|
Non-Clinical Testing | ||
CBS Design Verification (Functional) | All functional requirements met, product specifications satisfied. | The CBS passed all functional testing and met all product specification requirements. |
CBS Integration Testing | Accurate and successful transmission of RR, EtCO₂, and error information to the CBS software. | RR, ETCO₂ and error information were successfully and accurately transmitted to the Nexus tablet running the CBS software. |
Software Verification and Validation | All requirements of the Software Requirements Specification (SRS) met. | The Canary CO₂ Sensor Software met all requirements of the SRS. |
Capnometer Equivalency Verification (Accuracy) | Performance equivalent to the predicate LoFlo C5 CO₂ capnometer for EtCO₂ and respiratory rate. | The performance of the Canary CO₂ Sensor is equivalent to the LoFlo predicate device, for EtCO₂ and respiratory rate measurements. |
Electrical Safety and Electromagnetic Compatibility | Compliance with IEC 60601-1, IEC 60601-1-2, IEC 60601-1-4, IEC 60601-1-11 standards. | The CBS met all acceptance criteria in accordance with: IEC 60601-1:1988, IEC 60601-1-2:2007, IEC 60601-1-4:2000, IEC 60601-1-11:2010. |
Clinical Testing | ||
Efficacy for Panic Disorder (Adjunctive Treatment) | Long-lasting reductions in panic attack frequency/severity, anxiety, avoidance behaviors; improvements in mood/quality of life; normalization of CO₂. | "Findings of these empirical studies have shown that completing the CART training led to long-lasting reductions in panic attack frequency and severity, anxiety symptoms, avoidance behaviors, along with improvements in mood and quality of life in the majority of patients. Causal analysis linked the normalization in CO2 to the observed improvements." |
Safety | No adverse events reported. | "Respiratory training using the CART protocol is safe, and no adverse events associated with the use of the device have been reported." |
2. Sample Size Used for the Test Set and Data Provenance
The 510(k) summary does not describe a separate, de novo clinical test set specifically for the Canary Breathing System (CBS). Instead, it relies on previously published clinical trials of the Capnometry-Assisted Respiratory Training (CART) protocol, which the CBS aims to implement.
Therefore:
- Sample Size for Test Set: Not explicitly stated for a CBS-specific test set. The clinical efficacy claims are based on studies of the CART protocol. The general provenance is "Dr. Meuret and colleagues at Stanford University, Boston University and Southern Methodist University" in the US.
- Data Provenance: The referenced studies are "randomized clinical trials." They appear to be prospective studies conducted in the US (Southern Methodist University, Stanford University, Boston University).
3. Number of Experts Used to Establish the Ground Truth for the Test Set and Their Qualifications
Since no de novo clinical test set for the CBS is described, this information is not directly provided for this specific device's ground truth establishment. However, the CART protocol, which forms the basis for the CBS's clinical claims, was developed by:
- Experts: Alicia Meuret, Ph.D., a clinical psychologist and Associate Professor in the Department of Psychology at Southern Methodist University, in collaboration with colleagues at Stanford University and Boston University.
- Qualifications: Dr. Meuret is a clinical psychologist and Associate Professor. The qualifications of her colleagues are not detailed but are implied to be medical/clinical researchers in relevant fields given the academic institutions.
4. Adjudication Method for the Test Set
Not applicable, as no de novo clinical test set for the CBS is described. The referenced CART studies would have had their own methodologies for data collection and outcome assessment, but this detail is not present in the 510(k) summary.
5. If a Multi Reader Multi Case (MRMC) Comparative Effectiveness Study Was Done, If So, What Was the Effect Size of How Much Human Readers Improve with AI vs Without AI Assistance
Not applicable. The Canary Breathing System is a biofeedback device that directly interacts with the user to guide breathing. It is not an AI-assisted diagnostic device where human "readers" (e.g., radiologists interpreting images) would use it to improve their performance with or without AI.
6. If a Standalone (i.e., algorithm only without human-in-the-loop performance) Was Done
The device itself is a "standalone" system in the sense that the biofeedback algorithm operates without a human operator directly interpreting its output for diagnostic purposes. The "human-in-the-loop" is the patient, who follows the guided exercises.
The "clinical data" section refers to studies of the underlying CART protocol, which demonstrably showed reductions in panic disorder symptoms. The 510(k) argues that since "The CBS and CART protocol lead the user through identical exercises, however, the user interface for the CBS device is different from that used in the CART protocol studies," the clinical results of the CART protocol can be directly applied to the CBS. This is an argument for standalone clinical effectiveness based on prior research, rather than a new standalone study of the CBS as a diagnostic algorithm.
7. The Type of Ground Truth Used
For the clinical claims, the ground truth was based on:
- Clinical Outcomes: "reductions in panic attack frequency and severity, anxiety symptoms, avoidance behaviors, along with improvements in mood and quality of life."
- Physiological Correlation: "Causal analysis linked the normalization in CO2 to the observed improvements."
- This type of ground truth is best described as clinical outcomes data and physiological measurements.
8. The Sample Size for the Training Set
Not applicable. The CBS is a predetermined biofeedback system based on the CART protocol. It doesn't describe a machine-learning model that requires a "training set" in the conventional sense of AI/ML algorithm development. The "training" in this context refers to the clinical studies that established the efficacy of the CART protocol itself.
The referenced studies are:
- Meuret AE, et al. Journal of Psychiatric Research 2008;42:560-8.
- Meuret AE, et al. Journal of Psychiatric Research 2009;43:634-41.
- Meuret AE, et al. Journal of Consulting and Clinical Psychology 2010;78:691-704.
To find the exact sample sizes, one would need to consult these publications. The 510(k) summary does not provide them.
9. How the Ground Truth for the Training Set Was Established
Since there is no "training set" in the AI/ML sense, this question refers to how the evidence for the efficacy of the underlying CART protocol was established.
- The ground truth for the efficacy of the CART protocol was established through two randomized clinical trials (as stated in the "Clinical Data" section).
- These trials involved detailed assessment of patient symptoms (panic attack frequency/severity, anxiety, avoidance behaviors), mood, quality of life, and physiological parameters (CO2 levels and respiratory rate) to demonstrate the therapeutic impact of the protocol.
- The methodology of randomized clinical trials, which often includes objective and subjective outcome measures, standardized psychological assessments, and physiological data collection, is inherently the method of establishing ground truth for therapeutic interventions.
(140 days)
HCC
This device is to be used for general relaxation training when used with supported amplifier/encoders.
This software-only component of an EEG biofeedback system uses industry-accepted standard Microsoft Windows-based computers to accept EEG data from external FDA-approved amplifier/encoders and provide biofeedback information. The software does not provide any diagnostic conclusions nor provide any index, classification, diagnosis, or clinical interpretation of the data. The device processes EEG information, separates it into user-specified frequency bands, and displays the results as biofeedback indications to a user.
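The core signal-processing step described above, splitting EEG into user-specified frequency bands, can be sketched with a simple FFT-based approach. This is a generic illustration, not EEGer4's implementation (which the summary says uses digital filters); the `band_powers` helper and the band edges are assumptions:

```python
import numpy as np

def band_powers(eeg, fs, bands):
    """Split an EEG trace into named frequency bands via the FFT
    and return the signal power in each band (arbitrary units)."""
    spectrum = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return {name: spectrum[(freqs >= lo) & (freqs < hi)].sum()
            for name, (lo, hi) in bands.items()}

# Simulated 4 s of single-channel EEG at 256 Hz (the sampling rate in the
# spec table above): strong 10 Hz alpha plus weaker 20 Hz beta activity.
fs = 256
t = np.arange(0, 4, 1 / fs)
eeg = 3.0 * np.sin(2 * np.pi * 10 * t) + 1.0 * np.sin(2 * np.pi * 20 * t)

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
powers = band_powers(eeg, fs, bands)
```

A biofeedback display would then map a band's power to a visual or auditory cue; for real-time use, a system like this would run the same computation on short sliding windows rather than the whole recording.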
The provided 510(k) summary for EEGer4 does not contain the kind of detailed study information typically requested for AI/ML-based medical devices or those requiring performance validation against acceptance criteria. This submission is for a much simpler software-only device that processes EEG data for biofeedback, without making diagnostic claims.
Therefore, many of the requested categories cannot be answered from the provided text. The device's "performance" is primarily described by its specifications and comparison to predicate devices, rather than a clinical study establishing diagnostic accuracy.
Here's a breakdown of what can be extracted and what cannot:
1. Table of Acceptance Criteria and Reported Device Performance
For this specific device, a software-only EEG biofeedback system, the "acceptance criteria" are implied by its functional specifications and substantial equivalence to predicate devices, rather than specific accuracy metrics for a diagnostic task. The performance revolves around its ability to process EEG data and display biofeedback.
Acceptance Criteria Category | EEGer4 Reported Performance/Description |
---|---|
Intended Use | General relaxation training when used with supported amplifier/encoders. |
Software Name | EEGer4 |
Supported Devices | Mfr: Brainmaster (Brainmaster 2E, Atlantis versions), Mfr: Thought Technology (ProComp versions, Infiniti versions, Flexcomp), Mfr: J&J Engineering (C2 versions, C2+ versions), Mfr: Telediagnostics (A200 versions, A400 versions) |
Operating System | Microsoft Windows (XP and later) |
Computer | Generic PC computer supported by Microsoft Windows |
Sampling Rate | 256 Hz |
Number of EEG Channels | 4 |
Bandwidth | 0 - 50 Hz |
Filtering | Digital Filters |
Device Interface | Depends on amplifier/encoder used (serial, USB, Bluetooth, etc.) |
Diagnostic Claims | None. "Does not provide any diagnostic conclusions nor provide any index, classification, diagnosis, or clinical interpretation of the data." |
Method of Operation | Processes EEG information, separates it into user-specified frequency bands, and displays results as biofeedback. |
2. Sample size used for the test set and the data provenance
- Not Applicable / Not Provided. This document does not describe a clinical performance study using a test set of patient data, as it's a software for biofeedback, not diagnosis. The "test" in such a context would typically involve software verification and validation (V&V) against functional requirements, but this detail isn't in the summary.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not Applicable / Not Provided. No ground truth establishment by experts is described for a test set, as no such study is presented.
4. Adjudication method for the test set
- Not Applicable / Not Provided. No test set or adjudication process is described.
5. If a multi reader multi case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- No. An MRMC study was not done. This device is not an AI-assisted diagnostic tool; it's a biofeedback display software.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not Applicable / Not Provided. While the software operates "standalone" in the sense that it processes data and displays it, this refers to its direct functional performance in processing and displaying EEG data according to its specifications, not in making an independent diagnostic decision. The document doesn't detail specific standalone performance metrics in a study context.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
- Not Applicable / Not Provided. No ground truth for a diagnostic or classification task is used because the device expressly states it "does not provide any diagnostic conclusions nor provide any index, classification, diagnosis, or clinical interpretation of the data."
8. The sample size for the training set
- Not Applicable / Not Provided. This device is not described as an AI/ML model that would require a training set. It is a deterministic software that processes and displays EEG data using established signal processing techniques (separating into frequency bands).
9. How the ground truth for the training set was established
- Not Applicable / Not Provided. As noted above, there is no mention of a training set or ground truth establishment for such a set.
(105 days)
HCC
- For evaluation of the status of muscles at rest and in function
- As an aid in muscle re-education and muscle relaxation therapy
- Provides the ability to compare newly captured data with past data to assess progress in treating patients' relaxation state
The device incorporates circuitry enabling the same capabilities as the predecessor device. It is a computer based system offering options capable of evaluating muscle groups at rest or in function by means of surface electromyography. Muscle activity is quantified by means of re-usable or disposable surface electrodes positioned over the muscle groups being studied. Up to eight sites can be monitored simultaneously and displayed in time or frequency domains. The device is essentially identical to the predecessor device except that it utilizes wireless (Bluetooth) technology to transfer EMG data to host computer without a cable and to eliminate any connection between the patient and line voltage.
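Quantifying muscle activity across multiple surface-EMG sites, as described above, is commonly done with a windowed RMS (root-mean-square) amplitude in the time domain. The sketch below is a generic illustration under that assumption, not Myotronics' implementation; the `windowed_rms` helper is hypothetical:

```python
import numpy as np

def windowed_rms(emg, fs, window_s=0.1):
    """Quantify muscle activity per channel as RMS amplitude over
    consecutive, non-overlapping windows.

    emg: array of shape (n_channels, n_samples).
    Returns an array of shape (n_channels, n_windows).
    """
    win = int(window_s * fs)
    n_win = emg.shape[1] // win
    trimmed = emg[:, :n_win * win].reshape(emg.shape[0], n_win, win)
    return np.sqrt((trimmed ** 2).mean(axis=2))

# Simulated 2 s recording from 8 sites at 1 kHz: channel 0 over an active
# (contracting) muscle, the remaining channels near rest.
rng = np.random.default_rng(0)
fs = 1000
emg = rng.normal(0.0, 0.01, size=(8, 2 * fs))   # resting baseline
emg[0] = rng.normal(0.0, 0.5, size=2 * fs)      # high-amplitude activity

rms = windowed_rms(emg, fs)
```

Comparing such RMS traces from a new session against stored ones is one plausible way to implement the "compare new captured data with past data" indication listed above.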
The provided text is a 510(k) summary for the Myotronics-Noromed Model MES 9200 EMG System. This submission focuses on demonstrating substantial equivalence to a predicate device (Model MES 9000 EMG System) rather than clinical performance against specific acceptance criteria. The key change is the introduction of wireless (Bluetooth) technology and battery operation to improve safety and data transfer.
Therefore, the document does not contain a detailed study proving the device meets specific performance acceptance criteria in the manner you've described for AI/CADe devices. There are no reported device performance metrics, sample sizes for test or training sets, expert consensus, or information on adjudication methods for clinical performance.
Here's a breakdown of what can be extracted based on your request, and what is missing:
1. A table of acceptance criteria and the reported device performance
- Acceptance Criteria: Not explicitly stated as performance metrics. The implicit acceptance criterion for a 510(k) is "substantial equivalence" to a predicate device. This means demonstrating that the new device has the same intended use, fundamental scientific technology, and does not raise new questions of safety or effectiveness.
- Reported Device Performance: Not provided in terms of diagnostic accuracy, sensitivity, specificity, etc. The document highlights the functional equivalence and safety improvements over the predicate.
2. Sample size used for the test set and the data provenance (e.g. country of origin of the data, retrospective or prospective)
- Not Applicable / Missing: No information about a test set for clinical performance evaluation is mentioned. The submission focuses on engineering design changes and comparing them to the predicate device.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g. radiologist with 10 years of experience)
- Not Applicable / Missing: No clinical "ground truth" establishment is described for a performance study.
4. Adjudication method (e.g. 2+1, 3+1, none) for the test set
- Not Applicable / Missing: No clinical performance study requiring adjudication is described.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
- Not Applicable / Missing: This is not an AI/CADe device. No MRMC study was conducted or is relevant for this type of 510(k) submission.
6. If a standalone (i.e. algorithm only without human-in-the-loop performance) was done
- Not Applicable / Missing: This device is an EMG system, not an algorithm, so this concept does not apply.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- Not Applicable / Missing: No clinical ground truth is established or discussed as the focus is on device modification and safety.
8. The sample size for the training set
- Not Applicable / Missing: This device does not involve a "training set" in the context of machine learning or AI.
9. How the ground truth for the training set was established
- Not Applicable / Missing: As above, no training set or associated ground truth establishment is relevant to this submission.
Summary of the Study (as described in the 510(k) Summary):
The "study" or justification for substantial equivalence presented in the 510(k) summary is a comparison of the new device (Model MES 9200) to its legally marketed predicate device (Model MES 9000).
- Objective: To demonstrate that the Model MES 9200 EMG System is substantially equivalent to the Model MES 9000.
- Methodology: The comparison highlights that the new device has the "same intended uses and fundamental scientific technology." The primary design change addressed is the incorporation of "wireless (Bluetooth) technology to transfer EMG data to host computer without a cable and to eliminate any connection between the patient and line voltage." This change is presented as an improvement in safety and convenience without altering the core functionality or intended use.
- Proof of Meeting (Implicit) Acceptance Criteria: The FDA's issuance of a substantial equivalence determination (K111687) serves as the "proof" that the device meets the regulatory acceptance criteria for market clearance under the 510(k) pathway. This determination is based on the provided comparison and assurances that the changes do not raise new questions of safety or effectiveness.
In essence, this 510(k) is about establishing regulatory clearance for a device modification, not a clinical performance study of a new diagnostic algorithm.
(183 days)
HCC
The GRINDCARE device is indicated to aid in the evaluation and management of nocturnal bruxism by reducing the temporalis muscle EMG activity during sleep.
GRINDCARE is a portable electromyographic (EMG) and electrical stimulation device. The device consists of a stimulator, a docking station and a tri-polar electrode. The electrode is placed on the forehead with three integrated electrodes in close connection to the temporalis muscle by means of a double-adhesive patch incorporating three conductive gelpads and connected to the stimulator. The device records EMG activity and processes the signal to detect a particular activity (tooth grinding/clenching). It uses EMG to sense contraction of the temporalis muscle that is associated with bruxing events. In response to the EMG-measured contraction, it delivers mild electrical stimulation that is intended to relax the muscle and inhibit the bruxing event. The EMG events are logged and stored on the device. This data can be transferred to a healthcare professional's PC for assessment of the user's bruxism.
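The detect-then-stimulate loop described above hinges on recognizing sustained EMG bursts from the temporalis muscle. A minimal, hypothetical sketch of such event detection (not GRINDCARE's actual, unpublished algorithm) uses a fixed amplitude threshold plus a minimum duration to reject brief artifacts; the `detect_bruxing_events` helper is an assumption:

```python
import numpy as np

def detect_bruxing_events(envelope, fs, threshold, min_duration_s=0.25):
    """Detect sustained EMG bursts: the rectified/smoothed envelope must
    stay above `threshold` for at least `min_duration_s`.

    Returns a list of (start_s, end_s) intervals -- the kind of event log
    a clinician could review after a night's recording.
    """
    above = envelope > threshold
    # Pad with False so every burst has an explicit start and end edge.
    edges = np.diff(np.concatenate(([False], above, [False])).astype(int))
    starts = np.flatnonzero(edges == 1)
    ends = np.flatnonzero(edges == -1)
    min_len = int(min_duration_s * fs)
    return [(s / fs, e / fs) for s, e in zip(starts, ends) if e - s >= min_len]

# Simulated 10 s EMG envelope at 100 Hz: quiet baseline, one 1-second
# clench starting at t = 4 s, and a 50 ms artifact at t = 8 s.
fs = 100
envelope = np.full(10 * fs, 0.05)
envelope[4 * fs:5 * fs] = 0.8       # sustained clench -> logged event
envelope[8 * fs:8 * fs + 5] = 0.8   # brief artifact -> ignored
events = detect_bruxing_events(envelope, fs, threshold=0.3)
```

In a device like the one described, each detected event would both trigger the stimulation response and be appended to the on-device log that is later transferred to the clinician's PC.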
The provided text refers to a 510(k) summary for the GRINDCARE device, a biofeedback device for nocturnal bruxism. However, the document provided does not contain detailed acceptance criteria or a study that rigorously proves the device meets specific performance criteria beyond general conformity to standards.
Here's an analysis based on the information available:
1. A table of acceptance criteria and the reported device performance
The document does not provide a table of explicit, quantitative acceptance criteria for the device's performance in reducing temporalis muscle EMG activity, nor does it report specific performance metrics against such criteria.
The "Performance Data" section states:
"Device testing was performed and the device was shown to meet its design specifications."
"Device performance will also be in conformance to the following standards prior to marketing: IEC 60601-1, IEC 60601-1-2, IEC 60601-2-10, IEC 60601-2-40."
"RF function of the device meets requirements of FCC CFR 47 Part 15, Subpart C."
These statements indicate that the device met internal design specifications and general safety/EMC standards, but they do not provide specific clinical performance metrics (e.g., % reduction in EMG activity, sensitivity, specificity for bruxing events). The "clinical data are provided to demonstrate safety and effectiveness" but the details of this clinical data (acceptance criteria, results) are not present in this summary.
2. Sample size used for the test set and the data provenance (e.g., country of origin of the data, retrospective or prospective)
This information is not provided in the given 510(k) summary. The document mentions "Clinical data are provided to demonstrate safety and effectiveness," but it does not specify the sample size, type of study (prospective/retrospective), or data provenance for any clinical test set.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts (e.g., radiologist with 10 years of experience)
This information is not provided in the given 510(k) summary. Given the nature of the device (EMG for bruxism), ground truth for bruxism events would typically involve clinical assessment or expert interpretation of polysomnography/EMG recordings, but no details are given.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
This information is not provided in the given 510(k) summary.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, If so, what was the effect size of how much human readers improve with AI vs without AI assistance
This type of study is not applicable to the GRINDCARE device as described. The GRINDCARE is an electromyographic and electrical stimulation device that directly senses and treats bruxism; it is not an AI-assisted diagnostic tool for human readers/clinicians to interpret. Therefore, an MRMC study comparing human readers with and without AI assistance is not relevant to this device.
6. If a standalone (i.e., algorithm only without human-in-the-loop performance) was done
The device itself operates as a standalone system to detect EMG activity and deliver stimulation. The text states: "It uses EMG to sense contraction of the temporalis muscle that is associated with bruxing events. In response to the EMG-measured contraction, it delivers mild electrical stimulation..." This describes the device's standalone operation. However, no specific performance metrics for this standalone detection and stimulation (e.g., accuracy of bruxism detection, effectiveness of stimulation) are detailed in this summary.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.)
The document does not explicitly state the type of ground truth used for any clinical data that was provided. For a bruxism device, ground truth often involves polysomnography with EMG recordings, expert clinical diagnosis, or patient-reported outcomes, but this is not specified here.
8. The sample size for the training set
This information is not provided in the given 510(k) summary. The device uses signal processing to detect events, which implies some form of training or calibration, but details are absent.
9. How the ground truth for the training set was established
This information is not provided in the given 510(k) summary.
In summary, the provided 510(k) document is a high-level summary that indicates conformity to general safety and design standards and states that clinical data was provided to demonstrate safety and effectiveness for substantial equivalence. However, it explicitly lacks the detailed clinical study design, acceptance criteria, test set/training set sizes, ground truth establishment methods, and specific performance results that would answer most of your detailed questions. These details would typically be found in the full 510(k) submission, not in this brief summary.