510(k) Data Aggregation
(285 days)
Goldenear Company, Inc.
Tinnitogram™ Signal Generator is sound-generating software used in a Tinnitus Management Program designed to provide temporary relief for people experiencing tinnitus symptoms. It is intended primarily for adults over 18 years of age, but may also be used for children 5 years of age or older.
Tinnitogram™ Signal Generator is for use by hearing healthcare professionals who are familiar with the evaluation and treatment of tinnitus and hearing loss. A hearing healthcare professional should recommend that the patient listen to the Tinnitogram™ Signal Generator signal for 30 minutes twice a day at a barely audible level (the minimally detectable level).
GOLDENEAR COMPANY's TINNITOGRAM™ SIGNAL GENERATOR is software as a medical device recommended for use on a PC (desktop or laptop computer). TINNITOGRAM™ SIGNAL GENERATOR is fitted to the patient by the healthcare professional. The software enables a qualified professional to create customized sounds with a specific frequency range for sound therapy/masking.
The device type is stand-alone software as a medical device. The tinnitus masking signal is generated through a pre-process that secures the patient's customized signal. The test to find tinnitus frequencies, the pre-process, is performed automatically, and the masking signal is generated at the patient's barely audible level.
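The workflow described above, where an automated test first finds the tinnitus frequency and a masking tone is then generated at the patient's barely audible level, can be sketched as a simple pure-tone generator. This is an illustrative sketch only: the function names, the sample rate, and the 0.0–1.0 amplitude scale are assumptions, not the device's actual implementation.

```python
import math

SAMPLE_RATE = 44_100  # Hz; a common audio sample rate, assumed for illustration


def masking_tone(freq_hz: float, amplitude: float, duration_s: float) -> list[float]:
    """Generate a pure-tone masking signal as a list of raw samples.

    `freq_hz` stands in for the result of the automated tinnitus-frequency
    test; `amplitude` (0.0-1.0) stands in for the 'barely audible' level the
    clinician would set. Both parameters are hypothetical names.
    """
    n_samples = int(SAMPLE_RATE * duration_s)
    return [
        amplitude * math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE)
        for t in range(n_samples)
    ]


# Example: a 1 kHz tone at low amplitude, 10 ms long (441 samples at 44.1 kHz).
tone = masking_tone(1000.0, 0.05, 0.010)
```

In a real system the samples would be scaled and calibrated to an actual dB SPL output level through the audio hardware; the sketch stops at raw sample generation.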
The provided document is an FDA 510(k) clearance letter and summary for the Tinnitogram Signal Generator, a software device intended to provide temporary relief for tinnitus symptoms.
Based on the content, the device functions as a sound generator for tinnitus management. The primary method of demonstrating acceptance and safety/effectiveness for this device is by showing substantial equivalence to an existing predicate device (KW Ear Lab's REVE134, K151719), rather than through a complex clinical study with specific performance acceptance criteria like those seen for diagnostic or therapeutic devices.
Therefore, the requested information about acceptance criteria and a study proving the device meets those criteria (especially regarding performance metrics like sensitivity, specificity, or improvement in human reader performance) is not applicable in the traditional sense for this submission. The "study" here is essentially the non-clinical performance data (software verification and validation) to establish that the new device functions as intended and safely, despite some differences from the predicate.
Here's an analysis based on the document's content, explaining why some sections of your request cannot be fulfilled and providing information where available:
1. A table of acceptance criteria and the reported device performance
This type of table, with quantitative performance metrics (e.g., sensitivity, specificity, accuracy) and corresponding acceptance thresholds, is typically required for diagnostic or AI-driven decision support devices. For the Tinnitogram Signal Generator, which is a sound-generating software for tinnitus masking, the "acceptance criteria" are related to its functional operation, safety, and equivalence to a predicate device.
- Acceptance Criteria (Implied from the submission):
  - The software generates sounds for tinnitus masking as intended.
  - The software's functions (e.g., automated tinnitus frequency finding, signal generation) operate correctly.
  - The software's safety and effectiveness are comparable to the predicate device, despite minor technological differences (e.g., maximum output, how tests are performed).
  - The software adheres to relevant medical device software and risk management standards.
- Reported Device Performance: "In all verification and validation process, GOLDENEAR COMPANY's TINNITOGRAM™ SIGNAL GENERATOR functioned properly as intended and the performance observed was as expected."
  Note: Specific quantitative performance metrics (e.g., sound output precision, accuracy of frequency determination) are not provided in numerical form in this summary, beyond the specifications listed (e.g., max output 104 dB SPL, frequency range 262-11840 Hz). The "performance" is primarily demonstrated through successful completion of software verification and validation activities.
Table (Best approximation based on available information):

| Acceptance Criteria (Implied) | Reported Device Performance |
|---|---|
| Software generates customized sounds for tinnitus masking. | "Functioned properly as intended." |
| Software properly performs automated tinnitus frequency finding. | "Functioned properly as intended." |
| Software's safety is comparable to predicate device. | "Bench performance testing... demonstrated these differences do not affect safety." |
| Software's effectiveness is comparable to predicate device. | "Bench performance testing... demonstrated these differences do not affect effectiveness." |
| Adherence to medical device software development standards (IEC 62304). | "Software development, verification, and validation have been carried out in accordance with FDA guidelines." |
| Adherence to risk management standards (ISO 14971). | "Software Hazard analysis was completed and risk control implemented." |
| All software specifications meet acceptance criteria. | "The testing results support that all the software specifications have met the acceptance criteria of each module and interaction of processes." |
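Functional acceptance criteria of this kind reduce to simple checks against the published specifications. The sketch below uses the two numeric specifications the summary does state (frequency range 262–11840 Hz, maximum output 104 dB SPL); the function itself is an illustration, not the manufacturer's actual verification code.

```python
# Numeric specifications quoted in the 510(k) summary; the check itself is
# an illustrative sketch, not the manufacturer's test code.
FREQ_MIN_HZ, FREQ_MAX_HZ = 262.0, 11_840.0
MAX_OUTPUT_DB_SPL = 104.0


def within_spec(freq_hz: float, level_db_spl: float) -> bool:
    """Return True if a requested signal falls inside the device's stated
    frequency range and does not exceed its maximum output level."""
    return (
        FREQ_MIN_HZ <= freq_hz <= FREQ_MAX_HZ
        and level_db_spl <= MAX_OUTPUT_DB_SPL
    )


assert within_spec(1000.0, 54.0)         # a typical in-range request
assert not within_spec(15_000.0, 54.0)   # frequency above the stated range
assert not within_spec(1000.0, 110.0)    # level above 104 dB SPL
```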
2. Sample size used for the test set and the data provenance
- Sample Size for Test Set: This kind of "test set" (e.g., a set of patient data or images) is not applicable here as this is not a diagnostic or AI-based image analysis device. The "test set" in this context refers to the software testing environment.
- Data Provenance: Not applicable. The "data" being tested is the software's functionality, not patient data.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts
- Not Applicable. This device uses software verification and validation, not clinical experts establishing ground truth from patient data cases.
4. Adjudication method (e.g., 2+1, 3+1, none) for the test set
- Not Applicable. As above, this is for software verification, not expert adjudication of clinical cases.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, what was the effect size of how much human readers improve with AI vs. without AI assistance?
- No, an MRMC study was not done. This device is a sound generator, not an AI-assisted diagnostic tool that would involve human readers.
- Effect Size: Not applicable.
6. If standalone (i.e., algorithm-only, without human-in-the-loop) performance testing was done
- Yes, in spirit, a form of standalone testing was done. The "software as a medical device" was "verified and validated" for its intended functions (e.g., generating signals, performing the automated test to find tinnitus frequencies). This testing assesses the algorithm's performance in isolation from patient interaction, ensuring it produces the correct outputs for given inputs. The summary states: "The software was tested against the established Software Design Specifications for each of the test plans to assure the device performs as intended." This constitutes the "algorithm only" performance assessment.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc)
- The "ground truth" for this device is the software's design specifications and expected functional behavior. For instance, if the software is designed to generate a 1 kHz tone at 54 dB SPL, the "ground truth" is that 1 kHz tone at 54 dB SPL, and the testing verifies if the software actually produces it. It's a functional "ground truth" rather than a clinical "ground truth."
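The 1 kHz example above can be made concrete: a functional "ground truth" test generates the specified tone and then measures whether the output actually has that frequency. The sketch below uses a zero-crossing count as a simple frequency estimator; the sample rate and all function names are illustrative assumptions, not part of the device's documented test plan.

```python
import math

SAMPLE_RATE = 48_000  # Hz; assumed for illustration


def sine(freq_hz: float, duration_s: float) -> list[float]:
    """Generate a unit-amplitude sine tone (stand-in for the device output)."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * t / SAMPLE_RATE) for t in range(n)]


def estimate_freq(samples: list[float]) -> float:
    """Estimate tone frequency by counting zero crossings (two per cycle)."""
    crossings = sum(
        1
        for a, b in zip(samples, samples[1:])
        if (a < 0.0 <= b) or (b < 0.0 <= a)
    )
    duration_s = len(samples) / SAMPLE_RATE
    return crossings / 2 / duration_s


# Functional 'ground truth': the design spec says 1 kHz, so the generated
# output must measure approximately 1 kHz.
measured = estimate_freq(sine(1000.0, 1.0))
assert abs(measured - 1000.0) < 5.0
```

Verifying the 54 dB SPL level would additionally require a calibrated acoustic measurement chain, which is outside what a pure-software sketch can show.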
8. The sample size for the training set
- Not Applicable. This device is not described as using machine learning models that require a training set of data. It is a rule-based or algorithmic sound generator.
9. How the ground truth for the training set was established
- Not Applicable. (See #8).