510(k) Data Aggregation
(158 days)
Ricoh Company, Ltd.
The RICOH MEG non-invasively measures the magnetoencephalographic (MEG) signals produced by electrically active tissue of the brain. These signals are recorded by a computerized data acquisition system, displayed, and may then be interpreted by trained physicians to help localize these active areas. The locations may then be correlated with anatomical information of the brain. MEG is routinely used to identify the locations of visual, auditory, and somatosensory activity in the brain when used in conjunction with evoked response averaging devices. MEG is also used to non-invasively locate regions of epileptic activity within the brain. The localization information provided by the device may be used, in conjunction with other diagnostic data, as an aid in neurosurgical planning.
The RICOH MEG Analysis is an analysis software package used for processing and analyzing MEG data. It displays digitized MEG signals, EEG signals, topographic maps, and registered MRI images. Universal functions such as data retrieval, storage, management, querying and listing, and output are handled by the basic MEGvision Software of Eagle Technology, Inc. (K040051).
The RICOH MEG Analysis is designed to aid clinicians in the assessment of patient anatomy, physiology, electrophysiology and pathology and to visualize source localization of MEG signals.
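The indications above mention use in conjunction with evoked response averaging devices, but the document does not describe how averaging is implemented. Purely as an illustration of the standard idea, the sketch below averages stimulus-locked MEG epochs with baseline correction; the function name, parameters, and numpy implementation are assumptions, not the device's actual method.

```python
import numpy as np

def evoked_average(meg, trigger_samples, sfreq, tmin=-0.1, tmax=0.4):
    """Illustrative sketch: average MEG epochs time-locked to stimulus triggers.

    meg             : array, shape (n_channels, n_samples) -- continuous recording
    trigger_samples : sample indices of stimulus onsets
    sfreq           : sampling frequency in Hz
    tmin, tmax      : epoch window around each trigger, in seconds
    """
    pre = int(round(-tmin * sfreq))
    post = int(round(tmax * sfreq))
    epochs = []
    for t in trigger_samples:
        if t - pre < 0 or t + post > meg.shape[1]:
            continue  # skip epochs that run off the ends of the recording
        epoch = meg[:, t - pre : t + post]
        # baseline-correct each channel using the pre-stimulus interval
        epoch = epoch - epoch[:, :pre].mean(axis=1, keepdims=True)
        epochs.append(epoch)
    # averaging across epochs suppresses activity not time-locked to the stimulus
    return np.mean(epochs, axis=0)
```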
The provided document describes the RICOH MEG, a magnetoencephalograph (MEG) device, its acceptance criteria, and the study performed to demonstrate conformance.
Here's a breakdown of the requested information:
1. Table of Acceptance Criteria and Reported Device Performance
| Test Name | Acceptance Criteria (Purpose of Testing) | Reported Device Performance (Summary of Results) |
|---|---|---|
| [1] Matching Module design test | Verify that the Matching Module meets the design specifications by a black-box test, including image correction and alignment techniques for co-registration to MRI/3D digitizer data (an illustrative co-registration sketch follows this table). | Satisfied the pass/fail criteria. Pass. (Planned test cases: 39; tested cases: 39; failures occurred: 0; failures corrected: 0) |
| [2] Analysis System design test | Verify that the Analysis System meets the design specifications by a black-box test, and that the updated function does not affect the continued performance of the rest of the device. | Satisfied the pass/fail criteria. Pass. (Planned test cases: 20; tested cases: 20; failures occurred: 0; failures corrected: 0) |
| [3] Analysis System Validation | Perform validation with a person who can substitute for the intended user (can operate the product without training and has the knowledge to instruct users on its operation). Validate product validity, usability validity, and software validity against the Intended Use. | Pass/Fail: Pass. |
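The 510(k) summary does not disclose the Matching Module's algorithm. As a purely illustrative sketch, the code below shows one common way to co-register 3D-digitizer landmarks to MRI coordinates: a least-squares rigid fit of corresponding fiducial points (the Kabsch algorithm). The function name, inputs, and numpy implementation are assumptions for illustration only, not the device's actual method.

```python
import numpy as np

def rigid_coregistration(digitizer_pts, mri_pts):
    """Illustrative sketch: least-squares rigid transform (rotation R, translation t)
    mapping 3-D digitizer fiducial points into MRI coordinates (Kabsch algorithm).

    digitizer_pts, mri_pts : arrays of shape (n_points, 3) with corresponding
                             landmarks (e.g. nasion, left/right preauricular points).
    Returns R (3x3) and t (3,) such that mri ~= digitizer @ R.T + t.
    """
    src_mean = digitizer_pts.mean(axis=0)
    dst_mean = mri_pts.mean(axis=0)
    src = digitizer_pts - src_mean
    dst = mri_pts - dst_mean
    # SVD of the cross-covariance matrix yields the optimal rotation
    U, _, Vt = np.linalg.svd(src.T @ dst)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_mean - R @ src_mean
    return R, t
```

In practice such a fit is typically followed by a head-shape refinement step, but the three-fiducial rigid alignment above captures the basic geometry of mapping digitizer space onto the MRI volume.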
2. Sample size used for the test set and the data provenance:
- Sample Size: The document does not specify a "test set" in terms of patient data or number of cases for the performance evaluation. Instead, it refers to "test cases" for software verification and validation.
- Matching Module design test: 39 test cases.
- Analysis System design test: 20 test cases.
- Analysis System Validation: No specific number of cases is provided; the document states only that the validation was performed by "a person who can substitute the intended user."
- Data Provenance: Not specified for any biological or clinical data. The tests described are bench tests and software verification/validation.
3. Number of experts used to establish the ground truth for the test set and the qualifications of those experts:
- The document primarily describes engineering/software verification and validation tests rather than clinical performance evaluation requiring expert ground truth.
- For the "Analysis System Validation," it states that the validation was performed by "a person who can substitute the intended user," characterized as someone who can "operate the product without training" and has "the knowledge to be able to instruct the user on the operation of the product." This individual acts as a proxy for an expert user, but their specific qualifications (e.g., years of experience, medical specialty) are not detailed.
4. Adjudication method for the test set:
- Not applicable. The tests described are focused on functional and design specification verification through pass/fail criteria for engineered systems, not on clinical interpretation requiring expert adjudication.
5. If a multi-reader multi-case (MRMC) comparative effectiveness study was done, and if so, the effect size of how much human readers improve with AI assistance versus without it:
- No, a multi-reader, multi-case comparative effectiveness study with or without AI assistance was not done. The device is a MEG system for measuring brain activity and aiding in localization, not an AI-assisted diagnostic tool in the typical sense of needing reader performance comparisons.
6. If a standalone (i.e., algorithm-only, without human-in-the-loop) performance study was done:
- The RICOH MEG is a device that records and processes MEG signals. The "RICOH MEG Analysis" is an analysis software package used for processing and analyzing MEG data. It displays signals and visualizes source localization. The document states that signals "may then be interpreted by trained physicians." This indicates that human interpretation is an integral part of its intended use. Therefore, a standalone algorithm-only performance study without human-in-the-loop is not described as relevant or performed given the nature of the device.
7. The type of ground truth used (expert consensus, pathology, outcomes data, etc.):
- For the described tests, the "ground truth" is primarily based on design specifications and functional requirements for the software modules. For example, verifying that the Matching Module performs image correction and alignment according to its design.
- For "Analysis System Validation," the ground truth for "product validity, usability validity, and software validity" is established by a qualified individual (a "substitute for the intended user") assessing whether the system meets its intended purpose. No external clinical "ground truth" (e.g., pathology, outcomes data) is referenced for these specific performance tests.
8. The sample size for the training set:
- Not applicable. The document describes a medical device and its associated software for MEG data analysis. It does not mention any machine learning or AI components that would require a "training set" in the context of developing a diagnostic algorithm.
9. How the ground truth for the training set was established:
- Not applicable, as no training set for an AI/ML algorithm is mentioned.