Jin Woo Choi
Decoding motor imagery, the imagination of body movement, is an essential component of electroencephalogram (EEG)-based brain-computer interface (BCI) applications. Recent studies have shown that acquiring brain signals within immersive virtual reality (VR) environments can enhance motor imagery performance, yielding more discriminative neural patterns. However, directly using such signals to train classification models for real-life BCI systems may be limited, as exposure to different environments can produce varying neural activity. To reduce the influence of such environmental factors, we propose a channel weight re-scaling module that refines EEG features according to the learned separability of each electrode channel. Because many existing EEG classification models have their own strengths in different respects, our module is designed to be incorporated into previously studied convolutional neural network (CNN)-based models that extract spectral-spatial features, preserving their characteristics. To analyze the performance of our module, we integrated it with three widely used CNN models and evaluated it on two datasets, including our own dataset containing resting-state, left-hand, and right-hand motor imagery EEG recorded from participants in immersive VR and non-immersive environments. Our module improved classification performance for all three baseline models. Furthermore, the re-scaled channel weights increased at theoretically important electrode positions. These results illustrate the potential of our module not only to suppress unwanted features but also to reveal which channels were most influential for classification.
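
To illustrate how such a channel re-scaling front end could wrap an existing CNN-based EEG classifier, the sketch below adds a learnable per-channel weight to an arbitrary backbone. This is a minimal, hypothetical PyTorch layout, not the paper's implementation: the proposed module derives its weights from the learned separability of each channel, whereas this sketch treats them as free parameters, and the names ChannelRescale and RescaledBackbone are invented here for illustration.

```python
# Illustrative sketch only (assumed layout, not the authors' code):
# a learnable per-channel re-scaling applied to raw EEG before a CNN backbone.
import torch
import torch.nn as nn


class ChannelRescale(nn.Module):
    """Multiplies each EEG electrode channel by a learnable, positive weight."""

    def __init__(self, n_channels: int):
        super().__init__()
        # Parameterize in log space so the effective weights stay positive;
        # zeros initialize every channel weight to 1.
        self.log_w = nn.Parameter(torch.zeros(n_channels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, n_channels, n_samples)
        w = torch.exp(self.log_w)
        return x * w.view(1, -1, 1)  # broadcast over batch and time


class RescaledBackbone(nn.Module):
    """Wraps any CNN-based EEG classifier with the re-scaling front end."""

    def __init__(self, backbone: nn.Module, n_channels: int):
        super().__init__()
        self.rescale = ChannelRescale(n_channels)
        self.backbone = backbone

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.backbone(self.rescale(x))


# Example: 22-channel EEG segments of 500 samples, 3 classes
# (resting state, left hand, right hand). The backbone here is a stand-in.
if __name__ == "__main__":
    dummy_backbone = nn.Sequential(nn.Flatten(), nn.Linear(22 * 500, 3))
    model = RescaledBackbone(dummy_backbone, n_channels=22)
    out = model(torch.randn(8, 22, 500))  # (batch=8, channels=22, samples=500)
    print(out.shape)  # torch.Size([8, 3])
```

After training, the learned weights (torch.exp(model.rescale.log_w) in this sketch) can be read out per electrode, which mirrors the kind of channel-importance inspection described in the abstract.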
