Prof. Dr. Eng. Abdulkadir ŞENGÜR
Electroencephalogram (EEG) signals are crucial for understanding human emotions, providing a way to monitor and regulate behavior. Classifying emotion from EEG signals is a challenging task due to their non-linear and non-stationary nature. Existing feature extraction methods cannot capture the deeply concealed characteristics of EEG signals across different layers for an efficient classification scheme, and it is also difficult to select appropriate and effective feature extraction methods for different types of EEG data. Hence, this study develops an efficient deep-feature-extraction model to automatically classify people's emotional status. To discover reliable deep features, this study investigates five deep convolutional neural network (CNN) models: AlexNet, VGG16, ResNet50, SqueezeNet, and MobileNetV2. The proposed scheme consists of several steps: pre-processing with low-pass filtering for noise removal and the Wavelet Transform (WT) for EEG rhythm extraction; converting the extracted EEG rhythm signals into EEG rhythm images using the Continuous Wavelet Transform (CWT); feeding the EEG rhythm images separately to the five aforementioned well-known pretrained CNN models for feature extraction; and passing the obtained features to a support vector machine (SVM) for classification into binary emotion classes along the valence and arousal dimensions. The proposed methodology is tested on the DEAP dataset. The experimental results demonstrate that AlexNet features with the Alpha rhythm produce the best accuracy score for valence discrimination (91.07% on channel Oz), while MobileNetV2 features yield the highest accuracy score for arousal discrimination (98.93% with the Delta rhythm on channel C3).
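The first two stages of the pipeline (rhythm extraction, then CWT-based rhythm images) can be sketched numerically. Below is a minimal numpy-only sketch in which FFT band masking stands in for the paper's wavelet-based rhythm extraction and a hand-rolled complex Morlet transform stands in for the CWT; the sampling rate, rhythm band edges, frequency grid, and wavelet width are illustrative assumptions, not the authors' exact settings.

```python
import numpy as np

FS = 128  # assumed: DEAP EEG is distributed at 128 Hz after the dataset's preprocessing

# Conventional EEG rhythm band edges in Hz (assumed; the abstract does not list them).
RHYTHMS = {
    "delta": (0.5, 4.0),
    "theta": (4.0, 8.0),
    "alpha": (8.0, 13.0),
    "beta": (13.0, 30.0),
    "gamma": (30.0, 45.0),
}

def extract_rhythm(signal, band, fs=FS):
    """Isolate one EEG rhythm by zeroing FFT bins outside its band
    (a simple stand-in for the paper's WT-based rhythm extraction)."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    lo, hi = band
    spectrum[(freqs < lo) | (freqs > hi)] = 0.0
    return np.fft.irfft(spectrum, n=signal.size)

def morlet_scalogram(signal, fs=FS, freqs=np.arange(2, 41), n_cycles=6.0):
    """Build a time-frequency magnitude image with complex Morlet wavelets,
    mimicking the CWT step that produces the 'EEG rhythm images'."""
    rows = []
    for f in freqs:
        sigma_t = n_cycles / (2.0 * np.pi * f)          # wavelet width in seconds
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1.0 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        rows.append(np.abs(np.convolve(signal, wavelet, mode="same")))
    return np.stack(rows)  # shape: (n_freqs, n_samples)
```

A scalogram produced this way would then be rendered as an image, resized to each network's expected input size (e.g. 227x227 for AlexNet, 224x224 for the others), and the activations of a late layer of the pretrained CNN used as the feature vector for the SVM.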