Year: 2022 | Volume: 9 | Issue: 3 | Pages: 90-95
Paralyzed patients-oriented electroencephalogram signals processing using convolutional neural network through python
Vedat Topuz1, Ayça Ak1, Tülin Boyar2
1 Marmara University, Vocational School of Technical Sciences, Kartal, Turkey
2 Marmara University, Faculty of Technology, Department of Computer Engineering, Maltepe, Istanbul, Turkey
Date of Submission: 04-Dec-2022
Date of Decision: 06-Dec-2022
Date of Acceptance: 11-Dec-2022
Date of Web Publication: 30-Dec-2022
Vocational School of Technical Sciences, Marmara University, Mehmet Genç Külliyesi, Dragos, Kartal, Istanbul 34865
Source of Support: None, Conflict of Interest: None
Aim: Some brain–computer interface (BCI) systems, which translate brain activity patterns into commands for an interactive application, make use of samples produced by motor imagery. This study focuses on processing electroencephalogram (EEG) signals with a convolutional neural network (CNN). The aim of this article is to analyze EEG signals with Python, convert the data to spectrogram images, and classify them with a CNN. Materials and Methods: The EEG data used were sampled at a frequency of 128 Hz in the 0.5–50 Hz range. The EEG file was processed with the Python programming language, and spectrogram images of the channels were obtained with the Python YASA library. Results: The accuracy of the CNN model applied to the dataset was 89.58%. Conclusion: EEG signals make it possible to detect diseases using various machine learning methods, and deep learning-based CNN algorithms can also be used for this purpose.
Keywords: Electroencephalogram, convolutional neural network, spectrogram images
How to cite this article:
Topuz V, Ak A, Boyar T. Paralyzed patients-oriented electroencephalogram signals processing using convolutional neural network through Python. J Neurobehav Sci 2022;9:90-5.
How to cite this URL:
Topuz V, Ak A, Boyar T. Paralyzed patients-oriented electroencephalogram signals processing using convolutional neural network through Python. J Neurobehav Sci [serial online] 2022 [cited 2023 Jan 29];9:90-5. Available from: http://www.jnbsjournal.com/text.asp?2022/9/3/90/366390
Introduction
Analysis of biomedical signals has become one of the hottest topics with advances in technology and machine learning. Researchers are working to understand and classify human biosignals to more accurately diagnose diseases or develop assistive technologies for people with disabilities. Electroencephalogram (EEG) brain signals are one of the most studied topics for developing noninvasive approaches to detect neurological abnormalities and brain–computer interface (BCI) technologies.
Today, motor imagery (MI) EEG-based BCI is studied by many researchers because of its effectiveness in both medical and nonmedical applications. MI is accomplished by imagining the execution of a particular command without actually performing it. MI tasks commonly used in research are movements related to the left hand, right hand, left foot, right foot, both feet, elbows, fists, and fingers. MI-based BCI applications involve clarifying EEG signals and defining the responses to these signals in real time. MI-based BCI has laid the groundwork for studies such as the processing of EEG signals, thought-controlled robot movement, and the detection of various diseases from brain activity.
The University of Bonn database is frequently used in studies on the diagnosis of diseases related to brain activity. Türk and Özerdem (2017) classified features extracted from the EEG signals of the Bonn dataset using the one-dimensional median local binary pattern method with the k-nearest neighbor (k-NN) algorithm. The classification performance was 100% for the A-E datasets, 99.00% for A-D, 98.00% for D-E, 99.50% for CD-E, and 96.00% for A-D-E. When the effect of the one-dimensional median local binary pattern method on EEG classification was compared with other studies, the proposed approach gave successful results. Çevik worked on an application that allows vehicle control by detecting the physical and mental states of the operator (such as fatigue, sleepiness, and inattention) through EEG signals. Fast Fourier transform (FFT) and power spectral density (PSD) signal processing techniques were used for feature extraction. Akben (2012) conducted comprehensive research on the characteristics and diagnosis of migraine using EEG signals. The data, obtained from the KSU Faculty of Medicine, belong to 30 migraine patients and 30 healthy controls. The data were first analyzed on the frequency axis with the Fourier transform, and the results were examined with classification and clustering techniques. As a result, characteristic findings about migraine were obtained and suggestions were made for the automatic diagnosis of the disease. Coşkun and İstanbullu analyzed EEG data from a patient under anesthesia with methods such as band-pass filtering, FFT, wavelet transform, and PSD.
Olivas-Padilla and Chacon-Murguia proposed two methods for multiple motor imagery classification. Both use features obtained with a discriminative variant of the Filter Bank Common Spatial Pattern. One method uses a single convolutional neural network (CNN) classifier, whereas the second uses a modular network of four expert CNNs. Hernández-Del-Toro et al. presented five feature extraction methods, based on fractal dimension, wavelet decomposition, frequency energies, empirical mode decomposition, and chaos theory, to solve the task of detecting MI segments.
In this article, EEG signals are classified by converting them to spectrogram images, and it is explained how these operations are performed with Python. A CNN is designed for the classification task. The rest of the article is organized as follows: the materials and methods are described in Section 2, the results are presented in Section 3, and a general evaluation of the study is given in Section 4.
Materials and Methods
There is no need for ethics committee approval.
Electroencephalography (EEG) is a method of recording the electrical activity of the brain. An EEG recording is made by placing conductive electrodes on the scalp with the help of a special gel. The electrical potential changes between these electrodes are recorded on a computer by means of an analog–digital converter. Such recordings can be used, with various transformations, in studies such as the detection of diseases.
Brain activity is related to the frequencies of EEG signals. In clinical studies, frequencies in the range of 0.5–30 Hz are examined. Frequencies above 30 Hz are known as gamma waves; because their amplitudes are very low, they are rarely used in clinical practice. [Table 1] shows the frequency bands and frequency ranges to which EEG signals may belong.
Delta waves are seen in infants and in severe organic brain diseases. Theta waves arise particularly in children; in adults, they also occur in situations of emotional tension and frustration. Alpha waves are seen in awake, calm people and disappear during sleep. If the awake person directs their attention to something specific, higher-frequency but lower-amplitude beta waves occur instead of alpha waves. Beta waves appear with strong activation of the central nervous system or in states of tension, replacing the alpha rhythm with low-amplitude asynchronous signals. Gamma waves, because of their very small amplitudes, are rarely used in clinical practice. Waves in the different frequency ranges of EEG signals are shown in [Figure 1].
Data reading and display
An ethics committee report is not required since the system has not yet been tested on paralyzed patients. The data used were sampled at a frequency of 128 Hz in the 0.5–50 Hz range and belong to 29 channels. The EEG record file is opened with the Python MNE library [Figure 2].
Channel signals in the ".edf" file can be accessed with the Python MNE library, and a plot of a single channel or of multiple channels can be produced. The graph in [Figure 3], plotted with Python, shows the synchronous signals of all channels.
The signals of the EEG-A1 and EOG-2 channels, plotted with Python, are shown in [Figure 4].
Specific events on the data can be detected with the “events” subcommand. In some data, events are captured from STIM channels.
Some EEG/MEG systems create files in which events are stored in a separate data stream rather than as pulses on one or more STIM channels. For example, the EEGLAB format stores events as a collection of arrays in the .set file. When reading these files, MNE-Python automatically converts the stored events into an Annotations object and stores it as the annotations attribute of the Raw object. The underlying data in an Annotations object can be accessed through its three attributes: onset, duration, and description.
In the EEG data used as an example, the events are stored in a separate data series. To read event information:
Convolutional neural network
Recently, studies have been carried out on the use of CNN algorithms, a deep learning model, on EEG signals. A CNN is a feedforward neural network, a type of multilayer perceptron, consisting of convolutional layers, pooling layers, and fully connected layers. [Figure 5] shows the representation of a CNN structure. When an image is given as input, it is fed through successive convolutional layers, and the network finally produces an output.
In the convolutional layer, filters extract features from the input image. Images are stored in matrix form, and a filter matrix called a kernel is used to extract the features. Convolution is a mathematical operation that expresses how the shape of one function is modified by another: (f ∗ g)(t) = ∫ f(τ) g(t − τ) dτ.
The convolution operation is performed by sliding the kernel over the image matrix: the overlapping values are multiplied and summed into the feature map matrix. This operation reduces the size of the input image, which can cause information loss at the borders. To prevent this loss, "same" padding is used, which adds a frame of zero values around the input image. Since each feature map holds only a certain aspect of the image, a large number of feature maps are needed. After the convolution, the ReLU function is applied to zero out the negative values.
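The steps above can be sketched in plain NumPy. Note that, as in CNN libraries, the kernel is applied without flipping (cross-correlation); the image and kernel values are illustrative.

```python
import numpy as np

def conv2d_same(image, kernel):
    """2-D convolution with 'same' zero padding: output keeps the input size."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)))  # frame of zeros around the input
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            # overlapping values are multiplied and summed into the feature map
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """ReLU zeroes out the negative values of the feature map."""
    return np.maximum(x, 0.0)

image = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])
kernel = np.array([[0., -1., 0.],
                   [-1., 4., -1.],
                   [0., -1., 0.]])  # a simple edge-enhancing filter
feature_map = relu(conv2d_same(image, kernel))
```

With "same" padding the 3x3 input produces a 3x3 feature map; without it, the output would shrink to 1x1.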
Results
The "events" array obtained from the annotations object holds the event start times and event identification numbers (IDs), while the event descriptions are kept in a separate dictionary. To see the events on the signal, the signals of all channels were plotted and the event intervals were colored on the recording. The left leg raising and lowering events on the signal are shown in [Figure 6]. The ID-time graph of the events is shown in [Figure 7].
Since the first 21 channels contain the EEG information, the remaining channels were removed. The annotation objects detected in the data were obtained, with their IDs, as "events_dict."
Drawing channel spectrograms with the YASA library
Spectrogram images of the channels can be obtained with the Python YASA library. The spectrogram image of channel 20 is shown in [Figure 8].
In addition, a spectrogram image can be obtained for each event with "plt.specgram"; the spectrogram for the ID = 1 event is shown in [Figure 9].
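A minimal sketch with matplotlib's `plt.specgram` (the event segment here is synthetic, and the NFFT/overlap settings are assumptions):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless-safe backend
import matplotlib.pyplot as plt

sf = 128
t = np.arange(sf * 4) / sf                      # a 4-second event segment
signal = np.sin(2 * np.pi * 8 * t) + 0.5 * np.random.randn(t.size)

# plt.specgram returns the power spectrum, frequency bins, time bins, and image.
spectrum, freqs, times, im = plt.specgram(signal, NFFT=128, Fs=sf, noverlap=64)
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
```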
Obtaining epoch objects with the MNE library
The "Epochs" object of the MNE library was used to extract the events in the data and analyze them in detail. After converting the events into Epochs objects, operations such as reading the signal values of the events and saving them to a ".csv" file were performed.
Application of CNN algorithm to electroencephalogram data
First, Python's deep learning libraries Keras and TensorFlow are imported for processing the data with a CNN. In the ".csv" file, the values obtained from the 21 channels are the inputs and the events are the outputs. The events are numbered from 1 to 6 by converting them to a categorical type. A sample signal image for each category is presented in [Figure 10].
First, 80% of the dataset is reserved as training data and 20% as test data.
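A sketch of the 80/20 split with scikit-learn; the feature and label arrays are synthetic stand-ins (their shapes are illustrative, not taken from the study):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X stands in for the 21-channel samples, y for the event categories 1-6.
X = np.random.randn(240, 21)
y = np.random.randint(1, 7, size=240)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.20, random_state=42
)
```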
A CNN structure was then generated with Keras for the classification.
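The exact layer configuration of the article's model was not reproduced in the text; the following is a hedged sketch of a small six-class CNN. The input shape (64x64x1 spectrogram images) and all layer sizes are assumptions.

```python
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 1)),              # assumed spectrogram size
    layers.Conv2D(32, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Conv2D(64, (3, 3), padding="same", activation="relu"),
    layers.MaxPooling2D((2, 2)),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(6, activation="softmax"),       # six event categories
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit` on the 80% training split and `model.evaluate` on the 20% test split.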
The CNN was trained with the training dataset, and results were obtained with the test dataset. The accuracy was 89.58%. The performance of the model is shown in [Figure 11].
Conclusion
In this article, we aimed to examine EEG signals with Python. For this purpose, the EEG signal and then its spectrogram were visualized with Python, and a CNN structure was designed to classify the obtained spectrogram images. In trials with the test data, the movement to which the data belonged was determined with an accuracy of 89.58%. This study can be extended to applications such as device control using data from paralyzed individuals.
Patient informed consent
There is no need for patient informed consent.
Ethics committee approval
There is no need for ethics committee approval.
Conflicts of interest
There are no conflicts of interest to declare.
Financial support and sponsorship
No funding was received.
Author contribution subject and rate
- Vedat Topuz (35%): Designed the research, contributed comments on manuscript organization and write-up.
- Ayça Ak (30%): Contributed to the research, performed the analyses, and wrote the manuscript.
- Tülin Boyar (35%): Realized the research and implemented the software.
References
Cinar E, Sahin F. New classification techniques for electroencephalogram (EEG) signals and a real-time EEG control of a robot. Neural Comput Appl 2013;22:29-39.
Lotte F, Congedo M, Lécuyer A, Lamarche F, Arnaldi B. A review of classification algorithms for EEG-based brain-computer interfaces. J Neural Eng 2007;4:R1-13.
Al-Saegh A, Dawwd SA, Abdul-Jabbar JM. Deep learning for motor imagery EEG-based classification: A review. Biomed Signal Process Control 2021;63:102172.
Türk Ö, Özerdem MS. Epileptik EEG sinyallerinin sınıflandırılması için bir boyutlu medyan yerel ikili örüntü temelli öznitelik çıkarımı [One-dimensional median local binary pattern based feature extraction for the classification of epileptic EEG signals]. Gazi Üniversitesi Fen Bilimleri Dergisi Part C 2017;5:97-107.
Çevik Ç. Vehicle management with attention value obtained using brain signals. J Smart Syst Res (JOINSSR) 2020;1:30-8.
Akben SB. İşaret İşleme Teknikleri Kullanarak EEG İşaretlerinden Migren Hastalığının Karakteristiklerinin Belirlenmesi [Determining the Characteristics of Migraine Disease from EEG Signals Using Signal Processing Techniques]. Thesis, Kahramanmaraş Sütçü İmam Üniversitesi Fen Bilimleri Enstitüsü; 2012.
Coşkun M, İstanbullu A. Analysis of EEG Signals with FFT and Wavelet Transform, Akademik Bilişim'12- XIV. Academic Informatics Conference Proceedings, Uşak University; 2012.
Olivas-Padilla BE, Chacon-Murguia MI. Classification of multiple motor imagery using deep convolutional neural networks and spatial filters. Appl Soft Comput 2019;75:461-72.
Hernández-Del-Toro T, Reyes-Garcia CA, Villasenor-Pineda L. Toward asynchronous EEG-based BCI: Detecting imagined words segments in continuous EEG signals. Biomed Signal Process Control 2021;65:102351.
Nacy S, Kbah S. Controlling a servo motor using EEG signals from the primary motor cortex. Am J Biomed Eng 2016;6:139-46.
Mao WL, Fathurrahman HI, Lee Y, Chang TW. EEG dataset classification using CNN method. J Phys Conf Ser 2019;1456:012017. [doi: 10.1088/1742-6596/1456/1/012017].
Ma M, Cheng Y, Wei X, Chen Z, Zhou Y. Research on Epileptic EEG Recognition Based on Improved Residual Networks of 1-D CNN and indRNN. International Conference on Health Big Data and Artificial Intelligence; 2021.
Zhou M, Tian C, Cao R, Wang B, Niu Y, Hu T, et al. Epileptic seizure detection based on EEG signals and CNN. Front Neuroinform 2018;12:95. [doi: 10.3389/fninf.2018.00095].
Şeker A, Diri B, Balık HH. A review of deep learning methods and applications. Gazi Univ J Eng Sci 2017;3:47-64.