Cursor Navigation using EEG
TRANSCRIPT
-
7/30/2019 Cursor Navigation using EEG
1/103
TABLE OF CONTENTS

Abstract
List of Figures
List of Abbreviations
1. Introduction
   1.1 Brain Computer Interface
   1.2 History of the Electroencephalograph
   1.3 Brain Functions & EEG
   1.4 Inner Workings of the EEG
       1.4.1 Hardware
       1.4.2 Processing Software
   1.5 External Applications
2. Literature Review & Project Background
   2.1 Pattern Recognition & Classification
       2.1.1 LDA: How Does it Work?
   2.2 Noise Filtration
       2.2.1 The Learning Filter
       2.2.2 Traditional Vector Quantization
       2.2.3 Learning Vector Quantization
           2.2.3.1 ANNs
           2.2.3.2 Application of ANNs to VQs
   2.3 What's New
       2.3.1 Creating an Unsupervised LVQ
       2.3.2 How Does it Work?
   2.4 Final Hypothesis
3. The Design
   3.1 Purpose
   3.2 Design Criteria
   3.3 Required Materials
   3.4 End User
   3.5 Experimental Background & Design
       3.5.1 Real-time Use
       3.5.2 Training Sequence
       3.5.3 Experimental Variables
   3.6 Methods
   3.7 Results
   3.8 Conclusion
4. Appendix
5. Works Cited
1. Introduction

Human-Computer Interaction has progressed through various phases, in parallel with advancements in computer technology. The earliest methods manipulated the hardware directly, but as digital computers grew more complex in multiple aspects, several interaction devices were designed, such as the conventional keyboard, mouse, and touch screen. Most of these devices fall under the same paradigm; hence any new device or technology introduced is designed to operate within conventional user interfaces. One such recent technology is the brain-computer interface. While the existing devices are all physical, a brain-computer interface is based mostly on mental processes. For immediate use of a BCI, an efficient approach is to latch it onto the navigation systems of existing user interfaces instead of designing an entirely new user interface. With this intent, a module is designed to latch the cursor navigation system to an EEG device.
The module's construction follows a typical BCI. It works using the motor paradigm of EEG, i.e., EEG signals produced by physical or imagined movement of the limbs. OpenViBE, a software package dedicated to building BCIs, is used in the module. MATLAB is used for simulation with recorded EEG signals.
1.1 Brain Computer Interface

A Brain-Computer Interface (BCI) is a device that enables communication without movement.
People can communicate via thought alone. Since BCIs do not require movement, they may be
the only communication system possible for severely disabled users who cannot speak or use
keyboards, mice or other interfaces.
Most BCI research focuses on helping severely disabled users send messages or commands. But,
this is beginning to change. Some companies have begun offering BCI-based games for healthy
users, and other groups are developing or discussing BCIs for new purposes and for new users.
There may soon be a substantial increase in the number of people using BCIs.
Any BCI or BNCI requires at least 4 components. At least one sensor must detect brain activity.
(In a BNCI, the sensor could detect other signals from the body, which might reflect activity
from the eyes, heart, muscles, etc.) Next, a signal processing system must translate the resulting
signals into messages or commands. Next, this information must be sent to an application on a
device, such as a web browser on a monitor or a movement system on a wheelchair. Finally,
there must be an application interface or operating environment that determines how these
components interact with each other and with the user.
There are often a lot of misunderstandings about what BCIs can and cannot do. BCIs do not
write to the brain. BCIs do not alter perception or implant thoughts or images. BCIs cannot work
from a distance, or without your knowledge. To use a BCI, you must have a sensor of some kind
on your head, and you must voluntarily choose to perform certain mental tasks to accomplish goals. For example, the downloads section of this site has videos that show someone moving through a virtual environment by thinking about moving.
A device must meet four criteria to be a BCI. First, the device must rely on direct measures
of brain activity. A BNCI is a device that can also rely on indirect measures of brain activity.
Second, the device must provide feedback to the user. Third, the device must operate in real-
time. Fourth, the device must rely on intentional control. That is, the user must choose to perform
a mental task, with the goal of sending a message or command, each time s/he wants to use the
BCI.
1.2 History of the Electroencephalograph
Although this technology has existed since the 1920s, it has only now come into the
limelight. Electroencephalography, or EEG, was first demonstrated in 1924 by Hans Berger, who
made the first recording of brain signals using rudimentary radio equipment to amplify weak
electric signals produced by the brain. His work as a neurologist put forth many early
speculations on the device he called an elektrenkephalogramm and its medical uses. He theorized
that it could diagnose brain disorders and diseases, and this proves to be its most useful
application even today. As his work carried on with future scientists such as Edgar Adrian and
W. Gray Walter, people realized that not only could this amazing machine diagnose brain
tumors, but it could also provide much-needed medical insight into the inner complexities of the
brain. This was the first true brain-computer interface ever invented; because it provides direct
feedback from the brain, it shows promising results that could address the limitations put forth by EMG.
1.3 Brain Functions & EEG: How Does it Work?
EEG works by analyzing electrical signals present on the scalp that accompany thoughts, and
this begins deep within the brain itself. The brain is made up of billions of neurons, which are
specialized cells that can be electrically or chemically stimulated. These neurons are said to be
the lowest unit of information processing; everything above them in the hierarchy of the mind is
based upon these structures. In terms of electrical circuits, the objective of neurons within the
brain can be compared to a feedback loop; the information is continually processed by gathering
data from external sensors. A neuron's function is to gather information through input from other neurons, process it based on external stimuli, and pass it on to the next neuron through its output. In
essence, neurons gather data from a variety of sensory sources and create a low-resolution,
representative, simplified output; this is what allows everything from image recognition to sound
pinpointing to occur so effortlessly within the brain. Furthermore, this unique processing
paradigm also facilitates active learning, since different sensory inputs can be assigned different
levels of importance (or weights, as will be discussed later). Because of this, all information that
flows through the central nervous system is processed by neurons. Since neurons must process
all information, there are many different types of neurons; however, the different types can be
divided into three subgroups: motor, sensory, and interneurons. In order to communicate with
other neurons, a neuron has two vital structures that provide this means of communications:
axons and dendrites. A neuron may possess only one axon but may have many dendrites. The act
of communication between neurons occurs at junctions called synapses, and the use of chemical or electrical
stimulation at axon/dendrite locations is called neurotransmission. Neurons are able to generate
electrical currents through the release or intake of ions through the neuronal membrane; this
sudden upward or downward flux in current is called the action potential. Interneurons, whose
function is to relay and process information between different neurons, communicate using
neurotransmission at axon-dendrite (chemical synapse) or dendrite-dendrite (electrical synapse)
areas. Both chemical and electrical neurotransmission involve the exchange of ions to transfer
information; whereas in an electrical circuit electricity flows through wires, neuronal charges
take place at these axon/dendrite sites. Neurons' ability to "speak" to each other comes from
this exchanging of electrical currents which serve to transport information through the neurons
until the destination, usually the spinal cord, is reached. Because neurons process all information
that flows in or out of the nervous system, enough neurons can process the same information
such that the combined synapses are sufficient to produce a detectable electric current. This
detectable current is usually present by the time the information reaches the scalp, because
neuronal firing deep within the brain gathers enough neurons in a pyramid fashion as
information is processed. These currents are known as potentials because they require a reference
point within the brain itself in order to be equated properly. Potentials produced by the brain
fluctuate many times per second, producing recognizable variants of the sine wave; these waves
are similar to those of a heartbeat, but waves produced by the brain are present in several
different frequencies at the same time. These frequencies are: Delta (<4 Hz), Theta (4-8 Hz), Alpha (8-12 Hz), Beta (12-30 Hz), and Gamma (>30 Hz). The real mystery of Electroencephalography lies in the differentiation of these frequency bands. For example, alpha waves are usually only exhibited when there is no visual focal point for the user to focus on, such as when the eyes are closed. However, they may also be present during meditation or higher contemplation (colloquially, "daydreaming"), suggesting that they are not simply "scanning" or "resting" waves, like the white noise on a TV. The level of activity in the above frequency ranges accurately reflects the user's state of mind or level of consciousness. For
instance, spikes in the Alpha range show a relaxed state of mind, while activity in the Beta
range shows the user to be alert or active, and coincidentally also shows use of motor control
neurons. Use of this feature can be seen anywhere, from the diagnosis of a coma to lie-detector
tests.
1.4 Inner Workings of the EEG
The Electroencephalograph measures electrical activity in these frequencies, making it possible
to associate certain patterns in the waves with functions of the brain. In order to detect the
currents, surface electrodes are placed on the scalp above key regions of the brain. These regions,
or lobes, are also responsible for different aspects of the psychological makeup, such as logic and
motor control. Electrodes are placed according to the International 10-20 System, a standardized
mapping of key scalp areas; the number of electrodes in a medical-grade EEG ranges from 19 to
as many as 256. The more electrodes there are, the higher the resolution is, and thoughts that originate from deeper regions of the brain can be detected more accurately. A high-power
amplifier then increases the signal strength so that it can be analyzed by a receiver connected to a
computer, which analyzes and sorts data. However, no two people are alike, and even the same
person could experience a change in brainwave patterns as they age. Together, these problems represent a manufacturing nightmare; if there are one million users, one million versions of the
software must be made! These problems are easily overcome, though, with the use of learning
software, which automatically tailors itself to the user's thoughts, as is described in the following
sections.
1.4.1 Hardware
The ability to analyze and recognize patterns in brain activity is, arguably, the cornerstone of the
Electroencephalograph. To date, no other machine can achieve this resolution while maintaining
a time-signal relationship. That is, recordings can be analyzed in real time, whereas an MRI, for
example, would require time on an order of minutes between activity and recognition. Using this
unique feature, the EEG uses specialized signal processing software to recognize and
differentiate patterns in the brain's electrical activity. Because EEG measures potentials on the
scalp, it requires a predefined reference channel in order to gather any data at all. This is
because, in terms of electrical circuits, the potentials are the positive source of electricity, but the
reference channel acts as a "ground" to complete the circuit. The reference channel itself is defined
as a single electrode placed at a key region according to the 10-20 system that allows completion
of the circuit. In most medical EEGs, this is located on the bridge of the nose, but there are some
cases where it is located on the skull just behind the ear lobes.
1.4.2 Processing Software
However, here an issue is encountered: thought processing is not the only source of electrical
spikes present in the area surrounding the brain; blinks, jaw clenching, and flexing neck muscles
produce significantly higher voltages than the original brainwave data, causing any obtained data
to become false. These false readings, called artifacts, can render a study completely useless by
fooling the software into recognizing patterns that were not present originally. Nevertheless, the
EEG is one step ahead here as well. In the process of using the signal processor to sort data into
the frequency bands, simple limiting algorithms can be used to effectively clean up the data by
limiting input values within a reasonable range. Additionally, comparative algorithms can also be
implemented, which compare the spikes within one frequency band to those within another. This
is another effective method of parsing, or using only what is needed from the input data. This method of parsing is achievable because of the unique visible-across-all-frequencies aspect of the brain's electrical activity. In other words, if there is a spike in the Beta band, there will also be a similar spike in the Theta band, however small it may be. This is not because of the brain itself,
but because of the fashion of neuronal processing that allows several thoughts to be exhibited at
the same time. Despite this cleanup of the data, the brainwaves themselves fluctuate at a rate too
fast to be processed and recognized accurately by software, thereby causing all signals to seem
identical. In order to counteract this, an EEG also implements other functions in the software to
smooth the signal. Usually, some sort of lossy data compression is implemented to create a
representative signal; for instance, LVQ (Learning Vector Quantization, which is discussed later)
and several modern audio compressors make use of this concept. Additionally, rudimentary epoching of the signal can be performed at either a fixed or weighted interval (weights determined by an interval-calculating function, also discussed in a later section). Furthermore, other functions such as linear detrending can be utilized to compensate for fluctuations in hardware readings. Next, various
real-time filters and algorithms must be applied in order to make the data easier to read and map
signals to their respective frequency bands. First, a temporal filter is applied; its job is to use predetermined algorithms to help parse the data and further remove embedded artifacts. The temporal filter uses known filters such as the Butterworth and Chebyshev filters (that
presumably have been determined in previous experiments carried out by others) to carry out this
process. A temporal filter also provides output signals only between predefined frequency
parameters, such as only those in the Beta (12-30 Hz) band, thereby further removing artifacts
that might have caused higher-than-normal spikes in the data. Additionally, a spatial filter is
applied to specify the location of data channels (relative to the reference voltage [defined
earlier]) and channelize it. The filter works by applying matrices in a linear equation to generate
output channels that are combinations of input channels. Spatial filters greatly simplify the
amount of processing required during pattern classification, as higher-level data and non-
sensorimotor regions can be effectively ignored. Finally, an identifier function maintains
arbitrary information about the current acquisition scheme, usually for the purposes of
experiment replication and subsequent analyses; the only time this is used is when a
configuration file is written, as described in the next section. Yet more processing methods rely
on FFTs, or Fast Fourier Transforms, to differentiate between various frequency-related
constituents; as the transform removes the temporal axis, it is not well-suited to online use, and
thus will not be discussed further (with the exception of short-term FTs, which are in fact used in
several processing modules). The last step in this process involves the training of the software
that allows it to recognize patterns in the data. This adaptation to each signal is essential, because
no two users possess the same brain or exhibit the same type of electrical activity. Therefore, a profile that is unique to each user must be created in order to tell the software what it must "look for." This profile is created through the use of a trainer function, which takes a sample of
data elapsed over a small period of time, applies the above filters, and saves it to a configuration
file as a control group for later recognition. This configuration file is created by telling
the user to imagine or have a thought that would trigger an end result. The software then maps
this configuration file, along with others that have been created, to their specific triggered events.
For example, when training the software to recognize a "make fist" pattern, the user would
imagine closing his/her fingers for, say, 5 seconds, and the software would create a configuration
file containing the signal and mapping. That way, when the user starts an online EEG session and
imagines making a fist, the pattern is recognized by comparing the signal against the configuration file, and a predetermined event is activated. At this point, in order to recognize
patterns with the greatest accuracy and speed, the software requires these new functions: a file writer/reader, a trainer, a processor, and an event stimulator (that allows communication with
third-party programs).
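The spatial filter's core operation described above, generating each output channel as a weighted combination of the input channels, can be sketched as a plain matrix-vector product. The following C function is an illustrative sketch, not the project's actual code; the row-major matrix layout and all names are assumptions:

```c
#include <stddef.h>

/* Sketch of a spatial filter as a linear map: each output channel is a
   weighted combination of the input channels, out = M * in, where M is
   a (rows x cols) filter matrix stored row-major. Illustrative only. */
void spatial_filter(const double *M, const double *in, double *out,
                    size_t rows, size_t cols) {
    for (size_t r = 0; r < rows; r++) {
        double acc = 0.0;
        for (size_t c = 0; c < cols; c++)
            acc += M[r * cols + c] * in[c];
        out[r] = acc;
    }
}
```

With a matrix whose rows select or average particular electrodes, this reduces many raw channels to a few combined ones, which is exactly how non-sensorimotor regions can be de-emphasized before classification.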
1.5 External Applications
Once the EEG signal processing is complete, the stimulators (in the form of anything from a key
press to the execution of other software) can be used to effectively mind-control anything that
can be connected to the processing computer.
2. Literature Review & Project Background

Through research, it has been determined that EEG can, in fact, serve as a viable brain-computer
interface for recognizing sensorimotor signals, specifically those used for the manipulation of
limbs. However, the problem that plagues today's generation of EEG-controlled devices lies in the signal-processing portion of the system: hardware and software increase in sophistication and
reliability every day, but the basic hurdles of noise reduction, signal integrity, and pattern
detection remain ever-present obstacles. The focus of this project, then, is to improve said
software in such a way that a) minimizes the presence of noise in the signal while maintaining
integrity, and b) allows any arbitrarily chosen operator to achieve similar pattern detection
accuracy by instituting a profiling, trained program to perform the classification.
2.1 Pattern Recognition & Classification
At the heart of an EEG-based brain-computer interface is a pattern detection algorithm that
allows a program to recognize certain organizations within a signal that may represent some sort
of cognitive thought. In this study, the goal of the developed software was to institute the
capability to accurately recognize low-level, intent-based signals in the sensorimotor regions of
the brain, with the purpose of using this data to manipulate a robotic limb. Normally, classifiers
regulate some sort of class system in which to categorize incoming data and make decisions
based upon the location of the data within the system. In other words, two-dimensional classifiers, like the ones to be discussed in this paper, use dynamic tessellations to separate data
and make decisions based upon density and proximity to class boundaries. Although there is an
entire field of study devoted to pattern recognition today, the researcher limited the focus to two
popular algorithms used in many situations today, Linear Discriminant Analysis (LDA) and
Principal Component Analysis (PCA). Generally speaking, LDA and PCA utilize similar
categorization techniques, but the idiosyncrasies of each are what determine overall accuracy.
For instance, PCA goes about classifying data by feature trends, whereas LDA better identifies
data trends; because of this, LDA generally has greater decision ability due to improved
separation of classes. Due to this discrepancy, the researcher decided to implement both PCA
and LDA as classifiers in the design software, using PCA as the control.
2.1.1 LDA: How does it Work?
As described previously, the LDA classifier algorithm implements a system of classes to
categorize various objects, or data; the variability of data is denoted by variance. Thus, the
typical goal of any classifier is to maximize variance between classes (resulting in better
decision-making ability), minimize variance within classes (an indication of data uniformity and
accurate decisions), and to maximize overall variance. There are two established types of LDA
today, class-dependent and independent classifications. In this study, class-dependent
classification was utilized due to its increased decision-making ability and the inherent, unstable
nature of EEG signals that would render class-independent transformations useless. Data, in the
form of vectors, are stored in matrices, of which covariance matrices can then be calculated by
the following formulas (for a bi-class problem; this can be expanded to incorporate many classes,
as we see in the following sections). First, the within-class covariance of each class j:

$C_j = \frac{1}{m}\sum_{i=1}^{m}(x_i - \mu_j)(x_i - \mu_j)^T$

where m and n are given by the order of the input matrix. Then the within-class scatter matrix $S_w$, representing the variance within the classes, is computed as such:

$S_w = \sum_{j} p_j\, C_j$

where $p_j$ is the a priori probability of the given set. The overall mean $\mu$ is calculated using the a priori probabilities of each set (a priori is a method of deriving event probability; it dictates that, for any M equally likely events, the probability that a given event occurs is 1/M), and the between-class covariance is calculated:

$C_b = \sum_{j}(\mu_j - \mu)(\mu_j - \mu)^T$

where $\mu_j$ is the mean of class j and $\mu$ is the overall mean. Finally, the between-class scatter matrix can be computed as such, substituting the variables of $C_b$:

$S_b = \sum_{j} p_j\,(\mu_j - \mu)(\mu_j - \mu)^T$
Class-dependent LDA has a simple goal: to maximize the ratio of the between-class scatter matrix to the within-class scatter matrix:

$J = \frac{\det(S_b)}{\det(S_w)}$
[Figure: a two-class LDA transform. Source: outgraph.org]
LDA is often used in conjunction with a training sequence, so that classified patterns can be matched to a dictionary of patterns generated by the user. This is implemented by simply initializing a secondary transform to classify training data based on experimental information,
then comparing results to those of the primary transform during on-line use.
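As a much-simplified illustration of the scatter-ratio idea above, the hypothetical C function below computes a one-dimensional Fisher-style ratio for two classes of scalar data: the squared separation of the class means divided by the pooled spread within the classes. It is a sketch of the criterion only, not the matrix-based LDA used in the project, and all names are illustrative:

```c
#include <stddef.h>

/* Mean of a scalar data set. */
static double mean(const double *x, size_t n) {
    double s = 0.0;
    for (size_t i = 0; i < n; i++) s += x[i];
    return s / (double)n;
}

/* 1-D sketch of the LDA criterion: ratio of between-class scatter
   (separation of class means) to within-class scatter (spread inside
   each class). Larger values indicate easier separation. */
double fisher_ratio(const double *a, size_t na,
                    const double *b, size_t nb) {
    double ma = mean(a, na), mb = mean(b, nb);
    double sw = 0.0;
    for (size_t i = 0; i < na; i++) sw += (a[i] - ma) * (a[i] - ma);
    for (size_t i = 0; i < nb; i++) sw += (b[i] - mb) * (b[i] - mb);
    double sb = (ma - mb) * (ma - mb);
    return sb / sw;
}
```

Two tight clusters far apart yield a large ratio; overlapping clusters yield a small one, mirroring the det(Sb)/det(Sw) objective in the multi-dimensional case.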
2.2 Noise Filtration
Here, noise is defined as any arbitrary signal fluctuations that may or may not lead to the original
signal being compromised and the failure of subsequent processing modules to perform as
expected. As such, noise can be treated as a signal in and of itself, and can be isolated actively
from the original signal if its values can be predicted (by an equation of some sort). Therefore, a
second signal inversely proportional to the noise can be mathematically combined with the noisy
signal to produce a third, filtered signal - a technique known as convolution. Here, signals are
represented by simple trigonometric functions; assuming the sine wave is noise, a negative sine
wave can be convolved with the noisy signal, producing a standing wave. [Figure omitted.]
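The cancellation idea can be sketched in a few lines of C: if the noise term can be predicted sample-by-sample, adding its inverse to the corrupted signal recovers the original. This is an illustrative sketch assuming perfectly predictable additive noise, which real EEG noise is not:

```c
#include <stddef.h>

/* Sketch of cancelling a known additive noise term: the inverse of
   the predicted noise is combined with the corrupted samples,
   leaving (approximately) the original signal. Illustrative only. */
void cancel_noise(const double *noisy, const double *predicted_noise,
                  double *clean, size_t n) {
    for (size_t i = 0; i < n; i++)
        clean[i] = noisy[i] + (-predicted_noise[i]);
}
```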
There are several rudimentary, brute-force processing methods that are implemented in the
software of this project; these include temporal filters, spatial filters, and various signal
transformations to ease the difficulty of pattern detection. Temporal filters rely upon frequency-
attenuating linear filters (that is, some frequencies are rejected outright, without regard to a weight system), allowing some frequencies to pass into the filtered signal while blocking others entirely.
Many temporal filters today use either Chebyshev- or Butterworth-type filters; in this study, the
Butterworth filter was implemented in order to retain maximum uniformity in sensitivities for the
desired frequencies. Spatial filters, normally used in optics, are generally comprised of a
Gaussian filtering algorithm that reduces the filter's sensitivity to noise in the image. Since a signal can also be quantized into a series of vectors, such a filter can also be applied to EEG signals. In this project, the spatial filter is treated as a black box, since it does not change values among different operators, i.e., it does not rely upon a system of arbitrary weights. Finally,
simple signal transformations (Different from transforms, which indicate the literal
transformation of axes (i.e. frequency-time analysis)) such as squaring the signal improve the
accuracy with which later pattern detection can be carried out.
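The study itself uses Butterworth filters; as a far simpler stand-in that still illustrates a frequency-attenuating linear filter, the sketch below implements a first-order IIR low-pass (exponential smoothing), which attenuates fast fluctuations while passing slow ones. The function and parameter names are illustrative, not taken from the project:

```c
#include <stddef.h>

/* First-order IIR low-pass (exponential smoothing), a toy stand-in
   for the Butterworth filters named in the text: each output is a
   blend of the new sample and the previous output, so rapid
   fluctuations are attenuated. alpha in (0,1]; smaller alpha means
   stronger smoothing. */
void lowpass(const double *x, double *y, size_t n, double alpha) {
    if (n == 0) return;
    y[0] = x[0];
    for (size_t i = 1; i < n; i++)
        y[i] = alpha * x[i] + (1.0 - alpha) * y[i - 1];
}
```

A step input rises gradually rather than instantly in the output, which is the attenuation behavior that a properly designed band-pass generalizes to a chosen frequency band.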
2.2.1 The Learning Filter
The implementation of a system of weights (either fixed or continually changing) to perform
signal filtration is a concept that could potentially yield much more accurate results than brute-
force filters like those mentioned, and this is the core focus of this study. By improving learning
filters, a much higher accuracy of pattern detection can be achieved at later stages due to the
absence of signal-compromising noise; one such filter that is the primary focus of this study is
LVQ (Learning Vector Quantization).
2.2.2 Traditional Vector Quantization
Historically, VQ is known as a lossy (Certain values in the original signal are lost, depending on
the rate of quantization) form of data compression, using a series of vectors to represent several
values. At its most basic level, it is not anything more than a simple estimator, or rounding device. Incoming signals are split into packets, or quanta of vectors; these are the data that will then be grouped into different categories, or classes, based on the relative distance between their
values. A rudimentary explanation of the concepts behind VQ:
For the sake of simplicity, only two-dimensional VQs are discussed here. The size and bounds of
classes are ever-changing, and for each class, there is a single representative vector that is the
result of VQ compression. A simple diagram of a two-dimensional compression:
In this illustration, green dots represent individual points of data while the red points reflect the
overall representative vector of every class, also called codevectors (with the codebook
representing the set of all codevectors). Therefore, the ratio of codevectors to input vectors
determines the compression rate as well as signal integrity, with the goal being to reduce the
effect of noise while maintaining an adequate amount of precision. With traditional VQ
compressors, the weights used to determine the value of the codevector of a single class
(determined by the distance between individual data and class boundaries) may be initialized but
are fixed throughout the compression process. Therefore, the location of class boundaries, and by
extension the codevectors themselves, can be accurately predicted if the signal is already known.
Furthermore, because of this unchanging nature, traditional VQs are best suited to prerecorded
signals, since weights cannot be optimized as incoming signals fluctuate. Generally, VQ design
is most easily created through the use of a training sequence, without which many complex
integration calculations would have to be performed. The problem of designing an optimal
codebook has yet to be solved, as it is an NP-hard problem; suboptimal procedures have been
determined to facilitate the creation of codebooks, such as that formulated by Linde, Buzo, &
Gray, with the goal being minimizing mean signal distortion. The training sequence used to
satisfy the LBG VQ design usually consists of a large sample of the signal, preferably one that
encompasses all statistical tendencies of the source (for instance, in the case of EEG signals,
muscular artifacts and electrode impedance must be simulated in order to fully train the VQ).
This algorithm is an iterative process, solving the problem of the VQ design by using an
arbitrarily sourced codebook (found by the process of assigning an arbitrary value to a codevector, then splitting until the entire codebook is filled) as the initial training sequence.
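The assign-then-update step behind codebook design can be sketched in one dimension, in the spirit of an LBG/k-means style iteration (not the full LBG algorithm): each sample is mapped to its nearest codevector, and each codevector is then moved to the mean of its assigned samples. All names here are illustrative:

```c
#include <stddef.h>
#include <math.h>

/* Index of the codevector nearest to sample x. */
size_t nearest(const double *code, size_t k, double x) {
    size_t best = 0;
    for (size_t j = 1; j < k; j++)
        if (fabs(x - code[j]) < fabs(x - code[best])) best = j;
    return best;
}

/* One 1-D codebook refinement step: assign every sample to its
   nearest codevector, then move each codevector to the mean of its
   assigned samples. Sketch only; assumes k <= 16. */
void vq_update(double *code, size_t k, const double *x, size_t n) {
    double sum[16] = {0};
    size_t cnt[16] = {0};
    for (size_t i = 0; i < n; i++) {
        size_t j = nearest(code, k, x[i]);
        sum[j] += x[i];
        cnt[j]++;
    }
    for (size_t j = 0; j < k; j++)
        if (cnt[j] > 0) code[j] = sum[j] / (double)cnt[j];
}
```

Iterating this step drives down the mean distortion between samples and their representative codevectors, which is precisely the objective the LBG procedure pursues.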
2.2.3 Learning Vector Quantization (LVQ) and Artificial Neural Networks
As mentioned earlier, the system of weights is crucial to fine-tuning filtration procedures-
however, as the saying goes, no two are alike, and it is inevitable that such a system may fail for some users while performing well for others, or not at all. Therefore, it becomes necessary that the
weights (As mentioned earlier, those assigned to all data in a given class that ultimately
determine the value of the representative codevector) be variable, ensuring that resultant
codevectors represent a minimal percentage of noise in the signal; this is the crux of the entire
filtration procedure, as lossy data compression is used for the sole reason of minimizing the
representation of noise. Such a system can also be considered a Self-Organizing Map since it
creates a dynamic tessellation of class boundaries that in turn allow the positions of codevectors
to be infinitely variable.
In the case of LVQ, weights are actively modified by means of an artificial neural network
(ANN) also trained in a (normally) supervised procedure (i.e. the user presents a scenario then
gives an example of the best possible response). The point of using this type of approach is to
minimize the error between the VQ-reconstructed signal and the original signal, also called
distortion.
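The supervised weight adjustment described above can be sketched as a one-dimensional LVQ1-style update: the winning codevector moves toward a training sample when their class labels match and away when they differ. This is an illustrative sketch, not the project's implementation, and all names are assumptions:

```c
#include <stddef.h>

/* One LVQ1-style step (1-D sketch): find the codevector nearest to
   the labelled sample x, then attract it toward x if the class
   labels match, or repel it otherwise. lr is the learning rate. */
void lvq1_step(double *code, const int *code_class, size_t k,
               double x, int x_class, double lr) {
    size_t best = 0;
    for (size_t j = 1; j < k; j++) {
        double dj = (x - code[j]) * (x - code[j]);
        double db = (x - code[best]) * (x - code[best]);
        if (dj < db) best = j;
    }
    if (code_class[best] == x_class)
        code[best] += lr * (x - code[best]);   /* attract */
    else
        code[best] -= lr * (x - code[best]);   /* repel */
}
```

Repeated over a labelled training set, the attract/repel rule pulls class boundaries toward the supervised examples, which is how the ANN-driven weight modification reduces reconstruction distortion.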
2.2.3.1 Artificial Neural Networks (ANNs)
In order to understand how the weights of the VQ are modified actively, it is necessary to
possess a rudimentary knowledge of ANNs. This unique processing paradigm allows many
inputs of different types to be represented by a few, low-resolution data that can be used to
provide feedback data to other programs, in this case the weight array for a VQ. ANNs are based
heavily upon the architecture of the human brain itself, and by extension the biological neurons
that make up the gray matter. Each neuron in the network (biological or artificial) returns a
single value for any number of given input values, using a weight system of its own. For
instance, a neuron found in the human brain [figure omitted] follows this same pattern, using electrochemical neurotransmitters to perform the I/O functions
discussed earlier. Neurons can easily be programmatically modeled, since a program can be
coded to behave like a neural network, eliminating the need to construct individual digital
circuits. The inputs are given respective weights, and are then mathematically combined within
the neuron by a simple summation. A simple illustration of an artificial neuron:
And subsequent networks, sometimes following the popular feedforward fashion:
Akin to their biological counterparts, neurons are activated by the value of the summation,
using a set threshold value to determine the state. As such, it behaves in a binary fashion, giving
its output in terms of 0s and 1s, rendering any subsequent neurons practically useless. Therefore,
a simple sigmoid function is used to transform the activation value into an analog value that can
be used to ultimately set the weights for our VQ.
Where m is the mth neuron and n is the total number of neurons; x and w are input and weight, and a_m and v_m are
the activation value and return value, respectively
Hence, the process of calculating the return value of any given neuron is an iterative process, as
seen in the following C function:
/* requires <math.h> for expf(); n is the number of inputs */
float neuronCalc(int inputs[], int weights[], int n) {
    long a = 0;                               /* activation: weighted sum */
    for (int i = 0; i < n; i++)
        a += (long)inputs[i] * weights[i];
    return 1.0f / (1.0f + expf(-(float)a));   /* sigmoid return value v */
}
2.3 What's New
2.3.1 Creating an Unsupervised LVQ
It has been discussed earlier that an ANN must be initialized with an arbitrary sequence of
weights in order to begin the training process, and finally obtain the values of the VQ's weights
from solutions presented by the ANN. However, this process of initialization and adjustment,
though it can be automated (and is in most software nowadays), can take enormous amounts
of time to calculate due to the guess-and-check nature of the weight-setting process, especially if
the network contains a relatively large hidden layer (or multiple hidden layers). This also
contributes to the glaring inefficiency of systems that depend on VQ compression to
function, hence their being phased out within the past decade or so in favor of more
efficient Gaussian-filtering algorithms. After significant review of relevant literature, the
researcher hypothesized that it was possible to do away with the process entirely by
implementing a similar winner-take-all solution in weighting the ANN itself. In other words, it
could be possible to allow the network to construct the VQ dynamically (even as its own weights
are changing) and use the output signal as a feedback mechanism to nudge the entire ANN's
weights in a certain direction. By initiating such a process, the ANN could learn to recognize and
underrepresent noise in the resulting signal without any supervision whatsoever. Not only would
this be capable of yielding an LBG-optimal VQ design solution, but it would require
significantly less time to initialize.
Approaching the topic of signal heuristics, the researcher's solution again seems (theoretically)
to perform significantly greater noise reduction than traditional LVQs in cases where not all
noise is white noise. Traditional LVQs are initialized by choosing an arbitrary input vector and
then modifying the weights of the VQ itself by minimizing the mean squared distortion, which
can be predicted by the following formula:

D = \int p(x) \, \lVert x - r(x) \rVert^2 \, dx

Where x is a randomly chosen input vector, p(x) is the probability of choosing said vector, and r(x) is defined as the
VQ-reconstructed vector. The \lVert \cdot \rVert operator denotes the Euclidean norm of the resultant vector.
Then, weights in the VQ are optimized to minimize the value of D, and subsequently, individual
neurons also have their weights modified to reflect this change. However, this approach is not
only inefficient but also does not allow the heuristics of a given signal to be taken into
consideration. In other words, traditional LVQs like the one described above are oriented more
toward retaining signal integrity than toward the removal of noise: two approaches to signal
processing that seem similar but are in fact enormously different. This would not matter if
signal integrity could be maintained accurately using this method; it is the very fact that
signal integrity cannot be maintained without considering specific signal heuristics that
prevents today's LVQs from becoming widely used.
Again, the researcher's solution provides significant gains in this area: because the learning
paradigm utilized does not depend on choosing single arbitrary vectors to optimize individual
neurons, entire patterns and artifacts can be detected and compensated for with much greater
ease and efficiency. Also, this removes the probability factor p(x) from the distortion
calculation, allowing the LVQ to be implemented without respect to the amount of data in the
training signal. Because of this unique behavior, all training sessions can be performed live
should the need arise; such functionality may be extremely convenient when exact event
data in the original signal is not known.
2.3.2 How Does it Work?
To reiterate, the distortion is used to set the weights not of the VQ itself, nor of individual
neurons, but of the entire neural network; the coefficient of adjustment (termed here K_x for
each codevector x) varies according to the Euclidean distance between (i.e. the proximity of)
corresponding codevectors and the offending codevectors. The decay of K with regard to
codevector proximity is determined by a Gaussian-type distribution, of the general format:
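The distribution itself appeared as an image in the original; a standard Gaussian form consistent with the surrounding description would be the following, where d_x is the Euclidean distance to the offending codevector and the symbols mu and sigma are labeling assumptions:

```latex
K_x = \exp\!\left(-\frac{(d_x - \mu)^2}{2\sigma^2}\right)
```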
The mean equals 0 for a regular normal distribution, but optimal values can be found
experimentally; since the effect is not extremely great, it is left at 0 in this study.
Therefore, we use an approximately Gaussian, memoryless source of data as the original signal,
then intentionally convolve it with noise at random intervals characteristic of the target signal
(in this case EEG signals, which are known to exhibit both high-frequency long-term noise and
short-term high-amplitude artifacts). Following this, we feed it through the LVQ and use the
mean distortion between the VQ-rendered signal and the original source, multiplied by K_x, to
adjust the weights of neurons automatically. Iterating over this process produces a convenient
feedback loop that allows the ANN to learn efficiently and relatively quickly. Here, an
example source of original data is shown (a simple sinusoid is used for visual aid):
Allowing the noisy signal to be the input data for the VQ, the mean distortion is found by
measuring the Euclidean distance between the vectors and calculating the norm:
Therefore, the overall weight offset (termed here O_i, representing the offset for the neuron of
index i in the network matrix) for every neuron can be calculated using the following equation,
as formulated by the researcher:
Where r(x) is the generated codevector.
Finally, optimization success can easily be computed by the following formula (scaled in
Euclidean distance):
Where index i is at its maximum value (the total number of generated codevectors).
Following this, to further decrease ANN evolution time, it is possible to implement a control system such as
Linear Quadratic Regulation (LQR) to provide a second weight offset to the ANN. This could
minimize the time needed to achieve a minimum D_x. Therefore, the total weight offset w_i can
be calculated as follows, also formulated by the researcher:
Application of LDA Classification toward ANN Initialization
As described in the previous section, in order to carry out the novel training process stipulated
by the researcher, a source of noise characteristic of the source data is required; however, this
can be difficult to achieve without inadvertently presenting several forms of noise
simultaneously. This can occur due to hardware failure or arbitrary human error, so it is
important that each characteristic noise signal be represented in its entirety so as to properly
initialize the ANN weights. Here, a Linear Discriminant Analysis (LDA) classifier can be
trained using intentional noise, and the subsequent training files can be convolved with the
original signal to separate the noise component. This can then be used to provide accurate,
representative noise data to the ANN training process. In this respect, LDA can be used to
actively aid in artifact and noise suppression.
Furthermore, because of LDA's class tessellation system, it is possible to use the distance between
the classified data and the class boundary (also known as the hyperplane) as the coefficient of
matching, thereby yielding an analog value that can be used to control the joint positions of
robotic limbs in later experimentation.
2.4 Final Hypothesis
With the goal of obtaining greater pattern detection accuracy in EEG signals, is it possible to
train an LVQ automatically and more efficiently in a direct feedback loop from output signals in
a way that yields greater accuracy of codevector representation even in the presence of
significant noise and artifacts? Additionally, can LDA be used as a further weight offset in the
LVQ, allowing active noise and artifact suppression?
3. The Design
3.1 Purpose
This project sought to prove that a widely used medical device, the electroencephalograph, has
applications as a suitable brain-computer interface. This device, which is normally used to
diagnose various brain disorders and abnormal activity in hospitals, could potentially be applied
in the field of HCI as a means of brain-machine interfacing. Electroencephalography requires
relatively simple hardware; at its most basic level, it requires electrodes, amplifiers, and a
processing unit. The unit, which encompasses complex digital signal processing, is the heart of
the system and is what allows signals from such a complex, high-level device like the human
brain to be decomposed into low-level sensorimotor commands. As promising as it may sound,
EEG-based brain-computer interfaces have suffered greatly in popularity and support due to their
immense complexity and hardware-related problems; the two major hurdles that have been
plaguing the EEG for the last few decades are noise filtration and accurate pattern recognition.
These two ever-present obstacles have caused scientists and physicians alike to shy away from
this technology, which could be the most informative direct brain-computer interface to date.
This project seeks to improve the usability of EEG hardware in the field of assistive
technologies, 1) by the synthesis of custom software that improves the efficiency and
accuracy of existing software through the use of novel processing methods, and 2) by
demonstrating that EEG technology can be used as a navigation device in the conventional
user interfaces.
Goals
1. To generate a program that can sufficiently parse, sort, analyze, and recognize patterns in
data (based on user configuration files) obtained from an Electroencephalograph (EEG),
using the novel methods prescribed by the researcher in previous sections.
2. To demonstrate, as a proof of concept, that EEG has real application in Human-Computer
Interaction with conventional user interfaces, and that building user interfaces based solely
on EEG would be promising.
3.2 Design Criteria: Software Synthesis
The interfacing software, which is written in C and executed in OpenViBE and EEGLAB, must
conform to the following design criteria, which have been stated with real-world application in
the field of prosthetics in mind.
1. Accuracy: although no numerical value is assigned to this criterion, the accuracy of the
pattern detection program must be high enough that the number of false positives is
minimized; there must also be a noticeable improvement over the control (PCA and
temporal filters only).
2. Ease of use: the software must allow the analysis of behavioral patterns in the signal
through several brief training sessions.
3. Applicability: the above criteria must be combined in the optimal configuration that
provides a robust, intuitive software interface.
3.3 Required Materials
- A signal-processing environment or GUI for grouping code. In this study, OpenViBE and
EEGLAB (with the Neural Network Toolbox, which contains the libraries necessary to
perform LVQ), both free open-source GUIs, were used in conjunction to execute code
and perform signal processing.
- A laptop computer with the processing power required to run OpenViBE and EEGLAB.
- An EEG device with at least 10 channels; the Emotiv Epoc, a 14-channel consumer
headset, was used here. Saline solution and electrode contacts are also required, as is the
Epoc software, for use as a control group later on. It is preferable to use an EEG device
that has an OpenViBE driver.
- A source of EEG datasets; these must include subjects performing some sort of motor
control, and extensive experimental information from the source is required in order for
the training modules to function properly.
3.4 End User
The primary audience for the information gleaned from this project will be those who are
interested in making convenient user interfaces in BCI.
3.5 Experimental Background & Design
OpenViBE is a GUI for grouping several programs together in a fashion that causes
EEG signals to pass through each one before returning a result. The entities into which code is
grouped are called modules within the program; thus, stringing together one or more modules
allows the signal to go through various stages of processing. Because of this, it was fairly
easy to use OpenViBE live; one module within OV that allows devices to be
connected and share data is called the Acquisition Server. EEGLAB offers an encapsulated GUI
from which files can be executed using buttons or scripts; this functionality is demonstrated later
during signal processing. Furthermore, because many programs to execute the
mathematical processing functions described earlier have already been created within EEGLAB
and OV, the synthesis portion of the study consists of manipulating and editing existing code to
reflect the researcher's hypothesis (i.e. modifying/replacing the functions used for weight-setting
the ANN).
As mentioned previously, both OpenViBE and EEGLAB were used to perform processing. The
reason for this lies in the requirements of the processing software, as described in The Current
Focus. To summarize, the software requires: several spatial and temporal filters to perform
preprocessing; file readers and writers to read EEG datasets and training data and to create
experimental information; an LDA-based classification system with a custom output for ANN
weight-setting, as well as outputs for measuring intra-class Euclidean distances; an LVQ utilizing
the novel training paradigms and weight-offset formulas; and finally, methods of exchanging data
between the two programs as well as via serial port. The following sections describe the general
flow of data between the two programs.
3.5.1 Normal Real-Time Use
Due to OpenViBE's existing implementation of drivers for the Emotiv Epoc headset, OV's
Acquisition Server was used to gather data in real time from the device. The string of
programming in OV's UI then channels the streamed data into various temporal and spatial filters
(preprocessing); from here, the data is sent to EEGLAB to undergo the LVQ compression
process. Since there was no way of directly streaming data between the two programs at the
time of writing, OV simply uses a script to dump 16-byte files of streamed data into an
allocated folder. A corresponding script in EEGLAB reads each of these files, which contain
vital experimental information, in succession, deleting them as they are read. The LVQ-rendered
signal from EEGLAB then returns to OpenViBE through the same dumping-and-reading
process. Because of this method of transferring data and the classification lag, the projected
lag between signal input and classification is about 15 ms. Once the signal is read by OV again, it
goes through the LDA classification process, which yields two values: the classification state and
the distance from the data to the LDA hyperplane. Finally, these values are transferred via OV's
built-in VRPN server to a VRPN client that is used to navigate the mouse cursor.
3.5.2 Training Sequence
During the training sequence, the flow of data through the program is significantly different;
since the LVQ has not been set up or trained, the data never reaches the EEGLAB processing
portion of the system. First, the source of noise is collected in order to train the LVQ: the user is
requested to remain calm and still and simply act normally. At this point, the signal only goes
through preprocessing (temporal/spatial filtering). Then, the user is asked to simulate a series of
possible artifacts, such as muscular artifacts from clenching the jaw or blinking. After a series of
successive trials, the rest signals and similar artifacts are averaged to provide representative
signals. Then, the rest signal is convolved with the artifacted signal to provide information
about the noise only; this noise is then convolved with a sample source of clean, generated data to
initiate the training sequence of the LVQ. Following this, the formulas used to calculate neuronal
weight offset, as described in the section What's New, are applied, and the training process runs
until the end of the source. A flowchart showing this process:
3.5.3 Experimental Variables
The experimental procedure consists of two portions: simulation testing using prerecorded EEG
datasets from various institutions, namely the BCI Competition (see Sources), and real-time
testing with the researcher as the subject.

                      Control                       Manipulated
Software simulation   PCA-based processing system   Custom LDA/LVQ-based processing system
Real-time testing     Proprietary software suite    Custom LDA/LVQ-based processing system

As seen above, there are two different controls; this is due to the fact that the PCA-based system
returns several errors when used in conjunction with the built-in Epoc driver on the Acquisition
Server.
3.6 Methods
For the sake of simplicity, the following section is divided into two parts: Software Synthesis &
Observation and Simulation.
Software synthesis summarizes the creation of software as well as several details and
justifications that were not mentioned earlier.
Software Synthesis & Observation
1. Start OpenViBE Designer, found in C:\xxxx\Program Files\openvibe\.
2. To begin a processing scenario, a source of data is required. This can be accomplished
through the use of the Acquisition Client, GDF File Reader, or the Generic Stream Reader
module. The Acquisition Client is meant for real-time online processing and the
Generic Stream Reader only works with OpenViBE (.ov) files, so the CSV File Reader will
be used for now.
3. Select the source file to be read by double-clicking on the box to open its attributes and
clicking the Browse button. In this project, the source of EEG data was found in the public
domain (cited at the end of this paper).
4. Next, an Identity module is used in order to mix the streams of data from different sources
into a single output signal. In addition, the Identity box also stores redundant experiment
information for different sources so that a complete file can be written for later data
retention. Connect the output signal stream from the Generic Stream Reader to the Identity
box. In order to initialize the visualization in the latter part of this program, an XML
Stimulation Scenario Player is required, and is provided with OpenViBE. Open its attributes
and browse to select the stimulation scenario, also provided with OpenViBE. Tie its output
stimulations to the Identity box as well.
5. A reference channel is needed in order to receive any data, because this serves as the ground
(negative) for the electric potentials generated on the scalp. Therefore, drag it into the
scenario window and tie its input to the Identity module's output. Open its attributes and
select the appropriate channel (corresponding to the source of EEG data). In this case,
channel 2 was used, which is the Nz channel, the electrode that rests on the bridge of the
nose. As explained earlier, the reference, or base, channel is usually located on the bridge of
the nose or on the skull behind the earlobes.
6. In order to reduce the required processing power and gather data from only the
sensorimotor cortex, a Channel Selector is needed. Drag it into the window and tie its input
to the reference channel. Open the attributes and select the channels to be used. Any
channels that correspond with electrode placement over the sensorimotor cortex of the
brain can be used. However, for simplicity's sake, all channels except the reference channel
were selected. The scenario should now consist of what is seen in Fig. 1.1.
7. A Spatial Filter is now required in order to mix the channels into a number smaller than was
given in the input. Here, its function is to arrange the data and compact it from ten channels
to two so that later processing and feedback become much less complicated to achieve.
Additionally, it implements a filter called a surface Laplacian filter that improves the spatial
resolution of the signal; in other words, it removes noise in relation to the reference channel.
The mixing of channels occurs using this equation:

a_k = \sum_j S_{jk} \, x_j

Where a_k is the kth output channel, x_j is the jth input channel, and S_{jk} is the coefficient for the jth
input channel and kth output channel in the spatial filter matrix.
Drag it into the window and tie its input to the output of the Channel Selector module.
Change its attributes so that they match what is seen in Fig. 1.2. Although any number of
output channels less than ten is possible, the processing power requirement increases greatly;
also, a Signal Average module will be implemented later, defeating the purpose of an
increased number of channels. On a side note, portability will decrease as required
processing power increases, so this module is somewhat vital to this program.
8. Next, to parse the data of any unwanted artifacts, such as blinks and muscle twitches, a
frequency filter needs to be used to discard high spikes in the signal caused by false data.
Also, since this project focuses on motor control rather than other functions of the brain, the
data needs to be limited to the Beta band, which is associated with working functions of the
brain, namely the motor cortex. This frequency filter is known in OpenViBE as a Temporal
Filter, because frequency is, in fact, based on time. Drag it into the scenario window and tie
its input to the output of the Spatial Filter module. Change its attributes so that they match
those in Fig. 1.3.
9. In EEG signals, many ripples could signify an event, but it is far easier and faster to analyze
a general trend (line of best fit) rather than the raw amplitude and frequency. Therefore, the
incoming signal needs to be split into time-based chunks so that a relative curve can be
realized. This process, in signal-processing software, is called epoching. OpenViBE
contains two different modules for epoching: Time Based Epoching and Stimulation Based
Epoching. Since the immediate goal is filtering and not feedback, time-based epoching is
used. Stimulation-based epoching is triggered by an external stimulus, such as a key press,
and serves a different purpose whose explanation is outside the scope of this paper. The
difference between a signal with epoching and a normal signal is shown in Fig. 1.4. Drag
the Time Based Epoching module into the scenario window, and configure its attributes so
they match those seen in Fig. 1.5.
10. In order to smooth the output signal further, averages of the epochs are taken, reverting the
visualization graph from a choppy stream to a smooth sine-like curve. OpenViBE includes
a module to accomplish this, called Epoch Average. Drag it into the scenario window
and modify its attributes. Testing on this specific box could not occur because of the
complexity of the software at this point, so values were left at the defaults. However, its
default connections are streamed matrices, which is acceptable for visualization modules
(such as signal display) but cannot connect to other modules that require a signal-type
input; the input and output therefore need to be configured to the signal type rather than the
streamed matrix of vectors type. This is accomplished by right-clicking the module and
configuring the outputs to the signal type in the dialog box that appears, as seen in Fig. 1.6.
Doing so will change both the input and output signal type. Open the attributes and change
the averaging type to moving epoch average, if this has not been done already; this type of
averaging is necessary because the signal is not stationary. Tie its input to the output of the
Time Based Epoching module.
11. Next, to show greater differentiation between two different patterns, the signal is squared.
This has the effect of making the space between epoch points much clearer than before, as
seen in Fig. 1.7. It also has the added benefit of making the streamed signal positive, paving
the way for later equations required to process the signal (since a logarithm of the signal will
be taken later, a negative signal could produce an error message or a generally flat signal).
There is a generic module in OpenViBE for passing the signal through equations, called
Simple DSP. Drag this into the scenario window and, after opening the attributes, enter an
equation that squares the signal:
12. To take an average of everything done so far in terms of processing the signal, drag the
Signal Average module into the window and tie its input to the output of the Simple DSP
module. This is advantageous because it allows still greater differentiation between
behavioral patterns in the streamed signal and more advanced EEG synchronicity.
13. Using another Simple DSP module, the logarithm of the signal is taken. Drag the Simple
DSP module into the window, open its attributes, and enter an equation that takes the
logarithm of the signal:
14. Now the matrix stream coming from the Simple DSP box, now a signal, must be converted
into unit vectors so that classification can occur. The Feature Aggregator module does this
for us.
15. Implement the CSV File Writer here, using the custom timing script that allows file
dumping. [EEGLAB LVQ processing is not described here since it is only a GUI; for further
details, see the attached computer code.]
16. In the same scenario, implement another CSV reader that allows reading of the EEGLAB-
dumped files; also bring in experimental information through its module.
17. Here, LDA is instituted. The feature vectors from the previous module allow for easy
classification of the streamed signal through the use of a module that compares incoming
vectors against a previously constituted user configuration file. This configuration file,
stored in the source of the designing platform (OpenViBE), consists of a recording of
vectors over a set period of time, creating a control group, as it were. Such a module, called
the Classifier Processor in OpenViBE, uses Linear Discriminant Analysis, or LDA, as a
simple yet efficient method of comparison and discrimination of patterns against the user
configuration. LDA operates by looking for specific features between the vectors through
analysis of class probabilities, assuming that the probability that any point in time of the
signal has a certain value is normally distributed, as seen in Fig. 1.8. In addition, it is also
assumed that the covariances of the two features are equal. From here, Bayes' Theorem
shows that:
Where P is the probability that object x belongs to group i or j. From here, the LDA
formula is derived:
The Classifier Processor module in OpenViBE makes use of LDA to recognize patterns in
the incoming signal. Drag the module into the scenario window, and load the previously
created configuration file into the attributes. Set the class labels and the classifier as shown
in Fig. 1.9.
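The Bayes and LDA equations here were images in the original and cannot be recovered exactly; as a reconstruction, the standard two-class forms consistent with the text (normal class densities with equal covariance Sigma and class means mu_i, mu_j) are:

```latex
P(i \mid x) = \frac{P(x \mid i)\,P(i)}{P(x \mid i)\,P(i) + P(x \mid j)\,P(j)}
```

from which the linear discriminant follows:

```latex
\delta_i(x) = x^{\top}\Sigma^{-1}\mu_i - \tfrac{1}{2}\,\mu_i^{\top}\Sigma^{-1}\mu_i + \ln P(i)
```

with x assigned to whichever class yields the larger value of delta.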
18. Finally, the visualization process begins. First, instantiate the XML Stimulation Scenario
Player and tie its output to the stimulation input on the Identity module implemented at the
beginning of the scenario. This module provides cues by means of left or right arrows and
stimulates the last module in the scenario to provide visual feedback. The timing and
animation of the cues are provided in the C:\xxxx\openvibe\share\openvibe-
plugins\stimulation\graz_stimulation.xml file, courtesy of an earlier experiment
conducted by the Graz University of Technology. Lastly, a Graz Visualization module,
native to OpenViBE, is used at the end of the scenario; it is stimulated by the XML
Stimulation Scenario Player module and receives signals in the form of a streamed matrix
from the Classifier Processor module. See Fig. 1.10 for the resulting scenario.
19. To run the scenario, click the play button located at the top of the scenario window. Due to
the way the previous Graz University experiment was completed, the Graz Visualization
window will start the feedback process at 00:40. The threshold for determining success or
failure is the end of the visible x-axis line in either direction.
20. Perform training as prescribed earlier, and then run the datasets through the trained scenario.
Each of the 26 motor-imagery datasets (containing specific experimental information,
including goals, event times, etc.) were run 4 times, for a total of 104 trials in simulation.
Each dataset was restricted to 15 events, for a total of 1560 candidate events during the trial
process.
21. Set up the PCA-based processing scenario and repeat the experimentation process.
22. Finally, the VRPN server is employed to communicate with the module that moves the
cursor.
Fig. 1.1
Fig. 1.2
Fig. 1.3
Fig. 1.4
Fig. 1.5
Fig. 1.6
Fig. 1.7
Fig. 1.8
Fig. 1.9
Fig. 1.10
Simulation
The entire procedure is simulated in MATLAB, following the methods described for usage in
OpenViBE. The simulation illustrates each of these procedures individually. The code for
the simulation is as follows:
Initiating execution:
function executebci
S.fh = figure('units','pixels',...
'position',[800 500 230 100],...
'menubar','none',...
'numbertitle','off',...
'name','BCI',...
'resize','off','color','w');
S.pb(1) = uicontrol('style','push',...
'units','pixels',...
'position',[10 10 100 30],...
'fontsize',14,...
'string','Training');
S.pb(2) = uicontrol('style','push',...
'units','pixels',...
'position',[120 10 100 30],...
'fontsize',14,...
'string','Simulation');
S.txt1 = uicontrol('style','text',...
'unit','pix',...
'position',[30 70 170 21],...
'string','BCI Simulation Project',...
'backgroundcolor','w',...
'fontsize',12);
set(S.fh,'CloseRequestFcn',{@winclose});
set(S.pb(:),'callback',{@pb_call,S})
csvwrite('flagbits.txt',[0 0 0 0 0 0 0 0 0 0]);
function [] = pb_call(varargin)
if varargin{1}==S.pb(2)
eval('bci_simulation');
elseif varargin{1}==S.pb(1)
eval('bci_training');
end
end
function [] = winclose(varargin)
delete('flagbits.txt');
delete(S.fh);
end
end
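The launcher above initialises a ten-element flag file (`flagbits.txt`) that the training and simulation modules share, and deletes it when the window closes. A minimal Python sketch of the same flag-file lifecycle (the file name comes from the listing; the helper names are mine):

```python
import csv
import os

FLAG_FILE = "flagbits.txt"  # same flag file the MATLAB launcher writes

def open_session():
    """Create the flag file with ten cleared bits, as the launcher does."""
    with open(FLAG_FILE, "w", newline="") as f:
        csv.writer(f).writerow([0] * 10)

def close_session():
    """Remove the flag file when the session window closes."""
    if os.path.exists(FLAG_FILE):
        os.remove(FLAG_FILE)

open_session()
with open(FLAG_FILE) as f:
    bits = [int(v) for v in f.read().strip().split(",")]
close_session()
print(bits)  # [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```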
Training module:
function []= bci_training
flag.input=0;
flag.epoch=0;
Y_OFFSET=-50;
scr_size=get(0,'screensize');
S.fh = figure('units','pixels',...
'position',scr_size,...
'menubar','none',...
'name','BCI-Training',...
'numbertitle','off',...
'resize','off','color',[0.6 0.8 0.8]);
S.dspls = uicontrol('style','list',...
'unit','pix',...
'position',[30 Y_OFFSET+150 180 180],...
'min',0,'max',2,...
'fontsize',14,...
'string',{'Quadratic';'Logarithmic'});
S.dsptx = uicontrol('style','tex',...
'unit','pix',...
'position',[30 Y_OFFSET+350 40 20],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','DSP');
S.fltls = uicontrol('style','list',...
'unit','pix',...
'position',[270 Y_OFFSET+150 180 180],...
'min',0,'max',2,...
'fontsize',14,...
'string',{'Spatial Filtering'; 'Temporal Filtering'});
S.flttx = uicontrol('style','tex',...
'unit','pix',...
'position',[270 Y_OFFSET+350 80 20],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Filtering');
S.clsls = uicontrol('style','list',...
'unit','pix',...
'position',[720 Y_OFFSET+150 180 180],...
'min',0,'max',2,...
'fontsize',14,...
'string',{'LVQ'});
S.clstx = uicontrol('style','tex',...
'unit','pix',...
'position',[720 Y_OFFSET+350 110 20],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Classification');
S.eptx = uicontrol('style','tex',...
'unit','pix',...
'position',[470 Y_OFFSET+340 80 20],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Epoching');
S.epinput = uicontrol('style','edit',...
'units','pix',...
'position',[470 Y_OFFSET+300 190 30],... %'min',0,'max',2,... % min/max > 1 enables multiline edits
'string','',...
'fontweight','bold',... %'horizontalalign','left',...
'fontsize',11);
S.eppb = uicontrol('style','push',...
'units','pix',...
'position',[600 Y_OFFSET+275 60 20],...
'fontsize',10,...
'string','Get');
S.avgpb = uicontrol('style','push',...
'units','pix',...
'position',[520 Y_OFFSET+200 150 30],...
'fontsize',10,...
'string','Averaging');
BGCOLOR = get(gcf, 'color');
S.frame1= uicontrol('Units','points', ...
'BackgroundColor',BGCOLOR, ...
'ListboxTop',0, ...
'HorizontalAlignment', 'left',...
'Position',[400 Y_OFFSET+300 600 250], ...
'Style','frame', ...
'Tag','Frame1');
S.bcitext = uicontrol('style','tex',...
'unit','pix',...
'position',[400 Y_OFFSET+730 510 50],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',15,'fontweight','bold',...
'string','Brain Computer Interface Training Module ');
S.inputtx = uicontrol('style','tex',...
'unit','pix',...
'position',[10 Y_OFFSET+700 110 20],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Input : ');
S.inputed = uicontrol('style','edit',...
'units','pix',...
'position',[100 Y_OFFSET+700 190 30],... %'min',0,'max',2,... % min/max > 1 enables multiline edits
'string','',...
'fontweight','bold',... %'horizontalalign','left',...
'fontsize',11);
S.inputpb = uicontrol('style','push',...
'units','pix',...
'position',[300 Y_OFFSET+700 150 30],...
'fontsize',14,...
'string','Get');
S.indata = uicontrol('style','tex','horizontalalign','left',...
'unit','pix',...
'position',[10 Y_OFFSET+430 510 200],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Load Data ');
S.bg = uibuttongroup('units','pix',...
'pos',[190 Y_OFFSET+650 260 40]);
S.rd(1) = uicontrol(S.bg,...
'style','rad',...
'unit','pix',...
'position',[10 5 70 30],...
'string','Training 1');
S.rd(2) = uicontrol(S.bg,...
'style','rad',...
'unit','pix',...
'position',[90 5 70 30],...
'string','Training 2');
S.rd(3) = uicontrol(S.bg,...
'style','rad',...
'unit','pix',...
'position',[170 5 70 30],...
'string','Training 3');
framepos = get(S.frame1,'position');
framexoffset=framepos(1)+150;
frameyoffset=framepos(2)+80;
S.ftext(1) = uicontrol('style','tex','horizontalalign','left',...
'unit','pix',...
'position',[framexoffset+10 frameyoffset+300 300 20],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Tracking the process... ');
S.ftext(2) = uicontrol('style','tex','horizontalalign','left',...
'unit','pix',...
'position',[framexoffset+10 frameyoffset+210 300 80],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','DSP : none ');
S.ftext(3) = uicontrol('style','tex','horizontalalign','left',...
'unit','pix',...
'position',[framexoffset+10 frameyoffset+130 300 80],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Filtering : none ');
S.ftext(4) = uicontrol('style','tex','horizontalalign','left',...
'unit','pix',...
'position',[framexoffset+10 frameyoffset+50 300 80],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Epoching : none ');
S.ftext(5) = uicontrol('style','tex','horizontalalign','left',...
'unit','pix',...
'position',[framexoffset+10 frameyoffset+30 300 20],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Averaging : none ');
S.ftext(6) = uicontrol('style','tex','horizontalalign','left',...
'unit','pix',...
'position',[framexoffset+320 frameyoffset+210 400 100],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','Classification :');
S.ftext(7) = uicontrol('style','tex','horizontalalign','left',...
'unit','pix',...
'position',[framexoffset+320 frameyoffset+110 400 100],...
'backgroundcolor',get(S.fh,'color'),...
'fontsize',12,'fontweight','bold',...
'string','');
set(S.inputpb,'callback',{@inputpb_call,S,flag});
end
function [] = inputpb_call(varargin)
[S,flag]=varargin{[3,4]};
ipstr=get(S.inputed,'string');
if isempty(ipstr)
msgbox('Enter valid Input');
elseif strcmp(ipstr,'Subject1_2D.mat')
load(ipstr);
D.lb1(:,1)=LeftBackward1(:,5);D.lb1(:,2)=LeftBackward1(:,6);D.lb1(:,3)=LeftBackward1(:,13);
D.lb1(:,4)=LeftBackward1(:,18);
D.lb2(:,1)=LeftBackward2(:,5);D.lb2(:,2)=LeftBackward2(:,6);D.lb2(:,3)=LeftBackward2(:,13);
D.lb2(:,4)=LeftBackward2(:,18);
D.lb3(:,1)=LeftBackward3(:,5);D.lb3(:,2)=LeftBackward3(:,6);D.lb3(:,3)=LeftBackward3(:,13);
D.lb3(:,4)=LeftBackward3(:,18);
D.lf1(:,1)=LeftForward1(:,5);D.lf1(:,2)=LeftForward1(:,6);D.lf1(:,3)=LeftForward1(:,13);D.lf1(:
,4)=LeftForward1(:,18);
D.lf2(:,1)=LeftForward2(:,5);D.lf2(:,2)=LeftForward2(:,6);D.lf2(:,3)=LeftForward2(:,13);D.lf2(:
,4)=LeftForward2(:,18);
D.lf3(:,1)=LeftForward3(:,5);D.lf3(:,2)=LeftForward3(:,6);D.lf3(:,3)=LeftForward3(:,13);D.lf3(:
,4)=LeftForward3(:,18);
D.rb1(:,1)=RightBackward1(:,5);D.rb1(:,2)=RightBackward1(:,6);D.rb1(:,3)=RightBackward1(:,
13);D.rb1(:,4)=RightBackward1(:,18);
D.rb2(:,1)=RightBackward2(:,5);D.rb2(:,2)=RightBackward2(:,6);D.rb2(:,3)=RightBackward2(:,
13);D.rb2(:,4)=RightBackward2(:,18);
D.rb3(:,1)=RightBackward3(:,5);D.rb3(:,2)=RightBackward3(:,6);D.rb3(:,3)=RightBackward3(:,
13);D.rb3(:,4)=RightBackward3(:,18);
D.rf1(:,1)=RightForward1(:,5);D.rf1(:,2)=RightForward1(:,6);D.rf1(:,3)=RightForward1(:,13);D.
rf1(:,4)=RightForward1(:,18);
D.rf2(:,1)=RightForward2(:,5);D.rf2(:,2)=RightForward2(:,6);D.rf2(:,3)=RightForward2(:,13);D.
rf2(:,4)=RightForward2(:,18);
D.rf3(:,1)=RightForward3(:,5);D.rf3(:,2)=RightForward3(:,6);D.rf3(:,3)=RightForward3(:,13);D.
rf3(:,4)=RightForward3(:,18);
set(S.dspls,'callback',{@dsppb_call,S,flag,D});
set(S.fltls,'callback',{@fltpb_call,S,flag,D});
set(S.indata,'string',strrep(readme,' For further information, please contact [email protected] .',' '),'fontsize',9);
end
end
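The loader above copies MATLAB columns 5, 6, 13 and 18 of each movement matrix (the four EEG channels of interest) into the working structure `D`. A minimal Python/NumPy sketch of the same channel selection follows; the synthetic `trial` array and the `extract_channels` helper are illustrative, not from the thesis (note that NumPy indexing is 0-based, so MATLAB column 5 becomes index 4):

```python
import numpy as np

# Channels of interest, as 1-based MATLAB column numbers from the loader.
MATLAB_COLS = [5, 6, 13, 18]

def extract_channels(trial_matrix):
    """Return only the four channels the training module uses."""
    idx = [c - 1 for c in MATLAB_COLS]  # convert to 0-based indices
    return trial_matrix[:, idx]

# Example with a synthetic trial: 100 samples x 20 recorded channels.
trial = np.arange(100 * 20).reshape(100, 20)
channels = extract_channels(trial)
print(channels.shape)  # (100, 4)
```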
function [] = dsppb_call(varargin)
S=varargin{3};
D=varargin{5};
L=get(S.dspls,{'string','value'});
switch L{2}
case 1
switch findobj(get(S.bg,'selectedobject'))
case S.rd(1)
sizelb1=size(D.lb1);
for i= 1 : sizelb1(2)
set(S.ftext(2),'string','DSP : Applying.....');
pause(0.1);
for j = 1 : sizelb1(1)
DSP.lb1(j,i)=D.lb1(j,i)*D.lb1(j,i);
DSP.lf1(j,i)=D.lf1(j,i)*D.lf1(j,i);
DSP.rb1(j,i)=D.rb1(j,i)*D.rb1(j,i);
DSP.rf1(j,i)=D.rf1(j,i)*D.rf1(j,i);
end
end %msgbox('training 1');
case S.rd(2)
sizelb1=size(D.lb1);
for i= 1 : sizelb1(2)
set(S.ftext(2),'string','DSP : Applying.....');
pause(0.1);
for j = 1 : sizelb1(1)
DSP.lb1(j,i)=D.lb1(j,i)*D.lb1(j,i);
DSP.lf1(j,i)=D.lf1(j,i)*D.lf1(j,i);
DSP.rb1(j,i)=D.rb1(j,i)*D.rb1(j,i);
DSP.rf1(j,i)=D.rf1(j,i)*D.rf1(j,i);
DSP.lb2(j,i)=D.lb2(j,i)*D.lb2(j,i);
DSP.lf2(j,i)=D.lf2(j,i)*D.lf2(j,i);
DSP.rb2(j,i)=D.rb2(j,i)*D.rb2(j,i);
DSP.rf2(j,i)=D.rf2(j,i)*D.rf2(j,i);
end
end
% msgbox('training 2');
case S.rd(3)
sizelb1=size(D.lb1);
for i= 1 : sizelb1(2)
set(S.ftext(2),'string','DSP : Applying.....');
pause(0.1);
for j = 1 : sizelb1(1)
DSP.lb1(j,i)=D.lb1(j,i)*D.lb1(j,i);
DSP.lf1(j,i)=D.lf1(j,i)*D.lf1(j,i);
DSP.rb1(j,i)=D.rb1(j,i)*D.rb1(j,i);
DSP.rf1(j,i)=D.rf1(j,i)*D.rf1(j,i);
DSP.lb2(j,i)=D.lb2(j,i)*D.lb2(j,i);
DSP.lf2(j,i)=D.lf2(j,i)*D.lf2(j,i);
DSP.rb2(j,i)=D.rb2(j,i)*D.rb2(j,i);
DSP.rf2(j,i)=D.rf2(j,i)*D.rf2(j,i);
DSP.lb3(j,i)=D.lb3(j,i)*D.lb3(j,i);
DSP.lf3(j,i)=D.lf3(j,i)*D.lf3(j,i);
DSP.rb3(j,i)=D.rb3(j,i)*D.rb3(j,i);
DSP.rf3(j,i)=D.rf3(j,i)*D.rf3(j,i);
end
end
% msgbox('training 3');
otherwise
disp('wrong option') % Very unlikely I think.
end
set(S.ftext(2),'string',{'DSP : Done';'Function used : Quadratic'});
%msgbox('F(x)=x*x');
case 2
switch findobj(get(S.bg,'selectedobject'))
case S.rd(1)
sizelb1=size(D.lb1);
for i= 1 : sizelb1(2)
set(S.ftext(2),'string','DSP : Applying.....');
pause(0.1);
for j = 1 : sizelb1(1)
DSP.lb1(j,i)=log(D.lb1(j,i)+1);
DSP.lf1(j,i)=log(D.lf1(j,i)+1);
DSP.rb1(j,i)=log(D.rb1(j,i)+1);
DSP.rf1(j,i)=log(D.rf1(j,i)+1);
end
end %msgbox('training 1');
case S.rd(2)
sizelb1=size(D.lb1);
for i= 1 : sizelb1(2)
set(S.ftext(2),'string','DSP : Applying.....');
pause(0.1);
for j = 1 : sizelb1(1)
DSP.lb1(j,i)=log(D.lb1(j,i)+1);
DSP.lf1(j,i)=log(D.lf1(j,i)+1);
DSP.rb1(j,i)=log(D.rb1(j,i)+1);
DSP.rf1(j,i)=log(D.rf1(j,i)+1);
DSP.lb2(j,i)=log(D.lb2(j,i)+1);
DSP.lf2(j,i)=log(D.lf2(j,i)+1);
DSP.rb2(j,i)=log(D.rb2(j,i)+1);
DSP.rf2(j,i)=log(D.rf2(j,i)+1);
end
end
% msgbox('training 2');
case S.rd(3)
sizelb1=size(D.lb1);
for i= 1 : sizelb1(2)
set(S.ftext(2),'string','DSP : Applying.....');
pause(0.1);
for j = 1 : sizelb1(1)
DSP.lb1(j,i)=log(D.lb1(j,i)+1);
DSP.lf1(j,i)=log(D.lf1(j,i)+1);
DSP.rb1(j,i)=log(D.rb1(j,i)+1);
DSP.rf1(j,i)=log(D.rf1(j,i)+1);
DSP.lb2(j,i)=log(D.lb2(j,i)+1);
DSP.lf2(j,i)=log(D.lf2(j,i)+1);
DSP.rb2(j,i)=log(D.rb2(j,i)+1);
DSP.rf2(j,i)=log(D.rf2(j,i)+1);
DSP.lb3(j,i)=log(D.lb3(j,i)+1);
DSP.lf3(j,i)=log(D.lf3(j,i)+1);
DSP.rb3(j,i)=log(D.rb3(j,i)+1);
DSP.rf3(j,i)=log(D.rf3(j,i)+1);
end
end
% msgbox('training 3');
otherwise
disp('wrong option') % Very unlikely I think.
end
set(S.ftext(2),'string',{'DSP : Done';'Function used : Logarithmic'});
%msgbox('F(x)=log(x+1)');
otherwise
msgbox('select correct option');
end
set(S.fltls,'callback',{@fltpb_call,S,flag,D,DSP});
assignin('base','dsp',DSP);
end
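The two DSP options above apply an element-wise transform to every channel sample: Quadratic uses F(x) = x·x and Logarithmic uses F(x) = log(x + 1). An illustrative Python/NumPy sketch of the same transforms (the function names are mine, not from the listing):

```python
import numpy as np

def dsp_quadratic(x):
    """Quadratic DSP stage: F(x) = x^2, applied element-wise."""
    return np.square(x)

def dsp_logarithmic(x):
    """Logarithmic DSP stage: F(x) = log(x + 1), applied element-wise."""
    return np.log(x + 1)

signal = np.array([0.0, 1.0, 3.0])
print(dsp_quadratic(signal))    # [0. 1. 9.]
print(dsp_logarithmic(signal))  # approximately [0, 0.693, 1.386]
```

Both transforms preserve the shape of the input matrix, so they can be swapped without changing the downstream filtering code.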
function []= fltpb_call(varargin)
S=varargin{3};
flag=varargin{4};
D=varargin{5};
DSP=varargin{6};
CO.val=[0.1 0.2 0.3 0.4];
L=get(S.fltls,{'string','value'});
switch L{2}
case 1
sizeofdata=size(DSP.lb1);
switch findobj(get(S.bg,'selectedobject'))
case S.rd(1)
for k = 1 : sizeofdata(2)
set(S.ftext(3),'string','Filtering : Applying.....');
pause(0.1);
for j = 1 : sizeofdata(1)
FLT.lb1(j,k)= (1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+...
(3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4)));
FLT.lf1(j,k)= (1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+...
(3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4)));
FLT.rb1(j,k)= (1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+...
(3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4)));
FLT.rf1(j,k)= (1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+...
(3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4)));
end
end
case S.rd(2)
for k = 1 : sizeofdata(2)
set(S.ftext(3),'string','Filtering : Applying.....');
pause(0.1);
for j = 1 : sizeofdata(1)
FLT.lb1(j,k)= (1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+...
(3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4)));
FLT.lf1(j,k)= (1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+...
(3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4)));
FLT.rb1(j,k)= (1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+...
(3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4)));
FLT.rf1(j,k)= (1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+...
(3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4)));
FLT.lb2(j,k)= (1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+...
(3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.val(k)*DSP.lb2(j,4)));
FLT.lf2(j,k)= (1*(CO.val(k)*DSP.lf2(j,1)))+(2*(CO.val(k)*DSP.lf2(j,2)))+...
(3*(CO.val(k)*DSP.lf2(j,3)))+(4*(CO.val(k)*DSP.lf2(j,4)));
FLT.rb2(j,k)= (1*(CO.val(k)*DSP.rb2(j,1)))+(2*(CO.val(k)*DSP.rb2(j,2)))+...
(3*(CO.val(k)*DSP.rb2(j,3)))+(4*(CO.val(k)*DSP.rb2(j,4)));
FLT.rf2(j,k)= (1*(CO.val(k)*DSP.rf2(j,1)))+(2*(CO.val(k)*DSP.rf2(j,2)))+...
(3*(CO.val(k)*DSP.rf2(j,3)))+(4*(CO.val(k)*DSP.rf2(j,4)));
end
end
case S.rd(3)
for k = 1 : sizeofdata(2)
set(S.ftext(3),'string','Filtering : Applying.....');
pause(0.1);
for j = 1 : sizeofdata(1)
FLT.lb1(j,k)= (1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+...
(3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4)));
FLT.lf1(j,k)= (1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+...
(3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4)));
FLT.rb1(j,k)= (1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+...
(3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4)));
FLT.rf1(j,k)= (1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+...
(3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4)));
FLT.lb2(j,k)= (1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+...
(3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.val(k)*DSP.lb2(j,4)));
FLT.lf2(j,k)= (1*(CO.val(k)*DSP.lf2(j,1)))+(2*(CO.val(k)*DSP.lf2(j,2)))+...
(3*(CO.val(k)*DSP.lf2(j,3)))+(4*(CO.val(k)*DSP.lf2(j,4)));
FLT.rb2(j,k)= (1*(CO.val(k)*DSP.rb2(j,1)))+(2*(CO.val(k)*DSP.rb2(j,2)))+...
(3*(CO.val(k)*DSP.rb2(j,3)))+(4*(CO.val(k)*DSP.rb2(j,4)));
FLT.rf2(j,k)= (1*(CO.val(k)*DSP.rf2(j,1)))+(2*(CO.val(k)*DSP.rf2(j,2)))+...
(3*(CO.val(k)*DSP.rf2(j,3)))+(4*(CO.val(k)*DSP.rf2(j,4)));
FLT.lb3(j,k)= (1*(CO.val(k)*DSP.lb3(j,1)))+(2*(CO.val(k)*DSP.lb3(j,2)))+... % reads lb3 (the original read lb2 here)
(3*(CO.val(k)*DSP.lb3(j,3)))+(4*(CO.val(k)*DSP.lb3(j,4)));
FLT.lf3(j,k)= (1*(CO.val(k)*DSP.lf3(j,1)))+(2*(CO.val(k)*DSP.lf3(j,2)))+...
(3*(CO.val(k)*DSP.lf3(j,3)))+(4*(CO.val(k)*DSP.lf3(j,4)));
FLT.rb3(j,k)= (1*(CO.val(k)*DSP.rb3(j,1)))+(2*(CO.val(k)*DSP.rb3(j,2)))+...
(3*(CO.val(k)*DSP.rb3(j,3)))+(4*(CO.val(k)*DSP.rb3(j,4)));
FLT.rf3(j,k)= (1*(CO.val(k)*DSP.rf3(j,1)))+(2*(CO.val(k)*DSP.rf3(j,2)))+...
(3*(CO.val(k)*DSP.rf3(j,3)))+(4*(CO.val(k)*DSP.rf3(j,4)));
end
end
end
set(S.ftext(3),'string',{'Filtering : Done';'Function used : Spatial filtering'});
%msgbox('spatial filtering');
case 2
sizeofdata=size(DSP.lb1);
switch findobj(get(S.bg,'selectedobject'))
case S.rd(1)
for k = 1 : sizeofdata(2)
set(S.ftext(3),'string','Filtering : Applying.....');
pause(0.1);
for j = 1 : sizeofdata(1)
FLT.lb1(j,k)= ((1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+...
(3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4))))/4;
FLT.lf1(j,k)= ((1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+...
(3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4))))/4;
FLT.rb1(j,k)= ((1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+...
(3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4))))/4;
FLT.rf1(j,k)= ((1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+...
(3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4))))/4;
end
end
case S.rd(2)
for k = 1 : sizeofdata(2)
set(S.ftext(3),'string','Filtering : Applying.....');
pause(0.1);
for j = 1 : sizeofdata(1)
FLT.lb1(j,k)= ((1*(CO.val(k)*DSP.lb1(j,1)))+(2*(CO.val(k)*DSP.lb1(j,2)))+...
(3*(CO.val(k)*DSP.lb1(j,3)))+(4*(CO.val(k)*DSP.lb1(j,4))))/4;
FLT.lf1(j,k)= ((1*(CO.val(k)*DSP.lf1(j,1)))+(2*(CO.val(k)*DSP.lf1(j,2)))+...
(3*(CO.val(k)*DSP.lf1(j,3)))+(4*(CO.val(k)*DSP.lf1(j,4))))/4;
FLT.rb1(j,k)= ((1*(CO.val(k)*DSP.rb1(j,1)))+(2*(CO.val(k)*DSP.rb1(j,2)))+...
(3*(CO.val(k)*DSP.rb1(j,3)))+(4*(CO.val(k)*DSP.rb1(j,4))))/4;
FLT.rf1(j,k)= ((1*(CO.val(k)*DSP.rf1(j,1)))+(2*(CO.val(k)*DSP.rf1(j,2)))+...
(3*(CO.val(k)*DSP.rf1(j,3)))+(4*(CO.val(k)*DSP.rf1(j,4))))/4;
FLT.lb2(j,k)= ((1*(CO.val(k)*DSP.lb2(j,1)))+(2*(CO.val(k)*DSP.lb2(j,2)))+...
(3*(CO.val(k)*DSP.lb2(j,3)))+(4*(CO.