Neural Control of Drone Using
Steady-State Visually Evoked Potentials
Sam Kinn
Department of Electrical Engineering
University of Washington
Seattle, Washington
Email: samrkinn@uw.edu
Jon Lundlee
Department of Electrical Engineering
University of Washington
Seattle, Washington
Email: jlundlee@uw.edu
Reni Magbagbeola
Department of Electrical Engineering
and Department of Physics
University of Washington
Seattle, Washington
Email: reni magbagbeola@yahoo.co.uk
Michael Schober
Department of Electrical Engineering
University of Washington
Seattle, Washington
Email: schober4@uw.edu
Abstract—This paper describes the design and implementation
of a brain-computer interface (BCI) system that utilizes steady-
state visually evoked potentials (SSVEP) to control a drone. The
system uses recursive least squares (RLS) adaptive filtering and
canonical correlation analysis to interpret the signal acquired
via electroencephalography (EEG). The system was able to
discriminate between 5 different frequencies in the range of 13-
17 Hz and map their detection to five commands for the drone:
forward, up, down, yaw left, and yaw right. The implementation
was successful and the user was able to use neural signals to provide
the desired command to the drone, usually within 5-10 seconds.
Index Terms—brain-computer interface (BCI), steady-state
visually evoked potentials (SSVEP), electroencephalography
(EEG), canonical correlation analysis, recursive least squares
(RLS) adaptive filter, drone control
I. INTRODUCTION
This report documents the design and implementation of
a brain-computer interface (BCI) that utilizes steady-state
visually evoked potentials from the occipital lobe to control
a drone, specifically a Solo 3DR quadcopter. It was used as
the project for the Systems, Controls, and Robotics capstone
(EE 448-9) at the University of Washington Department of
Electrical Engineering.
The motivation for the pursuit of this project, in addition to
applying what had been learned in previous EE coursework,
was to provide a proof-of-concept of methods that use brain
signals to control external devices. While this has been shown
to be possible in plenty of previous experiments using a
variety of different methods for acquiring useful signals, the
design of these systems has largely been limited to researchers
with many years of experience in neural engineering and
computational neuroscience or similar fields. The goal was
to demonstrate that one of these systems could be developed
by undergraduate engineering students using the methods and
techniques used in recent research literature.
This report is broken down into five sections (not including
the introduction): background, design, results, discussion and
conclusion, and acknowledgment. The background section
describes the underlying science behind the acquisition of the
neural signals used, as well as the theory for filtering and
classifying those signals. It also details the communication
protocols and the capabilities of the drone being used. The
design section describes how these different components were
put together and the reasoning behind the choices that were
made, including issues that arose over the course of the project
and how they might be addressed given additional time.
The results section demonstrates how the system performed,
from accuracy of signal classification to the rate at which
commands are sent to the drone and the drone’s responses to
said commands. The discussion and conclusion sections summarize
the project and provide some insight into future directions
this project can take and what could potentially have been
accomplished with an extra quarter or two or with more
financial assistance.
II. BACKGROUND
A. Electroencephalography (EEG)
Electroencephalography (EEG) is a very well-known and
popular method for measuring neural signals. Electrodes are
placed at many points on a patient's scalp to pick up elec-
trical field changes due to neural activity. These points are
standardized to make analysis and comparison easier. It is a
noninvasive technique for monitoring brain activity [1]. Other
methods of measuring include electrocorticography (ECoG),
magnetoencephalography (MEG), and functional magnetic
resonance imaging (fMRI). For the purposes of our project,
EEG compares favorably to these other methods for a number
of reasons. Compared to the other modalities, EEG is cheaper
and more attainable. For instance, MEG requires a large
machine in a magnetically-shielded room and ECoG requires
surgical implantation of the electrodes onto the surface of
the brain. The downside is that EEG tends to be noisier and
provide less information than these other modalities.
Typically, around 20 electrodes are used, which was the
number of electrodes used with our particular EEG system.
However, for clinical applications, larger arrays of recording
leads are used to obtain more information and better spatial
resolution. The addition of electrodes can generally improve
the signal and information about the signal, but these denser
electrode arrays can be more expensive and more difficult to
use. It is generally ideal to use as few electrodes as possible.
The accepted standard for electrode placement for EEG is
the 10-20 system, which is shown in Figure 1 [2].
Fig. 1. A diagram of the 10-20 system. The gray circles indicate electrode
positions introduced in later additions to the system, while the black circles
represent the positions of the original 10-20 system, which are the ones used
for this project.
B. Steady-state visually evoked potentials (SSVEP)
Steady-state visually evoked potentials (SSVEP) are unconscious,
natural responses to visual stimuli, in which the power of
neural activity increases at a specific frequency when a visual
stimulus (such as a flickering light) oscillates at that frequency.
These signals can be quite strong relative to the rest
of the frequency spectrum and have recently been viewed as
useful signals for application in BCIs [3].
C. Recursive least-squares (RLS) adaptive filtering
An adaptive filter is a type of filter that utilizes a feedback
loop to adapt according to the error until said error is min-
imized according to a given criterion. It refines its transfer
function using this error signal, which is generally known
and separable from the received signal. One example of the
application of adaptive filtering is in the removal of artifacts
from an electrocardiograph (ECG) signal in order to attain
more accurate arrhythmia detection [4].
A recursive least-squares (RLS) filter is a type of adaptive
filter, which aims to minimize the cost function associated with
the error. The error at each iteration n is computed as follows:
$$\varepsilon(n) = \sum_{i=1}^{n} \lambda^{n-i}\, e^{2}(i) = \sum_{i=1}^{n} \lambda^{n-i}\, \bigl(d(i) - y(i)\bigr)^{2} \qquad (1)$$
where d(i) is the desired response of the filter, y(i) is the
actual response, and e(i) = d(i) − y(i) is the error between the
two. λ is called the "forgetting factor" and determines how
much emphasis is placed on more recent samples relative to
earlier ones.
Figure 2 shows the block diagram of a general adaptive
filter.
Fig. 2. Block diagram of an adaptive filter. This specific example
is of a recursive least-squares filter, but most adaptive filters follow this
basic model. Source: http://ocw.mit.edu/courses/mechanical-engineering/
2-161-signal-processing-continuous-and-discrete-fall-2008/study-materials/
rls.pdf
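To make the update concrete, below is a minimal RLS sketch in Python with NumPy (the project's implementation was in MATLAB; the filter length, forgetting factor, and initialization constant here are illustrative choices, not the project's tuned values):

```python
import numpy as np

def rls_filter(x, d, M=8, lam=0.99, delta=1e3):
    """Recursive least-squares adaptive filter (illustrative sketch).

    x     reference input, e.g. an ECG artifact channel
    d     primary signal, e.g. a raw EEG channel
    M     number of filter taps
    lam   forgetting factor lambda, 0 << lam <= 1
    Returns the error signal e = d - y, which serves as the cleaned output.
    """
    N = len(x)
    w = np.zeros(M)                 # adaptive tap weights
    P = delta * np.eye(M)           # inverse correlation matrix estimate
    e = np.zeros(N)
    for n in range(M, N):
        u = x[n - M:n][::-1]        # most recent M reference samples
        k = P @ u / (lam + u @ P @ u)        # gain vector
        e[n] = d[n] - w @ u                  # a priori error
        w = w + k * e[n]                     # weight update
        P = (P - np.outer(k, u @ P)) / lam   # recursive inverse update
    return e
```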
D. Canonical correlation analysis (CCA)
Canonical correlation analysis (CCA) is a technique used to
compare the similarities between two multidimensional data
matrices, X and Y. This is accomplished by finding basis
vectors for each matrix that maximize the correlations between
the data matrices when projected onto these bases. Mathemat-
ically, this means maximizing the following function:
$$\rho = \frac{E\left[\mathbf{w}_X^{T}\,\mathbf{X}\,\mathbf{Y}^{T}\,\mathbf{w}_Y\right]}{\sqrt{E\left[\mathbf{w}_X^{T}\,\mathbf{X}\,\mathbf{X}^{T}\,\mathbf{w}_X\right]\,E\left[\mathbf{w}_Y^{T}\,\mathbf{Y}\,\mathbf{Y}^{T}\,\mathbf{w}_Y\right]}} \qquad (2)$$

$$= \frac{\mathbf{w}_X^{T}\,\mathbf{C}_{XY}\,\mathbf{w}_Y}{\sqrt{\mathbf{w}_X^{T}\,\mathbf{C}_{XX}\,\mathbf{w}_X\;\mathbf{w}_Y^{T}\,\mathbf{C}_{YY}\,\mathbf{w}_Y}} \qquad (3)$$
where C_XY = Cov(X, Y) is the cross-covariance of X and Y,
C_XX and C_YY are the corresponding auto-covariances, ρ is the
canonical correlation, and w_X and w_Y are the aforementioned
basis vectors.
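Numerically, the first canonical correlation can be obtained without forming the covariance matrices explicitly. Below is a minimal sketch in Python (the project used MATLAB; this QR/SVD route is the standard way canoncorr-style routines compute ρ):

```python
import numpy as np

def first_canonical_correlation(X, Y):
    """Largest canonical correlation between X (p x N) and Y (q x N),
    where rows are variables and columns are time samples."""
    Xc = X - X.mean(axis=1, keepdims=True)   # center each variable
    Yc = Y - Y.mean(axis=1, keepdims=True)
    Qx, _ = np.linalg.qr(Xc.T)               # orthonormal basis of X's span
    Qy, _ = np.linalg.qr(Yc.T)
    # Singular values of Qx^T Qy are exactly the canonical correlations.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]
```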
E. Solo 3DR Quadcopter
The Solo 3DR Quadcopter is an open core platform smart
quadcopter created by 3D Robotics. It has dual built-in 1 GHz
Linux companion computers that monitor 500 parameters of
flight data at a rate of 20 Hz. The Solo is able to be modified
by use of open source software and developer tools produced
by 3DR. It is also built with an accessory bay that allows
the Solo to be equipped with external devices such as LED
arrays, ballistic parachute systems, and external companion
computers.
F. Micro Air Vehicle Link (MAVLink)
MAVLink is a communication protocol for unmanned vehicles,
for inter-communication among a vehicle's subsystems, and for
ground control stations. Each packet consists of a start-of-frame
byte, a payload length, a packet sequence number, a system ID,
a component ID, a message ID, the payload, and a checksum of
the entire packet [5].
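The sketch below illustrates that frame layout in Python. It is a layout illustration only and is not wire-compatible: real MAVLink computes an X.25 CRC-16 over the frame (seeded with a per-message byte), whereas the checksum here is a placeholder.

```python
import struct

def build_mavlink_style_frame(seq, sysid, compid, msgid, payload):
    """Assemble a MAVLink v1.0-style frame: start-of-frame, payload
    length, sequence number, system ID, component ID, message ID,
    payload, and a 16-bit checksum (placeholder, not the real CRC)."""
    STX = 0xFE                                  # v1.0 start-of-frame byte
    header = struct.pack('<BBBBBB', STX, len(payload),
                         seq, sysid, compid, msgid)
    checked = header[1:] + payload              # checksum excludes STX
    checksum = sum(checked) & 0xFFFF            # placeholder checksum
    return header + payload + struct.pack('<H', checksum)

frame = build_mavlink_style_frame(seq=0, sysid=1, compid=1,
                                  msgid=0, payload=b'\x00' * 9)
```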
III. DESIGN
The design of this project begins with the high-level concept
in the form of a block diagram. Creating a neural controller
for an external, physical device provides an obvious separation
into design subsections. The first subsystem is the command
interpreter, which includes an EEG, a method for eliciting
SSVEP signals, and a classifier. The second is the external
device controller, which takes in a command and converts it to
a useful action while providing some sort of feedback to the
user. Figure 3 shows a high-level block diagram of the system
containing each of these subsystems.
Fig. 3. High-level block diagram of the neural control system.
From the diagram we can see the split between the subsystems,
as well as their common node. This node is essential for
developing a closed-loop system out of separate components: a
laptop that provides device feedback and visual stimuli to the
user while simultaneously sending information from the user
to the device.
A. Neural Command Subsystem Design
The main task of the neural command subsystem is to
interpret commands from the user’s neural signals as accu-
rately as possible. Commands are then displayed and sent
to a central control laptop. There are three key requirements
for obtaining and classifying SSVEP signals. First, we need
light stimuli in the 13-25 Hz range for optimal detection and
noise reduction [6]. Second, we need an EEG system capable of
recording and transmitting real-time data for analysis. Finally,
we need a signal processor that filters the signal and classifies
it as an integer command.
Five light-emitting diodes (LEDs) flashing at frequencies of
13, 14, 15, 16, and 17 Hz were chosen to act as visual stimuli
from which we could record neural activity using the 20-
electrode EEG system from B-Alert. When looking directly at
one of these LEDs, neural activity in the occipital lobe spikes
at the same frequency, allowing us to capture a distinguishable
signal (SSVEP). Figure 4 shows the arrangement of LEDs
used for acquiring SSVEPs; the LEDs were driven by an
Arduino so that the stimulus would run independently of the
rest of the system. Because this project focused on controlling
a drone outdoors, the LEDs needed to be bright enough to be
seen in daylight. We therefore used ultra-bright green LEDs,
since green appears brighter than other colors at the same power.
We used the B-Alert system specifically for its low electrode
impedance and real-time data capture capabilities which satis-
fied one of our key requirements for accurate signal analysis.
Fig. 4. LED holders showing the placement of the LEDs around the user's
laptop.
While the B-Alert system is more than capable of obtaining
low noise, accurate data, it cannot send data directly to
MATLAB in real time for data analysis. Fortunately there was
a simple solution: the lab streaming layer (LSL) system, with
its MATLAB interface and underlying C code, allowed us to
access the EEG data in real time via USB. The B-Alert application
within LSL allowed us to effectively interface the system with
the EEG software and export a live stream of the recording
data into MATLAB.
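For reference, the equivalent streaming pattern in Python with the pylsl bindings looks roughly like the sketch below (the project used LSL's MATLAB interface; the stream type and buffering details here are assumptions):

```python
from pylsl import StreamInlet, resolve_stream

# Discover the EEG stream advertised by the B-Alert LSL application.
streams = resolve_stream('type', 'EEG')
inlet = StreamInlet(streams[0])

while True:
    # Pull one multichannel EEG sample with its timestamp and
    # append it to a rolling buffer for filtering and CCA.
    sample, timestamp = inlet.pull_sample()
```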
Once the raw data has been acquired, it goes through the
typical pre-processing stages of a bandpass filter from 5-75
Hz and a notch filter at 60 Hz (to remove power-line
interference). Then, an RLS filter removes ECG-correlated
noise from the raw signal f(n), using an electrode that captures
the ECG as a reference. The filter follows Figure 2, where the
"desired" signal d(n) is the ECG signal and the "error" signal
e(n) is the filtered output used for classification. After applying
the adaptive filter to this data, we could see in the EEG output
the distinct spikes associated with SSVEP, corresponding to
specific LEDs at the correct frequencies. Figure 6 shows the
result of these filtering techniques on a signal from a user
looking at a 13 Hz flashing LED.
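A sketch of this pre-processing chain using Python and SciPy (the project implemented it in MATLAB; the sampling rate and filter orders below are assumptions):

```python
from scipy import signal

def preprocess(raw, fs=256.0):
    """Bandpass 5-75 Hz plus a 60 Hz power-line notch."""
    b, a = signal.butter(4, [5.0, 75.0], btype='bandpass', fs=fs)
    x = signal.filtfilt(b, a, raw)             # zero-phase bandpass
    b, a = signal.iirnotch(60.0, Q=30.0, fs=fs)
    return signal.filtfilt(b, a, x)            # zero-phase notch
```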
Canonical correlation analysis was used to correlate the
signal with a reference matrix for each frequency, consisting
of simple sine waves oscillating at that specific frequency
and its harmonics. Figure 5 shows an example of one of the
matrices used for comparison with the signal.
Fig. 5. Visualization of the matrix used in CCA for determining how well the
provided signal compares with a 13-Hz sine wave and its harmonics. Each
sine wave corresponds to a row in the matrix.
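A sketch of how such a reference matrix can be generated (Python; the report describes sine waves at the stimulus frequency and its harmonics, and the quadrature cosine rows added here are a common companion in SSVEP CCA, included as an assumption):

```python
import numpy as np

def reference_matrix(freq, fs=256.0, n_samples=512, n_harmonics=3):
    """Rows are sines (and cosines) at freq and its harmonics."""
    t = np.arange(n_samples) / fs
    rows = []
    for h in range(1, n_harmonics + 1):
        rows.append(np.sin(2 * np.pi * h * freq * t))
        rows.append(np.cos(2 * np.pi * h * freq * t))
    return np.vstack(rows)

Y13 = reference_matrix(13.0)   # reference set for the 13 Hz LED
```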
This resulted in a list of correlation values, one for each
frequency. If the standard deviation across the correlations
was high enough, the frequency with the highest correlation
was chosen as the command. This check ensured that there
truly was a command, and not just noise or a random
artifact. The standard deviation threshold was chosen based on
experiments comparing the standard deviations produced
when looking at a light versus looking at nothing in particular.
After commands were classified, they were sent over an IP
socket as a single character between '0' and '5' inclusive.
A '0' acted as an unsure or unfocused command and resulted
in no movement from the drone. When the commands are
received on the control laptop, they are written to a text file
until used by the external device or overwritten by a new
command.
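A sketch of the decision rule and the socket hand-off in Python; the threshold value, host address, and port are hypothetical placeholders standing in for the experimentally tuned values:

```python
import socket
import numpy as np

STD_THRESHOLD = 0.05   # hypothetical; the real value was tuned empirically

def classify(correlations):
    """Map the five CCA correlations to a command character '0'-'5'."""
    if np.std(correlations) < STD_THRESHOLD:
        return '0'                                 # unsure: no movement
    return str(int(np.argmax(correlations)) + 1)   # '1'..'5'

corrs = [0.21, 0.62, 0.25, 0.19, 0.23]   # example CCA output
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.connect(('192.168.1.10', 5555))     # control laptop (hypothetical)
sock.sendall(classify(corrs).encode())
```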
B. External Device Subsystem Design
Each external device that could potentially be controlled
by our neural command system has its own set of internal
control requirements. To keep our system general and easy
to use, the number of commands is limited to five active
commands and one null command. Whether we are working
with a drone, a wheelchair, or any other device, the number of
commands will not change. In our case we are working with a
drone. This
limited number of commands means that the device itself
is responsible for acting in a controlled, safe, and reliable
manner. Additionally, the device needs to provide some sort
of feedback to the user. This feedback can come in the form of
telemetry, video, or haptics, and it is necessary to close the
control loop.
Once the controller sees that there is data at the socket,
it retrieves the data and sends the initial velocity command
for the drone that corresponds to the numerical value. As a
safety measure, at least two identical consecutive commands
needed to be received before an action was taken. This means that
Fig. 6. Frequency spectrum of signal in occipital lobe when viewing LED
flashing at 13 Hz after processing using typical pre-processing techniques plus
RLS filtering.
an LED flashing at 13 Hz can be assigned a value of 1, which
is sent to the drone and corresponds to a positive velocity in
the X direction whenever two or more '1' commands arrive in
a row. The five directions associated with the five numerical
commands were up, down, forward, yaw left, and yaw right.
Using a combination of these commands the drone can fly in
all directions.
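The confirmation check and command mapping reduce to a small amount of state, sketched below in Python; send_velocity is a hypothetical stand-in for the MAVLink velocity setpoint actually sent to the Solo, and the velocity magnitudes are illustrative rather than the project's tuned values:

```python
# Velocity setpoints (vx, vy, vz, yaw_rate); NED convention, so -z is up.
COMMANDS = {
    '1': (1.0, 0.0,  0.0,   0.0),   # forward (+X)
    '2': (0.0, 0.0, -0.5,   0.0),   # up
    '3': (0.0, 0.0,  0.5,   0.0),   # down
    '4': (0.0, 0.0,  0.0, -45.0),   # yaw left
    '5': (0.0, 0.0,  0.0,  45.0),   # yaw right
}

def send_velocity(vx, vy, vz, yaw_rate):
    """Hypothetical stand-in for the MAVLink velocity setpoint message."""
    print('setpoint:', vx, vy, vz, yaw_rate)

last_cmd = None
def on_command(cmd):
    """Act only on the second consecutive occurrence of the same command."""
    global last_cmd
    if cmd in COMMANDS and cmd == last_cmd:
        send_velocity(*COMMANDS[cmd])
    last_cmd = cmd
```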
The drone's built-in drift control is disabled when the BCI
controller takes over and starts sending commands. Because we
needed a stable system, it was necessary to implement drift
control of our own. To do this, the controller was designed to
send maximum velocity values when data is seen at the socket
and to enter a velocity decay mode when no data is seen.
Built-in sensors allow real-time acquisition of flight data such
as acceleration and velocity. In decay mode, a loop repeatedly
reduces the current velocity on each iteration, and variables
can be changed to control the initial velocity and the rate of
decay. This makes it possible to control how far the drone flies
upon seeing data at the socket: the controller sets an initial
velocity and then brings the drone to a complete stop.
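A sketch of that decay loop in Python, with the decay rate, loop period, and stopping threshold exposed as the tunable variables the text describes (all values hypothetical):

```python
import time

DECAY = 0.85       # per-iteration velocity decay factor (tunable)
DT = 0.1           # loop period in seconds (tunable)
STOP_EPS = 0.05    # below this speed, command a full stop (m/s)

def decay_to_stop(v, send_velocity):
    """Shrink the commanded velocity each iteration until the drone stops."""
    while max(abs(c) for c in v) > STOP_EPS:
        v = tuple(c * DECAY for c in v)   # decay the current velocity
        send_velocity(*v)
        time.sleep(DT)
    send_velocity(0.0, 0.0, 0.0, 0.0)     # come to a complete stop
```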
In order to tune the values used for controlling the drone,
we developed our own simulation visualizer. In the simulation
we were able to hook a user wearing the EEG up to the control
laptop and have that user run through an obstacle course.
Based on this we decided on turning the drone 45 degrees
with each turn command and moving forward about 2 meters
with each forward command. The up
and down commands are tuned to move about 0.5 meters in
their respective directions.
Connected to the bottom front of the drone is a small GoPro
camera that is used to provide real-time visual feedback to the
user. Because the user is required to look at the flashing LEDs
with consistent focus, some sort of visual reference for the
drone was needed. While the drone sends telemetry feedback
to the internal controller for stabilization, camera data is sent
either through a video pipeline to the laptop or to a large
tablet. At the moment the video pipeline has a latency issue
that causes the video to lag by a few seconds; when the video
is sent to the large tablet, however, the lag nearly disappears.
IV. RESULTS
Our final system consisted of two laptop computers, a tablet,
an Arduino hooked up to 5 LEDs, and a B-Alert wet EEG
system. Figure 7 shows this final setup with a user wearing
the EEG focusing on different frequency flashing LEDs. With
everything in place and outdoors the EEG was able to pick
up the SSVEP signals and the CCA classifier was able to
accurately interpret the commands before sending them to the
control laptop. Once the commands were received the drone
was able to move forward, yaw left, yaw right, move up, and
move down without any outside assistance.
Fig. 7. Flashing LED lights.
Control over the drone outdoors was noticeably worse than
control indoors or in simulation. This was due to a drop in
classification accuracy stemming from bright natural light
interfering with the LED stimuli. Another possible issue was
that the safety redundancy demanded too much accuracy from
the CCA classifier to allow the drone to move naturally.
However, when the drone was given a command, it acted
exactly as we expected.
In general, it was difficult to get a reliable and consistent
value for the accuracy of the classifier, since there were so
many variables that could affect it. As mentioned previously,
natural lighting could interfere with the user's ability to
focus on the LEDs. There was also the issue of other stimuli,
such as people talking to the user, that would interfere with
the accuracy. But the system was set up in such a way that the
drone rarely, if ever, received and acted on a false command.
This was because every command that was sent required the
same confirmation command in order to act on it. It took 3.5
seconds for a reasonably accurate command to be processed
and sent to the drone. As a result, a focused user was able to
move the drone every 5-10 seconds, with the movement being
quicker if the user was sending the drone in a single direction.
Lastly, it should be mentioned that all code (both for
simulated and actual control), images, videos, plots, etc. will
Fig. 8. Drone flying while being controlled by the flashing-light system.
be available on GitHub in the following repository:
https://github.com/samrkinn/NeuroDrone. This repository will also
be the source of any updates to the project as they become
available, as well as more information than can be provided
in this report.
V. DISCUSSION
Over the course of the final quarter we learned many
techniques that we wish we could have implemented in this
project. With a few more weeks, or months, we would have
attempted to switch over to a dry electrode system, optimize
the code in C to run from a single laptop, and develop a
Q-learning algorithm to enhance the speed and accuracy of
signal classification.
The reason for switching to a dry electrode system is largely
that dry electrodes are cheaper and therefore more accessible,
as well as easier to assemble and use. The reason one was
not used here is that they are generally noisier and it can
be much harder to get a good signal. Extra time would have
allowed us to perfect the device on better equipment and then
apply the lessons and proofs of concept to the cheaper system.
Another future improvement would have been to utilize
parallel processing libraries in C to be able to perform all of the
acquisition, processing, and control on a single machine. This
would have been more efficient for obvious reasons and likely
would have also resulted in faster code since we would have
been using C where we had been using Python and MATLAB
in many of the functions.
A Q-Learning algorithm would vectorize the state of the
drone and the EEG signals into a few main features. Based on
these features we could train the algorithm to learn policies
for handling small chunks of data. For instance, we could set
one vectorized state of the EEG signal to include the singular
value decomposition values at 13 Hz, 14 Hz, 15 Hz, 16 Hz, and
17 Hz, along with a couple of their harmonics, the most recent
classification within the last second, and a few other features.
The algorithm would then attempt to
classify this vector with what it already knows. After showing
the user its guess, the user can then tell the learning algorithm
whether it was correct or incorrect and update the algorithm
with new information. Training sets can be used to increase the
speed of this process and in the end this classifier should be
able to work based on policies with smaller EEG data samples
at a high accuracy.
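A minimal sketch of the tabular Q-update such a classifier would build on (Python; the state discretization, table sizes, and reward scheme are hypothetical stand-ins for the SVD feature vector described above):

```python
import numpy as np

N_STATES, N_ACTIONS = 1024, 6       # discretized feature vectors x commands
ALPHA, GAMMA = 0.1, 0.9             # learning rate and discount factor
Q = np.zeros((N_STATES, N_ACTIONS))

def q_update(s, a, reward, s_next):
    """One Q-learning step; reward is +1/-1 from the user's feedback."""
    Q[s, a] += ALPHA * (reward + GAMMA * Q[s_next].max() - Q[s, a])
```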
When such a high-speed, accurate signal classifier exists we
will then be able to remove most of the safety constraints. This
will allow the user to fly the drone, or operate any other device,
with higher resolution. It would allow the user to potentially
have more commands, and thus more functionality and better
control of the drone.
VI. CONCLUSION
This project provided us the opportunity to work on the
still-developing technology of BCIs. As undergraduates with
limited research experience, we were able to successfully take
the methods and techniques used by neuroscience researchers
and implement a BCI to control the flight of a drone. It did
not take years, but it did require coming up to speed quickly
by studying recent literature in neuroscience and neural
engineering. SSVEP signals were obtained with the use of an
EEG headset and special software while a user looked at visual
stimuli. The SSVEP signals were then classified by their power
spectral densities through filtering techniques in MATLAB,
and a corresponding command was sent over a socket
to a drone controller. Once this process had been completed,
we were able to control this external device, the drone, with
the use of neural signals.
ACKNOWLEDGMENT
We would like to take the time to give thanks to the
many people and organizations that have helped us succeed
in completing this project:
• The UW Department of Electrical Engineering and Col-
lege of Engineering for providing funding to help get our
project off the ground.
• Advanced Brain Monitoring for allowing us to use B-
Alert X24 EEG System hardware and software.
• Rajesh Rao for lending the B-Alert X24 EEG System to
us for 6 months and Dimitrios Gklezakos for training us
in how to use it.
• 3D Robotics for giving us a university discount on the
drone.
• And finally the UW Biorobotics Lab, the Center for
Sensorimotor Neural Engineering, Howard Chizeck,
Niveditha Kalavakonda, and Christine Shon for technical
help and support.
REFERENCES
[1] Niedermeyer E, da Silva FL (2004). Electroencephalography: Basic
Principles, Clinical Applications, and Related Fields. Lippincott Williams
& Wilkins. ISBN 0-7817-5126-8.
[2] Oostenveld R, Praamstra P (2001). The five percent electrode system
for high-resolution EEG and ERP measurements. Clinical Neurophys-
iology 112: 713–719. doi:10.1016/S1388-2457(00)00527-7. CiteSeerX:
10.1.1.116.7379.
[3] Beverina F, Palmas G, Silvoni S, Piccione F, Giove S (2003). User
adaptive BCIs: SSVEP and P300 based interfaces. PsychNology Journal
1: 331–354.
[4] Thakor NV, Zhu YS (1991). Applications of adaptive filtering to ECG
analysis: noise cancellation and arrhythmia detection. IEEE Transactions
on Biomedical Engineering 38(8): 785–794. doi:10.1109/10.83591. ISSN
0018-9294.
[5] Meier L (2009). MAVLink Micro Air Vehicle Communication Protocol.
http://qgroundcontrol.org/mavlink/start.
[6] Kuś R, Duszyk A, Milanowski P, Łabęcki M, Bierzyńska M, Radzikowska
Z, et al. (2013). On the Quantification of SSVEP Frequency Responses
in Human EEG in Realistic BCI Conditions. PLoS ONE 8(10): e77536.
doi:10.1371/journal.pone.0077536.