NeuroExplore - Visualizing Brain Patterns
A Physiological Computing InfoVis
Daniel José dos Santos Rocha
Thesis to obtain the Master of Science Degree in
Information Systems and Computer Engineering
Supervisors: Prof. Sandra Pereira Gama
Prof. Hugo Alexandre Teixeira Duarte Ferreira
Examination Committee
Chairperson: Prof. Luís Manuel Antunes Veiga
Supervisor: Prof. Sandra Pereira Gama
Members of the Committee: Prof. João António Madeiras Pereira
November 2017
Acknowledgments
I would like to thank my parents for their friendship, encouragement and caring over all these years,
for always being there for me through thick and thin and without whom this project would not be possible.
I would also like to thank my siblings, grandparents, aunts, uncles and cousins for their understanding
and support throughout all these years.
I would also like to acknowledge my dissertation supervisors Prof. Sandra Gama and Prof. Hugo Ferreira
for their constant technical insight, motivational support and overall enthusiastic sharing of knowledge
that has made this Thesis possible.
Last but not least, to all my friends and colleagues - particularly database participants: António
Amorim, Pedro Costa and José Fernandes - who helped me grow as a person and were always there
for me during the good and bad times in my life. Thank you.
To each and every one of you – Thank you.
Abstract
The objective of this project is the creation of a Physiological Computing Information Visualization
(InfoVis) interface through which users can interactively and visually decipher a person's intricate
emotions and complex mental state.
To this end, we cooperated closely with Neuroscience experts from Instituto de Biofísica e Engenharia
Biomédica (IBEB) throughout our work. Consequently, we assembled a Brain Computer Interface (BCI),
from a Bitalino Do It Yourself (DIY) hardware kit, for retrospective and real-time biosignal visualization
alike. The resulting wearable biosensor was successfully deployed in an extensive Database (DB)
acquisition process, consisting of activities with concrete, studied brain-pattern correlations.
The magnitude of this big-data InfoVis foundation, together with its at-times saturated physical signal,
justified the development of a data-processing pipeline. Indeed, our solution - entitled NeuroExplore -
converts and presents this large number of digitized, raw biosignal items as more recognizable visual idioms.
The system interaction was intentionally designed to augment users' discoveries and reasoning
regarding visually recognizable metrics, as well as subsequently derived trends, outliers and
other brain patterns. Strengthening this intent, we adopted an iterative development process in which,
recurrently, expert needs and user suggestions were taken as orienting guidelines.
This all culminated in a final version we deemed worthy of extensive functional and utility user testing
and expert validation. In the end, our project achieved both excellent usability scores and genuine
expert interest, with some experts already relying on our solution for their own research.
Keywords
InfoVis, Physiological Computing, Affective Computing, Brain-Computer Interface, biosignals, qEEG
Resumo
O objectivo deste projecto é a criação de uma interface InfoVis para Computação Fisiológica, através
da qual, interactivamente, os utilizadores consigam decifrar visualmente emoções intrínsecas e
complexos estados de mente.
Para este fim, trabalhámos consistentemente em proximidade com especialistas em Neurociência
do IBEB. Consequentemente, construímos uma Brain Computer Interface (BCI), usando um kit de
hardware DIY Bitalino, para a visualização retrospectiva ou em tempo real de biosinais. O biosensor
wearable resultante foi empregue, com sucesso, num extenso processo de aquisição de base de dados,
dividido em actividades concretas cujas correlações com padrões cerebrais foram estudadas.
A magnitude destes dados - fundação do nosso InfoVis - associada à possível saturação destes
justifica a implementação de pré-processamento. De facto, a nossa solução - NeuroExplore - filtra, deriva e
apresenta este enorme número de dados fisiológicos em idiomas visuais mais facilmente reconhecíveis.
A interacção com o sistema foi desenhada, intencionalmente, para aumentar as descobertas e o
raciocínio do utilizador - explicitamente - no reconhecimento visual de métricas, bem como de outros
padrões - subsequentemente - visualmente deriváveis. Fortalecendo esta intenção, adoptámos um
método iterativo de desenvolvimento onde, recorrentemente, equacionámos as necessidades de
especialistas bem como sugestões de utilizadores como linhas mestras.
Tudo isto culminou num protótipo considerado apto para uma extensa validação de funcionalidade
e utilidade por parte de utilizadores e especialistas. No final, alcançámos notas de usabilidade excelentes,
bem como interesse por parte de especialistas, alguns dos quais já utilizam a nossa solução para a sua
própria investigação.
Palavras Chave
Visualização de Informação, Computação Fisiológica, Computação Afectiva, Interface Cérebro-
Computador, Biosinais, Electroencefalografia Quantitativa
Contents
1 Introduction 1
1.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2.1 KickUP Sports Accelerator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.2 GrowUP Gaming recording sessions . . . . . . . . . . . . . . . . . . . . . . . . . . 6
1.2.3 Instituto de Apoio às Pequenas e Médias Empresas e à Inovação (IAPMEI)
Scholarship . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
1.3 Document Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2 Background 9
2.1 Sentiment Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Affective Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2.1 Affect Visualization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 Physiological Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Brain Computer Interfaces (BCI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3 Related Work 17
3.1 Emotion’s Visual Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Affective Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2.1 Text Analytics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.2.2 Video Content Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.2.3 Physiological Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4 Proposed Solution 37
4.1 Biosignal Input Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.1.1 Device Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
4.2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.2.1 Users . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.3 Development Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3.1 Biosignal Dataset Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.3.1.A International Affective Digitized Sounds (IADS) . . . . . . . . . . . . . . . 47
4.3.2 Data preprocessing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
4.3.2.A Saturation Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
4.3.2.B EEG Power Spectrum Calculus . . . . . . . . . . . . . . . . . . . . . . . 49
4.3.2.C Photoplethysmography (PPG) Peak-Finding Algorithm . . . . . . . . . . . 49
4.3.3 Derived Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.3.3.A Brain-wave extraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
4.3.3.B Heart-Rate metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.3.C Emotional-Valence metric . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.3.D Meditation metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
4.3.3.E Engagement metric . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.4 InfoVis Development . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.4.A Low Fidelity Prototype . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
4.3.4.B High Fidelity Prototype . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
4.3.4.C β Version InfoVis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
4.3.4.D Final InfoVis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
5 Evaluation 59
5.1 Context: European Investigators’ Night 2017 . . . . . . . . . . . . . . . . . . . . . . . . . 61
5.2 Usability Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.2.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.2.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
5.2.2.A Task Completion Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.2.2.B Errors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.2.2.C System Usability Scale (SUS) . . . . . . . . . . . . . . . . . . . . . . . . 68
5.3 Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.3.1 Methodology . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.3.2 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
5.3.2.A User 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71
5.3.2.B User 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
5.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
6 Conclusion and Future Work 75
6.1 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
A Code of Project 87
B NeuroTech Artifacts 89
B.1 NeuroTech Business Model Canvas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
B.2 Gaming Tournament Conversation Feedback . . . . . . . . . . . . . . . . . . . . . . . . . 89
List of Figures
1.1 The fields of study comprising Cognitive Science. Each line joining two disciplines represents
interdisciplinary synergy [1]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
2.1 Paul Ekman’s six categorical emotions: Happiness, Sadness, Fear, Anger, Surprise, Disgust [2] . . 12
2.2 Valence/Arousal Emotional Space [3]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.3 International 10–20 system. An internationally recognized method to describe and apply
the location of scalp electrodes in the context of an EEG test or experiment. A = Ear lobe,
C = central, F = frontal, FP = frontal-polar, O = occipital, P = parietal, T = temporal [4] . . 13
3.1 Thayer’s Arousal/Valence model [5] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Lee et al. newly proposed visualization [5] . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.3 Sanchez et al. proposal for emotion’s representation [6] . . . . . . . . . . . . . . . . . . . 20
3.4 Sanchez et al. newly proposed visualization [6] . . . . . . . . . . . . . . . . . . . . . . . . 20
3.5 Stacked Area Chart for each of Ekman emotions’ intensity over time [7] . . . . . . . . . . 21
3.6 Visualization (VIS) screenshot displaying the following idioms: Line chart, Pie chart,
Radar chart [8] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.7 Screenshots of four distinct text-extracted emotion VIS: a) avatars; b) emoticons; c)
Hooloovoo; d) Synemania. [9] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3.8 On the left: Sadness-over-age Area chart; On the right: keyword usage during Valentine’s
Day Line Chart [10] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.9 InfoVis Interface: input blog (left), document structure and recognized emotions (middle)
and the sentence’s emotion table (right) [11] . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.10 Chicago geo-tagged emotional Twitter messages VIS [3] . . . . . . . . . . . . . . . . . . . 24
3.11 USA aggregated emotion tweets VIS [3] . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
3.12 Sentiment before the Sep 2013 Australian Election InfoVis [12] . . . . . . . . . . . . . . . 25
3.13 Sentiment after the Sep 2013 Australian Election InfoVis [12] . . . . . . . . . . . . . . . . 25
3.14 Emotional Heat Map InfoVis [13] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.15 Emotional Saccade Map InfoVis [13] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
3.16 Emotient InfoVis displaying Emotional Engagement current value with an angular meter
idiom and evolution with a line chart (left) as well as Ekman’s emotions in a retrospective
stacked area chart and its overall values in pie charts (right) 1. . . . . . . . . . . . . . . . 27
3.17 EmoVu InfoVis usage of an area chart which represents each of Ekman’s emotions with a
distinctive color 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
3.18 Affdex InfoVis comparing a user’s line chart for Expressiveness with other users according
to age groups 4. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
3.19 Kairos InfoVis presenting a subject’s facial recording side by side with multiple retrospective
area charts, one for each of Ekman’s emotions 5. . . . . . . . . . . . . . . . . . . . . . . 29
3.20 Nviso’s InfoVis usage of an area chart to represent each of Ekman’s emotions detected
over time as a result of facial expressions’ detection (left) 6. . . . . . . . . . . . . . . . . . 29
3.21 EEG’s Power Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.22 Volumetric bull’s eye idiom [14]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
3.23 Gavrilescu [14] proposed Electroencephalography (EEG) InfoVis . . . . . . . . . . . . . . 30
3.24 Valence/Arousal space-visualization mapping [15]. . . . . . . . . . . . . . . . . . . . . . . 31
3.25 Emotional heat map; Valence/Arousal bar-chart [15]. . . . . . . . . . . . . . . . . . . . . . 31
3.26 Screenshot of AffectAura’s InfoVis [16] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.27 Museum’s top view heat map [17]. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
3.28 Emotional Intensity bar charts and scale under painting [17]. . . . . . . . . . . . . . . . . 32
3.29 Screenshot examples of iMotions’ InfoVis for distinct emotions 7 . . . . . . . . . . . . . . 33
3.30 Screenshot of Neurosky’s MindWave InfoVis 8 . . . . . . . . . . . . . . . . . . . . . . . . 34
3.31 Screenshot of Bitalino’s OpenSignals InfoVis, in which we can observe a line chart of each
Bitalino channel’s raw data over time [18]. . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.1 Bitalino board with Electromyography (EMG) [18] . . . . . . . . . . . . . . . . . . . . . . . 40
4.2 BrainBIT prototype Outside and Inside headband view detailing: (1) The Power Block; (2)
Battery; (3) PPG; (4) Electrodes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.3 BrainBIT BCI final prototype used in this project. . . . . . . . . . . . . . . . . . . . . . . . 41
4.4 System architecture. Different arrow colors and directions represent different system inter-
actions: In Black (a), we see the user’s (1) electrical and chemical biosignals (EEG and
PPG) and the physical accelerometer input. In Red (b), BrainBIT (2) listens for requests -
sent over Bluetooth by ClientBIT (3) - and answers with a raw, discrete-domain, digitized
version of the user’s biosignals. This version is then sent to ServerBIT (4) for post-processing.
Indeed, in Green (c), we see the Python Server (4) answer the request it received (in
identical fashion) from ClientBIT (3) - a web-based, Javascript/HTML InfoVis interface -
with JSON-formatted, preprocessed data. In Yellow, the input means of exploring our
InfoVis - typically a computer mouse - is depicted. Finally, a large blue, one-directional
arrow represents the optical NeuroExplore (5) InfoVis output, through which the user can
visually analyze their own, previously collected, or real-time, BrainBIT Psycho-Physiological
data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
4.5 Recording Comma Separated Value (CSV) file screenshot snip depicting a six-line-long
selection, corresponding to one second (a), with one thousand CSV values per line (b). . . 46
4.6 Raw biosignals (left) versus Preprocessed (right) database sizes, in GigaByte (GB). . . . 47
4.7 Initial NeuroExplore conceptual paper prototype artifact, dating February 2017 . . . . . . 53
4.8 Initial NeuroExplore software prototype displaying live data acquisition, April 4th, 2017. . . 54
4.9 NeuroExplore retrospective mode beta-version InfoVis, featuring the VIS of post-processed,
derived physiological metrics’ charts. On the left we can see the maximized menu. On
top, its show/hide interface next to the file selection interface. On the bottom, the previous
recording time slider with respective play/pause buttons is visible. The plotted idioms’
y-axis log-scale usage is also visible. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
4.10 NeuroExplore retrospective mode final-version InfoVis, introducing a broader VIS space
due to input file-selection menu integration and mouse-over interaction. . . . . . . . . . . 56
5.1 IBEB’s Physiological Computing table at European Investigators’ Night 2017. On the
bottom-right we can see a participant filling the SUS . . . . . . . . . . . . . . . . . . . . . 61
5.2 Box Plot for user task completion times. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.3 Box Plot for number of user errors in each Task completion. . . . . . . . . . . . . . . . . . 67
5.4 Our project’s final system usability score presented in Bangor’s Adjective Rating Scale [19] 69
B.1 NeuroTech Business Model Canvas (BMC) . . . . . . . . . . . . . . . . . . . . . . . . . . 91
List of Tables
2.1 Brain-wave frequency bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
4.1 Bitalino’s Technical Specifications [20] . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
4.2 Depiction of our Bitalino Channel-Sensor configuration . . . . . . . . . . . . . . . . . . . . 41
4.3 Depiction of each session’s activities, besides meditation . . . . . . . . . . . . . . . . . . 45
5.1 Planned User Tasks Description . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
5.2 Final User Tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
5.3 Task Completion Times . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66
5.4 User Errors Detailed . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67
5.5 SUS scores from each user . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
List of Algorithms
4.1 EEG Raw Signal Saturation Detection for each second of input data . . . . . . . . . . . . 48
Acronyms
AC Alternating Current
AI Artificial Intelligence
BMC Business Model Canvas
BCI Brain Computer Interface
BMI Brain Machine Interface
bpm Beats per minute
CIO Chief Information Officer
CNS Central Nervous System
CSEA Center for the Study of Emotion and Attention
CSV Comma Separated Value
D3.js Data-Driven Documents
DB Database
DIY Do It Yourself
EDA Electrodermal Activity
EEG Electroencephalography
ECoG Electrocorticography
ECG Electrocardiography
EMG Electromyography
FP1 Left Frontal Polar
FP2 Right Frontal Polar
FPS First Person Shooter
FFT Fast Fourier Transform
fMRI Functional Magnetic Resonance Imaging
fNIRS Functional Near-infrared Spectroscopy
GB GigaByte
GUI Graphical User Interface
GSR Galvanic Skin Response
HCI Human Computer Interfaces
IBEB Instituto de Biofísica e Engenharia Biomédica
IADS International Affective Digitized Sounds
IAPMEI Instituto de Apoio às Pequenas e Médias Empresas e à Inovação
IQ Intelligence Quotient
InfoVis Information Visualization
ICE Intra-cortical Electrodes
MCU Micro-Controller Unit
MEG Magnetoencephalography
MIT Massachusetts Institute of Technology
PPG Photoplethysmography
SUS System Usability Scale
SVG Scalable Vector Graphics
UBA User Behavior Analytics
UI User Interface
VIS Visualization
qEEG Quantitative Electroencephalogram
1 Introduction
Contents
1.1 Objectives . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.2 Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
1.3 Document Structure . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
Despite the advent of Computer Science, Humanity has historically relied on a more organic form of
processing power: our brain! This computer’s intricate psycho-physiological inner workings seem ever
elusive. Today, the quest to study and better understand our mind and its processes is the purpose [1]
of the interdisciplinary scientific field titled Cognitive Science (Fig. 1.1).
Figure 1.1: The fields of study comprising Cognitive Science. Each line joining two disciplines represents interdisciplinary synergy [1].
In this project, we are predominantly interested in two of its sub-disciplines: Computer Science and
Neuroscience - the scientific study of the nervous system. Specifically, we are concerned with:
1. The relation between measurable user Physiological data, typically from the Central Nervous
System (CNS), and Emotions. Neuroscientists have documented and exemplified such relations,
typically via the usage of an EEG and its spectral analysis - Quantitative Electroencephalogram (qEEG).
2. How to interactively, intuitively and graphically present an InfoVis of the subsequent, derived
Psycho-Physiological data.
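To make the qEEG notion in point 1 concrete, the spectral analysis it relies on can be sketched in a few lines of Python. This is an illustrative sketch only, not the thesis's actual pipeline: the 1000 Hz sampling rate, the synthetic signal and the exact band limits are assumptions chosen for the example.

```python
import numpy as np

FS = 1000  # sampling rate in Hz (assumed for this example)

# Synthetic 10-second "EEG": a 10 Hz alpha-like oscillation plus noise.
rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)

# Power spectrum via the FFT -- the core computation behind qEEG.
freqs = np.fft.rfftfreq(signal.size, d=1 / FS)
power = np.abs(np.fft.rfft(signal)) ** 2

# Mean power per conventional brain-wave band.
bands = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13), "beta": (13, 30)}
band_power = {name: power[(freqs >= lo) & (freqs < hi)].mean()
              for name, (lo, hi) in bands.items()}

dominant = max(band_power, key=band_power.get)
print(dominant)  # → alpha, since the synthetic signal oscillates at 10 Hz
```

A real qEEG pipeline would first filter artifacts and window the signal (e.g. Welch's method); the point here is only the raw-signal-to-band-power mapping.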
An emotion is a mental state and an affective reaction towards an event, based on subjective experience [21].
Emotions are a quintessential part of human nature. Crucial to communication between
humans, they also play a fundamental role in rational and intelligent behavior [22]. Their omnipresent
nature influences and determines many aspects of our daily lives. In many situations, emotional
intelligence is more important than Intelligence Quotient (IQ) for successful interaction [23]. There is also
significant evidence that rational learning in humans is dependent on emotions [24]. Thus, the study of
emotions and their recognition is well justified, as evidenced by psychology research dating back to the 19th
century [25].
We live in an age where collecting personal user data is increasingly valued. The Web is evolving
toward an era where virtually represented communities will define future products and services [26].
Additionally, awareness of and reflection on emotional states has proven particularly important in
collaborative contexts, as “group self-awareness of emotional states, strengths and weaknesses, modes
of interaction, and task processes is a critical part of group emotional intelligence that facilitates group
efficacy” [27]. Furthermore, awareness of emotional states inside a group has been correlated with
improved communication, decision support and mitigation of group conflicts, while offering an additional
level of rich contextual information and also supporting collaborative learning. At the same time,
lower values of emotional intelligence and negative emotion contagion inside groups have been linked
to declining intra-group communication and collaboration, suggesting the need for emotion awareness
solutions [15].
Finally, the study of people’s affective and emotional reaction to advertisements is booming, increasingly
trending away from complex, expensive laboratory user studies and towards simple, user-friendly
software applications [28].
In sum, user data is relevant not exclusively for commercial purposes: it can improve corporate
communication and collaboration [27]; personalize Human Computer Interfaces (HCI) interactions; and
improve government intelligence applications [23] - able to monitor increases in hostile communications
or model cyber-issue diffusion.
As seen in these examples, the case for acquisition and analysis of user Psycho-Physiological data
in an attempt to determine one’s mental and emotional information spans several areas. This is
understandable due to the contemporary high value of user information for Big Data User Behavior
Analytics (UBA) [29]. The study of these phenomena can be achieved through different approaches,
most notably:
Natural Language Processing The computational analysis of texts and linguistics [3,7–12,30].
Facial Recognition Systems Video analysis of users’ facial expressions [13,15,16,31,32].
Physiological Computing Usage of physiological data from humans as input to a computer system
[14–18].
Some solutions draw from more than one of these approaches, integrating multiple data sources in
a process known as Data Fusion. This multi-modal, multi-sensor approach is usually combined with
machine learning Artificial Intelligence (AI) techniques, intended to increase the accuracy of the
measurements by training a program to better identify an emotion according to its past experiences.
Unfortunately, the consequences of these efficacy increases are: the scattering of different data across
distinct devices; the overall tremendous amount of data to take into account; and the difficulty of
cognitively interpreting the data, typically abstract by itself, specifically in pattern-recognition cases.
Additionally, noisy data filtering also needs to be taken into account.
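As a hedged illustration of this noisy-data problem (saturation filtering proper is treated in Chapter 4), one common approach is to flag any second of raw samples in which the signal pegs against the analog-to-digital converter's limits. The 10-bit range and the 5% threshold below are assumptions for the sketch, not the thesis's exact algorithm.

```python
import numpy as np

ADC_MAX = 1023  # ceiling of an assumed 10-bit ADC channel

def is_saturated(second_of_samples, pegged_fraction=0.05):
    """Flag one second of raw samples as saturated when more than
    `pegged_fraction` of them sit at the ADC floor or ceiling."""
    samples = np.asarray(second_of_samples)
    pegged = np.count_nonzero((samples <= 0) | (samples >= ADC_MAX))
    return pegged / samples.size > pegged_fraction

# A clean mid-range second versus one clipped at the ceiling for 0.2 s.
clean = np.full(1000, 512)
clipped = np.concatenate([np.full(200, ADC_MAX), np.full(800, 512)])
print(is_saturated(clean), is_saturated(clipped))  # → False True
```

Saturated seconds would then be dropped or interpolated before any spectral analysis, since clipped samples distort the power spectrum.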
In order to overcome this heavy cognitive load, InfoVis practices can be implemented. InfoVis is the
field of visually and interactively representing otherwise abstract data in order to amplify cognition [33]. It
enables compact graphical presentations for manipulating large numbers of items, possibly filtered from
larger datasets. Additionally, it allows users to interactively discover, decide and explain different and
previously concealed patterns such as: trends, clusters, gaps, outliers and more.
1.1 Objectives
The goal of this work is the development of an interactive visualization so that users better
understand one’s mental state and emotions via Psycho-Physiological input data.
Throughout this project, cooperation and mentorship from IBEB faculty members, actively participating
as Neuroscience experts, was paramount to the understanding of Physiological Computing concepts,
necessities and applications. In other words, IBEB contributions were indispensable in our journey to
visually present the uncensored and inherently complex insight of our mental inner workings.
Thus, an InfoVis-assisted analysis, search and comparison of current and previous results could
potentially identify new trends, outliers and features. In order to achieve our goal, we must:
1. Research and choose a methodology to acquire a Physiological data collection for further analysis.
2. Research and develop algorithms to derive Psycho-Physiological information from the original raw
data.
3. Develop an interactive visual interface to examine such data through the display of:
(a) Real-Time Information
(b) Retrospective Information
4. Validate our work through both user and expert feedback and its statistical analysis.
1.2 Contributions
While the events described in this document took place, IBEB colleagues showed interest in utilizing
our project. Specifically, our hardware - BrainBIT - and software - NeuroExplore - have been used
in hands-on demos and pitches. This was intended to exemplify an early version of a product of a
hypothetical start-up company project - entitled NeuroTech - which won a KickUP Sports Accelerator
Program1. This event took place at Estádio da Luz offices and stages2.
1 http://kickupsports.eu/accelerator/
2 http://portugalstartups.com/2017/02/benfica-partners-kickup-launch-sports-accelerator/
1.2.1 KickUP Sports Accelerator
In exchange for an equity stake of 8%, KICKUP Sports Accelerator is an international acceleration
program that aims to scout, accelerate and invest in early-stage sports business startups, focused on
sports performance, lifestyle activities, entertainment and others. This information and further details
are accessible online3.
In practice, the not-so-hypothetical business case for a BCI-assisted Mindfulness Coaching company
for professional sports was put to the test in a twelve-week-long incubation and mentoring event,
starting on the 17th of April, 2017. This took form in several distinct processes, such as:
Business Model Debating Several workshops and conferences presenting success cases and
discussing topics such as brand recognition took place. From these resulted a BMC detailing
Customer Segments and Relationships, key Partners, Activities and Resources, as well as Value
Propositions, Cost Structure and Revenue Streams.
Client Meetings On May third, we met with the person responsible for an e-sports club association4.
We prepared and successfully executed a live demo for our guest. Upon seeing its functionality,
and its potential as a performance training device, the client showed interest in partnering with
the project.
Stakeholders Networking Finally, the event allowed us to collect valuable insight by meeting people
with experience actually manufacturing a Physiological Computing device as well as ergonomics
and product design experts.
In the end, the whole event was very productive. Much to our delight, the NeuroExplore live demo
ran flawlessly and led to the scheduling of a gamer tournament recording session with one of
our interested clients - using our system solution.
1.2.2 GrowUP Gaming recording sessions
On June 18th, a meeting followed by live recordings took place in a video-game tournament lounge.
We interviewed participants in an attempt to probe product adoption, as seen in Appendix B, section
B.2. Then, using NeuroExplore, we captured the corresponding Psycho-Physiological data.
Our test subject was a 23-year-old gamer participating in a fantasy card-game tournament. We
helped him correctly place his BrainBIT and, keeping it on, he progressed through three matches and
their three respective recordings. In an unfortunate turn, our tester lost all matches. Regardless, for
future reference, we saved his raw biosignal data and shared it with fellow NeuroTech project colleagues
and fellow researchers alike5.
3 http://kickupsports.eu/
4 http://growupesports.com/about/
5 https://drive.google.com/drive/folders/0B3AeqOjiwGU3QmhhRm9Ra1g3R2c
1.2.3 IAPMEI Scholarship
At the end of June, we were contacted to be part of another IBEB Physiological Computing project:
EmotAI. The project applied for funding via a Competitiveness and Innovation agency - IAPMEI. The
program is entitled the StartUp Voucher scholarship; more details can be found on its website6.
Due to unrelated events, one of the two scholarship applicants could not accept the challenge. As the
project was in its infancy, it was proposed we filled in the vacant position. This entailed documentation
being gathered and sent. However, upon receiving mixed-messages, it was confirmed to us that, for
regulatory reasons, the scholarship recipient could not be changed at this phase.
As such, for now, our participation in this project is of a voluntary nature. In this fashion, we have
contributed to the project via the development of NeuroExplore, as well as by recording and submitting
pitch videos demonstrating the technology 7.
1.3 Document Structure
This document is structured as follows:
Chapter 2 presents a number of research fields, within this project's context, whose achievements are
of fundamental interest to our project.
Chapter 3 extensively illustrates distinct industry and academic Psycho-Physiological VIS examples.
Every example encompasses one or more categories defined in the Background (Chapter 2).
Chapter 4 meticulously reports all the project developments. Specifically, we present our wearable BCI
prototype - BrainBIT; we detail our project's architecture; finally, we recount each phase of the
iterative development process - from data acquisition to the final version.
Chapter 5 presents the project's evaluation methodology, followed by scrutiny of its results. Explicitly,
these assessments consisted of usability and functionality user tests and case studies.
Chapter 6 concludes our chronicle by retrospectively summing up this project's results. Additionally,
future work, usages and possibilities for NeuroExplore are examined - including a simultaneous
start-up company project of which we are members.
6 https://www.iapmei.pt/PRODUTOS-E-SERVICOS/Empreendedorismo-Inovacao/Empreendedorismo/Startup-Voucher.aspx
7 https://photos.app.goo.gl/fEt2NH2U2U4SIVls1
2 Background
Contents
2.1 Sentiment Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.2 Affective Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
2.3 Physiological Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2.4 Brain Computer Interfaces (BCI) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
In 1872 [34], Charles Darwin wrote: “facial expressions of emotions are universal, not learned differ-
ently in each culture”. There have been arguments both in favor and against this view ever since [35,36].
In recent years, different technologies have been developed whose purpose is the capture and interpre-
tation of the various emotions users might experience. This is accomplished using facial expression
recognition - a clear nod towards Darwin - but also written text, speech and intonation, electro-dermal
response, brain-wave signals and other physiological signals [15].
In this section we introduce related IT fields of study - within this context - whose comprehension is of
underlying interest for our project.
2.1 Sentiment Analysis
”An important part of our information-gathering behavior has always been to find out what other
people think ” [37].
Sentiment Analysis, also known as opinion mining, is the field of study that analyzes people's opinions,
sentiments, evaluations, attitudes, and emotions. It has evolved from basic polarity classification
based on thumbs up/down to natural language processing, text analysis and computational linguistics,
in which keyword spotting is used to retrieve term frequencies and their presence [38,39].
Meanwhile, a broader approach entitled Multimodal Sentiment Analysis also takes into account audio
content, typically converting it to text first, as well as audiovisual contents. Numerous new technologies
have potential usages in opinion mining, such as: facial expression, body movement, or a video blogger’s
choice of music or color filters [40].
2.2 Affective Computing
Traditionally, “affect” was seldom linked to lifeless machines and was normally studied by psycholo-
gists. Currently, Affective Computing is a branch of Computer Science concerned with the theory and
construction of machines which can recognize, interpret and process human emotional states - the focus
of our work - as well as simulate them [41]. Additionally, it is an important topic for harmonious human-
computer interaction, by increasing the quality of human-computer communication and improving the
intelligence of the computer [42].
It was only in recent years that affect features began to be captured and processed by computers. An
"affect model" is built from information acquired by various sensors - typically cameras - yielding a
personalized computing system capable of perceiving and interpreting human feelings and of giving us
intelligent, sensitive and friendly responses [42]. Common examples include applications that react to
emotions extracted from facial expressions or speech interactions, as well as systems that change the
style of a responsive artistic visualization based on the machine's perceived emotional state of the
viewer.
This contrasts with Sentiment Analysis’ approach of trying to find specific emotional hints or key-
words. Affective computing focuses on a broader set of emotions or the detection and estimation of
continuous emotion primitives [38].
2.2.1 Affect Visualization
The majority of contemporary works use Ekman's discrete model to represent emotion [43]. The
model uses six basic emotion labels considered universal across different cultures: anger, disgust,
fear, happiness, sadness and surprise (see Figure 2.1). There is an ongoing controversy in the field of
psychology about whether discrete labels can adequately categorize emotion [44].
Figure 2.1: Paul Ekman's 6 categorical emotions: Happiness, Sadness, Fear, Anger, Surprise, Disgust [2].
Figure 2.2: Valence/Arousal Emotional Space [3].
As an alternative (see Figure 2.2), dimensional models of emotions have also been proposed. In
these, an emotion is expressed by a normalized number scale for each dimension, typically two: Valence
(pleasantness of emotion) and Arousal (intensity of the emotion). Using dimensional models to represent
emotions is less “black and white” than using coarse emotion labels as it allows blends of different
emotional states, the natural inclusion of the neutral emotional state (zeroes in all dimensions) and any
value between neutral and full emotion [3].
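As an illustration of this point, a dimensional emotional state can be modeled as a simple two-component vector. The following sketch is our own (the class, method and example values are purely illustrative, not taken from [3]); it shows how blends and intermediate intensities fall out of the representation naturally:

```python
from dataclasses import dataclass

@dataclass
class Emotion:
    """An emotion as a point in the 2D valence/arousal space, each in [-1, 1]."""
    valence: float  # pleasantness of the emotion
    arousal: float  # intensity of the emotion

    def blend(self, other: "Emotion", weight: float = 0.5) -> "Emotion":
        """Blend two emotional states - something discrete labels cannot express."""
        return Emotion(
            valence=self.valence * (1 - weight) + other.valence * weight,
            arousal=self.arousal * (1 - weight) + other.arousal * weight,
        )

NEUTRAL = Emotion(0.0, 0.0)   # the neutral state: zeroes in all dimensions
excited = Emotion(0.8, 0.9)   # high valence, high arousal (illustrative values)
calm = Emotion(0.6, -0.7)     # pleasant but low intensity (illustrative values)

# Any value between neutral and full emotion is representable:
print(NEUTRAL.blend(excited))
```

Note how the half-way blend of neutral and a full emotion yields exactly the intermediate intensity that a coarse label set cannot capture.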
2.3 Physiological Computing
”The human body is chemical, electrical, mechanical, thermal and magnetic in nature” [45]. Physio-
logical sensors can be used to detect body information such as heart rate and brain-wave signals that
reflect changes in a user’s mood and/or environment.
While EEG recording devices are much less expensive and more portable than other brain-activity
recording techniques such as Magnetoencephalography (MEG), Functional Magnetic Resonance
Imaging (fMRI) and Functional Near-infrared Spectroscopy (fNIRS), research-grade systems are still
much too expensive for daily use from a consumer point of view, especially for disabled people. This is
why plenty of commercial EEG devices are now available, such as Neurosky, Mindflex and Emotiv
Epoc [6]. According to [6], the best low-cost EEG device in terms of usability is the Emotiv Epoc headset.
Physiological computing represents a mode of human–computer interaction where the computer
monitors, analyzes and responds to the user’s psycho-physiological activity in real-time [46]. The mea-
surement of physiological signals can take form in a multitude of distinct techniques. These techniques
usually fall under one of two categories: 1) nervous system activity analysis; 2) cardiovascular system
activity analysis [47]. The former encompasses methods such as Galvanic Skin Response (GSR),
EEG, Electrocorticography (ECoG), MEG, Intra-cortical Electrodes (ICE), fMRI and fNIRS; the latter,
typically Electrocardiography (ECG) or Photoplethysmography (PPG) [48]. Additionally, these can be
grouped in two probing categories: invasive and non-invasive. Invasive approaches such as ECoG and
ICE need electrodes to be surgically implanted in the cerebral cortex; in contrast, non-invasive
techniques require no surgery [49].
Most notably, EEG detects voltage fluctuations resulting from ionic current flows within the neurons of
the brain. These measurements are taken at different scalp locations where electrodes are placed
(Fig. 2.3). Despite poor spatial resolution, EEG's relative affordability and small, non-invasive sensors,
paired with very good temporal resolution, establish it as one of the most widely used techniques in
contemporary BCI [50]. Indeed, EEG signals and their spectrum have been previously documented in
what is known as qEEG: the field concerned with the numerical analysis of this data and its associated
behavioral correlations [51,52].
Figure 2.3: International 10–20 system. An internationally recognized method to describe and apply the location of scalp electrodes in the context of an EEG test or experiment. A = ear lobe, C = central, F = frontal, Fp = frontal-polar, O = occipital, P = parietal, T = temporal [4]
2.4 Brain Computer Interfaces (BCI)
Electronics’ ever improving processing power and decreasing hardware sensor sizes [53] allow for
contemporary wearable computers to achieve what was previously only possible with expensive scientific
equipment in a medical laboratory. Wearable body monitoring is a story about data and data analysis,
as much as it is a story about HCI form factors and size reduction [54].
A BCI is a complex application of signal processing and neuroscience. The system uses mental
activity, involuntarily produced by the user, to control a computer or an embedded system typically
via EEG signals which allow communication or interaction with the surrounding environment [55]. BCI
technologies can be used to decipher the user's state of mind (arousal, emotional valence, attention,
meditation, among others) [50].
Alternatively known as Brain Machine Interface (BMI), these systems also enable humans to inter-
act with their surroundings via control signals generated from EEG activity, without the intervention of
peripheral nerves and muscles [55].
These systems permit encephalic activity alone to control external devices such as computers,
speech synthesizers, assistive appliances, and neural prostheses. This can be particularly relevant to
severely disabled people who are totally paralyzed or ‘locked in’ by neurological/neuromuscular disor-
ders, such as amyotrophic lateral sclerosis, brain stem stroke, or spinal cord injury. Such systems would
improve their quality of life and simultaneously reduce the cost of intensive care [55].
Typically, a BCI is an artificial intelligence based system that can recognize a certain set of patterns
in brain EEG signals via a number of consecutive phases, namely: (1) signal acquisition for brain signals
capturing; (2) preprocessing or signal enhancement for preparing the signals in a suitable form for further
processing, typically resulting in a power spectrum; (3) feature extraction for identifying discriminative
information in the brain signals that have been recorded, generally an analysis of the different frequency
bands in the power spectrum known as Brain Waves (See Table 2.1); (4) classification for organizing the
signals based on the extracted feature vectors; and finally (5) the control interface phase for translating
the classified signals into meaningful commands for any connected device, such as a wheelchair or a
computer [56].
Table 2.1: Brain-wave frequency bands

Brain Wave      Frequency (Hz)
Delta (δ)       1-4
Theta (θ)       4-8
Alpha (α)       8-12
Low Beta (β)    12-15
Mid Beta (β)    15-20
High Beta (β)   20-30
Gamma (γ)       30-100
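The five phases above can be sketched in miniature. The following is our own simplified sketch, not our actual acquisition pipeline: the synthetic signal stands in for phase (1), and a naive DFT stands in for the preprocessing and feature-extraction phases (2)-(3), with a trivial strongest-band rule as the classifier (4):

```python
import math

# Frequency bands from Table 2.1 (Hz)
BANDS = {
    "delta": (1, 4), "theta": (4, 8), "alpha": (8, 12),
    "low beta": (12, 15), "mid beta": (15, 20),
    "high beta": (20, 30), "gamma": (30, 100),
}

def band_power(signal, fs, lo, hi):
    """Phase 3 (feature extraction): power of one band via a naive DFT."""
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq < hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
            power += (re * re + im * im) / n
    return power

def dominant_band(signal, fs):
    """Phases 3-4: extract all band powers, then classify by the strongest band."""
    powers = {name: band_power(signal, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
    return max(powers, key=powers.get)

# Phase 1 stand-in: one second of a synthetic 128 Hz "EEG" dominated by a
# 10 Hz alpha rhythm plus a weaker 20 Hz beta component.
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 20 * t / fs)
       for t in range(fs)]

print(dominant_band(sig, fs))  # alpha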
Concerning brain-wave activity - which is of particular interest to our project - neuroscientists have
found correlations between brain waves and mental-state metrics such as Emotional Valence, Attention
and Meditation. For example, Trimble et al. [?] state that ”there is a clear asymmetry in activity between the
left and right hemispheres of the brain” and that “alpha frequencies in the left prefrontal cortex (PFC)
indicate positive emotion and in the right PFC indicates negative emotion”. Other studies noticed similar
correlations [28,57–59].
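The asymmetry described above is commonly quantified as a log-ratio of the alpha-band power recorded over the two prefrontal sites. The following minimal sketch is our own; the sign convention below follows the quoted claim (left-PFC alpha indicating positive emotion), though conventions vary across studies, and the band-power values are purely illustrative:

```python
import math

def alpha_asymmetry(left_alpha_power: float, right_alpha_power: float) -> float:
    """Log-ratio of alpha-band power over the left vs right prefrontal cortex.

    Under the interpretation quoted above, a positive value (relatively more
    alpha over the left PFC) would indicate positive emotion and a negative
    value negative emotion. Sign conventions differ between studies.
    """
    return math.log(left_alpha_power) - math.log(right_alpha_power)

# Illustrative band-power values only (arbitrary units):
print(alpha_asymmetry(left_alpha_power=6.0, right_alpha_power=2.0) > 0)  # True
```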
In recent times, the commercial computer industry has come up with a tantalizing solution to the cost
and complexity of laboratory EEG: wireless EEG systems (such as Emotiv EPOC, Imec's wireless EEG headset, NeuroFocus
MyndTM, Neurokeeper’s headset, NeuroSky Mindwave, Plux’s Bitalino). These BCI examples require
no specific medical training or laboratory equipment, as they are intended for use by the general public.
Most of these solutions are “gaming EEG systems”, analyzing EEG activity to control the movement of
characters or objects in games via headsets that comprise a small array of electrode sensors. Other
advantages are listed below:
1. They connect wirelessly to software running on a laptop (so no need for an expensive laboratory);
2. They require little adjustment of electrodes (so no need for long electrode placement procedures);
3. They either use small cotton pads soaked in saline solution to connect each electrode to the
scalp (so no need for messy gel, and hence no need for head washing), or nothing at all.
Finally, Chapter 3 presents multiple BCI studies and industry solutions, often comprising several
areas of research mentioned in this chapter. Due to the abstract nature of their raw data, these often
rely on an InfoVis for Psycho-Physiological data presentation. Building upon this background
overview, it is our intention to present concrete, practical, state-of-the-art examples.
3 Related Work
Contents
3.1 Emotion’s Visual Representation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.2 Affective Computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.3 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
As previously stated in Chapter 2, there is broad interest in users' Psycho-Physiological data.
Regardless of the acquisition technique, the resulting raw data is by itself uninterpretable; it is
therefore always necessary to transform these values into more intuitive visual representations. As
such, studying the best way to visualize and interact with our data can help improve existing offerings.
We have documented our findings and grouped them in the following fashion: a) Emotion's Visual
Representation; and b) Affective Computing, with the following sub-sections: (i) Text Analytics; (ii) Video
Content Analysis; (iii) Physiological Computing.
3.1 Emotion’s Visual Representation
Figure 3.1: Thayer's Arousal/Valence model [5]
Figure 3.2: Lee et al. newly proposed visualization [5]
Lee et al. [5] presented a color study for emotion visualization. Explicitly, the research attempted to
measure emotions using different colors, proposing a correlation between color and emotion and
ultimately allowing one to “identify user's emotion with different colors”.
The researchers used Thayer’s arousal-valence polar coordinates model (Fig. 3.1) for emotional cat-
egory identification. The RGB accumulative color system in which red, green and blue light are added
together was used. Additionally, each of its components was subject to machine learning techniques
comparing results with other samples in order to improve overall accuracy. Working with 240 samples,
participants were asked to associate a color with each of the following four coordinates: (1,0), (1,1),
(-1,0), (-1,-1), corresponding to Pleased, Aroused, Miserable and Calm, respectively. A consequent
color gradient between the polar coordinates of each of these four points, ranging from the color of one
point to the other, can be seen in the final visualization (Fig. 3.2).
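The gradient between anchor points can be sketched as simple linear RGB interpolation. The sketch below is our own, and the anchor colors are hypothetical placeholders: the study's actual colors come from its 240 participant samples, not from us:

```python
def lerp_color(c0, c1, t):
    """Linearly interpolate between two RGB colors; t in [0, 1]."""
    return tuple(round(a + (b - a) * t) for a, b in zip(c0, c1))

# Hypothetical anchor colors for two of the four points:
pleased = (255, 200, 0)   # placeholder color for the (1, 0) "Pleased" point
aroused = (255, 0, 0)     # placeholder color for the (1, 1) "Aroused" point

# A five-step gradient from one anchor toward the other:
print([lerp_color(pleased, aroused, t / 4) for t in range(5)])
```

Applying the same interpolation along the polar coordinates between all four anchors would fill the whole valence/arousal disk with a continuous color field, as in the study's final visualization.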
In the end, the authors validated their results using volunteers and achieved good confidence intervals.
In spite of this, subjective and cultural differences were still identified as influential in certain
emotion-color correlations, specifically in the Miserable-Black instance, where most disagreements were
identified.
Sanchez et al. [6] propose an inventive new way to visually represent emotions. Essentially, the
[-1,1] 3D vector-space representation of emotion is used, matching each axis to Affinity, Pleasure and
Arousal. This representation was identified as a common, useful way to measure emotion. However, its
mathematical visual representation was considered rational and emotionally unappealing. For that reason, a visual
correspondence was proposed between Pleasure and Shape, Arousal and Brightness and Affinity and
Size of a geometric figure (Figures 3.3 and 3.4). In this geometric figure, the Japanese representations
of displeasure – X – and pleasure – O – were used as guidelines for the shape's appearance. On the
other hand, brightness increases according to arousal, and size increases correspond to affinity.
In the end, a user study was successfully conducted to validate the results. Participants were presented
with seven geometric figures and had to match them with each of Ekman's six emotions plus one: Neutral.
A matching accuracy of 83.93% was identified for non-experts, and 90.46% for experts.
Figure 3.3: Sanchez et al. proposal for emotion's representation [6]
Figure 3.4: Sanchez et al. newly proposed visualization [6]
3.2 Affective Computing
In this section we present state of the art examples of Affective Computing InfoVis.
3.2.1 Text Analytics
Torkildson et al. [7] used machine learning techniques for sentiment analysis, emotion classification
and corresponding visualization purposes. Specifically, emotional classifiers are defined and a
collaborative visualization is proposed after analyzing Twitter post data from before, during and after the
2010 Gulf Oil Spill Crisis.
Categorical Attributes of Emotion were defined based on Ekman’s six basic emotions: joy, anger,
fear, sadness, surprise and disgust. Sentiment attributes - negative, neutral and positive - were
designated and intended to be mutually exclusive. Afterward, the purpose of the intended visualization was
set: “support the analysis of the emotional impact of events”. Finally, the authors decided to graphically
present the information using a stacked area chart (Fig. 3.5). This idiom allows “easy comparison of val-
ues, facilitated by the colored bands. The top 25% of values, the time instances with the highest emotion
frequency, have the highest color saturation. The coloring makes these peaks easily distinguishable.”
Ultimately, this paper provides an interactive VIS for emotional analysis which allows tasks such
as “What emotional response did the public manifest as a consequence of the POTUS speech?” to
be accomplished.
Figure 3.5: Stacked area chart for each of Ekman's emotions' intensity over time [7]
Scharl et al. [8] achieved emotion detection through machine learning techniques using social media
platforms for data collection. Furthermore, several consequent visualizations are proposed. The ob-
ject of study is a popular television work of fiction. Using SenticNet 3 [60], sentiment annotations and
affective knowledge are defined according to the following attributes: pleasantness, aptitude, attention,
and sensitivity. Additionally, these attributes can reflect sentiment information by means of color cod-
ing, ranging from red (negative) to gray (neutral) and green (positive). Sentiment is shown with variable
saturation, depending on the degree of polarity.
The resulting VIS resorted to several idioms, such as: line charts, for effective comparison of the
average sentiment of each character as well as the level of disagreement (derived data corresponding to
the standard deviation of sentiment, which reveals polarization); pie charts, for comparison of overall
positive/neutral/negative results; and radar charts, “to depict affective knowledge along emotional
categories”, by profiling across several emotional categories, as seen in Figure 3.6.
Figure 3.6: VIS screenshot displaying the following idioms: Line chart, Pie chart, Radar chart [8]
Figure 3.7: Screenshots of four distinct text-extracted emotion VIS: a) avatars; b) emoticons; c) Hooloovoo; d) Synemania [9]
Krcadinac et al. [9] intended to “introduce a novel textual emotion visualization” named Synemania.
This VIS relies on the Synesketch software library to parse text and extract emotional tags from it. Its
purpose is to facilitate emotion communication, emotion evocation and overall user enjoyment. In order
to achieve this, Ekman's emotion classification is used.
The approach was evaluated through an exhaustive user study in which chat-conversation visualizations
are assessed using four different ways to visualize emotion: avatars; emoticons; and two new VIS,
Hooloovoo and Synemania. The former focuses on color as a representation of emotion, and displays a matrix
of squares colored accordingly. The latter goes further and uses both color and abstract animation
to evoke emotions. See Figure 3.7 for a visual representation of these four distinct alternatives. In
the end, results showed that despite the participants' inability to successfully recognize the “exact
emotional types”, the generative art content of Hooloovoo and Synemania was “enjoyable and effective in
communicating emotions”. In fact, “Synemania proved to be better (on a statistically significant level,
p < 0.001) than other visualizations in evoking emotions.” Interestingly, if an EEG were added to this VIS
as a physiological data input, it could lead to a more accurate emotional classification [61].
Kamvar and Harris [10] created an emotional search engine together with a web-based visualization.
This was achieved by extracting sentences from social media that include the words “I feel” or “I am feeling”,
as well as the corresponding author's sex, age, location and time of occurrence.
Combining these metrics allows for several distinct visualizations such as: bar-charts with age break-
down of feelings from people in the last few hours; world-maps characterized by a geographic breakdown
of feelings from people in the last few hours; line-charts relating sentiments over time such as stress and
relaxation over the week or love and loneliness in the week of Valentine’s Day (Fig. 3.8); stacked area
charts relating a specific feeling's frequency to human age.
These results were based on the immense number of different emotions and sentiments made
identifiable by the applied text-extraction technique. Combining these techniques with physiological
input data could potentially yield even more accurate results. The overall result exposes the diverse
capabilities of interfaces that allow for item-level exploration of sentiment data.
Figure 3.8: On the left: sadness over age area chart; on the right: keyword usage during Valentine's Day line chart [10]
Quan and Ren [11] presented an emotional VIS intended for bloggers’ usage. In it, an intuitive
display of emotions in blogs is presented which can be used to capture emotional change rapidly. This
is achieved through machine learning techniques applied to emotion recognition at different textual
levels, thus recognizing multi-label emotions.
The emotion visualization interface uses eight different emoticons to represent the following emotions:
expectation, joy, love, surprise, anxiety, sorrow, anger and hate. Multiple emotions are recognized
in each word, each sentence and overall document, as depicted in Figure 3.9. The left section is the
input blog. The central area shows the document structure and recognized emotions for this blog. The
rightmost part shows a table of the emotions recognized in each sentence. The developers' approach
of using emoticons instead of colors for emotional representation was fully intentional, designed to
avoid ambiguity.
Figure 3.9: InfoVis interface: input blog (left), document structure and recognized emotions (middle) and the sentence emotion table (right) [11]
Figure 3.10: Chicago geo-tagged emotional Twitter messages VIS [3]
Figure 3.11: USA aggregated emotion tweets VIS [3]
Guthier et al. [3] proposed a visualization of the affective states of citizens. These emotions are
detected from geo-tagged posts on Twitter using machine learning techniques. The detected emotions
are aggregated over space and time and, finally, visualized using a city map idiom.
Emotions are represented as four-dimensional PADU vectors (pleasantness, arousal, dominance
and unpredictability). Unlike the categorical approach, this dimensional representation is attractive be-
cause it provides an algebra to describe and relate an infinite number of emotional states and intensities.
However, given its continuous nature, the main drawback of this approach is that it does not offer an in-
tuitive understanding of affective information, since people are used to reporting emotions by means of
words.
The authors decided to visualize the 4D emotion vectors as four concentric disks, whose radius is
proportional to the number of active tweets. The order of the disks was chosen based on their
importance, and a color model was created to represent each dimension: Pleasantness is gray,
Arousal is green, Dominance is red and Unpredictability is blue. Increasing brightness of these colors
was then chosen as a representation of intensity. Each emotional tweet is taken into account for 30
minutes, its value averaged with that of its geographical neighbors. As we can see in Figures 3.10
and 3.11, a geographical map presents the final result.
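A rough sketch of the disk encoding just described, written by us with assumed scaling constants (the radius cap, saturation point and field names are our own placeholders, not the paper's):

```python
BASE_HUES = {  # order reflects the paper's importance-based disk ordering
    "pleasantness": "gray",
    "arousal": "green",
    "dominance": "red",
    "unpredictability": "blue",
}

def disks(padu, tweet_count, max_radius=50.0):
    """Map a PADU vector (values in [0, 1]) to drawable concentric-disk specs."""
    # Shared radius grows with the number of active tweets (assumed cap at 100):
    radius = max_radius * min(1.0, tweet_count / 100)
    specs = []
    for i, (dim, hue) in enumerate(BASE_HUES.items()):
        specs.append({
            "dim": dim,
            "hue": hue,
            "brightness": padu[dim],                      # intensity -> brightness
            "radius": radius * (1 - i / len(BASE_HUES)),  # concentric shrink
        })
    return specs

example = disks({"pleasantness": 0.7, "arousal": 0.2,
                 "dominance": 0.5, "unpredictability": 0.1}, tweet_count=40)
print(example[0]["radius"])  # outermost (pleasantness) disk radius
```

Each spec could then be handed to any 2D drawing layer and placed at the tweet cluster's map coordinates.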
Figure 3.12: Sentiment before the Sep 2013 Australian Election InfoVis [12]
Figure 3.13: Sentiment after the Sep 2013 Australian Election InfoVis [12]
Wang et al. [12] set out to visually represent emotions over time. The authors created a new visu-
alization named SentiCompass. Intended to compare the sentiments of time-varying Twitter data, its
purpose is the improvement of affective analysis of sentiments.
SentiCompass uses a two-dimensional polar space with the horizontal axis being valence (the level
of pleasantness) and the vertical axis being arousal (the level of activation). These coordinates are
then converted into emotional categories. Polar coordinates representation is used in order to better
discern the cyclic behavior of sentiment data. This is combined with a perspective representation of
Time Tunnel, thus incorporating the temporal dimension and, again, facilitating the comparison of cyclic
behaviors. These rings vary in size according to their time interval correspondence: the smaller they are,
the more remote the time. Finally, color coding is used to represent the two dimensions of sentiment:
Valence (green to red) and Arousal (blue to yellow).
The final result can be observed in Figures 3.12 and 3.13. This new idiom proved effective in the two
case studies conducted by the researchers. Its intuitiveness, however, can be questioned due to the
concentric-circumference time-interval representation of emotions, which makes comparing the values
of distinct emotions difficult.
3.2.2 Video Content Analysis
Hupont et al. [13] created an emotional visualization - EMOTRACKER - employing not only facial
analysis for emotion recognition purposes but also eye-tracking for gaze analysis.
After noting several industries’ increasing desire for objective measurements of engagement with
content, sparked by brands increasingly striving to build emotional connections with consumers, the
authors note there is a lack of tools for achieving these aims. “In fact, the important question of how to
efficiently visualize the extracted affective information to make it useful for content designers has been
scarcely studied”.
The resulting VIS was composed of two modes: “emotional heat map”, with selectable emotional
layers, and “emotional saccade map”, with a dynamic representation that shows the path formed by the
user fixation points (points the user had been looking at for a minimum configurable time, in
milliseconds). In both modes, users could also see their current emotional state via an emoticon as well
as Ekman's six emotions, plus neutral, as seen in Figures 3.14 and 3.15.
Finally, it would be of scientific interest to analyze the impact on this VIS's accuracy if an EEG were
added as input, as done by Soleymani [61].
Figure 3.14: Emotional Heat Map InfoVis [13]
Figure 3.15: Emotional Saccade Map InfoVis [13]
Emotient Inc. was a startup focused on video emotional analysis and its consequent visualization.
The company, recently acquired by Apple Inc.1, offered as its main product an API called Emotient Analytics 2.
This software provided facial expression detection and frame-by-frame measurement of seven key
emotions, as well as intention, engagement and positive or negative consumer sentiment. All of this
1 http://blogs.wsj.com/digits/2015/12/11/silicon-valley-kingpins-commit-1-billion-to-create-artificial-intelligence-without-profit-motive/
2https://web.archive.org/web/20151219062637/http://www.emotient.com/products
was then incorporated in a visualization (Fig. 3.16). In it we can see line charts for Emotional
Engagement, Attention and Sentiment, as well as the average of each of these metrics. Furthermore, a
stacked area chart displayed each emotion over time, alongside pie charts for the video's average
emotional engagement and participants' gender.
The insight gained from quantifying emotions allowed companies to pinpoint and fix issues as well as
improve their marketing performance. Additionally, all of this was accessible independently of platform,
via a web browser.
Figure 3.16: Emotient InfoVis displaying the current Emotional Engagement value with an angular meter idiom and its evolution with a line chart (left), as well as Ekman's emotions in a retrospective stacked area chart and their overall values in pie charts (right) 1.
Figure 3.17: EmoVu InfoVis usage of an area chart which represents each of Ekman's emotions with a distinctive color 3
Produced by Eyeris Inc, EmoVu3 is a self-serve cloud-based web platform that “allows video content
creators to accurately measure their content’s emotional engagement and effectiveness of their target
audience”. The creators argue that establishing an emotional connection with the audience creates a
more affective tie, asserting that emotional campaigns are likely to generate larger profit gains than
rational ones.
3http://emovu.com/e/video-analytics/emovu-cloudsync/
Emotional acquisition techniques are employed, such as: face-tracking; head pose detection; facial
recognition; multi-face detection; gender and age group recognition; and eye tracking and openness.
These were then subject to machine learning techniques to reach acceptable accuracies. This emotional
analysis is then encoded using seven emotions (Ekman’s six plus neutral), five Engagement metrics,
three Mood indicators, four Age groups and gender.
Finally, an interactive visualization (Fig. 3.17) consisting of a layered area chart representing the
different emotions detected over a commercial’s duration is presented. Adjusting it using metrics such
as Age, Gender and Location is also possible. A free demo is offered upon account creation.
Figure 3.18: Affdex InfoVis comparing a user's line chart for Expressiveness with those of other users according to age groups 4.
With millions of faces analyzed to date and one-third of the Global Fortune 100 companies using
their technology, MIT Media Lab offspring Affectiva Inc. has developed a video-analysis emotion
detection and visualization API – Affdex 4.
Arguing that emotions are the number one influencer of attention, perception, memory, human
behavior and decision making, Affectiva made the API capable of detecting Ekman's six emotions plus
neutral, 15 nuanced facial expressions and even heart rate, by evaluating the color changes in a
person's face which pulse each time the heart beats.
The resulting visualization consists of a retrospective line chart, generated after analyzing, via webcam, the user's reaction to an ad. This line chart compares different age groups' reactions, as well as the user's, along the ad's duration. The measured reactions are: Surprise; Smile; Concentration; Dislike; Valence; Attention; and Expressiveness (Fig. 3.18). This demo is publicly available to test online.
4http://www.affectiva.com/solutions/affdex/
Figure 3.19: Kairos InfoVis presenting a subject's facial recording side by side with multiple retrospective area charts, one for each of Ekman's emotions 5.
Another thriving company in the emotional research scene is Kairos5, which offers an Emotion Analysis API powered by IBM's supercomputer Watson. Using machine learning AI techniques, their product examines people's reactions to ads for neuromarketing purposes. These responses are broken down and interpreted according to Ekman's six basic emotions, as well as an overall sentiment ranging on a scale from Very Negative (-100) to Very Positive (+100). Users can submit videos containing people's faces by visiting Kairos' website. The video is then processed and the results presented to the user. The visualization consists of an interactive interface alongside the submitted video. An area chart is presented for each emotion, displaying its intensity over time (Fig. 3.19). A free demo is available online.
Figure 3.20: Nviso's InfoVis usage of an area chart to represent each of Ekman's emotions detected over time as a result of facial expressions' detection (left) 6.
Nviso6 is a Swiss company, based at the Swiss Federal Institute of Technology in Lausanne, which specializes in emotion video analytics. Based on Ekman's Facial Action Coding System, an artificial intelligence algorithm captures and measures the response of the main facial muscles involved in the expression of emotion in real time. Proprietary deep learning 3D Facial Imaging systems decode facial movements and underlying expressed emotions. Specifically, this is achieved by scrutinizing facial expressions and eye movements, tracking 43 facial muscles and hundreds of muscle movements using ordinary webcams.
5https://www.kairos.com/
6http://www.nviso.ch/
Intended for real-time consumer behavior analysis, the results are presented through an online VIS tool. In it, we can identify the intensity of each fundamental emotion at any given moment of the submitted video by examining a stacked area chart (Fig. 3.20).
3.2.3 Physiological Computing
Figure 3.21: EEG Power Spectrum.
Figure 3.22: Volumetric bull's eye idiom [14].
Figure 3.23: Gavrilescu's [14] proposed EEG InfoVis.
Gavrilescu and Ungureanu [14] investigated contemporary methods to display EEG data and subsequently proposed a new VIS. After mentioning several useful usages of EEG, such as “the assessment of a user's emotional state”, the authors note that the nature of EEG signals, lacking any intrinsic visual data, causes multiple challenges regarding their graphical representation, particularly “for data spanning over frequency bands and extended durations”. The most common current approaches to EEG visual representation are then dissected:
Power Spectrum graphs are identified as time-consuming and tedious to compare (Fig. 3.21). This issue is particularly severe when representing raw data for a large number of electrodes and for various brainwaves.
Volumetric bull's eye plot - a 2D top-down disc view of the cranium (Fig. 3.22) - is identified as an effective way of relating desired information for a single sample across all electrode positions. However, this idiom is unable to effectively represent complex, multivariate data concurrently, due to the difficulty of “represent[ing] multi-value data across multiple ranges and time phases in a single image”.
Finally, a new VIS (Fig. 3.23), aiming to “provide an intuitive means of representing the data over multiple sample phases and for all available frequency bands”, is presented. It uses color spots varying in size and color according to the brainwave frequency and voltage, respectively, and glyphs varying in volume depending on the variation of the data between consecutive time phases (Fig. 3.23).
Figure 3.24: Valence/Arousal space-visualization mapping [15].
Figure 3.25: Emotional heat map; Valence/Arousal bar chart [15].
Cernea et al. [15] identified emotions with the purpose of enhancing the experience of multi-touch interface users. A VIS is proposed for this context, and the approach is verified by conducting an EEG user study. The researchers' goals are to improve users' emotional self-awareness; the awareness of other users' emotions in collaborative and competitive scenarios; and the evaluation of touch systems and user experience through the visualization of emotions.
This was achieved using Russell's model [62], encoding a variety of emotions in terms of affective valence (pleasant or unpleasant) and arousal (excited or calm) (Fig. 3.24). Using EEG, facial expressions and GSR readings as input, the VIS would then be presented as an outline around the user's fingertip on the touch screen. It would vary from a sharp to a curvy outline according to displeasure or pleasure, and from blue (slow pulsation) to red (intense pulsation) according to low or high arousal. Additionally, for user retrospection purposes, a bar chart representing arousal-valence values over time could later be analyzed. Finally, a blue-violet-red heat map was used to indicate average arousal around a football field in a specific multi-touch video game application (Fig. 3.25).
McDuff et al. [16] presented AffectAura, a “multimodal sensor set-up for continuous logging of audio, visual, physiological and contextual data” intended for emotional memory improvement. Their broad set of emotional input data was subjected to filtering and machine learning techniques in order to improve affect recognition. Consequently, a “classification scheme for predicting user affective state and an interface for user reflection” was proposed and successfully tested. The visualization consisted of a retrospective log of emotional activity.
The system classified emotions using a categorical version of the three-dimensional valence-arousal-engagement model. Valence could be negative, neutral or positive, while arousal and engagement could each be either low or high. Visually, a circle geometry would change color, shape, opacity and size according to valence, arousal, engagement and overall activity, respectively (Fig. 3.26).
In the end, users could successfully leverage cues from AffectAura to construct stories about their days, even after they had forgotten the particular incidents or their related emotional tones. Most participants (83%) thought this emotional analysis would be useful for reflection and self-improvement.
Figure 3.26: Screenshot of AffectAura’s InfoVis [16]
Figure 3.27: Museum's top-view heat map [17].
Figure 3.28: Emotional Intensity bar charts and scale under painting [17].
Du et al. [17] conducted an emotional analysis study in an art museum, using a GSR physiological sensor to compute “the individual and average affective responses” and provide the “emotion profile” of a painting as well as the “emotional map” of the museum.
Museum visitors were simply asked to visit the museum while wearing the GSR, as no other source of emotional feedback was targeted. This was intentional: the participants' nervous system responses were preferred in order to avoid culturally and psychologically biased responses.
The visualization encoded emotion using a color scale ranging from blue through yellow to red as emotional intensity increased. The VIS consisted of a 3D virtual museum. The floor was used as a heat map, changing color according to the overall emotional reaction to nearby paintings. Additionally, bar charts displaying each participant's emotional reaction on the y-axis were presented under each painting, for retrospective analysis purposes (Figures 3.27 and 3.28).
Figure 3.29: Screenshot examples of iMotions’ InfoVis for distinct emotions 7
iMotions7 is a biometrics research platform that provides software and hardware for capturing many types of bodily cues intended for emotional identification. They provide an API which supports more than 50 hardware devices, responsible for emotional acquisition techniques such as eye tracking, facial expressions, EEG, GSR, ECG and EMG.
This physiological data is converted into emotional insight using a wealth of metrics, such as: positive, negative and neutral valence; 19 action units; and Ekman's six basic emotions plus neutral and two advanced emotions - confusion and frustration. This data is then visualized in real time, showing an area chart for each emotion over the duration of a commercial, which is displayed alongside the subject's recorded reaction video. If used, eye-tracked areas of interest are also displayed on the ad. Similar area charts are also available for other metrics. Interestingly, the technology was used by the US Air Force and has been featured in television shows.
NeuroSky’s MindWave is a BCI headset catering directly to the consumer market 8. The hardware
consists three electrodes: a reference and ground electrodes located on a ear clip and one EEG elec-
trode on the forehead above the eye (FP1 position, see figure 2.3)9.
MindWave’s also supports previous recording data display in the format of an InfoVis (Fig. 3.30). In
it we can visualize brain-wave bands’ intensities (counts) in a radar chart (left) as well as a colored bar
7https://imotions.com/8 http://neurosky.com/biosensors/eeg-sensor/biosensors/9http://neurosky.com/biosensors/eeg-sensor/ultimate-guide-to-eeg/
33
Figure 3.30: Screenshot of Neurosky’s MindWave InfoVis 8
chart (right). The raw EEG signal is displayed above the bar chart. Additionally, two circular meters
display the normalized (1-100) Attention and Meditation metrics. This angular display of numbers is a
bad practice as it complicates user comparison. An alternative, linear representation would fix this.
Finally, signal quality is also displayed so the user can better identify issues such as signal saturation.
Figure 3.31: Screenshot of Bitalino's OpenSignals InfoVis. In it we can observe a line chart for each of Bitalino's channels' raw data over time [18].
34
Bitalino is a low-cost toolkit for learning and prototyping applications using physiological signals10. It supports a wide range of sensors - namely an accelerometer, PPG, ECG and Electrodermal Activity (EDA)/GSR - although not all simultaneously, due to bandwidth limitations [18].
Additionally, a User Interface (UI) and VIS software - OpenSignals11 - can be installed for real-time or retrospective analysis, as well as loading of pre-recorded signals (Fig. 3.31). In it, we can see line charts displaying each sensor's signal.
OpenSignals' main limitations are its lack of derived metrics (the VIS shows raw data only); its absence of any post-processing, which can result in saturated signal visualization; and its sometimes lacking connectivity, with frequent crashes reported.
3.3 Discussion
Throughout this chapter, we have extensively investigated several distinct InfoVis examples of emotional, affective and sentimental metrics, such as Emotional Valence, Arousal, Meditation and Attention.
We started by investigating two scientific papers studying how to represent an emotion visually, as well as their success rates.
Moving on to Affective Computing InfoVis examples, we found seven emotional-analysis InfoVis based on extracted text, six based on video camera recordings and six based on physiological computing, some even using hybrid, data fusion techniques. As far as we could tell, there is only one clear example of an InfoVis dedicated to EEG [14]. However, its lack of emotional representation, as well as its ”unjustified usage of 3D” in the form of a human head - a typically prohibitive practice in InfoVis design [63] - proved disappointing.
It is worth pointing out that academic literature has focused mostly on emotions extracted from text and, recently, on fusion approaches. Moreover, with the exception of EMOTRACKER, all video-based emotional analysis InfoVis were proprietary in nature.
Additionally, combining different emotional inputs, such as physiological data and camera-based facial recognition, also proved a popular approach, yielding supplementary, insightful information in the form of new metrics, along with increased complexity and efficacy. Noteworthy accuracy-increasing techniques for emotional acquisition were also identified. Machine learning approaches were common and advantageous, as programs could be trained to better identify emotions from numerous past examples.
After this state-of-the-art review, it became clear that there are several distinct approaches and fields of research analyzing and scrutinizing one's mental state and emotions. Despite their accompanying UIs, regrettably few InfoVis-specific Physiological Computing studies were found.
10http://bitalino.com/en/
11http://bitalino.com/en/software
With the exception of Gavrilescu's [14] work, which featured ”unjustified 3D interaction” (see Fig. 3.23), we could not find any other EEG InfoVis. Herein lies our opportunity to expand the borders of scientific knowledge. Further motivating our work, we propose the development of an interactive and intuitive Psycho-Physiological visual medium through which users will be able to better understand their emotional and mental inner workings.
4 Proposed Solution
Contents
4.1 Biosignal Input Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
4.2 Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
4.3 Development Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
4.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
This chapter portrays all the work which led to the most recent version of our proposed InfoVis solution1. In it, we document the various stages of this process. We start by presenting and justifying our chosen device and method of affective data acquisition: a Bitalino BCI physiological computing wearable device. Subsequently, we detail the physiological database acquisition methodology. Finally, we present our InfoVis system alongside its development process, detailing its iterative nature as well as its underlying algorithms.
4.1 Biosignal Input Solution
Efficient neuro-physiological acquisition is key for any biosignal database upon which we can develop an InfoVis, particularly so in real-time scenarios. This is understandable, as the database itself is the foundation of any InfoVis. As such, we must discuss the means through which we visually and interactively amplify cognition of this otherwise incomprehensible data. During initial IBEB meetings, we learned about the opportunity of using a BCI as the input to a Physiological Computing InfoVis database.
We were introduced to Bitalino2 - a DIY BCI biosignals hardware kit - which we ended up using as our affective data acquisition device; the hardware was introduced to this project thanks to IBEB. Furthermore, in January 2017, we participated in a case study meeting for Bitalino at IBEB. There, we had the opportunity to meet an enthusiastic and helpful supporter - Hugo Silva, Ph.D., Chief Information Officer (CIO) of Bitalino's parent company3 - with whom we kept in contact. Indeed, we found a bug in Bitalino's bare-bones Python API and were credited for debugging it4.
Intended to be a Physiological Computing equivalent of the traditional Arduino platform, the hardware consists of a low-cost, modular, wireless biosignal acquisition system with a credit-card-sized form factor that integrates multiple measurement sensors for bioelectrical and biomechanical data. A particularly noteworthy attribute is this BCI's battery longevity: it runs for up to 10 hours using Bluetooth and two sensors with the standard battery. Additionally, it can be upgraded to achieve more than 40 hours of battery life in continuous operation [20].
Bitalino supports many distinct sensors, yet its analog ports are limited. This and other features, such as weight, are listed in Table 4.1. We equipped ours (Table 4.2) with an accelerometer, four EEG electrodes and one PPG ear-clip sensor. The rationale behind this multi-sensor approach is straightforward: the accelerometer is used to detect user movement and the resulting signal saturation; through EEG analysis, we can extract CNS information such as emotional valence; and the PPG allows us to estimate the user's heart rate and thus better understand their arousal levels. Technical specifications of the components can be found at Bitalino's website5, including hardware data sheets6.
1https://github.com/chipimix/thesis
2http://bitalino.com/
3https://www.linkedin.com/in/hugo-silva-0661763/
4https://github.com/BITalinoWorld/python-serverbit/commit/151cfbb0a80efd360c4dc917b141dfb9118a79ff
Figure 4.1: Bitalino board with EMG [18].
Figure 4.2: BrainBIT prototype outside and inside headband view, detailing: (1) the Power Block; (2) battery; (3) PPG; (4) electrodes.
Table 4.1: Bitalino's Technical Specifications [20]

Sampling rate: Configurable to 1, 10, 100, or 1,000 Hz
Analog ports: 4 input (10-bit) + 2 input (6-bit)
Digital ports: 4 input (1-bit) + 4 output (1-bit)
Data link: Class II Bluetooth v2.0 (~10 m range)
Actuators: LED
Sensors: EMG, ECG, EDA, Accelerometer, and PPG
Weight: 30 g
Size: 100 × 60 mm
Battery: 3.7 V LiPo
Consumption: ~65 mAh (with all peripherals active)
The next step was the assembly of the Bitalino. The pre-ordered Plugged Kit consists of electronics (cables, sensors, the processing unit and other blocks) but no supporting apparatus, as seen in Figure 4.1. As such, a considerable amount of thought went into the ergonomics and the resistance to wear and transport of our device. Our solution, created by IBEB colleague Mariana Antas de Barros, consists of two velcro strips glued to each other with the hardware mostly between them.
The only visible electronic components on the inside of the strip are the electrodes. The PPG sensor, which hangs on the ear side, the Power Block with its indicator LED and power switch, and the battery are visible on the outer side of the strip. All of this is detailed in Figure 4.2.
Finally, our headband improved usability substantially over its original components. Its wearable nature allowed it to be used without cables or any other support. Ergonomically, the soft component of the velcro was placed on the inner side of the headband, so that our prototype was comfortable and fitted users regardless of head size.
5http://bitalino.com/en/
6bitalino.com/datasheets
This prototype's uniqueness earned it its own name: BrainBIT. The final result can be seen in Figure 4.3.
Figure 4.3: BrainBIT BCI final prototype used in this project.
4.1.1 Device Configuration
Bitalino is well documented7, which facilitates its configuration. When powered on, its firmware is prepared to receive a set of commands that control the state of the device: Idle - with the LED blinking at 0.5 Hz and biosignal streaming off - and Live - with each sensor channel streaming while the LED blinks at 1 Hz.
Additionally, the Data Packet configuration - totaling a maximum of six channels, up to four with 10-bit resolution (1,024 levels) and the remaining two limited to 6-bit resolution (64 levels) - is presented in Table 4.2, as previewed at the beginning of Section 4.1.
Table 4.2: Depiction of our Bitalino Channel-Sensor configuration

Analog Channel | Corresponding Sensor | Sensor Resolution
A0 | Left Electrode | 1,024 levels
A1 | Right Electrode | 1,024 levels
A2 | Photoplethysmogram | 1,024 levels
A3 | Accelerometer x-axis | 1,024 levels
A4 | Accelerometer y-axis | 64 levels
A5 | Accelerometer z-axis | 64 levels
Finally, one last imperative setting remains to be configured: the Micro-Controller Unit (MCU)'s sample rate. It is configurable to 1, 10, 100 or 1,000 Hz.
7http://bitalino.com/en/learn/documentation
Since the brainwave frequencies we intend to analyze lie in the zero to 50 Hz range (Table 2.1), the device's sample rate should be at least 100 Hz, as stated by the Nyquist-Shannon Sampling Theorem [64].
Nonetheless, we chose a sample rate of 1,000 Hz, as this is a more rigorous approach with the added benefit of supporting the identification of harmonics (distortion) in waves of higher frequencies - up to 500 Hz.
4.2 Architecture
The system architecture can be divided into three main pillars: the hardware, which has been extensively presented in Section 4.1; the users, whose perception we are attempting to improve, as explained in Sub-section 4.2.1; and, last but not least, the software, responsible for filtering, processing and the visualization of the derived data.
Figure 4.4 portrays the behavior and structure of these elements in a detailed, conceptual system architecture. This comprehensive schematic displays system modules using numbers, from (1) to (5). Additionally, lettered, colored arrows, from (a) to (e), display the data pipeline transitions from chemical and electrical biosignals to filtered, processed, derived psycho-physiological data.
Upon scrutinizing the figure, we can better understand our system components, their functions and how they interact with each other. In particular, the data abides by the following color scheme, according to its stage:
Black stands for the analog biosignal input, from a current or previous subject.
Red represents the digitized biosignal. These vary in resolution from 64 to 1,024 levels, as stated in Table 4.2.
Green depicts the Python-output preprocessed data, which features saturation removal, peak-finding and the raw EEG signal's resulting Power Spectrum computation.
Yellow illustrates the positional coordinates input system used to interact with the InfoVis.
Blue portrays the output of visual data.
These categories help us understand how the system's software processes and transforms previously incomprehensible physiological data into a visually recognizable depiction.
Figure 4.4: System architecture. Different arrow colors and directions represent different system interactions. In Black (a), we see the user's (1) electrical and chemical biosignals (EEG and PPG) and the physical accelerometer input. In Red (b), BrainBIT (2) listens for requests - sent over Bluetooth by ClientBIT (3) - and sends back a raw, discrete-domain, digitized version of the user's biosignals, accordingly. This version is then sent to ServerBIT (4) for post-processing. Indeed, in Green (c), we see the Python server's (4) JSON-formatted, preprocessed data answer the request it received (in identical fashion) from ClientBIT (3) - a web-based, Javascript/HTML InfoVis interface. In Yellow (d), the input means of exploring our InfoVis - typically a computer mouse - is depicted. Finally, in Blue (e), a large, one-directional arrow represents the optical NeuroExplore (5) InfoVis output, through which the user can visually analyze their own, previously collected or real-time BrainBIT psycho-physiological data.
To do so, we rely on two main software components:
ServerBIT A Python back-end application based on the Twisted8 event-driven networking engine. This architecture is based on an asynchronous message-passing protocol, in which the server and the client communicate using JSON-formatted strings. ServerBIT receives Python instructions as string-based requests from a client, evaluates them, and replies with the same instruction carrying the evaluation results as input arguments.
8http://twistedmatrix.com/
ClientBIT The HTML/Javascript/CSS front-end that connects to ServerBIT and opens a specified BITalino device to request biosensor data, using the Data-Driven Documents (D3.js) Javascript library9. D3.js helps us bring data to life using HTML, SVG and CSS. D3's emphasis on web standards allowed us to build our own framework, combining powerful visualization idioms with a data-driven approach to DOM manipulation.
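The asynchronous, JSON-string request/reply pattern between the two components can be illustrated with a minimal sketch. Note that the instruction name and field names here ("get_data", "nSamples", "result") are hypothetical placeholders, not serverBIT's actual protocol vocabulary:

```python
import json

# Minimal sketch of the JSON message-passing exchange described above.
# Field names are illustrative placeholders, not the real protocol.
def handle_request(raw_request: str) -> str:
    """Evaluate a client's string-based instruction and reply with the
    same instruction carrying the evaluation results."""
    msg = json.loads(raw_request)
    if msg.get("instruction") == "get_data":
        # Stand-in for real biosensor samples read from the device.
        samples = [512] * msg["nSamples"]
        return json.dumps({"instruction": "get_data", "result": samples})
    return json.dumps({"error": "unknown instruction"})

request = json.dumps({"instruction": "get_data", "nSamples": 5})
reply = json.loads(handle_request(request))
print(reply["instruction"], len(reply["result"]))   # get_data 5
```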
4.2.1 Users
It is important to understand that our system is focused on the visualization of brain patterns and other physiological signals. As such, for design and validation purposes, it is relevant to define our user as the final InfoVis data visualizer.
This distinction is relevant: the wearer is not necessarily the user visualizing and interacting with the resulting data. Indeed, the wearer can be a mere test subject, whose aggregate data can be of interest to an external entity.
9https://d3js.org/
Notwithstanding, the user can interact with our browser-based InfoVis in one of two ways: using just a screen and a mouse, or a touch-based interface, to visualize previous recordings; or, adding BrainBIT into the equation, visualizing their mental inner workings in real time.
4.3 Development Process
With our hardware means of raw biosignal data acquisition established, we now present all stages of development which led to our final, interactive InfoVis solution: NeuroExplore.
4.3.1 Biosignal Dataset Development
The foundation of any InfoVis system is the data whose cognition, perception and insight we are attempting to revolutionize. We aim to transform and represent data in a form that allows human interaction, such as exploration, with newfound knowledge as a result. This is what differentiates an InfoVis from a more traditional and elementary Graphical User Interface (GUI).
Accordingly, we set out to collect a large physiological data sample using BrainBIT. As we will detail in Sub-sections 4.3.2 and 4.3.3, several neuroscientific studies have documented correlations between power changes in specific brain-wave frequencies - according to the respective electrode scalp locations in the International 10-20 system (Figure 2.3) - and certain activities such as Meditation [65, 66], mental Engagement (or Focus) [67-69], and positive or negative emotional valence [70-72].
Originally, these studies were conducted with traditional EEG lab equipment. More recently, commercial BMI EEG solutions have shown similar results [28]. Indeed, work by Thie [73] and Debener [74] suggests that these wearable systems are valid tools for measuring visual evoked potentials and EEG activity when walking outside.
Accordingly, and in line with our goals (Chapter 1), we devised a set of tasks in which the mentioned mental activities, emotions and varying degrees of focus are stimulated. The exercises were designed according to previous investigators' work in the fields of Neuroscience and quantitative EEG (qEEG), as suggested by IBEB colleagues. In particular, they were designed to stimulate the specific mental actions or processes for which correlations have been found by analyzing qEEGs from different scalp locations.
The chosen tasks can be seen in Table 4.3. Considering that there is no impediment to measuring acceleration and PPG in the ensuing conditions, the activities for our test subjects to complete were designated as follows:
Meditation Users were asked to complete 12-minute meditation sessions using Headspace, a Mindfulness meditation app with scientifically proven benefits [75-77]. More information regarding this app can be found on its website10.
Puzzle Solving Users' signals were recorded while they played free Flash puzzle games for five to six minutes.
Music Listening A wide range of music video clips was shown (visual and audio) to our users. Additionally, IADS sounds were used in two closed-eyes sessions (see Sub-section 4.3.1.A for more details). This usually took four to six minutes.
Video Watching Short emotional stories or movie clips, as well as fail videos, were shown to the users for approximately five minutes.
Table 4.3: Depiction of each session’s activities, besides meditation
Session | Song (Author - Title) | Puzzle Game | Video Clip URL
S01 | James Brown - I Feel Good | Bloxorz | youtube.com/watch?v=GLq Vp5z9D4
S02 | Animals - House of the Rising Sun | Bloxorz | youtube.com/watch?v=f8OmSWxF6h8
S03 | The Verve - Bitter Sweet Symphony | Flow Free | youtube.com/watch?v=etKoVSPfBJ0
S04 | Radiohead - Like Spinning Plates | Flow Free | youtube.com/watch?v=OnMG8G0NG U
S05 | Gorillaz - Clint Eastwood | Isoball 3 | youtube.com/watch?v=F2bk 9T482g
S06 | Beethoven - Moonlight Sonata | Isoball 3 | youtube.com/watch?v=-rgDvP39Lqw
S07 | Mogwai - Auto Rock | Redremover | youtube.com/watch?v=GV4DCfZt-Qo
S08 | Crazy Astronaut - State | Redremover | youtube.com/watch?v=GV4DCfZt-Qo v2
S09 | Cartoons - Witch Doctor | SuperStacker2 | youtube.com/watch?v=zCsmBDI8vNA
S10 | Police - Walking on the Moon | SuperStacker2 | youtube.com/watch?v=a9mJQiJnheo
S11 | Siavash Amini - The Wind | Blosics 2 | youtube.com/watch?v=9UchIJV9jGM
S12 | Bonobo - Silver | Blosics 2 | youtube.com/watch?v=9UchIJV9jGM3t=6m
S13 | Mariah Carey - Without You | Klocki | youtube.com/watch?v=9UchIJV9jGM3t=11m
S14 | Pirates of the Caribbean Main Theme | Klocki | youtube.com/watch?v=9UchIJV9jGM3t=17m
S15 | Pokemon Season One Main Theme | Electric Box 2 | youtube.com/watch?v=7grc 6cDYd4
S16 | Bonobo - Stay the Same | Electric Box 2 | youtube.com/watch?v=7SmN1D5eS4w
S17 | Peer Gynt - Morning Mood | Color Switch | youtube.com/watch?v=-cBgb8IOO2U
S18 | Los del Rio - Macarena | Color Switch | youtube.com/watch?v=sfzsznPP590
S19 | IADS Audio Clips | 4 In A Row | youtube.com/watch?v=sfzsznPP590#t=6m
S20 | IADS Audio Clips | 4 In A Row | youtube.com/watch?v=sfzsznPP590#t=12m
BrainBIT captured the user's physiological signals in each session. Explicitly, each second of a recording contains 6,000 data values: 2,000 for the two EEG channels, each sampled with 10-bit resolution (1,024 levels); 1,000 for the PPG, also with 1,024 levels; and, finally, 3,000 for the accelerometer, with 1,024 levels for one of its channels and 64 levels for the other two, due to the technical limitations seen in Table 4.2.
10https://www.headspace.com/science/meditation-research
A total of twenty sessions was achieved. Each session consisted of four activities, starting with meditation, followed by puzzle solving, music listening and, finally, video watching. Typically, this would take around 35 minutes, with small breaks of 30 to 120 seconds between activities. These pauses were intentional, as they allowed the users to provide feedback as well as to decompress.
As a consequence, each session contained between 7,920,000 and 10,440,000 data values, as its duration varied from 21 to 29 minutes according to movie and music lengths. A total of 23 sessions was completed between April and June 2017, with three 24-year-old male participants taking part in this endeavor.
The first three sessions were excluded because we wrongly saved processed data instead of raw data. The remaining 20 sessions were divided by tester (three) and activity type (four), which resulted in a total of 250 CSV files.
Figure 4.5: Recording CSV file screenshot snippet depicting a six-line selection, corresponding to one second (a), with one thousand comma-separated values per line (b).
Finally, each CSV file was formatted as visible in Figure 4.5, so that every six lines correspond to one second of recording. Each of these lines stores 1,000 comma-separated data values specific to one channel. Indeed, each line corresponds, in order, to one Bitalino channel, from A0 to A5, as seen in Table 4.2. This channel format is respected both in the Real-Time and in the Retrospective VIS.
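The six-lines-per-second layout described above can be read with a short grouping routine. This is our own illustrative sketch of a reader for that layout, not the thesis code, and the fabricated demo uses only four samples per line for brevity:

```python
# Illustrative reader for the CSV layout described above: every six
# consecutive lines hold one second of data, one line per Bitalino
# channel (A0-A5, as in Table 4.2), each carrying comma-separated
# integer samples. A sketch, not the thesis implementation.
CHANNELS = ["A0", "A1", "A2", "A3", "A4", "A5"]

def group_seconds(lines):
    """Group a recording's lines into per-second {channel: samples} dicts."""
    seconds = []
    for i in range(0, len(lines), len(CHANNELS)):
        block = lines[i:i + len(CHANNELS)]
        seconds.append({ch: [int(v) for v in row.split(",")]
                        for ch, row in zip(CHANNELS, block)})
    return seconds

# Fabricated two-second example with only four samples per line.
demo_lines = ["1,2,3,4"] * 12
parsed = group_seconds(demo_lines)
print(len(parsed), parsed[0]["A2"])   # 2 [1, 2, 3, 4]
```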
The full extent of data preprocessing and filtering, subsequently described in Sub-section 4.3.2, resulted in a decrease of approximately 37% in each recording's respective CSV database file, as visible in Figure 4.6.
In the end, the database decreased from approximately 7.97 GB to its final size of 5.81 GB. The entire gathering process took three months, starting March 27th, and features 250 CSV files spanning four activity categories correlated with specific, documented brain patterns.
Figure 4.6: Raw biosignals (left) versus Preprocessed (right) database sizes, in GB.
4.3.1.A IADS
The International Affective Digitized Sound system (IADS) provides a set of acoustic emotional stimuli
for experimental investigations of emotion and attention.
This set of 167 naturally occurring, standardized, emotionally-evocative, internationally accessible sound stimuli is widely used in the study of emotion. It includes content across a wide range of semantic categories.
It is part of a system for emotional assessment and was developed by the Center for the Study of
Emotion and Attention (CSEA) at the University of Florida. Each sound clip lasts six seconds and consists of varied examples such as bees buzzing, a baby laughing, a female sighing or traffic noises. Finally, as seen in Table 4.3, we played these sounds in a closed-eyes recording session.
4.3.2 Data preprocessing
The massive volume of the collected, enigmatic raw biosensor data, added to its susceptibility to real-time recording condition deterioration, demanded preprocessing for feature extraction and filtering purposes. As such, we started by building a data-cleaner Python script, which we used for the retrospective database and extended into the real-time back-end preprocessing development. NeuroExplore data preprocessing is implemented by the Python server - serverBIT. Receiving the unprocessed biosensor signal as input, this back-end component is tasked with the following successive data transformations: (1) filtering signal saturation; (2) calculating the EEG Power Spectrum; (3) PPG blood volume pressure peak-finding.
4.3.2.A Saturation Filtering
As we began acquiring data, we noticed BrainBIT was susceptible to external electromagnetic noise.
Visible examples included interference from the Alternating Current (AC) power supply, discernible in the InfoVis Power Spectrum at 50 Hz. In addition, other coherent noises [78] were visible in the raw EEG line chart. In one instance, this coherent noise was the clear outcome of a nearby cooling fan's electric motor. This correlation was visible in NeuroExplore as we moved closer to or away from the fan.
As stated in section 4.1, we intended to filter saturation using our physical sensor - the Accelerometer.
In practice, any unprocessed EEG recording saturation lasting more than 10 ms meant its respective 1000-value communication had to be removed. The justification for this is two-fold: firstly, 10 ms corresponds to the period of a 100 Hz wave, and thus the resulting noise could compromise the Power Spectrum; secondly, we could not simply remove these values and keep the rest of the recording because, further in the data transformation pipeline, the Fast Fourier Transform (FFT) requires a temporally equidistant input.
The resulting EEG-saturation-filtered data included all data instances in which the Accelerometer detected abrupt changes. In other words, we found that recorded velocity differences were always accompanied by detrimental EEG saturation. As a consequence, we ended up discarding the Accelerometer data from the preprocessing procedure.
The final, detailed Algorithm 4.1 depicts how we detected raw signal saturation.
Algorithm 4.1: EEG Raw Signal Saturation Detection for each second of input data
input: raw_EEG1 and raw_EEG2, 1000-long input arrays with Left Frontal Polar (FP1) and Right Frontal Polar (FP2) raw EEG signal
output: Boolean relative to saturation presence in this second of recording

count_raw1 = 0
count_raw2 = 0
for curr_ms ← 0 to 1000 do
    if raw_EEG1[curr_ms] ≥ 1018 or raw_EEG1[curr_ms] = 0 then
        count_raw1++
    if raw_EEG2[curr_ms] ≥ 1018 or raw_EEG2[curr_ms] = 0 then
        count_raw2++
if count_raw1 ≥ 10 or count_raw2 ≥ 10 then
    return true
else
    return false
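Algorithm 4.1 translates into a few lines of Python. This is a sketch with names of our own choosing, not serverBIT's actual implementation; the thresholds (values at or above 1018, or exactly 0, with ten such samples flagging the second) come from the algorithm above.

```python
def is_saturated(raw_eeg1, raw_eeg2, limit=1018, max_count=10):
    """Algorithm 4.1: flag one second (1000 samples per channel) of raw EEG
    as saturated when either channel pins at the rails often enough."""
    def count_rail(samples):
        # A sample counts as saturated when it sits at either rail of the ADC.
        return sum(1 for v in samples if v >= limit or v == 0)
    return count_rail(raw_eeg1) >= max_count or count_rail(raw_eeg2) >= max_count
```

A flagged second is dropped entirely, which keeps the remaining input temporally equidistant for the FFT stage.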
4.3.2.B EEG Power Spectrum Calculus
In order to derive the EEG's Power Spectrum, we used the NumPy library11. This Python library implements a function - rFFT12 - which computes the one-dimensional discrete Fourier Transform for real input. This means the function does not compute the negative frequency terms, and the length of the transformed axis of the output is therefore n/2 + 1, which resulted in 501 values (from 0 to 500 Hz) as n = sampling rate = 1000 Hz.
In our case, this real input is the digitalized, electrode-captured set of 1000 electric potential difference values - each associated with a time interval of one millisecond. This input was received every second from BrainBIT. Thus, this function computed the one-dimensional 501-point discrete Fourier Transform (DFT) of a real-valued 1000-point array by means of an efficient algorithm called the Fast Fourier Transform (FFT) [79].
The resulting formula is visible in Equation (4.1), where:
PS = EEG Power Spectrum, ranging from 0 to 500 Hz
rFFT = one-dimensional discrete Fourier Transform for real input
raw = the 1000 values of unprocessed EEG signal respective to one second

PS = |rFFT(raw)|² (4.1)
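A minimal NumPy sketch of Equation (4.1), assuming one second of input sampled at 1000 Hz; the variable and function names are ours.

```python
import numpy as np

def power_spectrum(raw):
    """Equation (4.1): PS = |rFFT(raw)|^2, giving 501 points (0..500 Hz)
    for a real-valued 1000-sample (one second at 1000 Hz) input."""
    return np.abs(np.fft.rfft(raw)) ** 2

one_second = np.zeros(1000)
ps = power_spectrum(one_second)            # 501 values
freqs = np.fft.rfftfreq(1000, d=1 / 1000)  # 0, 1, ..., 500 Hz
```

Because the window is exactly one second, each output index conveniently corresponds to an integer frequency in hertz.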
4.3.2.C PPG Peak-Finding Algorithm
The PeakUtils13 Python package, available under a Massachusetts Institute of Technology (MIT) license, was adopted for this undertaking. As the name implies, it implements methods for the detection of peaks in one-dimensional data (e.g. BrainBIT output signals).
This is achieved by finding the indexes of the local maxima, depending on the passed threshold and minimum distance. We chose it because of its efficiency and simplicity - it is a Python implementation of Matlab's Find Peaks function - and because it also supports methods for baseline correction, Gaussian fitting and others.
Considering the PPG's blood volume values rested at half of the sensor's resolution (512), through a trial-and-error approach, we set the normalized threshold as seen in Equation (4.2), where:
prev_ppg = up to 10 seconds (10 000 values) of previously preprocessed PPG sensor data. It is not normalized and, thus, must be divided by the maximum resolution (1023).
11 https://docs.scipy.org/doc/numpy/user/index.html
12 https://docs.scipy.org/doc/numpy/reference/generated/numpy.fft.rfft.html#numpy.fft.rfft
13 https://pythonhosted.org/PeakUtils/
max(0.5, 0.8 ∗ prev_ppg / 1023) (4.2)
Additionally, we set the minimum distance between consecutive peaks to 600 ms, as this value corresponds to a heart rate of 100 beats per minute (bpm), which, in turn, stands at the limit of the typical adult's resting heart-rate range - [60−100] bpm.
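The threshold of Equation (4.2) can be sketched as below. The helper name is ours, and interpreting the scalar fed to the formula as the window's maximum value is an assumption; with PeakUtils, this threshold would feed a call such as `peakutils.indexes(ppg, thres=..., min_dist=600)`.

```python
import numpy as np

def ppg_threshold(prev_ppg):
    """Equation (4.2): normalized peak-detection threshold from up to 10 s
    (10 000 samples) of preprocessed PPG data.
    Assumption: the scalar used in the formula is the window's maximum."""
    return max(0.5, 0.8 * float(np.max(prev_ppg)) / 1023)

# With PeakUtils (as in the text), the detection call would look like:
# import peakutils
# peaks = peakutils.indexes(ppg, thres=ppg_threshold(prev_ppg), min_dist=600)
```

With the signal resting around 512, the formula bottoms out at the 0.5 floor; only a window whose maximum exceeds roughly 640 raises the threshold above it.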
4.3.3 Derived Data
NeuroExplore relies on attributes derived from the serverBIT output of preprocessed data. In this subsection we will document how the front-end (ClientBIT) derived each of the following metrics: Theta (θ), Alpha (α) and Beta (β) brain oscillations; Heart-rate; Emotional Valence; Engagement; and Meditation.
These last three metrics in particular - Emotional Valence, Engagement and Meditation - are determined exclusively for each second of non-saturated communication. This was intended, as it is futile to derive metrics from a saturated signal. In these missing-value situations, we opted to carry over the previously, adequately derived metric.
4.3.3.A Brain-wave extraction
As we will see further ahead, for our derived metrics we will need to extract information from these specific brain waves: Theta (θ), Alpha (α) and Beta (β).
There are several possible protocols to extract brainwaves, as studies have shown correlation with the sum of a specific wave's spectrum, the value of its peak or its location in the mentioned spectra [66]. This lack of a flawless protocol is a candid example of Neuroscience's knowledge-frontier exploration and discovery. Enthusiastically, we propose and test two techniques. Firstly, we used the maximum value of a specific neural oscillation as its representative metric. This can be seen in equations (4.3), (4.4) and (4.5).
PS = EEG Power Spectrum, ranging from 0 to 500Hz
h = specific frequency, in hertz
θ = max(PS(h)), 4 ≤ h ≤ 7 (4.3)   α = max(PS(h)), 8 ≤ h ≤ 12 (4.4)   β = max(PS(h)), 13 ≤ h ≤ 30 (4.5)
Secondly, we propose the calculation of the mean value of each wave's specific frequencies, as seen in equations (4.6), (4.7) and (4.8):
θ = mean(PS(h)), 4 ≤ h ≤ 7 (4.6)   α = mean(PS(h)), 8 ≤ h ≤ 12 (4.7)   β = mean(PS(h)), 13 ≤ h ≤ 30 (4.8)
In the end, we opted for this second option, as we observed less erratic oscillations over time in the consequently derived metrics. This, in turn, resulted in significantly fewer outliers for the metrics derived in succession.
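The mean-based extraction can be sketched as follows. The band limits used here are the conventional EEG ranges and are an assumption (the thesis fixes its own exact ranges), as is the function name.

```python
import numpy as np

# Conventional band limits in Hz (an assumption, not the thesis's exact table).
BANDS = {"theta": (4, 7), "alpha": (8, 12), "beta": (13, 30)}

def wave_mean(ps, wave):
    """Equations (4.6)-(4.8): mean of the power spectrum over one wave's
    band. ps is the 501-point spectrum where index i corresponds to i Hz."""
    lo, hi = BANDS[wave]
    return float(np.mean(ps[lo:hi + 1]))
```

Because each spectrum index is one hertz, band extraction reduces to a simple slice of the 501-point array.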
4.3.3.B Heart-Rate metric
Heart rate is the speed of the heartbeat, measured by the number of contractions of the heart per minute (bpm). Our PPG signal represents such contractions via the acquisition of more or less light in accordance with the fluctuating blood volume pressure.
As described in subsection 4.3.2.C, when the PPG data reaches the front-end application, it has been preprocessed and features a sentinel value (1023) for each heart-beat-corresponding peak.
As such, we keep in a queue the nearest 10 seconds respective to each detected and recorded spike. Every time we receive a new message, we update the queue's values, adding one second to the previous ones as well as the new one. Finally, we define the heart rate as the average time difference between each two consecutive queue entries.
The JavaScript function responsible for this - estimateHR() - is detailed in Annex A, with the respective project code comments for added insight.
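A Python sketch of the same logic, assuming peak timestamps in milliseconds within the 10-second queue; the function name mirrors, but is not, the Annex A estimateHR().

```python
def estimate_hr(peak_times_ms):
    """Heart rate in bpm from timestamps (ms) of detected PPG peaks:
    the mean interval between consecutive peaks, converted to beats/minute."""
    if len(peak_times_ms) < 2:
        return None  # not enough peaks in the window yet
    gaps = [b - a for a, b in zip(peak_times_ms, peak_times_ms[1:])]
    return 60000.0 / (sum(gaps) / len(gaps))
```

A 600 ms mean interval, the minimum peak distance from subsection 4.3.2.C, corresponds exactly to the 100 bpm ceiling.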
4.3.3.C Emotional-Valence metric
Neuroscientists have correlated several emotional states (such as depression [80]), as well as the
Emotional-Valence dimensional space, with EEG FP1 and FP2 alpha-wave asymmetry [70–72]. Accord-
ingly, we defined our Emotional-Valence metric as seen in Equation (4.9):
EmotionalV alence = αFP1 − αFP2 (4.9)
The alpha-wave values are subtracted from one another according to their FP1 and FP2 electrode scalp locations. This frontal alpha-wave asymmetry is intended to convey the user's current emotional attraction versus rejection, thus translating the originally complex, localized electrode electric potential readings into ordinary human language.
4.3.3.D Meditation metric
We have previously justified our motivation for measuring emotional valence and engagement, respectively, for their neuromarketing and high-performance training capabilities. Meditation has also been a subject of neuroscientists' EEG analysis for some time [65].
Correspondingly, we believe that it is of interest to be able to monitor inter-session meditation, as well as its respective evolution. Indeed, scientific studies have demonstrated the daily-life health benefits of this practice, decreasing stress levels which, chronically, have been linked to the development and progression of a broad spectrum of disorders [66].
We based our formula on the review work by Fingelkurts [66], which features a state-of-the-art table listing 35 studies correlating meditation with alpha-power increases (versus 3 against) as well as a theta-power surge (27 studies).

Meditation = (θFP1 + θFP2 + αFP1 + αFP2) / 4 (4.10)
4.3.3.E Engagement metric
Neuroscientists have correlated Task Engagement and Mental Workload in Vigilance, Learning, and Memory Tasks with EEG [68]. Different studies have attempted to evaluate player task engagement and arousal [67, 69] using the following Equation (4.11):

Engagement = (βFP1 + βFP2) / (αFP1 + αFP2 + θFP1 + θFP2) (4.11)
We also adopted this algorithm, in particular due to its correlation with player engagement measurement, as one of our presented database's tasks is playing puzzle games.
4.3.4 InfoVis Development
In this subsection we detail the various iterative development process stages which led to the current version of NeuroExplore, a Physiological Computing InfoVis through which users can better decipher their brain and body's psycho-physiological state.
Thoroughly, we detail how NeuroExplore handled specific data structures to achieve the fast action/feedback loop required by dynamic queries, through vision and biosignals alike.
These components are integrated into a coherent framework that simplifies the management of so-
phisticated physiological data structures through the BrainBIT BCI or our collected database, the Server-
Bit Python back-end, and the InfoVis Javascript implementation, ClientBIT.
4.3.4.A Low Fidelity Prototype
Initial NeuroExplore development began after several meetings and correspondence with IBEB fac-
ulty in which we probed whether hypothetical design solutions were viable from a Neuroscientific, often
biosignal analysis, standpoint. Furthermore, our bioengineering colleagues provided influencing feed-
back, as we recurrently aimed to better understand and meet their expectations.
Upon learning important Physiological Computing and qEEG concepts, and after testing Bitalino prototypes and their respective OpenSignals VIS solution as well as API development protocols and examples14, we started tinkering with possible InfoVis solutions. These were usually discussed and validated by the mentioned experts.
validated by the mentioned experts.
Figure 4.7 portrays an exemplary paper prototype, as aspired to in February 2017, based on collected feedback.
Figure 4.7: Initial NeuroExplore conceptual paper prototype artifact, dating February 2017
Early on, as seen in Figure 4.7, we learned about the possibility of deriving metrics such as Attention,
Meditation and Emotional Valence as well as their respective visual idiom presentation.
4.3.4.B High Fidelity Prototype
On January 15th, in a meeting with fellow investigators at IBEB, we had the opportunity to meet
Bitalino creator, Dr Hugo Silva15. We discussed Bitalino’s configurations extensively - specifically its
channel resolutions, sensors and output rates as well as API documentation so that we could build a
High Fidelity, functional prototype.
The main motivation for this was the need to start collecting a database through which we could de-
liver our InfoVis aspirations. As such, we followed the Python Client-Server API16 guidelines to develop
the initial prototype version.
The initial NeuroExplore software prototype, depicted in Figure 4.8, featured six charts: two server-derived-data qEEGs and four unprocessed-data line charts: one PPG and three axis-specific accelerometer outputs. This version allowed the recording and saving of physiological data, indispensable to the database's acquisition.
To do so, we used a Python back-end and a Javascript/HTML front-end in a Client-Server communication application. We embraced Twisted's17 event-driven networking engine, following an asynchronous JSON-formatted message-passing protocol.
14 http://bitalino.com/en/development/apis
15 https://www.linkedin.com/in/hugo-silva-0661763
16 http://bitalino.com/en/development/apis, https://github.com/BITalinoWorld/python-serverbit
Figure 4.8: Initial NeuroExplore software prototype displaying live data acquisition, April 4th, 2017.
Finally, this prototype's VIS implementation was already architected around HTML and CSS concepts, relying on a graph-plotting jQuery library: Flot18. Depicted idioms updated their images as new communications were received, each second.
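The per-second, JSON-formatted exchange can be illustrated as below; every field name is hypothetical, as the actual serverBIT message schema is not reproduced here.

```python
import json

# Hypothetical per-second message from serverBIT to the ClientBIT front-end;
# field names are illustrative, not the project's actual schema.
message = {
    "saturated": False,      # output of Algorithm 4.1 for this second
    "ps_fp1": [0.0] * 501,   # FP1 power spectrum, 0..500 Hz
    "ps_fp2": [0.0] * 501,   # FP2 power spectrum, 0..500 Hz
    "ppg": [512] * 1000,     # preprocessed PPG (peaks flagged as 1023)
}
payload = json.dumps(message)  # sent over the Twisted connection
decoded = json.loads(payload)  # as parsed by the front-end
```

Keeping the payload self-describing lets the front-end update every idiom from a single message per second.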
This version was live-tested in the form of the database acquisition, where we also collected user feedback. As the sessions took place, we started to better understand how the user rationalized the InfoVis as well as how we could further improve interaction.
4.3.4.C β Version InfoVis
Iterating upon previous prototype versions, this stable version - reproduced in Figure 4.9 - introduced the usage of Scalable Vector Graphics (SVG) through a different Javascript plotting library: D3.js. In addition to real time, this version featured, at the time, a separate retrospective InfoVis analysis component for physiological data.
To do so, a small horizontal file selection bar stood at the top of our VIS, which allowed CSV file input. Furthermore, a time-slider interface, complete with play and pause buttons, was located at the bottom of the screen.
One of the main motivations behind choosing the D3.js library was its proliferation of distinct examples. Through these, we were able to fulfill a common user feedback request: animation between the time-varying data. The β version featured the smooth animation of data instead of flickering every second.
In addition, logarithmic scaling - of base ten - was employed in the qEEG y-axis. This was the result of empirical NeuroExplore usage as, early on, we noticed values respective to smaller frequencies were
17 http://twistedmatrix.com/
18 http://www.flotcharts.org/
on a significantly higher scale than the remaining ones. In the end, this solution was adopted as a method of visually displaying all spectral data at once.
Figure 4.9: NeuroExplore retrospective mode beta-version InfoVis, featuring the VIS of post-processed, derived physiological metrics' charts. On the left, we can see the maximized menu. On top, its show/hide interface next to the file selection interface. On the bottom, the previous-recording time slider with respective play/pause buttons is visible. The plotted idioms' y-axis log-scale usage is also visible.
Finally, this version also supported a lateral, minimizable menu bar. Here the user can find Instructions as well as Information and Contacts regarding NeuroExplore support. At this stage, this menu was thought of as a mere assistant whose interaction was mainly destined to switch between real-time and retrospective mode and to present instructions for novice users, such as tutorials and other assistance information. The rationale behind this was that other interactions - deemed more common - such as file opening and recording-slider manipulation, should be constantly visible for a quicker overall interaction.
4.3.4.D Final InfoVis
A beta-testing formative evaluation, in which users were asked to use the think-aloud protocol, resulted in qualitative user data acquisition as well as other feedback and suggestions. Three users - unfamiliar with the project - volunteered for this experiment: two female, one male, with ages ranging from [24−55].
Specifically, we decided it was important to improve the interactivity of our VIS in order to increase
exploration potential and increase user data perception through visual manipulation. In order to achieve
this, we implemented several features, divided in the two following categories:
RetroVis As suggested by users, we decided to increase the visual space for interaction so that we could support the display of six idioms at once. In order to achieve this, we moved the file selection interface into the Menu, with a larger draggable recording-upload area. This provided extra comparison capabilities to the user. Additionally, the overall interface presentation stood out as cleaner, since no chart was only partially visible and the menu integrated - and hid - functions external to the current visual manipulation. Finally, mouse-over interactivity was added so the user could more expeditiously interpret the presented data, instead of relying on axis scales.
RTVis In addition, real-time-specific interaction was enhanced via the display of error messages and red highlights for incorrect PPG heart-rate readings - indicated as probable sensor misplacement - as well as for EEG saturation, in real time. This had been previously requested by system testers who, in some cases, had difficulty accomplishing system tasks due to unrecognized saturation.
Figure 4.10: NeuroExplore retrospective mode final-version InfoVis, introducing a broader VIS space due to input file-selection menu integration and mouse-over interaction.
Finally, in anticipation of the usability tests presented in Chapter 5, we implemented an information messaging system on both VIS modes. Intended to decrease errors and task completion times, once interaction with NeuroExplore starts, the user is greeted with guiding and informational messages so they know exactly what to do. This presented significant usability increases as, previously, empty charts occupied most of the screen until the user connected to BrainBIT (RealTimeVIS) or selected a previous recording file (RetroVIS).
4.4 Discussion
Throughout this chapter, we have extensively detailed the iterative development process which led to the final, most current implementation of NeuroExplore.
Through the development of this Physiological Computing InfoVis - intended to improve the user's ability to decipher their mental state - we reached several verdicts:
1. We acquired and preprocessed a massive (5.81 GB) biosignals database, featuring approximately 18 hours of recording spanning 250 CSV files, divided into three subjects and four categories associated with specific, measurable and studied mental patterns.
2. BrainBIT's Physiological Computing database acquisition capabilities have been proven. It presented susceptibility to outside electromagnetic noise - visible in NeuroExplore - which needs to be managed. Additionally, the PPG sensor outputted weak signals in some users. We believe this is related to lighting conditions which, if correctly handled, let the sensor work as intended.
3. Our Bitalino sensor specs feature three channels which - thoughtfully - could be reassigned. The accelerometer's anticipated contribution to saturation filtering proved fruitless, as EEG-based signal saturation filtering encompassed it. Currently, we have no use for this output, but have decided to keep it as part of our project to avoid risking future developments.
4. There is extended neuroscientific research regarding EEG derivation of psycho-physiological met-
rics. We have meticulously detailed our proposed extraction algorithms - and justified them ac-
cordingly.
5. A final version of NeuroExplore is proposed, and its features are widely dissected: from the initial data output, through preprocessing, back-end mechanisms and data derivation alike, to the front-end interactive visual display.
Through the iterative development of the project, user - and often expert - feedback was weighed and incorporated into each successive version, thus increasing our goal's success prospects by repeatedly reality-checking countless preconceived notions.
In sum, NeuroExplore's current prototype's varied features warrant it the right to be tested with a larger pool of users and thoroughly validated, via usability and utility testing alike.
5 Evaluation
Contents
5.1 Context: European Investigators’ Night 2017 . . . . . . . . . . . . . . . . . . . . . . . 61
5.2 Usability Tests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
5.3 Case Studies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
5.4 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
Validating an InfoVis is an indispensable external indication of its features' successful implementation. We will assess its usability and functionality by testing whether our design has succeeded in its purpose [63].
In order to validate our solution, we decided to tackle this challenge with two distinct approaches: Usability Testing and Case Studies. The former was undertaken by users without previous knowledge of the system. This was intentional, in order to evaluate the system in regard to interactivity and usability. Testers were asked to complete a set of tasks and a quiz while under observation, after which a discussion took place where overall feedback and personal suggestions were collected. The latter corresponds to two experts in biomedical engineering interacting with the InfoVis while providing functional feedback.
5.1 Context: European Investigators’ Night 2017
Every detail narrated in this chapter took place at the European Investigators' Night 2017 event. This happened on September 29 at a science museum in Lisbon: Pavilhao do Conhecimento. The event consisted of an open-doors science demonstration, with dozens of investigators presenting their experiments from a wide range of scientific areas to the general public. As participating investigators, this provided us an opportunity to validate our InfoVis solution and collect user feedback.
Relying on our charged BrainBIT, a laptop for the VIS display and a mouse, our project stood at IBEB's Physiological Computing table. Other BCI devices and hands-on demos were available, as seen in Figure 5.1. Data was systematically collected throughout the event's six p.m. to midnight duration. A large audience - of all ages - showed interest in Physiological Computing and in this project. Indeed, a considerable group of people consistently stood either waiting in line or observing our table while we attended visitors.
Figure 5.1: IBEB's Physiological Computing table at European Investigators' Night 2017. On the bottom-right we can see a participant filling in the SUS.
Amongst the visitors, not everyone we talked to showed availability to be a formal participant. Some visitors only wanted to talk - enthusiastically - about the technology and its possibilities. Others lacked technical knowledge, such as the concept of an operating system's file. One visitor just wanted to see what the headband felt like. Regardless, conversations usually skewed towards BrainBIT's and our InfoVis' hypothetical applications, such as in Mindfulness and competitive sports training as well as Neuromarketing.
Finally, these resulting conversations often provided insightful and interesting feedback. One example was the multi-age interest in BCI: mainly enthusiasm for videogame possibilities from children and teenagers, and Neuromarketing interest and fears from adults. Additionally, some visitors showed surprising knowledge of Physiological Computing wearable device applications - typically having seen or owned a wristwatch with biosensors.
5.2 Usability Tests
After many iterations of development, a final version of our VIS must be subjected to a summative evaluation. This is intended to determine our prototype's usability success as well as to assure a means of maintaining standards [81].
In this section we will detail how the user testing was conducted, depicting each user task and its derived quantitative data: the amount of time necessary for the user to successfully complete each task, and the number of errors made.
Additionally, upon task completion but before feedback took place, participants were asked to fill in a SUS, "a highly robust and versatile tool for usability professionals" [82]. Consequently, we could establish a concrete, comparable [0−100] usability score.
Finally, a detailed account is presented beside a statistical analysis of the resulting data.
5.2.1 Methodology
The tests took place in a public event at the Pavilhao do Conhecimento science museum. Thousands of visitors attended the event in what soon became a very productive evening, predictable by the constant crowds surrounding the investigators' tables. Overall, the Usability Tests respected the ensuing protocol:
1. Users were selected from this pool of attendees and fellow investigators without any bias towards age, gender or any particular criteria.
2. An introductory and sometimes off-script contextual chat took place, which followed this pattern:
(a) A brief summary of BCI technologies and applications led to a BrainBIT hands-on presentation explaining its functions, specifically the non-invasive nature of its physiological sensors - the PPG and the EEG - and its accelerometer.
(b) A mention of Neuroscience's background on concepts such as Emotional Valence - explained as emotional attractiveness versus repulsion - and its relation with FP1 and FP2 EEG alpha brain waves.
3. Users were seated in a chair in front of a battery-powered, unplugged laptop and mouse. Intentionally, on it, users could see nothing other than a warning message.
4. Testers were aided in BrainBIT placement. Firstly, the headband was positioned and tightened so that the electrodes were in direct contact with the forehead. Then the PPG sensor was clipped to the earlobe. Upon proper placement, testers were asked if they were comfortable wearing BrainBIT, and it was readjusted as needed.
5. At this point a debriefing took place. Testers were told they would have to attempt to complete five tasks while using the system. The tasks were reformulated from previous evaluation sessions. Other than the tasks' enunciation, testing was intended to be achieved in succession, without any external intervention. Accordingly, participants were told they would have no assistance during task execution. We made sure to ask our users if they were ready before each task description was read.
6. With BrainBIT powered on, testing began after one final check, where we made sure to explain there was no need to feel pressured, as it was the system that was being tested, not the user. From the beginning to the completion of each task (see Table 5.1), two evaluation metrics were registered: task completion times and number of errors.
7. One last assessment ensued, which followed a meticulous 10-item Likert scale - the SUS - as depicted in subsection 5.2.1. Participants were asked to record their immediate response to each item, rather than thinking about items for a long time.
8. Finally, a discussion where user suggestions and ideas were annotated took place.
Table 5.1: Planned User Tasks Description
Task  Description
T1    "Open a file from a previous recording."
T2    "Identify this recording's heart rate evolution and current value"
T3    "Change into Real Time Mode"
T4    "Shake your head and identify when the respective signal saturation occurs"
T5    "Identify and Compare the evolution of your Emotional Valence"
5.2.2 Results
Initial concerns of not having enough participants were soon dismissed, thanks to the large number of attendees present during our demo evening. Our established quota of twenty users was successfully met. This data-gathering process took a total of approximately five hours.
Notably, BrainBIT's battery, which was charged the preceding evening, endured this test successfully. This is coherent with our database acquisition experience (see subsection 4.3.1). The continuous Bluetooth connection also proved reliable during the consecutive completion of all the usability testing tasks. Indeed, BrainBIT opportunely sent physiological and physical data to our InfoVis' real-time mode without communication constraints.
On a less positive note, other testing conditions were not as favorable. Our participants' tests were at times surrounded by a crowded and noisy environment, especially as the evening progressed; thanks to their collaboration, the situation was made much more manageable. The same can't be said, regrettably, regarding the electromagnetic noise, which, consistently enough, saturated our electrodes' signal. We hypothesize this is attributable to the large number of electronic devices both on display and on the participants' behalf.
Pragmatically, as a consequence, we decided to revise our tasks. As seen in Table 5.1, Task 4 formerly asked users to "Shake your head and identify the respective signal's saturation". Seeing how, in some cases, the signal was saturated from the moment Real Time Mode started, we decided to exclude this task and used the remaining tasks of Table 5.2 as a guideline instead.
Participants' ages varied from 18 to 51, with 60% (12 users) being male and the remaining 40% (8 users) female. Six participants (30%) were fellow investigators and fourteen (70%) were visitors from varied walks of life.
A detailed description of how each task, from T1 to T4, was typically executed by our users in
presented below:
T1 This task asked users to open a file from a previous recording. In order to achieve this, users had to identify the Menu button as they faced a "Please Select a Recording" message. Upon correct menu option selection, an operating system window popped up displaying a directory with a recording file to be selected. Relying on intuitive discovery, this task posed users little difficulty.
T2 Here, users had to correctly identify the recording's current heart rate. This could be achieved directly by looking at the value displayed in the PPG or by looking at the latest value in the Heart-Rate over Time Line Chart. Either way, everyone succeeded at this identification task.
T3 Users were asked to change into Real Time Mode. Upon exploration, users successfully discovered
this was accessible through the menu’s first option.
Table 5.2: Final User Tasks

Task  Description
T1    "Open a file from a previous recording."
T2    "Identify the heart rate value"
T3    "Change into Real Time Mode"
T4    "Identify and compare the evolution of your Emotional Valence"
T4 Participants had to identify their Emotional Valence Line Chart as well as comment on its evolution, comparing it with previous seconds. The Identify and Compare nature of this task made it more complex. Despite this, every user completed this task.
5.2.2.A Task Completion Times
The times each user took to complete each of the four tasks seen in Table 5.2 are presented in Table 5.3. Specifically, we can see the times, in seconds, corresponding to each successful task completion.
Additionally, statistical data is presented regarding our results: the minimum, maximum, mean, standard deviation, and the margin of error with a 95% confidence interval.
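As a hedged illustration of how these statistics can be obtained (the thesis does not specify its computation method; a normal-approximation margin of z·s/√n with z = 1,96 reproduces the table's CI column up to rounding), the following sketch uses Task 3's completion times from Table 5.3:

```python
import math

# Task 3 completion times (seconds) for the 20 participants, from Table 5.3
task3 = [5.42, 6.20, 16.18, 4.40, 4.15, 6.87, 13.59, 5.11, 6.39, 11.52,
         8.46, 12.43, 9.74, 4.19, 4.06, 5.83, 6.47, 9.30, 4.98, 15.75]

n = len(task3)
mean = sum(task3) / n
# Sample standard deviation (n - 1 in the denominator)
sd = math.sqrt(sum((t - mean) ** 2 for t in task3) / (n - 1))
# Margin of error for a 95% confidence interval (normal approximation, z = 1.96)
margin = 1.96 * sd / math.sqrt(n)

print(f"min={min(task3):.2f}  max={max(task3):.2f}  "
      f"mean={mean:.2f}  sd={sd:.2f}  ci=±{margin:.2f}")
```

Running this yields a mean of about 8,05 s, an SD of about 3,91 and a margin of about 1,7 s, agreeing with Table 5.3's Task 3 column up to rounding.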
Beside Table 5.3 stands a box plot: intended for increased data insight, Figure 5.2 portrays the median, the upper and lower quartiles, and the minimum and maximum data values.
After dissecting the data from Table 5.3 and Figure 5.2’s box-plot, several conclusions were reached:
1. Tasks one and three were the quickest, as their duration results were the lowest: minimums of 3,75 and 4,06 seconds and average completion times of 8,55 and 8,05 seconds, respectively. Additionally, as is perhaps more evident in Figure 5.2, these two tasks also featured the least value dispersion. Indeed, users swiftly identified and traversed the Menu as these tasks required. We had anticipated such results given the intentionally simple nature of these Discover actions [63]. This leads us to believe that the user group generally found these tasks straightforward and intuitive.
2. Tasks two and four were slower, as depicted by the maximum values of 39,18 seconds and 41,69 seconds and the averages of 14,41 seconds and 14,14 seconds, respectively. Comparably, we can clearly see broader value dispersion in Figure 5.2, meaning users agreed less with one another. Perhaps explained by unfamiliarity with the system, some users struggled with the identification nature of these tasks, losing time by attempting to interact with the observing staff. In particular, the errors detailed in Subsection 5.2.2.B and one case of external disturbance amounted to longer performances. These consequently higher completion times, in comparison to tasks one and three, had been anticipated due to the increased task difficulty.
Task Duration (in seconds)
Users   Task1   Task2   Task3   Task4
U01     12,12   17,25    5,42   12,98
U02      3,90    5,30    6,20    6,79
U03      6,12    7,66   16,18    9,02
U04      4,73    5,73    4,40   11,39
U05     13,45   29,21    4,15    8,74
U06      3,96    4,87    6,87    7,94
U07      6,42   19,31   13,59    5,84
U08      5,76   22,16    5,11   21,13
U09     16,95   39,18    6,39   19,22
U10      4,88    4,87   11,52    9,50
U11      9,09   18,22    8,46   24,33
U12      5,91    5,88   12,43   10,88
U13     18,05   26,00    9,74   41,69
U14      4,48    4,92    4,19    8,04
U15      3,75    5,78    4,06    7,82
U16     11,35   10,41    5,83    9,23
U17      5,89   17,67    6,47    8,66
U18     13,54   21,11    9,30   27,51
U19      4,78    8,55    4,98    5,84
U20     18,04   14,02   15,75   26,28
Statistics
Min      3,75    4,87    4,06    5,84
Max     18,05   39,18   16,18   41,69
Mean     8,55   14,41    8,05   14,14
SD       4,82    9,74    3,91    9,52
CI       2,10    4,30    1,70    4,20
Table 5.3: Task Completion Times
Figure 5.2: Box Plot for user task completion times.
Notwithstanding, when accounting for every user's performance, tasks two and four still averaged positive times considering they corresponded to Identify and Compare actions [63].
3. Overall, tasks one and three achieved similar mean results, diverging by 0,5 seconds. Both averaged 5,59 to 6,39 seconds less than tasks two and four, which in turn also achieved similar average times, differing by 0,27 seconds from each other. Finally, we can state with 95% confidence:
T1 took 8,55±2,10 seconds.
T2 took 14,41±4,30 seconds.
T3 took 8,05±1,70 seconds.
T4 took 14,14±4,20 seconds.
5.2.2.B Errors
During each task's execution, user errors such as task misinterpretations and incorrect actions were scrutinized. The resulting error counts can be seen next to a brief statistical analysis in Table 5.4.
Number of Errors per Task
Users   Task1   Task2   Task3   Task4
U01     1       1       0       1
U02     0       0       0       0
U03     0       0       1       0
U04     1       0       0       1
U05     0       1       0       0
U06     0       0       0       0
U07     0       1       1       0
U08     0       1       0       1
U09     1       2       0       1
U10     0       0       1       0
U11     0       1       0       1
U12     0       0       1       0
U13     1       1       0       1
U14     0       0       0       0
U15     0       0       0       0
U16     1       0       1       0
U17     0       1       0       0
U18     1       1       0       1
U19     0       0       0       0
U20     1       0       1       1
Statistics
Min     0       0       0       0
Max     1       2       1       1
Mean    0,35    0,5     0,3     0,4
SD      0,49    0,61    0,47    0,51
CI      0,21    0,27    0,21    0,22
Table 5.4: User Errors Detailed
Figure 5.3: Box Plot for the number of user errors in each task's completion.
Explicitly, Table 5.4's data dissects the minimum, maximum, mean, standard deviation, and the margin of error with a 95% confidence interval. Additionally, value dispersion is depicted in Figure 5.3, as well as the median, the upper and lower quartiles, and the minimum and maximum data values.
Upon detailed analysis, the following conclusions were reached:
1. No task had a minimum number of errors higher than zero. Five testers (25%) completed all of their
tasks successfully with no errors. Additionally, ten testers (50%) made one error or less during the
entire testing.
2. Only one user, during the second task, made more than one error: two. Accordingly, task two, followed by task four, impaired users the most, averaging 0,5 and 0,4 errors respectively. This was often either a symptom of misunderstanding the task description or, on occasion, of misidentifying the derived biometrics required by the task.
3. Globally, all tasks were successfully completed. None averaged more than 0,5 errors, further emphasizing NeuroExplore's usability. In direct correlation with the task completion times visible in Subsection 5.2.2.A, tasks one and three produced fewer errors (0,35 and 0,30) than tasks two and four (0,5 and 0,4), respectively and on average. Again, these results are a consequence of the Discover versus Identify and Compare nature of the actions performed in each task. Finally, we claim with 95% confidence:
T1 induced 0,35±0,21 errors.
T2 induced 0,50±0,27 errors.
T3 induced 0,30±0,21 errors.
T4 induced 0,40±0,22 errors.
5.2.2.C System Usability Scale (SUS)
NeuroExplore averaged a total score of 85,13 in the SUS. This study provided us with a comparable standard to contemplate in distinct ways, such as:
1. A measurable basis for comparison against other current, and future, solutions, to the credit of SUS usage in "a variety of research projects and industrial evaluations" [82].
2. The full extent of the results and their respective statistics can be seen in Table 5.5. These were obtained during the users' first interaction with the system.
3. The system boasts an "Excellent" score and a "B" grade, as depicted in Figure 5.4 and defined by Bangor [83]. This achievement is a positive testament to our system's overall usability, and it correlates with the previously recorded, comprehensive user feedback detailed in section 5.4.
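For reference, the standard SUS scoring procedure behind Table 5.5's Total and Final columns can be sketched as follows (a minimal illustration; the function name is ours): odd-numbered, positively worded items contribute their score minus one, even-numbered, negatively worded items contribute five minus their score, and the resulting 0-40 total is multiplied by 2.5.

```python
def sus_score(responses):
    """Convert ten 1-5 Likert responses into a 0-100 SUS score.

    Odd-numbered items (Q1, Q3, ...) are positively worded and contribute
    (score - 1); even-numbered items (Q2, Q4, ...) are negatively worded
    and contribute (5 - score). The 0-40 total is then scaled by 2.5.
    """
    assert len(responses) == 10
    total = sum(r - 1 if i % 2 == 0 else 5 - r  # i is 0-based, so Q1 is i = 0
                for i, r in enumerate(responses))
    return total * 2.5

# U01's responses from Table 5.5
print(sus_score([4, 1, 5, 2, 4, 1, 5, 1, 5, 1]))  # 92.5
```

Applied to each row of Table 5.5, this reproduces the listed Total (×2.5) values.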
5.3 Case Studies
Before the European Investigators’ event doors opened to the public, during preparation hours, two
case studies were scheduled. The voluntaries were experts in Physiological Computing, IBEB members,
respectively, MD/PhD Hugo Ferreira 1 and Bioengineering Msc finalist Mariana Antas de Barros.
1https://scholar.google.com/citations?user=92iIVdsAAAAJ
Table 5.5: SUS scores from each user
Questions Results
Users   Q01 Q02 Q03 Q04 Q05 Q06 Q07 Q08 Q09 Q10  Total  Final (x2.5)
U01     4   1   5   2   4   1   5   1   5   1    37     92,5
U02     3   1   5   2   4   2   4   2   4   1    32     80
U03     5   2   4   1   3   2   4   2   5   1    33     82,5
U04     4   2   4   4   5   1   5   2   4   1    34     85
U05     4   2   3   1   4   3   5   1   4   2    31     77,5
U06     3   2   4   1   4   1   5   3   5   3    31     77,5
U07     5   1   4   1   5   1   5   1   5   3    37     92,5
U08     3   2   4   2   4   2   5   1   4   2    29     72,5
U09     5   1   4   1   3   2   5   1   5   2    35     87,5
U10     4   2   4   1   4   2   5   1   4   1    34     85
U11     5   1   5   1   4   2   5   2   5   1    37     92,5
U12     5   1   4   2   4   2   5   1   5   2    34     85
U13     4   2   4   1   4   2   5   1   4   1    34     85
U14     5   1   5   5   5   1   5   1   5   1    36     90
U15     4   1   5   1   5   2   3   1   4   1    35     87,5
U16     4   1   5   2   5   1   5   1   5   1    38     95
U17     5   2   4   1   3   2   4   2   5   1    32     80
U18     5   1   1   1   4   2   2   2   5   1    30     75
U19     4   2   4   1   4   2   5   2   5   1    34     85
U20     5   1   5   2   5   1   4   1   5   1    38     95
Statistics
Min     3   1   1   1   3   1   2   1   4   1    29     72,50
Max     5   2   5   5   5   3   5   3   5   3    38     95
Mean    4,3 1,45 4,15 1,65 4,15 1,70 4,58 1,45 4,65 1,40 34,05 85,13
Figure 5.4: Our project's final system usability score presented in Bangor's Adjective Rating Scale [19]
Participants had previously worked with Bitalino, and with the BrainBIT prototype in particular, but had never before interacted with our solution. Interestingly, they also had past experience using Bitalino's OpenSignals VIS system [20].
Before each session, users were assured that no concrete metric, such as the measurement of time and errors, was being recorded. Indeed, it was explained to them that the purpose of this exercise was to explore the functionality components of our solution as it was traversed by participants with expert knowledge in the domain of our InfoVis - specifically Neuroscience, Physiological Computing and BCIs. Both cases followed a less protocol-driven approach. Still, the ensuing proceedings are summarized below.
5.3.1 Methodology
1. A BrainBIT was mounted on their heads and ears, and users were introduced to NeuroExplore through a demo in which relevant features and common tasks were exemplified (see Table 5.2).
2. Users were asked to embrace the Think Aloud Protocol as they interacted with our solution. In particular, we asked them to state the rationale and intent behind their current actions.
3. During this free exploration and narration, we intentionally interacted with the participant and collected feedback intended to help us validate domain terms as well as to understand the most relevant features and how they were being perceived.
4. This process was intended to last until the participant had explored every single feature of our solution.
The results were recorded, as seen in Subsection 5.3.2.
5.3.2 Results
Both participants extensively and enthusiastically traversed our system's functionalities with ease. Through NeuroExplore, they could visualize raw physiological data as well as the derived affective and nervous-system related metrics, both in real time and retrospectively. As pointed out, this contrasted with their current solution, which relies on OpenSignals VIS and is restricted to raw values (see Figure 3.31 for reference).
Overall, comments were extremely positive. The purposefully minimalistic and sober menu UI, as well as its smooth animation and clean color palette, were praised. Participants appreciated these aesthetics, which they deemed to correspond to the simple and intuitive user-system interactions. In detail, the line-chart mouse interaction, which conveys data values without the visual aid of the scales, was particularly praised. What's more, the location of the remaining interactions inside an extendable menu was pointed out as facilitating both the analysis of a wider area of visible data and the overall interaction with other relevant system functions by concentrating them all in one place.
Interestingly, the employed algorithms and their respective representation were discussed, as well as the on-field data collection possibilities of our project's web-based nature.
Additionally, testers stressed the recurring connection issues they faced while recording data using OpenSignals, situations which could potentially frustrate recording sessions. Thus, our positive experience when using our solution to collect data was appreciated and successfully put to the test.
5.3.2.A User 1
The first session was attended by professor Hugo Ferreira from IBEB 2. Professor Hugo has extensive previous experience working with distinct BCI devices, including Bitalino, and their respective software. Despite having supported us in many meetings, our participant had never actually interacted with NeuroExplore. As such, the tests began enthusiastically and at a fast pace, with the think-aloud protocol not always being respected.
Nevertheless, feedback was plentiful. After quickly selecting a previous recording, our collaborator pointed out the adequate usage of a logarithmic scale on the y-axis of the power spectrum. Indeed, a linear scale would be incompatible with the value dispersion, especially at lower frequencies. Afterwards, the post-processed, high-value outlier at each of the PPG's peak milliseconds was inquired about and explained.
Subsequently, upon looking at all idioms as a whole, the professor inquired about the positioning of each chart. While noticing they were divided into groups according to brain signals, blood volume pressure signals and derived metrics, the following suggestion ensued: "Group brain signals next to their derived metrics".
As the case study progressed, the user moved on to the Real Time Mode of our system. Then, upon noticing inadequate detection of blood volume pressure, the professor swiftly tested different lighting conditions and repositioned the ear-clip until satisfied with the resulting PPG. This setback, while regrettable, interestingly demonstrated the responsiveness and interactions made possible between our BCI and our InfoVis.
Upon further inspection of the power spectrum, our user also suggested: "it would be remarkable if we could add a line representing a previously measured, no-user-movement baseline to the power spectrum". Indeed, this feature would allow us to better differentiate our results from the surrounding environment's electromagnetic noise.
Most notably, professor Hugo said he was "intending on using the system". Emphasizing his interest in using NeuroExplore at IBEB, he strongly encouraged us to develop proper documentation for easier future setups.
2http://ibeb.ciencias.ulisboa.pt/pt/hugo-alexandre-ferreira/
5.3.2.B User 2
The second user was fellow colleague Mariana Antas de Barros, who is also finishing her MSc thesis, using a Bitalino as input for an artistic audio-VIS. Mariana stated that she was also using FP1-FP2 alpha-wave asymmetry as a means of capturing the Emotional Valence used as input in her VIS. Explicitly, the sum of one electrode's alpha-frequency power spectrum values is subtracted from the other electrode's respective sum.
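The asymmetry computation described above can be sketched as follows. This is a minimal illustration, not either project's actual implementation: the alpha band limits (8-13 Hz), the function names, and the (frequency, power) representation of the spectra are all our assumptions.

```python
ALPHA_BAND = (8.0, 13.0)  # Hz; the conventional alpha range, assumed here

def alpha_power(spectrum):
    """Sum the power-spectrum values falling inside the alpha band.

    `spectrum` is a sequence of (frequency_hz, power) pairs.
    """
    lo, hi = ALPHA_BAND
    return sum(power for freq, power in spectrum if lo <= freq <= hi)

def emotional_valence(fp1_spectrum, fp2_spectrum):
    """FP1-FP2 alpha-wave asymmetry: one electrode's summed alpha-band
    power minus the other electrode's, as described in the text."""
    return alpha_power(fp1_spectrum) - alpha_power(fp2_spectrum)

# Toy spectra: (frequency, power) bins at 1 Hz resolution, 1-30 Hz
fp1 = [(f, 1.0) for f in range(1, 31)]
fp2 = [(f, 0.5) for f in range(1, 31)]
print(emotional_valence(fp1, fp2))  # 6 alpha bins * (1.0 - 0.5) = 3.0
```

A positive value indicates more alpha power over FP1 than FP2; the sign convention and any normalization are left to each implementation.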
Much to our delight, one of her first remarks was: "It's so pretty!". This happened after our collaborator swiftly opened a previous recording and was promptly presented with its respective data's time-evolution animation. Upon further interaction, the mouse-over highlighting of chart values resulted in positive feedback.
A brief discussion regarding the data processing and other algorithms used in our derived metrics ensued as the user explored each idiom. Eventually, Real Time Mode was clicked. Here, the display of the raw electrode data was appreciated, as it allowed neurosignal experts such as our participant to collect feedback via pattern recognition.
As a consequence, our first suggestion was annotated: support an option through which one can permanently close a saturation warning message. In addition to abrupt movements, the signal started to saturate depending on the user's location, susceptible to external electromagnetic noise. Because of our collaborator's ability to easily spot saturation on the raw electrode signal, she felt this message should be removable. Nevertheless, this message was intentional: any recorded second deemed saturated was discarded in post-processing, and most users would appreciate feedback on saturation beyond the raw signal line chart.
Finally, while wondering about possible usages for the unused space at the bottom of the InfoVis, such as representing new derived metrics or supporting idioms for other Bitalino sensor configurations, an epiphany moment occurred: support user selection of the displayable idioms and the respective Bitalino sensor channel configuration.
In the end, Mariana admitted to personally liking the chosen "simple and clean" aesthetics. Additionally, she pointed out the broad range of our project's possible usages and expressed interest in the acquired data collection and its applications.
5.4 Discussion
As this section nears completion, a retrospective view of our extensively detailed and diversified usability and functionality validation is in order.
Through usability testing, we observed expeditious and always successful task completion. As anticipated, Discover tasks were considerably faster and less error-prone than Identify and Compare ones.
Additionally, our solution stood out as "Excellent", scoring an 85,13 rating in the SUS.
The case studies with participating IBEB members depicted how the system's functions were easily carried out, exemplifying the solution's intuitiveness and ease of use. Furthermore, several possible usages were identified and genuine interest was displayed in our project's future.
In sum, NeuroExplore was successful in our evaluation. Users easily, quickly and successfully identified and interacted with their own psycho-physiological mental-state data. This is what we aimed for in Chapter 1's objectives: "an interactive visualization so that users better understand one's mental state and emotions".
6 Conclusion and Future Work
Contents
6.1 Future Work
Retrospectively, this thesis began with the introduction of our project and the statement of its comprising objectives, which led into the implementation of NeuroExplore. Given the diversified fields of study our solution traverses, we presented a diversified palette of background knowledge: this ranged from the more traditional, text-derived field of Sentiment Analysis, passed through Affective Computing and its increment on this method via multi-modal approaches, and arrived at the complementary fields of Physiological Computing and, in particular, BCI. This Computer Science meets Neuroscience backdrop then led into related work: InfoVis examples of both proprietary and academic nature. The document progressed by comprehensively detailing the development process leading into our final implementation, namely: the software and hardware architecture, the database acquisition process and its preprocessing, the derived metrics' rationale and protocol, and, finally, the iterative development and its procedural growth. Lastly, we validated NeuroExplore through comprehensive functionality and usability testing with users and experts.
NeuroExplore is a Physiological Computing InfoVis prototype through which users can better comprehend one's current or previously recorded brain and body status. Achieving a SUS score of 85,13, the system was considered excellent by users and experts alike. Coherently, we have fulfilled our main project goal (see Chapter 1) and all of its dependencies, namely, a data pipeline which seamlessly converts imperceptible biosignals into an insightful VIS for increased mind and body behavior insights.
6.1 Future Work
The developed NeuroExplore system's future prospects bode well. As previously detailed, several IBEB colleagues displayed interest in working with our solution, typically by further developing it for specific project contexts. As requested by Prof. Hugo Ferreira, we are currently writing proper documentation to facilitate this procedure.
Additional features were requested during feedback gathering, notably - but not exclusively - in the case studies:
1. Grouping EEG brain signals next to their derived metrics
2. Graphically represent a previously measured, standing-still user baseline in the power spectrum.
3. Support an option through which one can permanently close a saturation warning message.
4. Support user selection of displayable idioms and respective Bitalino sensor channel configuration
5. Support the graphical selection of different methodologies - up to user definition - of data derivation
6. Support the visualization of multi-user physiological data regarding the same object of study or
type of activity.
On a positive side note, we have since implemented the first request: grouping signals next to their derived metrics.
Bibliography
[1] G. A. Miller, “The cognitive revolution: a historical perspective,” Trends in cognitive sciences, vol. 7,
no. 3, pp. 141–144, 2003.
[2] P. Ekman and W. V. Friesen, “Constants across cultures in the face and emotion.” Journal of per-
sonality and social psychology, vol. 17, no. 2, p. 124, 1971.
[3] B. Guthier, R. Alharthi, R. Abaalkhail, and A. El Saddik, “Detection and visualization of emotions
in an affect-aware city,” in Proceedings of the 1st International Workshop on Emerging Multimedia
Applications and Services for Smart Cities. ACM, 2014, pp. 23–28.
[4] E. J. Bubrick, S. Yazdani, and M. K. Pavlova, “Beyond standard polysomnography: Advantages
and indications for use of extended 10–20 eeg montage during laboratory sleep study evaluations,”
Seizure, vol. 23, no. 9, pp. 699–702, 2014.
[5] M.-F. Lee, G.-S. Chen, J. C. Hung, K.-C. Lin, and J.-C. Wang, “Data mining in emotion color with
affective computing," Multimedia Tools and Applications, vol. 75, no. 23, pp. 15185–15198, 2016.
[6] J. A. G. Sanchez, A. Shibata, K. Ohnishi, F. Dong, and K. Hirota, “Visualization method of emotion
information for long distance interaction,” in Humanoid, Nanotechnology, Information Technology,
Communication and Control, Environment and Management (HNICEM), 2014 International Confer-
ence on. IEEE, 2014, pp. 1–6.
[7] M. K. Torkildson, K. Starbird, and C. Aragon, “Analysis and visualization of sentiment and emotion
on crisis tweets,” in International Conference on Cooperative Design, Visualization and Engineering.
Springer, 2014, pp. 64–67.
[8] A. Scharl, A. Hubmann-Haidvogel, A. Jones, D. Fischl, R. Kamolov, A. Weichselbraun, and
W. Rafelsberger, “Analyzing the public discourse on works of fiction–detection and visualization of
emotion in online coverage about hbo’s game of thrones,” Information processing & management,
vol. 52, no. 1, pp. 129–138, 2016.
[9] U. Krcadinac, J. Jovanovic, V. Devedzic, and P. Pasquier, “Textual affect communication and evo-
cation using abstract generative visuals,” IEEE Transactions on Human-Machine Systems, vol. 46,
no. 3, pp. 370–379, 2016.
[10] S. D. Kamvar and J. Harris, “We feel fine and searching the emotional web,” in Proceedings of the
fourth ACM international conference on Web search and data mining. ACM, 2011, pp. 117–126.
[11] C. Quan and F. Ren, “Visualizing emotions from chinese blogs by textual emotion analysis and
recognition techniques,” International Journal of Information Technology & Decision Making, vol. 15,
no. 01, pp. 215–234, 2016.
[12] F. Y. Wang, A. Sallaberry, K. Klein, M. Takatsuka, and M. Roche, “Senticompass: Interactive visu-
alization for exploring and comparing the sentiments of time-varying twitter data,” in Visualization
Symposium (PacificVis), 2015 IEEE Pacific. IEEE, 2015, pp. 129–133.
[13] I. Hupont, S. Baldassarri, E. Cerezo, and R. Del-Hoyo, “Advanced human affect visualization,” in
Systems, Man, and Cybernetics (SMC), 2013 IEEE International Conference on. IEEE, 2013, pp.
2700–2705.
[14] M. Gavrilescu and F. Ungureanu, “Enhanced three-dimensional visualization of eeg signals,” in
E-Health and Bioengineering Conference (EHB), 2015. IEEE, 2015, pp. 1–4.
[15] D. Cernea, C. Weber, A. Ebert, and A. Kerren, “Emotion-prints: interaction-driven emotion visual-
ization on multi-touch interfaces.” in Visualization and Data Analysis, 2015, p. 93970A.
[16] D. McDuff, A. Karlson, A. Kapoor, A. Roseway, and M. Czerwinski, “Affectaura: an intelligent system
for emotional memory,” in Proceedings of the SIGCHI Conference on Human Factors in Computing
Systems. ACM, 2012, pp. 849–858.
[17] S. Du, E. Shu, F. Tong, Y. Ge, L. Li, J. Qiu, P. Guillotel, J. Fleureau, F. Danieau, and D. Muller,
“Visualizing the emotional journey of a museum,” in Proceedings of the 2016 EmoVis Conference
on Emotion and Visualization. Linkoping University, 2016, pp. 7–14.
[18] J. Guerreiro, R. Martins, H. Silva, A. Lourenco, and A. L. Fred, “Bitalino-a multimodal platform for
physiological computing.” in ICINCO (1), 2013, pp. 500–506.
[19] A. Bangor, P. Kortum, and J. Miller, “Determining what individual sus scores mean: Adding an
adjective rating scale,” Journal of usability studies, vol. 4, no. 3, pp. 114–123, 2009.
[20] H. P. da Silva, A. Fred, and R. Martins, “Biosignals for everyone,” IEEE Pervasive Computing,
vol. 13, no. 4, pp. 64–71, 2014.
[21] I. B. Mauss and M. D. Robinson, “Measures of emotion: A review,” Cognition and emotion, vol. 23,
no. 2, pp. 209–237, 2009.
[22] R. W. Picard, E. Vyzas, and J. Healey, “Toward machine emotional intelligence: Analysis of affective
physiological state,” IEEE transactions on pattern analysis and machine intelligence, vol. 23, no. 10,
pp. 1175–1191, 2001.
[23] E. Cambria, “Affective computing and sentiment analysis,” IEEE Intelligent Systems, vol. 31, no. 2,
pp. 102–107, 2016.
[24] R. W. Picard and R. Picard, Affective computing. MIT press Cambridge, 1997, vol. 252.
[25] W. James, “What is an emotion?” Mind, vol. 9, no. 34, pp. 188–205, 1884.
[26] J. Owyang, “The future of the social web: In five eras,” Retrieved April, vol. 24, p. 2012, 2009.
[27] V. U. Druskat and S. B. Wolff, “Building the emotional intelligence of groups,” Harvard business
review, vol. 79, no. 3, pp. 80–91, 2001.
[28] N. A. Badcock, P. Mousikou, Y. Mahajan, P. de Lissa, J. Thie, and G. McArthur, “Validation of the
emotiv epoc® eeg gaming system for measuring research quality auditory erps,” PeerJ, vol. 1, p.
e38, 2013.
[29] P. Zikopoulos, C. Eaton et al., Understanding big data: Analytics for enterprise class hadoop and
streaming data. McGraw-Hill Osborne Media, 2011.
[30] J. Yi, T. Nasukawa, R. Bunescu, and W. Niblack, “Sentiment analyzer: Extracting sentiments about
a given topic using natural language processing techniques,” in Data Mining, 2003. ICDM 2003.
Third IEEE International Conference on. IEEE, 2003, pp. 427–434.
[31] B. Fasel and J. Luettin, “Automatic facial expression analysis: a survey,” Pattern recognition, vol. 36,
no. 1, pp. 259–275, 2003.
[32] I. A. Essa and A. P. Pentland, “Coding, analysis, interpretation, and recognition of facial expres-
sions,” IEEE transactions on pattern analysis and machine intelligence, vol. 19, no. 7, pp. 757–763,
1997.
[33] S. K. Card, J. D. Mackinlay, and B. Shneiderman, Readings in information visualization: using vision
to think. Morgan Kaufmann, 1999.
[34] C. Darwin and P. Prodger, The expression of the emotions in man and animals. Oxford University
Press, USA, 1998.
[35] P. Ekman, W. V. Friesen, M. O’sullivan, A. Chan, I. Diacoyanni-Tarlatzis, K. Heider, R. Krause, W. A.
LeCompte, T. Pitcairn, P. E. Ricci-Bitti et al., “Universals and cultural differences in the judgments of
facial expressions of emotion.” Journal of personality and social psychology, vol. 53, no. 4, p. 712,
1987.
[36] H. A. Elfenbein and N. Ambady, “On the universality and cultural specificity of emotion recognition:
a meta-analysis.” Psychological bulletin, vol. 128, no. 2, p. 203, 2002.
[37] B. Pang, L. Lee et al., “Opinion mining and sentiment analysis,” Foundations and Trends® in Infor-
mation Retrieval, vol. 2, no. 1–2, pp. 1–135, 2008.
[38] B. Liu, “Sentiment analysis and opinion mining,” Synthesis lectures on human language technolo-
gies, vol. 5, no. 1, pp. 1–167, 2012.
[39] W. Medhat, A. Hassan, and H. Korashy, “Sentiment analysis algorithms and applications: A survey,”
Ain Shams Engineering Journal, vol. 5, no. 4, pp. 1093–1113, 2014.
[40] E. Cambria, B. Schuller, Y. Xia, and C. Havasi, “New avenues in opinion mining and sentiment
analysis,” IEEE Intelligent Systems, vol. 28, no. 2, pp. 15–21, 2013.
[41] K. R. Scherer, T. Banziger, and E. Roesch, A Blueprint for Affective Computing: A sourcebook and
manual. Oxford University Press, 2010.
[42] J. Tao and T. Tan, “Affective computing: A review,” in International Conference on Affective comput-
ing and intelligent interaction. Springer, 2005, pp. 981–995.
[43] P. E. Ekman and R. J. Davidson, The nature of emotion: Fundamental questions. Oxford University
Press, 1994.
[44] S. Hamann, “Mapping discrete and dimensional emotions onto the brain: controversies and con-
sensus,” Trends in cognitive sciences, vol. 16, no. 9, pp. 458–466, 2012.
[45] J. Allanson and S. H. Fairclough, “A research agenda for physiological computing,” Interacting with
computers, vol. 16, no. 5, pp. 857–878, 2004.
[46] S. H. Fairclough, “Fundamentals of physiological computing,” Interacting with computers, vol. 21,
no. 1-2, pp. 133–145, 2008.
[47] A. Holzinger, M. Bruschi, and W. Eder, “On interactive data visualization of physiological low-cost-
sensor data with focus on mental stress,” in International Conference on Availability, Reliability, and
Security. Springer, 2013, pp. 469–480.
[48] S. Amiri, R. Fazel-Rezai, and V. Asadpour, “A review of hybrid brain-computer interface systems,”
Advances in Human-Computer Interaction, vol. 2013, p. 1, 2013.
[49] M. Duvinage, T. Castermans, M. Petieau, T. Hoellinger, G. Cheron, and T. Dutoit, “Performance
of the emotiv epoc headset for p300-based applications,” Biomedical engineering online, vol. 12,
no. 1, p. 56, 2013.
[50] J. van Erp, F. Lotte, and M. Tangermann, “Brain-computer interfaces: beyond medical applications,”
Computer, vol. 45, no. 4, pp. 26–34, 2012.
[51] M. Bares, M. Brunovsky, T. Novak, M. Kopecek, P. Stopkova, P. Sos, and C. Hoschl, “Qeeg theta
cordance in the prediction of treatment outcome to prefrontal repetitive transcranial magnetic stimu-
lation or venlafaxine er in patients with major depressive disorder,” Clinical EEG and neuroscience,
vol. 46, no. 2, pp. 73–80, 2015.
[52] M. Brunovsky, M. Viktorinova, A. Bravermanova, J. Horacek, V. Krajca, and C. Hoschl, “Qeeg
correlates of emotionally negative state induced by autobiographic script in patients with affective
disorder and healthy controls,” Clinical Neurophysiology, vol. 127, no. 3, p. e43, 2016.
[53] S. E. Thompson and S. Parthasarathy, “Moore’s law: the future of si microelectronics,” Materials
today, vol. 9, no. 6, pp. 20–25, 2006.
[54] A. Teller, “A platform for wearable physiological computing,” Interacting with Computers, vol. 16,
no. 5, pp. 917–937, 2004.
[55] A. E. Hassanien and A. Azar, “Brain-computer interfaces,” Switzerland: Springer, 2015.
[56] M. B. Khalid, N. I. Rao, I. Rizwan-i Haque, S. Munir, and F. Tahir, “Towards a brain computer inter-
face using wavelet transform with averaged and time segmented adapted wavelets,” in Computer,
Control and Communication, 2009. IC4 2009. 2nd International Conference on. IEEE, 2009, pp.
1–4.
[57] V. Rozgic, S. N. Vitaladevuni, and R. Prasad, “Robust eeg emotion classification using segment
level decision fusion,” in Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE Interna-
tional Conference on. IEEE, 2013, pp. 1286–1290.
[58] J. Onton and S. Makeig, “High-frequency broadband modulations of electroencephalographic spec-
tra,” Frontiers in human neuroscience, vol. 3, 2009.
[59] S. Koelstra, C. Muhl, M. Soleymani, J.-S. Lee, A. Yazdani, T. Ebrahimi, T. Pun, A. Nijholt, and
I. Patras, “Deap: A database for emotion analysis; using physiological signals,” IEEE Transactions
on Affective Computing, vol. 3, no. 1, pp. 18–31, 2012.
[60] E. Cambria, D. Olsher, and D. Rajagopal, “Senticnet 3: a common and common-sense knowl-
edge base for cognition-driven sentiment analysis,” in Twenty-eighth AAAI conference on artificial
intelligence, 2014.
[61] M. Soleymani, S. Asghari-Esfeden, Y. Fu, and M. Pantic, “Analysis of eeg signals and facial expres-
sions for continuous emotion detection,” IEEE Transactions on Affective Computing, vol. 7, no. 1,
pp. 17–28, 2016.
[62] J. A. Russell, “Core affect and the psychological construction of emotion.” Psychological review,
vol. 110, no. 1, p. 145, 2003.
[63] T. Munzner, Visualization analysis and design. CRC press, 2014.
[64] A. J. Jerri, “The shannon sampling theorem—its various extensions and applications: A tutorial
review,” Proceedings of the IEEE, vol. 65, no. 11, pp. 1565–1596, 1977.
[65] A. Kasamatsu and T. Hirai, “An electroencephalographic study on the zen meditation (zazen),”
Psychiatry and Clinical Neurosciences, vol. 20, no. 4, pp. 315–336, 1966.
[66] A. A. Fingelkurts, A. A. Fingelkurts, and T. Kallio-Tamminen, “Eeg-guided meditation: a personal-
ized approach,” Journal of Physiology-Paris, vol. 109, no. 4, pp. 180–190, 2015.
[67] A. T. Pope, E. H. Bogart, and D. S. Bartolome, “Biocybernetic system evaluates indices of operator
engagement in automated task,” Biological psychology, vol. 40, no. 1, pp. 187–195, 1995.
[68] C. Berka, D. J. Levendowski, M. N. Lumicao, A. Yau, G. Davis, V. T. Zivkovic, R. E. Olmstead, P. D.
Tremoulet, and P. L. Craven, “Eeg correlates of task engagement and mental workload in vigilance,
learning, and memory tasks,” Aviation, space, and environmental medicine, vol. 78, no. 5, pp.
B231–B244, 2007.
[69] T. McMahan, I. Parberry, and T. D. Parsons, “Evaluating player task engagement and arousal using
electroencephalography,” Procedia Manufacturing, vol. 3, pp. 2303–2310, 2015.
[70] P. C. Petrantonakis and L. J. Hadjileontiadis, “An emotion elicitation metric for the valence/arousal
and six basic emotions affective models: A comparative study,” in Information Technology and
Applications in Biomedicine (ITAB), 2010 10th IEEE International Conference on. IEEE, 2010, pp.
1–4.
[71] R. J. Davidson, G. E. Schwartz, C. Saron, J. Bennett, and D. Goleman, “Frontal versus parietal eeg
asymmetry during positive and negative affect,” in Psychophysiology, vol. 16, no. 2. CAMBRIDGE
UNIV PRESS 40 WEST 20TH STREET, NEW YORK, NY 10011-4211, 1979, pp. 202–203.
84
[72] E. Harmon-Jones and J. J. Allen, “Anger and frontal brain activity: Eeg asymmetry consistent with
approach motivation despite negative affective valence.” Journal of personality and social psychol-
ogy, vol. 74, no. 5, p. 1310, 1998.
[73] J. Thie, A. Klistorner, and S. L. Graham, “Biomedical signal acquisition with streaming wireless
communication for recording evoked potentials,” Documenta Ophthalmologica, vol. 125, no. 2, pp.
149–159, 2012.
[74] S. Debener, F. Minow, R. Emkes, K. Gandras, and M. Vos, “How about taking a low-cost, small, and
wireless eeg for a walk?” Psychophysiology, vol. 49, no. 11, pp. 1617–1621, 2012.
[75] D. Lim, P. Condon, and D. DeSteno, “Mindfulness and compassion: an examination of mechanism
and scalability,” PloS one, vol. 10, no. 2, p. e0118221, 2015.
[76] A. Howells, I. Ivtzan, and F. J. Eiroa-Orosa, “Putting the ‘app’in happiness: a randomised controlled
trial of a smartphone-based mindfulness intervention to enhance wellbeing,” Journal of Happiness
Studies, vol. 17, no. 1, pp. 163–185, 2016.
[77] M. M. Bradley and P. J. Lang, “The international affective digitized sounds (; iads-2): Affective
ratings of sounds and instruction manual,” University of Florida, Gainesville, FL, Tech. Rep. B-3,
2007.
[78] R. Neelamani, A. I. Baumstein, D. G. Gillard, M. T. Hadidi, and W. L. Soroka, “Coherent and random
noise attenuation using the curvelet transform,” The Leading Edge, vol. 27, no. 2, pp. 240–248,
2008.
[79] P. Welch, “The use of fast fourier transform for the estimation of power spectra: a method based on
time averaging over short, modified periodograms,” IEEE Transactions on audio and electroacous-
tics, vol. 15, no. 2, pp. 70–73, 1967.
[80] I. H. Gotlib, “Eeg alpha asymmetry, depression, and cognitive functioning,” Cognition & Emotion,
vol. 12, no. 3, pp. 449–478, 1998.
[81] Y. Rogers, H. Sharp, and J. Preece, Interaction design: beyond human-computer interaction. John
Wiley & Sons, 2011.
[82] J. Brooke et al., “Sus-a quick and dirty usability scale,” Usability evaluation in industry, vol. 189, no.
194, pp. 4–7, 1996.
[83] A. Bangor, P. T. Kortum, and J. T. Miller, “An empirical evaluation of the system usability scale,” Intl.
Journal of Human–Computer Interaction, vol. 24, no. 6, pp. 574–594, 2008.
85
86
A Code of Project

estimateHR()

function estimateHR() {
    var aux_queue = [];
    // Update the timestamps of previously detected peaks: they are now 1 s older.
    queue = queue.map(function(x) { return x + 1000; });
    // The ~ bitwise NOT operator yields 0 (falsy) for -1 and a non-zero (truthy)
    // value otherwise: a perfect match for indexOf, which returns an index
    // 0 ... n if found and -1 if not.
    var pos = in2.indexOf(1023); // look for peak occurrences
    while (~pos) {
        aux_queue.push(pos); // add the peak occurrence to the auxiliary queue
        pos = in2.indexOf(1023, pos + 1); // keep searching from the old position
        console.log("peak found");
    }
    while (aux_queue.length > 0) { // insert peak occurrences into queue in order
        queue.push(aux_queue.pop());
    }
    while (queue[0] > 10000) { // remove peaks more than 10 seconds old
        queue.shift();
    }
    var diff = 0;
    for (var i = 0; i < queue.length - 1; i++) { // add the differences between peaks
        diff += queue[i] - queue[i + 1];
    }
    // Average the inter-peak distances and convert this ms value to a BPM heart rate.
    curr_hr = Math.round(60000 / (diff * 1.0 / (queue.length - 1)));
    if (hr_max < curr_hr) { hr_max = curr_hr; }
    if (hr_min > curr_hr) { hr_min = curr_hr; }
    data_HR.push(curr_hr);
    return curr_hr;
}
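The averaging step at the end of estimateHR() can be illustrated in isolation. The sketch below is a hypothetical helper written for this appendix, not part of the project code: it assumes, as the function above does, that the queue holds peak timestamps in milliseconds with the most recent first, so that successive differences are positive inter-beat intervals.

```javascript
// Hypothetical illustration of the ms-to-BPM conversion used in estimateHR().
// peaks holds peak timestamps in milliseconds, newest first, so each
// difference peaks[i] - peaks[i + 1] is a positive inter-beat interval.
function bpmFromPeaks(peaks) {
    if (peaks.length < 2) { return 0; } // not enough peaks to estimate HR
    var diff = 0;
    for (var i = 0; i < peaks.length - 1; i++) {
        diff += peaks[i] - peaks[i + 1]; // sum of inter-beat intervals (ms)
    }
    var meanInterval = diff / (peaks.length - 1); // average ms between beats
    return Math.round(60000 / meanInterval);      // ms per beat -> beats per minute
}

// Peaks spaced 1000 ms apart correspond to 60 BPM:
console.log(bpmFromPeaks([3000, 2000, 1000, 0])); // 60
```

With peaks 500 ms apart the same helper yields 120 BPM, matching the 60000/interval conversion in the listing above.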
B NeuroTech Artifacts

In this appendix, NeuroTech-related work and the feedback we collected are presented.
B.1 NeuroTech Business Model Canvas

The Business Model Canvas (BMC) details the new business model under development, as envisioned during KickUP Sports, in which we participated. Figure B.1 depicts how the proto-company was envisioned. It is credited to IBEB associate investigator Susana Novais Santos 1, who was responsible for the project's application and its subsequent developments.

1http://ibeb.ciencias.ulisboa.pt/pt/susana-novais-santos/

B.2 Gaming Tournament Conversation Feedback

Flávio, 21 years old Plays Hearthstone. The "bug" that keeps him playing is the social side and the adrenaline. He takes part in national tournaments, has played competitively for 6 months, and started playing Hearthstone a year and a half ago. It is a strategy game with virtual cards that demands a lot of concentration and clear thinking. Pro players receive support to attend international tournaments. He was playing in his 3rd offline tournament and has taken part in over 50 online ones. International tournaments usually have an associated fee. Blizzard (the game's creator) manages the tournaments, but partner sites may also organize them. Each championship lasts 3 months, following the seasons of the year. Players earn points on a Europe-wide ladder (others exist for the USA, Asia, and so on). When he started, Flávio invested €50 in the game to buy cards and build a stronger deck. He plays 2-3 hours a day while studying, and usually goes to bed around 2 a.m. and wakes up at 10 when he has no classes. He believes sleeping well is important. How he prepares: he checks Twitter for what famous players are doing and saying about the game, gets tips from a more experienced friend, and studies the most-played decks. During the game: when he loses theoretically easy matches he feels some anger, but "tries not to get worked up, not to think about it too much". After each defeat he reviews where he went wrong and what he could have done better. Matches last 10 to 12 minutes on average. He admits that playing back-to-back games is tiring, but he can tell when to stop (he starts making in-game mistakes, i.e. misplays). To relax he watches films or series, or plays FIFA. Tournaments are where he feels the most pressure. How he thinks he could improve his performance: having more time to watch other players (streams). He might try meditation, but "doesn't really believe in it". He feels he does not need feedback on his fatigue state because he already notices when he should stop playing (after 2-3 games with mistakes). He says the wearable would not bother him.
Nuno, 35 years old Plays PES and some First Person Shooter (FPS) games. He joined GrowUP and started by playing Unreal Tournament. He entered tournaments for the fun and to meet new people. For him, the most important factors in a good performance are the atmosphere and the team's support.

At home he feels less pressure; in tournaments there is more anxiety, especially in the most important matches, but he says anxiety is managed with experience. Making wrong plays causes some anxiety, but to get past it he tells himself that "the next one will go better".

Meirinhas, 23 years old Used the brainBIT while playing Hearthstone (he lost 3-0). He says wearing the wearable did not influence his game, but he felt more observed.
Figure B.1: NeuroTech BMC