
Creating a Computational Model of Place

Cells using a Continuous Attractor

Neural Network

Jacob Elliot Senior: 12001071

Physics and Smart Systems BSc

CSC-30014

25th April 2016.

School of Computing and Mathematics

Keele University

Keele

Staffordshire

ST5 5BG

Word Count: Approx 8,700


“We live in a society exquisitely dependent on science and technology, in

which hardly anyone knows anything about science and technology...”

Carl Edward Sagan


Abstract

Presented in this paper is a computational model and simulation of Place Cells,

a type of cell found in mammals that aid the animal with navigation. A

Continuous Attractor Neural Network was implemented to represent the Place

Cells and a Virtual Robot was created along with 4 different Environments.

Numerous studies have been performed by biologists, looking at the activity of

these cells in Rats (O’Keefe and Conway 1978, Knierim and Rao 2003). The

results of the studies were produced by measuring the responses of the cells when

different conditions were applied to the environment. These conditions were

applied to the simulation, and the Activation Pattern of the output of the

CANN was recorded.

The different conditions applied in the simulation produced changes in the

Activation Patterns of the CANN. These were recorded and then compared to

the responses of Place Cells, recorded in the studies on rats mentioned above.

On its own, the model discussed and implemented in this paper does not provide a complete method of Robot Navigation; however, with further development, combining it with other cells and behaviours that also aid the animal during navigation, such as Head Direction Cells, Grid Cells and Path Integration, we begin to see a potentially viable alternative method of Robot Navigation.

Some simplifications and omissions were made in the model mostly due to time

constraints, and these issues would clearly need to be addressed when

considering further development of this model.


Contents

Abstract
Contents
Acknowledgement
1 Introduction
1.1 Place Cells
1.1.1 Background
1.1.2 Computational Comparisons
1.2 The Simulation
1.2.1 Development Considerations
1.2.2 Aims and Objectives
2 Methodology
2.1 The Continuous Attractor Neural Network
2.1.1 Description
2.1.2 Equations implemented
2.1.3 Place Cell Activation Equation
2.2 Robot, Environment and UI Implementation
2.2.1 Virtual Robot and Environments
2.2.2 Geometric Calculations
2.3 Combining Robot and the CANN
2.3.1 Combining
2.3.2 Testing Methods
3 Results
3.1 CANN Outputs
3.2 Findings
3.2.1 Condition 1
3.2.2 Condition 2
3.2.3 Condition 3
3.2.4 Condition 4
4 Discussion
5 Conclusion
5.1 Further Development
5.1.1 Modifications to the Model
5.1.2 Performance Improvements
5.1.3 Further Tests and Implementations
5.2 Shortcomings
5.3 Contribution to wider field
Appendix
A.1 Additional Documents


Acknowledgement

Before we begin, I would like to express my greatest thanks to Dr. Theocharis

Kyriacou for his guidance, encouragement and useful critiques of this work. Thanks

are also due to James Borg for running the module and for the freedom of choice

for final year projects.

Figures were taken directly from the simulation or created in Inkscape.


Chapter 1

Introduction

The Hippocampus has become one of the most studied areas of the brain.

This is due to the discovery of its importance in memory, particularly long term

memory, as well as spatial memory and navigation (O’Keefe and Conway, 1978).

This has led to research into the underlying mechanisms of this area of the brain,

as it could not only help to provide a better model of brain theory, but also has

relevance in the Computational Intelligence field. The activity and interactions of

different cells in the Hippocampus could lead to more sophisticated methods of implementing robot navigation that are computationally cheaper and more flexible

than current popular methods (Kyriacou, 2011).

1.1 Place Cells

1.1.1 Background

Place Cells are found in the Hippocampus region of the brain in most mam-

mals (O’Keefe and Conway, 1978; O’Keefe, 1984; Samsonovich and McNaughton

1997). They are thought to assist in building and providing a cognitive or mental

map of an animal’s environment, to aid the animal in navigation (McNaughton et

al. 1998). The Place Cells that fire, and how strongly they fire given certain cues

from a pattern of activity known as the Place Field.


Studies on Place Cells have generally been performed using electrodes implanted into the brains of rats, which were given a set of environments to navigate by being allowed to explore differently shaped mazes while the activity of the cells was monitored (O’Keefe and Conway, 1978; Muller et al. 1994; O’Keefe and Burgess, 1996).

Studies have also been performed looking at the effects of changes to landmarks in an animal’s environment (Knierim, 2002; Scalpen et al. 2014) and at landmark- and vision-driven models (Cruse, 2003; Lew, 2011; Deshmukh and Knierim,

2013).

Research performed over the last 40 years has suggested a mixture of de-

pendencies for the Place Fields, with some authors suggesting that they are highly dependent on the landmarks visible to the animal (O’Keefe and Conway, 1978;

Jeffery, 2007). Minor movement in the locations of the landmarks will cause the

place field to also vary slightly; however, a larger change or removal of a landmark

would be perceived as a different environment entirely, producing a different Place

Field (Scalpen et al. 2014).

It has been proposed and observed that each time an animal returns to the same location within an environment, the same Place Cells will fire to produce the same Place Field, thus providing a simple form of location-based memory.

There have been attempts at creating a computational model of Place Cells,

using visual landmarks as cues for Place Cells to fire (McNaughton et al. 1991; Redish et al. 1996; Stringer and Rolls 2005), which generally make use of a type

of Attractor Neural Network as the representation of the network of cells.

1.1.2 Computational Comparisons

Generally, the most popular current methods of implementing Robot Naviga-

tion involve either providing the robot with a pre-defined map of its environment,

or a programming technique to perform an operation known as Simultaneous Lo-

calisation and Mapping (SLAM) (Kyriacou 2011). The main disadvantages with


these methods are that, with a pre-defined map, the robot’s environment will always be limited by the size and fidelity of the map given to it. SLAM

was created to provide a solution for the problem of building a spatial map of an

unknown environment while maintaining the localization of the robot within the

environment (Thrun and Leonard, 2008).

The most prominent issue is that it often requires multiple, highly accurate

sensors, which can be costly in terms of both money and computational power

requirements (Newman et al. 2001). This is because, for every update of the robot’s localization, each sensor needs to take new readings and compare them

to the previous readings, and perform calculations that determine information such

as the distance travelled, angle rotated, speed etc. Having multiple sensors helps

reduce the total error within the system. Keeping this error as small as possible is essential, as a small error in every update would translate into a large total error over long distances, due to the large number of updates. Multiple sensors also provide the robot with backup systems, so that if a sensor breaks, the robot can still continue, albeit slightly less accurately. The main drawback is that the more sensors are introduced, the more computational power is required.

Place Cells are an interesting topic of research from a computational perspec-

tive. Creating computational models of different areas of the brain has helped attain a better understanding of the underlying mechanisms of neurons, which in

turn led to more sophisticated models of brain theory and more advanced imple-

mentations and uses for robotics and artificial intelligence systems. As Place Cells

are thought to build a cognitive map for animals, they could potentially provide

a computationally cheaper environmental mapping method than SLAM, but also

a more flexible option than supplying the robot with a predefined map.

All animals have evolved their methods of navigation to best suit the environment they need to navigate; even the smallest animals have efficient and effective

methods of navigation, so perhaps to further develop robot navigation, inspiration

can be taken from the basic underlying mechanisms of animal navigation rather


than just trying to replicate the overall behaviours (Kyriacou 2011).

Of course, another key aspect of research is whether it has any real-world applica-

tions. When combined with computational models of other cells in the hippocam-

pus, such as Head Direction Cells and behaviours that aid navigation like Path

Integration, a different method to assist in, or possibly a full implementation of

robot navigation could be created.

1.2 The Simulation

1.2.1 Development Considerations

Java was chosen as the programming language primarily due to time con-

straints as it was the most familiar language, but it also has a number of ad-

vantages, such as platform independence and an efficient garbage collection system, among others.

The Integrated Development Environment (IDE) used was NetBeans. This

is a complete environment for writing code, with features such as code comple-

tion, reference look up, debugging processes and an integrated GUI builder. The

other IDE available was BlueJ, which was not suitable for a large project as it is

considered to be an IDE to assist in learning how to code.

Although the software developed is for a specific purpose, and it was not too

complex, it was still necessary for Design Heuristics to be taken into consideration.

Jakob Nielsen’s heuristics are among the most popular in use today, so they were the set chosen to be considered for the simulation.

As this is fairly simple software, only a few of the heuristics were relevant.

Error prevention and User recovery from errors are two important heuristics for

any piece of software, as well as providing good and regular feedback with reason-

able timing as this increases the usability of the software. Some of the heuristics

such as ’Providing Shortcuts’ for experienced users and Recognition rather than

Recall are not relevant due to the lack of overall interactions between the User


and the Software.

1.2.2 Aims and Objectives

With all this taken into consideration, a set of aims and objectives can be

made for the project:

1. Create a model of Place Cells using a Continuous Attractor Neural Network

(CANN).

2. Implement the model into a simulation of a Virtual Robot.

3. Apply different conditions to the simulation and record the Activation Pat-

tern produced by the CANN.

4. Compare these to responses from studies performed by Biologists.


Chapter 2

Methodology

Discussed in this section is how each component of the simulation was created

and implemented. For each part of the model and simulation, the most basic

element of the component was constructed first. This was then used to build the

next part of the component. For example, a CANN is a network of fully connected

nodes, so it was constructed by building a single node first, then creating multiple

of them. The connections between the nodes were initialised, the network was

trained, and finally the network was made to respond to an applied stimulus.

The CANN and Virtual Robot simulation were created separately and then

combined when a User Interface (UI) was created. Four different environments and

positions for the Virtual Robot were created, and three different test conditions

were applied to the simulation to see if the CANN behaved similarly to the Place

Cells of rats when exposed to the same conditions. The different tests applied to

the simulation were inspired by studies performed on rodents by Muller (1994),

Lew (2011) and Scalpen et al. (2014).


2.1 The Continuous Attractor Neural Network

2.1.1 Description

The type of neural network currently believed to best represent this type of

biological neural network is the CANN (Stringer et al. 2002; Kyriacou, 2011).

To better understand this system, the following analogy can be made. Imagine a

grid of 100 light bulbs, to which an amount of electricity is applied, enough to light up a single bulb maximally; a bulb is randomly chosen to light up each time the current is applied. To spread the electricity throughout the grid, each bulb is then connected to every other bulb in the circuit. This time, when

the electricity is applied to a single bulb, multiple bulbs light up. However, as

there is only enough electricity to light up one bulb, only the light bulbs closest

to the one supplied with electricity manage to get enough power to light up.

Figure 2.1: A diagram of a 5x5 CANN, showing connections of the central node.

This is similar to an Artificial Neural Network. In this network, instead of

a grid of light bulbs, there is a grid of nodes, which are computational versions of

biological neurons. The nodes in this project have a variety of properties such as

their location in the grid, firing rate and activation. Each node is connected to every


other node, including itself, like the light bulbs in the analogy above. The strength

of the excitatory and inhibitory connections, and the initialisation of these prop-

erties, are discussed in Section 2.1.2. As in biology, the nodes have weighted

connections between them which are trained and adjusted during the network’s

learning phase. Returning to the light bulb analogy, the learning phase is akin

to changing the lengths of the connections between the light bulbs depending on

certain information or cues. This would result in a different dispersal of electricity

throughout the grid, which would make a different pattern of illuminated light

bulbs when the current is applied.
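
To make this structure concrete, the sketch below (in Java, the language used for the simulation) shows one way such a network could be represented: a grid of nodes, each holding a grid location, an activation and a firing rate, together with a weight matrix giving the fully connected links, including the self-connection. The class and field names are illustrative only and are not taken from the project's source code.

    // Minimal sketch of a fully connected grid of nodes (illustrative names only).
    class Node {
        final int gridX, gridY;   // location of the node in the grid
        double activation;        // h_i^P in the equations of Section 2.1.2
        double firingRate;        // r_i^P

        Node(int gridX, int gridY) {
            this.gridX = gridX;
            this.gridY = gridY;
        }
    }

    class CannSketch {
        final Node[] nodes;
        final double[][] weights; // weights[i][j]: connection from node j to node i

        CannSketch(int size) {    // e.g. size = 10 gives a 10x10 grid of 100 nodes
            nodes = new Node[size * size];
            weights = new double[nodes.length][nodes.length];
            for (int y = 0; y < size; y++)
                for (int x = 0; x < size; x++)
                    nodes[y * size + x] = new Node(x, y);
            // Every node is connected to every other node, including itself; the
            // weights are initialised with a Gaussian profile and then adjusted
            // during the learning phase described in Section 2.1.2.
        }
    }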

Attractor Neural Networks are recurrently connected networks whose dynamics allow them to settle into a subset of states. Different types of attractors have different useful applications: Line Attractors are well suited to motor control, while Point Attractors are useful for long-term memory and content-addressable memory.

Continuous Attractors are one of the most apt artificial networks for Place

Cells. To explain why, another example is needed. Imagine a person sitting in his kitchen; a certain Place Field would be produced by his Place Cells for that location. If his visual sense was removed (say, by a blindfold) and he had no other source of

information, the same Place Field would be produced by his Place Cells, as he

would have received no other information to suggest otherwise. If he was then

moved from his kitchen into his lounge (still with no visual sense and with no other

source of information), the Place Field would stay the same until the blindfold was

removed, and the Place Cells received the information they needed to update. This

is a property that Continuous Attractor Neural Networks possess, as they have

one or more quasi-continuous sets of attractors, allowing the activation pattern to

remain even if inputs are removed.

To find the Firing Rate, change in Synaptic Weights and Activation of the

Nodes, the equations below had to be implemented and used. These three variables

were combined to produce the Activation Patterns of the CANN, which represent the Place Fields produced in the simulation.


2.1.2 Equations implemented

The following equations were sourced from (Stringer et al. 2002, 2004). The

firing rate is initially set using Equation 2.1, where $\alpha$ is the sigmoid threshold, $\beta$ is the slope, and $h_i^P(t)$ is the activation of Place Cell $i$.

$$ r_i^P(t) = \frac{1}{1 + \exp\left[-2\beta\left(h_i^P(t) - \alpha\right)\right]} \tag{2.1} $$

The synaptic weights between the nodes were initially set using a Gaussian function, which produced the excitatory and inhibitory connections described above, with a location of peak activation. The firing rates and synaptic weights are then modified

during a learning phase.

There is a common phrase, “Practice makes Perfect”; this, of course, refers to one of the most effective ways to improve a skill: through practice.

When someone attempts to perform a new task, a certain combination of neurons

in their brain activate. This activity pattern corresponds to the required bodily

operations to do the task. The more frequently the person performs this task,

the more effective those neurons will become at firing. The higher the efficiency

of firing, the better the person becomes at that task. This ability to grow and

strengthen synaptic connections is the fundamental basis of how humans learn to

perform new tasks. Drawing back to the light bulb analogy once more, it would

be akin to replacing the cables between the active bulbs to more efficient or more

conductive materials. This would result in more electricity reaching the bulbs, so

more light is produced.

The computational version of this process is known as the learning phase.

In each epoch of a learning cycle, the firing rates and synaptic weights were updated. This was then repeated for each node and every position of the Virtual Robot in an environment. The learning phase comprised learning cycles in different

environments, as explained below.

Each time the ’Learn Environment’ button in the UI was pressed, the neural


network was trained for 19 epochs. Tests were performed using 5, 10, 20, 50, 75,

and 100 epochs of training. Any training cycles over 20 epochs quickly led to an

over-trained network; below 10 epochs per training cycle, the speed of the learning

was much slower than cycles in the 10-20 epoch range. After further testing, the

optimum training was found to be the following: the Virtual Robot visited each environment sequentially, with one repeat. Each time the Virtual Robot visited

an environment it underwent a 19 epoch training cycle, and was then moved to

the next environment.

For each epoch, the firing rates and synaptic weights were updated according

to Equations 2.2 and 2.4 respectively.

$$ r_i^P = \exp\left(-\frac{(s_i^P)^2}{2(\sigma^P)^2}\right) \tag{2.2} $$

Equation 2.2 is the learning firing rate for Place Cell $i$, where $r_i^P$ is the firing rate, $s_i^P$ corresponds to the distance between the Virtual Robot and the location at which Place Cell $i$ fires maximally, and $\sigma^P$ is the standard deviation.

$s_i^P$ was found using the following equation:

$$ s_i^P = \sqrt{(x_i - x)^2 + (y_i - y)^2} \tag{2.3} $$

where $x_i, y_i$ is the location of the Virtual Robot that causes Place Cell $i$ to fire maximally, and $x, y$ is the current location of the Virtual Robot.

Using Equation 2.4, the synaptic weight between Place Cells $i$ and $j$ is updated, where $\delta w^{RC}_{ij}$ is the change in synaptic weight, $k$ is the learning rate constant, and $r_i^P$ and $r_j^P$ are the firing rates of cells $i$ and $j$ respectively.

$$ \delta w^{RC}_{ij} = k \, r_i^P \, r_j^P \tag{2.4} $$
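
As a concrete illustration of Equations 2.2-2.4, the Java sketch below shows one learning epoch for a single robot position: each node's learning firing rate is computed from its distance to the robot (Equations 2.3 and 2.2), and every synaptic weight is then increased by the Hebbian term of Equation 2.4. The method and parameter names are placeholders for illustration and are not taken from the project's code.

    // One learning epoch for a single robot position (x, y), following Eqs. 2.2-2.4.
    // prefX[i], prefY[i] hold the location at which node i fires maximally.
    class LearningSketch {
        void learningEpoch(double x, double y, double[] prefX, double[] prefY,
                           double[] firingRate, double[][] weights,
                           double sigmaP, double k) {
            int n = firingRate.length;
            for (int i = 0; i < n; i++) {
                double dx = prefX[i] - x;
                double dy = prefY[i] - y;
                double s = Math.sqrt(dx * dx + dy * dy);                    // Eq. 2.3
                firingRate[i] = Math.exp(-(s * s) / (2 * sigmaP * sigmaP)); // Eq. 2.2
            }
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    weights[i][j] += k * firingRate[i] * firingRate[j];     // Eq. 2.4
        }
    }

In the simulation, this update was applied for 19 epochs each time an environment was learned, with the environments visited sequentially and then repeated once, as described above.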


2.1.3 Place Cell Activation Equation

When the learning phase was complete, the output of the CANN changed

from a Gaussian peak of node activation into patterns where all the nodes of

the CANN had varying strengths of activation. The equation used to derive the

activation for each node of a CANN in this field is documented in (Stringer et al.

2002, 2004) and is the following differential equation:

$$ \tau \frac{dh_i^P(t)}{dt} = -h_i^P(t) + \frac{\phi_0}{C^P} \sum_j \left(w^{RC}_{ij} - w^{INH}\right) r_j^P(t) + I_i^V \tag{2.5} $$

where $\tau$ is the time constant, and $\phi_0$ and $w^{INH}$ are constants. $C^P$ is the number of synaptic connections for each Place Cell, and $I_i^V$ represents the visual input to Place Cell $i$. The equation implemented from this was found by setting $t = 0$ and rearranging:

$$ \tau \, h_i^P(t) = \frac{\phi_0}{C^P} \sum_j \left(w^{RC}_{ij} - w^{INH}\right) r_j^P(t) + I_i^V \tag{2.6} $$

As a differential equation simply describes how a certain quantity changes over

time, a timer was used to update the activation of each Place Cell. When each

timestep occurred, the activation of each node was updated using Equation 2.6.

Information that was previously used to calculate where each place cell fired max-

imally, along with the Virtual Robot’s x and y coordinates, was used to determine

how strongly each Place Cell should fire.

Finally, to produce the Activation Pattern of the CANN, the network was

represented as a 2D grid of squares. Each square then changes colour, based on

the activation of the node for that particular position, in that environment.

The activations of the nodes are highest at red and decrease as the colour moves through orange, yellow, green and blue, as displayed in Figure 2.2.


Figure 2.2: Example of CANN Activation pattern
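
A sketch of how the per-timestep update of Equation 2.6 and the colour mapping of Figure 2.2 might look in Java is given below: the recurrent and visual terms are summed for each node, and the resulting activation is bucketed into the red-to-blue scale used to draw the Activation Pattern. The identifiers and the bucket thresholds are illustrative assumptions, not values taken from the project's code.

    import java.awt.Color;

    // One timestep of Equation 2.6 for every node, plus the colour mapping used to
    // draw the Activation Pattern (thresholds are placeholders).
    class ActivationSketch {
        void updateActivations(double[] activation, double[] firingRate, double[][] weights,
                               double[] visualInput, double phi0, double wInh, double tau) {
            int n = activation.length;
            int cP = n;   // number of synaptic connections for each Place Cell
            for (int i = 0; i < n; i++) {
                double recurrent = 0.0;
                for (int j = 0; j < n; j++)
                    recurrent += (weights[i][j] - wInh) * firingRate[j];
                activation[i] = (phi0 / cP * recurrent + visualInput[i]) / tau;  // Eq. 2.6
            }
        }

        Color activationColour(double a) {
            if (a > 0.8) return Color.RED;     // strongest activation
            if (a > 0.6) return Color.ORANGE;
            if (a > 0.4) return Color.YELLOW;
            if (a > 0.2) return Color.GREEN;
            return Color.BLUE;                 // weakest activation
        }
    }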

2.2 Robot, Environment and UI Implementation

2.2.1 Virtual Robot and Environments

With the CANN implemented and producing the expected responses, at-

tention turned to the creation of its host, a Virtual Robot and environments for

it to learn, navigate and eventually remember. To begin, a basic 400x400 pixel

environment was created, and within it the Virtual Robot and the landmarks used to fill the environments were created.

Four different environments were created for the Virtual Robot to learn and

move in. Three environments had the landmarks set up with no clear symmetry,

and one with complete symmetry in which each landmark resides in a corner.

Time limited the number of testing environments that could be made; creating more would be a relatively easy way to further the simulation. The final environment was chosen as a special case, to test how the CANN dealt with uncertainty. It was expected to be the environment in which the Virtual Robot’s CANN struggled the most to differentiate between positions. This is because the information from the landmarks is much more similar than in the asymmetric environments, meaning many of the positions looked the same to the Virtual Robot.


Figure 2.3: Position 1, Environment 1.
Figure 2.4: Position 2, Environment 2.
Figure 2.5: Position 3, Environment 3.
Figure 2.6: Position 4, Environment 4.

Two methods of controlling the Virtual Robot were implemented, using the

mouse pointer or the W, A, S and D keys in their traditional mapping. A basic

form of collision detection was included to stop the Virtual Robot from travelling

into landmarks and through the walls. This was achieved by simply checking if

the command the user applied would move the Virtual Robot on top of a landmark or wall; if so, the robot would not move.
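
A minimal sketch of such a check in Java is shown below, assuming axis-aligned rectangles for the robot, the landmarks and the 400x400 environment; the Rectangle-based representation and the method name are assumptions made for illustration rather than the project's actual classes.

    import java.awt.Rectangle;
    import java.util.List;

    class CollisionSketch {
        // Returns true if moving the robot to (newX, newY) would place it on top of a
        // landmark or outside the 400x400 environment; if so, the move is rejected.
        boolean moveBlocked(int newX, int newY, int robotSize, List<Rectangle> landmarks) {
            Rectangle proposed = new Rectangle(newX, newY, robotSize, robotSize);
            Rectangle environment = new Rectangle(0, 0, 400, 400);
            if (!environment.contains(proposed)) return true;      // would cross a wall
            for (Rectangle landmark : landmarks)
                if (proposed.intersects(landmark)) return true;    // would hit a landmark
            return false;
        }
    }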


2.2.2 Geometric Calculations

To calculate the distance to each landmark, Pythagoras’ theorem (Equation

2.7) was implemented by using the location of the Virtual Robot and the centre

of the Landmark, treating it as a point, rather than a whole shape.

$$ a^2 = b^2 + c^2 \tag{2.7} $$

The differences in the x and y coordinates between the Virtual Robot and each

landmark were calculated, and substituted into Equation 2.7. This produced the

distance along a straight path between the Virtual Robot and the landmark.

$$ \theta = \tan^{-1}\left(\frac{\text{Opposite}}{\text{Adjacent}}\right) \tag{2.8} $$

These differences are then used in Equation 2.8, to give us the angle between the

Virtual Robot and the landmark. These calculations are displayed in Figure 2.7

below.

Figure 2.7: Calculating the angle and distance from Virtual Robot to landmark
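
In Java, Equations 2.7 and 2.8 reduce to the standard library calls sketched below; Math.atan2 is used in place of a bare inverse tangent so that the full range of angles around the robot is handled. The method names are illustrative only.

    class GeometrySketch {
        // Straight-line distance from the robot to a landmark centre (Eq. 2.7).
        double distanceTo(double robotX, double robotY, double landmarkX, double landmarkY) {
            return Math.hypot(landmarkX - robotX, landmarkY - robotY); // sqrt(dx^2 + dy^2)
        }

        // Angle from the robot to a landmark centre (Eq. 2.8), in radians.
        double angleTo(double robotX, double robotY, double landmarkX, double landmarkY) {
            // atan2 resolves the quadrant, unlike atan(opposite / adjacent) alone.
            return Math.atan2(landmarkY - robotY, landmarkX - robotX);
        }
    }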


2.3 Combining Robot and the CANN

2.3.1 Combining

Using the above method, the distance and angle to each of the landmarks can

be found. This is then used to determine the location the Virtual Robot needs

to be at for each Place Cell to fire maximally. This was quite difficult to find

in the literature, as papers (Stringer et al. 2002, 2004) did not go into detail about how this is determined, despite it being one of the more important elements of the

simulation. Eventually, a viable solution was found, and an abstraction of Bota

et al. (2001) was implemented. Using the average distance and angle to each landmark, normalised between 0 and 400, the X and Y coordinates of the Virtual Robot that cause each Place Cell to fire maximally were found.
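
One possible reading of this step is sketched below: the distances and angles to all landmarks are averaged, and each average is rescaled into the 0-400 coordinate range of the environment to give the preferred firing location of a Place Cell. This is only an illustration of the normalisation described in the text, under assumed maximum values for distance and angle; it is not the project's actual mapping.

    class PreferredLocationSketch {
        // Derive a preferred firing location from landmark distances and angles,
        // normalising each average into the 0-400 range of the environment.
        double[] preferredFiringLocation(double[] distances, double[] anglesRadians) {
            double meanDistance = 0.0, meanAngle = 0.0;
            for (double d : distances) meanDistance += d;
            for (double a : anglesRadians) meanAngle += a;
            meanDistance /= distances.length;
            meanAngle /= anglesRadians.length;

            double maxDistance = Math.hypot(400, 400);                // diagonal of the arena
            double x = meanDistance / maxDistance * 400.0;            // normalised to 0-400
            double y = (meanAngle + Math.PI) / (2 * Math.PI) * 400.0; // atan2 range to 0-400
            return new double[] { x, y };
        }
    }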

This was the final major obstacle of the simulation, as it was the Virtual

Robot’s sense of vision. Although humans generally do not need to think about

the precise distance and angles to certain objects, the information is almost always

being processed by the brain, and is one of the primary functions of our visual

sense. The simplest way to emulate this for simulating purposes was to use the

coordinates of the centre of the landmarks and the coordinates of the robot. As

discussed in the Introduction, vision systems rely on maintaining the smallest possible error to operate effectively, and this was the most accurate and simplest

implementation.

As the development of the CANN, Virtual Robot simulation and UI occurred

separately, all these pieces needed to be unified into the UI, which was achieved

by creating an instance of the Environment class in the CANN class. This also

allowed the transfer from using the mouse pointer to apply stimuli to the CANN,

to using the coordinates of the Virtual Robot.


Figure 2.8: The completed simulation program

2.3.2 Testing Methods

Tests need to be applied to simulations to prove that they correctly emulate

the aspect of nature they are re-creating. In this project, the Neural Network

needs to display the expected behaviour of Place Cells. There have been numerous

experiments on this topic, providing different conditions that can be applied to the

simulation and the responses expected to be produced by the CANN. The simplest

test used was to see if the Virtual Robot ’remembered’ positions in environments.

The conditions applied were inspired by the studies of Muller et al. (1994), Lew (2011), Scalpen et al. (2014) and Muller and Kubie (1987).

As previously described, the learning process in the brain occurs when certain

patterns of activity increase in strength as a task is repeated. The equivalent

version of memory for this project was displaying the same patterns of activation

when the Virtual Robot returned to the same location, in the same environment,

after training had been completed.

Another test to see if the Virtual Robot ’remembered’ environments was to


train the Virtual Robot in each environment individually. This would produce

the activation pattern for each separate environment. If the prominent features

of each pattern appeared in the activation pattern of the CANN trained in all

environments, then it could be inferred that the Virtual Robot would be able to

identify environments by comparing the activation patterns.

The easiest way to check this is to be able to place the Virtual Robot in

exactly the same place, which was accomplished by including four position buttons

in the UI. Four buttons for changing the Environment were included along with

four buttons for applying the different conditions, to simulate different situations

that have been tested in literature. The responses to different conditions were

the main metric for the success of the model. If the CANN replicated the same

behaviours as the Place Cells in Rats when exposed to the same changes, it would

suggest the model is an accurate recreation of Place Cells. The four conditions used were as follows:

1. Lights On

2. Lights Off (Loss of Vision, No other sense of input)

3. Landmarks’ locations changed by a small amount

4. Environment rotates 90 degrees clockwise

The first condition was to have the environments the same as during the

learning phase, and visual input enabled. This was needed for the initial test,

seeing if the Virtual Robot remembered different environments. This also allowed

the simulation to return to its standard state after different conditions had been

applied.

The second condition is effectively removing the light source to simulate loss

of visual input. The Virtual Robot depended on this input to determine which

Place Cells fire and how strongly. The response to this change is expected to be

no change of activation in the CANN; without the visual information and with no

other inputs, the Virtual Robot would think it is still in the same position that it


was in before the lights were turned off. This hypothesis can be extended further

to suggest that if the Virtual Robot’s environment changed while its vision was

disabled, it should maintain the same activation pattern as the position in the

environment that the Virtual Robot last “saw”. When vision is returned to the

Virtual Robot, the CANN would update its activity pattern with respect to its

new position and/or environment.

The third and fourth conditions should each produce a response similar in kind to the change made to the environment, rather than responses similar to each other. Condition three

was a small change in the position of the landmarks (McNaughton et al. 1995;

Knierim and Rao, 2003), which should correlate to a similar, small change in the

activation pattern. If we return to the blindfolded person in his kitchen, this time

when his blindfold is removed, and objects in the room have been moved, he would

still think he was in his kitchen, only some of the visual cues would be slightly

different. This would naturally translate into slight variations of the Place Field

produced by the Place Cells, with the predominant features remaining, and this is the expected response of the simulation’s activation pattern.

Figure 2.9: Position 1, Environment 1, Condition 1.
Figure 2.10: Position 1, Environment 1, Condition 3.

Condition four was a 90 degree clockwise rotation of the landmarks, in response to which the activation pattern of the Place Cells should rotate by 90 degrees in the


same direction. Considering the person in his kitchen one final time: if, when his blindfold was removed, his kitchen had somehow been rotated by 90 degrees, he would still recognise it as his kitchen, but rotated. A 90 degree rotation of

the Place field has been observed under these conditions in studies performed

by Knierim (2002) and Knierim and Rao (2003), and is therefore expected to be

emulated in our simulation.

Figure 2.11: Position 1, Environment 1, Condition 1.
Figure 2.12: Position 1, Environment 1, Condition 4.
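
To make conditions 3 and 4 concrete, the sketch below shows how the landmark centres could be transformed: a small fixed offset for condition 3 (the simulation displaced each landmark by 10 pixels, as noted in the Results) and a 90 degree clockwise rotation about the centre of the 400x400 environment for condition 4. The Point-based representation and method names are assumptions for illustration.

    import java.awt.Point;
    import java.util.List;

    class ConditionSketch {
        // Condition 3: displace each landmark by a small, fixed offset.
        void shiftLandmarks(List<Point> landmarks, int offset) {
            for (Point p : landmarks) p.translate(offset, offset);
        }

        // Condition 4: rotate each landmark 90 degrees clockwise about the centre of
        // the 400x400 environment (screen coordinates, with y increasing downwards).
        void rotateLandmarksClockwise(List<Point> landmarks) {
            double cx = 200.0, cy = 200.0;
            for (Point p : landmarks) {
                double dx = p.x - cx;
                double dy = p.y - cy;
                p.setLocation(cx - dy, cy + dx);   // (dx, dy) -> (-dy, dx)
            }
        }
    }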

Other conditions that could be applied to the simulation include rotating the environments in the opposite direction, expecting to see the opposite response to condition 4, and shifting one, two or three landmarks by large distances to see if there is a limit to the amount of change in landmark position before the Virtual Robot would consider it to be a different environment.


Chapter 3

Results

This section will be primarily displaying, and briefly discussing, the different

Activation Patterns produced by the CANN. The expected responses from the CANN under each condition were discussed in the previous section. For example,

if the Virtual Robot’s visual input was disabled, it would be unable to tell if the

environment changed, so the activation pattern should remain unchanged until

the visual inputs are enabled.

3.1 CANN Outputs

The first set of Activation Patterns recorded were the individual patterns

for each position in each environment. This was achieved by running eight 19-epoch training cycles (as discussed in the Methodology) on each environment and

resetting the CANN between each environment (Figures 3.1-3.16).


Figures 3.1-3.16: Activation Patterns for Positions 1-4 in each of Environments 1-4, with the CANN trained on each environment individually.

The Virtual Robot then completed eight training cycles, without resetting

the CANN when changing environments. The expected response was that promi-

nent features of each environment’s activity pattern, as shown above, should ap-


pear in the activation pattern for the CANN when it revisits each position and en-

vironment after learning each one. When comparing Figures 3.17-3.32 to Figures 3.1-3.16, it can be seen that, for the first three positions in the first three environments, there were a number of similarities in the patterns for each position in each

environment, respectively. The biological Place Cell equivalent of this behaviour

would be recognition of an environment from the visual cues; this response implies

that the model has some type of spatial recognition.

The model did struggle to identify the fourth positions, located in the centre of each environment (Figures 3.20, 3.24, 3.28, 3.32). The fourth environment was also poorly represented; however, it was implemented precisely because it was a perfectly symmetric environment (Figure 2.5). This environment was the most

likely to ’confuse’ the CANN as the difference between the distance and angles

to all the landmarks was much smaller than in an asymmetric environment. The

difference between the Activation Patterns produced during the learning phase of

the environment was much smaller too.

Figures 3.17-3.24: Activation Patterns for Positions 1-4 in Environments 1 and 2, with the CANN trained on all environments.


Figures 3.25-3.32: Activation Patterns for Positions 1-4 in Environments 3 and 4, with the CANN trained on all environments.

For the second condition, visual inputs were removed from the simulation.

The expected response was no change in the Activity Pattern of the CANN when the Virtual Robot moved or its environment was changed. As

discussed earlier, this is because the Virtual Robot has no other inputs, so no

way to tell whether it had moved. This behaviour is displayed below; Figures 3.33-3.35 show the process from vision disabled to re-enabled when the Virtual Robot has been moved, and Figures 3.36-3.38 show the same for a change of environment without visual inputs.


Figure 3.33: Virtual Robot has vision disabled.
Figure 3.34: Position changes; CANN does not.
Figure 3.35: Vision enabled; CANN changes.
Figure 3.36: Virtual Robot has vision disabled.
Figure 3.37: Environment changes; CANN does not.
Figure 3.38: Vision enabled; CANN changes.

When condition 3 was selected, the landmarks of Environments 1-3 were

shifted slightly. As described in previous sections, there should have been a slight

change in the activity pattern in the CANN, as the changes in the landmark co-

ordinates are relatively small (each landmark’s x and y location was displaced by


10 pixels). This condition was applied to rats in Muller and Kubie (1987), and

Knierim (2002), and resulted in the expected response, described above. Environ-

ment 4 was not included in this test as the Activity Patterns recorded displayed

barely any noticeable change. The Activation patterns produced by applying the

third condition to the CANN were:

Figures 3.39-3.50: Activation Patterns for Positions 1-4 in Environments 1-3 under Condition 3.

Finally, condition 4 was a 90 degree clockwise rotation of environments 1-3.

Environment 4 was not rotated as it would still be the same environment due to


its symmetry. The activity patterns of the first three environments should respond by displaying the same pattern but rotated by 90 degrees, as described above.

The activation patterns produced were as follows:

Figures 3.51-3.62: Activation Patterns for Positions 1-4 in Environments 1-3 under Condition 4.


3.2 Findings

3.2.1 Condition 1

Condition one was implemented to represent the ’Control’ state for the sim-

ulation. It nullifies any changes made by selecting other conditions, but also allowed the first tests to be conducted. Comparing the first two sets of Activation Patterns when the Virtual Robot was in the same position, in the same

environments, it can be seen that there are clear similarities between each of the

Activation Patterns. This indicates that the Virtual Robot can effectively remem-

ber the environments it has previously visited, which implies the implementation

has worked.

From the literature, the Activation Pattern itself was expected to be a single defined peak, rather than a pattern across the neurons (Samsonovich and McNaughton, 1997; Stringer et al. 2002). This sort of response could be explained either by growing activity, where the inhibitory connections are too weak compared to the excitation, leading to the whole map becoming active, or by parameters of the CANN that need refining. The fourth position, in the centre of the environ-

ments, appeared to be the position that had the most similar Activation Patterns

between environments. It was for this reason the position was chosen, as it was

the most likely position within the environments to cause some confusion, as the

centre is a unique point in any environment.

3.2.2 Condition 2

When Condition 2, the disabling of visual inputs, was applied to the simulation, the expected response from the CANN was displayed. There should have

been no change in the Activation Pattern when the visual information was re-

moved as vision was the only source of information available to the Virtual Robot

about its environment. As there was no information to update the CANN, the

activity pattern must remain the same. This was demonstrated when the Virtual


Robot was moved in both Position and Environment.

3.2.3 Condition 3

The response expected from a small movement in the position of landmarks

was minor changes in the Activation Pattern of the CANN. This is because the

landmarks are generally in a very similar position so the environments appear

very similar to the Virtual Robot. This results in small changes in the Activation

Patterns for each position in each environment. Figures 3.39-3.50 and 3.17-3.28 show the Activation Patterns for each Position and Environment, displaying minor differences between the Patterns.

3.2.4 Condition 4

A 90 degree rotation of the landmarks was expected to produce a 90 degree rotation in the output of the CANN. This is because the environment still looks essentially the same, but rotated. Comparing Figures 3.51-3.62 to Figures 3.17-3.28, it can be seen that this was also the response of the CANN.


Chapter 4

Discussion

The results gathered from comparing the Activation Patterns in various con-

ditions indicate that this visual-landmark driven model of Place Cells, despite

being relatively simple, has worked as expected. This implication can be made

because the model simulated in this project displayed the same behaviours and responses as the Place Cells of rodents exposed to the same changes. The Activation Patterns produced when the Virtual Robot

returned to a location in an environment were similar to the patterns that were

produced when the Virtual Robot visited that location previously; it can therefore

be inferred that the Virtual Robot can remember locations in environments it has

learned when given the same visual cues.

When visual inputs were disabled, the CANN responded correctly by not

changing at all. This is expected because if you were sitting in a room and were then blindfolded and moved, with no other sensory information, you would still believe you were in the same position, as you received no other information from stimuli

to inform you otherwise. So, as you would believe you were in the same location

and position, the Place Cells firing would also remain the same. Only when visual

inputs were regained would your Place Cells have the information required to

update.

The implementation discussed above shows that a CANN can be trained


to operate as a collection of Place Cells, using information from landmarks to

determine which cells fire and the strength of their firing. The fact that the

CANN responses were generally very similar to the responses expected from other

studies (Muller, 1994; Lew, 2011; Scalpen et al. 2014) means that, despite being

a fairly simple implementation of the overall biological system, it provides a good

basis for further research and development in the area.

The model presented here has some similarities to other models of cells found in the Hippocampus. Work replicating Head Direction Cells and Place Cells using CANNs has produced similar results. A more thorough investigation into

the optimal parameters for the CANN would perhaps produce a more defined

Activation Pattern, and could be aided by the use of Evolutionary Algorithm

techniques.


Chapter 5

Conclusion

This paper presents a visually driven model of Place Cells using a Continuous

Attractor Neural Network and a Landmark based Virtual Robot Simulation. The

model worked as predicted, responding to different stimuli and conditions in the

same manner as Place Cells recorded in biological studies. The model on its own,

however, could not be used as a method of robot navigation. The simplifications

made, such as the crude representation of the visual inputs, hold the model back

somewhat. However, it does provide a proof of concept for creating a computa-

tional model of Place Cells, and has potential for a large number of directions for

further work.

5.1 Further Development

5.1.1 Modifications to the Model

The most obvious and unrealistic simplification used in the model was a 360

degree view range as this meant all the distances and angles to each landmark

could be used at the same time. While there are robotic vision systems that do

provide a 360 degree view of their environment (Gaspar et al. 2000), there are no

animals that have this property. Rats, for example, have a fairly wide, 270 degree

viewing angle. Chimps, on the other hand, have a much narrower, 35 degree


viewing angle, which has led to studies into how the properties of Place Cells change with the viewing angle of the animal (Stringer et al. 2001). With regard to

this simulation, a change in the viewing angle would mean that all the information

from all the landmarks could not be used all of the time, as they may not all be

visible at the same time. The Virtual Robot would also need to rely more heavily

on its internal bearing, to help with its position and the direction it is facing in

an environment.

There were also simplifications applied to the neurons of the system. Our

assumptions about the neurons were that: nodes could be updated simultaneously,

the distance of the synaptic connections is negligible and time for propagation of

spikes is instantaneous. This is simply not the case in nature. Biological

neurons are far more dependent on the number and timing of spikes, and can

be thought to act more as a spike generator, firing according to the firing rate

rather than having a continuous output of activation at a constant value. Spikes

from other neurons are not transmitted instantaneously and have to travel a finite

distance. This would need to be considered, if a truer-to-biology model is desired.

Neurons of the same type can display a wide range of behaviours and are very noisy

in terms of how often and strongly they express their characteristic properties,

which was also not taken into consideration in this model.

Another interesting path that could be explored is to investigate how differ-

ent sized landmarks affect the firing of Place Cells; for example, a landmark that

is double the size of another landmark but twice as far from the Virtual Robot could have an equal effect on the location of the firing of the Place

Cells. Scalpen et al. (2014) have suggested that there is a connection, that larger

landmarks give a broader sense of location, but smaller landmarks are needed for

the finer details of an animal’s location in an environment. This is relatable in

some sense as the closer to a destination you get, the more precise information is

needed to get to the actual location, e.g. city, town, street, house number.

As mentioned briefly in the introduction, a more complete simulation and


model of Hippocampus functions would begin by including Head Direction Cells

and Path Integration. Head Direction Cells act as a biological compass for the

animal, to give it a sense of orientation in an environment. Path integration is

also thought to play a key part in the forming of a cognitive map for an animal, as

it uses information such as distance and direction of travel from a start point to

help estimate its current position (Etienne and Jeffery, 2004). Some attempts have

been made to unify these three aspects of the Hippocampus (O’Keefe et al. 1995; Bota et al. 2001), showing some promising results, and this would be an important step towards a more complete model.

A sense of self-locomotion could also be implemented, which would assist

the Virtual Robot when visual inputs are disabled. Rather than the Activation

Pattern remaining the same when the Virtual Robot moves in an environment, the

pattern would change as if the Virtual Robot had visual inputs, but the strength of

the outputs would be reduced. An uncertainty factor would need to be considered

too, as without visual information there will be a loss of precision. With a change

of environments applied to the simulation, the Activation Pattern would have to

remain the same as the pattern for the last seen environment for all positions of

the Virtual Robot.

For simplicity, the calculation of the distance to landmarks does not

take into account whether landmarks are blocked from the view of the Virtual

Robot by other landmarks. Of course, this is not an accurate representation

of how animals use visual information, as if an object is not visible, no visual

information can be processed from the landmark. This aspect of vision becomes

more complex when the size of landmarks is also taken into account, and if the

simulation was converted to 3D, the height of landmarks would also play a large

part in landmark visibility.


5.1.2 Performance Improvements

The above implementations would yield a more complete model that could be used for robot navigation, one that is also truer to the behaviours we find in nature and the literature (Samsonovich and McNaughton, 1997; Stringer et al.

2002). The remainder of this section discusses changes to improve performance,

and possible implementations of the system.

One of the first areas to consider for further development of the software

would be to change to a more suitable programming language. MATLAB and Python are currently very popular languages for programming Artificial Neural Networks. The developer of MATLAB, MathWorks, includes an entire toolbox for

designing, implementing, and training Artificial Neural Networks. This would as-

sist with speed and efficiency of the CANN. Python is also a very useful scripting

language that can increase the speed and efficiency of complex calculations and

array searches. The actual simulation could be re-written in C++ and use the

ODE physics engine. This would be used to create full 3D environments that

the Virtual Robot is simulated in. Realistic Physics and collision detection are

already robustly implemented in ODE, which could potentially eliminate the need

to update those parts of the program. An extra dimension of information could

be incorporated in the visual inputs, for greater fidelity in the CANN’s activation.

These two modifications would bring the model much closer to a full implementa-

tion on a real robot.

Evolutionary Algorithm techniques are useful when trying to determine the

optimum topology, weights, or parameters for a Neural Network. This is because they provide a very efficient method of searching for and comparing different solutions to

a problem. They have been used to determine the parameters for the behaviour

of a Head Direction Cell System (Kyriacou, 2011). Applying this approach to

the Place Cell CANN implemented in this paper could help bring the activation

patterns closer to the expected outputs found in the literature (Muller et al. 1994; Samsonovich and McNaughton, 1997).


5.1.3 Further Tests and Implementations

One test that would be more difficult to implement is providing the robot

with an Activation Pattern of a location in an environment, to see if it could navigate to the correct place in the environment so that the Activation Pattern of the

CANN matches the one provided to it. Another could be to train it in the manner

described in this model and then expose it to a previously unseen environment

and record its activation. These patterns would be compared to the Activation

Pattern of the CANN when it is trained separately in that environment. This

would provide an indication of the performance of the CANN when it is exposed

to new environments; if the patterns have some resemblance, then it would be able to learn new environments relatively easily.

The final development would be to attempt to combine all of this and im-

plement it into an actual robot and then compare it to a robot using a pre-defined

map and/or SLAM navigational techniques, over a series of tests. To be considered

as a potential method of Robot Navigation, the fully integrated system described

above would need to perform as well as or better than the other methods under

the same conditions, as they are the current methods commonly used today, and

therefore the metric new methods should be tested against.

Implementing the model on a real Robot would require a fairly large change

to some of the program. The distance to landmarks would have to be calculated

through actual visual information from cameras rather than knowing the coor-

dinates of the centre. This could also help increase the efficiency of the code

in general as there are nested for loops found in the simulation components for

drawing the robot, collision detection and drawing the landmarks. Without these

complex searches the program would run more efficiently; however, the complexity of implementing a vision system and processing its data would require a fair amount of computing power instead.


5.2 Shortcomings

The biggest deviation from other similar implementations of CANNs is the

pattern of activation that is displayed. As previously discussed, it was expected

from Samsonovich and McNaughton (1997) and Stringer et al. (2002) to be a

single defined peak instead of the whole map becoming active. The solutions

detailed above, describing the fuller implementation of the Activation Equation

and Evolutionary Algorithm Techniques, should produce a pattern more similar

to the output we find in the literature.

One common shortcoming when attempting to model biological neural net-

works is that, due to technological constraints, the number of neurons that can

be simulated is a fraction of the number of neurons that these biological systems

contain. More neurons in the model would not change the overall behaviours of the CANN when responding to different stimuli; it would instead increase

the fidelity of the model, which may be necessary for further and more complex

implementations of the system.

The fourth position, in the centre of each environment, seemed to have the most similar patterns of activation between environments, and these were also the patterns most dissimilar to their single-environment-trained counterparts. The inputs for the controls would also need refining, especially if the model were to be implemented in 3D or on a real robot. This is because they would need a much higher degree of accuracy to move properly in an environment, as well as to respond to and correct for collisions and changes in slope angle.

This model does not provide a method of robot navigation on its own, in the same way that an animal does not use Place Cells alone to navigate its environment. Instead, this simple model should be viewed as a basis from which to build up a different approach to robot navigation. Rather than simply trying to replicate the overall behaviours of animals, understanding the underlying mechanisms of these behaviours, and how they evolved, could prove to be an interesting route towards computationally cheaper and more flexible implementations of robot navigation (Kyriacou, 2011).

5.3 Contribution to the Wider Field

This model provides a good basis for modelling and implementing the navigational cells found in the hippocampus for use in robot navigation. However, a more rigorous and complete implementation of hippocampal navigational cells would be required before the model could actually be used to navigate a robot. It does, nevertheless, open a new avenue for exploring different implementations of robot navigation. The mechanisms evolved by nature are fast, efficient, and adaptable, and it could be argued that these three qualities are among the top priorities for developing autonomous vehicles and animal-like robots.

As this model is highly dependent on landmarks and fixed environments, a potential application could be a warehouse-style environment, where levels of automation are rapidly increasing. A more complete version of this model could provide robots in this type of environment with a fast, flexible, and efficient method of navigating warehouses, which could increase productivity and efficiency whilst also reducing operating costs and the number of accidents. Household robots, such as autonomous vacuum cleaners, could also make use of this system, as they generally operate in relatively small environments.

In the Introduction, it was proposed that Place Cells could provide a cheaper alternative to SLAM and a more flexible option than providing a robot with a pre-defined map. While the former claim remains to be proven through further development of the model, the ability to learn new environments shown by this simulation already makes it more flexible than the latter. Therefore, more complex iterations of this model could be an alternative to using pre-defined maps.

As noted throughout this paper, although the model presented operated as expected, it is far too simplistic to act as a navigational system in its current state. This result is not entirely unexpected, as Place Cells are a single type of cell within the very complex systems that underpin biological spatial navigation. How well these biological mechanisms and behaviours can be abstracted for implementation in robotics could be an important consideration when building more advanced navigational systems in the future.


References

[1] Bota M., Guazzelli A. and Arbib M.A., 2001. A recurrent network for landmark-based navigation. Hippocampus 11: 216-239.

[2] Cruse H., 2003. A recurrent network for landmark-based navigation. Biol. Cybernet. 88: 425-437.

[3] Deshmukh S.S. and Knierim J.J., 2013. Influence of Local Objects on Hippocampal Representations: Landmark Vectors and Memory. Hippocampus 23: 253-267.

[4] Etienne A.S. and Jeffery K.J., 2004. Path Integration in Mammals. Hippocampus 14: 180-192.

[5] Gaspar J., Lacey G. and Santos-Victor J., 2000. Omni-directional vision for robot navigation. Proc. IEEE Workshop on Omnidirectional Vision, South Carolina.

[6] Jeffery K.J., 2007. Integration of the Sensory Inputs to Place Cells: What, Where, Why, and How? Hippocampus 17: 775-785.

[7] Kyriacou T., 2011. Using an Evolutionary Algorithm to Determine the Parameters of a Biologically Inspired Model of Head Direction Cells. Journal of Computational Neuroscience 32(2): 281-295.

[8] Knierim J.J., 2002. Dynamic Interactions between Local Surface Cues, Distal Landmarks, and Intrinsic Circuitry in Hippocampal Place Cells. Journal of Neuroscience 22: 6254-6264.

[9] Knierim J.J. and Rao G., 2003. Distal Landmarks and Hippocampal Place Cells: Effects of Relative Translation Versus Rotation. Hippocampus 13: 604-617.

[10] Lew A.R., 2002. Looking Beyond the Boundaries: Time to Put Landmarks Back on the Cognitive Map? Psychol. Bull. 137: 484-507.

[11] McNaughton B.L., Chen L.L. and Markus E.J., 1991. Dead reckoning, landmark learning, and the sense of direction: a neurophysiological and computational hypothesis. J. Cogn. Neurosci.: 190-202.

[12] McNaughton B.L., Knierim J.J. and Kudrimoti H.S., 1995. Place Cells, Head Direction Cells, and the Learning of Landmark Stability. Journal of Neuroscience 15: 1648-1659.

[13] McNaughton B.L., Knierim J.J. and Kudrimoti H.S., 1998. Interactions Between Idiothetic Cues and External Landmarks in the Control of Place Cells and Head Direction Cells. J. Neurophysiol. 80: 425-446.

[14] Muller R.U., Bostock E., Taube J.S. and Kubie J.L., 1994. On the Directional Firing Properties of Hippocampal Place Cells. Journal of Neuroscience 14: 7235-7251.

[15] Muller R.U. and Kubie J.L., 1994. The Effects of Changes in the Environment on the Spatial Firing of Hippocampal Complex-Spike Cells. Journal of Neuroscience 7: 1951-1968.

[16] Newman P., Dissanayake M.W.M.G., Clark S., Durrant-Whyte H.F. and Csorba M., 2006. A Solution to the Simultaneous Localisation and Map Building (SLAM) Problem. IEEE Trans. Robotics and Automation 17(3): 229-241.

[17] O'Keefe J., 1984. Spatial memory within and without the hippocampal system. In: Seifert W. (ed.), Neurobiology of the Hippocampus. London: Academic Press. 375-403.

[18] O'Keefe J., Recce M. and Burgess N., 1994. A Model of Hippocampal Function. Neural Networks 7: 1065-1081.

[19] O'Keefe J. and Burgess N., 1996. Geometric Determinants of the Place Fields of Hippocampal Neurons. Nature 381: 425-428.

[20] O'Keefe J. and Conway D.H., 1978. Hippocampal Place Units in the Freely Moving Rat: Why They Fire Where They Fire. Exp. Brain Res. 31: 573-590.

[21] Touretzky D.S., Redish A.D. and Elga A.N., 1996. A coupled attractor model of the rodent head direction system. Netw. Comput. Neural Syst. 7: 671-685.

[22] Samsonovich A. and McNaughton B.L., 1997. Path Integration and Cognitive Mapping in a Continuous Attractor Neural Network Model. J. Neurosci. 17: 5900-5920.

[23] Save E., Nerad L. and Poucet B., 2000. Contribution of Multiple Sensory Information to Place Field Stability in Hippocampal Place Cells. Hippocampus, 2000.

[24] Scaplen K.M., Gulati A.A., Heimer-McGinn V.L. and Burwell R.D. Objects and Landmarks: Hippocampal Place Cells Respond Differently to Manipulations of Visual Cues Depending on Size, Perspective, and Experience. Hippocampus 24: 1287-1299.

[25] Stringer S.M., Rolls E.T. and de Araujo I.E.T., 2001. A View Model Which Accounts for the Spatial Fields of Hippocampal Primate Spatial View Cells and Rat Place Cells. Hippocampus 11: 699-706.

[26] Stringer S.M. and Rolls E.T., 2005. Spatial view cells in the hippocampus, and their idiothetic update based on place and head direction. Neural Netw. 18: 1229-1241.

[27] Stringer S.M., Rolls E.T. and Trappenberg T.P., 2004. Self-organizing continuous attractor network models of hippocampal spatial view cells. Neurobiol. Learn. Mem. 83: 79-92.

[28] Stringer S.M., Rolls E.T., Trappenberg T.P. and de Araujo I.E.T., 2002. Self-organizing continuous attractor networks and path integration: two-dimensional models of place cells. Network: Comput. Neural Syst. 13: 429-446.

[29] Thrun S. and Leonard J.J., 2008. Simultaneous Localisation and Mapping. Springer Handbook of Robotics: 871-889.


Appendix

A.1 Additional Documents
