
Active System for Mapping

ALI MAJEED

Master of Science Thesis Stockholm, Sweden 2006


Master’s Thesis in Computer Science (20 credits)
at the School of Electrical Engineering
Royal Institute of Technology, year 2006

Supervisor at CSC was Henrik Christensen
Examiner was Henrik Christensen

TRITA-CSC-E 2006:024
ISRN-KTH/CSC/E--06/024--SE
ISSN-1653-5715

Royal Institute of Technology
School of Computer Science and Communication
KTH CSC
SE-100 44 Stockholm, Sweden
URL: www.csc.kth.se

Abstract

Title: Active system for mapping.

This report describes a method that provides a solution to the map-estimation problem. The main task is to obtain a map of the working area of a robot. This map should be obtained in real time and with inexpensive equipment. These two requirements are important for robotics applications, since we want a tool (the robot) that can work as reliably (and as fast) as we do. In other words, the robot should be able to orient itself in a room easily, much like a human, by using a quickly created map. Also, for such robots to be useful, their equipment should be cheap and simple. This report presents a new and simple method that provides such a solution, and gives suggestions about application areas and potential improvements of the system.

Sammanfattning

Title: Active system for the creation of maps.

This report is about a new solution method for the creation of a map for robot applications. The main task to be solved is to obtain a description of the robot's surroundings, which is most simply given by a map. This task should be solvable in real time, and the tools used should be simple. These requirements are important to meet in robotics applications, since the robot is ultimately meant to perform its various chores as competently (as quickly) as we humans do. In other words, the robot should for example be able to orient itself in a room without being hindered by obstacles, and for this purpose the description of the surroundings (the map) is an important tool. Besides this, the tools should be cheap, so that experiments and research can be carried out with fewer resources, which is desirable. This thesis describes a new and simple solution to the above problem. In addition, some suggestions are given about application areas and about methods for improving the created solution.

Acknowledgment

First I thank God, who made me able to create this work that I hope will help humanity in the future. I also thank everybody that helped me, both with moral support and in practical ways. I would like to thank my family, my friends and my comrades at KTH. A special thanks to the people at CAS: Babak, Daniel, Ali Reza, Mårten, Paul and all the others. Great thanks to the man that gave me the chance to complete this work, my advisor and examiner, Prof. Henrik I. Christensen.

Contents

1 Introduction
   1.1 Stereovision
   1.2 Ultra Sonic Sensor
   1.3 Infrared Sensor
   1.4 Laser Finder
   1.5 This work

2 Theory
   2.1 The geometry
   2.2 Single source to multiple observers vs. multiple sources to a single observer
   2.3 Single observer strategy
   2.4 Physics
   2.5 The form of the structured light
   2.6 The vision and the identification
   2.7 A summary of the theory

3 Method Description
   3.1 Theory in practice
   3.2 The prototype
   3.3 The algorithm

4 Experimental Evaluation
   4.1 The first experiment
   4.2 The second experiment
   4.3 Error Variation
   4.4 The expansion of the idea

5 Application Areas
   5.1 Simple range finder
   5.2 Blind eye
   5.3 Guard system

6 Conclusions

7 Summary and Future Work

Bibliography

Chapter 1

Introduction

The problem to be solved in this project is to generate a contour map that describes the room where the robot is operating. The importance of such a map is to supply the robot with the information it needs in order to complete its task without being blocked by obstacles while it is moving around. This process can be seen as fundamental for any mobile robot application, since the robot should be able to localize on its own and thus complete its task. Observe that the obstacles that can block the robot are not always static: when the robot is working in a real environment, an obstacle can simply be a human that is moving around. To navigate, the robot must have access to information about walls and other obstacles. In other words, the map-creation process is a very important issue in robot applications.

As an introduction, we can say that the main ingredient of a map is the range measurement from a global reference frame, or from the observer itself (in our case the robot), to the closest obstacles. As the nearest obstacles are the first things that prevent the robot from moving freely, the positions of these obstacles are important to know. So if we have a range finder, we can obtain a map from the collected range readings. The main problem to be solved in this project is therefore to create a system that can determine the range to observed objects (obstacles). Combining this system with a rotating mechanism, the ranges from each direction are used to create a map of the surroundings (the constructed prototype).

The main application of this work is SLAM (see ref. 1). The idea of SLAM is that the robot should be able to concurrently perform a task and estimate its position. The method presented here could be useful for the SLAM problem, since the robot could perform a task and, when it changes position, create a new map in a relatively short time. The method could also be used in path planning, where the robot observes its surroundings and, according to the created map, calculates the optimal way from its position to the goal.


1.1 Stereovision

A major difficulty in the map-creating process is range estimation. In general, there are three families of methods that solve this problem. First we have stereovision. This method uses two (or more) images of the same observed object, but from different angles of approach. By comparing the two images from the cameras, we are able to estimate the range to visible objects in the image. This system is one of the most effective, with the proof that nature has developed its own version of stereovision, which is used by many animals; we humans also use this type of range-estimation system. It uses the vision from the two eyes and processes each image to obtain the range to the objects in our surroundings. It has been shown that it is difficult to create a similar system for robot applications, since the process of matching the image data between the right and the left cameras is very complex. Research that describes such a system can be found in ref. 2.

Second, there is a family of range estimators that are all based on calculating the Time of Flight (TOF). Simply put, these systems send a signal in the form of sound or light into the surroundings. Objects reflect this signal back to the sender. By measuring the time difference between the release and the receipt of the signal, the range to the object(s) is obtained by multiplying the velocity of the signal by the time difference. In general, the distance is calculated using the TOF as below:

l = ct / 2    (1.1)

where l is the distance to the object, c is the velocity of the signal and t is the time difference between the release and the receipt of the signal. The equation is divided by two because the signal travels to and from the object. This system was also invented by nature, as bats and dolphins use range-finding methods based on the TOF principle.
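As an illustration only (the numbers below are assumed, not taken from any experiment in this thesis), equation (1.1) amounts to a few lines of Matlab:

    % Range from time of flight, equation (1.1).
    c = 343;                 % velocity of the signal: sound in air, about 343 m/s
    t = 5.8e-3;              % assumed time difference: 5.8 ms
    l = c * t / 2;           % divided by two: the signal travels to and from the object
    fprintf('range = %.2f m\n', l);   % prints: range = 0.99 m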

In this family we find three frequently used types of range-estimation methods: the first is the ultrasonic sensor, the second is the infrared sensor and the third is the laser range finder. All of them use the same idea of sending and receiving a signal and calculating the range to the object based on the TOF. These three methods are widely applied in many contexts, and they have many benefits but also drawbacks. The performance of each method is described below.

1.2 Ultra Sonic Sensor

When a sound wave is sent into the environment, objects reflect this wave, which can be measured by a receiver. A signal-processing system then calculates the time difference between the release and the receipt. Using equation (1.1) then gives the distance to the object. Research that describes such a system can be found in ref. 6.


In practice, the system sends a high-frequency sound wave (not audible to the human ear) through a speaker into the surroundings. The form of the transmitted wave is not symmetric, so that the signal processing can be simplified compared to using a symmetric signal. The reflected sound wave(s) is then received by a microphone. The received sound does not only come from the reflection off the object: there is also direct transmission between the transmitter and the receiver, and a set of echoes may be received several times from the surroundings. This occurs because the ultrasonic system has a large opening angle, which implies that the sound is transmitted in almost all directions. In other words, we have to filter the received signal so that only the reflection from the object is used in equation (1.1), and all other echoes are neglected. Such signal processing can be done in many ways; an introduction to the signal-processing methods used here can be found in ref. 7.

There are also other drawbacks to the ultrasonic system: the physics of sound transmission and reflection depends on many conditions, such as the air temperature and the material properties of the surroundings (the reflection and absorption properties of the objects). These conditions make the ultrasonic system weak in the sense of application areas, since they have to be fulfilled to achieve a good range estimator. On the other hand, the ultrasonic system is cheap and handy, which makes it popular for robot applications, at least in indoor environments, but more or less useless outdoors.

1.3 Infrared Sensor

To avoid the problems of echoes and of direct transmission to the receiver, infrared light may be used. The system is still based on calculating the range from the TOF, but instead of sound it uses infrared light. Since the infrared wave has a long wavelength, there are sensors that can handle the TOF calculation. The main problem of using infrared is absorption by the material. The main benefit is that infrared has fewer drawbacks than the ultrasonic system, which makes it a possible alternative. The main drawback of infrared is the expensive equipment: since infrared is an electromagnetic wave that travels through the air at the velocity of light, it demands a highly sensitive sensor that reacts very fast (on the order of nanoseconds). Another drawback is that sunlight contains a high amount of infrared light, which makes an infrared system unsuitable in a real outdoor environment. Research in this area is presented in ref. 8.

1.4 Laser Finder

A good alternative to infrared is the laser. Since the laser has a concentrated pattern, the range measurements are obtained with higher accuracy than with infrared. The drawback compared to infrared is that the laser has a shorter wavelength, which demands a faster sensor than the one used in the infrared system. On the other hand, the laser system is less sensitive to disturbances in a real environment, which makes it more useful in applications that operate in real environments, for example military applications. Research that covers the laser ranger is presented in ref. 9.

1.5 This work

The third family of range-finding methods is called structured light (the other families were presented above: stereovision and TOF). The idea of this method is to project a light pattern with a known appearance onto the observed object. By observing the reflected pattern, the range is calculated from the difference between the projected light and the observed pattern. This master's project is based on this method: the constructed prototype consists of one camera and several light sources that are projected onto the object, so that the pattern of these light sources gives information about the range to the object. There are many reports that present the structured-light technique; an interesting example that covers these efforts can be found in ref. 3.

However, the structured-light technique is not widely applied to the mapping problem. The aim of this work is to provide a new suggestion for creating a system that can supply the robot with the map it needs using the structured-light technique. The objective of the project has been to design a sensor for mapping using structured light, to characterize it theoretically and to benchmark it in a number of simple environments. My main motivation for creating this system is simply that I believe the mapping process can be done in a much simpler way compared to the methods described above. The main benefits that should be achieved from this work are the simplicity of the theoretical idea and the simplicity of the equipment used. All drawbacks of this system cannot be analyzed in this work, since this work uses a prototype and not a full-version system that can be compared to the other systems; some general drawbacks will be shown later in this report. This master's project does not cover the entire system, since the idea is new and an extensive study of the system has not been done.

Chapter two presents the theoretical model of the solution to the above-mentioned problem. In chapter three, the implemented solution is described. The results and the benefits/drawbacks are highlighted in chapter four.

The system could be applied in several application areas; suggestions for such applications are outlined in chapter five. The conclusions that were obtained are described in chapter six. In chapter seven, there are some indications about how this work could be improved in future work. The references list all the papers and books that were used to complete this work. At the end of the thesis you will find the appendix, which contains the parts that are too big to be included directly in the text.

In a few words, this work can be defined as below:

• A freestanding system that can be applied in several types of robot applications.

• The aim of the system is to create a 2D contour map of the surroundings of the observer.

• The system is divided into five parts:

1. The set of projection light sources

2. The observer (the camera)

3. The mechanical part that gives the system its mobility

4. The underlying program that combines parts (1), (2) and (3) and calculates the ranges

5. The part that takes these ranges and transfers them into a contour map

• Each of these parts should be simple and cheap in comparison to similar methods.

• The performance of the system should be better than that of the other systems in the sense of construction and computing-complexity requirements.


Chapter 2

Theory

In this chapter, the theoretical model of the sensor is described. The model describes how we make use of physical and mathematical rules to achieve the final goal, which is to create a system that supplies the robot with a map of the room where it is operating.

2.1 The geometry

Why do we study the geometry here? The reason is to understand how images taken with a simple camera can supply us with the information we need to create a contour map of the entire surroundings. By this we obtain a contour map in the form of a 2D image. Some may ask: why use a 2D description of the surroundings when the real world is 3D? The answer is that in robot applications it is not required to have full knowledge of the 3D form of the robot's surroundings. We humans use a 3D description that is created by our stereovision system and our image memory, but a contour map is sufficient for basic navigation.

In this section we describe how the pattern from a light source is affected by the geometry of the observed object, and how a single observer can use this information to estimate the geometrical form of the object. To begin with, it is good to understand that an ordinary observer (one camera, or our eye) provides an image that in general follows the rule of perspective: the nearer an object is, the bigger it appears, and vice versa. Figure 2.1 shows an example of the perspective effect, where ball 2 and ball 3 are of the same size, but because of the perspective ball 3 appears bigger than ball 2. In the same figure, ball 1 and ball 2 appear to be of the same size, but in fact ball 1 is smaller than ball 2; they seem equal because ball 1 is near while ball 2 is far away.


Figure 2.1. An example of how perspective affects the objects in the image.

Because of this, we cannot know whether an object is big or small by observing it in a single image. To get more information about the object's location, we can estimate its position using the structured-light technique. The main idea of the technique is to project a known (structured) pattern of light; the perspective of the pattern reflected from the observed object provides the necessary information about the object's position. Knowing this, an optimal design of the projected structured-light source leads to an optimal estimation of the geometrical form of the observed object. Optimal here means that the image provided by the observer, together with the observed pattern, can give us a 2D map that describes the 3D form of the system's surroundings. Such an optimal design is described in the next section.

As an example of the idea, assume that someone is in a dark room and points a lamp at the walls. The size of the visible pattern on the wall gives the person knowledge of the range between himself and the wall. This process is easier for a human, since we use stereovision and have an advanced image-identification system. Assume instead that the person is still in the same room and uses the same lamp, but observes the pattern with only one eye. Now the range to the wall is hard to estimate, since the pattern appears to be of the same size regardless of how far away he is from the wall. This occurs because, as mentioned before, things look smaller when they are far away. The difference here is that an ordinary light source spreads in the room with a beam angle θ, and when θ matches the perspective, the light cone is equal to the perspective and the pattern appears to keep the same size regardless of distance. However, the method of estimating the range by identifying a diverging light pattern can be used if it is improved. A good example that describes how to measure objects in a room by reasoning about perspective can be found in ref. 9.

2.2 Single source to multiple observers vs. multiple sources to a single observer

By projecting a light source onto an object, it is easy to estimate the range to the object according to the above description: the range can be estimated by knowing how the range affects the light source so that the pattern looks the way it does on the object. The problem is approached in two different ways. The first is to use multiple observers that observe a single pattern; the difference between the observers then gives the information about the range to the object. The second is to use a single observer that observes a set of patterns whose behavior is affected by the range (the rule of perspective); the range is then estimated by analyzing the patterns. Both of these strategies are called structured-light techniques, even if the first one works like the stereovision technique. The first method works as can be seen in figure 2.2.

From this figure we see that the pattern, represented as the dot on the object, is observed by the two observers (the eyes). The range R is calculated by using the law of cosines together with the law of tangents, as below:

(A + b) / (A − b) = tan((α + β)/2) / tan((α − β)/2)

b = −A(1 − k) / (1 + k)

where k is equal to:

k = tan((α + β)/2) / tan((α − β)/2)

which leads to:

R = b·cos(γ) + √( (b·cos(γ))² − b² + (A/2)² )    (2)


Figure 2.2. Multiple observers and single pattern case.

where γ = α + β − 180 (in degrees). From this calculation we are able to obtain the range R using the structured-light technique, by projecting the light source (the dot on the object) and observing it with the two observers. Here A is the distance between the centres of the two observers, which is a parameter known from the construction. We also have the parameters α and β, which are obtained from the image and describe the angle to the pattern as seen from each observer. Using these three parameters, we are able to calculate the range R as described above.
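As a minimal Matlab sketch of this two-observer triangulation, implementing the formulas exactly as written above; the baseline A and the angles α and β are assumed example values:

    % Two-observer triangulation, equation (2); angles in degrees.
    A = 0.12;                % assumed baseline between the observers [m]
    alpha = 80; beta = 70;   % assumed pattern angles seen from each observer
    k = tand((alpha + beta)/2) / tand((alpha - beta)/2);
    b = -A * (1 - k) / (1 + k);
    gamma = alpha + beta - 180;                                  % [deg]
    R = b*cosd(gamma) + sqrt((b*cosd(gamma))^2 - b^2 + (A/2)^2); % range to the dot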

The second strategy of the structured-light method is to project a light source that can be divided into several parts, where these parts behave differently in the room. The range to the object can then be calculated by analyzing the parts of this light source. The benefit of this strategy is that only one observer is needed. The strategy is illustrated in figure 2.3. Here the light is described as several dots on the object, where the distance B (generated by a light source that does not vary with distance) does not vary with the range, while the distance A does vary with the range. By assuming that


Figure 2.3. Single observer and multiple patterns case.

the observed area is locally planar, we can calculate the range R using the parameters A and B. The image seen by the observer includes all of these dots, which are then used in the triangulation, so the range to the object is easily obtained. The range R in this strategy is calculated as in equation (2).

By analyzing the two strategies described above, we see that the second strategy is the better one, because it has lower requirements in the sense of computing complexity. The second strategy also has less error than the first, because we need to process and analyze one image instead of two. By this we can justify the decision to estimate the range to the object using the single-observer strategy, according to the description given above.

2.3 Single observer strategy

Since we decided that the single-observer strategy is the most suitable for this work, this section describes how it works. In general, the strategy is based on using one observer together with a set of light sources consisting of a spread light source and a straight light source. The aim of this combination is that the observer can estimate the size of the spread light pattern by comparing it with the fixed pattern from the straight light source. The set of light sources has to be constructed such that the range measurement is optimized; the optimal construction is described below.

2.4 Physics

According to the physical laws that treat light, an ordinary light source spreads in the room because of its electromagnetic behavior. As it spreads in all directions, it loses energy as 1/r², where r is the distance between the light source and the observer. This phenomenon makes the use of a spreading light source impractical, since some meters away from the projector the energy of the pattern is so weak that the pattern is no longer visible to the observer. A light source with different physical behavior is the laser: it has a straight spreading direction and preserves its light energy along that direction. According to this, we can construct a system that follows the theoretical solution if we use laser beams as the light sources. All the light sources should therefore be constructed using laser beams, so that we avoid the loss of energy caused by the spreading-light effect.

2.5 The form of the structured light

As mentioned before, an optimal design of the structured-light set gives an optimal range measurement. So the main task to be solved now is: how do we design an optimal set?

Back in reality, an object can always be seen as an observed surface, since we are only able to observe one surface of an object at a time. For the moment we assume that the surface is straight, like a wall. This wall can be posed towards the observer in five ways: the first when the normal of the wall is perpendicular to the normal of the observer, and the other four when the normal of the wall is angled relative to the normal of the observer. In other words, the wall can be placed in front of the observer in five different ways, depending on how the surface is tilted. An optimal set of light sources should give the distance to the surface from the visible pattern independently of the pose of the surface; the pattern should be unique for the five cases and for the distance, so that one observation gives both the placement of the object (surface) and its pose.

It is important to analyze how the five cases affect the pattern from the light source, so that an optimal design is obtained. Let us return to the example of the projected hand torch, where the pattern seems small when the wall is near and big when the wall is far away. Assume that we are one meter from the wall and that the pattern is circular with a radius of one meter. If we rotate some degrees while still holding the torch pointed straight ahead, the pattern now looks oval, because the light is affected by the angle of the wall: the sloping of the wall implies that the light hits the wall at different ranges, which makes the pattern look oval.

If we make several experiments like the one above, we discover that the pose of the pattern can be described by five points: one centre point and four points placed on the border of the pattern with 90 degrees between them. This set of identification points gives us exactly the information about the pose of the pattern that is visible on the wall. Since these points are the most important ones for identifying the pose of the pattern, our projection-light set can be constructed such that only these points are visible. We can simply replace the torch with five laser beams that give exactly these points; in other words, five laser beams can give the same pattern as an ordinary torch if they are placed with the same beam angle (θ) as the torch. The point of using laser beams is to keep the amount of light energy along the dispersion direction. By placing the calibration light source (the parallel light source) at a known distance from the centre of the spreading light (the set of five laser beams), the pattern now has six points to identify, and using the information from these six points, the sloping angle and the range to the wall can be obtained.

Figure 2.4 shows how the set of structured light sources can be used to estimate the range to the object, seen in a top view of a projection of the set onto a straight wall. The distance between the central and the parallel patterns and the distance between the central and the diverging pattern are affected differently by the range between the light source and the wall that reflects the pattern. In other words, the range to the object is obtained by triangulation, comparing the distances a and b, where a is the distance between the patterns of the parallel and the central light sources, and b is the distance between the patterns of the sloping and the central light sources.

2.6 The vision and the identification

In every view the camera observes, the most interesting part for our system is how the projected light sources are affected by the geometrical form of the observed surface. So the most important part of the image observed by the camera is the pattern of the laser beams that is visible on the object. One way to get only the pattern is to filter out the laser-beam patterns. Image filtering is an active research area and there is much literature that treats this problem; one example can be found in ref. 4.

Figure 2.4. An example of how the structured light can be used to estimate the range to an object.

Since the main aim of this work is to make a simple map-creating system, the filter techniques that are usually used are not suitable here, since they are computationally challenging. Instead of using difficult methods that demand a lot of computing time, why not use the simplest method to identify the laser pattern? One way to identify only the pattern from the laser beams is to observe only the pattern from the laser beams! We take two pictures of the same view, one with no laser beams projected and one with the laser beams projected, and subtract the two pictures from each other. From this differencing we obtain a third picture where nothing is visible except the pattern from the projected light sources. Simply, this operation can be described as

C = B − A

where picture A is the image with no projected light sources and picture B is the image of the observed object when the light sources are projected.

To use these observed patterns, we have to relate each pattern to its proper laser beam. Since we constructed the system, we know how the projected light behaves, because we know at least its beam direction. In other words, we know in which region of the picture we expect to find each pattern. We therefore apply something called masking to the picture, to obtain the position of each laser-beam pattern. Masking is a method used to divide the image into several parts; research that treats masking can be found in ref. 5. The picture available here is in digital format and can be represented as a matrix. Using the masking strategy, we can split the image into several regions, and this is exactly what we need: we get several pictures, where in each of them we can see the pattern from only one of the light sources. The mask used in this thesis is simply a zero matrix of the same size as the filtered image, except for the interesting regions, where it contains ones. By element-wise multiplication of the mask with the image C, we obtain an empty picture except for the regions chosen in the mask, where we know the pattern is visible. In other words, after the masking process we have only the patterns that are affected by the geometrical form of the observed object, in the sense of distance and pose. For identification of every pattern, we have to construct a mask containing six zones, where every zone identifies one of the six patterns in the image. This example shows how the masking works:

C = | 1 2 3 |        Mask = | 0 0 0 |
    | 2 3 4 |               | 0 1 1 |
    | 3 4 5 |               | 0 1 1 |

The masking process then gives C .* Mask = B, where the matrix B is:

B = | 0 0 0 |
    | 0 3 4 |
    | 0 4 5 |
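Reproduced in Matlab for concreteness (the 480 × 640 image size and the mask region in the second half are assumed values, not the prototype's actual ones):

    % The masking example from the text.
    C    = [1 2 3; 2 3 4; 3 4 5];
    Mask = [0 0 0; 0 1 1; 0 1 1];
    B    = C .* Mask;                  % element-wise product; B = [0 0 0; 0 3 4; 0 4 5]

    % For a real image, the mask is a zero matrix of the image size with
    % ones over the region where one laser pattern is expected to appear:
    mask1 = zeros(480, 640);           % assumed image size
    mask1(200:280, 300:340) = 1;       % assumed expected region of pattern 1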

Using the same strategy on our own pictures, we can select the regions of interest and neglect everything else. The masking strategy described above is specific to this work; otherwise, the ordinary mask is a 3 × 3 matrix that looks like this:

C = | −1 −1 −1 |
    | −1  8 −1 |
    | −1 −1 −1 |

This matrix starts in the upper corner of the picture matrix and iterates through the entire matrix. Once it hits a peak in the picture (a pixel with high intensity), it returns a signal that a peak was found at pixel (x, y). This masking strategy could have been used to identify the pattern in the image, but it was not used here for two reasons. The first reason is that the pattern in this work is not a single pixel in the image; it is a set of normally distributed pixels with a peak in the middle. This implies that we would need to apply the masking method as many times as there are pixels in the pattern set, which takes a lot of computing time. The second reason is that we actually do not need a masking method with such high accuracy, since we can combine a simple masking strategy with mean-value finding on the pattern set (because it is normally distributed) to find the central peak (the centre of the pattern set). The designed masking strategy can thus be used here, since it finds the position of the pattern with a much smaller time requirement. By this, we can justify the usage of the masking strategy described above. An example of how the masking strategy works here is shown in figure 2.5.

Figure 2.5. An example of how the masking strategy here works.

After this, the distances between the visible points in the pictures are easily calculated. From these distances, the range to the object is obtained using triangulation, which can be described as follows:

r = (k / tan(θ)) · (b / a)    (2.1)

where r is the range to the object, k is the known distance between the parallel laser beams (the calibration beam and the centre laser beam), b is the calculated distance in the picture between the central and the oriented (sloped) laser patterns, a is the calculated distance in the picture between the parallel laser patterns, and θ is the angle of the sloped laser beam.
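As a small sketch, equation (2.1) in Matlab, using the design constants of the prototype from chapter 3 and assumed pixel distances:

    % Range from the laser patterns, equation (2.1).
    k = 4;                    % distance between the parallel beams [cm]
    theta = 10;               % angle of the sloped beam [deg]
    a = 52; b = 31;           % assumed pixel distances measured in the image
    r = (k / tand(theta)) * (b / a);   % range in the same unit as k (cm)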

After processing one view, the system is mechanically rotated so that a new view of the room is observed, and a new range-measurement process is performed. Once a full rotation is made, all the range measurements are transferred into a map of the room where the system is operating.

2.7 A summary of the theory

So what have we achieved? We now have a theory that will be the basis for creating a mapping system. We have only made assumptions based on theoretical laws, which show that the mapping system can be created according to the theory described here. The main conclusion is that the range can be estimated using one observer, if we use the light set described in this section and assume local planarity.


Chapter 3

Method Description

The objective of this project is to design a system for robot mapping. The basic methodology was presented in chapter 2. This chapter presents the implementation of the system.

3.1 Theory in practice

The theory specifies a methodology for mapping. The task now is to transform the method into an operational system. The operational system was obtained by constructing a prototype that follows the methodology described by the theory. By making experiments with this prototype, the results can be analyzed; the analysis shows the performance of the created prototype, and because the prototype was constructed based on the theory, its performance describes how good the theoretical solution is.

The prototype was built using inexpensive material, such as laser pens of the kind used in presentations or for hobby purposes. The camera is a web camera of the type Logitech QuickCam Express, and the body of the prototype is made of aluminum plates that link the pan-tilt unit to the base that holds the camera and the set of laser beams. The pan-tilt unit is a Directed Perception Inc. pan-tilt unit, model PTU-D46, controlled according to its user's manual, see ref. 10.

The design of the prototype and the algorithm were made such that the created system can be seen as a prototype that works like the theoretical description of the solution. The results of the experiments with this prototype will therefore give us a valuable indication of how good the theoretical solution is; this will be highlighted in the sections on results and conclusions. Simply put, the created system can be described by the simple block diagram given in figure 3.1.

We start at the block where the light sources are projected in the view direction φn (the direction of the prototype), and then continue according to the block diagram until we reach the last block, which is obtaining a map of the room.

Figure 3.1. The block diagram that describes the work sequence of the prototype.

3.2 The prototype

The idea of creating a map using simple methods is demonstrated with this simple prototype, which was created for the purpose of showing that the theoretical method is successful.

The prototype used here was built of aluminum plates. It is placed on the pan-tilt unit, which in turn is intended to be placed on the robot. On the prototype, a web camera is placed at the centre position. To the side of the web camera, the laser-beam holder is placed in a vertical pose, where three laser pens of an ordinary type point in the same direction as the camera. The laser-beam holder consists of three directed holders, one for each of the three laser beams. The central laser beam is placed on its holder such that it projects a straight beam along the vision direction of the camera, so the pattern from this laser beam does not change its position in the images observed by the camera.

The upper laser beam points in the same direction as the central laser beam but with 4 cm centre-to-centre distance, so this laser and the central laser constitute two parallel beams directed in the same direction as the observer. Here 4 cm is an arbitrary value; the most important thing is that this value should not be too large (more than 10 cm) or too small (less than 2 cm). The last laser is placed in a sloped direction under the central laser, with a sloping angle θ of about 10 degrees. The sloping value is also arbitrarily chosen, under the same assumption that it should not be too large or too small.

This set of lasers gives a unique pattern that provides the information about the range to the object as seen from the observer. The calculation of the object pose is neglected for the moment, because this prototype is not constructed to calculate both the range and the pose of the object.

3.3 The algorithm

The underlying algorithm that takes the images from the camera and returns the map of the room is described in detail in this section. The algorithm is a program written in Matlab. Below, a summary of the algorithm is presented; a sketch of the whole sequence is given after the list.

1. Control the direction of the system by controlling the pan-tilt unit.

2. Control the projection of the laser beams, in other words when to switch the laser beams on/off.

3. Take the pictures and save them in the image vector P.

4. Compute the difference image between the image when the lasers are on and the image when they are off.

5. Mask the image C so that only the interesting points are visible, then calculate the positions of these points in the image (in pixels).

6. Calculate the range to the object from the previous measurements using equation (2.1).

7. Convert all the ranges obtained from the previous part into a map.

8. Compare the created map with the real map of the room and give the difference in percent.
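As a hedged sketch of the whole sequence (rotateToZone, captureImage and estimateRange are hypothetical helper names standing in for the pan-tilt control, the camera and steps 4 to 6; they are not part of the original program):

    % Overview of the algorithm: one range per zone of view.
    nZones = 64;                          % full rotation in steps of 2*pi/64
    ranges = zeros(1, nZones);
    for zone = 1:nZones
        rotateToZone(zone);               % step 1: pan-tilt control (hypothetical)
        A = captureImage(false);          % step 3: lasers off      (hypothetical)
        B = captureImage(true);           % steps 2-3: lasers on    (hypothetical)
        C = B - A;                        % step 4: only the laser patterns remain
        ranges(zone) = estimateRange(C);  % steps 5-6 (hypothetical helper)
    end
    directions = (0:nZones-1) * 2*pi/nZones;  % step 7: direction of each zone [rad]
    polar(directions, ranges)                 % draw the contour map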

Part one of the algorithm is about controlling the direction of the system, in other words the direction of the camera. The construction of the prototype is made such that the laser beams are always directed in the same direction as the camera, so a change in the tilt or pan position of the system affects both the camera and the laser beams.


Simply, the system is rotated a full rotation with a step of 2π/64 radians, which means that a full rotation has 64 zones of view. The reason for choosing 64 is that we want a small rotation step so that the measurements have higher accuracy, but a small step leads to a higher computing time, and as we want a fast system we have to make as few steps as possible. This trade-off implies choosing a number of steps high enough to obtain good accuracy while the system remains fast, and that is why 64 was suitable.

In each of these zones, one image is taken of the observed view when no laser beams are projected, then a second picture is taken when the laser beams are projected. When these two pictures have been taken, the system is rotated to the next zone and the process is repeated. Due to the slow performance of the camera (it takes one second for the camera to capture an image), the rotation process was not made fully automatic: when the two pictures have been taken, a function is run by manual order so that the pan-tilt unit moves to the next zone. Every taken image is saved in a vector for the image processing described later.

The second part is about controlling the projection of the laser beams. Here the aim is to project the set of lasers in a sequence that is synchronized with the image-capture and rotation processes.

Part three of the algorithm includes the function pic = imread('*.jpg'), which converts the images from the web camera (in *.jpg format) into matrices that can be processed in Matlab. This image format includes the colored representation of the image, the RGB format: an RGB image consists of three layers, one for each of the three base colors. Because the laser here is red, it was natural to only observe the red layer of the observation. I tried to observe the green and the blue parts of the images, but there were no peaks that belonged to the laser patterns. In Matlab, a jpg image is stored as a three-layer matrix, where the first layer represents the red part of the image, so selecting the red part is done easily with the syntax redpart = pic(:,:,1).
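For one stored frame this becomes the following; the file name is hypothetical, and the conversion to double is my addition, since subtracting uint8 images in part four would clamp negative values to zero:

    % Part three for one frame: read a jpg and keep its red layer.
    pic = imread('zone07_lasers_on.jpg');  % hypothetical file name
    redpart = pic(:, :, 1);                % layer 1 of an RGB image is the red part
    redpart = double(redpart);             % avoid uint8 saturation in B - A later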

Part four is about obtaining only the pattern from the projected lasers. As mentioned before, this is done by taking the difference between the image of the observed object when no lasers are projected and the image of the same object when the laser beams are projected. This part was implemented in Matlab with the statement C = B - A, where B is the red part of the picture when the laser beams are on and A is the red part when they are off. This part of the algorithm stands out in image-processing work in the sense of its time requirement, which is very important here: the method requires only M × N operations to obtain an image containing only the interesting patterns. The drawback is that we have to take twice as many pictures of the object, and the laser beams and the camera have to be synchronized, so that one picture includes the pattern and the next does not, and so on. But comparing the time requirement of this method with that of the other methods of extracting the patterns, we find that this method is profitable, at least for this project.

Part five is about identifying each pattern in the image C, so that every pattern can be related to its proper laser beam; since several laser beams are projected at the same time, this identification is necessary. As described in the theory chapter, the masking technique used here is of a very simple type. We create a zero matrix of the same size as the picture C, except for the interesting region where we know the pattern will be, according to our prior construction of the light sources. This is done for every projected laser beam, which means that there is a proper mask matrix for each beam. The element-wise multiplication between the picture C and the masks is made with the statement patternx = C.*maskx.

From each of the resulting patternx matrices we calculate the position of the laser pattern in pixels, using the Matlab functions find, round and mean. We obtain the position of the pattern by rounding the mean position of all pixels that exceed a given threshold, where the threshold is chosen such that only the pixels that indicate a laser pattern (often peak values) are extracted. Rounding the mean of these pixel positions gives a single point that represents the centre of the pattern. Such a method can be used here because the pattern of a laser can be assumed to be a small circular blob with its mass centre in the middle (the mean value). The pattern identification and position estimation were done in the algorithm with the syntax

[ys, xs] = find(patternx > threshold);
y = round(mean(ys)); x = round(mean(xs));

The reason for using this method is that it finds the centre of the peaks with a small time requirement. Another method with higher accuracy is the 2D Gaussian distribution method (2GD). The main problem that prevents us from using 2GD is that the images in this project have high disturbance, which prevents 2GD from estimating the pattern positions with high accuracy. This problem is described further in the section on error variation.
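Packaged as a small Matlab function with a synthetic self-test (the blob values, the mask region and the threshold are assumptions made for the test, not values from the prototype):

    % Self-test for the pattern-localization step of part five.
    function demoPatternCentre
        C = zeros(120, 160);
        C(60:64, 80:84) = 200;            % synthetic laser blob (assumed)
        mask = zeros(120, 160);
        mask(40:80, 60:100) = 1;          % region where the pattern is expected
        [y, x] = patternCentre(C, mask, 100);
        fprintf('centre at row %d, column %d\n', y, x);   % prints 62, 82
    end

    function [y, x] = patternCentre(C, mask, threshold)
        pattern = C .* mask;                  % keep only the expected region
        [ys, xs] = find(pattern > threshold); % pixels exceeding the threshold
        y = round(mean(ys));                  % the blob is roughly Gaussian,
        x = round(mean(xs));                  % so its mean is its centre
    end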

In part six, the calculated positions of the patterns are used to estimate the range to the object, using the triangulation method. Here b is the distance between the central pattern and the lowest pattern, and a is the distance between the central pattern and the upper pattern (see figure 2.4). These two distances are obtained in the algorithm with the operations

a = sqrt((y1-y2)^2 + (x1-x2)^2);
b = sqrt((y2-y3)^2 + (x2-x3)^2);

where (y1, x1) is the position of the upper pattern in the image, (y2, x2) belongs to the central pattern and (y3, x3) to the lowest pattern. On the other hand, k and θ are design constants known from the prototype construction: k is the true distance between the central and the upper laser beams, which equals 4 cm, and θ is the angle between the central and the lowest laser beams, which equals 10 degrees. The choice of θ and of the distance between the central and the parallel laser beams (4 cm) is arbitrary; however, these parameters should be small enough that the system can measure large ranges, but not so small that the pattern identification becomes complicated. Given all these parameters, the algorithm calculates the range to the object with the syntax

range = k*(b/a)/tan(angle)

where angle is θ expressed in radians.

From the previous part, the range to the object in the current zone is calculated. This range is stored in a range vector; the map of the room is then created from the ranges stored in this vector, where every element represents the range to the nearest object in the corresponding zone (direction). The elements are sorted such that the first element belongs to the first zone, the second to the second zone, and so on. Using the Matlab function polar, the range vector is translated into a map of the room where the experiment was run. The map creation was done in Matlab with the syntax polar(direction, range), where direction is the angle that defines the direction of vision of the prototype, seen from a start direction. The system rotates in steps until a full rotation is made, and the ranges from each step are obtained using the algorithm described above.

The last part of the algorithm calculates the error in the map-estimation process. The true map of the room is represented as a range vector containing measured ranges between the system (the prototype) and the objects in the real room. This vector is sorted such that each element represents the true range in the same direction as its related element in the created range vector. By calculating the difference between the real values and the created values, we obtain the error between the real map and the created map. These results are described later, in the chapter on experimental evaluation and in the section on error variation.

In the appendix we can see what the created prototype looks like: the pictures show the set of laser beams placed beside the camera, and the whole prototype placed on the pan-tilt unit.


Chapter 4

Experimental Evaluation

To evaluate the performance of the developed system, two different experiments were performed. The first room I made in my office, using cardboard paper with texture. The second experiment was done in the kitchen at CAS. The aim of the first experiment was to prove that the idea of using spread light as a measurement tool works. The aim of the second experiment was to see how the simple prototype could map a real room with real obstacles. Both experiments are also used to analyze the error of the system, which is discussed in the section on error variation. In figure 4.1 we can see a sample of the first experiment: the pattern of the projected laser beams on the paper wall. It shows how the experiment was run in reality.

4.1 The first experiment

From the first experiment we can see, in figure 4.2, that the system is fairly unsuccessful in creating a map that describes the real map of the room; the real map is represented in the figure by the dashed line and the created map by the thick line. There are too many errors in the created map in comparison to the real map. These errors occur mainly because the system does not use any high-accuracy tools for the camera and the laser beams; besides, the system does not use any filters or calibration algorithms. Using some filtration and calibration methods, the results become better; the effects of filtration and calibration are described later.

However, the results of this experiment can be described by the accuracy of the map-estimation process, i.e. how well the system estimates the ranges to the observed points in the room. In other words, the less error the system has, the better it is.

Figure 4.1. A sample of the first experiment.

The error in the map-estimation process is obtained using the statistical calculation below:

error(x) = (1/n) · Σ_{i=1..n} (x_i − x̂_i)

Here error(x) is the calculated expected value of the error, where x is the current direction range that varies over i = 1...n, n is the number of directions in the full rotation, and x̂ is the real range value. The first experiment gave an expected error of error = 31.9%. The error also varies over the experiment, being sometimes high and sometimes low. The variance is calculated as follows:

var(x) = (1/n) · Σ_{i=1..n} (x_i − x̂_i)² − error(x)²

var(x) = error(x²) − error(x)²

From this calculation, the variance of the map-estimation process in the first experiment was var(x) = 14.4%.

Figure 4.2. The created map of the first experiment. The unit here is cm.

This means that the range measurement is obtained with an error that lies in the confidence interval E = [17.6, 46.4]%, where E is the interval within which the error of our measurements varies. This high amount of error is caused by the high noise of the camera: if we make two range measurements of one view, we can obtain two different ranges, because the camera noise affects the images such that the position of the pattern differs between the two measurements.
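In Matlab these statistics can be computed as below; the example vectors are assumed, and taking the per-direction errors relative to the true ranges (to give values in percent) is my reading of how the numbers above were obtained. The text uses var(x) as the half-width of the interval E, and the last line follows that usage:

    % Error statistics of a created map, following the formulas above.
    xest = [310 295 402 288];                  % assumed estimated ranges [cm]
    xhat = [300 320 380 305];                  % assumed true ranges [cm]
    e = (xest - xhat) ./ xhat * 100;           % per-direction error in percent
    errMean = mean(e);                         % error(x); 31.9% in experiment 1
    errVar  = mean(e.^2) - mean(e)^2;          % var(x) = error(x^2) - error(x)^2
    E = [errMean - errVar, errMean + errVar];  % the interval E used in the text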

4.2 The second experiment

The result of the second experiment is shown in figure 4.3. Here I ran the experiment in the kitchen of the institution CAS. The aim was to see how well the simple prototype can estimate the map of a real room with real obstacles, such as a kitchen. In figure 4.3, the real map of the kitchen is given by the dashed line, while the map created by the prototype is represented by the thick line. We can see from this figure that the created map is not so good, in the sense of the high amount of error compared with the real values.

Figure 4.3. The created map of the second experiment. The unit here is cm.

However, the result of the second experiment shows that the system has the ability to create a map of a room. The system could be useful as a map-creating system, on the condition that it is developed so that the error in the map is minimized. The factors that affect the results negatively are described in the section on error variation.

From these two experiments we can say that the system shows promising performance. In other words, the system has the ability to create a map of the room, which fulfils the main task of this master's project. By improving the tools and the calculation algorithms, we could obtain better results; the improvements are described in the section on the expansion of the idea.

4.3 Error Variation

As we can see from both experiments, the error in the map is too big, which means that a map created using this prototype will not be useful for mapping. However, at the beginning of this thesis it was mentioned that these errors would occur due to the poor equipment used in this prototype. In general, the errors in the results were caused by the low accuracy of the equipment, not because the idea is false. By analyzing the results closer, it was found that the system could be improved on these points:

• a camera with a larger number of pixels and with less noise;

• a better filtration technique, which gives more accurate results even when we have bad pictures;

• laser beams with a more concentrated pattern, so the patterns are better localized;

• using more than a set of two pictures (A and B), so that the distance between the patterns is calculated as an average of several values rather than from a single difference between A and B.

Errors that do not depend on the points mentioned above can actually be compensated for by improving the algorithm; such an improvement is about enhancing the filtration method so that better accuracy is obtained. However, the aim of this master's project was not to create a system that gives high-quality results, since the idea is quite new of its kind, and it was not expected that this prototype would give results with high accuracy. Yet the obtained results demonstrate that the idea can create a map of a room using relatively simple equipment: simple laser beams and a simple camera.

4.4 The expansion of the idea

Some analysis of the method was performed to determine ways to improve it. This analysis led to some improvements of the image filtering, which were applied to the first experiment. The improvements can be seen in figure 4.5, where the map of the room was created using exactly the same pictures as in the first experiment; only the algorithm was changed, by using a better filtering method than the one used in the first experiment. In practice, the following improvements were made:

1. calibration of the constants k and θ so that an optimal combination of the two is obtained. The calibration was made by iterating over these two parameters on one range measurement, so that the error between the estimated value and the real value is minimized (a sketch of such a search is given after this list).

2. the filtering method was improved by adding a non-maxima-suppression step. In other words, the difference picture C is processed through this calculation:

D = √(C²)/h

where h is a large constant. With this operation, only the peaks in the picture C remain; since these peaks have the largest intensity values, only the pattern from the laser beams is kept. After this step, the laser pattern is localized more accurately and the range measurement is obtained with better accuracy.
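Here is a minimal sketch of this suppression step, under the reading that the pictures are integer-valued, so that dividing |C| = √(C²) by the large constant h zeroes every pixel below h; the array contents are made up for illustration.

import numpy as np

# Hypothetical difference picture C: low-intensity camera noise plus
# two bright laser-pattern pixels.
rng = np.random.default_rng(0)
C = rng.integers(0, 40, size=(8, 8))
C[2, 5] = 250
C[6, 1] = 240

h = 200                      # the "high constant" from the text
D = np.abs(C) // h           # sqrt(C^2) = |C|; integer division by h

# Only pixels brighter than h survive as nonzero entries, so D marks the peaks.
print(np.argwhere(D > 0))    # -> [[2 5] [6 1]]

And for item 1 above, a hedged sketch of the calibration as a plain grid search; estimate_range is a hypothetical callback wrapping the triangulation for one fixed view, not a function from the thesis implementation.

import numpy as np

def calibrate(estimate_range, real_range, k_grid, theta_grid):
    # Try every (k, theta) pair on one reference measurement and keep the
    # combination that minimizes the error against the known real range.
    best = (None, None, np.inf)
    for k in k_grid:
        for theta in theta_grid:
            err = abs(estimate_range(k, theta) - real_range)
            if err < best[2]:
                best = (k, theta, err)
    return best   # (k, theta, minimal error)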

In numbers, this improvement of the algorithm gave an expected error of error2 = 24.7% with a variation of var2 = 10%, which means that this change of the algorithm is indeed an improvement.

One very important thing was discovered while developing this system: it could be improved to work with only one camera and one laser beam. If the projected light is constructed such that its spreading angle (θ) is equal to the spreading angle of the camera, then the pattern from the spreading light source has the same position in the image regardless of how far away the object is, and the same holds for the pattern from the central laser beam. This means that we need not calculate the positions of these two patterns; they can be set as fixed constants in the triangulation calculation. The remaining pattern, from the parallel laser beam, takes different positions in the image depending on the range to the object, so the system only needs to find this pattern in order to estimate the range. In other words, the system can be improved so that it estimates the range to the object using only one camera and one laser beam.

An additional option is a set of three laser beams producing the pattern shown in figure 4.4, which gives the system full information about both the range to the object and the pose of the object. Here φ is equal to 120 degrees and τ is chosen arbitrarily, but should be within 10 cm. The reason for this design is that the pose of the observed object (surface) can be obtained from three points, which capture all possible poses of the object. In other words, any pose of the observed surface can be described by three points, provided the points are placed at equal distances from each other; this placement is obtained exactly when the angle φ is chosen as 120 degrees.
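To make the pose claim concrete: three non-collinear points on a surface determine its plane, and the surface normal (its pose) follows from a cross product. A minimal sketch with made-up point coordinates:

import numpy as np

# Three hypothetical range points measured on the observed surface, in cm.
p1 = np.array([0.0, 0.0, 100.0])
p2 = np.array([10.0, 0.0, 102.0])
p3 = np.array([5.0, 8.7, 101.0])

# Two edge vectors lie in the surface plane; their cross product is normal to it.
n = np.cross(p2 - p1, p3 - p1)
n = n / np.linalg.norm(n)   # the unit normal describes the pose of the surface
print(n)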


Figure 4.4. The optimal design of the laser beams.


Figure 4.5. Map of the room created using the improved version of the system. The unit here is cm.


Chapter 5

Application Areas

As already mentioned in the introduction of this thesis, the aim of this project is to give a fast mapping method for robotics applications using simple tools. In other words, the system in this project has to be better at creating maps for robot applications in the sense of simplicity and short time requirements. The system does, however, have a number of alternative applications. Personally, I have thought of three possible applications that do not involve robots.

5.1 Simple range finder

The developed system could be modified into an ordinary range finder for applications that require only modest accuracy but demand a cheap and easy-to-use device. The range finder constructed here will probably not give the same quality of results as similar range finders such as the laser range finder or the sonar. But thanks to its low cost, this range finder could be used in more applications than before, which makes it competitive with the other range-finding methods.

5.2 Blind eye

Because this system is able to generate the map of a room quickly, it could also be combined with an acoustic system to provide a blind person with a tool that helps her find her way.

5.3 Guard system

Since guard work is based on observing an area and deciding whether an object should be accepted or rejected, this system could be implemented as a guard system: it can watch an area and determine the distance to objects and their direction of motion. Depending on the situation, the system could then alert the human guard, who can handle it accordingly.


Chapter 6

Conclusions

What was solved in this master's project? The main problem was to prove the idea of a system that can find the map of a room using methods simpler than the known ones. The system was described in the theory section as a model consisting of the underlying physics and mathematics combined with the theory of vision. The problem was then to create a simple system, based on this theoretical model, that can estimate the map of the robot's surroundings; the results were analyzed to determine whether the created system achieves the goal of the project. The solution was derived in the theory section, and its implementation is described in the method description section.

In general, the problem of this project was solved by analyzing the results, which support the conclusion that the created system is able to estimate the map of a room using simple materials, but with errors that make the created map much worse than maps created with other methods. These errors can be summarized as an expected error of 24% with a variation of 10% of the range value. The values were obtained by analyzing the experiment where the constructed prototype, running the constructed algorithm, was used in a room with a known map. The errors occurred not because the idea is flawed, but because the materials used were of low accuracy, which made the results so erroneous.

One could ask: why did I not try to improve the system until good results were obtained? The aim here was not to create a perfect system with the same performance as similar systems that find the map of a room with high accuracy. The idea of this master's project was to give a new suggestion for how to solve the map-estimation problem with simpler and cheaper equipment than the other systems; in other words, the system created here can be seen as an approach opening a new opportunity for map-estimation methods. By the results obtained here, this task is achieved.

What can we say about this project? It has many drawbacks, caused by the low accuracy of the equipment used. On the other hand, we have a very simple and relatively fast method that is able to create a map of a room for robot applications. Remember that the systems used in robots today demand many conditions to be fulfilled in order to work well, and these systems are often expensive. Finally, I am proud to have found a system that can be improved into a very useful tool for future robot applications.


Chapter 7

Summary and Future Work

How could the results from this project be utilized in applications? The answer is that this system could be used in applications that need a mapping system, especially on robots. The simplicity of this method makes it unique among systems that achieve the same goal, because it can be used in robot applications and experiments that need such a system but have limited resources to spend on map estimation.

However, some improvements have to be made before the system is optimal. In other words, depending on the application, a map-creating system of this kind should be tuned with respect to simplicity, accuracy and cost; it is hard to construct a system that fulfils all three conditions at the same time. As mentioned above, the system should be adjusted, depending on the application, so that the required accuracy is achieved. Simply put, the better the equipment used, the better the results that can be obtained. As a summary, the system can be described by the following steps (a sketch of this loop is given after the list):

• begin from a known direction φ0.

• project a set of sloping light patterns together with a parallel light pattern.

• filter the pictures so only the pattern is visible.

• compare the pattern from the sloping light with the pattern from the straight light.

• use the triangulation method to obtain the range to the object.

• use this range together with the direction angle φ to draw the point of the map in this region.

• rotate to the next zone with the direction φn+1 and repeat the process until the entire map of the room is created.
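Here is a minimal sketch of this loop; rotate_to and measure_range are hypothetical callbacks wrapping the hardware and the projection, filtering, comparison and triangulation steps, and neither name comes from the thesis.

import math

def scan_room(n_zones, step_deg, measure_range, rotate_to):
    # Build a 2-D point map by sweeping the sensor head over the room.
    points = []
    phi = 0.0                      # begin from a known direction, phi_0
    for _ in range(n_zones):
        rotate_to(phi)             # point the camera and lasers at this zone
        r = measure_range(phi)     # project, filter, compare, triangulate
        # Convert the polar measurement (r, phi) to a map point (x, y).
        points.append((r * math.cos(math.radians(phi)),
                       r * math.sin(math.radians(phi))))
        phi += step_deg            # rotate to the next zone, phi_(n+1)
    return points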


The points described above should be seen as guidelines, not strict rules. The most important factor is the performance of the created map; the system described above should therefore be adjusted so that the desired goal is achieved.



Appendix

Here are some pictures showing the prototype created and used in this work.

