
Page 1: Autonomous Robot: For the Purpose of Lawn Maintenance (ece.eng.umanitoba.ca/undergraduate/ECE4600/ECE4600/...)

Autonomous Robot: For the Purpose of Lawn Maintenance

by

Hayden Banting

Ryan Kruk

Dale Maxwell

Kristian Melo

Brett Odaisky

Final report submitted in partial satisfaction of the requirements

for the degree of

Bachelor of Science

in

Electrical and Computer Engineering

in the

Faculty of Engineering

of the

University of Manitoba

Faculty Supervisor

Dr. Douglas Buchanan, Academic Supervisor

March 9, 2018

©Copyright 2018 by Hayden Banting, Ryan Kruk, Dale Maxwell,

Kristian Melo, Brett Odaisky


Abstract

Presented is the design and implementation of a proof-of-concept autonomous robot for maintaining and cutting an urban lawn. The robot can traverse any lawn with no prior information while avoiding all obstacles. It keeps track of where obstacles were located throughout the yard, so on subsequent lawn maintenance routines it can optimize its route through the lawn. The robot utilizes precise motors and corrective feedback to ensure its movement is predictable and accurate. Additionally, the robot uses sonar sensors as well as a camera with supporting software to pinpoint the locations of obstacles in a timely manner. The robot monitors its own battery life and will return to its starting location for charging if a low battery is detected. A user can start or stop the robot through an application on their phone; this application also displays the battery life, the robot's position in the yard, and how long the robot has been running. This autonomous robot serves as a strong proof of concept that safe, efficient, and practical technologies can be developed for lawn maintenance.


Contributions

Throughout the entirety of this project there are some individuals that we must thank

for their priceless contributions. Without these individuals, this product may not have seen

the light of day. Thank you all for the knowledge and time you contributed.

• Dr. Douglas Buchanan - Oversaw our progress and kept us on track. Provided the

resources and knowledge to solve problems as they arose.

• Cory Smith - Built and assembled all the precision mechanical components and generally made our final product beautiful. Not only did we appreciate the fantastic job, but also the extra hours that he put in for us.

• Daniel Card - Provided extremely helpful insight during our countless meetings.

• Bob McLeod - Helped us realize that there is more to engineering than pure efficiency,

and that we shouldn’t solve a traveling salesman problem if we don’t have to.

• Battery Man - Generously donated a 12V 38Ah battery along with a smart charger

to power and recharge our robot. Thanks to this we were able to afford other quality

components.

• Stack Overflow - Provided us the knowledge to achieve beyond what we thought was possible.


Table 0.1: Robot and Report Contributions

Contributors (columns): Hayden Banting, Ryan Kruk, Dale Maxwell, Kristian Melo, Brett Odaisky
Work areas (rows): Power; Motor Controls; Computer Vision; Proximity Sensing; Micro-controller Communications; Mapping and Obstacle Avoidance; Graphical User Interface; Software Integration; Hardware Integration

• Main Contributor
Partial Contributor


Contents

Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i

Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii

Contents . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv

List of Figures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . x

List of Tables . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiii

Abbreviations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . xiv

1 Introduction 1

2 Power 4

2.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2.2 Test Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

2.3 Design Process . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.3.1 System Circuit Topology . . . . . . . . . . . . . . . . . . . . . 7

2.3.2 Battery . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

2.3.3 DC-DC Step Down Converter . . . . . . . . . . . . . . . . . . 9

2.3.4 Motor Drivers . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

2.3.5 Coulomb Counter . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.3.6 Motor Driver Thermal Design . . . . . . . . . . . . . . . . . 11

2.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

2.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12


3 Motor Controls 13

3.1 Design Requirements and Hardware Selection . . . . . . . . . . . . 13

3.1.1 Motor Control Hardware Selection . . . . . . . . . . . . . . . 13

3.1.2 Motor Controller Selection . . . . . . . . . . . . . . . . . . . 14

3.1.3 Absolute Orientation Sensor . . . . . . . . . . . . . . . . . . . 15

3.1.4 Arduino MEGA2560 Micro-Controller . . . . . . . . . . . . . 16

3.2 Arduino Motor Controlling Software . . . . . . . . . . . . . . . . . . 17

3.3 Testing and Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

3.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

3.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

4 Obstacle Detection: Computer Vision 21

4.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

4.2 Image Capturing Subsystem Design . . . . . . . . . . . . . . . . . . 21

4.2.1 Camera Specifications and Performance . . . . . . . . . . . . 22

4.2.2 Area of Coverage . . . . . . . . . . . . . . . . . . . . . . . . . 23

4.3 Methods of Image Parsing . . . . . . . . . . . . . . . . . . . . . . . . 24

4.3.1 K-Means Clustering . . . . . . . . . . . . . . . . . . . . . . . . 25

4.3.2 Edge Detection . . . . . . . . . . . . . . . . . . . . . . . . . . 26

4.4 Digital Filtering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

4.4.1 Colour Hue Filtering . . . . . . . . . . . . . . . . . . . . . . . 28

4.4.2 Numerical Noise Suppression . . . . . . . . . . . . . . . . . . 29


4.4.3 Discrete Fourier Transform Artifact Removal . . . . . . . . 29

4.4.4 Minimum Obstacle Size Filtering . . . . . . . . . . . . . . . . 29

4.5 Obstacle Representation . . . . . . . . . . . . . . . . . . . . . . . . . 30

4.6 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

4.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31

5 Obstacle Detection: Proximity Sensing 33

5.1 Requirements . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

5.2 Sonar Array Design . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

5.2.1 Sensor Specifications and Performance . . . . . . . . . . . . 33

5.2.2 Sensor Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . 34

5.2.3 Area of Coverage . . . . . . . . . . . . . . . . . . . . . . . . . 34

5.2.4 Supporting Hardware and Circuitry . . . . . . . . . . . . . . 35

5.3 Sonar Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

5.3.1 Processing Measurements . . . . . . . . . . . . . . . . . . . . 35

5.3.2 Scan Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . 36

5.3.3 Obstacle Representation . . . . . . . . . . . . . . . . . . . . . 37

5.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

6 Microcontroller-Microprocessor Communication 39

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39


6.2 USB Interface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

6.3 Serial Protocol . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

6.4 Data Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

6.5 End of Transmission Character . . . . . . . . . . . . . . . . . . . . . 40

6.6 Host Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

6.7 Client Functionality . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

7 Mapping and Obstacle Avoidance 42

7.1 First Pass Mapping . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

7.1.1 Mapping Nodes . . . . . . . . . . . . . . . . . . . . . . . . . . 42

7.1.2 Hardware Interpretation . . . . . . . . . . . . . . . . . . . . . 43

7.1.3 Regular Processing Logic . . . . . . . . . . . . . . . . . . . . 44

7.2 Go Home Algorithm . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

7.2.1 Dijkstra’s Algorithm . . . . . . . . . . . . . . . . . . . . . . . 47

7.2.2 Algorithm Implementation . . . . . . . . . . . . . . . . . . . . 47

7.3 Efficient Cutting Route . . . . . . . . . . . . . . . . . . . . . . . . . . 48

7.3.1 Nodal Array Transformation . . . . . . . . . . . . . . . . . . 48

7.3.2 Regular Processing Logic . . . . . . . . . . . . . . . . . . . . 50

7.3.3 Instruction List Reduction . . . . . . . . . . . . . . . . . . . . 52

7.4 Future work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

7.4.1 Mapping Accuracy . . . . . . . . . . . . . . . . . . . . . . . . 53

7.4.2 Non-90° Movement . . . . . . . . . . . . . . . . . . . . . . 53


7.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53

8 Graphical User Interface 54

8.1 Features Unique to the Testing Application . . . . . . . . . . . . . . 55

8.1.1 Computer Vision Parameter Updates . . . . . . . . . . . . . 55

8.1.2 Text Log . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56

8.1.3 Manual Directional Movement . . . . . . . . . . . . . . . . . 56

8.2 Features Unique to the Final Application . . . . . . . . . . . . . . . 56

8.2.1 Lawn Selection Menu and Start Button . . . . . . . . . . . . 57

8.2.2 Battery Life Indicator . . . . . . . . . . . . . . . . . . . . . . 57

8.2.3 Displaying Lawn Map and Current Location . . . . . . . . . 57

8.3 Features Common to Both Applications . . . . . . . . . . . . . . . . 58

8.3.1 Bluetooth Connection Button . . . . . . . . . . . . . . . . . . 58

8.3.2 Go Home Command . . . . . . . . . . . . . . . . . . . . . . . 58

8.3.3 Stop Command . . . . . . . . . . . . . . . . . . . . . . . . . . 59

8.4 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

9 Bluetooth Communication 60

9.1 Handshake . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

9.2 Host Device - Raspberry Pi . . . . . . . . . . . . . . . . . . . . . . . 60

9.3 Client Device - Android Device . . . . . . . . . . . . . . . . . . . . . 61

10 Conclusion 62


Appendix

A Motor Controls 67

B Computer Vision 68

C Sonar Sensing 73

D Mapping and Obstacle Avoidance 75


List of Figures

1.1 System Overview Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

1.2 Final Product . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

2.1 Kill Switch Response Time . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

2.2 System Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

2.3 Coulomb Counter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

2.4 TB6600HG Thermal Circuit . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

3.1 Motor Phase Voltage Steps . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16

3.2 0.1 Ω Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

3.3 Acceleration and Velocity on a 13.13 Degree Slope . . . . . . . . . . . . . . . 18

3.4 Rotational Error Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . 19

3.5 1 m Forward Movement Error Measurement . . . . . . . . . . . . . . . . . . 19

4.1 Design of Image Capturing Subsystem . . . . . . . . . . . . . . . . . . . . . 22

4.2 Example of Image Parsed by Colour . . . . . . . . . . . . . . . . . . . . . . . 27

4.3 Example of Image Parsed by Contrast . . . . . . . . . . . . . . . . . . . . . 28

4.4 Example of Computer Vision Algorithm Output . . . . . . . . . . . . . . . . 32

5.1 Sample Output of Scan Algorithm For Short Range Test . . . . . . . . . . . 36

7.1 Obstacle Search Area With Respect to the Robot . . . . . . . . . . . . . . . 44

7.2 Flowchart for High Level Mapping Route . . . . . . . . . . . . . . . . . . . . 46

7.3 Simulation of the First Pass Mapping Route . . . . . . . . . . . . . . . . . . 48

7.4 Cutting Node Array . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49


7.5 Previous Cutting Node Array . . . . . . . . . . . . . . . . . . . . . . . . . . 50

7.6 Cutting Node Array Simulation . . . . . . . . . . . . . . . . . . . . . . . . . 51

7.7 Efficient Cutting Route Simulation . . . . . . . . . . . . . . . . . . . . . . . 52

8.1 Testing User Interface Layout . . . . . . . . . . . . . . . . . . . . . . . . . . 54

8.2 Final Application Layout . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

A.1 Clock2 Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

A.2 BNO055 Slope Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

A.3 BNO055 Rotation Output . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

B.1 Flowchart for Computer Vision Algorithm . . . . . . . . . . . . . . . . . . . 69

B.2 Effects of Adjusting Camera Mounting Angle . . . . . . . . . . . . . . . . . . 70

B.3 Experimental Setup With Two Obstacles . . . . . . . . . . . . . . . . . . . . 71

B.4 Computer Vision Output For Experimental Setup . . . . . . . . . . . . . . . 72

C.1 Flowchart for Sonar Sensing Algorithm . . . . . . . . . . . . . . . . . . . . . 73

C.2 Transducer Supply Voltage Signal . . . . . . . . . . . . . . . . . . . . . . . . 74

C.3 Transducer Trigger Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

C.4 Transducer Echo Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

D.1 Flowchart for Interpreting Obstacle Detection Data . . . . . . . . . . . . . . 76

D.2 Flowchart for Updating a Node's Data Based on New Information . . . . . 77

D.3 Flowchart for Selecting Appropriate Direction to Move Next During the Map-

ping Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

D.4 Flowchart for Finding and Performing a Path to Return to the Origin . . . . 79


D.5 Flowchart for Finding and Performing a Path to a ’New’ Node . . . . . . . . 80

D.6 Flowchart for High Level Cutting Phase Routine . . . . . . . . . . . . . . . . 81

D.7 Flowchart for Selecting Appropriate Direction to Move Next During the Cut-

ting Phase . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

D.8 Flowchart for Generating Condensed Movement Instructions From List of Node Locations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

D.9 Flowchart for Sending List of Instructions to Motors . . . . . . . . . . . . 84


List of Tables

0.1 Robot and Report Contributions . . . . . . . . . . . . . . . . . . . . . . . . iii

1.1 Proposed and Accomplished System Performance Metrics . . . . . . . . . . . 3

2.1 Charging Voltage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

4.1 Summary of Image Capturing Subsystem Performance . . . . . . . . . . . . 24

5.1 Summary of Sonar Array Layout . . . . . . . . . . . . . . . . . . . . . . . . 34

5.2 Sonar Array Effective Area of Coverage . . . . . . . . . . . . . . . . . . . . . 34

6.1 Commands and Requests . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

A.1 TB6600HG GPIO Pin Assignments . . . . . . . . . . . . . . . . . . . . . . . 67

B.1 Computer Vision Test For Small 2.5 cm × 2.5 cm Obstacle Located at Origin . 68

B.2 Computer Vision Test For 5 cm × 5 cm Obstacle Located at (0,20) . . . . . . 69

C.1 Sonar Short Range Test For Obstacle Located in Front of Robot at (0,32) . . 73

C.2 Sonar Long Range Test For Obstacle Located to the Left of Robot at (144,0) . 73


Abbreviations

ADC - Analog to Digital Converter

AGM - Absorbed Glass Mat

BT - Bluetooth

CPU - Central Processing Unit

CV - Computer Vision

DOD - Depth of Discharge

GPIO - General Purpose Input Output

GUI - Graphical User Interface

MOSFET - Metal-Oxide Semiconductor Field-Effect Transistor

RPI - Raspberry Pi Microprocessor

SOC - State of Charge

TSP - Traveling Salesman Problem

USB - Universal Serial Bus

UUID - Universally Unique Identifier



1 Introduction

Automation and autonomous robots have historically been used in industry and manufacturing, but are now entering day-to-day life. These robots can already be found in the houses and yards of many: Roomba vacuum cleaners and some Husqvarna lawnmowers are already autonomous and on the market for consumers. However, these products do not come without flaws. Although they are technically autonomous, they do not make intelligent movements, sometimes taking random turns or even relying on guides. This can make these devices inefficient at best and hazardous at worst.

This report discusses the design and creation of a fully autonomous robot which aims to fix these existing problems. Using a sonar array and a camera, the robot detects obstacles with high accuracy for safe movement. Its movement is also very precise thanks to high-resolution stepper motors. It does not need to make random movements, since it uses intelligent mapping algorithms designed to move efficiently around any yard. In addition to all this, it offers a user-friendly interface which provides live information about the robot to the user's phone.

The robot as a whole is comprised of many subsections, as can be seen in Figure 1.1. Blocks shown in red are parts of the Power module. Green blocks are part of the Motor Controls module. Purple blocks belong to both the Proximity Sensing and Computer Vision modules. The light blue blocks show the Raspberry Pi microprocessor and the Arduino micro-controller. Finally, shown in orange, is the Graphical User Interface. Dashed lines between blocks denote a transfer of information, and solid lines denote the flow of power. Other sections, such as Microcontroller-Microprocessor Communication and Mapping and Obstacle Avoidance, are not shown here because they have no physical components.


Figure 1.1: System Overview Diagram

The final build of the robot, with which all tests were performed, can be seen in Figures 1.2a and 1.2b. The mounting of the sonar array, camera, circuitry and battery can be seen here.

(a) Outer Bot Picture (b) Inner Bot Picture

Figure 1.2: Final Product


Described in Table 1.1 are the metrics proposed and achieved for the successful operation of the robot. The additional metrics rows at the bottom of the table are those which were added after the initial project proposal. All the metrics in this table and their success will be discussed in detail in their respective chapters.

Table 1.1: Proposed and Accomplished System Performance Metrics

Module | Feature | Proposed Value/Range | Tested Value/Range
Power | Robot operating time per charge | > 30 mins | 50-60 mins
Power | Kill switch response time | < 10 ms | < 10 ms
Power | Power capacity threshold to trigger return home command | 90% | 52.5%
Motor Controls | Minimum slope robot can climb at 15 kg | > 10° | > 13.13°
Motor Controls | Robot ability to perform turns accurately | ± 1° | ± 0.26°
Computer Vision | Approximate size of horizontal obstacles | ± 2.5 cm | ± 1.0 cm
Computer Vision | Approximate location of obstacles | ± 2.5 cm | ± 1.6 cm
Computer Vision | Smallest obstacle to detect | 100 cm² | 6.25 cm²
Computer Vision | Time to capture and process image | < 5 s | < 3.2 s
Proximity Sensing | Approximate location of vertical obstacles | ± 2 cm | ± 2.9 cm
Proximity Sensing | Time for sonar sensors to respond with data | < 20 ms | 0.6-29 ms
Proximity Sensing | Time to process vertical object location | < 1 s | 10-20 ms
Mapping and Obstacle Avoidance | Time to process an optimal cutting path | < 15 s | 20 ms/m²
Mapping and Obstacle Avoidance | Worst-case propagating error of obstacle location | ± 30 cm | ± 30 cm
Mapping and Obstacle Avoidance | Maximum memory used for optimal route | 512 MB | 1.11 MB/m²
Graphical User Interface | Time for bot to respond to command | < 1 s | < 25 ms
Graphical User Interface | Communication range | > 20 m | > 51 m
Additional Metrics | Ability to detect multiple obstacles at once | 3 | 7
Additional Metrics | Display map on Android device | Once | On request
Additional Metrics | Accurately travel 1 m forward | ± 0.01 m | ± 0.002 m


2 Power

In Chapter 2 an overview of the working power module design is presented. The purpose of the power module is to ensure that all electrical equipment and peripherals have the necessary power to operate correctly. The final system circuit is comprised of a 12 V, 38 Ah Absorbed Glass Mat (AGM) battery that powers both a 12-5 V step-down converter, which supplies the Arduino and RPI, and two motor drivers that control the current provided to two stepper motors. Test results of the completed system circuit are presented in Section 2.2. The requirements that guided the design process of the power module, along with the design considerations, are discussed in Sections 2.1 and 2.3 respectively.

2.1 Requirements

In Table 1.1 the three main requirements that the power module was required to meet

are outlined. The first requirement is the minimum operating time the robot must have

per charge. This metric was first broadly selected to be greater than 30 mins to provide a

minimum desired runtime. The runtime at present is estimated to be between 50-60 mins

based on the battery’s voltage characteristic curve in [1] and operating current.

Figure 2.1: Kill Switch Response Time


The second requirement from Table 1.1 is to have a kill switch response time below 10 ms. The kill switch response time is defined as the amount of time it takes for the motors to stop running once the circuit has been opened. This requirement has been met through the use of a 15 A breaker switch placed between the battery's positive terminal and the rest of the system circuit. The oscilloscope reading in Figure 2.1 shows the breaker being opened at t = 0 and the motors stopping within 5 ms, thus satisfying the second requirement. The circuit's inductance then releases the remaining charge in an exponential decay.
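This measurement amounts to finding the first sample at which the decaying motor current is effectively zero. The sketch below illustrates that check in Python; the sample waveform and the 0.05 A cutoff are hypothetical stand-ins for a real oscilloscope capture, not project data.

```python
# Estimate kill-switch response time from oscilloscope samples.
# The sample data below is hypothetical; real values would come from
# a scope capture of motor supply current after the breaker opens.

CUTOFF_A = 0.05  # current (A) below which the motors count as stopped

# (time in ms after breaker opens, motor current in A): exponential decay
samples = [(0, 1.34), (1, 0.74), (2, 0.41), (3, 0.22),
           (4, 0.12), (5, 0.04), (6, 0.01)]

def response_time_ms(samples, cutoff):
    """Return the first sample time at which current falls below cutoff."""
    for t, current in samples:
        if current < cutoff:
            return t
    return None  # motors never stopped within the capture window

t_stop = response_time_ms(samples, CUTOFF_A)
print(f"Kill switch response time: {t_stop} ms")  # 5 ms for this data
assert t_stop is not None and t_stop < 10, "requirement: < 10 ms"
```

With the real scope capture in Figure 2.1 the same check yields the reported 5 ms, comfortably inside the 10 ms requirement.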

The third requirement from Table 1.1 is a function of the battery's state of charge. A Go Home routine (discussed in more detail in Section 7.2) is activated if the battery's state of charge drops below a given threshold. This threshold had to be changed from 90% to 52.5% to conform to the recommended depth of discharge (DOD) of 50% for the selected AGM battery, leaving 2.5% of the battery's capacity to execute the Go Home routine.
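The Go Home trigger reduces to a comparison against the battery's state of charge. The following sketch is illustrative only: the Coulomb-count-to-SOC conversion and the function names are assumptions, with only the 38 Ah capacity and the 52.5% threshold taken from the report.

```python
# Go Home trigger based on battery state of charge (SOC).
# The 52.5% threshold respects the recommended 50% depth of
# discharge (DOD) and keeps a 2.5% reserve for the trip home.

CAPACITY_AH = 38.0         # 12 V AGM battery capacity
GO_HOME_THRESHOLD = 0.525  # 50% DOD limit + 2.5% travel reserve

def state_of_charge(coulombs_used):
    """SOC as a fraction, from Coulombs drawn since full charge."""
    capacity_coulombs = CAPACITY_AH * 3600.0  # 1 Ah = 3600 C
    return 1.0 - coulombs_used / capacity_coulombs

def should_go_home(coulombs_used):
    return state_of_charge(coulombs_used) <= GO_HOME_THRESHOLD

# Example: after drawing 65 000 C (about 18.1 Ah), SOC is ~52.5%
print(should_go_home(65000))  # True: at or below the threshold
print(should_go_home(30000))  # False: plenty of charge left
```

In the actual robot the `coulombs_used` figure would come from the Coulomb counter described in Section 2.3.5.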

2.2 Test Results

Testing of the power module was comprised of measuring voltage levels at key nodes in the system circuit. These nodes include the motor drivers, the step-down converter, the Coulomb counter and the battery's terminal voltage. Tests on these nodes were conducted to verify that the measured voltage levels agreed with expected voltage levels under normal operating conditions. Normal operating conditions were defined as power being delivered to both of the stepper motors, the RPI, and the Arduino.

The current through the motor drivers was measured by adding a small 0.1 Ω resistor in series with the motor phase such that it would not impact the circuit negatively. The current measured through this resistor was 1.34 A, which is less than the 3 A that the drivers were programmed to output. In 1/8-step excitation mode, the peak current draw per motor is 1.90 A, as described later in this section. The maximum current drawn from the battery is 1.90 A × 2 + 1 A = 4.8 A, where the RPI and Arduino account for the additional 1 A. More detail on the motor current is provided in Section 3.3.1.

Testing of the 12-5 V step-down converter was simply done by measuring the output

voltage of the converter. The output voltage of the converter, which supplies the RPI and


Arduino, was measured to be 5.0 V. The operation of the RPI and Arduino for the Computer

Vision module, the Proximity Sensing module, the Mapping module, the GUI module and

the Motor Controls module further validates that the converter is working correctly.

The Coulomb counter was tested by measuring the fixed voltage drop across the TLV431 [2], which is a precision adjustable shunt regulator, and verifying that the code was calculating the proper current through the small series resistor in front of the load. The fixed voltage drop across the TLV431 was 9.6 V. This measurement is consistent with the designed value of 10 V. The code that counts the Coulombs used in the circuit was tested under a constant forward movement condition. The expected current draw from the two stepper motors was 12 A under 1/1-step excitation. However, the motor drivers are configured for 1/8-step excitation for smoother operation, and each draws a peak current of only 1.34 A per phase, as shown in Figure 3.2. The peak current draw of 6 A per motor was therefore reduced to 1.90 A, because the maximum current draw occurs when both phases of one stepper are at 71% of their full current draw: 1.34 A × 0.71 × 2 = 1.90 A.
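These current figures can be checked with a short worked calculation. The sketch below only reproduces the arithmetic stated in this section (1.34 A per phase, both phases at 71% in 1/8-step mode, two motors, plus roughly 1 A for the RPI and Arduino); the variable names are illustrative.

```python
# Reproduce the peak-current arithmetic for 1/8-step excitation.

PHASE_PEAK_A = 1.34     # measured peak current per motor phase
PHASE_FACTOR = 0.71     # both phases sit at ~71% of peak simultaneously
PHASES_PER_MOTOR = 2
MOTORS = 2
LOGIC_CURRENT_A = 1.0   # approximate combined RPI + Arduino draw

per_motor = PHASE_PEAK_A * PHASE_FACTOR * PHASES_PER_MOTOR
battery_peak = per_motor * MOTORS + LOGIC_CURRENT_A

print(f"Peak draw per motor:    {per_motor:.2f} A")    # 1.90 A
print(f"Peak draw from battery: {battery_peak:.2f} A")  # 4.81 A (~4.8 A)
```

The result matches the 4.8 A battery figure quoted earlier in this section to within rounding.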

The battery’s terminal voltage was tested under three conditions. The first test was when

the battery was fully charged. The terminal voltage during this condition was 12.6 V. The

second test was conducted during various charging modes. The results of these tests are

summarized in Table 2.1.

Table 2.1: Charging Voltage

Charging Mode [A] | Vt [V]
2 | 14.825
4 | 14.980
8 | 15.270

These tests were conducted to find an upper bound for the design of the Coulomb counter

that is discussed further in Section 2.3.5.

This fully integrated system testing indicates that the power module is correctly supplying power to every other module. The GUI is able to communicate with the RPI, and the RPI is able to communicate with the Arduino; this indicates that the step-down converter is operating as expected. The Arduino is controlling the motor drivers as expected, and the motors are able to move the robot; this indicates that enough power is being supplied to the drivers and, by extension, the stepper motors.


2.3 Design Process

The design of the power module is presented in the following five subsections. In the first subsection the system circuit topology is described, where the control of power flow is shown and discussed. The battery, which is the source of all power for the robot, is described in the second subsection. Presented in the third subsection is the DC-DC step-down converter, which provides the correct voltage for the Arduino and RPI. In the fourth subsection the motor drivers that control the pair of stepper motors are illustrated, and finally, in the last subsection, the Coulomb counter, which monitors the battery's state of charge, is examined in detail.

2.3.1 System Circuit Topology

The system circuit consists of a 12 V, 38 Ah battery and three components all connected

in parallel with the battery. The three components are two motor drivers, which control the

stepping function of the motors, and a 12-5 V step-down converter, which provides power to

the RPI and Arduino. The circuit also includes a Coulomb counter that is connected in series

with the positive lead of the battery. The system block diagram in Figure 2.2 illustrates the

circuit topology.

This topology provides power to each motor independently through two motor drivers.

This allows opposite rotation of the two motors, so on-the-spot turning of the robot

is achieved. The 12-5 V step-down converter was chosen to match the operating voltage for

both the RPI and the Arduino.

2.3.2 Battery

An AGM [3] battery was chosen by weighing the pros and cons of various types of

batteries, such as flooded, sealed gel, AGM, and lithium-ion. The choice was also dependent

on budgetary constraints, current requirements and the performance runtime requirement.

The advantages of an AGM battery are that it is more powerful and more cost-effective than

gel batteries and can deliver higher currents when required. An AGM battery is also not

prone to damage due to shaking because of its build [4]. This is important because the


Figure 2.2: System Circuit

robot’s movement causes shaking and the terrain it is designed to go through can be rough.

Additional benefits of an AGM battery are that it can charge faster than a flooded battery,

requires less maintenance, and does not require ventilation or a Battery Management System

(BMS) [4] [5].

The disadvantages of an AGM battery are that it is more expensive than a flooded type [4] and

it has a 50% depth of discharge (DOD), which is the amount of charge that can be used

from the total capacity of the battery, whereas a lithium-ion battery has an 80% DOD [6].

An AGM battery also weighs more than a lithium-ion battery, its voltage characteristic

curve droops more under larger loads, and it has a longer charging time and a shorter cycle life than a

lithium-ion battery [6].

The AGM-type battery was chosen because of budgetary restrictions and because its disadvantages

relative to lithium-ion are not important in terms of the requirements. Additionally, the

AGM type was chosen to keep the complexity of the system low. The system would have

increased in complexity if a lithium-ion battery was chosen because a BMS would also have


been needed [5]. The battery ratings were chosen to meet the current requirements of the

system components discussed in Sections 2.3.3 and 2.3.4. The performance requirements are

also discussed in these subsections. Finally, the AGM battery was generously donated by

the Battery Man and included a smart charger, which saved $200 from the budget.

2.3.3 DC-DC Step Down Converter

The DC-DC step down converter [7] is the intermediary circuit that allows the 12 V

battery to power the 5 V RPI and Arduino. The current rating of the converter was based

on the current requirements of the RPI, its peripherals, and the Arduino. The total current

required by these components is at most 2 A; therefore, a 3 A converter is used. The

converter is programmed to output 5 V through an external resistor value given in the

converter's datasheet [7]. The converter has an added benefit of reducing the current draw

on the battery because the down conversion in voltage provides an up conversion in current

on the output. This translates to an approximate input current of 1 A for the 2 A output

current being used with a converter efficiency of 92% [7].
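The input-current figure follows directly from the power balance of a buck converter; a short illustrative calculation with the numbers above:

```python
def input_current(v_out, i_out, v_in, efficiency):
    """Input current of a buck (step-down) converter from power balance:
    P_in = P_out / efficiency, so I_in = (V_out * I_out) / (eff * V_in)."""
    return (v_out * i_out) / (efficiency * v_in)

# 2 A at 5 V for the RPI/Arduino, drawn from the 12 V battery at 92% efficiency
i_in = input_current(5.0, 2.0, 12.0, 0.92)
print(round(i_in, 2))  # roughly 0.91 A, consistent with the ~1 A figure above
```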

2.3.4 Motor Drivers

Two motor drivers control the current and stepping function that power the stepper

motors. The stepper motors have two phases that are rated for 3 A per phase. The total

current through each driver is 6 A because each phase is fully powered at all times in the 1/1-

step excitation mode [8]. The total current the battery must be able to provide, including

the current supplying the step-down converter, is therefore 13 A. The battery capacity was

designed to provide a total of 13 A for the specified performance requirement of

30 minutes. The TB6600HG motor drivers were selected because they are able

to handle an 8-42 V input and deliver up to 4.5 A per phase [8]. They also have built-in

H-Bridge circuits that provide forward and backward rotation and internal circuitry that

produce a stepping function for a given clock. More information on the control of the motor

drivers can be found in Chapter 3.


2.3.5 Coulomb Counter

The purpose of the coulomb counter is to monitor the output current from the battery and

extrapolate its state of charge (SOC). This is used to comply with the third requirement to

trigger the Go Home routine once the SOC is below a predetermined threshold. The coulomb

counter operates by measuring the voltage drop across a known resistance and shifting the

voltage level down by a set amount so that it falls within the analog-to-digital converter (ADC)

range of the Arduino. The values read by the ADC can then be used to calculate the current

through the resistor, and these currents are accumulated over time to count the

Coulombs drawn by the load. Figure 2.3 illustrates the coulomb counter's topology.

Figure 2.3: Coulomb Counter

Rs was initially chosen to be 5 mΩ because the voltage drop across it at a 13 A current

needed to be greater than 10 times the ADC resolution. The 5 mΩ resistor was, however,

increased to 0.1175 Ω because the actual measured peak phase current is only 1.34 A, as

explained in Section 2.2, so that the voltage drop would still be at least 10 times the ADC

resolution. The TLV431 was then programmed by R1 and R2 to output 10 V given the equation

Vout = (1 + R1/R2) ∗ Vref (2.1)

from the TLV431 datasheet [2]. This output was chosen because the battery's terminal voltage

under normal operation is 12.6 V, and subtracting 10 V with the TLV431 places the result

near the midpoint of the 0-5 V ADC range. R3 was selected to give a reasonable current of 2 mA.
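The counting loop itself is simple charge accounting. The sketch below is illustrative only: the class name, the `sample` interface, and the 52.5% Go Home framing are assumptions layered on the values given in the text, not the project firmware.

```python
class CoulombCounter:
    """Sketch of the charge-accounting loop described above (illustrative)."""

    def __init__(self, capacity_ah=38.0, go_home_soc=0.525):
        self.capacity_c = capacity_ah * 3600.0  # amp-hours -> coulombs
        self.used_c = 0.0
        self.go_home_soc = go_home_soc          # Go Home threshold (SOC)

    def sample(self, v_drop, dt, r_sense=0.1175):
        """Accumulate charge from one reading of the sense-resistor voltage
        drop (v_drop, in volts) taken dt seconds after the previous one."""
        self.used_c += (v_drop / r_sense) * dt  # I = V / Rs, Q = I * dt

    def soc(self):
        return 1.0 - self.used_c / self.capacity_c

    def go_home(self):
        return self.soc() < self.go_home_soc
```

For example, a steady 10 A draw appears as a 1.175 V drop across the 0.1175 Ω sense resistor and consumes 36 000 C per hour of the battery's 136 800 C capacity.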


2.3.6 Motor Driver Thermal Design

The high currents through the motor drivers necessitated heat sinks to be attached to

the chips. The thermal design of these heat sinks can be modeled with a thermal circuit as

seen in Figure 2.4.

Figure 2.4: TB6600HG Thermal Circuit

Given an infinite heat sink, the temperature at HS (heat sink) is equal to Ta, the ambient

temperature. The thermal resistance Rt−c + Rc−hs can be determined as (107 − 50)/40,

which is approximately 1.5°C/W, where 107°C is the maximum junction temperature, 50°C is

the selected ambient temperature the robot is designed to work in, and 40 W is the power

dissipation Pd of the TB6600HG chip with an infinite heat sink, found on p. 26 of the chip's

datasheet [8]. The 8 W current source in Figure 2.4 represents the power dissipated in the

chip, calculated from Pout = I² ∗ (1.2 + 0.11) ∗ 2 ≈ 24 W and an assumed 75% efficiency,

giving Pin = 32 W and leaving 8 W dissipated in the chip. The temperature drop across

Rt−c + Rc−hs is then 8 W ∗ 1.5°C/W = 12°C, making the temperature at HS = 95°C. The

thermal resistance of the heat sink must then be (95 − 50)/8 ≈ 5.6°C/W or lower.

Given this calculation, a heat sink with a thermal resistance of 2.1°C/W was chosen [9].
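The heat-sink bound can be reproduced in a few lines (an illustrative check of the arithmetic, using the section's numbers):

```python
def max_heatsink_resistance(t_junction, t_ambient, p_dissipated, r_t_c_hs):
    """Largest heat-sink thermal resistance that keeps the junction at or
    below its limit, following the Section 2.3.6 thermal circuit."""
    t_heatsink = t_junction - p_dissipated * r_t_c_hs  # 107 - 8*1.5 = 95 C
    return (t_heatsink - t_ambient) / p_dissipated     # (95 - 50)/8 C/W

# 107 C junction limit, 50 C ambient, 8 W dissipated, 1.5 C/W junction-to-sink
print(round(max_heatsink_resistance(107, 50, 8, 1.5), 2))
```

This gives about 5.6 °C/W, so the chosen 2.1 °C/W heat sink has ample margin.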


2.4 Future Work

The electrical design of the power module works well; however, the physical design can

be improved by reducing the weight of the battery. The best way to reduce the weight

is to select a lithium-ion battery with a lower capacity to maintain a comparable cost to

the existing AGM battery. A lower capacity battery is acceptable because the DOD of a

lithium-ion battery is 80%; therefore, a 24 Ah battery can supply about 19 A for an hour before

recharging, which is equivalent to the 38 Ah AGM battery at a 50% DOD. The change to a

lithium-ion battery can also be justified by the reduction from 13 A to 10 A in total current

being used at one time due to the use of 1/8-step excitation mode instead of 1/1. This

reduction therefore validates the feasibility of reducing the capacity rating for a lithium-ion

battery.
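The capacity comparison above is simple arithmetic; a short illustrative sketch:

```python
def usable_ah(capacity_ah, dod):
    """Usable amp-hours given a depth-of-discharge limit."""
    return capacity_ah * dod

agm = usable_ah(38.0, 0.50)     # existing AGM battery at 50% DOD
li_ion = usable_ah(24.0, 0.80)  # candidate lithium-ion at 80% DOD
print(agm, li_ion)  # 19.0 vs ~19.2 Ah of usable charge
```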

2.5 Conclusion

To conclude Chapter 2, the power module consists of a 12 V, 38 Ah AGM battery, a 12-5

V step-down converter, two motor drivers and a coulomb counter. The runtime performance

metric was met and exceeded by selecting a 38 Ah AGM battery, which gives 50-60 minutes

of operating time at a constant 13 A draw. The runtime will be further extended because the

total current being drawn from the battery is reduced to 10 A by using 1/8-step excitation

mode instead of 1/1. The kill switch response time performance metric has been met with

the use of a 15 A circuit breaker. The coulomb counter is the implementation used to satisfy

the Go Home threshold performance metric. The counter will indicate when the battery

drops below 52.5% of its total capacity.


3 Motor Controls

The design and implementation of the control system used to operate the robot’s move-

ment will be presented in this chapter. The motor control system’s design provides accurate

and precise forward and rotational movements for the proper displacement and tracking

of the robot’s location from the starting position. The Arduino micro-controller receives

instructions from the Raspberry Pi microprocessor and properly executes commands using

orientation sensors, internal clock signals, and motor controller feedback. In this chapter the

required system specifications, the selected controlling hardware, software implementations,

and the successful control system performance will be discussed.

3.1 Design Requirements and Hardware Selection

The motor control design criteria reflect the need for accurate robot movements and

high output torque capability. In Table 1.1 the three main requirements that the Motor

Controls module met are outlined. The first requirement is the ability to move a 15 kg mass

up a minimum slope of 10 degrees. This metric represents the ability to travel through long

grass.

The second design requirement from Table 1.1 is the ability to rotate the robot to within

1 degree of the target rotation angle. This metric is crucial in reducing the relative location

error caused by improper orientation.

The third design requirement from Table 1.1 is the ability to move forward 1 m and end

within 0.01 m of the target location. This metric reduces the robot’s relative location error

from the starting position. By lowering the forward movement error the robot is able to

perform more movements and cover a larger area.

3.1.1 Motor Control Hardware Selection

A unique design feature of the robot is using motor rotations to track the robot’s relative

location. Stepper motors were chosen for their high output torque and high resolution

stepping ratio. To chose the proper stepper motors the robot physical requirements needed


to be specified. To represent the torque required to travel through grass a benchmark was

set that would require overcoming a 10 degree slope with a 15 kg total mass. As shown in

Equation 3.1, this would require a force of 25.6 N.

FGrass = mass ∗ gravity ∗ sin(10°) = 25.6 N (3.1)

With a 2:1 gear ratio, the 0.1 m wheel radius is reduced to a 0.05 m effective

radius. The torque and rotational inertia for the chosen mass, slope, and radius were calculated

using Equations 3.2 and 3.3.

τgrass = FGrass ∗ radius = 1.278 N∗m (3.2)

Imass = (1/2) ∗ mass ∗ radius² [kg∗m²] (3.3)

With the large torque and precise movement requirements, two 2.2 N∗m, 200-step, 3 A

stepper motors were chosen [10]. To leave margin for any additional frictional force from the

physical design, the maximum torque demand is limited to half the combined stepper motor

torque, or 2.2 N∗m. "The maximum step rate for permanent magnet stepper motors is 300

pulses per second" [11]. Limiting the step rate to less than 300 steps/s, an angular velocity

may be derived and the total torque verified using Equations 3.4 - 3.7.

(250 steps/sec) / (200 steps/rev) = 5/4 rev/sec → ωmax = (5π/2) rad/sec (3.4)

ωmax = ωo + α ∗ t → α = ωmax/t = (5π/2) rad/sec² (3.5)

τα = (Imass + Irotor) ∗ α = 0.778 N∗m (3.6)

τtotal = τgrass + τα = 2.056 N∗m (3.7)

The output torque of the stepper motors satisfies the total torque requirement for the design.
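The torque budget of Equations 3.1-3.7 can be reproduced numerically. This is an illustrative check: the rotor-inertia contribution τα = 0.778 N∗m is taken from Equation 3.6 rather than recomputed, and the 1 s ramp time is an assumption consistent with Equation 3.5.

```python
import math

g = 9.81                      # gravitational acceleration [m/s^2]
mass, slope_deg = 15.0, 10.0  # design mass [kg] and slope [degrees]
radius = 0.05                 # effective wheel radius with 2:1 gearing [m]

f_grass = mass * g * math.sin(math.radians(slope_deg))  # Eq. 3.1, ~25.6 N
tau_grass = f_grass * radius                            # Eq. 3.2, ~1.28 N*m

# Eq. 3.4: 250 steps/s at 200 steps/rev gives the peak angular velocity
omega_max = (250.0 / 200.0) * 2.0 * math.pi             # = 5*pi/2 rad/s
alpha = omega_max / 1.0                                 # Eq. 3.5, 1 s ramp

tau_alpha = 0.778                    # Eq. 3.6, using the rotor inertia value
tau_total = tau_grass + tau_alpha    # Eq. 3.7, ~2.06 N*m
print(tau_total <= 2.2)              # within half the combined 4.4 N*m motor torque
```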

3.1.2 Motor Controller Selection

The Toshiba TB6600HG motor driver chip [8] provides an 8-42 V input voltage range, clockwise

and counterclockwise directional control, multiple micro-stepping ratios, and an output current selecting


circuit. The TB6600HG meets the 3 A current requirement for the stepper motors and allows

the direct input of the 12.8 V supply voltage without a buck converter. As shown in Equations

3.8 and 3.9, a reference resistance of 0.12 Ω and a reference voltage of 0.36 V were chosen. The

reference resistance and voltage give a phase current of 3.0 A, as shown in Equation 3.10.

0.11 Ω ≤ RNF ≤ 0.5 Ω → RNF = 0.12 Ω (3.8)

0.3 V ≤ Vreference ≤ 1.95 V → Vreference = 0.36 V (3.9)

Iphase = Vreference/RNF = 3 A (3.10)

The TB6600HG has a built-in over-current protection circuit that limits the phase current

to a fixed value of 6.5 A. Since this limit is roughly twice the rated

motor phase current, an 8 A fuse was added to the TB6600HG's input. The 8 A fuse

provides over-current protection for the stepper motors. Table A.1 provides a description of

all input and output communication pins.
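The phase-current setting of Equations 3.8-3.10 is a one-line calculation; a small illustrative helper that also enforces the datasheet ranges:

```python
def phase_current(v_ref, r_nf):
    """TB6600HG phase-current setting, Iphase = Vref / RNF (Eq. 3.10),
    with the datasheet ranges of Eqs. 3.8 and 3.9 enforced."""
    assert 0.11 <= r_nf <= 0.5, "RNF outside the Eq. 3.8 range"
    assert 0.3 <= v_ref <= 1.95, "Vref outside the Eq. 3.9 range"
    return v_ref / r_nf

print(round(phase_current(0.36, 0.12), 2))  # the 3.0 A design point
```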

3.1.3 Absolute Orientation Sensor

The BNO055 absolute orientation sensor [12] was chosen to monitor the relative angle

during rotational movements, and provide acceleration and slope measurements during the

forward movement tests of the robot. The BNO055 provides 16-bit precision for the x, y,

and z orientation angles, corresponding to a resolution of 1/16 degree, which is well

below the 1 degree performance metric. Achieving the 1 degree performance metric greatly

decreases our relative location error by reducing the orientation error as shown in Equations

3.11 and 3.12.

Errorx = sin(1°/16) ≈ 0.001 m (3.11)

Errory = 1 − cos(1°/16) ≈ 0 m (3.12)

The orientation angles and acceleration vectors are also used to verify forward movement up

an inclined slope. The acceleration samples are integrated using the trapezoidal rule,

producing velocity values as shown in Equation 3.13.

vn = (tn+1 − tn) ∗ (an+1 + an)/2 (3.13)


The resulting velocity vectors and orientation vectors are used to show forward movement

up the given slope.
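Cumulative trapezoidal integration of the acceleration samples can be sketched as follows (illustrative; the function name and list-based interface are assumptions, not the project code):

```python
def velocity_from_acceleration(times, accels, v0=0.0):
    """Cumulative trapezoidal integration of sampled acceleration
    (Equation 3.13), returning one velocity sample per time stamp."""
    velocities = [v0]
    for n in range(len(times) - 1):
        dt = times[n + 1] - times[n]
        velocities.append(velocities[-1] + dt * (accels[n + 1] + accels[n]) / 2.0)
    return velocities

# a constant 1 m/s^2 acceleration integrates back to v = t
print(velocity_from_acceleration([0.0, 1.0, 2.0, 3.0], [1.0] * 4))
# -> [0.0, 1.0, 2.0, 3.0]
```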

3.1.4 Arduino MEGA2560 Micro-Controller

The Arduino MEGA2560 [13] provides 54 digital general-purpose input/output (GPIO)

pins and 16 analog pins. These pins communicate with 22 GPIO and two analog pins on

the TB6600HG motor controllers and three GPIO pins on the BNO055 orientation sensor,

as shown in Table A.1. The MEGA2560 provides a programmable pulse-width modulation

(PWM) pin (clock2), which is used to control the TB6600HG's current stepping rate. The

clock2 output compare register (OCR2A) is programmed to decrease or increase clock2's

frequency as shown in Equation 3.14.

fclock2 = 16×10⁶/(4 ∗ 256 ∗ (OCR2A + 1)) Hz (3.14)

The OCR2A can be set from 0 to 255, resulting in a clock2 frequency of roughly 61 Hz to 15.6 kHz.

The robot forward and rotational movement clock2 frequency of 120 Hz, as shown in Figure

A.1, is initialized by setting OCR2A = 129. The clock2 stepping frequency was verified by

measuring the motor phase voltage, as shown in Figure 3.1.

Figure 3.1: Motor Phase Voltage Steps
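Equation 3.14 can be evaluated directly to confirm the OCR2A = 129 setting (an illustrative check of the arithmetic):

```python
F_CPU = 16e6  # ATmega2560 system clock [Hz]

def clock2_freq(ocr2a):
    """Equation 3.14: clock2 output frequency for a given OCR2A value."""
    return F_CPU / (4 * 256 * (ocr2a + 1))

print(round(clock2_freq(129), 1))  # the ~120 Hz stepping rate used
```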


3.2 Arduino Motor Controlling Software

The motor control software is composed of a forward displacement function and a

rotational movement function. The forward displacement function is first passed a distance

in meters. The distance is then scaled into clock cycles and compared with clock2 pulses as

shown in Equation 3.15.

cycles = distance ∗ (GearRatio ∗ SteppingRatio ∗ RotorSteps)/(WheelRadius ∗ 2 ∗ π) (3.15)

The function then enables both motors until the calculated number of clock2 pulses is

reached. The rotation function is passed either a negative or positive angle indicating the

direction and size of the rotation. The angle convention determines which motor rotational

direction is initiated. The MEGA2560 then orients the motors in the clockwise or counter

clockwise directions and enables both motors. The BNO055 [14] orientation angles are mon-

itored continuously until the specified threshold of 1/16 degrees is met.
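The distance-to-pulses scaling of Equation 3.15 can be sketched as below. The 1/8 stepping ratio and 0.1 m wheel radius are taken from the surrounding text; treating them as the constants actually used in the Arduino code is an assumption.

```python
import math

def clock_cycles(distance_m, gear_ratio=2.0, stepping_ratio=8,
                 rotor_steps=200, wheel_radius=0.1):
    """clock2 pulses for a forward move of distance_m metres (Eq. 3.15):
    pulses per metre times the commanded distance, rounded to an integer."""
    per_metre = (gear_ratio * stepping_ratio * rotor_steps) / (wheel_radius * 2 * math.pi)
    return round(distance_m * per_metre)

print(clock_cycles(1.0))  # pulses for the 1 m forward-movement test
```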

3.3 Testing and Results

Testing the motor controls module began by measuring the motor phase current by adding

a small 0.1 Ω resistor in series with a 0.5 Ω motor phase. The small resistor size ensured the

current characteristics were not adversely affected. The initial maximum phase current was

measured to be 0.8 A, roughly one quarter of the rated phase current. To increase the phase

current, the reference voltage was increased to 1.85 V, which falls within the range shown in

Equation 3.9.

Figure 3.2: 0.1 Ω Voltage


The voltage across the 0.1 Ω resistor was measured as shown in Figure 3.2. The peak-to-peak

voltage was measured to be 268 mV; the maximum phase current is therefore 1.34 A

(half of the 2.68 A peak-to-peak current).

Due to an increased robot mass the first design metric requiring the ability to move

a 15 kg mass up a minimum 10 degree slope had to be re-evaluated. Using the required

force of FGrass = 25.6 N calculated in Equation 3.1 and the new robot mass of 22 kg, the

new minimum slope is θ = 6.81°. The success of this metric is shown by plotting the linear

acceleration and velocity while traversing an inclined plane as shown in Figure 3.3. The

robot was able to maintain a positive velocity until the end of the forward movement shown

at 2.75 s.

Figure 3.3: Acceleration and Velocity on a 13.13 Degree Slope

Figure A.2 shows the robot was able to overcome a 13.13 degree incline, which is nearly twice

the re-evaluated metric of 6.81 degrees and exceeds the original metric of 10 degrees.

The second design criterion is the ability to rotate the robot to within 1 degree of the target

rotation angle. This was tested by outputting the robot's final angular position relative to

the rotational movement starting position. As shown in Figure A.3, the BNO055 is able to

disable the rotational movement with zero reported error. A physical measurement was also

taken, as shown in Figure 3.4. The 2 mm measurement error shown results in a 0.26 degree

error per 90 degree turn. The second design criterion of 1 degree error has therefore been met,

with an error of 0.26 degrees.


Figure 3.4: Rotational Error Measurement

The third design criterion is the ability to move forward 1 m with an error of at most 1 cm.

This metric was tested by centering the robot wheelbase along a single axis and measuring

the distance of both wheels from the original axis after the move. As shown in Figure 3.5, the 1 m

forward movement error is 2 mm. The third design criterion of 1 cm error has therefore been met,

with an error of 2 mm.

Figure 3.5: 1 m Forward Movement Error Measurement


3.4 Future Work

The motor control design met all design criteria; however, there is room for improvement.

The TB6600HG motor controllers are not able to output the required 3 A motor phase current.

If the current capability were increased, it would be possible to increase the robot's

velocity by raising the stepping frequency. At high frequencies the motors experience

additional slip, so they must operate at a lower velocity; increasing the current would

provide greater torque, which helps eliminate this slip. An acceleration function could also be

implemented to avoid slip between the grass surface and the robot wheels: gradually ramping

the wheel revolutions per minute up and down would help eliminate any surface-contact

slippage.

3.5 Conclusion

The motor control design and implementation meet all the design metrics. The NEMA23 stepper

motors provide a high-resolution stepping ratio and powerful output torque to perform all

robot movements. The BNO055 provides high-precision orientation and acceleration data

used to control the robot's position and orientation. The TB6600HG motor controllers provide

robust motor control and high output current. The Arduino MEGA2560 microcontroller

provides an easy-to-use communication platform to integrate all the hardware. Together these

components allow the robot to travel 1 m to within 0.002 m, perform rotational movements

to within 0.26 degrees, and propel the 22 kg robot up a 13.13 degree slope.


4 Obstacle Detection: Computer Vision

Computer vision is the process of analyzing digital information representing the real

world and acquiring information from which a computer can make decisions [15]. In the

context of this robot, computer vision is used to process captured images from an integrated

camera and identify any obstacles to avoid. As this is a lawn maintenance robot, anything

which is not grass should be identified so a decision can be made regarding the safe travel of

the robot. Presented in this chapter will be the means of capturing meaningful images, how

suitable computer vision techniques were selected for this particular system, the relevant

theory for said techniques, the required supporting digital filters, and how processed images

represent obstacles in the real world.

4.1 Requirements

Computer vision has three design criteria to meet: detect obstacle locations accurately,

detect all obstacles which are larger than a minimum size, and complete the

previous two within a specified time window. These criteria introduce an accuracy-versus-time

trade-off, where the accuracy of the obstacle-finding algorithm must be

balanced against the time it takes to complete.

4.2 Image Capturing Subsystem Design

The image capturing hardware is responsible for capturing meaningful images and send-

ing them to the computer vision algorithm. A meaningful image is one that captures the

surrounding area directly in front of the robot. In Figure 4.1 the camera is shown to be

mounted above the robot and angled downward towards the ground to ensure that the im-

ages captured contain only information about the ground. Also, the quality of the images

should be high enough that even small obstacles will not be missed by the computer vision

algorithm.


Figure 4.1: Design of Image Capturing Subsystem

4.2.1 Camera Specifications and Performance

The selected camera is the Raspberry Pi Camera Module V2. This camera is directly

compatible and easily configured with the Raspberry Pi 3 microprocessor. In addition to

its ease of use, its small size is appropriate for the robot. The camera is configured to use

a resolution of 640x480. Through various tests it was observed that this resolution

produced lower computational times than higher resolutions while still detecting

the specified minimum obstacle size at a distance of about 2 m. Detection of these

small obstacles at 2 m is more than sufficient, and is discussed in detail

in Chapter 7.


4.2.2 Area of Coverage

The camera is mounted at the very top of the outer chassis shell and angled downward to

the area directly in front of the robot, as shown in Figure 4.1. Using the camera mounting

location (d, h), camera mounting angle β, and camera field of view α the amount of area

captured can be determined. This is important as the image captured should contain enough

information such that the robot is able to make an informed decision about its next move,

but also not too much distant information that the computer vision algorithms can no longer

detect obstacles consistently.

The minimum mounting angle is the smallest angle at which the camera does not capture the

robot itself in the image. The minimum camera angle may be obtained using Equation 4.1,

βmin = tan−1(d′/h′) + α/2 (4.1)

where βmin is the minimum mounting angle, d′ is the distance the camera is from the front

of the robot, h′ is the distance the camera is above the chassis shell, and α is the camera field of

view. If the camera mounting angle is too high, then the camera will no longer be pointing

towards the ground and will capture information above the horizon. While information past

the horizon may still contain obstacle information, those types of obstacles are handled by

the sonar array, as described later in Chapter 5. The largest camera mounting angle at which

the camera still captures only the ground is given by:

βmax = 90° − α/2 (4.2)

The criteria described in Equations 4.1 and 4.2 constrain the possible camera mounting

angles such that:

βmin < β < βmax (4.3)

For camera mounting angles in the range described by Equation 4.3, there is a blind distance

in front of the robot because the camera must be aimed slightly forward so as not to capture

the robot itself in the image. That blind distance is:

dblind = h ∗ tan(β − α/2) − d′ (4.4)


The effective distance captured forward beyond the blind distance is,

∆x = h ∗ (tan(β + α/2) − tan(β − α/2)) (4.5)

and the effective width captured to the left and right past the blind distance is:

∆y = 2h ∗ tan(α/2)/cos(β) (4.6)

Equations 4.4, 4.5 and 4.6 were found by analyzing the physical geometry shown in Figure

4.1. An optimization analysis was performed on these equations to select an appropriate

camera mounting height and angle which will produce an image which captures a large area

while minimizing the blind distance. In this analysis the practicality of a mounting height

and angle is also taken into account. The robot should be as compact as possible, so the

camera need not be mounted at unreasonably large heights. Due to limited manufacturing

machinery, constructing a camera mount for atypical or very precise angles is difficult. Also

as mentioned in Section 4.2.1, the computer vision algorithms are unable to detect obstacles reliably

past 2 m. Considering all of these points, a camera height of 0.5 m and a camera mounting

angle of 45° were selected. The chosen mounting height and angle, as well as the

corresponding area captured by the system, are summarized in Table 4.1.

Table 4.1: Summary of Image Capturing Subsystem Performance

Mounting Height   Mounting Angle   Camera FOV   Blind Distance   Image Area (∆x x ∆y)

0.5 m             45°              62°          0.12 m           1.89 m x 0.85 m
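Equations 4.4-4.6 can be evaluated with the chosen mounting parameters to reproduce Table 4.1. This is an illustrative check; the camera's forward offset d′ is not listed in the text and is assumed negligible here, and ∆x comes out at 1.88 m, within rounding of the tabulated 1.89 m.

```python
import math

def coverage(h, beta_deg, alpha_deg, d_prime=0.0):
    """Blind distance and captured area from Equations 4.4-4.6.
    d_prime, the camera's forward offset, is assumed negligible."""
    beta = math.radians(beta_deg)
    half_fov = math.radians(alpha_deg) / 2.0
    d_blind = h * math.tan(beta - half_fov) - d_prime                 # Eq. 4.4
    dx = h * (math.tan(beta + half_fov) - math.tan(beta - half_fov))  # Eq. 4.5
    dy = 2.0 * h * math.tan(half_fov) / math.cos(beta)                # Eq. 4.6
    return d_blind, dx, dy

# mounting height 0.5 m, mounting angle 45 deg, field of view 62 deg
blind, dx, dy = coverage(0.5, 45.0, 62.0)
print(round(blind, 2), round(dx, 2), round(dy, 2))  # ~0.12 m, ~1.88 m, ~0.85 m
```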

4.3 Methods of Image Parsing

The first step of the computer vision algorithm is to break apart an input image and

categorize different regions by different image properties. Presented in this section will be

two methods that were used to split apart an image and why it is necessary to first parse

an image before obstacles can be found. One method is used to parse an image by colour

and the other by contrast. Two methods were implemented because, during

tests, there were cases where one method was unable to isolate an obstacle within

an image but the other succeeded. Using both methods in parallel to produce a

single output ensures that no obstacles pass through undetected. Both of these methods


were chosen to suit our Raspberry Pi microprocessor in terms of required

computational power and ease of use. The limited computational resources of the Raspberry

Pi constrain the complexity of methods which can be applied. Images need to be processed

in a timely manner to ensure the robot will not go long periods of time without having the

required information to make its next safe move. Additionally, these methods were chosen

based on the resources available to implement them. Both methods are implemented

using Python libraries which are compatible with the Raspberry Pi microprocessor, as well

as having relatively low computation times.

4.3.1 K-Means Clustering

K-means clustering is used to sort an image into different regions by colour. This method

is a vector quantization method that fits a set of n observations into k clusters [16]. Each

cluster has a so-called centroid, representing the average value of that region. The observa-

tions in this case are pixels of an image, which have colour hues as values. The clusters will

be regions of average colour hues found within the image. After the algorithm is applied,

every pixel will be assigned to a different region based on which cluster centroid its value is

closest to. This will produce k different regions of the image, each with a different average

colour. From here a filter is applied to identify regions of an image which are not in an

acceptable range of grass green, as they are declared as obstacles. This is discussed in more

detail in Section 4.4

K-means clustering was implemented using libraries from the Scikit-Learn [17] package

for Python. This particular implementation of k-means clustering uses Lloyd’s algorithm,

which is generally accepted as a standard approach by the computer science and software

engineering community [18].

In the k-means algorithm [16], a set of n observations {x_1, x_2, . . . , x_n} is sorted into k clusters {S_1, S_2, . . . , S_k}. Each cluster S_i has a mean m_i. A particular observation x_p is assigned to cluster S_i if the squared distance between x_p and that cluster's mean is the lowest over all clusters. The assignment of x_p to a cluster S_i at iteration

t is given by:

S_i^{(t)} = \{\, x_p : \| x_p - m_i^{(t)} \|^2 \le \| x_p - m_j^{(t)} \|^2 \;\; \forall j,\ 1 \le j \le k \,\}    (4.7)

After all observations have been assigned to a cluster, the mean of every cluster should be


updated for the next iteration. The new cluster mean is given by:

m_i^{(t+1)} = \frac{1}{\left| S_i^{(t)} \right|} \sum_{x_j \in S_i^{(t)}} x_j    (4.8)

This two-step process described by Equations 4.7 and 4.8 continues until no observation is reassigned to a different cluster. Once this occurs, the set of observations has been sorted into the k averages which best represent it.
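As a concrete illustration, the two-step process of Equations 4.7 and 4.8 can be sketched in pure NumPy. This is a simplified Lloyd's-algorithm sketch, not the report's implementation (which uses Scikit-Learn's KMeans); the random initialisation is an assumption for illustration.

```python
import numpy as np

def lloyd_kmeans(observations, k, max_iter=100, seed=0):
    """Lloyd's algorithm: alternate the assignment step (Eq. 4.7) and
    the mean-update step (Eq. 4.8) until no observation changes cluster."""
    rng = np.random.default_rng(seed)
    # Initialise the k means from k distinct observations (an assumption;
    # Scikit-Learn uses the smarter k-means++ initialisation by default).
    means = observations[rng.choice(len(observations), size=k, replace=False)]
    labels = None
    for _ in range(max_iter):
        # Assignment step: distance from every observation to every mean.
        dists = np.linalg.norm(observations[:, None, :] - means[None, :, :], axis=2)
        new_labels = dists.argmin(axis=1)
        if labels is not None and np.array_equal(new_labels, labels):
            break  # converged: no observation was reassigned
        labels = new_labels
        # Update step: each mean becomes the centroid of its cluster.
        for i in range(k):
            if np.any(labels == i):
                means[i] = observations[labels == i].mean(axis=0)
    return labels, means

# Pixels of a tiny synthetic "image": three green-ish and three red-ish colours.
pixels = np.array([[30, 200, 40], [35, 190, 45], [200, 30, 20],
                   [210, 25, 30], [32, 195, 42], [205, 28, 25]], dtype=float)
labels, means = lloyd_kmeans(pixels, k=2)
```

Replacing every pixel with its cluster mean then yields parsed colour regions of the kind shown in Figure 4.2.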

To satisfy the computational time requirement, the k-means algorithm was implemented with k = 3. Tests showed that for values of k > 3 the computational time criterion was consistently exceeded. Figure 4.2 is an example of how an image can be parsed into different colour regions using k-means clustering.

4.3.2 Edge Detection

An edge detection algorithm is used as the second image parsing method to highlight and outline parts of an image that stand out from the background. It is known that input images contain mostly grass, so if any part of the image has a significant contrast from what typical grass looks like, it should be identified as an obstacle. The edge detection works by applying a high-pass filter to the image, as contrasts within the image are represented by high-frequency components in the frequency domain [19]. Taking an image to and from the spatial domain requires both 2-dimensional forward and inverse transforms. Computing Fourier transforms has become a highly optimized and efficient routine [20], which makes this edge detection method a strong choice in terms of computational time to analyze an image on the Raspberry Pi.

The edge detection algorithm was implemented using libraries from the Numerical Python

[21] package which contain the functions needed to complete the analysis in the frequency

domain.

The algorithm first takes the forward transform of the image, then shifts the frequency spectrum such that the zero-frequency component is in the center of the spectrum [22]. A rectangular window is then applied to the center of this spectrum so that the zero-frequency and surrounding low-frequency components are set to zero. This removal


Figure 4.2: Example of Image Parsed by Colour

of low-frequency components is by definition a high-pass filter, as all that remains are the high-frequency components of the image. A window size of [100, 100], which eliminates the 50 lowest frequency components in each direction, was used to implement the high-pass filter. The spectrum is then shifted back to its original position and the inverse transform of the image can be taken. All that remains are the edges outlining obstacles within the image.
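A minimal sketch of this FFT-based high-pass filter using Numerical Python follows. The image size and window size here are assumed illustrative values (the report uses a [100, 100] window at its own camera resolution):

```python
import numpy as np

def edge_detect(img, win=10):
    """High-pass filter an image via the 2-D FFT: centre the spectrum,
    zero a win x win window of the lowest-frequency components, shift
    back, and inverse-transform. Only sharp contrasts (edges) remain."""
    F = np.fft.fftshift(np.fft.fft2(img))  # zero frequency to the centre
    cy, cx = F.shape[0] // 2, F.shape[1] // 2
    F[cy - win // 2:cy + win // 2, cx - win // 2:cx + win // 2] = 0
    return np.abs(np.fft.ifft2(np.fft.ifftshift(F)))

# Synthetic 64x64 "grass" frame with a brighter square "obstacle".
img = np.full((64, 64), 50.0)
img[20:40, 20:40] = 200.0
edges = edge_detect(img)
```

The response is large along the square's boundary and near zero over the flat background, which is exactly the outline the obstacle representation step consumes.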

While this algorithm is fairly straightforward, it is improved upon significantly by the use

of other filters which help suppress numerical noise, remove any artifacts, as well as remove


any small obstacle outlines which fall below the minimum obstacle size criterion. These types of filters are discussed in Section 4.4.

Figure 4.3: Example of Image Parsed by Contrast

4.4 Digital Filtering

Digital filters are applied as both pre-processing and post-processing steps to the image parsing methods to improve their output. Presented in this section are various filters and how they are used to improve the quality of parsed images by reducing noise and removing undesirable regions.

4.4.1 Colour Hue Filtering

A colour hue filter can be designed specifically for the output of the k-means clustering, as described in Section 4.3.1. After the image has been completely sorted into different regions,


each region will represent an average hue value. The objective is to identify obstacles in the

grass, so any clusters which have an average hue value outside of an acceptable range of

green should be identified. Colours come in three channels: green, red, and blue, each ranging from 0 to 255 [23]. To filter parts of the image which are not green, the filter focuses on large values in the red and blue channels, indicating that those regions contain parts

of an image which are either dominantly red or blue. As an example, in Figure 4.2 the two

top regions, which are mostly green, would not be considered to be an obstacle, while the

bottom region which is mostly red, would be.
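This cluster filter amounts to a simple per-cluster test on the average channel values. The sketch below assumes (red, green, blue) ordering and an illustrative threshold of 100, not the report's tuned value:

```python
def is_obstacle_cluster(mean_rgb, threshold=100):
    """Flag a k-means cluster as an obstacle when its average red or blue
    channel is large, i.e. the region is not dominantly grass-green.
    'threshold' is an assumed value for illustration."""
    r, g, b = mean_rgb
    return r > threshold or b > threshold

# The two mostly-green regions pass; the mostly-red region is flagged,
# mirroring the example of Figure 4.2.
clusters = [(40, 180, 50), (60, 160, 45), (200, 40, 30)]
flags = [is_obstacle_cluster(c) for c in clusters]
```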

4.4.2 Numerical Noise Suppression

As a pre-processing step to the edge detection algorithm described in Section 4.3.2, a low-pass filter is applied to the image. This has a blurring or smoothing effect on the image. Depending on the strength of the filter, faint edges which would have appeared due to noise no longer show up in the output [24]. The effect of this low-pass filter is shown

in the top-left subplot in Figure 4.3.

4.4.3 Discrete Fourier Transform Artifact Removal

One consequence of using the Fourier transform is the introduction of artifacts. Artifacts

occur from the rectangular window used in filtering the image [25]. They can also occur at the boundary of the image due to a discontinuity. The edge detection algorithm can detect these

artifacts as false edges. By using another window, this time not in the frequency domain,

the edges of the image can be forced to zero. This eliminates any false edges occurring at

the boundary of the image. Other artifacts introduced by the rectangular high pass filter

are known to be small in magnitude. These can be eliminated by zeroing small valued edges

which are significantly smaller than the maximum valued edge detected in the image.

4.4.4 Minimum Obstacle Size Filtering

Minimum obstacle size filtering is applied to images parsed by either k-means clustering or the edge detection method. After either of these methods has been applied, what remains


is an image containing only the obstacle regions (k-means) or the outside edges of obstacles (edge detection). As discussed in Section 4.2.2, the amount of physical area an image contains is known. Either of these two processed images can be discretized into several smaller images, where the number of non-zero pixels in those smaller images can be counted. This counted number of pixels can be translated to a physical area. If the area is very small, that entire piece of the discretized image can be set to zero. In summary, this small obstacle filtering removes very small obstacles present in the parsed image output which fall below the minimum obstacle size requirement. Exactly what fraction of the smaller image to set as the threshold for small obstacles has to be chosen adaptively, depending on the density of discretization as well as whether the image was analyzed by k-means or edge detection.
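A sketch of this discretize-and-count filter, assuming a binary obstacle mask and illustrative tile size and threshold fraction (in practice these are set adaptively as described above):

```python
import numpy as np

def remove_small_obstacles(mask, tile=8, min_fraction=0.05):
    """Split a binary obstacle mask into tile x tile pieces and zero any
    piece whose non-zero pixel count falls below the minimum-size
    threshold. Tile size and fraction are assumed illustrative values."""
    out = mask.copy()
    h, w = out.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            piece = out[y:y + tile, x:x + tile]  # a view into 'out'
            if piece.sum() < min_fraction * piece.size:
                piece[:] = 0  # too few pixels: below minimum obstacle size
    return out

mask = np.zeros((16, 16), dtype=int)
mask[0, 0] = 1          # a one-pixel speck of noise
mask[8:16, 8:16] = 1    # a solid obstacle region
cleaned = remove_small_obstacles(mask)
```

The speck is erased while the solid obstacle region survives untouched.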

4.5 Obstacle Representation

For both types of parsed images, obstacle representation is identical. Obstacles are represented by a specified number of points outlining the parts of the processed image which are obstacles. This is done by analyzing the columns and rows of the matrix representing the image for non-zero values, as the only non-zero parts of the processed image are obstacles.

The next step is to determine how these points, which represent an obstacle in pixel coordinates, translate to real-world physical coordinates. As described in Section 4.2.2 the amount of physical area captured is known, and depending on the resolution of the image a metre-per-pixel ratio k_{ptp} can be computed. In addition to correct scaling, a simple linear transform needs to be applied to all the points, as the origin in pixel coordinates is at the top-left corner of the image whereas the physical origin should be the bottom-center of the image. If the physical offset of the image with respect to the robot is known to be (x_o, y_o), and the image has a width w and height h in pixels, a point (x, y) in pixel coordinates can be

transformed to physical coordinates (x', y') using the following operator:

\begin{bmatrix} x' \\ y' \end{bmatrix} = k_{ptp} \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} + \begin{bmatrix} x_o - k_{ptp}\, w/2 \\ y_o + k_{ptp}\, h \end{bmatrix}    (4.9)
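In scalar form, and under the conventions described in the text (scale by the metre-per-pixel ratio, flip the y axis, move the origin to the bottom-centre, then apply the image offset), this mapping can be sketched as follows; the exact sign convention is an assumption:

```python
def pixel_to_physical(x, y, w, h, k_ptp, x_o=0.0, y_o=0.0):
    """Map a pixel point (origin top-left, y pointing down) to physical
    coordinates (origin at the image's bottom-centre, y pointing away
    from the robot). k_ptp is the metre-per-pixel ratio and (x_o, y_o)
    the image offset from the robot; signs are an assumed convention."""
    x_phys = k_ptp * (x - w / 2) + x_o
    y_phys = k_ptp * (h - y) + y_o
    return x_phys, y_phys

# Top-right pixel of a 640x480 frame, 1 cm per pixel, image 0.2 m ahead.
corner = pixel_to_physical(640, 0, 640, 480, k_ptp=0.01, y_o=0.2)
```

A pixel at the image's bottom-centre maps onto the physical offset itself, which is a quick sanity check on the signs.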

One other consideration when representing an obstacle as an array of coordinates is that the robot may have changed orientation due to changing its travel direction. The computer vision always takes images relative to however the robot is orientated, and thus needs to address


this change in robot orientation, θ. A rotation matrix [26] is used to transform physical coordinates with respect to the robot back to how the robot was originally orientated at the origin. If real physical coordinates with respect to the robot are known, (x', y'), then the physical coordinates with respect to the original robot orientation are given by:

\begin{bmatrix} x'' \\ y'' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x' \\ y' \end{bmatrix}    (4.10)

Finally, the true obstacle coordinates with respect to the robot's original orientation and starting location are given by:

\begin{bmatrix} x_{true} \\ y_{true} \end{bmatrix} = \begin{bmatrix} x'' \\ y'' \end{bmatrix} + \begin{bmatrix} x_{bot} \\ y_{bot} \end{bmatrix}    (4.11)

where (x_{bot}, y_{bot}) is the location of the robot when the image was taken.

In Figure 4.4 a sample output of an obstacle identified in an image and represented by

real world physical coordinates is shown.
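Equations 4.10 and 4.11 together amount to a rotate-then-translate step, which might be sketched as:

```python
import math

def to_global_frame(x_rel, y_rel, theta, x_bot, y_bot):
    """Rotate a robot-relative point by the robot's heading theta
    (Eq. 4.10), then translate by the robot's position (Eq. 4.11) to
    obtain coordinates in the original starting frame."""
    x_true = math.cos(theta) * x_rel - math.sin(theta) * y_rel + x_bot
    y_true = math.sin(theta) * x_rel + math.cos(theta) * y_rel + y_bot
    return x_true, y_true

# A point 2 m ahead of a robot that has turned 90 degrees left and
# driven to (1, 1): the forward offset now points along the world -x axis.
x_t, y_t = to_global_frame(0.0, 2.0, math.pi / 2, 1.0, 1.0)
```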

4.6 Future Work

One potential area for improvement is the edge detection algorithm, specifically how the high-pass filter was implemented. Instead of a rectangular window, another type of filter, such as a Gaussian filter, could be implemented. This type of filter would reduce the artifacts and noise introduced by the rectangular filter [25]. This change would improve the accuracy of obstacle locations.

4.7 Conclusion

The computer vision algorithms are able to detect obstacles accurately from images provided by the camera. The size of the smallest obstacle which the algorithms are able to detect is well below the design criterion. This ensures that the robot will be able to avoid even very small obstacles. Additionally, the algorithms are able to complete within the specified amount of time. This ensures the robot does not go long periods of time without having the


Figure 4.4: Example of Computer Vision Algorithm Output

necessary information to decide its next move. Through tests it was shown that the computer vision algorithms can detect two unique obstacles simultaneously. Additional experimental results as well as high-level flowcharts of the algorithm implementation can be found in Appendix B.


5 Obstacle Detection: Proximity Sensing

Proximity sensing is the process of detecting objects by emitting ultrasonic waves and

analyzing the returning waves. In this project such techniques were used to locate and log

object locations within a specified range of our robot. In this chapter the sensor orientations for capturing objects, the hardware used to operate the sensors (transducers), and the data collection protocol used in our object-detecting sonar array will be discussed.

5.1 Requirements

Proximity sensing has three design criteria to meet: detect obstacle locations accurately, compute obstacle locations within a specified time window, and have the sonar array respond and produce measurements within a specified time window. Just as was the case for the other obstacle detection module, the proximity sensing module should output accurate obstacle information in a timely manner.

5.2 Sonar Array Design

The design of the sonar array optimizes the placement of five transducers to collect the most valuable obstacle information for the robot. Knowledge of individual transducer performance is required to evaluate the performance of the five transducers working together.

5.2.1 Sensor Specifications and Performance

The transducers emit a burst of 8 acoustic pulses at 40 kHz when triggered by a 10 µs signal from the controlling hardware. The ultrasonic pulses radiate 30 degrees around the transducer's center axis. The transducers have an echo pin, which stays low until the sound wave reflects back to the transducer. Once the echo pin returns high, the length of the echo pin pulse can be measured, and that time corresponds to an obstacle position. This is discussed in more detail in Section 5.3. The transducers have a range of 2-500 cm with ±0.3 cm resolution, which exceeds the design requirement of ±2 cm resolution.


5.2.2 Sensor Layout

The sonar array consists of five transducers, each angled in different directions covering

the left, the right, and the front of the robot. One sensor is responsible for the left direction,

another for the right, and the remaining three for the front of the robot. The reason three

transducers were allocated for detecting obstacles in front of the robot is to guarantee any

obstacles in front of the robot are detected, regardless of how the obstacle is orientated.

There are cases where an obstacle may cause the sound wave to reflect away from the sonar instead of back towards it. Using three forward-facing transducers, each with different

angles, the array is able to detect objects with different orientations. The transducer array

layout is summarized in Table 5.1 where the angle and mounting location for each transducer

is given relative to the center of the robot.

Table 5.1: Summary of Sonar Array Layout

Transducer Number           1          2             3          4              5
Transducer Angle [°]        90         -10           0          10             -90
Transducer Position [cm]    (0.14, 0)  (0.14, 0.11)  (0, 0.11)  (-0.14, 0.11)  (-0.14, 0)

5.2.3 Area of Coverage

The transducer array layout provides for coverage in all directions around the robot

except directly behind. As mentioned in Section 5.2.1 each transducer emits a cone of sound

as shown in Figure 5.1. That means that information is not just received along a line at a

particular transducer angle, but rather at a cone at a particular sonar angle.These cones of

sound are illustrated in the output plot by two arrows that represent the size of the cone.

This means the the area covered by a single transducer in the real world depends on how this

cone looks, and if the cone overlaps with other transducer’s cones. However, the mapping

functions which consider the locations of obstacles as will be discussed in Chapter 7, discard

any obstacle information which is not directly to the right, left or in front of the robot. This

means angular information which comes diagonally from the cone at far distances is also

discarded, reducing the usable area of coverage the sonar array can provide.

Table 5.2: Sonar Array Effective Area of Coverage

                  Left      Front     Right
Effective Area    2.5 m²    2.5 m²    2.5 m²


5.2.4 Supporting Hardware and Circuitry

The transducer array is operated using six digital GPIO’s, five analog pins from the

Arduino micro-controller and a MOSFET switch to initiate all the sonars. The MOSFET

switch in controlled by one digital GPIO to output 5 V when GPIO is HIGH, and 0 V when

GPIO is LOW as shown in Figure C.2. The five analog pins are used to send a minimum

10 us pulse to each transducer as shown in Figure C.3. The five remaining digital GPIO’s

are inputs to receive the reflected ultrasonic signals.As shown in Figure C.4 the input pins

monitor the time from signal transmission and signal reception and transmit the data to the

Raspberry Pi.

5.3 Sonar Algorithm

The sonar algorithm is responsible for analyzing the information from the sonar array

sensors for potential obstacles, so the robot is able to use that information to make decisions

about where to travel without collision.

5.3.1 Processing Measurements

A single sonar sensor provides a single time measurement t_{mes}, proportional to its echo pin pulse, as described in Section 5.2.1. This time measurement is the time for an acoustic wave to travel to a target, reflect off the target, and return to the sensor. It is assumed the wave travels in a straight line through air at the speed of sound c_o [27], so the distance of the target from the transducer is given by:

d_{mes} = \frac{c_o\, t_{mes}}{2}, \qquad c_o = 343~\mathrm{m/s}~\text{at}~T = 20\,^{\circ}\mathrm{C}    (5.1)

The transducers have an effective range of 5 m, and each transducer will stop listening for the echo after T_{max} seconds, where T_{max} is given by:

T_{max} = \frac{2\, d_{max}}{c_o}    (5.2)


If a transducer returns a time value greater than or equal to T_{max}, there is no need to compute the obstacle distance, as no obstacle was detected within the effective sonar range.
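Equations 5.1 and 5.2 reduce to a few lines of code; the range check below mirrors the time-out behaviour just described:

```python
C_O = 343.0               # speed of sound in air at 20 degrees C [m/s]
D_MAX = 5.0               # effective transducer range [m]
T_MAX = 2 * D_MAX / C_O   # round-trip time beyond which the sensor times out (Eq. 5.2)

def echo_to_distance(t_mes):
    """Eq. 5.1: the pulse travels out and back, so the one-way distance
    is half the round trip. Returns None for an out-of-range reading."""
    if t_mes >= T_MAX:
        return None       # no obstacle within the effective range
    return C_O * t_mes / 2
```

For example, a 10 ms echo corresponds to an obstacle roughly 1.7 m from the transducer.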

5.3.2 Scan Algorithm

The scan algorithm executes a sweep of the sonar array, where one transducer is activated at a time and its response is measured. The side-scan is complete once each transducer has been measured, starting from the leftmost transducer and ending at the rightmost. The side-scan returns five time measurements from the five transducers in the sonar array. Each time measurement is analyzed using Equation 5.1, producing five potential obstacle locations, each with respect to a particular sensor. Recall that if a time measurement is greater than T_{max}, no obstacle was detected in the effective range of the sonar, so there is no need to compute the obstacle location for that measurement. In Figure 5.1 a single data point computed for each transducer in the array is shown. For this particular example, sensors two, three, and four all detected the same obstacle, which was located 32 cm in front of the robot to within ±2.9 cm accuracy.

Figure 5.1: Sample Output of Scan Algorithm For Short Range Test


5.3.3 Obstacle Representation

Obstacles are represented in the same format as that used for computer vision, discussed previously in Section 4.5. This is so outputs from either obstacle detection module can be processed in the same way. In the case of sonar sensing, a single output coordinate represents an entire obstacle, whereas in computer vision many coordinates together represent the obstacle. Another key difference when representing obstacles from the sonar sensing algorithm is that not all transducers are aligned with the robot's current orientation, as the array spans a broad range of angles. This means that when translating obstacle coordinates from the frame of the robot to the frame of the robot's starting position and orientation, the additional transducer angle must also be accounted for.

If the sonar has a position (x_o^i, y_o^i) relative to the robot's center, a mounting angle θ_i, and its time measurement gives an obstacle distance d_i, then that obstacle with respect to the robot is located at (x', y'), given by:

\begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} x_o^i \\ y_o^i \end{bmatrix} + d_i \begin{bmatrix} \cos\theta_i \\ \sin\theta_i \end{bmatrix}    (5.3)

The robot's current orientation θ needs to be considered to give coordinates with respect to the robot's original orientation, (x'', y''), which are given by:

\begin{bmatrix} x'' \\ y'' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x' \\ y' \end{bmatrix}    (5.4)

Finally, true obstacle positions which account for the robot's current location are given by:

\begin{bmatrix} x_{true} \\ y_{true} \end{bmatrix} = \begin{bmatrix} x'' \\ y'' \end{bmatrix} + \begin{bmatrix} x_{bot} \\ y_{bot} \end{bmatrix}    (5.5)

By combining Equations 5.3, 5.4, and 5.5, a single operator is defined. When applied to a particular transducer, its output representing an obstacle is transformed from coordinates relative to the transducer itself to the starting position and orientation of the robot:

\begin{bmatrix} x_{true} \\ y_{true} \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \left( \begin{bmatrix} x_o^i \\ y_o^i \end{bmatrix} + d_i \begin{bmatrix} \cos\theta_i \\ \sin\theta_i \end{bmatrix} \right) + \begin{bmatrix} x_{bot} \\ y_{bot} \end{bmatrix}    (5.6)
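This combined operator can be sketched directly. The angle convention here (angles measured from the robot's x axis, so a forward-facing transducer has angle π/2) is an assumption for illustration:

```python
import math

def sonar_to_global(d_i, theta_i, x_i, y_i, theta, x_bot, y_bot):
    """Combined operator of Eq. 5.6: place the reading d_i along the
    transducer's axis (Eq. 5.3), rotate by the robot heading theta
    (Eq. 5.4), then translate by the robot position (Eq. 5.5)."""
    # Obstacle relative to the robot's centre.
    x_rel = x_i + d_i * math.cos(theta_i)
    y_rel = y_i + d_i * math.sin(theta_i)
    # Rotate into the starting orientation and translate.
    x_true = math.cos(theta) * x_rel - math.sin(theta) * y_rel + x_bot
    y_true = math.sin(theta) * x_rel + math.cos(theta) * y_rel + y_bot
    return x_true, y_true

# Front transducer mounted at (0, 0.11) m reading 1 m straight ahead,
# robot still at its starting pose: the obstacle sits 1.11 m ahead.
x_t, y_t = sonar_to_global(1.0, math.pi / 2, 0.0, 0.11, 0.0, 0.0, 0.0)
```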


5.4 Future Work

One area for improvement is the calibration procedure of the sonar array. The sonar algorithm depends on knowing the position and mounting angle of each sensor accurately. Practically, there is some position and angle error when mounting a sensor. These mounting errors translate to obstacle position errors, as where the algorithm thinks the sensor is and where it physically is are slightly different. Being able to quantify the mounting position and angle precisely would enable greater obstacle location accuracy.

5.5 Conclusion

The sonar array and sonar algorithm work together to detect obstacles to the left, front, and right of the robot. The time to collect and process data is well below the specified amount. The sonar sensors occasionally take longer than the specified amount to collect data, but this occurs only when a sensor times out, which happens if no obstacle is detected. Since the time metric for both collecting and processing is still always met, the occasional longer sensor time has no negative impact on the system. The algorithm is able to detect obstacles accurately, though not to within the specified amount. That being said, the obstacle locations are still precise enough that the robot is functionally able to use the information to avoid obstacles without ever colliding. Through tests it was verified that the sonar array and algorithm are capable of detecting up to five different obstacles simultaneously. Additional experimental results as well as high-level flowcharts of the algorithm implementation can be found in Appendix C.


6 Microcontroller-Microprocessor Communication

6.1 Introduction

In this chapter the design and implementation of a communication channel between the Raspberry Pi microprocessor and the Arduino micro-controller will be presented. This communication channel is used to send information and commands back and forth between the two devices. The Raspberry Pi is responsible for processing all the information and making decisions on behalf of the robot. A protocol was implemented to send hardware requests to the Arduino, such as to perform a particular movement command, and then wait for a response notifying that the command was successful. Additionally, the Arduino keeps track of important information such as sonar time measurements and remaining battery life. Both are sent to the Raspberry Pi upon request for further analysis.

6.2 USB Interface

The communication channel is a universal serial bus (USB) cable connected through a type A port on the Raspberry Pi side and a type B port on the Arduino side. Type A and type B ports differ only in shape; they contain the exact same four pins internally [28]. USB connectors have four pins: a Vcc pin for power, two data pins, and a ground pin [29]. The two data pins are a differential pair, d+ and d−. A single bit of information is interpreted by taking the difference of the voltages on the two pins. This reduces transmission errors due to noise, as any noise affecting the data pins is reduced by taking the difference [30].

6.3 Serial Protocol

The protocol developed uses serial communication. When sending data, both devices send information one byte at a time, one after the next. Likewise, when reading, both devices read a single byte at a time, one after the next.


6.4 Data Encoding

Data sent by the Raspberry Pi is encoded using Unicode (UTF-8), whereas data is encoded using ASCII on the Arduino side. There is no conflict, as UTF-8 is a one-to-one mapping of ASCII for the first 128 characters, which include all the alphabetic and numeric characters [31] [32]. Information sent back and forth between the two devices consists of messages which contain only these characters.

6.5 End of Transmission Character

The end of transmission character is a character chosen so that the software responsible for handling communication is able to identify when to interpret a complete message. The selected end of transmission character is “\n”. If this byte is ever read, both devices know that the sequence of every byte read up until this point can be interpreted as the complete message.
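The framing behaviour on both sides might be sketched as follows (pure Python for illustration; the real host uses pySerial and the client the Arduino serial functions):

```python
def frame(message):
    """Append the end-of-transmission character so that the receiver
    knows where the message ends. UTF-8 and ASCII agree on all of the
    characters used, so both sides interpret the bytes identically."""
    return (message + "\n").encode("utf-8")

def read_messages(byte_stream):
    """Consume a byte stream one byte at a time, yielding each complete
    message whenever the newline terminator is seen."""
    buffer = bytearray()
    for b in byte_stream:
        if b == ord("\n"):
            yield buffer.decode("ascii")
            buffer.clear()
        else:
            buffer.append(b)

# Two framed requests sent back-to-back are recovered intact.
messages = list(read_messages(frame("Sonar Data") + frame("Battery Life")))
```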

6.6 Host Functionality

The Raspberry Pi can at any time send a command or request to the Arduino. The Raspberry Pi knows if it should be expecting any data back, and how to handle that data; for example, if a request is made for the remaining battery life or the sonar array time measurements. For every communication, the Raspberry Pi will always expect a response notifying it that the command or request was carried out correctly. If there is no data to keep, the Raspberry Pi will discard the message and move on to other processing. All of the host software was implemented using functions from the pySerial library, allowing Python code to gain access to the USB interface [33].

6.7 Client Functionality

The Arduino is seen as a client of the Raspberry Pi. Upon power-up, it continuously waits for a command or request from the Raspberry Pi. Once it receives a command or request, it uses its other functions to carry it out. Table 6.1 summarizes how the Arduino responds to commands and command arguments sent by the Raspberry Pi. All of the client software was implemented using built-in serial functions within the standard Arduino libraries [34].

Table 6.1: Commands and Requests

Command         Command Argument    Hardware Function    Data Returned
Battery Life    None                Coulomb Counter      Remaining Battery Percentage
Sonar Data      None                Sonar Array          Time Measurements
Move Forward    Distance            Motor Functions      None
Rotate          Angle               Motor Functions      None


7 Mapping and Obstacle Avoidance

In order for the autonomous robot to successfully perform lawn maintenance it must

be able to navigate around any given yard. The mapping and obstacle avoidance module

generates movement instructions to traverse a yard while keeping track of and avoiding

obstacles. It does this to generate a more efficient path for subsequent cutting routes. To

explain how this process works the following sections will discuss how the robot interprets

the data received from the detection modules, what path-finding techniques it uses, and how

it keeps track of all the information. By understanding and applying these techniques the metrics in Table 1.1 have been achieved. These include low processing time for the optimal cutting path, dealing with worst-case propagating error, mapping a sufficiently sized lawn, and achieving very low memory usage.

7.1 First Pass Mapping

The goal of mapping and obstacle avoidance is to perform a route that safely traverses the entire area of accessible grass in the yard. Prior to this, the robot must discover where it is safe to go. To do this the robot first performs a mapping route, travelling to every area in the yard using an algorithm that ensures no spot is missed. While it travels it keeps track of every location and whether or not that location contains an obstacle, using the previously mentioned detection hardware. Once it has all the information it returns to the starting location and saves the mapped information to memory. The following highlights how the robot successfully creates a map of a yard, stores the minimum amount of information, deals with propagating error, and maps a sufficiently sized lawn.

7.1.1 Mapping Nodes

To successfully navigate around a lawn, not only are details about the obstacles in the lawn required, but all of this information also needs to be saved. To do this, the entire lawn is treated as a square grid. At the center of each square a node is placed. For this project a node is defined as a location which either the robot or an obstacle can occupy. The robot moves between nodes and only rests at the position of a node. As the lawn is


traversed and more locations are encountered, nodes are placed and added as objects in a

one dimensional array.

The distance between nodes was carefully chosen based on the size of the robot, the

maximum size of a future cutting blade, and the simplicity of the later-mentioned nodal

array transformation (Section 7.3.1). The selected spacing between nodes is 60 cm, which is 5 cm

more than the diagonal length of the robot. This is important because when turning some

room is needed if the node is close to a wall. The largest cutting blade that can fit beneath

the robot is 30 cm. Notice that the selected size is a multiple of the blade size. This allows

for easy nodal array transformation later on.

In terms of the software, these nodes are saved as objects with a few helpful attributes.

Every node will have an x and y location, a “type” and a list of neighboring nodes that

are safe to travel to. The “type” attribute of these nodes conveys very useful

information and can be assigned one of three values: “obstacle”, “new”, or “old”. The

“obstacle” attribute simply means that there is something at or close to that node that

is preventing the robot from traveling there. “Old” means the node is at an obstacle-free

location that has been previously visited. “New” means the node is also obstacle-free, and

has not been visited previously by the robot.

Because nodes are tracked in this way, a great deal of memory is saved. Every attribute of

a node carries meaningful information, yet is stored in values as simple as integers. By using

these simple, condensed data types, memory is handled in an efficient way.
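As a sketch, the node object described above might look like the following Python class. The attribute and class names here are assumptions for illustration, not taken from the project source.

```python
class Node:
    """A grid location that either the robot or an obstacle can occupy."""
    OBSTACLE, NEW, OLD = 0, 1, 2  # compact integer values for the "type" attribute

    def __init__(self, x, y, node_type=NEW):
        self.x = x                # grid x coordinate (in 60 cm node units)
        self.y = y                # grid y coordinate (in 60 cm node units)
        self.type = node_type     # OBSTACLE, NEW, or OLD
        self.neighbors = []       # indices of neighboring nodes safe to travel to

# Nodes are appended to a one-dimensional array as the lawn is traversed.
nodes = [Node(0, 0, Node.OLD), Node(1, 0), Node(0, 1, Node.OBSTACLE)]
```

Storing the type as a small integer rather than a string is one way to realize the memory savings mentioned above.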

7.1.2 Hardware Interpretation

When doing software tests an area was classified in a binary fashion. A node is either

interpreted as an obstacle, or it is not. However, real hardware does not know, with perfect

certainty, whether a location contains an obstacle or not. Due to this, noisy data is received

and the robot must make a decision on whether there is an obstacle or not.

First, the areas in which obstacles may be found are noted (hereby referred to as “search

areas”). These search areas are chosen due to the range and amount of hardware on the

robot (Figure 7.1). Information may only be gained from what the sonar array and camera


can see. Since the sonar array is arranged to gather information from the left, right, and front

of the robot, each search area was chosen to span the same distance as the spacing between

nodes, so that when an obstacle is found in one of these areas the single node occupying the

center of that area is updated accordingly.

Figure 7.1: Obstacle Search Area With Respect to the Robot

To interpret data from the sonar array the received data is analyzed. Each transducer in

the sonar array returns a single x and y coordinate which is given with respect to the origin.

Since each transducer only gives a single coordinate point, each of these must be considered

very important and assumed to be very accurate. Therefore, if a single transducer point is

inside of any of our search areas, that node may be interpreted as an obstacle and the entire

search area is avoided.

As for computer vision, coordinates are received in the same way as from the sonar array.

The difference is that computer vision sends many more points, so classifying an area as an

obstacle requires processing. For an area to be identified as containing an obstacle, at least

10% of the total number of points sent must fall inside a search area. If fewer than 10% of

the points fall inside the search area, they are interpreted as noise and the respective node

is classified as “new”.
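The two classification rules can be sketched as follows. The square search-area geometry and all function names are assumptions made for illustration; the 10% threshold is the one stated above.

```python
def point_in_area(point, center, half_width=0.3):
    """True if a sensor point (x, y) falls inside the square search area."""
    return (abs(point[0] - center[0]) <= half_width and
            abs(point[1] - center[1]) <= half_width)

def classify_sonar(points, center):
    """A single sonar return inside the area marks the node as an obstacle."""
    return "obstacle" if any(point_in_area(p, center) for p in points) else "new"

def classify_vision(points, center, threshold=0.10):
    """Vision points are noisier: at least 10% must fall inside the area."""
    if not points:
        return "new"
    hits = sum(point_in_area(p, center) for p in points)
    return "obstacle" if hits / len(points) >= threshold else "new"
```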

7.1.3 Regular Processing Logic

When the first pass mapping routine is executed, many other functions and algorithms

are used to complete its task. The flow of these tasks is seen in Figure 7.2. The first of


these functions is the check-for-obstacles routine, which interprets the surrounding obstacles

into nodes with “type” attributes (Figure D.1). The information in the list of nodes is then

updated (Figure D.2) and used to select a direction in which to move. As seen in Figure

D.3, moving to a “new” node has a higher priority than moving to an “old” node. Moving

left also has a higher priority than moving straight which has a higher priority than moving

right. This will enforce a spiral like movement that will ensure the area around obstacles

and edges is fully mapped. Commands are then sent to the motors to perform movement in

the selected direction. In some cases the obstacle sensors are double-checked to ensure the

robot's next destination is clear.
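A minimal sketch of this direction-selection priority follows; the function name and argument form are assumptions, and the actual routine is shown in Figure D.3.

```python
def select_direction(left, straight, right):
    """Each argument is the 'type' of the node in that direction
    ('new', 'old', or 'obstacle'); returns the chosen direction or None."""
    candidates = [("left", left), ("straight", straight), ("right", right)]
    for wanted in ("new", "old"):                 # prefer unvisited nodes first
        for direction, node_type in candidates:   # left > straight > right
            if node_type == wanted:
                return direction
    return None  # boxed in by obstacles on all three sides

# The left-first bias is what enforces the spiral-like sweep described above.
```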

Once a movement cycle is completed, some checks are performed before the next cycle.

First, the robot checks if it is stuck in an infinite loop. If none of the nearest locations are

“new” for four consecutive movements, the robot will determine that it is stuck. If it is

stuck, it will then check if it is done mapping. If it is done mapping, it will perform the

go home algorithm described in Section 7.2. Otherwise, it will perform the seek new node

algorithm (Figure: D.5). This will produce instructions for the robot to travel to a node of

type “new” and continue mapping.

Every movement cycle also includes a check against the propagating-error limit, a metric

set in Table 1.1. This check exists because every time the robot moves a required distance,

it may not actually have moved that exact distance. These slight errors can build up and

become a big issue. The limit of propagating error set for the robot is 30 cm. By using

the worst case error calculated during testing the robot can estimate when it has exceeded

the 30 cm by counting the number of moves taken. With this threshold the robot can make

195 safe movements, meaning the maximum lawn size that can be safely mapped is 117 m²,

which is sufficiently large for an urban lawn. Once this is exceeded, the robot will send a

warning to the user via the Android application.
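The move-counting check described above can be sketched as follows. The limit of 195 safe movements is taken from the text; the callback-style warning is an assumption for illustration.

```python
# Moves allowed before the worst-case accumulated error may exceed 30 cm,
# derived from the worst-case per-move error measured during testing.
MAX_SAFE_MOVES = 195

def check_propagating_error(moves_taken, warn):
    """Call warn(message) once the accumulated worst-case error may exceed
    the 30 cm limit; returns True if the limit has been exceeded."""
    if moves_taken > MAX_SAFE_MOVES:
        warn("Propagating error limit (30 cm) may be exceeded")
        return True
    return False
```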

By repeating these processes until a “done” state is achieved, the robot successfully

produces a map of open and obstacle filled nodes within the yard. This map is saved using

numpy [21] and used to generate an optimized path that will be described in Section 7.3.


Figure 7.2: Flowchart for High Level Mapping Route

7.2 Go Home Algorithm

Once the robot has traveled to every available location in a yard and has found all the

obstacles it returns to the origin. In the following sections the use of the go home algorithm

will be discussed.


7.2.1 Dijkstra’s Algorithm

Dijkstra’s algorithm may be used to solve for the path required to move from one place

to another [35]. This algorithm was selected for its speed, success in finding the shortest

path [36], and easy accessibility.

Dijkstra’s algorithm utilizes a list of all edges between nodes, which node to start at, and

which node to end at. To find its way to the goal it uses the starting node and computes

the length from the start to every neighboring node. Then from each of these nodes it will

compute the length to all of their neighboring nodes. However, if multiple paths reach the

same node, only the shortest path is kept. This repeats until the shortest path from start

to end is found [35].
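The procedure just described can be sketched compactly using Python's standard heapq module. The dictionary-of-edges graph representation here is an assumption, not the project's actual data structure.

```python
import heapq

def dijkstra(edges, start, goal):
    """edges: {node: [(neighbor, distance), ...]}. Returns the shortest
    path from start to goal as a list of nodes, or None if unreachable."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry; a shorter path was already found
        for neighbor, weight in edges.get(node, []):
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd       # keep only the shortest path so far
                prev[neighbor] = node
                heapq.heappush(queue, (nd, neighbor))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:                  # walk predecessors back to the start
        node = prev[node]
        path.append(node)
    return path[::-1]
```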

7.2.2 Algorithm Implementation

The go home algorithm is implemented in two major areas of the mapping software. The

first of which is to go home once the robot has finished mapping the entire lawn and has

checked to ensure it is done. Once this is satisfied it ends the regular processing and attempts

to return to its origin and its original orientation. It uses the aforementioned Dijkstra’s

algorithm to compute a list of nodes which leads to the origin. These locations then undergo

minor processing so that movement instructions can be computed and then condensed (see

Section 7.3.3).

The go home algorithm is also used to help the robot get out of possible infinite loops.

Since the mapping technique tends to suggest a spiraling movement it is possible to get stuck

circling the same portion of a yard. Once the robot realizes it is stuck in an infinite loop it

will use the go home algorithm in a slightly different way. Instead of using the coordinates

of the origin as its destination, it will instead look through the list of nodes and find one

with type “new”. The go home algorithm will then find and execute the path to travel to

this location (Figure D.5).

In the full systems tests the autonomous robot was observed to successfully call the go

home algorithm when appropriate. It also physically performed the actions as required to

safely return to the origin (or requested location) without hitting obstacles. Simulations of

this path being used during mapping can be seen in Figure 7.3.


Figure 7.3: Simulation of the First Pass Mapping Route

7.3 Efficient Cutting Route

The efficient cutting route is executed after the mapping routine has been completed.

The goal of the efficient cutting route is to travel to every free mapped location from the

previous routine and all locations in between to ensure that a future cutting blade passes

over the entire area of available grass. The robot can accomplish this by transforming the

nodal array, making intelligent routing decisions, and condensing the instruction list. These

processes are described in detail in the following sections. These descriptions include the

methods used to accomplish a low processing time and deal with the propagating error as

stated in Table 1.1.

7.3.1 Nodal Array Transformation

The first problem in trying to ensure coverage of the area is that the space between the

nodes in the nodal array is no longer the same as in the mapping phase. In the mapping

routine the nodes were separated by a distance approximately equal to that of the robot

itself. Now the nodes are separated in such a way that traveling across all of the nodes

results in all area of grass being covered. To achieve this, the nodes in the new cutting array

were chosen to be arranged as seen below in Figure 7.4.


Figure 7.4: Cutting Node Array

Although the uncovered areas between the nodes may look to be a problem, this node

layout is actually very beneficial. The first reason it was chosen over a layout with 100%

coverage is the excessive overlap of the latter. This overlap can be seen in Figure

7.5 where a single node overlaps with up to four other nodes. Although this overlap does

guarantee that all area of grass will be passed over, it is important to look at how much

overlap there is. The overlap between two circles is expressed in Equation 7.1

A = (θ/2) r²     (7.1)

where θ is the angle (in radians) subtended at the center of the circle by the two intersection

points, as seen in Figure 7.5. Therefore, if a single node has four other nodes overlapping it,

exactly half the area of the node is covered twice. Not only is this unnecessary, but it

would require more computation and the tracking of nearly twice as many nodes.
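Under the stated Equation 7.1, the claim that half the node's area is covered twice can be checked numerically. The 30 cm radius and the 90° angle per overlapping neighbor are assumptions consistent with the geometry described above.

```python
import math

r = 0.30             # node radius in meters (half the 60 cm node spacing)
theta = math.pi / 2  # angle subtended by each of the four overlaps

overlap_per_neighbor = (theta / 2) * r ** 2   # Equation 7.1
total_overlap = 4 * overlap_per_neighbor      # four overlapping neighbors
node_area = math.pi * r ** 2

# Each overlapped patch is shared by two circles, so a total overlap equal
# to the full node area means half the node's area is covered twice.
assert math.isclose(total_overlap, node_area)
```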


Figure 7.5: Previous Cutting Node Array

The other reason for this selected cutting node array was motivated by aesthetics and

simplicity of movement. As a lawnmower moves in a straight line, the path cut in the grass

is not a circle but a continuous strip. Therefore, when the robot moves from one node to

another, the empty area between the two nodes is also cut. However, this only happens when

the robot exits a node in the direction opposite to the one it entered from. This is enforced by slight

changes in the Select Direction function that will be discussed in Section 7.3.2. A simulation

showing the success of the nodal array transformation can be seen in Figure 7.6.

7.3.2 Regular Processing Logic

The cutting route function performs very similarly to the mapping route function. However,

there are some major differences that require unique changes. When this function starts, all

the information about a given lawn is already available. This means that all the movement

decisions and calculations are performed before the robot even begins moving, so the logic

described here is performed entirely in software.

As with the mapping phase the process begins by checking for obstacles. This time,

instead of calling the detection hardware, the surrounding nodes are investigated for their

“type” attribute. The list of nodes is then updated as before. Next, a direction is selected,

but this time with different priorities. If the robot is facing east or west, it is prioritized to


Figure 7.6: Cutting Node Array Simulation

move straight, then left, then right. If the robot is facing north or south, it is prioritized

to move in the order of left, then right, then straight ahead (Figure D.7). This enforces the

sort of back and forth movement one might associate with cutting a lawn. The robot travels

along a long straight strip of grass, moves to a new strip of grass and moves along that in

the same or opposite direction. Once a movement is selected, it is saved to a list to keep

track of the path taken. Lastly, checks are performed for the stuck and done cases, using the

same functions as before to calculate paths to either a “new” node or back to the origin.

Once all the instructions are generated, they are sent to the instruction-list reduction

function (Section 7.3.3), which then sends the required instructions to the motors.

Successful simulation results of the cutting path can be seen in Figure 7.7.

Since the route-finding technique used for the optimal path is similar to the technique used

in the mapping phase, a lot of processing time was saved. Because a direction is chosen from

only four options, computations are very simple, and the metric set for processing time was

easily achieved. This would not have been possible with route-finding techniques such as

solutions to the traveling salesman problem (TSP). Even though a TSP solver such as

Christofides' algorithm can be efficient, it may also produce a path as much as 1.5 times

longer than the ideal path [37].


Figure 7.7: Efficient Cutting Route Simulation

7.3.3 Instruction List Reduction

Due to how instructions are generated in the cutting route function, the number of

individual movements can be reduced by compressing the movement instruction list. For

example, if the robot decides to go straight twice in a row, the instructions might read

“Straight 0.6 m, Straight 0.6 m”. Instead, these are processed to read “Straight 1.2 m”.

Not only does this mean fewer communications between the Raspberry Pi and Arduino, it

also means less time spent accelerating and decelerating the motors and fewer instructions

to store. This process is also shown in Figure D.8.
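This reduction can be sketched as a simple merge of consecutive straight moves; the (action, distance) tuple representation is an assumption for illustration.

```python
def compress(instructions):
    """Merge consecutive 'straight' moves: e.g. two 0.6 m moves become one
    1.2 m move, reducing RPi-to-Arduino messages and motor speed ramps."""
    reduced = []
    for action, value in instructions:
        if reduced and action == "straight" and reduced[-1][0] == "straight":
            # Extend the previous straight move instead of appending a new one.
            reduced[-1] = ("straight", round(reduced[-1][1] + value, 6))
        else:
            reduced.append((action, value))
    return reduced
```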

7.4 Future work

Although the mapping algorithms in this module function within their proposed metrics,

there are always changes to be made. The following sections highlight some changes that,

given a wider scope and more time, could improve this module.


7.4.1 Mapping Accuracy

Despite the mapping algorithms working as intended, there are some improvements

that can be made. Node separation is a major factor in this. Currently, nodes for the

mapping phase are spaced a distance equivalent to the length of the robot. Since nodes are

also only qualified as obstacles in their entirety, some locations that could be largely

obstacle-free do not get traversed because of a small obstacle. By modifying the software

to consider that the robot can occupy multiple nodes at once, the size of the nodes could be

reduced, thereby increasing accuracy. This, however, would require major software changes

and was not within the scope of the project.

7.4.2 Non-90° Movement

For most of the project, all that was required to perform the movement algorithms was

simple 90° movement. This made selecting travel directions very simple, and finer angles were

not necessary in the cutting route since the goal was to cut long straight lines. However, it

would be convenient if non-90° movement were implemented for the go home algorithm. This

would require a more complex neighbors attribute for the nodes and checks for possible collision

paths between nodes, since traveling at an angle means the physical robot partially overlaps

many nodes along its path.

7.5 Conclusion

To conclude, the mapping and obstacle avoidance module was able to meet and surpass

all metrics established in Table 1.1. Due to the simplicity of using the three primary search

areas to decide a direction, processing time averaged out to 20 ms/m². These simple movements

also allowed the robot to need only minimal information, keeping memory usage very low at

1.11 MB/m². Lastly, the propagating-error metric was met by using it as the limit for lawn size

and warning users when that limit was exceeded. All of this ensures that the robot functions

very well, and it has excelled in all test cases attempted. The software that performs the tasks

in this module can be found in Appendix D in the form of flowcharts.


8 Graphical User Interface

A graphical user interface is used to send commands to the robot as well as to receive

information being sent by the robot. This is done using an Android mobile device running

Android 8.1.0 and the interface was created using Android Studio. While designing the

user interface, two different versions were created: one for testing the robot and one for the

user of the final product. This decision was made due to the large number of differences

in the features of the two applications. The first application is better suited for testing,

as it allows the team to update computer vision parameters, manually control movement,

and view a communication log of commands and data sent between the Android device and

the robot. The second application was designed for the user of the final product and

implements very few manual features. Both of these applications are able to send commands

to the Raspberry Pi in less than 25 ms.

(a) Main Screen (b) Parameter Update Screen

Figure 8.1: Testing User Interface Layout


Figure 8.2: Final Application Layout

8.1 Features Unique to the Testing Application

The testing version of the graphical user interface was created with the purpose of making

systems integration and testing faster and more efficient. Because of the features unique to

this application, it was able to achieve its intended use. Using the test application, the

user is able to manually control the robot’s movements, update computer vision parameters,

and view communication taking place between the Android device and the Raspberry Pi.

These features are explored in depth in the following sections. The layout of the testing

application can be seen in Figures 8.1a and 8.1b.

8.1.1 Computer Vision Parameter Updates

The computer vision software is calibrated using six unique parameters. These parameters

are updated to suit the environment in which the robot was being tested, to obtain the most


accurate results. As this process runs on the Raspberry Pi microprocessor, a monitor,

mouse, and keyboard are otherwise needed to modify the code. The ability to alter these values

from the Android device improves the efficiency of the testing process. This feature is kept on

its own page of the test application, as shown in Figure 8.1b.

8.1.2 Text Log

To see the Raspberry Pi’s command prompt window, it must be plugged into a screen.

As the Raspberry Pi was kept on-board the robot while running tests, the command prompt

window was not visible. To improve convenience, the test version of the user interface features

a text-based log. This log is found on the main screen of the test application, above the stop

and go home buttons as seen in Figure 8.1a. Using this log, basic information pertaining

to the Bluetooth connection between the Android device and the Raspberry Pi can be seen.

Primarily, this is used to confirm that Bluetooth connection is set up correctly and that all

commands are being received.

8.1.3 Manual Directional Movement

This feature consists of four buttons each corresponding to a direction (forward, left,

right, and backwards). These can be seen at the top of Figure 8.1a. By manually requesting

one movement from the bot at a time, the functionality and accuracy of both rotational and

forward movements are tested. As the robot is designed to be fully autonomous, this feature

was not required for the final version of the application.

8.2 Features Unique to the Final Application

The final application’s purpose is to be used by a hypothetical consumer in a real-world

scenario. Due to the differences in the use of this application, it must contain features

different from those in the testing application. As seen in Figure 8.2, the final application is

designed to be simpler and more user-friendly when compared to the testing interface. This

is because the robot is intended to be completely autonomous for the user of this application.

An in-depth explanation of each of the features unique to the final application can be found


in the sections below.

8.2.1 Lawn Selection Menu and Start Button

When starting the application, the user is given the option to either map out a new lawn,

or skip straight to the cutting phase for an existing lawn in memory. To make this choice

as user-friendly as possible a combination of a spinner (a drop-down style list), a text box,

and a button was used. These are positioned as seen in Figure 8.2. The user first clicks the

spinner, which provides them with a list of options including “New Lawn”, followed by a

list of all lawns currently stored in memory. These lawns are transmitted to the application

via Bluetooth when the application first connects to the Raspberry Pi. If the “New Lawn”

option is selected, a text box becomes editable and the user is prompted to input a name

for the new lawn. Once a name is entered, the start button is enabled, ready for the user to

push. By pushing this button, the new lawn name is sent to the bot along with the command

to start the mapping phase. Alternatively, if the user instead chooses one of the existing

lawns in memory, the text box remains grayed out and the start button is enabled. The user

is then able to immediately press the start button. In this case, the application will send

the robot the name of the lawn selected, along with the command to commence the cutting

phase.

8.2.2 Battery Life Indicator

The final version of the user interface periodically requests the current battery life from

the Raspberry Pi. Once received, the value is displayed on-screen as a percentage. This

allows the user to monitor the battery life of the robot in real time so it can be charged

when necessary.

8.2.3 Displaying Lawn Map and Current Location

This feature allows the user to monitor the robot’s location on the lawn in real time on

the final application as seen in Figure 8.2. The application does this by sending a request

to the Raspberry Pi once the start button is pressed. If an existing lawn is selected, the


Raspberry Pi then sends back the coordinates of all the obstacles found for the requested

lawn. If the user asks the robot to map a new lawn, the coordinates of the obstacles are

sent individually as they are discovered. If any negative coordinates are received, the map

is shifted until all given values are positive. They are then scaled appropriately so they fit

within the designated space on screen. All obstacles are represented by a red spot. The

robot’s current location is sent separately from the obstacles. By periodically requesting the

robot’s coordinates, the application shifts them by the same amount and displays them

accurately. The robot’s current location is represented by a green spot [38].
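The shift-and-scale step can be sketched as follows. The function name and the screen size are assumptions; the same shift computed here would also be applied to the robot's coordinates so both layers stay aligned.

```python
def fit_to_screen(points, screen_size=300):
    """points: list of (x, y) in map coordinates, possibly negative.
    Returns coordinates shifted to be non-negative and scaled to fit
    within a screen_size x screen_size display area."""
    min_x = min(p[0] for p in points)
    min_y = min(p[1] for p in points)
    shifted = [(x - min_x, y - min_y) for x, y in points]   # all values >= 0
    extent = max(max(x, y) for x, y in shifted) or 1        # avoid divide-by-zero
    scale = screen_size / extent
    return [(x * scale, y * scale) for x, y in shifted]
```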

8.3 Features Common to Both Applications

Although designed for two different purposes, there is overlap in the functions the above

applications need to perform. For that reason, they share some common functions that are

discussed in this section.

8.3.1 Bluetooth Connection Button

On the main screen of both applications seen in Figures 8.1a and 8.2, a button can

be found in the top left corner. Pushing this button sends a Bluetooth pairing request to

the Raspberry Pi. Upon receiving and verifying this request, the Raspberry Pi will open

a Bluetooth socket connection to the android device. This pairing protocol is described in

Chapter 9. Once complete, the two devices are able to communicate, allowing the user to

send commands to the robot as well as receive data.

8.3.2 Go Home Command

The “Go Home” button can be found in the bottom left corner of both applications seen

in Figures 8.1a and 8.2. Pushing this button sends the go home command to the robot. Upon

receiving this command, the robot calculates the fastest route between its current location

and its starting, or “home”, location using Dijkstra’s algorithm (see Section 7.2.1), and travels along

this path.


8.3.3 Stop Command

The “Stop” button is designed to be easy to see and press for both applications. It can

be found in the bottom right hand corner and is red in color. This command acts as a kill

switch, stopping the robot from making any further movement. This is important during

testing to ensure the robot is not damaged, and it is also a safety feature for potential users

of the robot, making it necessary in both user interface versions.

8.4 Future Work

The graphical user interface created has successfully achieved all of the design parameters

created at the start of the project. It has also implemented additional features that were

not originally part of the design. Moving forward however, there are always features that

can be added to the user interface to help improve usability and convenience for the user.

One of these potential features is the implementation of a timer. This timer would allow the

user to pre-program times throughout the week that they would like their lawn cut via the

android application. Finally, more work could be done to improve the aesthetic appearance

of the map being displayed. Although the boundary is plotted correctly, this feature could be

developed further beyond the generic spots currently used.


9 Bluetooth Communication

Using Bluetooth, wireless communication was established between the android device

and the Raspberry Pi microprocessor [39]. This communication channel is used to send

commands from the graphical user interface and for the Raspberry Pi to send information

back from over 50 meters away. The Raspberry Pi is the host and the android device is the

client for the Bluetooth connection.

9.1 Handshake

To ensure the android device and the Raspberry Pi are synchronized, a software hand-

shake is used when initializing the Bluetooth connection. Upon being powered on, the robot

waits for a connection request from the Android device. The Raspberry Pi receives a univer-

sally unique identifier (UUID) from the Android device and compares it to the value stored

in memory. If the two values agree, the Raspberry Pi opens the Bluetooth socket and sends

a reply back to the Android device to confirm the connection has been created [40].

9.2 Host Device - Raspberry Pi

The Raspberry Pi must be capable of both sending information to and receiving it from

the Android device. Under this protocol, the Raspberry Pi sends a list of commands to the

Android device after each complete movement. These commands include requests to update

user interface parameters such as obstacle coordinates, the robot’s current location, and the

battery life. These are then read and responded to accordingly by the Android device as

stated in Section 9.3 below. The Raspberry Pi also checks for any commands that have been

sent by the Android device periodically to ensure commands such as stop and go home are

being carried out.


9.3 Client Device - Android Device

After the connection has been established, the Android device constantly listens for both user input and messages from the Bluetooth socket. When a message is detected on the socket, it is first split into individual commands by parsing the message on the “\n” end-of-transmission character (see Section 6.5). This gives the Raspberry Pi the ability to send multiple commands at once. The Android device then loops through the commands and handles each one individually. Each command is further divided into two parts: the command type and the argument. For example, a potential command type is “Obstacle”, and the corresponding argument is a list of x and y coordinates. The Android device then determines the command type and calls the appropriate function using the argument as input.
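The client-side handling described above can be sketched as follows (shown in Python for brevity, although the Android implementation is written against the Android SDK; the handler registration is an assumption for illustration):

```python
# Sketch of the client-side parsing: split the message on "\n", divide each
# command into a type and an argument, and dispatch to the matching handler.

def handle_message(message, handlers):
    """Parse a newline-delimited message and dispatch each command."""
    for command in message.split("\n"):
        if not command.strip():
            continue  # ignore empty fragments from trailing separators
        cmd_type, _, argument = command.partition(" ")
        if cmd_type in handlers:
            handlers[cmd_type](argument)  # e.g. "Obstacle" -> "1,2"
```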


10 Conclusion

All metrics set at the beginning of this project were met and in some cases far surpassed (Table 1.1). The power system was able to supply sufficient power to all components and activate commands based on the level of charge. The motor controls provided enough mechanical force to move the robot while remaining accurate enough to stay inside the performance threshold. Both computer vision and sonar sensing succeeded in detecting obstacles smaller than their minimum specified size, and did so in a timely manner. Route optimization also succeeded in finding optimal paths in less time than specified, using less memory than partitioned, while appropriately accounting for worst-case propagating error. Lastly, the user interface was able to send and receive commands faster and from farther away than proposed.

This robot addresses the issues many currently available autonomous devices have by using obstacle detection hardware and optimizing paths to increase efficiency and safety. As a proof-of-concept device, this robot serves as a stepping stone toward the more intelligent autonomous devices our society is gravitating towards. Automated devices can save precious time, removing monotonous chores to free up time for more meaningful work or leisure.


References

[1] Enerwatt, “WPHR12-38,” 2018, [Online; accessed 09-March-2018]. [Online]. Available: https://cdn.shopify.com/s/files/1/0033/6262/files/WPHR12-38_BATT_AGM_12V_38AH_HIGH_RATE_ca72a28f-306f-4e7e-bd13-2fa27abadb99.pdf?15082558664745153254

[2] ON Semiconductor, “Low voltage precision adjustable shunt regulator,” 2017, [Online; accessed 09-March-2018]. [Online]. Available: http://www.onsemi.com/pub/Collateral/TLV431A-D.PDF

[3] N. A. W. & Sun, “AGM - absorbed glass mat battery technology,” 2018, [Online; accessed 09-March-2018]. [Online]. Available: https://www.solar-electric.com/learning-center/batteries-and-charging/agm-battery-technology.html

[4] Janine, “Flooded vs. sealed rechargeable batteries,” 2017, [Online; accessed 09-March-2018]. [Online]. Available: http://www.sure-power.com/2014/01/flooded-vs-sealed-rechargeable-batteries/

[5] G. MacDonald, “Do you need a lithium battery management system?” 2016, [Online; accessed 09-March-2018]. [Online]. Available: http://www.lithiumbatterysystems.com.au/lithium-battery-management-system-required/

[6] J. Rushworth, “When to use which battery?” 2015, [Online; accessed 09-March-2018]. [Online]. Available: https://www.victronenergy.com/blog/2015/03/30/batteries-lithium-ion-vs-agm/

[7] D. Electronics, “Delphi series DNT12 non-isolated point of load dc/dc power modules: 8.3-14Vin, 0.75-5.0Vo, 3A,” 2017, [Online; accessed 09-March-2018]. [Online]. Available: https://www.mouser.com/ds/2/632/DS_DNT12SIP03-1219272.pdf

[8] Toshiba, “TB6600HG PWM chopper-type bipolar stepping motor driver IC,” 2016. [Online]. Available: http://www.mouser.com/ds/2/408/TB6600HG_datasheet_en_20160610-771376.pdf

[9] Wakefield-Vette, “Wakefield-Vette,” 2018, [Online; accessed 09-March-2018]. [Online]. Available: https://ca.mouser.com/ProductDetail/Wakefield-Vette/537-95AB/?qs=sGAEpiMZZMttgyDkZ5Wiuita4PD18Ap718U%2fVjrVPxA%3d

[10] A. Group, “NEMA 23 stepper motors - bipolar, 82mm, 3A,” 2018, [Online; accessed 09-March-2018]. [Online]. Available: https://www.accu.co.uk/en/nema-23-stepper-motors/394317-NEMA23-82-3-1-8

[11] P. Sen, Principles of Electric Machines and Power Electronics. Wiley, 2013. [Online]. Available: https://books.google.ca/books?id=7DvhCgAAQBAJ

[12] B. Sensortec, “BNO055 intelligent 9-axis absolute orientation sensor,” 2018, [Online; accessed 09-March-2018]. [Online]. Available: file:///C:/Users/rivardc3/Downloads/BST_BNO055_DS000_14.pdf

[13] Atmel, “8-bit Atmel microcontroller with 16/32/64KB in-system programmable flash,” 2018, [Online; accessed 09-March-2018]. [Online]. Available: http://ww1.microchip.com/downloads/en/DeviceDoc/Atmel-2549-8-bit-AVR-Microcontroller-ATmega640-1280-1281-2560-2561_datasheet.pdf

[14] GitHub, “adafruit/Adafruit-BNO055-Breakout-PCB,” 2018, [Online; accessed 09-March-2018]. [Online]. Available: https://github.com/adafruit/Adafruit-BNO055-Breakout-PCB

[15] W. contributors, “Computer vision — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Computer_vision

[16] ——, “k-means clustering — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/K-means_clustering

[17] ——, “scikit-learn — Wikipedia, the free encyclopedia,” 2017, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Scikit-learn

[18] ——, “Lloyd's algorithm — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Lloyd%27s_algorithm

[19] ——, “Edge detection — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Edge_detection

[20] ——, “Discrete Fourier transform — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Discrete_Fourier_transform

[21] ——, “NumPy — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/NumPy

[22] O. D. Team, “Fourier transform,” 2014, [Online; accessed 04-March-2018]. [Online]. Available: https://docs.opencv.org/3.0-beta/doc/py_tutorials/py_imgproc/py_transforms/py_fourier_transform/py_fourier_transform.html

[23] W. contributors, “Hue — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Hue

[24] ——, “Gaussian blur — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Gaussian_blur

[25] ——, “Ringing artifacts — Wikipedia, the free encyclopedia,” 2017, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Ringing_artifacts

[26] ——, “Rotation matrix — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 03-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Rotation_matrix

[27] ——, “Speed of sound — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 04-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Speed_of_sound

[28] N. Instruments, “The difference between USB type A and USB type B plug/connector,” 2018, [Online; accessed 04-March-2018]. [Online]. Available: https://knowledge.ni.com/KnowledgeArticleDetails?id=kA00Z0000019OQoSAM

[29] W. contributors, “USB — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 04-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/USB

[30] ——, “Differential signaling — Wikipedia, the free encyclopedia,” 2017, [Online; accessed 04-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/Differential_signaling

[31] ——, “ASCII — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 04-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/ASCII

[32] ——, “UTF-8 — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 04-March-2018]. [Online]. Available: https://en.wikipedia.org/wiki/UTF-8

[33] C. Liechti, “pySerial documentation,” 2015, [Online; accessed 04-March-2018]. [Online]. Available: http://pythonhosted.org/pyserial/

[34] Arduino, “Serial documentation,” 2018, [Online; accessed 04-March-2018]. [Online]. Available: https://www.arduino.cc/reference/en/language/functions/communication/serial/

[35] W. contributors, “Dijkstra's algorithm — Wikipedia, the free encyclopedia,” 2018, [Online; accessed 24-February-2018]. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Dijkstra%27s_algorithm&oldid=827044239

[36] S. E. Dreyfus, “An appraisal of some shortest-path algorithms,” Operations Research, vol. 17, no. 3, pp. 395-412, 1969.

[37] W. contributors, “Christofides algorithm — Wikipedia, the free encyclopedia,” 2017, [Online; accessed 8-March-2018]. [Online]. Available: https://en.wikipedia.org/w/index.php?title=Christofides_algorithm&oldid=790058243

[38] A. Developers, “Drawables overview,” 2018, [Portions of this page are modifications based on work created and shared by the Android Open Source Project and used according to terms described in the Creative Commons 2.5 Attribution License.]. [Online]. Available: https://developer.android.com/guide/topics/graphics/drawables.html

[39] ——.

[40] ——, “BluetoothSocket,” 2018, [Portions of this page are modifications based on work created and shared by the Android Open Source Project and used according to terms described in the Creative Commons 2.5 Attribution License.]. [Online]. Available: https://developer.android.com/reference/android/bluetooth/BluetoothSocket.html


Appendix

The code for every module in this project can be found at: https://bitbucket.org/ECE4600_G20_2017-2018

A Motor Controls

Table A.1: TB6600HG GPIO Pin Assignments

Pin Name    Feature                                          Input/Output
M1          Micro-stepping control pin set to 1/8 ratio      Input
M2          Micro-stepping control pin set to 1/8 ratio      Input
M3          Micro-stepping control pin set to 1/8 ratio      Input
CW/CCW      Clockwise/counter-clockwise selection pin        Input
CLK         Input clock controlling current stepping speed   Input
RESET       Initiates output current stepping                Input
ENABLE      Initiates output current                         Input
Latch/Auto  Automatic return from overheating protocol       Input
Vref        Used to set output phase current                 Input
M0          Indicates phase current in initial position      Output
ALERT       Indicates circuit in thermal shutdown            Output
TQ          Sets output current to 100 percent               Input
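As an illustration of how the micro-stepping pins interact with the CLK input in Table A.1, the pulse rate needed on CLK for a target shaft speed can be computed as below. The 200 full steps per revolution figure is the usual value for a 1.8° NEMA 23 motor and is an assumption here, not taken from this report.

```python
# Relationship between the CLK input and shaft speed with 1/8 micro-stepping
# (M1-M3 in Table A.1). The 200 full steps/rev figure (1.8 degrees per step)
# is an assumed motor parameter.

def clk_frequency(rpm, full_steps_per_rev=200, microstep=8):
    """Pulses per second required on CLK for a target shaft speed in RPM."""
    pulses_per_rev = full_steps_per_rev * microstep  # 1600 with 1/8 stepping
    return pulses_per_rev * rpm / 60.0
```

Under these assumptions, a 60 RPM shaft speed would require a 1.6 kHz clock on CLK.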


Figure A.1: Clock2 Frequency

Figure A.2: BNO055 Slope Output

Figure A.3: BNO055 Rotation Output

B Computer Vision

The structure of the software written to implement the computer vision obstacle detection algorithm is shown in Figure B.1.

Table B.1: Computer Vision Test for Small 2.5 cm × 2.5 cm Obstacle Located at Origin

Obstacle Center (cm)   Width xmax − xmin (cm)   Height ymax − ymin (cm)
(0.9, 1.3)             2.9                      3.0
Position Error (cm)    Width Error (cm)         Height Error (cm)
1.6                    0.4                      0.5

Figure B.1: Flowchart for Computer Vision Algorithm

Table B.2: Computer Vision Test for 5 cm × 5 cm Obstacle Located at (0, 20)

Obstacle Center (cm)   Width xmax − xmin (cm)   Height ymax − ymin (cm)
(0.4, 20.1)            6.0                      5.0
Position Error (cm)    Width Error (cm)         Height Error (cm)
0.4                    1.0                      0.0

Figure B.2: Effects of Adjusting Camera Mounting Angle
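The error rows in Tables B.1 and B.2 follow from simple metrics: the position error is the Euclidean distance between the measured and true obstacle centers, and the width and height errors are absolute differences from the true dimensions. A small sketch (the function name is illustrative, not from the project's code):

```python
# Error metrics for an obstacle detection, matching the error rows of
# Tables B.1 and B.2. Values are rounded to one decimal place as in the tables.
import math

def detection_errors(measured_center, measured_w, measured_h,
                     true_center, true_w, true_h):
    """Position error (Euclidean), width error, and height error, in cm."""
    dx = measured_center[0] - true_center[0]
    dy = measured_center[1] - true_center[1]
    return (round(math.hypot(dx, dy), 1),
            round(abs(measured_w - true_w), 1),
            round(abs(measured_h - true_h), 1))
```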


Figure B.3: Experimental Setup With Two Obstacles


Figure B.4: Computer Vision Output For Experimental Setup


C Sonar Sensing

The structure of the software written to implement the sonar sensing obstacle detection algorithm is shown in Figure C.1.
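The range measurement underlying the sonar tests below converts the round-trip echo time into a one-way distance using the speed of sound [27]. A minimal sketch, assuming air at roughly 20 °C (343 m/s):

```python
# One-way distance from the echo pulse width: the sound travels to the obstacle
# and back, so the round-trip time is halved. 343 m/s (34300 cm/s) assumes air
# at roughly 20 degrees C.

SPEED_OF_SOUND_CM_PER_S = 34300.0

def echo_to_distance_cm(echo_pulse_s):
    """Convert a round-trip echo time in seconds to a one-way distance in cm."""
    return SPEED_OF_SOUND_CM_PER_S * echo_pulse_s / 2.0
```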

Figure C.1: Flowchart for Sonar Sensing Algorithm

Table C.1: Sonar Short-Range Test for Obstacle Located in Front of the Robot at (0, 32)

Sonar Number   Trial 1 (cm)   Trial 2 (cm)   Trial 3 (cm)   Average Error (cm)
2              (3.09, 33.1)   (0.34, 30.3)   (3.25, 33.2)   2.83
3              (0, 32.4)      (0, 32.4)      (0, 32.5)      0.43
4              (0.11, 29.8)   (0.11, 29.9)   (0.11, 29.8)   2.17


Figure C.2: Transducer Supply Voltage Signal

Figure C.3: Transducer Trigger Signal

Table C.2: Sonar Long-Range Test for Obstacle Located to the Left of the Robot at (144, 0)

Sonar Number   Trial 1 (cm)   Trial 2 (cm)   Trial 3 (cm)   Average Error (cm)
5              (143, 0)       (142, 0)       (143, 0)       1.33


Figure C.4: Transducer Echo Signal

D Mapping and Obstacle Avoidance

The structure of the software written for the first-pass mapping phase of the robot is shown in the following flowcharts.
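The return-to-origin step in Figure D.4 amounts to a shortest-path search over the grid of visited nodes. The report's routing is based on Dijkstra's algorithm [35]; on an unweighted grid a breadth-first search finds the same shortest path, so this hypothetical sketch uses BFS to keep the code short:

```python
# Hedged sketch of a "return to origin" search: breadth-first search over the
# grid of known-free cells, avoiding cells marked as obstacles. On an
# unweighted grid this yields the same shortest path as Dijkstra's algorithm.
from collections import deque

def path_to_origin(start, free_cells):
    """Return a list of cells from start to (0, 0) through free_cells, or None."""
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cell = frontier.popleft()
        if cell == (0, 0):
            path = []
            while cell is not None:      # walk the parent links back to start
                path.append(cell)
                cell = came_from[cell]
            return path[::-1]
        x, y = cell
        for nxt in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nxt in free_cells and nxt not in came_from:
                came_from[nxt] = cell
                frontier.append(nxt)
    return None                          # origin unreachable from start
```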


Figure D.1: Flowchart for Interpreting Obstacle Detection Data


Figure D.2: Flowchart for Updating a Node's Data Based on New Information


Figure D.3: Flowchart for Selecting Appropriate Direction to Move Next During the Mapping Phase


Figure D.4: Flowchart for Finding and Performing a Path to Return to the Origin


Figure D.5: Flowchart for Finding and Performing a Path to a 'New' Node


Figure D.6: Flowchart for High Level Cutting Phase Routine


Figure D.7: Flowchart for Selecting Appropriate Direction to Move Next During the Cutting Phase


Figure D.8: Flowchart for Generating Condensed Movement Instructions from List of Node Locations


Figure D.9: Flowchart for Sending List of Instructions to Motors
