From Chef to SaltStack on cloud providers - Incontro DevOps 2015


from Chef to SaltStack

@walterdalmut

Journey to automation (and cloud computing)

2008: on-premise distributed system

DRBD + Pacemaker
All things directly scripted in SSH sessions - no automation

2010: AWS changed my vision

Scalable infrastructures, automation, self-provisioning

AWS was launched in 2006, and by 2007 180,000 developers had signed up to use it

My nodes are dynamically added/removed

Shell Scripting - A World of Pain

Satisfied, but...

Maintenance problems...

Chef-solo as provisioner
Imperative, like shell scripting, but awesome

Finally moved to SaltStack
Declarative and imperative

My mistakes with Chef automation

Idempotence came first

Application deployment with artifacts or Docker containers,

and not flow-based:

git clone ...
composer install
app/console ca:cl -e=prod
...
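
A minimal sketch of what an artifact-based deploy could look like as a Salt state - the artifact URL, version pillar and paths are assumptions, not the formula used in the talk:

    # Hypothetical artifact-based deploy: pull a prebuilt release and switch a
    # symlink, instead of re-running git clone / composer install on every node.
    {% set version = pillar.get('app_version', '1.0.0') %}

    app_release:
      archive.extracted:
        - name: /opt/app/releases/{{ version }}
        - source: https://artifacts.example.com/app-{{ version }}.tar.gz
        - source_hash: https://artifacts.example.com/app-{{ version }}.tar.gz.sha256
        - archive_format: tar

    app_current:
      file.symlink:
        - name: /opt/app/current
        - target: /opt/app/releases/{{ version }}
        - require:
          - archive: app_release

Re-running the state is idempotent: nothing changes until app_version changes.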

Automate only what you know

Ruby is not my primary language

But we can always learn it!

Cross-platform recipes

Time consuming

Hard to achieve

Difficult to test

Need the perfect recipe? https://github.com/PUGTorino/application_zf

Lack of Orchestration

SaltStack

Communication over ZeroMQ (0MQ)

Only the master exposes ports: 4505-4506
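
Because minions open outbound connections to those two master ports, minion configuration stays minimal; a sketch (the master hostname is an assumption):

    # /etc/salt/minion - the minion connects out to the master's ZeroMQ ports
    # (4505 for publish, 4506 for returns); the minion itself exposes nothing.
    master: salt.example.com     # hypothetical master address
    id: web-eu-aws-1-prod        # matches the naming scheme targeted in top.sls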

Salt foundations

States: our system state

Pillars: info from the master to minions

Grains: info from minions to the master

Mines, modules, reactors, etc.

top.sls - what we have to do

base:
  '*':
    - tools
  'proxy-eu-aws-*-prod':
    - nginx
  'web-eu-aws-*-prod':
    - webserver
    - webapp
  'cache-eu-aws-*-prod':
    - memcached

We need a top.sls for both states and pillars
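
The pillar tree has its own top.sls with the same targeting syntax; a minimal sketch (the pillar file names are assumptions):

    # /srv/pillar/top.sls - same matching rules as the state top file
    base:
      '*':
        - common
      'web-eu-aws-*-prod':
        - app
      'cache-eu-aws-*-prod':
        - memcached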

Use grains instead of names

base:
  '*':
    - tools
    - firewall
    - firewall.munin
    - munin.node
  'roles:manager':
    - match: grain
    - firewall.manager
    - redis.tools
  'role:proxy':
    - match: grain
    - haproxy
    - firewall.haproxy

A recipe (formula) example

haproxy:
  pkg:
    - installed
  service:
    - running
    - watch:
      - file: /etc/haproxy/haproxy.cfg
      - file: /etc/default/haproxy
      - pkg: haproxy

haproxy_config:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - source: salt://haproxy/haproxy.cfg
    - template: jinja

haproxy_default:
  file.managed:
    - name: /etc/default/haproxy
    - source: salt://haproxy/haproxy.default

A simple file: haproxy/init.sls

Here is a pillar:

Jinja templates for your state files

Jinja templates for managed files
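
Jinja in a state file is not limited to variable lookups: it is rendered before the YAML is parsed, so loops and conditionals work too. A small sketch (the package list is made up):

    # Rendered by Jinja first, then parsed as YAML by the state compiler
    {% for pkg in ['git', 'curl', 'htop'] %}
    {{ pkg }}_installed:
      pkg.installed:
        - name: {{ pkg }}
    {% endfor %}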

Pillars on the stage - app/init.sls

app:
  path: /opt/app
  branch: develop
  remote: https://github.com/wdalmut/app.git
  hostname: www.an-hostname.tld

checkout_app:
  git.latest:
    - name: {{ pillar['app']['remote'] }}
    - rev: {{ pillar['app']['branch'] }}
    - target: {{ pillar['app']['path'] }}

<VirtualHost *:80>
    ServerName {{ pillar['app']['hostname'] }}
</VirtualHost>

Grains in your formulas - super-smart formulas

-d
-m {{ grains['mem_total'] * 3 / 4 }}

Memcached uses ¾ of the available RAM, set in your memcached config file (memcached.conf)
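
In practice that expression lives in a managed template; a sketch of the state plus the relevant line of the rendered file (the formula name and Debian paths are assumptions):

    # memcached/init.sls - render memcached.conf with grains available to Jinja
    memcached_conf:
      file.managed:
        - name: /etc/memcached.conf
        - source: salt://memcached/memcached.conf
        - template: jinja

    # and inside salt://memcached/memcached.conf:
    #   -m {{ grains['mem_total'] * 3 // 4 }}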

Pillars and grains

[mysqld]
{% if salt['grains.get']('rdb-master', none) == True %}
log_bin = /var/log/mysql/mysql-bin.log
{% endif %}

server-id    = {{ grains['rdb-id'] }}
bind-address = {{ pillar['db']['bind'] }}

max_allowed_packet = 64M

{% if salt['grains.get']('rdb-slave', none) == True %}
replicate-do-db = {{ pillar['app']['db']['dbname'] }}
{% endif %}

Handle MySQL replication configs

Peer communication - master configuration

peer:
  .*:
    - .*

upstream app {
{% for srv, ip in salt['publish.publish']('web.*', 'network.interfaces').items() %}
    server {{ ip.eth0.inet[0].address }}:80;
{% endfor %}
}

The Salt Mine - minion configuration (or pillar)

mine_functions:
  network.ip_addrs: [eth0]

{% for srv, addrs in salt['mine.get']('role:web', 'network.ip_addrs', expr_form='grain').items() %}
    server {{ srv }} {{ addrs[0] }}:80 check
{% endfor %}

Commands

salt '*' test.ping
salt 'proxy-eu-aws-[1-3]-prod' test.ping

salt '*' state.highstate
salt 'proxy-eu-aws-[1-3]-prod' state.highstate

salt '*' cmd.run 'du -hs /tmp'

salt '*' state.sls 'app.rback' pillar="{rev: '1.4.2'}"
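
The pillar passed on the command line overrides the compiled pillar for that run only; a hypothetical sketch of how an app.rback state could consume it (state name, paths and layout are assumptions, not the author's actual formula):

    # app/rback.sls - point the "current" symlink back at the release passed as rev
    {% set rev = pillar.get('rev', 'unknown') %}

    rollback_symlink:
      file.symlink:
        - name: /opt/app/current
        - target: /opt/app/releases/{{ rev }}
        - force: True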

Prepare your modules
salt 'web-*' app.stop

"""Execution module for my app."""def stop(): cmd = 'app/console maintenance:lock on' out = __salt__['cmd.run'](cmd, __pillars__.get('app.path')) return True if out else False

salt/_modules/app.py
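
Custom execution modules placed under _modules on the Salt fileserver are pushed to minions with salt '*' saltutil.sync_modules (they are also synced when a highstate runs); after that, app.stop can be called like any built-in function.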

Salt Cloud - orchestrate on cloud providers

An easy way to have your minions configured

Define providers

my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  ssh_key_file: /path/to/ssh/key/file
  ssh_key_names: my-key-name,my-key-name-2
  location: New York 1

cloud.providers.d/do.conf

Define profiles

do-micro-ubuntu:
  provider: my-digitalocean-config
  image: 14.04 x64
  size: 512MB
  location: New York 1
  private_networking: True
  backups_enabled: True
  ipv6: True

cloud.profiles.d/do.conf

salt-cloud -p do-micro-ubuntu web-1

salt-cloud -Pp do-micro-ubuntu web-1 web-2 web-3

Manually scale out
salt-cloud -Pp do-micro-ubuntu web-4 web-5 web-6

salt 'web-[4-6]' state.highstate
salt 'proxy-*' state.highstate

Thanks to mines we can add more resources and update the proxies when adding more web servers

{% for srv, addrs in salt['mine.get']('role:web', 'network.ip_addrs', expr_form='grain').items() %}
    server {{ srv }} {{ addrs[0] }}:80 check
{% endfor %}

salt-cloud -d web-1

salt-cloud -Pm dr.yml

web:
{% for i in xrange(10) %}
  - web-{{ i }}:
      minion:
        grains:
          roles:
            - web
{% endfor %}

proxy:
{% for i in xrange(3) %}
  - proxy-{{ i }}:
      minion:
        grains:
          roles:
            - proxy
{% endfor %}

...

web-1 web-2 web-3 ... web-10
proxy-1 proxy-2 proxy-3

Thanks for listening
Twitter: @walterdalmut