From Chef to SaltStack on Cloud Providers - Incontro DevOps 2015
TRANSCRIPT
2008: On-premise distributed system
DRBD + Pacemaker
Everything scripted directly in SSH sessions - no automation
2010: AWS changed my vision
Scalable infrastructures, automation, self-provisioning
AWS was launched in 2006, and by 2007, 180,000 developers had signed up to use it
Application deployment with artifacts or Docker containers,
not flow-based:

git clone ...
composer install
app/console ca:cl -e=prod
...
Time consuming
Hard to achieve
Difficult to test
Need the perfect recipe? https://github.com/PUGTorino/application_zf
Salt foundations
- States: our system state
- Pillars: info from master to minions
- Grains: info from minions to master
- Mines, Modules, Reactors, etc.
top.sls - What we have to do

base:
  '*':
    - tools
  'proxyeuaws*prod':
    - nginx
  'webeuaws*prod':
    - webserver
    - webapp
  'cacheeuaws*prod':
    - memcached
We need the top.sls for states and pillars
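The pillar top file uses the same targeting syntax as the state one. A minimal sketch (the pillar names here are illustrative, not from the talk):

```yaml
# pillar top.sls - same matching rules as the state top.sls
base:
  '*':
    - common        # hypothetical pillar shared by every minion
  'web*':
    - app           # e.g. an app pillar (path, branch, remote, hostname)
```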
Use grains instead of names

base:
  '*':
    - tools
    - firewall
    - firewall.munin
    - munin.node
  'roles:manager':
    - match: grain
    - firewall.manager
    - redis.tools
  'role:proxy':
    - match: grain
    - haproxy
    - firewall.haproxy
A formula example

haproxy:
  pkg:
    - installed
  service:
    - running
    - watch:
      - file: /etc/haproxy/haproxy.cfg
      - file: /etc/default/haproxy
      - pkg: haproxy
haproxy_config:
  file.managed:
    - name: /etc/haproxy/haproxy.cfg
    - source: salt://haproxy/haproxy.cfg
    - template: jinja

haproxy_default:
  file.managed:
    - name: /etc/default/haproxy
    - source: salt://haproxy/haproxy.default
A simple file haproxy/init.sls
Here is a pillar:
Jinja templates for your state files
Jinja templates for managed files
Pillars on the stage - app/init.sls

app:
  path: /opt/app
  branch: develop
  remote: https://github.com/wdalmut/app.git
  hostname: www.anhostname.tld
checkout_app:
  git.latest:
    - name: {{ pillar['app']['remote'] }}
    - rev: {{ pillar['app']['branch'] }}
    - target: {{ pillar['app']['path'] }}
<VirtualHost *:80>
    ServerName {{ pillar['app']['hostname'] }}
</VirtualHost>
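Conceptually, "template: jinja" just renders the source file with pillar (and grains) in scope before writing it out. A toy Python sketch of that substitution step - not Salt's actual renderer, just an illustration:

```python
import re

# Hypothetical pillar data, matching the app pillar above.
pillar = {"app": {"hostname": "www.anhostname.tld"}}

template = ("<VirtualHost *:80>\n"
            "    ServerName {{ pillar['app']['hostname'] }}\n"
            "</VirtualHost>")

def render(tpl, pillar):
    # Replace {{ pillar['a']['b'] }} expressions with the looked-up value.
    def repl(match):
        value = pillar
        for key in re.findall(r"'([^']+)'", match.group(1)):
            value = value[key]
        return str(value)
    return re.sub(r"\{\{\s*pillar(\[[^}]+\])\s*\}\}", repl, tpl)

print(render(template, pillar))
# ServerName line now carries the concrete hostname from the pillar
```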
Grains in your formulas - super-smart formulas
-m {{ grains['mem_total'] * 3 / 4 }}

Memcached uses ¾ of the available RAM, in your memcached config file memcached.conf
Pillars and grains

[mysqld]
{% if salt['grains.get']('rdbmaster', none) == True %}
log_bin = /var/log/mysql/mysql-bin.log
{% endif %}
server-id = {{ grains['rdbid'] }}
bind-address = {{ pillar['db']['bind'] }}
max_allowed_packet = 64M
{% if salt['grains.get']('rdbslave', none) == True %}
replicate-do-db = {{ pillar['app']['db']['dbname'] }}
{% endif %}

Handle MySQL replication configs
Peer communication - Master configuration

peer:
  .*:
    - .*

upstream app {
{% for srv, ip in salt['publish.publish']('web.*', 'network.interfaces').items() %}
    server {{ ip.eth0.inet[0].address }}:80;
{% endfor %}
}
The Salt Mine - Master configuration

mine_functions:
  network.ip_addrs: [eth0]

{% for srv, addrs in salt['mine.get']('role:web', 'network.ip_addrs', expr_form='grain').items() %}
server {{ srv }} {{ addrs[0] }}:80 check
{% endfor %}
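Inside an HAProxy backend, that mine loop expands to one server line per minion carrying the role:web grain. Rendered output would look something like this (minion ids and addresses are made up):

```
backend app
    balance roundrobin
    server web-1 10.0.0.11:80 check
    server web-2 10.0.0.12:80 check
    server web-3 10.0.0.13:80 check
```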
Prepare your modules

salt 'web-*' app.stop

"""Execution module for my app."""

def stop():
    cmd = 'app/console maintenance:lock on'
    out = __salt__['cmd.run'](cmd, cwd=__pillar__.get('app', {}).get('path'))
    return True if out else False

salt/_modules/app.py
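Execution modules are plain Python: Salt injects dunders such as __salt__ and __pillar__ at load time, so the same logic is easy to exercise without a minion by passing stand-ins explicitly. A sketch of that idea - the stub names and pillar values below are invented for the illustration:

```python
# Same logic as the app.stop module, with Salt's injected dunders
# turned into explicit parameters so we can test it standalone.
def stop(salt_funcs, pillar):
    cmd = 'app/console maintenance:lock on'
    out = salt_funcs['cmd.run'](cmd, cwd=pillar.get('app', {}).get('path'))
    return True if out else False

def fake_cmd_run(cmd, cwd=None):
    # Stand-in for __salt__['cmd.run']: pretend the console command
    # printed a confirmation message.
    return 'Maintenance mode: locked (cwd=%s)' % cwd

print(stop({'cmd.run': fake_cmd_run}, {'app': {'path': '/opt/app'}}))  # True
```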
Define providers - cloud.providers.d/do.conf

my-digitalocean-config:
  provider: digital_ocean
  personal_access_token: xxx
  ssh_key_file: /path/to/ssh/key/file
  ssh_key_names: mykeyname,mykeyname2
  location: New York 1
Define profiles - cloud.profiles.d/do.conf

do-micro-ubuntu:
  provider: my-digitalocean-config
  image: 14.04 x64
  size: 512MB
  location: New York 1
  private_networking: True
  backups_enabled: True
  ipv6: True
Manually scale-out

salt-cloud -Pp do-micro-ubuntu web-4 web-5 web-6

salt 'web-[4-6]' state.highstate
salt 'proxy-*' state.highstate
Thanks to mines, we can add more resources and update the proxies when adding more web servers
{% for srv, addrs in salt['mine.get']('role:web', 'network.ip_addrs', expr_form='grain').items() %}
server {{ srv }} {{ addrs[0] }}:80 check
{% endfor %}
salt-cloud -Pm dr.yml

web:
{% for i in range(10) %}
  - web-{{ i }}:
      minion:
        grains:
          roles: web
{% endfor %}

proxy:
{% for i in range(3) %}
  - proxy-{{ i }}:
      minion:
        grains:
          roles: proxy
{% endfor %}
...
web-1 web-2 web-3 ... web-10
proxy-1 proxy-2 proxy-3