Nagios Conference 2011 - William Leibzon - Nagios in Cloud Computing Environments

TRANSCRIPT

Nagios and Cloud Computing

Presentation by William Leibzon
([email protected])

Thanks for being here!

Nagios

Nagios 2011 Conference in Saint Paul, Minnesota

Hi,

My name is William Leibzon, and today I'm going to talk about Nagios clusters in cloud computing environments.

I want to apologize because I do not have much experience speaking at conferences. What is even worse, I got sick yesterday and have a sore throat. However, I made sure to put everything I could into the slides, so you can follow along and will have them to take home.

Cloud Computing

What is Cloud Computing?

Virtualized systems independent of hardware and leased to various customers in what is referred to as Infrastructure as a Service

Image courtesy of thetechlabs.com

Ok, so let's begin. You've all heard the buzzword Cloud Computing, but what is it? I pulled up this definition from some site, but it is hardly THE definition. In a nutshell, cloud computing lets you run a lot of virtual servers on a smaller number of hardware machines. And the key to that is virtualization.

Another popular term is Infrastructure as a Service.

Virtualization and Cloud Computing

Virtualization
Separates hardware from user software - either one can be upgraded independently of the other

Efficient use of modern multi-core processors

Micro-Kernel design is simpler, easier to support

More Servers with Less Hardware
Unused system resources can be utilized by other types of servers with different resource usage

Less energy, more power efficient use of resources

Less rack space in expensive datacenters

Virtualization is the core of Cloud Computing

Virtualization allows us to separate hardware from software. The OS is supposed to provide this level of indirection, but the OS gets tied to the hardware too much, and software packages are now tied to a specific OS.

With virtualization, multiple systems running on the same hardware can more efficiently utilize resources: if, say, we have one system that uses more CPU and another that does more network I/O, we can potentially put them together on the same machine and utilize its resources fully. And of course, if we can put many systems on a smaller piece of hardware that takes less space in a datacenter, it's less expensive. So the business side loves it.

Cloud Computing Architecture

Virtualized Systems in a Cloud
Can be managed entirely remotely

Can move (even live) from one hardware to another

Can be shutdown, saved to disk and started again when required

Can be easily cloned to have another alike system started exactly when it is needed

Cloud allows automated scaling up of infrastructure to handle peak traffic load, scaling down afterwards to keep overall cost low
This requires monitoring of all system resources!

Cloud computing is an extension of virtualization where, instead of having virtual servers on specific hardware, we assume there is an unlimited amount of hardware available for virtual servers to run on, and just focus on the virtual servers. A good cloud environment will keep these servers running even if there is an issue with the hardware, so servers can potentially move live from one hardware host to another.

But what is even better is that we have control over which hosts we want to run and for how long. So we can have the largest number of servers running at peak traffic load and scale down to the minimum otherwise. Of course, being able to do this requires monitoring of which resources are utilized and how.

With the right architecture, a cloud allows you to automate scaling of infrastructure to handle peak traffic load while scaling down afterwards to keep overall cost low.

A cloud does not require a technician working at the datacenter; everything can be managed remotely.

Cloud Solutions and Vendors

Hypervisors (Virtualization Kernels):
Commercial: VMware ESX, IBM z/VM, Microsoft VirtualPC

Open-Source: Xen, KVM, OpenVZ, QEMU, VirtualBox
Xen originally implemented paravirtualization, which required a modified OS and limited it to Linux. KVM and new Xen-HVM can do full virtualization, but require QEMU and CPU virtualization extensions (Intel's VT or AMD's SVM)

Virtualization and Cloud Software Suites
Commercial: VMware vCloud, Microsoft Azure

Open-Source: Eucalyptus, OpenNebula, OpenStack, Baracus

Commercial based on Open-Source: Citrix XenServer, Oracle VM, Ubuntu Enterprise Cloud, Red Hat CloudForms, Parallels Virtuozzo

Cloud Infrastructure Providers
Amazon EC2 (modified Xen), Rackspace (Xen), Linode (Xen), Savvis (VMware), many many more...

Now, for those who want to build a cloud environment, there are a number of solutions available, both open-source and commercial. VMware is by far the largest commercial vendor. For open-source, there are a number of packages available to create a cloud; most OS vendors have one.

And as far as hypervisors, Xen dominates in open-source and gives better performance for Linux virtual servers on Linux than VMware. There are also several competing hypervisors gaining popularity that are, in my opinion, better.

If you don't want to build your own cloud hardware infrastructure, buying from cloud infrastructure providers is an option. Amazon EC2 is by far the best known and most used.

Most larger organizations have built virtualization environments and are considering building private clouds that they manage before going to the public cloud. - Barry

Open-Source Cloud Software

Open-Source Hypervisors used in Cloud Systems
Xen - http://www.xen.org/

KVM - http://www.linux-kvm.org/

OpenVZ - http://www.openvz.org/

Open-Source Cloud Management Software
Eucalyptus - http://open.eucalyptus.com/

OpenNebula - http://www.opennebula.org/

OpenStack - http://www.openstack.org/

Baracus - http://baracus-project.org/

Proxmox - http://pve.proxmox.com/

And these are the links to the open-source cloud software from the previous slide.

Commercial cloud management: RightScale, CloudSwitch

Monitoring for the Cloud

Monitoring of hardware (host OS) & hypervisor
More static, hardware does not change as often

Monitoring of system resources often integrated into virtualizer and info not available to cloud customer

Monitoring of virtual systems
Dynamic, should be able to handle addition and removal of server instances

Focus on application and network performance

Ideally should monitor utilization and be able to launch new server instances (auto-scaling)

Monitoring system should itself be robust and handle more servers without impacting performance

So after this brief intro to cloud computing, we now come to what we're here for: monitoring.

There are two pieces to cloud monitoring - the hardware systems that run the hypervisors, and the virtual servers themselves.

Hardware monitoring is similar to normal server monitoring; it's static in the sense that new servers don't get added often and there aren't really any changes once everything is set up. Monitoring of system resources is often taken care of by the cloud software, but if possible you should still monitor unix resources like system load, memory, etc., and environmental data can also be monitored.

For virtual servers, monitoring is dynamic and should handle addition and removal of servers well. The focus is application and network performance.

The good thing about a cloud is that once you reach the limit of what the current servers can do, you can just launch a new server. This is auto-scaling, and it's what makes the cloud so useful. Nagios can be used to drive this scaling, and should itself also be scalable.
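To make the auto-scaling idea concrete, here is a minimal sketch of a Nagios event handler that launches an extra instance when a utilization check goes hard-CRITICAL. The command name, script path, AMI id and instance type are assumptions for illustration, using the EC2 API tools of that era:

define command {
    command_name  launch-extra-instance
    command_line  /usr/local/nagios/libexec/eventhandlers/launch_instance.sh $SERVICESTATE$ $SERVICESTATETYPE$
}

#!/bin/sh
# launch_instance.sh - act only when the check enters a hard CRITICAL state
# $1 = service state, $2 = state type
if [ "$1" = "CRITICAL" ] && [ "$2" = "HARD" ]; then
    ec2-run-instances ami-00000000 -t m1.small    # AMI id is a placeholder
fi

Attach it to the utilization service with event_handler launch-extra-instance.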

Cloud Monitoring Architecture

Horizontal Scaling
Clouds can be as small as 10 servers and as large as 10,000+. When developing an architecture, you need to support its future growth from the start.

Scaling on Demand
A pro-active system should handle big changes in the number of cloud instances. You may have 2 webserver instances at 6am and grow to 20 at 10pm.

High Availability
Good system design should be fully fault-tolerant, and the application as a whole should continue to function without interruption if any one server instance dies

This means cluster !!!

What we want from a monitoring architecture is the same as with other applications - something that is easy to grow automatically, does not have a single bottleneck, and still functions if any one server dies.

This means Horizontal Scaling, Scaling on Demand and High Availability. And this means cluster.

Nagios Cluster Options

The base nagios-core package is for stand-alone monitoring, where one server does all service checks. It can be extended to a Nagios cluster with:

Passive Service Checks (Classic Distributed Model)
Old Way - NSCA used to forward results of checks from client servers to the main nagios server; not robust

Shared Database (Central Dashboard Model)
NDO-Mod and Merlin projects implement this with a combination of NEB modules, daemon & database

Worker Nodes (Load Balancing of Checks)
DNX and Mod-Gearman do it with a combination of a loaded NEB module, server daemon & client servers

There are 3 main ways to build a nagios cluster.

The first is what I called the "Old Way", otherwise known as the "Classic Distributed Model". This uses passive service checks on a central nagios server, and NSCA is used to forward information from the client nagios servers.

Second is "Shared Database" or "Central Dashboard Model" - database here is used to create a shared centralized view of several nagios hosts.

The third way is what I call "Worker Nodes", and in Nagios that is represented by the DNX and Mod-Gearman projects. Here all plugin checks get distributed to a set of worker node servers automatically, and a cluster can handle many more checks than a single nagios server could.

Nagios was originally designed as stand-alone monitoring server software, which puts limitations on the number of services that can be monitored. However, people have set up nagios clusters both with nagios 1.0 and with current versions of nagios that support databases.

Passive Service Checks

How
- One central server with all services; it does not do any checks, listing them all as passive
- Separate client nagios servers run plugins and do checks for specific sets of hosts; each has its own subset of the full nagios config
- Scripts are set up that capture results on each client host and send them to the central server using NSCA; the nsca daemon puts them into the nagios command queue
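As a sketch of what those forwarding scripts look like in practice (the central hostname and paths are assumptions), a client server can obsess over every service and pipe each result to send_nsca:

# nagios.cfg on the client nagios server
obsess_over_services=1
ocsp_command=submit_check_result

define command {
    command_name  submit_check_result
    command_line  /usr/local/nagios/libexec/eventhandlers/submit_check_result "$HOSTNAME$" "$SERVICEDESC$" $SERVICESTATEID$ "$SERVICEOUTPUT$"
}

#!/bin/sh
# submit_check_result - send one tab-separated result line to the central server
printf '%s\t%s\t%s\t%s\n' "$1" "$2" "$3" "$4" | \
    /usr/local/nagios/bin/send_nsca -H nagios-central.example.com \
        -c /usr/local/nagios/etc/send_nsca.cfg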

Advantages
This will work with any nagios server; organizations have been doing it since at least 2002

Disadvantages
Requires a lot of custom scripting to organize nagios configs. Not reliable if a server dies. Not robust enough to automate cloud instances being added and deleted

[Diagram: client nagios servers forwarding check results to the central nagios server via NSCA]

So here is the Passive Service Checks model. I think everyone here already knows about it, so I'll not go into it other than to say it's not robust and it is difficult to configure the client nagios hosts. It is also not a way to handle a dynamically changing number of hosts and services.

Shared Database

Who: NDO-DB and Merlin

How
- Multiple peer Nagios servers, each with a different config file specifying which services it will check
- All servers use a common database to share results of checks and the status of the services they are monitoring

Advantages
- There is no master nagios server. There is a master DB server, but how to create a db cluster is a much better understood topic
- Using NEB avoids slow command-queue processing

Disadvantages
Partitioning of the monitoring infrastructure among servers is still a manual process. It is not easy to use this for a dynamic cloud environment; however, it works very well for fault-tolerance

Shared database in Nagios is represented by the Merlin and NDO-DB projects. Of these two, I use Merlin.

So the advantage is that there is no master nagios server; we just have a set of peer servers that share data by means of a database, and you can have a centralized view of that database through some web interface.

The disadvantage is that you still need to manually partition which set of hosts each server monitors. Plus, you replace a central nagios server with a central database, which, despite my listing it as an advantage, is still a single bottleneck.
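For a feel of how peers are declared, here is a rough sketch of a merlin.conf fragment on one of two peer servers; the exact syntax varies between Merlin versions, so treat the names and addresses as assumptions (15551 is Merlin's usual default port):

# merlin.conf on server nagios1 (sketch)
daemon {
    database {
        name = merlin
        user = merlin
        pass = merlin
    }
}

peer nagios2 {
    address = 10.1.1.2
    port = 15551
}

The matching config on nagios2 declares nagios1 as its peer.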

DNX and Mod-Gearman
Worker Nodes

How
- Similarly to Passive Service Checks, there is a central Nagios server; it does not execute any plugins.
- Unlike with Passive Checks, nagios does schedule checks. Thereafter a NEB module takes over.
- The module passes information on which plugin(s) to run to the DNX server (or Gearman server for Mod-Gearman), which manages the worker nodes.
- Worker nodes are separate servers, each with a special worker daemon running. The daemon communicates with the management server and gets information (the plugin command) on what to run. It then passes results back to the management server, and the NEB module writes these results directly into nagios memory.

Now here comes what you've all been waiting to hear from me - DNX :) or, more generally, the Worker Nodes model.

It is similar to the classic distributed model in that you offload all active checks to a set of other servers. However, this is all done automatically: nagios schedules these checks rather than just seeing them as passive. With the NEB module architecture, results of checks are written directly into nagios memory rather than put in a command queue.

Both DNX and mod-gearman have 3 main components - a NEB module, a distribution server, and client nodes. A single distribution daemon runs side by side with the nagios daemon; client nodes talk to it and run all the checks, and the NEB module is the interface between nagios and the distribution server. In mod-gearman, two of these components come from the gearman project and only the module is custom-written for nagios.

DNX also includes a sync script which can be used to make sure plugins are the same on all servers, but personally I've just done it with ssh and rsync from cron.
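For the ssh and rsync approach just mentioned, a cron entry along these lines is enough; the worker hostnames and plugin path are assumptions, and it presumes passwordless ssh keys for the nagios user:

# /etc/cron.d/sync-nagios-plugins - push plugins to every worker twice an hour
*/30 * * * * nagios for w in worker1 worker2; do rsync -az --delete -e ssh /usr/local/nagios/libexec/ $w:/usr/local/nagios/libexec/; done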

Both dnxServer.so and dnxPlugin.so work like any other NEB modules; they're loaded into the same process address space as Nagios upon start up.

The newer dnxPlugin.so works the same, but offloads much of the work of the DNX server to a child process (dnxServer) that the plugin starts when it's loaded by Nagios.

The problem with the traditional integrated approach is that the DNX server starts several threads, which then reside and execute within the Nagios process. But Nagios is a multi-process application, rather than a multi-threaded application, and fork and threads don't get along very well together in the same process.
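Loading the module is the usual NEB one-liner in nagios.cfg; the module and config paths below are assumptions:

# nagios.cfg - load the DNX server as an event broker module
event_broker_options=-1
broker_module=/usr/local/nagios/lib/dnxServer.so /usr/local/nagios/etc/dnxServer.cfg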
Advantages of DNX and Mod-Gearman

Robust and Scalable
Checks are automatically distributed among all cluster worker nodes (round-robin on an equal basis by default)

All worker nodes are essentially the same and there is no additional re-configuration necessary to add a new node

This fully achieves Horizontal Scaling of nagios checks

Easy to Use in a Cloud Environment
As all nodes are the same, an existing worker node can be replicated, with no special config needed to start it

Adding nodes lets you expand the cluster on demand

Efficient Integration with Nagios
Using NEB loaded modules achieves low-level integration with nagios, much better than NSCA and the command queue

So the advantage of this solution is that it scales to handle essentially any number of service checks by just adding more servers, with no additional configuration necessary. This is pretty much what you want for horizontal scaling.

And since all nodes are the same that works very well for cloud computing where you can just clone the server.

Its integration with nagios is, as mentioned, via a NEB module; it offloads checks and writes results directly to and from nagios memory structures.

Remember to ensure that plugins are distributed to all servers and kept in sync.

Disadvantages of DNX and Mod-Gearman

Single Instance of Nagios Server
The solution has no direct disadvantages; however, it only achieves horizontal scaling of nagios checks.

It still relies on a single central nagios server to process the results, send alerts and schedule new checks.

Does not achieve fault-tolerance
If the central nagios server dies, the entire system is out

The author of this presentation does have a patch to DNX that allows results to be multicast to multiple instances of nagios servers (the second one would be a stand-by, not scheduling checks, only receiving results). This is experimental.

There is a whole slide here, but the disadvantage is essentially that you still have one single nagios server that has to handle all scheduling and notification.

This also means no fault-tolerance, although I wrote a patch to DNX and nagios to address it. I have another nagios installation to do in October on which to try it, and after that I will release it with some documentation.

DNX Architecture

DNX Server and DNX Client (Worker) daemons are multi-threaded. The client thread model is controlled by these settings:

#poolInitial = 20
#poolMin = 20
#poolMax = 100
#poolGrow = 10

Communication between server and client uses its own UDP protocol passing XML packets.

Almost all communication is from client to server. The client contacts the DNX server's dispatcher port, receives a list of checks to run, runs them, and returns results on the collector port:

channelDispatcher = udp://10.1.1.1:12480
channelCollector = udp://10.1.1.1:12481

The DNX Client can support having common checks built into the client. check_nrpe was included before, but was pulled out of the package as it required the nagios source.

I have a couple more slides on DNX. Basically it is a multi-threaded server. On the server side there are Timer, Collector, Registrar and Dispatcher threads, and the client will increase and decrease its number of threads as needed to run plugins. The settings to control this are similar to apache's. You should test your systems to find the upper limit.

Communication between DNX client and server uses a custom UDP-based XML protocol. UDP because we expect DNX clients to be located on the same network and don't want to bother with TCP overhead; if one or two packets sometimes get lost, it's not that important, because nagios will schedule more checks.

DNX can support extensions that are meant to replace some of the common plugins without the necessity of running external code. The only one that has been tried is the check_nrpe module, which was basically the NRPE source with a patch to make it into a library.

Client checks will be back if they can use external libraries, i.e. if check_nrpe can become a library, DNX will support it.

DNX System Internals

DNX Server System Internals

DNX Client (Worker Node)
System Internals

And this is the internal diagram of threads. The client uses a manager-worker thread model; the server is several static threads.

Mod-Gearman

Mod-Gearman System

Nagios Checks and
Mod-Gearman Queues

This is the mod-gearman architecture. Gearman is a little like a MapReduce system. Essentially you have clients that check whether there are any commands to run in one or more queues they belong to, and the server distributes checks among the queues.

This queue system is rather flexible, and it's possible to create queues for a specific hostgroup, servicegroup, etc. I do not know the internals of Gearman well, but I believe it is also written with a manager-worker thread model.
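As a sketch of that flexibility (the hostgroup names and server address are assumptions), both the NEB module config and each worker's config can name the queues they care about:

# mod_gearman NEB module config on the nagios server (excerpt)
server=localhost:4730
hostgroups=cloud-web,cloud-db

# worker.conf on a worker that should only serve the cloud-web queue
server=nagios-central.example.com:4730
hostgroups=cloud-web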

DNX vs Mod-Gearman

DNX
- Single package, no external dependencies. Includes all job cluster control components
- Hard to maintain and test for non-Linux environments
- Can use localCheckPattern in server configuration to direct jobs, but it is not documented
- Supports nagios-2.x with a patch and nagios-3.x as is
- Client can be extended with nagios-specific features. Planned are: Embedded Perl, check_icmp, check_snmp, check_nrpe

Mod-Gearman
- Built around the Gearman project; better maintained since Gearman has many uses
- Enjoys the benefits of wider testing on new releases
- Easy to configure and direct to separate queues depending on hostgroup & servicegroup
- Only supports nagios 3.x
- Supports eventhandlers and not just checks!
- Nagios-only features are hard to add at the node level

Now here is a comparison of DNX and Mod-Gearman.

DNX aims to be a single package with no external dependencies; it even has a simple XML parsing library written as part of it. Unfortunately this also means it's harder to maintain and test for new releases. Neither of the projects has a full-time developer, but Mod-Gearman is basically 90% Gearman and so it gets all the benefits from the larger project. DNX was sponsored by LDS, but from the 0.20 release it's all done by the community, with John Calcote still its main maintainer; the last release was in 2010, so the project is alive. However, planned features do not get added until somebody volunteers to program them. The features that were planned are: embedded perl, encrypting the communication channel for security reasons, optional TCP rather than just UDP, and passing nagios environment variables to worker nodes to make it even more like running inside nagios. Load balancing of event handlers may be added as well.

I do want to mention that DNX can support handling of certain checks by a subset of servers using the localCheckPattern directive; it was added in the 0.20 release and was a patch before. Mod-Gearman, as I mentioned, supports this very nicely with its queues, and it supports offloading of event handlers too.

DNX uses minimal external code and prefers to have all components written as part of it; for example, it includes its own XML parsing library. DNX does not have one author; multiple people worked on it and none can claim > 50% of the code, but it's been supported by LDS.

Mod-Gearman is built around the Gearman project and aims at maximum reuse of code from other projects.

Combining Shared Database
and Worker Nodes

Nagios cluster options can be combined! DNX or Mod-Gearman with Merlin or NDO are a great fit:
- DNX offers horizontal scaling for all checks and relieves Nagios of the need to run them
- Merlin provides horizontal scaling and failover for Nagios itself, for infrastructure of thousands of hosts

So the best news of all is that you can combine different nagios cluster models to create something better. The picture is from the DNX project. I've done this, though I personally prefer Merlin over NDO because it offers failover capabilities.

The combination that makes the most sense is Merlin or NDO with DNX or Mod-Gearman. This allows horizontal scaling of nagios checks, combined with the horizontal scaling and failover capabilities that Merlin offers for the Nagios servers themselves.

Ideal Fully Fault-Tolerant
Nagios Cluster Architecture

[Diagram: Nagios Server and Backup Nagios Server; Merlin/NDO DB with replicated Backup DB; DB Proxy with Standby DB Proxy; Nagios Web Interface Server and Backup Web Interface Server; four Worker Nodes; Performance Data (RRD) Server (like NagiosGrapher) with Backup, fed via udpecho; components linked by udp, heartbeat, replication and cross-monitoring]

Ideally you would have each of the above as a separate cloud server, but even those with 1000s of servers may find this hard to maintain


Now here is an overloaded diagram of a full nagios infrastructure that is fault-tolerant and can be horizontally scaled. If you have all the resources in the world, you can have each of the above boxes as a separate server; I've never gone quite that extreme, and my largest install was 500 hosts.

Also, just to explain the above: the DB Proxy and Web Interface servers should cross-monitor each other with a heartbeat, and you should set it up so that if one server dies, the other one starts to announce itself on the same IP. For those using Amazon, this would be done by changing the Elastic IP.
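On EC2 that takeover can be a few lines run by the surviving server; a sketch using the EC2 API tools of the time (the Elastic IP is a placeholder):

#!/bin/sh
# claim-eip.sh - standby claims the shared Elastic IP when its peer dies
ELASTIC_IP=203.0.113.10
MY_ID=$(curl -s http://169.254.169.254/latest/meta-data/instance-id)
ec2-associate-address -i "$MY_ID" "$ELASTIC_IP"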

DB Proxy is on an Elastic IP, and the standby takes over this IP if it dies

DB Proxy is MySQL Proxy or Spock Proxy; instead of a DB proxy it may be a set of memcache servers

Nagios Cloud Cluster with 4 hosts

[Diagram: MAIN NAGIOS SERVER and STANDBY NAGIOS SERVER, each running the Nagios daemon, Apache, MySQL DB with Merlin, PNP w/ RRD fed through NPCD, and a DNX Server; two DNX Client worker nodes; replication and cross-monitoring between the two nagios servers]

The standby server has all checks disabled (except checking the main nagios host)

Cross-monitoring of the other nagios server does not use the DNX cluster

If the main server dies, the backup takes over and registers itself in the dynDNS server, replacing the primary.

DNX Clients use the dynDNS address; they are restarted on server switch
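The switchover itself can be scripted; here is a hedged sketch using nsupdate for the dynamic DNS record, with the zone name, key file, addresses and the DNX client init script name all assumed:

#!/bin/sh
# takeover.sh - run on the standby when the main nagios server dies
# 1. point the dynDNS name at this server
nsupdate -k /etc/bind/Kdnx-update.key <<EOF
update delete nagios.example.com A
update add nagios.example.com 60 A 10.1.1.2
send
EOF
# 2. restart the DNX clients so they re-resolve the name
for w in worker1 worker2; do
    ssh "$w" /etc/init.d/dnx-client restart
done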


If you're starting small, this is a reasonable setup for a cluster. All checks are offloaded to worker nodes, and this frees up CPU resources on the nagios server to do performance graphing.

An elastic or shared IP can be used to point to the active nagios server, or you can register the primary server in dynamic DNS.

The standby server does not do any checks but is there, ready, if something happens to the primary server. One thing to mention: monitoring of the worker nodes and of the other nagios server is an exception and should be done directly by the nagios server, not by worker nodes.

As you grow, you can begin to separate components into separate servers, such as a separate database server and a separate performance graphing server.

Cross-monitoring of nagios servers should be done as a local service and not through DNX.

When starting small, what I often do is set up many services on one server: for example, all the stuff you see in Nagios may live on a server that also serves as a DNS server, central log host, and puppet or chef server. All of them are set up as a failover cluster, with DNX clients doing plugin checks on both of these servers (the standby nagios does not schedule checks). As things progress and you add servers, you clone this server and disable the services that will no longer run on it, to make independent monitoring, dns, and puppet servers.

Configuration of a cloud host

The best way to configure monitoring of cloud hosts with multiple instances is to have a template and define all services by hostgroups.
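A minimal sketch of that pattern, reusing the wprod-server template name from the next slide (the addresses and the service are assumptions):

# Template: every wprod host joins the hostgroup automatically
define host {
    name        wprod-server
    use         generic-host
    hostgroups  wprod
    register    0
}

# Services are attached to the hostgroup once, not per host
define service {
    use                  generic-service
    hostgroup_name       wprod
    service_description  HTTP
    check_command        check_http
}

# Starting instance w1 is then a three-line host definition
define host {
    use        wprod-server
    host_name  w1
    address    10.1.1.101
}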

Then starting a new host of the same type is just a matter of adding config like the above, but for w2, etc. One alternative is to add a few extra hosts to the nagios config with all service checks disabled, enabling them with a script when the server is launched.

define host { use wprod-server