
5 Strategies for Converged Infrastructure Efficiency

May 2015


RUN HOT... BUT NOT TOO HOT

Imagine buying a new sports car with the intent to drive it only one day per year. The driver would either have to be wastefully wealthy or insane, right? When you acquire something of value like that, you want to take it out, let it run, enjoy it — put it to the full use you envisioned before the purchase. You want to use it, but not abuse it. Wherever the line is between those two states, you want to live just a bit on the side of prudence. That’s the way to get the most value from your investment.

The same holds true for converged IT infrastructure. When an organization pays thousands upon thousands of dollars for IT processing capabilities, it makes intuitive sense that management would want shared compute, network and storage resources running on just this side of abuse. Keep them hot, but don’t burn them out; maximize the investment, but don’t blow it up. After all, isn’t that part of the strategy behind virtualization? When one application can’t fill a server’s capacity, running multiple virtual machines on the same hardware can.

However, this IT infrastructure efficiency best practice is not always followed. According to a June 2015 study by sustainability consultancy Anthesis Group and Stanford research fellow Jonathan Koomey, business and enterprise data center IT equipment utilization “rarely exceeds six percent.” Adding insult to injury, current data from the Uptime Institute reveal that up to 30 percent of the country’s 12 million servers are “actually ‘comatose’ – abandoned by application owners and users but still racked and running, wasting energy and placing ongoing demands on data center facility power and capacity.” The Anthesis study used data from TSO Logic spanning an install base of 4,000 physical servers. Thirty percent of these servers proved to be “using energy while doing nothing.”

Businesses that care about server efficiency and converged infrastructure ROI keep a steady eye on their resource utilization statistics. There is always the temptation to push resource consumption into the red and get the most bang for the invested buck. But the danger of maximizing utilization is obvious to anyone who has ever experienced the logjam of running a client system with 100 percent CPU or memory utilization. (Typically, the “red line” starts far before 100 percent utilization. In fact, IBM recently boasted a new server capable of holding 70 percent utilization without any performance impact.) The obvious path around such stalling is to add more resources, and that leads to a second inevitable truth: Ultimate performance carries incremental, hidden and unpredictable costs.

Efficiency is not about maxing out utilization, nor is it about achieving the highest possible MIPS, IOPS or any other standard metric. Technically, efficiency is about the ratio of useful work performed to the energy expended in doing that work. You want the greatest amount of output from your IT infrastructure for the lowest possible cost. It’s a bit like long-distance driving and working to achieve the highest possible miles per gallon through constant observation of time, speed and gas consumption in the face of varying traffic and weather conditions. Flooring the gas pedal is not always the best option. What can you do to optimize your converged infrastructure efficiency? While there are no 10-second, quick-fix answers, here are five strategies you can start implementing right now to bring your organization much closer to optimal efficiency and long-term cost savings.
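To make that ratio concrete, here is a minimal Python sketch (not from the whitepaper; the metric names and figures are hypothetical) that expresses efficiency as useful work delivered per dollar of operating cost, the same idea as miles per gallon:

```python
# Minimal sketch: efficiency as useful output per unit of cost.
# The metric names (useful_work, cost) are hypothetical; substitute whatever
# "useful work" and "cost" mean for your workload.

def efficiency(useful_work: float, cost: float) -> float:
    """Return useful work delivered per unit of cost (e.g., transactions per dollar)."""
    if cost <= 0:
        raise ValueError("cost must be positive")
    return useful_work / cost

# Example: compare two months of a cluster's operation (made-up figures).
january = efficiency(useful_work=12_000_000, cost=48_000)   # 250 transactions per dollar
february = efficiency(useful_work=12_500_000, cost=61_000)  # ~205 transactions per dollar

# February produced more raw output but less output per dollar: "flooring the
# gas pedal" raised throughput while lowering efficiency.
print(f"January:  {january:.0f} tx/$")
print(f"February: {february:.0f} tx/$")
```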

How can you measure the impact of such an environment of inefficiency? The Natural Resources Defense Council recently reported that if even half of the technical savings the group recommends, including the elimination of zombie servers and the adoption of a standardized server utilization metric, were realized, “electricity consumption for U.S. data centers could be cut by as much as 40 percent.” In 2014, that represented a savings of 46 billion kilowatt-hours annually, equivalent to the annual electricity consumption of nearly all the households in the state of Illinois, and would save U.S. businesses $4.5 billion annually.
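As a quick arithmetic aside (not part of the NRDC material quoted above), the quoted figures imply a blended electricity rate of roughly ten cents per kilowatt-hour, consistent with typical U.S. commercial rates:

```python
# Arithmetic check on the quoted savings figures (the document's own numbers).
savings_kwh = 46e9   # 46 billion kilowatt-hours per year
savings_usd = 4.5e9  # $4.5 billion per year

implied_rate = savings_usd / savings_kwh   # ~$0.098 per kWh
print(f"Implied blended rate: ${implied_rate:.3f}/kWh")
```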


STRATEGY #1: LINK CAPACITY MANAGEMENT WITH INFRASTRUCTURE PERFORMANCE

The point of optimal resource utilization will vary according to your infrastructure, OS platform, application and data characteristics. While targeting a 95 percent utilization rate may seem intuitive to some financial minds, such levels don’t allow for usage spikes or provide leeway for load balancing. Very broadly speaking, IT should target utilization thresholds in the 50 to 70 percent range. Once that level is reached and maintained, then you have a solid case for adding capacity to take before the finance department. Otherwise, if you walk in with 30 to 40 percent threshold goals, or no stats at all, finance is simply going to close its wallet and tell you to improve your existing levels of resource efficiency.
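As a rough illustration of how a 50 to 70 percent threshold policy might be checked before approaching finance, here is a simple sketch; the utilization samples and recommendation wording are hypothetical, not Xangati output:

```python
# Sketch: decide whether sustained utilization justifies a capacity request.
# The 50-70% band comes from the text above; the sample data are hypothetical.

TARGET_LOW, TARGET_HIGH = 0.50, 0.70

def capacity_recommendation(samples: list[float]) -> str:
    """samples: fraction of capacity used, e.g. hourly averages over a review period."""
    avg = sum(samples) / len(samples)
    if avg < TARGET_LOW:
        return f"avg {avg:.0%}: below target band; improve efficiency before asking for capacity"
    if avg <= TARGET_HIGH:
        return f"avg {avg:.0%}: inside target band; monitor and plan"
    return f"avg {avg:.0%}: sustained above {TARGET_HIGH:.0%}; solid case for adding capacity"

print(capacity_recommendation([0.42, 0.38, 0.45, 0.40]))  # finance will push back
print(capacity_recommendation([0.68, 0.74, 0.71, 0.77]))  # capacity request is defensible
```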

If you are in the 50 to 70 percent range, the standard quick-fix option is to buy more capacity. With servers, however, the relationship between capacity and performance is not so linear, and judging the efficiency benefit of a capacity addition can be difficult. For example, some operating systems will consume as much memory as they’re offered without using that memory. They’re merely reserving the memory in case it’s needed later. The reservation of this capacity shows up as utilization in resource analysis, making accurate efficiency analysis much harder to perform.
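The distinction can be sketched as follows; the field names (reserved versus active memory) are illustrative stand-ins for whatever your hypervisor or monitoring platform actually reports:

```python
# Sketch: separate memory that is merely reserved from memory actually in use.
# Field names are illustrative; map them to the counters your platform exposes
# (e.g., consumed vs. active memory).

from dataclasses import dataclass

@dataclass
class HostMemory:
    total_gb: float
    reserved_gb: float   # memory the guest OSes have claimed but may not be using
    active_gb: float     # memory recently touched by the workloads

    @property
    def apparent_utilization(self) -> float:
        return self.reserved_gb / self.total_gb

    @property
    def effective_utilization(self) -> float:
        return self.active_gb / self.total_gb

host = HostMemory(total_gb=256, reserved_gb=230, active_gb=96)
print(f"apparent:  {host.apparent_utilization:.0%}")   # ~90%: looks saturated
print(f"effective: {host.effective_utilization:.0%}")  # ~38%: plenty of headroom
```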

In response to a request for more capacity, finance will often ask if lower-efficiency resources are available to help, thinking (perhaps correctly) that a cheap pool of shared resources in the hand is better than unbudgeted new acquisitions in the bush. However, often, these resources may reside in a different administrative area, which may cause bureaucratic complications. More significantly, IT may run into the “weakest link” issue. An influx of inefficient capacity may impose a bottleneck on existing resources, causing the opposite effect to what was intended on both total application performance and end-user satisfaction. For instance, if IT’s request for more capacity is rewarded with instructions to use some older, slower systems currently sitting idle, yes, that’s more capacity, but those additional resources may drag on total infrastructure performance and sacrifice overall efficiency.

With this level of efficiency intelligence in hand, IT will have a better-informed and more persuasive case to take to finance and play a more responsible, positive role in boosting the company’s bottom line.

Clearly, there are many variables to weigh when seeking to balance capacity and performance. Xangati can inform these decisions with dashboard analysis tools that assess all available resources in the infrastructure and counsel IT managers about suitable capacity and performance possibilities. Xangati’s measurement models focus on performance degradation, so its service assurance analytics platform is particularly sensitive to finding the levels at which capacity saturation yields performance loss. Its recommendations identify where the sweet spot lies before that loss occurs.

Xangati has the breadth and robustness to correlate cross-silo intelligence, so it will provide a better analysis for organizations adopting hybrid-cloud infrastructures. If there is slack in the capacity, either on-premise or off-site, Xangati will find it and determine whether it would make a prudent home for VM migration.


STRATEGY #2: IDENTIFY WORKLOAD TYPE AND INFRASTRUCTURE PERFORMANCE REQUIREMENTS

Not every workload requires a juggernaut to drive it. In fact, some workloads are performance-insensitive and can function perfectly well on older or partially occupied infrastructure that is already running higher-priority loads. Lower performance needs make achieving high efficiency markedly easier, which can come into play at multiple points in the IT planning and review process.

Public cloud providers are particularly sensitive about efficiency. Consider the workload models of a Google Mail or Microsoft OneDrive infrastructure. Ultra-fast responsiveness for a budget-oriented (or even free) app isn’t nearly as important as keeping performance at modest, “good enough” levels while prioritizing the lowest possible back-end cost of operation. In contrast, private clouds don’t directly derive revenue from infrastructure. Rather, they often support other revenue-generating operations. There is often a higher emphasis on performance and responsiveness, even if it means sacrificing efficiency.

You may not be able to change the performance parameters of your workload, but you might be able to better fit your workload to appropriate infrastructure based on that workload’s true requirements.
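One lightweight way to encode that fit is a simple tiering rule; the tier names and latency budgets below are made up for illustration:

```python
# Sketch: match a workload to an infrastructure tier based on its true
# performance requirement. Tier names and latency budgets are hypothetical.

TIERS = [
    # (tier name, 95th-percentile latency the tier can sustain, in ms)
    ("all-flash converged cluster", 20),
    ("standard virtualized cluster", 100),
    ("older / partially occupied hosts", 500),
]

def place_workload(name: str, latency_budget_ms: float) -> str:
    """Pick the least expensive tier that still meets the workload's latency budget."""
    for tier, tier_latency in reversed(TIERS):   # cheapest (most relaxed) tier first
        if tier_latency <= latency_budget_ms:
            return f"{name}: place on {tier}"
    return f"{name}: needs the fastest tier available"

print(place_workload("interactive order entry", latency_budget_ms=50))
print(place_workload("nightly batch reporting", latency_budget_ms=2_000))
```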

STRATEGY #3: DETERMINE HOW MUCH PUBLIC CLOUD BELONGS IN YOUR MIX

The benefits of public cloud infrastructure are well known, headlined by cost savings and greater control for management, with cloud compute resources that are cheaper than ever, though that very affordability contributes to cloud-sprawl risk and governance issues. However, shared infrastructure almost always carries an inherent performance penalty, and the more critical the workload being placed on those shared resources, the higher the risk of incurring a performance limitation. Users can pay extra for more virtualized IT resources (i.e., CPU, memory and data stores), and doing so will generally deliver the desired performance benefits, but the more one upgrades, the more tenuous the case for using shared infrastructure.

Still, the public/private decision is rarely so simple. One complication lies in the fact that workloads are dynamic in their resource utilization. Shared environments excel at quickly adapting to shifting workload demands. Consider a tripling of traffic during the holidays. By the end of January, nobody wants to continue paying for all of those extra resources, and public infrastructure enables the elimination of that overhead with little more than a few mouse clicks.

The higher the performance need, the lower the concern for efficiency, and thus the less you’ll likely want to rely on public resources in your hybrid mix. To know just how much public infrastructure belongs in your operations, you’ll want a way to visualize your workload’s performance as a function of capacity. Is the relationship linear, or do more benefits accrue at certain levels? Similarly, you want to see the relationship between performance and efficiency. Shared infrastructure can improve both ratios, but realizing these benefits requires assiduous resource management tools to determine optimal targets and improve ROI.
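One way to answer the linearity question is to look at the marginal performance gain per unit of added capacity; the measurements in this sketch are invented for illustration:

```python
# Sketch: examine marginal performance gain per unit of added capacity.
# The (capacity, throughput) pairs are made-up measurements; collect your own
# from load tests or monitoring history.

measurements = [
    # (vCPUs allocated, transactions per second observed)
    (8, 4_000),
    (16, 7_600),
    (32, 13_000),
    (64, 16_500),
]

for (c0, t0), (c1, t1) in zip(measurements, measurements[1:]):
    marginal = (t1 - t0) / (c1 - c0)
    print(f"{c0:>3} -> {c1:>3} vCPUs: {marginal:6.1f} extra tx/s per added vCPU")

# If the marginal gain falls off sharply (as it does above past 32 vCPUs),
# paying for more capacity buys little performance, which weakens the case
# for simply upgrading shared resources instead of right-sizing the workload.
```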


STRATEGY #4: BEGIN WITH END-USER QUALITY OF EXPERIENCE (QoE)

When you listen to great music on quality headphones, you may know that the full sonic range produced by your high-end cans is 20 Hz to 20 kHz, but that tells you nothing about your listening experience. When the bass drum ceases to rumble, or when the cymbals no longer sizzle, you can tell. You may not know that you’ve lost 150 Hz off the lows and your highs now cut off at 14 kHz, but you know that your fidelity has tanked. Your Quality of Experience, or QoE, is terrible. The same holds true for an enterprise tablet end-user trying to enter a customer order. When every tap of the stylus results in a five-second lag, the QoE monitoring tool is going to generate a trouble ticket.

Increasingly, the starting metric for infrastructure performance is end-user QoE. IT may think that operating at 70 percent of capacity is acceptable, but if QoE reports start trickling in at 50 percent and become a torrent at 60 to 70 percent utilization, then you need to know which fine-grain metric to watch.
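A small sketch of that kind of fine-grain correlation, using hypothetical ticket and utilization samples rather than real monitoring data:

```python
# Sketch: bucket QoE trouble tickets by the utilization observed when each
# ticket was opened, to see where complaints actually start piling up.
# The sample data are hypothetical.

from collections import Counter

tickets_at_utilization = [0.48, 0.52, 0.55, 0.61, 0.63, 0.64, 0.66, 0.68, 0.69, 0.71]

buckets = Counter(int(u * 10) * 10 for u in tickets_at_utilization)  # 10% buckets

for low in sorted(buckets):
    print(f"{low:>2}-{low + 9}% utilization: {'#' * buckets[low]} ({buckets[low]} tickets)")

# If tickets trickle in around 50% and pile up by 60-70%, the QoE data,
# not the nominal 70% capacity target, should set the operating threshold.
```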

By the same token, increasing performance may not increase the QoE index. It’s the old sports car at rush hour analogy: Driving a faster car won’t matter when no one can go over 40 MPH. If this is the case — and you’ll obviously want analysis that proves it one way or the other — then paying for more performance is a waste of money. In theory, one can have maximum performance and maximum efficiency, but, in the real world, these two things tend to be mutually exclusive. If other methods fail to decide which of these to prioritize, perhaps QoE metrics, including application health, can decide the matter.

STRATEGY #5: ESTABLISH A BASELINE, THEN EXTRAPOLATE

Gauging the ultimate capacity of a set of cloud infrastructure resources can be very difficult if testing begins when that infrastructure is already under dynamic overload. The best way to obtain a solid, dependable baseline against which future assessments can be made is to start with an empty, “bare metal” configuration running nothing but the base platform.

One by one, add applications and VMs, monitoring how utilization characteristics change as load increases all the way up to current, “production-level” utilization. With this in hand, you should be able to extrapolate the gap from present utilization to your threshold target. Be sure to build in multiple pertinent variables. For example, are the apps you’ll soon be adding likely to exhibit the same workload characteristics as those added over the last couple of years? If not, how do you need to adjust your model? Are there opportunities for increasing efficiency that could positively impact your extrapolations?
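A minimal sketch of the extrapolation step, assuming utilization grows roughly linearly with workload count (the baseline figures are hypothetical):

```python
# Sketch: extrapolate from a measured baseline to a utilization threshold.
# Assumes utilization grows roughly linearly with workload count; the
# baseline figures are hypothetical.

baseline_util = 0.08   # bare-metal configuration running nothing but the base platform
current_util = 0.52    # measured at today's production load
current_vms = 110      # VMs/apps contributing to that load
threshold = 0.70       # target ceiling from capacity planning

util_per_vm = (current_util - baseline_util) / current_vms
headroom_vms = (threshold - current_util) / util_per_vm

print(f"Each additional VM adds ~{util_per_vm:.2%} utilization")
print(f"Roughly {headroom_vms:.0f} more comparable VMs before hitting {threshold:.0%}")
# If upcoming apps have heavier workload characteristics than the historical
# mix, scale util_per_vm up accordingly before trusting the extrapolation.
```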

SERVICE ANALYTICS MAKES THE DIFFERENCE

The quest for efficiency is, in large part, a search for pockets of inefficiency, and finding those pockets manually can be incredibly troublesome and time-consuming. Xangati’s virtualization management software specializes in hunting down these opportunities and presenting their potential in graphical, intuitive terms. The Xangati platform excels at integrating application visibility into its performance analysis engine and into its conclusions about the IT infrastructure’s total efficiency picture. These conclusions, in turn, can form the basis of optimization strategies that balance greater efficiency against capacity demands.


[Infographic: How Hard Should You Drive Your Infrastructure? Run hot... but not too hot.]

Efficiency: Too little utilization tells finance that you’re wasting money (the “Finance Freak-Out Zone”). Too much utilization means you’re crushing performance and user experience (the “IT Freak-Out Zone”). Optimal efficiency sits between the two.

Up to 30% of servers are comatose.[1] Utilization “rarely exceeds 6%.”[2] Utilization metrics can help cut power by up to 40%.[3]

Hybrid mix: How much of your infrastructure belongs on-premise? Can you track your dynamic workloads in real time? Public cloud resources maximize efficiency; this gets you the best output per dollar, but often at the compromise of performance. How much efficiency you want will depend in part on your workload.

No matter where you are on the road, Xangati analytics will reveal your infrastructure’s true utilization and help fuel you to real optimization.


1: http://bit.ly/1E0VwR0
2: http://bit.ly/1d9CCKD
3: http://bit.ly/1WCoBYV



XANGATI IT INFRASTRUCTURE EFFICIENCY INDEX

When IT infrastructure administrators are focused on ensuring high availability of virtual desktops and infrastructures, Xangati can be deployed as an intelligent data analytics layer that reports on:

• Resolution of performance contentions (referred to as Performance Healing)

• Avoidance of resource saturation (referred to as Capacity Planning)

• Improvement of resource under-utilizations (referred to as Efficiency Optimization)

The objective of a software-defined data center (SDDC) is to deliver SLAs to the application owners running in it by minimizing capacity saturation while maximizing resource utilization. In this context, minimizing resource saturation is referred to as capacity planning; maximizing resource utilization is referred to as efficiency optimization.

Capacity planning determines whether or not to add resource capacity to avoid SLA-reducing performance contentions. Efficiency optimization assesses whether and how to balance resource usage to avoid SLA-reducing performance contentions. Both of these activities should be performed regularly (usually daily), which makes them proactive. Performance healing, by contrast, is reactive: if the proactive work were done perfectly, there would be no need for it.

Both capacity planning and efficiency optimization can be used to avoid and remediate performance contentions. Xangati adds significant value to this scenario by providing the service assurance analytics platform that ensures the health of SDDC applications while maximizing ROI through integrated and intelligent capacity planning, efficiency optimization and performance healing.

Xangati Executive Dashboard


www.xangati.com
Phone: +1 (408) 252-0505
Sales Inquiries: [email protected]
[email protected]

Xangati is a leading virtualization performance monitoring and service assurance analytics innovator serving complex virtualized data centers and hybrid cloud environments. Over 400 customers among enterprises, government agencies, healthcare organizations, educational systems and cloud managed service providers trust Xangati to gain real-time insights into the performance of their virtual machines, Web applications and virtual desktop infrastructure environments, as well as underlying network, storage and compute components.

Xangati management dashboards, built on patented in-memory architecture, provide a live, continuous and interactional view into the entire IT infrastructure with predictive analytics and prescriptive remediation actions. Organizations such as Comcast, British Gas, Colliers, Harvard University and the U.S. Army have leveraged Xangati to resolve end-user issues more quickly, optimize virtual applications, diagnose root causes of contention storms and assure overall infrastructure health. Xangati is headquartered in Silicon Valley and can be found at www.xangati.com.

Additional Resources: Fully-Functional 14-Day Free Trial: https://xangati.com/downloads/

Xangati’s live, second-by-second infrastructure management dashboard provides 300 times the granularity of performance metrics available from agent-based competitor solutions, delivering an unparalleled service assurance analytics framework for virtual infrastructures:

• Performance Problem Triage with performance problem tracking, analysis and remediation using deep analytics that correlate cross-silo contention issues and pinpoint root causes

• Performance Problem Prevention with capacity analytics that are tied to performance analytics so that end-user experience issues become more predictable

• Performance and Efficiency Monitoring with real-time streaming data at unprecedented speed and end-to-end scale, from app to VM