
G00214402

Hype Cycle for IT Operations Management, 2011

Published: 18 July 2011

Analyst(s): Milind Govekar, Patricia Adams

Adoption of new technology and increasing complexity of the IT environment continue to attract IT operations vendors that provide management technology. Use this Hype Cycle to manage your expectations regarding these management technologies and processes.

Table of Contents

Analysis
  What You Need to Know
  The Hype Cycle
    Changes and Updates
  The Priority Matrix
  Off the Hype Cycle
  On the Rise
    DevOps
    IT Operations Analytics
    Social IT Management
    Cloud Management Platforms
    Behavior Learning Engines
    Application Release Automation
    SaaS Tools for IT Operations
    Service Billing
    Release Governance Tools
    IT Workload Automation Broker Tools
    Workspace Virtualization
  At the Peak
    Capacity Planning and Management Tools
    IT Financial Management Tools
    IT Service Portfolio Management and IT Service Catalog Tools
    Open-Source IT Operations Tools
    VM Energy Management Tools
    IT Process Automation Tools
    Application Performance Monitoring
    COBIT
  Sliding Into the Trough
    IT Service View CMDB
    Real-Time Infrastructure
    IT Service Dependency Mapping
    Mobile Device Management
    Business Service Management Tools
    Configuration Auditing
    Advanced Server Energy Monitoring Tools
    Network Configuration and Change Management Tools
    Server Provisioning and Configuration Management
    ITIL
    Hosted Virtual Desktops
    PC Application Streaming
    IT Change Management Tools
    IT Asset Management Tools
    PC Application Virtualization
    Service-Level Reporting Tools
  Climbing the Slope
    IT Event Correlation and Analysis Tools
    Network Performance Management Tools
    IT Service Desk Tools
    PC Configuration Life Cycle Management
  Entering the Plateau
    Network Fault-Monitoring Tools
    Job-Scheduling Tools
  Appendixes
    Hype Cycle Phases, Benefit Ratings and Maturity Levels
Recommended Reading

Page 2 of 84 Gartner, Inc. | G00214402

List of Tables

Table 1. Hype Cycle Phases
Table 2. Benefit Ratings
Table 3. Maturity Levels

List of Figures

Figure 1. Hype Cycle for IT Operations Management, 2011
Figure 2. Priority Matrix for IT Operations Management, 2011
Figure 3. Hype Cycle for IT Operations Management, 2010

Analysis

What You Need to Know

This document was revised on 18 August 2011. For more information, see the Corrections page on gartner.com.

As enterprises continue to increase adoption of dynamic technologies and styles of computing, such as virtualization and cloud computing, the IT operations organization faces several challenges: implementing, administering and supporting these new technologies to deliver business value, and continuing to manage the complex IT environment. Consequently, the IT operations organization plays a key role in becoming a trusted service provider to the business. This, however, is a journey of service orientation and continuous improvement, which will result in favorable business outcomes.

The expectations of the business in terms of agility, low fixed costs and service orientation have risen due to increased visibility of offerings, such as cloud computing-based services. The promise of new technology to deliver on these business expectations continues; therefore, the hype associated with IT operations technology used to ensure service quality, agility and customer satisfaction continues. Thus, making the right choices and investments in this technology becomes important. This Hype Cycle provides information and advice on the most important IT operations tools, technologies and process frameworks, as well as their level of visibility and market adoption. Use this Hype Cycle to review your IT operations portfolio, and to update expectations for future investments, relative to your organization's desire to innovate and its willingness to assume risk.

The Hype Cycle

As the global economy has shown signs of recovery during the last 12 months, investments in IT reflect this change. Cost optimization continues to be a key focus for many enterprises, yet there is a strong drive to make incremental investments in innovation and best practices, especially in the area of IT service management. According to Gartner CIO polling, IT budgets have increased very modestly; therefore, IT ops and cloud management platforms are seen as opportunities to reduce cost through automation, thus enabling IT to invest in growth and transformation projects. Achieving this, in conjunction with reducing their spending, helps IT ops to "run the business."

Changes and Updates

Along with economic changes, IT infrastructure and operations has experienced significant disruption, which is represented in this Hype Cycle. IT organizations have continued to build upon their virtualization and cloud computing strategies. The value proposition and risks associated with private, hybrid and external cloud are gaining traction in some organizations. The disruption is occurring in several areas, most notably the emergence of the DevOps arena and its intersection with release management. DevOps represents a fundamental shift in thinking that recognizes the importance and intersection of application development with IT ops and the production environment. Just as silos were broken down with the adoption of IT service management that crossed IT operations disciplines, the shift is now at a higher level, across IT application development and IT operations, to build a more agile and responsive IT organization. In conjunction with cloud and virtualization, this shift is resulting in new approaches that make IT more business-aligned and remove many roadblocks. However, this is a cultural change that requires employees to be willing and able to accept, adapt, implement and build upon these new approaches.

Other changes that have occurred on the 2011 Hype Cycle are in the process and tool areas. With respect to process, ITIL version 2 (ITIL V2) was retired from the Hype Cycle, as ITIL V3 began to outweigh ITIL V2 in relevance in some of the process design guidelines. The adoption of ITIL V3 has grown since its introduction four years ago and has almost become mainstream, with a large number of Gartner clients having adopted at least one component of the most recent books. Thus, we have just one combined entry (ITIL) for the two ITIL versions.

In 2011, application performance monitoring (APM) technology became a consolidation of the various related technologies, such as end-user performance monitoring, application transaction profiling and application management, which are typically part of the APM market. We have also added behavior learning engines (BLEs). When used with other IT operations tools, BLEs improve the proactivity and predictability of the IT operations environment. Furthermore, we have combined virtual resource capacity planning and resource capacity planning tools into capacity planning and management tools, which reflects the market and the evolution of tools and technologies to increasingly provide a holistic and consolidated approach.
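A BLE typically learns a statistical baseline of "normal" behavior from historical telemetry and flags deviations from it, rather than relying on static thresholds. The following is a deliberately simplified sketch of that idea, not any vendor's actual algorithm: a rolling window learns typical response times, and points far outside the learned band are flagged.

```python
from statistics import mean, stdev

def detect_anomalies(series, window=20, threshold=3.0):
    """Flag points that deviate from a learned rolling baseline.

    Toy illustration of a behavior learning engine: learn 'normal'
    from the last `window` samples, then flag values more than
    `threshold` standard deviations away from the learned mean.
    """
    anomalies = []
    for i in range(window, len(series)):
        baseline = series[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(series[i] - mu) > threshold * sigma:
            anomalies.append(i)
    return anomalies

# Hypothetical data: steady response times with one spike at index 25.
latencies = [100.0 + (i % 5) for i in range(40)]
latencies[25] = 500.0
print(detect_anomalies(latencies))  # only the spike is flagged
```

Note that once the spike enters the baseline window, the learned band widens, so the engine does not misflag the normal points that follow; real products use far more sophisticated models, but the baselining principle is the same.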

As IT operations is increasingly run like a business, IT financial management becomes important beyond just chargeback. Thus, we have introduced IT financial management tools to match this change and subsume chargeback tools. Furthermore, we have added service billing, because it is also important in this financial context. We have renamed run book automation to IT process automation to show the precise process-to-process integration and the associated IT operations tools integrations. We also have renamed network monitoring tools to network fault-monitoring tools to more accurately describe their functions.


To capture the "wisdom of IT operations specialists," social IT management tools that enable and capture collaboration have emerged, particularly in the IT service desk area, and thus are on this Hype Cycle. Energy management is always a concern for IT operations organizations; therefore, virtual machine (VM) energy management and advanced server energy monitoring tools, for use specifically within data centers, have been positioned here. PC energy management is already represented in the PC configuration life cycle management tools. Last, but not least, we are seeing an emergence of IT operations analytics tools that more-advanced and mature IT operations organizations are beginning to use to combine financial metrics and business value metrics, forming the platform for business intelligence (BI) for IT.
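As a concrete, entirely hypothetical illustration of the "BI for IT" idea, the sketch below joins per-service availability figures with an assumed hourly business cost to estimate the cost of downtime; the service names, cost figures and metric schema are invented for illustration and do not reflect any particular analytics product.

```python
def service_scorecard(ops_metrics, cost_per_hour):
    """Join operational and financial metrics into one per-service view.

    `ops_metrics` maps service -> availability data for a period;
    `cost_per_hour` maps service -> assumed business cost of one hour
    of outage. Both inputs are hypothetical placeholders.
    """
    scorecard = {}
    for service, m in ops_metrics.items():
        downtime_h = m["period_hours"] * (1 - m["availability"])
        scorecard[service] = {
            "availability_pct": round(m["availability"] * 100, 2),
            "downtime_hours": round(downtime_h, 1),
            "downtime_cost": round(downtime_h * cost_per_hour[service], 2),
        }
    return scorecard

# Hypothetical inputs: one month (720 h) of availability data plus an
# assumed business cost per hour of outage for each service.
ops = {
    "web-store": {"period_hours": 720, "availability": 0.999},
    "reporting": {"period_hours": 720, "availability": 0.95},
}
costs = {"web-store": 1000.0, "reporting": 50.0}
print(service_scorecard(ops, costs))
```

The point of the combination is visible in the output: the technically "better" service (web-store) can still carry the larger downtime cost once business value is factored in.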

While the recovery from the recession is still under way, IT infrastructure and operations continues to be a big part of IT budgets. Well-run IT operations organizations take a service management view across their technology silos, and strive for excellence and continuous improvement. Thus, to manage IT operations like a business requires a strong combination of business management (added to model the ITScore dimensions), organization, processes and tools. This journey toward business alignment, while managing cost, needs to be managed through a methodical, step-by-step approach. Gartner's ITScore for infrastructure and operations is a maturity model that has been developed to provide this guidance.

Virtualization is ubiquitous in production environments to improve agility and resource utilization, and to lower costs. Cloud computing has the potential for improving agility, and lowering capital expenditure (capex) and fixed costs in the IT environment, as well as lowering operating expenditure (opex) through automation with private clouds. However, both these approaches are increasing the complexity of the IT environment. Managing end-to-end service delivery in this dynamic environment is challenging, and has led to new management technology entering the marketplace, with the promise to manage these dynamic, yet complex, environments. Running IT operations as a business means having a balanced and adaptable combination of organization, processes and technologies. Therefore, we position organizational governance methodologies (such as COBIT) and process frameworks (such as ITIL) on the Hype Cycle and look at a wide range of IT operations technologies, from new technologies (such as IT operations analytics and cloud management platforms) to mature technologies (such as end-user monitoring and job-scheduling tools).

Most of the technologies and frameworks have moved gradually along the Hype Cycle. IT asset management tools have dropped into the Trough of Disillusionment, because they have not delivered expected benefits. Many of the technologies are climbing the Slope of Enlightenment, as opposed to landing on the Plateau of Productivity, despite having adoption of 20% to 50%, or in some cases more than 50%. The reason is that they have not fully delivered the benefits expected by users who have implemented them.

This Hype Cycle should benefit most adoption profiles (early adopters of technology, mainstream, etc.). For example, enterprises that are leading adopters of technology should begin testing technologies that are still early in the Hype Cycle. However, risk-averse clients may delay adoption of these technologies. The earlier or higher the technology is positioned on the Hype Cycle, the higher the expectations and marketing hype; therefore, manage down your expectations and implement specific plans to mitigate any risk from using that technology.



We note three important considerations for using this Hype Cycle:

■ Creating a business case for new technologies driven by ROI is important for organizations with a low tolerance for risk, whereas highly innovative organizations that have increased their IT operations budgets likely will gain a competitive advantage from a technology's benefits.

■ Innovative technologies often come from smaller vendors with questionable viability. It is likely that these vendors will be acquired, exit the market or go out of business, so plan carefully.

■ While budget constraints are slowly easing, organizations should consider the risks they're willing to take with new, unproven technologies, as well as the timing of their adoption. Weigh risks against needs and the technology's potential benefits.

Figure 1 depicts technologies on the Hype Cycle for IT Operations Management, 2011.


Figure 1. Hype Cycle for IT Operations Management, 2011

[Graphic: the technologies listed in the Table of Contents, plotted as expectations over time across the Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases. Years to mainstream adoption are keyed as: less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years; obsolete before plateau.]

As of July 2011

Source: Gartner (July 2011)


The Priority Matrix

The Priority Matrix maps the time to maturity of technologies and processes on a grid in an easy-to-read format that answers two questions:

■ How much value will an organization get from a technology?

■ When will the technology be mature enough to provide this value?

However, technology and processes have to be in lock-step to achieve full benefits; thus, we also have process frameworks, such as ITIL and COBIT, on this Hype Cycle.

Virtualization, software as a service (SaaS) and cloud computing continue to broaden the service delivery channels for IT operations. Business users are demanding more transparency for the services they receive, but they also want improved service levels that provide acceptable availability and performance. They are looking to understand the fixed and variable costs that form the basis of the service they want and receive. They are also looking at data security, increased service agility and responsiveness. Many business customers have circumvented IT to acquire public cloud services, which has led many IT organizations to invest in private cloud services for some of their most-used and highly standardized sets of services.

IT organizations often have sought a harmonious mix of organization, processes and technology that will deliver IT operational excellence to run IT operations like a business. There are a few truly transformational technologies and processes, such as DevOps, ITIL and real-time infrastructure (RTI). Most technologies provide incremental business value by lowering the total cost of ownership (TCO), improving quality of service (QoS) and reducing business risks.

Fairly mature technologies, such as job-scheduling tools and network fault-monitoring tools, enable IT operations organizations to keep IT production workloads running through continuous automation and by lowering network mean time to repair (MTTR). However, the complexity of IT operations environments continues to rise, as the implementation and adoption of service-oriented architecture (SOA), Web services, virtualization technologies and cloud computing increase. This presents challenges in such areas as IT financial management and improving IT operations' proactive responses to deliver business benefits.

Meanwhile, due to cost pressures, open-source visibility has increased in IT operations management. These tools provide basic monitoring capabilities, and most of the implementation of automated actions is done by script writing, resulting in fairly high maintenance efforts and longer implementation times. The focus of these tools has changed from monitoring individual infrastructure components, including networks and servers, to monitoring activities across infrastructure components, from an end-to-end IT services perspective. Furthermore, these tools have evolved to automate processes such as incident management. Such tools have the potential to lower the license costs of commercial tools, but pure-play open-source tools may increase the total cost of support.
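The script-driven automation described above usually amounts to a loop of health checks with a remediation action attached to each failing component. The sketch below illustrates the pattern in the abstract; the probes and repair actions are placeholders standing in for whatever a real deployment would script (service restarts, failovers and so on), and the component names are invented.

```python
def run_checks(checks, remediations, log):
    """Run monitoring checks and fire scripted remediations on failure.

    `checks` maps component name -> zero-argument probe returning True
    when healthy; `remediations` maps component name -> zero-argument
    repair action. Returns the list of components that failed.
    """
    incidents = []
    for component, probe in checks.items():
        if probe():
            log.append(f"OK {component}")
            continue
        incidents.append(component)
        log.append(f"FAIL {component}")
        action = remediations.get(component)
        if action is not None:
            action()  # the scripted automated action
            log.append(f"REMEDIATE {component}")
    return incidents

# Simulated environment: the web tier is down, the database is healthy.
state = {"web": "down", "db": "up"}
checks = {
    "web": lambda: state["web"] == "up",
    "db": lambda: state["db"] == "up",
}
remediations = {"web": lambda: state.update(web="up")}  # stand-in "restart"

log = []
incidents = run_checks(checks, remediations, log)
print(incidents, state["web"])
```

The maintenance burden the text mentions follows directly from this shape: every probe and every remediation is custom code that someone has to keep current as the environment changes.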

IT financial management, along with IT asset management and automation technologies, will take center stage through 2011 as IT operations' quest for continuous improvement and lower costs continues. The focus on cost transparency for costs associated with the service portfolio will cause interest in service catalog tools and APM tools to rise through 2011. The integration of IT operations tools is continuing at the data level, using technologies such as the configuration management database (CMDB), and at the process level, using technologies such as IT workload automation broker tools and IT process automation (ITPA).
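Data-level integration through a CMDB is, at its core, a shared graph of configuration items (CIs) and their dependencies that other tools can query. As a simplified, hypothetical sketch of one common query, impact analysis, the dependency graph below is walked breadth-first to find every item affected by a failed component; the CI names are invented.

```python
from collections import deque

def impacted_items(cmdb, failed_ci):
    """Return every configuration item downstream of a failed CI.

    `cmdb` maps each CI to the list of items that depend on it
    (a toy stand-in for a real CMDB's relationship data). A
    breadth-first walk from the failed CI collects everything
    that is directly or transitively impacted.
    """
    seen, queue = set(), deque([failed_ci])
    while queue:
        ci = queue.popleft()
        for dependent in cmdb.get(ci, ()):
            if dependent not in seen:
                seen.add(dependent)
                queue.append(dependent)
    return seen

# Hypothetical CI graph: each key maps to the items that depend on it.
cmdb = {
    "switch-3": ["db01"],
    "db01": ["app-server"],
    "app-server": ["payroll", "crm"],
}
print(sorted(impacted_items(cmdb, "switch-3")))
```

A service view CMDB (discussed later in this Hype Cycle) layers business-service names over exactly this kind of dependency data, which is what lets event and change tools report impact in service terms.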

Overall, there is no technology on this Hype Cycle that will mature in more than 10 years, and there is only one technology (social IT management) that has relatively low benefit, but this could change in the coming years as it matures (see Figure 2). The advantages of implementing IT operations management technologies continue to be to lower the TCO of managing the complex IT environment, improve the quality of service and lower business risk.


Figure 2. Priority Matrix for IT Operations Management, 2011

[Graphic: benefit rating plotted against years to mainstream adoption (less than 2 years; 2 to 5 years; 5 to 10 years; more than 10 years). By benefit rating:]

Transformational: DevOps; ITIL; Real-Time Infrastructure

High: Behavior Learning Engines; Hosted Virtual Desktops; IT Service Desk Tools; PC Application Virtualization; Application Performance Monitoring; Capacity Planning and Management Tools; Cloud Management Platforms; Configuration Auditing; IT Change Management Tools; IT Financial Management Tools; IT Service Dependency Mapping; IT Service View CMDB; IT Workload Automation Broker Tools; Open-Source IT Operations Tools; Server Provisioning and Configuration Management; Service Billing; VM Energy Management Tools; Workspace Virtualization; Business Service Management Tools; IT Operations Analytics; IT Process Automation Tools; IT Service Portfolio Management and IT Service Catalog Tools

Moderate: Advanced Server Energy Monitoring Tools; Job-Scheduling Tools; Network Fault-Monitoring Tools; Network Performance Management Tools; PC Configuration Life Cycle Management; Application Release Automation; COBIT; IT Asset Management Tools; IT Event Correlation and Analysis Tools; Network Configuration and Change Management Tools; PC Application Streaming; SaaS Tools for IT Operations; Service-Level Reporting Tools; Mobile Device Management; Release Governance Tools

Low: Social IT Management

As of July 2011

Source: Gartner (July 2011)


Off the Hype Cycle

Some of the technologies on this Hype Cycle have been subsumed or replaced by other technologies. For example, virtual resource capacity planning and resource capacity planning tools have been combined into one entry called capacity planning and management tools; chargeback tools are now part of IT financial management tools; end-user monitoring tools, application management tools and application transaction profiling tools are now part of APM tools; ITIL V2 and ITIL V3 process frameworks have been combined into the ITIL entry; and mobile service-level management tools and business consulting services are off the Hype Cycle, due to their lack of direct relevance to the IT operations area.

On the Rise

DevOps

Analysis By: Cameron Haight; Ronni J. Colville; Jim Duggan

Definition: Gartner's working definition of DevOps is "an IT service delivery approach rooted in agile philosophy, with an emphasis on business outcomes, not process orthodoxy." The DevOps philosophy (if not the term itself) was born primarily from the activities of cloud service providers and Web 2.0 adopters as they worked to address scale-out problems due to increasing online service adoption. DevOps is bottom-up-based, with roots in the Agile Manifesto and its guiding principles. Because it doesn't have a concrete set of mandates or standards, or a known framework (e.g., ITIL, CMMI), it is subject to a more liberal interpretation.

In Gartner's view, DevOps has two main focuses. First is the notion of a DevOps "culture," which seeks to establish a trust-based foundation between development and operations teams. In practice, this is often centered on the release management process (i.e., the managed delivery of code into production), as this can be a source of conflict between these two groups, often due to differing objectives. The second is the leveraging of the concept of "infrastructure as code," whereby the increasing programmability of today's modern data centers provides IT an ability to be more agile in response to changing business demands. Here, again, the release management process is often targeted through the increasing adoption of automation to improve overall application life cycle management (ALM). Practices like continuous integration and automated regression testing should also be mastered to increase release frequency, while maintaining service levels.
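The essence of "infrastructure as code" is that the desired configuration is versioned data, and a converger applies only the changes needed to reach it, so repeated runs are safe (idempotent). The sketch below shows that idea in miniature, with in-memory dictionaries standing in for real infrastructure state; the package names and versions are invented, and real tools obviously act on hosts rather than dictionaries.

```python
def apply_state(desired, actual):
    """Converge `actual` toward `desired` and return the change plan.

    `desired` is the declared configuration (the 'code'); `actual`
    stands in for live infrastructure state. Only differences are
    changed, so a second run on converged state returns an empty plan.
    """
    plan = []
    for key, value in desired.items():
        if actual.get(key) != value:
            plan.append(("set", key, value))
    for key in sorted(set(actual) - set(desired)):
        plan.append(("remove", key))
    for step in plan:
        if step[0] == "set":
            actual[step[1]] = step[2]
        else:
            del actual[step[1]]
    return plan

desired = {"nginx": "1.0.14", "openssl": "1.0.0d"}  # hypothetical versions
actual = {"nginx": "0.8.54", "telnetd": "legacy"}
print(apply_state(desired, actual))  # first run: three changes
print(apply_state(desired, actual))  # second run: [] (idempotent)
```

Because the change plan is computed rather than hand-scripted, it can be reviewed, versioned and replayed, which is exactly what reduces the risk of customized release scripting that the surrounding text describes.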

Position and Adoption Speed Justification: While DevOps has its roots in agile methodologies, and, from that perspective, is not totally new, its adoption within traditional enterprises is still very limited, which is the primary reason for our positioning. For IT organizations, the early focus is on streamlining release deployments across the application life cycle, from development through to production. Tools are emerging that address building out a consistent application or service model to reduce the risks stemming from customized scripting, while improving deployment success due to more-predictable configurations. The adoption of these tools is not usually associated with development or production support staff per se, but rather with groups that "straddle" both development and production, typically requiring higher organizational maturity.

Gartner, Inc. | G00214402 Page 11 of 84

DevOps does not preclude the use of other frameworks or methodologies, such as ITIL, and, in fact, the potential exists to incorporate some of these other best-practice approaches to enhance overall service delivery. Enterprises that are adopting a DevOps approach often begin with one process that can span both development and operations. Release management, while not mature in its adoption, is becoming the pivotal starting point for many DevOps projects.

User Advice: While there is growing hype about DevOps, potential practitioners need to know that the successful adoption or incorporation of this approach is contingent on an organizational philosophy shift — something that is not easy to achieve. Because DevOps is not prescriptive, it will likely result in a variety of different manifestations, making it more difficult to know whether one is "doing" DevOps. However, this lack of a formal process framework should not prevent IT organizations from developing their own repeatable processes that can give them both agility and control. Because DevOps is emerging in terms of definition and practice, IT organizations should approach it as a set of guiding principles, not as process dogma. Select a project involving development and operations teams to test the fit of a DevOps-based approach in your enterprise. Often, this is aligned with one application environment. If adopted, consider expanding DevOps to incorporate technical architecture. At a minimum, examine activities along the existing developer-to-operations continuum, and experiment with the adoption of more-agile communications processes and patterns to improve production deployments.

Business Impact: DevOps is focused on improving business outcomes via the adoption of agile methodologies. While agility often equates to speed (and faster time to market), there is a somewhat paradoxical impact as well: smaller, more-frequent updates to production can also work to improve overall stability and control, thus reducing risk.

Benefit Rating: Transformational

Market Penetration: Less than 1% of target audience

Maturity: Emerging

IT Operations Analytics

Analysis By: Milind Govekar; David M. Coyle

Definition: IT operations analytics tools enable CIOs and senior IT operations managers to monitor their "business" operational data and metrics. The tools are similar to a business intelligence platform utilized by business unit managers to drive business performance. IT operations analytics tools provide users the capabilities to deliver efficiency, optimize IT investments, correlate trends, and understand and maximize IT opportunities in support of the business. These tools are capable of integrating data from various data sources (service desks, IT ECA, IT workload automation brokers, IT financial management, APM, performance management, BSM, service-level reporting, cloud monitoring tools, etc.), and processing it in real time to deliver operational improvements. These tools are not the same as data warehouse reporting tools; they will typically send data to data warehouses and other IT operations tools (e.g., service-level reporting) for further non-real-time reporting. IT operations analytics tools may have engines, such as complex-event processing (CEP), at their core to enable them to collect, process and analyze multidimensional business and IT data, and to identify the most meaningful events and assess their impact in real time.
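As a rough illustration of the real-time, cross-source correlation these tools perform, the sketch below merges significant events from several hypothetical sources into time windows and flags windows touched by more than one source. Production engines (e.g., CEP platforms) are far more sophisticated; this shows only the principle.

```python
# Hedged sketch of cross-source event correlation: events from several
# tools (source names are invented) are bucketed by time window, and a
# window reported by multiple sources suggests a cross-domain issue.
from collections import defaultdict

def correlate(events, window=60):
    """events: (timestamp, source, severity) triples.
    Returns window start times where >1 source reported a problem."""
    buckets = defaultdict(set)
    for ts, source, severity in events:
        if severity >= 3:                 # only significant events
            buckets[ts // window].add(source)
    return [b * window for b, srcs in sorted(buckets.items()) if len(srcs) > 1]

events = [
    (10, "apm", 4), (15, "service_desk", 3),   # same minute, two sources
    (130, "capacity", 4),                       # isolated event
]
print(correlate(events))  # → [0]
```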

There are many business intelligence and analytics tools on the market (see "Hype Cycle for Business Intelligence, 2010"), but these tools are not focused on the specific needs of the IT operations organization. IT operations analytics tools must have IT domain expertise and connectors to IT operations management tools and data sources in order to facilitate IT-specific analytics and decision making. IT operations management vendors often partner with established business intelligence vendors (for example, BMC Software partnering with SAP BusinessObjects) to offer their customers advanced analytics capabilities.

Position and Adoption Speed Justification: As IT operations analytics tools provide real-time analytics that can improve IT operations performance, IT management can use these tools to make adjustments and improvements to the IT operational services they deliver to their business customers. For example, they can delay the acquisition of hardware by showing the cost of unused capacity, or help in the consolidation and rationalization of applications based on utilization and cost data. Analytics tools are well-established, but have experienced slow adoption in the IT organization due to their expense and lack of IT domain knowledge, and because IT organizations often lack the maturity to consider themselves a business. Organizations at Level 3 or above in the I&O ITScore maturity model will be able to take advantage of these tools (see "ITScore for Infrastructure and Operations") for uses such as:

■ Rationalization of applications and services based on their utilization and their cost to and value for the business

■ Delaying the acquisition of hardware (e.g., server/computer and storage) by revealing the cost of unused capacity

■ Determining more-cost-effective platforms or architectures through what-if and scenario analytics

■ Optimizing the vendor portfolio by redefining contractual terms and conditions based on cost and utilization trends

User Advice: Users who are at ITScore Level 3 or above in IT operations maturity should investigate where and how these tools can be used to run their operations more efficiently, provide detailed trend analysis of issues and deliver highly visible, improved customer service. Most of the tools available today take a "siloed" approach to IT operations analytics; i.e., the data sources they support are limited to their domains (e.g., servers, databases, applications) or are specific to agents from a single vendor.

Additionally, their ability to integrate data from multiple vendor sources and to process large amounts of real-time data is also limited. However, the IT operations analytics tools that have emerged from specific IT operations areas have the potential to extend their capabilities more broadly. Most of these tools rely on manual intervention to identify the data sources, understand the business problem they are trying to solve and build expertise in the tools for interpreting events and providing automated actions or recommendations.

Investments in these tools are likely to be disruptive for customers, particularly as newer, innovative vendors get acquired. This means that the product must have significant value for the customer today to mitigate the risk of acquisition and subsequent disruptions to product advancements, or changes to product strategy. A critical requirement for choosing a tool is understanding the data sources it can integrate with, the amount of manual effort required to run analytics and the training needs of the IT staff.

Business Impact: IT operations analytics tools will provide CIOs and senior IT operations managers radical benefits in running their own businesses more efficiently, while enabling their business customers to maximize opportunities. These tools will provide a more accurate assessment of the correlations among business processes, applications and infrastructures in a complex and multisourced environment.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Embryonic

Sample Vendors: Appnomic; Apptio; Hagrid Solutions; SL; Splunk; Sumerian

Social IT Management

Analysis By: David M. Coyle; Jarod Greene; Cameron Haight

Definition: Social IT management is the ability to leverage existing and new social media technologies and best practices to foster the support of end users, capture collaboration among IT staff members, and collaborate with the business to promote the value of the IT organization.

Position and Adoption Speed Justification: The impact of social media as a mechanism for collaboration, communication, marketing and engaging people is well-known in personal and business life (see "Business Gets Social"). It is only natural that social media would find its way into the IT organization; however, the business value is only starting to be conceptualized.

We have identified three areas where social media can assist the IT organization with increasing end-user satisfaction, increasing efficiencies and fostering crisper collaboration among IT staff:

■ Social networking acts as a medium for peer-to-peer support. End users have for years relied on their immediate coworkers to better understand how to leverage internal systems, and for help in solving technology issues. Often, this end-user-to-end-user support was done by email, instant message or walking to a coworker's cubicle. Social media tools now allow end users to crowdsource assistance from other end users throughout the entire organization. This allows end users to become more productive in using business technologies. The IT support team has the opportunity to be part of this collaboration to offer improved IT service and support.

■ Social media tools can also facilitate the capture of collaboration among IT staff that would not typically be captured via traditional communication methods. For example, the discussion regarding the risk of an upcoming infrastructure change is often done via email and IM, but that information is not captured in the change management ticket. These ad hoc and informal collaborations and knowledge-sharing instances can now be turned into reusable and audited work assets (see "Collaborative Operations Management: A Next-Generation Management Capability"). Collaboration technology will also become increasingly important in the emerging DevOps arena as development and operations begin to work more closely to coordinate planning, build, test and release activities.

■ Social software suites and text-based posts can allow the IT organization to promote the value of IT services to the business. Typically, IT organizations unidirectionally inform the business of planned and unplanned outages, releases and new services via email or through an intranet portal. This type of communication is often disregarded or ignored. Social media enables dynamic communications whereby users can generate conversation within these notifications to understand the specific impact of the message, in a forum open to the wider community of business end users. End users can follow the IT organization announcements, services and configuration items that are important to them through social media tools.

User Advice: It is important to develop a social media strategy for the IT organization, because end users increasingly expect this type of collaboration, which is so prevalent in other aspects of their lives. If IT doesn't embrace and create a social media initiative, the risk is that end users and IT staff will create many, often-conflicting, social media tools and processes themselves. IT should not expect the use of social media tools to be high, and investments should be tempered, since the ROI will be difficult to quantify in the beginning. Therefore, social media initiatives should be accompanied by the development of project plans, usage guidelines and governance. Also, be prepared to change tools and strategies as new technologies and collaborative methods emerge, since social media usage within businesses is still immature.

Business Impact: Social media tools and processes will enable the IT organization to increase end-user productivity by leveraging knowledge and best practices across the entire organization, communicate more easily with the business, and capture ad hoc and informal interactions that can be leveraged in the future. Additionally, a social media strategy will give the business the perspective that the IT organization is progressive and forward-thinking.

Benefit Rating: Low

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: BMC Software; CA Technologies; Hornbill; ServiceNow; Zendesk

Recommended Reading: "Collaborative Operations Management: A Next-Generation Management Capability"

Cloud Management Platforms

Analysis By: Cameron Haight; Milind Govekar

Definition: Cloud management platforms are integrated products that provide for the management of public, private and hybrid cloud environments. The minimum requirements for inclusion in this category are products that incorporate self-service interfaces, provision system images, enable metering and billing, and provide for some degree of workload optimization through established policies. More-advanced offerings may also integrate with external enterprise management systems, include service catalogs, support the configuration of storage and network resources, allow for enhanced resource management via service governors and provide advanced monitoring for improved "guest" performance and availability. A key ability of these products is the insulation of administrators of cloud services from proprietary cloud provider APIs; thus, they help IT organizations prevent lock-in by any cloud service or technology provider. The distinction between cloud management platforms (CMPs) and private clouds is that the former primarily provide the upper levels of a private cloud architecture, i.e., the service management and access management tiers, and thus do not always provide an integrated cloud "stack." Another way of saying this is that cloud management platforms are an enabler of a cloud environment, be it public or private, but by themselves they may not contain all the necessary components for it (such as virtualization and/or pooling capabilities). This distinction holds true for public clouds as well.
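The API-insulation point can be sketched as an adapter layer, one common way such abstraction is built. The class and provider names below are invented for illustration and do not come from any real CMP or cloud SDK.

```python
# Hedged sketch of how a CMP insulates administrators from
# provider-specific APIs: a common interface with one adapter per
# provider. Provider names and calls are illustrative only.
from abc import ABC, abstractmethod

class CloudAdapter(ABC):
    @abstractmethod
    def provision(self, image: str) -> str: ...

class PublicCloudAdapter(CloudAdapter):
    def provision(self, image):
        return f"public:{image}"      # would call the provider's API here

class PrivateCloudAdapter(CloudAdapter):
    def provision(self, image):
        return f"private:{image}"     # would call the internal stack here

def deploy(adapter: CloudAdapter, image: str) -> str:
    # Callers depend only on the common interface, so swapping providers
    # does not change this code — that is the lock-in hedge.
    return adapter.provision(image)

print(deploy(PublicCloudAdapter(), "web-tier"))   # → public:web-tier
```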

Position and Adoption Speed Justification: Although the demand for cloud computing services is increasing, the market for managing the applications and services running within these environments is still modest, including for infrastructure as a service (IaaS) clouds, be they private or public. This is likely because IT organizations adopting cloud-style technologies are not running many mission-critical applications and services on these infrastructures, which might necessitate greater monitoring and control.

User Advice: Enterprises looking at investing in cloud management platforms should be aware that the technology is still maturing. Although some cloud providers are beginning to offer management tools to provide more insight and control into their infrastructures, these are limited in functionality and generally offer no support for managing other cloud environments. A small (but growing) number of cloud-specific management platform firms is emerging; however, the traditional market-leading IT operations management vendors — aka the Big Four (BMC Software, CA Technologies, HP and IBM Tivoli) — are also in the process of extending their cloud management capabilities. In addition, virtualization platform vendors (e.g., Citrix Systems, Microsoft and VMware), not to mention public cloud providers such as Amazon, are also broadening their management portfolios to enhance the support of cloud environments.

Business Impact: Enterprises will require cloud management platforms to maximize the value of cloud computing services, irrespective of external (public), internal (private) or hybrid environments — i.e., lowering the cost of service delivery, reducing the risks associated with these providers and potentially preventing lock-in.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Abiquo; BMC Software; CA Technologies; Cloud.com; Cloupia; Gale Technologies; HP; IBM Tivoli; Kaavo; Microsoft; Platform Computing; Red Hat; RightScale; ServiceMesh

Recommended Reading: "The Architecture of a Private Cloud Service"

Behavior Learning Engines

Analysis By: Will Cappelli; Jonah Kowall

Definition: Behavior learning engines (BLEs) are platforms intended to enable the discovery, visualization and analysis of recurring, complex, multiperiod patterns in large operational performance datasets. If such engines are to realize their intent, they must support four layers of cumulative functionality:

■ Variable designation — this supports the selection of the system properties that are to be tracked by the platform and how those properties are to be measured

■ Variable value normalization — this is the automated ability to determine (usually by an algorithm that regresses measurements) what constitutes the normal or usual values assumed by the system property measuring variables

■ Observational dependency modeling — this is a set of tools for linking the individual property-measuring variables to one another, where the links represent some kind of dependency among the values taken by the linked variables; commercially available BLEs differ significantly with regard to the degree to which the dependencies must be pre-established by the vendor or user and the degree to which the dependencies are themselves discovered by BLE algorithms working on the performance datasets being considered

■ Assessment — the means by which, once the normalized values and dependency map are determined, the resulting construct can be used to answer questions about the values assumed by some variables, based on the values assumed by others
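A toy sketch of the normalization and assessment layers, under the simplifying assumption that "normal" is a mean-plus-deviation band; real BLEs use much richer statistical models, and the metric names here are invented.

```python
# Hedged sketch of "variable value normalization" and "assessment":
# learn a normal band from history, then flag values outside it.
import statistics

def learn_baseline(history):
    """Normalization layer: derive the normal band from observed data."""
    return statistics.mean(history), statistics.stdev(history)

def is_anomalous(value, baseline, k=3.0):
    """Assessment layer: is this value outside the learned band?"""
    mean, stdev = baseline
    return abs(value - mean) > k * stdev

cpu_history = [41, 43, 40, 42, 44, 41, 43, 42]   # "normal" observations
baseline = learn_baseline(cpu_history)
print(is_anomalous(42, baseline))   # → False: within the learned band
print(is_anomalous(95, baseline))   # → True: well outside it
```

Note that this band is learned, not configured — the contrast with static thresholds that the BLE sections below draw.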

Position and Adoption Speed Justification: When using IT operations availability and performance management tools, many IT organizations struggle to deliver a proactive monitoring capability, due to the vast amounts of disparate data that need to be collected, analyzed and correlated. BLEs gather event and performance data from a wide range of sources and identify behavioral irregularities, which allows IT operations to understand the state of the IT infrastructure in a more holistic way.

BLEs detect subtle deviations from normal activity and how these deviations may be correlated across multiple variables. This enables more effective and rapid root-cause analysis of system problems and encourages a more proactive approach to event and performance management. The adoption of BLEs is being driven by the emergence of virtualization, an increased focus on cross-silo application performance monitoring and the need to gain a holistic understanding of the virtual IT infrastructure state — that is, the ability to understand the health of a dynamic virtual server environment based on behavior patterns, rather than static thresholds.

User Advice: IT organizations that continue to struggle to achieve a proactive monitoring state using traditional availability and performance tools should investigate augmenting and enhancing their investments with emerging BLEs. BLEs supply a consolidated, holistic view of the IT infrastructure that provides early warning of potential outages, based on trends and patterns. A focus on specific, defined objectives allows BLEs to quickly establish behavior patterns and to associate "normal" with "good" behaviors. It is important to understand the challenges of using these tools: they require time to learn (through time spent collecting or consuming data) to establish a normal state, and it may also take time and effort to associate normal behavior with good behavior.

Behavior patterns must be established on good data; poor data will produce spurious patterns. These tools work well in IT infrastructures with a degree of "normalcy," because erratic or constant IT infrastructure change will cause the behavior learning tool to constantly alert on abnormal behaviors and force it to relearn what is normal. However, with the appropriate expectations, approach and preparation, enterprises will gain the value promised by a BLE. The more accurate the data the tools collect, the better the behavior analysis will be.

Business Impact: BLEs can improve business service performance and availability by establishing normal behavior patterns and allowing subtle deviations from IT normalcy to be detected before individual availability and performance tool thresholds are exceeded. This capability is focused on IT services and applications, and ensures that potential issues can be investigated and remediated before they affect the business, which is a key requirement when moving toward a real-time infrastructure (RTI) state. IT organizations with increasingly dynamic virtual IT environments will benefit from BLEs, especially when there are many performance and event sources to track and understand, because they provide IT operations with a new way to comprehend the overall state of the IT infrastructure.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: BMC Software (ProactiveNet); Netuitive; Prelert; VMware (Integrien)

Recommended Reading: "An Introduction to IT Operations Behavioral Learning Tools"

"Behavior Learning Software Enables Proactive Management at One of World's Largest Telecom Companies"

"Tools Alone Won't Enable Proactive Management of IT Operations"

Application Release Automation

Analysis By: Ronni J. Colville; Donna Scott

Definition: Application release automation (ARA) tools focus on the deployment of custom application software releases and their associated configurations, often for Java Platform, Enterprise Edition (Java EE) and .NET middleware applications. These tools also offer versioning to enable best practices in moving related artifacts, applications, configurations and even data together across the application life cycle. ARA tools support continuous deployment of large numbers of small releases. Often, these tools include workflow engines to assist in automating and tracking human activities across various tasks associated with application deployment, for auditability. Some tools focus on the build and development phases and are now adding the capability to deploy to production, though many still provide only partial solutions.

Position and Adoption Speed Justification: IT organizations are often very fragmented in their approach to application releases. Sometimes, the process is led by operations, although it can also be managed from the development side of the organization. This means that different buyers will look at different tools, rather than comprehensive solutions, which results in tool fragmentation. The intent of these tools is fivefold:

■ Eliminate the need to build and maintain custom scripts for application updates by adding more reliability to the deployment process with less custom scripting, and by documenting variations across environments.

■ Reduce configuration errors and downtime.

■ Coordinate releases between people and process steps that are maintained manually in spreadsheets, email or both.

■ Move the skill base from expensive, specialized script programmers to less costly resources.

■ Speed the time to market associated with agile development by reducing the time it takes to deploy and configure across all environments.
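The fivefold intent above can be made concrete with a sketch of the core idea: a release modeled as data rather than as custom scripts, expanded into an auditable task list per environment. The step names, environments and model schema below are all invented for illustration.

```python
# Hedged sketch of what an ARA tool replaces hand-rolled scripts with:
# a release modeled as data, with per-environment variations documented
# in the model instead of buried in scripts. All names are illustrative.

RELEASE_MODEL = {
    "artifact": "orders-service-2.4.1.war",
    "steps": ["stop_app", "deploy_artifact", "apply_config", "start_app"],
    "env_overrides": {
        "test": {"db_url": "jdbc:test"},
        "prod": {"db_url": "jdbc:prod"},
    },
}

def plan_release(model, env):
    """Expand the model into the concrete, auditable task list for one
    environment. The same model drives every environment, which is what
    makes deployments predictable."""
    config = model["env_overrides"][env]
    return [(step, env, config) for step in model["steps"]]

for task in plan_release(RELEASE_MODEL, "prod"):
    print(task)
```

Because the environment differences live in `env_overrides`, they are documented and reviewable — the point of the first and second intents above.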

Adoption and penetration of these tools are still emerging (1% to 5%), because they are being used for only a small percentage of all applications. For large enterprises with mission-critical websites, adoption is becoming more significant (5% to 20%), because of the criticality of improving agility and reducing disruption. Even in this scenario, we still see the largest "competitor" of these tools being in-house scripts and manual processes.

User Advice: Clients must remember that processes for ARA are not standardized. Assess application life cycle management — specifically, deployment processes — and seek a tool or tools that can help automate the implementation of these processes. Organizational change issues remain significant and can't be addressed solely by a tool purchase.

Understand your specific requirements for applications and platforms, which will narrow the scope of the tools and determine whether one tool or multiple tools will be required. Some vendors have products that address application provisioning (of binaries: middleware, databases, etc.) as well as ARA. When evaluating tools, consider your workflows and the underlying software you are deploying; this will increase the tools' time to value. Determine whether life cycle tools can address the needs of multiple teams (such as development and testing) while meeting broad enterprise requirements, as this will reduce costs. Organizations that want to extend the application life cycle beyond development to production environments using a consistent application model should evaluate development tools or point solutions that provide out-of-the-box integration with development tools.

Business Impact: ARA tools can eliminate the need to build and maintain time-consuming custom scripts for application updates. They can add more reliability to the deployment process with less custom scripting and by documenting variations across environments, reducing configuration errors and downtime. In addition, more consistency brings an increase in standardization, which enables all the benefits standardization delivers in terms of economies of scale, cross-training, documentation, backups, etc. ARA tools can supplement or enhance the coordination of releases among people, as well as the process steps that are maintained manually in spreadsheets, email or both. By replacing scripts and manual interactions with automation, IT organizations can move the skills base from expensive, specialized script programmers to less costly resources. The most significant business benefit of ARA tools is that they improve the speed associated with agile development: by creating consistent application models that improve the likelihood of successful deployments to production, they reduce the time it takes to deploy and configure applications across all environments.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: BMC Software; HP; MidVision; Nolio; Urbancode; XebiaLabs

Recommended Reading: "Cool Vendors in Release Management, 2011"

"Managing Between Applications and Operations: The Vendor Landscape"

"Best Practices in Change, Configuration and Release Management"

"MarketScope for Application Life Cycle Management"

"Are You Ready to Improve Release Velocity?"

SaaS Tools for IT Operations

Analysis By: David M. Coyle; Milind Govekar; Patricia Adams

Definition: Software as a service (SaaS) is software owned, delivered and managed remotely by one or more providers, and purchased on a pay-for-use basis or as a subscription based on usage metrics. All IT operations management tools that can be 100% managed through a Web client have the potential to be licensed as SaaS. SaaS gives the IT organization new pricing, delivery and hosting options when acquiring IT operations management tools, in addition to the traditional perpetual model. Not all IT operations SaaS offerings use multitenancy and elasticity, which are the hallmarks of a cloud computing delivery model.

Position and Adoption Speed Justification: SaaS has been a viable pricing model for many software products, such as CRM and HR, for several years; however, it is newer to the IT operations management tool market. Fewer than 5% of IT operations management tools are bought using the SaaS licensing model. Few traditional software vendors have licensed their tools as SaaS, but this model is increasingly being added to product road maps. However, growth in the acceptance of SaaS as a licensing model in the software industry as a whole, plus the use of the Web-only client for IT operations management tools — especially IT service desk, resource capacity planning and end-user monitoring tools — will ensure that this model becomes pervasive in IT operations management.

Mature resource capacity planning tools have been offered as a service since the mainframe days, and can be good candidates for the SaaS delivery model in today's computing environment. For IT organizations experiencing budget constraints, SaaS solutions that are paid for monthly or quarterly out of the operating budget can be viable alternatives to a large capital software purchase.
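The opex-versus-capex trade-off can be made concrete with a back-of-the-envelope comparison. All prices below are invented, and real contracts involve many factors (integration, customization, exit costs) that this sketch ignores.

```python
# Hedged sketch of the budget trade-off: cumulative cost of a SaaS
# subscription (operating expense) versus a perpetual license plus
# annual maintenance (capital expense). All figures are hypothetical.

def cumulative_cost_saas(monthly_fee, months):
    return monthly_fee * months

def cumulative_cost_perpetual(license_cost, annual_maint, months):
    years = -(-months // 12)      # maintenance billed per started year
    return license_cost + annual_maint * years

for years in (1, 3, 5):
    m = years * 12
    saas = cumulative_cost_saas(3_000, m)
    perp = cumulative_cost_perpetual(100_000, 20_000, m)
    print(f"year {years}: SaaS {saas:,} vs. perpetual {perp:,}")
```

With these made-up numbers the subscription stays cheaper for years, which is why the crossover horizon belongs in any total-cost-of-ownership comparison of the two models.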

In particular, small or midsize businesses (SMBs) have begun to favor SaaS-based capacity management and planning. SaaS IT service desk vendors currently have 8% to 9% of the overall market; however, by 2015, 50% of all new IT service desk tool purchases will use the SaaS model (see "SaaS Continues to Grow in the IT Service Desk Market"). Because many of the IT service desk vendors also offer tightly integrated IT asset management (ITAM) tools, we expect to begin to see SaaS extend to ITAM implementations where software license models aren't overly complex.

Similarly, end-user monitoring tools have been consumed via a SaaS delivery model for more than five years, and we are beginning to see increased interest in this delivery model. In addition, chargeback tools are being sold under a SaaS model. However, security, high availability and stability of the vendor infrastructure, heterogeneous support, integration and long-term cost are some of enterprises' primary concerns. In some cases, the tool's architecture may not lend itself to the functionality being delivered in a SaaS model. Customers that need to customize software to meet unique business requirements should be aware that customizing SaaS-delivered software carries the risk of conflicts when new versions become available.

User Advice: Clients should evaluate the inherent advantages and disadvantages of SaaS before selecting this model. Trust in the vendor, customer references, security compliance, contracts and SLAs should be foundational when buying SaaS. Clients should compare the SaaS vendors' products with the more-traditional, perpetually licensed products in terms of features, functionality, adherence to best practices, total cost of ownership through the life cycle of the tool, high availability, manageability and security. They should ensure that they don't choose one product over the other based solely on licensing model or cost. If choosing to operate the SaaS tool within your data center, it is important to understand the hardware required, the architecture models and the differences in pricing.

IT operations tools do not exist as an island, and some level of information, data and process integration is required, which should be an important SaaS evaluation criterion. Finally, organizations that have unique requirements may find that the one-size-fits-all approach of SaaS is not a good fit when business requirements call for tool customization to increase the tool's value.

Business Impact: SaaS offers new options for IT organizations to purchase, implement and maintain the IT operations management tools in their environment. More choices will enable IT organizations to become agile, use IT budgets more appropriately and gain more flexibility in vendor negotiations. Implementation time frames should also be faster, potentially leading to faster payback.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Apptio; BMC; CA; Cherwell Software; Compuware; InteQ; Keynote Systems; Service-now.com

Recommended Reading: "SaaS Continues to Grow in the IT Service Desk Market"

Service Billing

Analysis By: Milind Govekar

Definition: Service-billing tools differ from IT chargeback tools in that they use resource usage data (on computing and people) to calculate the costs for chargeback and aggregate it for a service. Alternatively, they may offer service-pricing options (such as per employee, per transaction) independent of resource usage. When pricing is based on usage, these tools can gather resource-based data across various infrastructure components, including servers, networks, storage, databases and applications. Service-billing tools perform proportional allocation — based on the amount of resources (including virtualized and cloud-based) allocated and used by the service — for accounting and chargeback purposes.
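The proportional allocation described above can be sketched in a few lines of code; the service names, usage figures and pool cost below are hypothetical, and real service-billing tools collect usage metering data automatically rather than taking it as a literal dictionary:

```python
# Illustrative sketch of proportional cost allocation for service billing.
# All service names, usage figures and the pool cost are hypothetical.

def allocate_costs(pool_cost, usage_by_service):
    """Split a shared resource pool's cost across services in
    proportion to each service's measured resource usage."""
    total_usage = sum(usage_by_service.values())
    return {
        service: round(pool_cost * usage / total_usage, 2)
        for service, usage in usage_by_service.items()
    }

# CPU-hours consumed by each service against a shared pool costing $10,000.
usage = {"order-entry": 600, "payroll": 250, "reporting": 150}
print(allocate_costs(10_000, usage))
# order-entry pays 6000.0, payroll 2500.0, reporting 1500.0
```

The same shape of calculation extends to multiple resource dimensions (storage, network, people time) by summing a weighted allocation per dimension.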

Service-billing costs include infrastructure and other resource use (such as people) costs, based on service definitions. As a result, they usually integrate with IT financial management tools and IT chargeback tools. These tools will be developed to work with service governors to set a billing policy that uses cost as a parameter, and to ensure that the resource allocation is managed based on cost and service levels.

Due to their business imperative, these tools will first emerge and be deployed in service provider environments and by IT organizations that are at a high level of maturity. These tools are also being deployed for cloud computing environments, where usage-based service billing is a key requirement.

Position and Adoption Speed Justification: Shared sets of resources for on-premises or off-premises implementations require the ability to track usage for service-billing purposes. As the IT infrastructure moves to a shared-service model, resource-oriented chargeback models will evolve into end-to-end collection and reporting approaches. Furthermore, the service-billing tools will work with service governors to proactively manage the costs of services and their associated service levels.

User Advice: This is a new and emerging area, and its visibility has risen sharply. Most commercial tools are being developed by cloud computing service providers for their own environments, or by cloud infrastructure fabric vendors in private or public cloud computing environments, in addition to commercial off-the-shelf tools. Service-billing tools will take a life cycle approach to services, will perform service cost optimization based on underlying technology resource usage optimization during the entire life cycle and will provide granular cost allocation.

The emergence of these tools, to accommodate the vision of dynamic automation of real-time infrastructure, will involve their integration with virtualization automation tools that dynamically move virtual environments based on resource or performance criteria, to assess the cost effects of such movement. For example, enterprises that want to incorporate virtual server movement automation in their environments should assess these tools as they emerge, to assist with controlling costs in their data centers.

The available service chargeback tools aggregate costs mainly for static environments, where there is no automation or dynamic control of resources. These tools will emerge from startups, as well as traditional chargeback vendors, asset management vendors, virtualization management vendors and software stack vendors.

Business Impact: These tools are critical to running IT as a business, by determining the financial effect of sharing IT and other resources in the context of services. They also feed billing data back to IT financial management tools and chargeback tools to help businesses understand the costs of IT and to budget appropriately.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: Apptio; Aria Systems; BMC Software; Comsci; Digital Fuel; IBM Tivoli; MTS; Unisys; VMware

Release Governance Tools

Analysis By: Ronni J. Colville; Kris Brittain

Definition: Release governance tools enable coordinated, cross-organizational capability that manages the introduction of changes into the production environment, including those affecting applications, infrastructures, operations and facilities. The two main focuses are release planning and release scheduling. Planning orchestrates the control of changes into a release by governing the operational requirements definition (designing with operations in mind, versus just building to business functional specifications alone early on) and integrated build, test, install and deploy activities, including coordination with other processes required prior to rollout (such as system management and documentation). Scheduling activities are focused on ensuring that prerequisites' and corequisites' requirements are met and that rollback planning is coordinated for the execution of changes and release bundles, to ensure that no conflicts occur prior to production deployments.

Release governance tools have tended to develop as an extension to existing IT operations tools. For example, change management tools have similar workflows that IT service desk vendors have extended for release workflows. Sometimes, run book automation tools, which can supplement or augment automation processes' and activities' interactions, can have release workflow templates. In addition, application life cycle management release tools for tracking release requests are sometimes being extended to include broader release workflows beyond an application focus.
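The conflict check at the heart of release scheduling can be sketched as a simple interval-overlap test; the release names, targets and change windows below are hypothetical, and commercial tools add dependency and resource checks on top of this basic idea:

```python
# Minimal sketch of release-window conflict checking.
# Release names, targets and change-window times are hypothetical.
from datetime import datetime

windows = [
    ("ERP upgrade",      "prod-db",  datetime(2011, 8, 6, 22), datetime(2011, 8, 7, 2)),
    ("Network firmware", "core-net", datetime(2011, 8, 6, 23), datetime(2011, 8, 7, 1)),
    ("CRM patch",        "prod-app", datetime(2011, 8, 7, 3),  datetime(2011, 8, 7, 5)),
]

def conflicts(windows):
    """Return pairs of scheduled releases whose change windows overlap."""
    clashes = []
    for i, (name_a, _, start_a, end_a) in enumerate(windows):
        for name_b, _, start_b, end_b in windows[i + 1:]:
            if start_a < end_b and start_b < end_a:  # intervals overlap
                clashes.append((name_a, name_b))
    return clashes

print(conflicts(windows))  # [('ERP upgrade', 'Network firmware')]
```

In practice, a scheduler would also compare the targets of each release, since two windows that overlap in time but touch disjoint systems may still be safe to run together.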

Position and Adoption Speed Justification: There continues to be an increase in focus on developing a comprehensive release management process across development and production control, as well as on automating various aspects of release management (e.g., planning for governance and release provisioning and deployment). This interest in formalizing the process and supplementing manual efforts has four main drivers: (1) ITIL process improvement initiatives are focused on several processes (such as problem, incident, change and configuration), where release is often one of the processes being addressed — but, often, later in the timeline of the initiative; (2) the increase in the number of composite applications being deployed (such as Java Platform, Enterprise Edition [Java EE] and .NET) into production; (3) continuous deployment techniques (e.g., agile and DevOps) for the introduction and maintenance of applications in the production environment; and (4) the rise in the frequency of changes and the need to bundle, align and control them more efficiently. Release process adoption in ITIL initiatives tends to come at a later phase of the project, and most often will force a reshaping of early change and configuration management process work.

Release governance tools become critical in ensuring that the appropriate planning and scheduling have been done to reduce disruption to the production environment, along with appropriate communication and reporting mechanisms. These tools can also reduce costs, but only when viewed across the entire organization, from project management to application development to release management to operational support. Release governance tools are also becoming more commonly integrated with project and portfolio management (PPM), application life cycle management, change management, incident/problem management, and IT operations management tools. Configuration management works hand in hand with release management for distribution activities of software packaging, deployment and rollback, if required.

The main benefits of release management include:

■ More consistent service quality as a result of better planning, coordination and testing of releases, as well as automated deployment through configuration management tools

■ The ability to support higher throughput, frequency and success of releases

■ Reduced release risk through improved planning

■ Reduced cost of release deployment due to fewer errors

Despite these benefits, fewer than 25% of large enterprises will formalize release management through 2014, up from less than 5% today. The reason that so few organizations will achieve success with release governance tools is twofold. First, success with formal release management requires that implementations of change and configuration management be in place to integrate with; thus, release management is normally implemented in the latter phases of an ITIL or IT process improvement implementation. During the past five to 10 years, most of the focus on process improvement has been on change management, and, more recently, on configuration management surrounding configuration management databases (CMDBs). Release management has only recently become a focus in ITIL programs as progress has been made on change and configuration, and because there has been an increase in the number of changes.

Second, release governance tools require a coordinated release management process and organizational role integration across business processes, applications, project management and operations. As a result, it is one of the more-difficult disciplines to implement, requiring significant top management commitment and, often, organizational realignment and/or the establishment of competency centers. Contributing to the inhibitors is that organizations will need to build labs to perform integration testing to ensure the validity and integrity of the release in the production environment. Today, testing is done in the application life cycle management (ALM) process, but these test environments rarely mirror the production environment. Therefore, it is critical to plan for and establish an integration test lab prior to production rollouts. In some cases, organizations can effectively use the existing independent test organization, which is usually affiliated with the development team. Such independent quality assurance (QA) functions are a hallmark of Maturity Level 3 development organizations, specifically in their use of methodologies, such as Capability Maturity Model Integration (CMMI). Funding process projects in this way is often easier to justify than the inclusion of new hardware, software and resources to support a test lab.

User Advice: IT organizations need to ensure that solid objectives based on the needs of the business are established for release planning and release distribution activities, and that those are mapped to stakeholders' specifications. Because releases occur throughout and across the IT application and infrastructure organization, release management will require the participation of many IT groups, and may be considered part of development, operational change management, production acceptance/control, and the tail end of the IT delivery life cycle. With the addition of SOA-based applications, the granularity and frequency of changes will increase the pace at which releases need to occur. Successful release management will require a competency center approach to enable cross-departmental skills for release activities.

Release management takes on greater significance, much as production control departments did in the mainframe era. Planned changes to applications, infrastructure and operations (such as system and application management, documentation and job scheduling) processes are integration-tested with the same rigor that occurs on the development side. Working with architecture groups, more-rigorous policies are put in place for maintenance planning (such as patches and upgrades) and the retirement of software releases and hardware platforms — in adherence to standard policies. The release management group also will be responsible for preproduction testing and postproduction validation and adjustments to policies related to dynamic capacity management and priorities for real-time infrastructure.


IT organizations should:

■ Establish a project team composed of application development and operations resources to get the foundation of a release management policy in place.

■ Be prepared to invest in infrastructure to establish integration testing for production control (as is done for preproduction and development).

■ If change and configuration management processes are already implemented, look to formalizing a release process as one of the next investments, and one that will greatly improve overall service quality (in an additive way).

■ Implement release governance tools to provide a mechanism for tracking the life of a release (singular or a bundle) for planning and scheduling.

Business Impact: Because new releases are the first opportunity for IT customers to experience IT's capabilities, the success or failure of a release management process will greatly affect the business-IT relationship. Release governance tools will provide automation that facilitates the many stakeholders required to ensure successful deployments into the production environment. It is important, therefore, that releases are managed effectively from inception to closure, and that all IT groups work in concert to deliver quality releases as consistently as possible using release governance tools.

Benefit Rating: Moderate

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: BMC Software; CA Technologies; HP; IBM Tivoli; Service-Now

IT Workload Automation Broker Tools

Analysis By: Milind Govekar

Definition: IT workload automation broker (ITWAB) technology is designed to overcome the static nature of scheduling batch jobs. It manages mixed batch and other non-real-time workloads based on business policies in which resources are assigned and deassigned in an automated fashion to meet service-level objectives, for example, performance, throughput and completion time requirements. These tools automate processing requirements based on events, workloads, resources and schedules. They manage dependencies (potentially, across multiple on-premises and off-premises data centers) across applications and infrastructure platforms. ITWAB technology optimizes resources (e.g., working with physical, virtual and cloud-based resource pools) associated with non-real-time or batch workloads, and is built on architectural patterns that facilitate easy, standards-based integration — for example, using SOA principles across a wide range of platforms and applications.

Position and Adoption Speed Justification: Some characteristics of ITWAB were defined in "IT Workload Automation Broker: Job Scheduler 2.0." A federated policy engine that takes business policies and converts them into a technology SLA is over two years away from being deployed in production environments. ITWAB can emerge in vertical industry segments — for example, insurance — where a set of standardized, risk model calculation processes is driven by a common definition of business policies. Alternatively, ITWAB is emerging in situations where decisions need to be made on the use and deployment of computing resources; for example, to finish processing workloads associated with business processes by a certain deadline, ITWAB may make decisions to use cloud-based computing resources as needed, in addition to on-premises resources. An intermediate stage of tool development will be the manual conversion of business policies into technology SLAs.
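The deadline-driven burst decision described above can be illustrated with a short sketch; the job counts, throughput rates and the notion of a uniform "cloud worker" are hypothetical simplifications, not the behavior of any specific ITWAB product:

```python
# Hedged sketch of an ITWAB-style decision: if the projected completion
# of the remaining batch workload would miss its SLA deadline on
# on-premises capacity alone, burst to cloud capacity.
# All figures and the uniform-worker model are hypothetical.

def plan_capacity(jobs_remaining, onprem_jobs_per_hour,
                  cloud_jobs_per_hour, hours_to_deadline):
    """Return the number of cloud workers to add, or 0 if on-premises
    capacity alone can finish the batch before the deadline."""
    onprem_throughput = onprem_jobs_per_hour * hours_to_deadline
    shortfall = jobs_remaining - onprem_throughput
    if shortfall <= 0:
        return 0
    # Each cloud worker clears cloud_jobs_per_hour jobs per hour.
    per_worker = cloud_jobs_per_hour * hours_to_deadline
    return -(-shortfall // per_worker)  # ceiling division

# 9,000 jobs left, 1,000 jobs/hour on-premises, 4 hours to deadline:
# on-premises clears 4,000 jobs, so 5,000 must burst to the cloud.
print(plan_capacity(9_000, 1_000, 500, 4))  # → 3 cloud workers
```

A real broker would reevaluate this decision continuously as jobs complete, which is the essence of managing to a policy rather than to a static schedule.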

Visibility, discovery and optimization of resource pools across the entire physical, virtual and cloud computing environment isn't possible today. Intermediate solutions based on targeted environments, such as server resource pools through the use of virtualization management tools, will emerge first. Some tools have integration with configuration management databases (CMDBs) to maintain batch service data for better change and configuration management of the batch service to support reporting for compliance requirements. Integration with run book automation (RBA) tools, data center automation tools and cloud computing management tools to provide end-to-end automation will also continue to evolve into enterprise requirements during the next two years, in order to facilitate growing or shrinking the pool of resources dynamically, i.e., in an RTI environment, to meet batch-based SLAs. Furthermore, critical-path analysis capabilities to identify jobs that may breach SLA requirements are being adopted by many of these tools.

User Advice: Users should use these tools instead of traditional job scheduling tools where there is a need to manage their batch or non-real-time environment using policies. Users should also keep in mind that not all ITWAB capabilities have been delivered yet (as highlighted above). Tools that have developed automation capabilities, such as run book automation (or IT operations process automation), and/or are able to integrate with other IT operations tools should be used to implement end-to-end automation.

Business Impact: ITWAB tools will have a big impact on the dynamic management of batch SLAs, increasing batch throughput and decreasing planned downtime, and will play a role in end-to-end automation. ITWAB will be involved in the initial stages of implementing the service governor concept of real-time infrastructure.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: BMC Software; CA Technologies; Cisco (Tidal); IBM Tivoli; Orsyp; Redwood Software; Stonebranch; UC4 Software

Recommended Reading: "IT Workload Automation Broker: Job Scheduler 2.0"

"Magic Quadrant for Job Scheduling"


Workspace Virtualization

Analysis By: Terrence Cosgrove; Mark A. Margevicius; Federica Troni

Definition: Workspace virtualization tools separate the user's working environment (i.e., data, settings, personalization and applications) from the OS or any applications on the PC or Mac on which it executes. This allows users to run a corporate-managed workspace (i.e., one that is patched, provisioned, secured, etc.) on a corporate or user-owned PC or Mac. Because these tools allow the workspace to execute on the local client (as opposed to hosted virtual desktops [HVDs], which execute in the data center), users can have a secure, data-leakproof, device-independent workspace, while leveraging local processing power and working offline.

Position and Adoption Speed Justification: Adoption of workspace virtualization tools was originally driven by organizations that wanted to separate workspaces to prevent data leakage (e.g., financial services or government). Growing interest in this technology is driven by:

■ Employee-owned PC programs (see "Checklist for an Employee-Owned Notebook or PC Program")

■ Organizations looking for new ways to manage mobile or distributed users (see "New Approaches to Client Management")

■ Organizations that want to give nonemployees temporary access to a corporate system (e.g., contractors)

The technology has matured enough to support thousands of users; however, the large vendors do not have mature products, and the mature products come from small vendors. We are starting to see this change as major client computing industry players, such as Intel, HP and Lenovo, start to bring workspace virtualization tools to market. This will drive adoption and awareness of the technology, and secure the viability of this market.

User Advice: Users should begin to evaluate workspace virtualization tools in proofs of concept, where user workspace isolation is needed and other approaches, like server-based computing or HVDs, cannot be applied (for example, when offline requirements or network issues prohibit the use of HVDs). Workspace virtualization tools hold particular promise for mobile users who are connected intermittently to enterprise networks.

Current workspace virtualization vendors are at risk of being acquired through 2013. This likely will be disruptive for customers. Therefore, the vendor's product must have significant value for the customer today to mitigate the risk of acquisition and subsequent disruptions to product advancements or changes to product strategy.

Business Impact: Workspace virtualization tools offer some of the management benefits of HVDs without requiring the necessary HVD infrastructure build-out, while allowing the user to work offline, leverage local processing power and separate workspaces to prevent data leakage. Therefore, there is a wide range of users: Mac users, employee-owned PC users, remote users connecting over slow links, contractors, knowledge workers and mobile users. This technology offers potentially high benefits due to its ability to support user-owned IT initiatives and the separation of user and corporate workspaces. Organizations that do not have a particular need for virtualization capabilities and already manage their users effectively with current tools likely will see little value in workspace virtualization tools.

Benefit Rating: High

Market Penetration: Less than 1% of target audience

Maturity: Emerging

Sample Vendors: Citrix Systems; MokaFive; Scense; Virtual Computer; VMware

At the Peak

Capacity Planning and Management Tools

Analysis By: Milind Govekar; Cameron Haight

Definition: Capacity planning tools help plan for optimal performance of business processes based on planned variations of demand. These tools are designed to assist IT organizations in achieving performance goals and planning budgets, while potentially preventing overprovisioning of infrastructure or the purchasing of excessive off-premises capacity. Capacity planning tools provide value by enabling enterprises to build performance scenarios (models) that relate to business demand, often by asking what-if questions and assessing the impact of the scenarios on various infrastructure components. Capacity also has to be managed in real time in a production environment. Thus, the technology has evolved from a purely planning perspective to provide real-time information dissemination (and control) of workloads to meet organizational performance objectives. This capacity management capability is becoming pervasive in virtual (and, to a lesser extent, cloud) environments via embedded technologies, such as VMware's Distributed Resource Scheduler (DRS). Increasingly, these technologies are being used to plan and manage capacity at the IT service and business service levels.
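A what-if scenario of the kind these tools build can be sketched as follows; note that commercial capacity planning tools use queueing and simulation models rather than the linear scaling assumed here, and all component names and utilization figures are hypothetical:

```python
# Illustrative what-if sketch for capacity planning: project component
# utilization under a demand-growth scenario and flag components that
# would exceed a safe threshold. Names, figures and the linear scaling
# model are hypothetical simplifications.

def what_if(current_utilization, demand_growth, threshold=0.75):
    """Scale each component's utilization by the demand-growth factor
    and return those projected to breach the threshold."""
    projected = {c: u * (1 + demand_growth)
                 for c, u in current_utilization.items()}
    return {c: round(u, 2) for c, u in projected.items() if u > threshold}

# What happens in a peak season that is 30% busier than today?
current = {"db-server": 0.62, "web-tier": 0.48, "san": 0.71}
print(what_if(current, demand_growth=0.30))
# db-server and san breach the 75% threshold and become
# candidates for added capacity; web-tier does not.
```

The value of the real tools lies in tying the growth factor to a business driver (orders per day, active users) rather than applying an arbitrary percentage, and in modeling contention effects that a linear projection misses.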

Position and Adoption Speed Justification: While physical infrastructure and primarily component-focused capacity planning tools have been available for a long time, products that can support an increasingly dynamic environment are not yet fully mature. Throughout 2010 and early 2011, we have seen a rise in interest in and implementation of these tools. These offerings are increasingly being used for standard data center consolidation activities, as well as related planning and management of virtual (and cloud) infrastructures. These tools have historically come with high price tags and long learning curves due to their complexity, leading to additional costs for adequately trained personnel. However, some of these products have evolved to the point where they can be used competently by individuals not associated with performance engineering teams, with some of the capacity management tools requiring little human intervention at all.

User Advice: Capacity planning and management is becoming especially critical due to the increase in shared infrastructures, where the potential for resource contention may be greater. Users should invest in capacity planning and management tools to lower costs and manage risk. Although some tools are easier to use and implement than others, many can still require a high level of skill, so ensure that adequate training is available to maximize the utility of these products. Finally, determine your infrastructure and application management needs — some organizations may only require support of virtual (and cloud) environments, while others will seek to include support for what may still be a substantial legacy installed base.

Business Impact: Capacity planning and management tools should be used by organizations whose business processes rely heavily on IT, to avoid performance issues, avoid excessive infrastructure (including cloud service) costs and, hence, plan and forecast budgets accurately. These tools are usually successfully implemented by organizations that have high IT service management maturity and a dedicated performance management group.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: BMC Software; CA Technologies; CiRBA; Opnet Technologies; Quest Software; Solution Labs; Sumerian; TeamQuest; Veeam; VKernel; VMTurbo; VMware

Recommended Reading: "Toolkit: Planning Your Performance Management and Capacity Planning Implementation"

"Toolkit: Business and IT Operations Data for the Performance Management and Capacity Planning Process"

"Toolkit: Sample Job Description for a Capacity Planner"

"Toolkit Sample Template: Server Performance Monitoring and Capacity Planning Tool RFI"

"Toolkit: Guidance for Preparing an RFI for End-to-End Monitoring Tools"

IT Financial Management Tools

Analysis By: Milind Govekar; Biswajeet Mahapatra; Jay E. Pultz

Definition: Most IT organizations have to transform themselves to become a trusted service provider to the business — this means transforming to provide services as opposed to technologies, understanding cost drivers in detail, providing transparency of IT costs and value delivered, and generally "running IT like a business." IT financial management tools provide CIOs and senior IT leaders with IT cost data and analytics that best support strategic decision making. For example, an organization with a strategy of operational excellence might use a view detailing the unit costs that constitute its IT services. Conversely, a business-process-oriented organization might want to see IT costs in relation to key business processes. Thus, these tools provide costing (transparency at a defined unit of service and cost drivers, using multiple cost allocation models), pricing, budgeting, forecasting (including what-if scenarios), benchmarking, analytics, the ability to publish and/or subscribe costing and pricing with a service catalog, reporting, billing and reconciliation with enterprise financial systems, among other functionalities. These tools have the ability (adapters) to collect cost-related data from a heterogeneous and complex IT environment, the ability to build a cost model, and reporting and allocation capabilities.
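The contrast between cost allocation models mentioned above can be sketched briefly; the business unit names, consumption figures and shared cost below are hypothetical, and real tools support many more models (tiered rates, weighted drivers, negotiated prices):

```python
# Hedged sketch contrasting two simple cost allocation models:
# an even split versus consumption-based allocation of a shared IT cost.
# Business unit names, figures and the cost are hypothetical.

def even_split(shared_cost, units):
    """Allocate a shared cost equally across business units."""
    return {u: round(shared_cost / len(units), 2) for u in units}

def by_consumption(shared_cost, consumption):
    """Allocate a shared cost in proportion to measured consumption."""
    total = sum(consumption.values())
    return {u: round(shared_cost * c / total, 2)
            for u, c in consumption.items()}

consumption = {"sales": 500, "finance": 300, "hr": 200}  # e.g., GB stored
print(even_split(12_000, consumption))      # each unit carries 4000.0
print(by_consumption(12_000, consumption))  # sales carries the most
```

Which model is appropriate is a policy question, not a technical one; the tools' value is in letting IT present the same costs through several such lenses.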

Most enterprises have traditionally used spreadsheets and other homegrown tools for IT financial management. These spreadsheets are inadequate to run IT like a business in a multisourced IT service environment. Vendors now provide software-based tools that are more powerful than these traditional approaches — and enable a quick start versus the time required to develop a do-it-yourself tool.

Position and Adoption Speed Justification: Through 2010, and so far in 2011, we have seen a significant rise in interest in the adoption of these tools, mainly for showback and cost transparency. New IT financial management tools have emerged over the last five years, and some of the existing COTS tools that have traditionally been used as substitutes for spreadsheets have also transformed to provide some of the key functionality needed for IT financial management. The demand for these tools has also grown due to the increasing share of virtualization (shared infrastructure) in the production environment, interest in cloud computing service delivery models and increased interest in cost optimization and allocation. These tools will grow in capabilities, providing a wide range of options, including simulations that give IT leaders visibility into the impact of their decisions. These tools are also moving from being isolated, stand-alone tools to being a more integrated part of traditional service management tools.

These tools will continue to gain visibility and capability as the pressure on enterprise IT increases to run IT like a business. Furthermore, increased interest in cloud computing is putting additional pressure on IT operations to provide transparency of costs, billing and chargeback information, thus increasing the demand for the tools. Most organizations are beginning to see the benefits of commercial chargeback tools and are using these tools to provide greater transparency of costs, particularly in a multisourced environment where service costs from internal, outsourced, external and cloud computing environments need to be managed to make sourcing decisions.

User Advice: As IT departments start their transformation toward investment centers or profit centers, they need more transparency in their operations, must provide investment justifications, reduce operational ambiguity, and strive for better management and control of their operations. IT financial management tools provide the business with improved cost transparency or showback in a multisourced (internal, outsourced, cloud) IT service environment, allocate cost to the appropriate drivers and help establish cost as one of the key decision-making components. These tools can help with this process, especially in showing where consumption drives higher or lower variable costs. As IT moves toward a shared-service delivery model in a highly complex computing environment, these tools will enable more-responsible and accountable financial management of IT. However, users must have a functional financial management organization in IT, along with a well-defined process, for a successful implementation of these tools.

Business Impact: IT financial management tools mainly help run the IT organization as a business. They affect the IT organization's ability to provide cost transparency and perform accurate cost allocation, and have an impact on the value of the services it provides. The tools enable the business and IT to manage and optimize the demand and supply of IT services. A major benefit is that these tools enable enterprises to provide insight into IT costs and fair apportionment of IT service costs, if needed, based on differentiated levels of business unit service consumption. They also show how the IT organization contributes to business value.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Apptio; BMC Software; ComSci; Digital Fuel; IBM Tivoli; Nicus Software; VMware

IT Service Portfolio Management and IT Service Catalog Tools

Analysis By: Debra Curtis; Kris Brittain

Definition: IT service portfolio management (ITSPM) products document the portfolio of standardized IT services, described in business value terms, along with their standard supported architectures and contracts with internal and external service providers. ITSPM tools simplify the process of decomposing IT services into an IT service catalog of specific offerings that meets the majority of customer requirements, and create the service request portal, so that an end user or a business unit customer can purchase them. The service catalog portal format includes space for easy-to-follow instructions on how to order or request services, as well as details on service pricing, service-level commitments and escalation/exception-handling procedures. ITSPM tools provide IT service request handling capabilities to automate, manage and track the process workflow from a service request (a customer order) that comes into the portal, through to service delivery, including task definition, dissemination and approval. ITSPM tools provide reporting and, sometimes, a real-time dashboard display of service demand, service delivery and service-level performance for IT analysis and for customers to track their service requests. Finally, ITSPM financial management capabilities help the IT operations group analyze service costs and service profitability, and address functional requirements specific to accounting, budgeting and billing for IT services in the portfolio and catalog.

Position and Adoption Speed Justification: As IT organizations adopt a business-oriented IT service management strategy, they seek greater efficiency in discovering, defining and documenting IT services; automating the processes for delivering IT services; and managing service demand and service financials. IT service catalogs are gaining new focus and impetus with the ITIL v.3 update. However, the target market for ITSPM and IT service catalog tools is the 5% to 15% of IT organizations that have attained the service-aligned level of the ITScore maturity model for infrastructure and operations, a fact that slows adoption speed and lengthens the Time to Plateau.

IT organizations will proceed through a number of maturity steps, likely first documenting their service catalog in a simple Microsoft Word document, then storing it in an Excel spreadsheet or a homegrown database. A typical second stage of maturity appears with a homegrown IT service catalog portal on the intranet, which is placed under change control. Finally, IT organizations mature to using commercial, off-the-shelf ITSPM tools to present the IT service catalog online for customers to place orders through a self-service portal.


User Advice: Enterprises that want to automate the process workflow for ordering and delivering IT services, as well as to track the financial aspects of the "business of IT," should evaluate these tools only when they have mature IT service management processes and have documented IT architecture standards already in place. Generally, ITSPM products do not directly measure application availability or IT infrastructure components for service quality reporting. Instead, they depend on data imports from service-level reporting, business service management or IT component availability monitoring tools. Therefore, integration usually will be required with other vendors/products to complete this function. Some functions of emerging ITSPM tools overlap with more-mature IT service desk tools. There is a high potential for market consolidation and acquisition as ITSPM features begin to blend with or disappear into other categories.

Business Impact: ITSPM products are intended to aid IT organizations in articulating the business value of the IT services they offer, improve the customer experience by making it easier for customers to do business with IT, increase IT operations efficiency through service delivery process documentation and workflow automation, and help the IT operations group assess the profitability and competitiveness of its services. By documenting its portfolio of value-based, business-oriented IT services at different price points, the IT organization can present well-defined IT service offerings to its business unit customers, which raises the organization's credibility with the business and helps establish a foundation for service quality and IT investment negotiations that are based on business value and results. Through standardization, along with a better understanding of customer requirements and delivery costs (such as capital and labor requirements), the IT organization is in a position to do an accurate cost/profit analysis of its service portfolio and to continually seek methods to reduce delivery costs, while meeting customer service and quality requirements.

Once services are decomposed into standardized, orderable service catalog offerings, repeatable process methodologies for service delivery can be documented and automated. This will reduce errors in service delivery, help identify process bottlenecks and uncover opportunities for efficiency improvements. In addition, service catalogs simplify the service request process for customers and improve customer satisfaction by presenting a single "face" of IT to the customer for all kinds of IT interactions (including incident logging, change requests, service requests, project requests and new portfolio requests).

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Emerging

Sample Vendors: BMC Software; CA Technologies; Cisco-newScale; Digital Fuel; HP; IBM Tivoli; PMG; USU

Recommended Reading: "Case Study: Insurance Provider Improves Service Delivery via a Service Catalog"

"ITSM Fundamentals: How to Create an IT Service Portfolio"

"ITSM Fundamentals: How to Construct an IT Service Catalog"


"The Fundamental Starter Elements for IT Service Portfolio and IT Service Catalog"

Open-Source IT Operations Tools

Analysis By: Milind Govekar; Cameron Haight

Definition: Open-source service management tools are products offered under several licensing arrangements (GNU General Public License, Apache Software License, etc.), and designed to provide similar IT service management (ITSM) and operations management capabilities as those offered by traditional management providers. These capabilities include performance and availability management, configuration management (including discovery, configuration management databases [CMDBs] and provisioning), service desk, and service-level management. The qualitative (feature richness) and quantitative (feature breadth) attributes of these products have continued to improve, although they often require somewhat more-skilled users to maximize their value. While these products often have been of interest to small or midsize organizations, due to their cost sensitivity, we see more large enterprises viewing these tools as potential substitutes for those from the larger enterprise management vendors. Accelerating this trend is the growing adoption of cloud computing in large, public cloud organizations that have opted for open-source-based (or do-it-yourself) management products to support open-source server and networking infrastructures. The rise of the DevOps movement is also creating interest in these products, especially in application life cycle management (ALM), including release management and configuration management.
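Several of the monitoring tools in this category (Nagios among them) are extended through plug-ins that follow a simple, well-documented contract: print a one-line status message and return exit code 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). A minimal illustrative check in Python, with the metric value and thresholds chosen arbitrarily for the example:

```python
# Nagios-style plug-in exit codes (documented plug-in development convention)
OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3


def check_threshold(label, value, warn, crit):
    """Classify a metric against warning/critical thresholds and build
    the status line a monitoring server would parse."""
    if value >= crit:
        return CRITICAL, f"{label} CRITICAL - {value}% used"
    if value >= warn:
        return WARNING, f"{label} WARNING - {value}% used"
    return OK, f"{label} OK - {value}% used"


code, message = check_threshold("DISK", 92, warn=80, crit=90)
print(message)  # DISK CRITICAL - 92% used
# In a real plug-in, the process would finish with: sys.exit(code)
```

The low barrier to writing such checks is one reason these tools attract skilled users willing to trade out-of-the-box breadth for customizability.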

Position and Adoption Speed Justification: Open-source service management products have been available for several years, supporting a variety of IT requirements. Increasingly, many organizations are beginning to use the tools in mission-critical environments, because their functionality continues to improve, and standardized support and maintenance contracts have become more widely available. The ongoing pressure to reduce IT costs is also continuing to increase interest in these products. While we don't anticipate broad-based adoption in Global 2000 enterprises in the near term, we anticipate pockets of open-source management deployment specific to the functions identified above to occur more frequently within these large firms, in addition to their growing adoption in the cloud computing marketplace.

User Advice: Understand not only the potential feature limitations of some of these products, but also the open-source licenses under which they may be offered. Be sure to assess the supportability and maintenance provisions, especially for products being offered as "projects" or via noncommercial concerns. You may need to plan for consulting services to address any feature limitations requiring remedy. Potentially, the total cost of ownership (TCO) of implementing these tools may be higher than commercial off-the-shelf (COTS) tools, depending on the maintenance, integration, upgrades, customization and other requirements.

Business Impact: Open-source service management products can, in many cases, dramatically reduce your IT service and operations management spending in license revenue and associated maintenance. However, be prepared to actively manage your TCO downward, as this may turn out to be a bigger factor in cost terms.

Benefit Rating: High


Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: Cfengine; GroundWork Open Source; Nagios; Opscode; Puppet Labs; Zabbix; Zenoss

VM Energy Management Tools

Analysis By: Rakesh Kumar; John R. Phelps; Philip Dawson

Definition: Virtual machine (VM) energy management tools enable users to understand and measure the amount of energy that is used by each VM in a physical server. This will enable users to control operational costs and associate energy consumption with the applications running in a VM.

Position and Adoption Speed Justification: Data center infrastructure management (DCIM) tools monitor and model the use of energy across the data center. Server energy management tools track the energy consumed by physical servers. Although these tools are critical to show real-time consumption of energy by IT and facilities components, the need to measure the energy consumed at the VM level is also important. This will provide more-granular management of overall data center energy costs and allow association of energy from the physical to the virtual environment. As the use of server virtualization increases, this association will become more important, as will the ability to associate the energy used by applications running in the VM.

Distributed Power Management from VMware is available and is designed to throttle down inactive VMs to reduce energy consumption. Coupled with Distributed Resource Scheduler, it can move workloads at different times of the day to get the most-efficient energy consumption for a particular set of VMs. Also, the core parking feature of Hyper-V R2 allows minimal use of cores for a given application, even if multiple cores are predefined, keeping the nonessential cores in a suspend state until needed, thus reducing energy costs.

The tools that provide VM energy management are at an early stage of development, but should improve during the next few years.

User Advice: Start deploying appropriate tools to measure energy consumption in data centers at a granular level. This includes information at the server, rack and overall site levels. Use this information to manage data center capacity, including the floor space layout for new hardware, and for managing costs through virtualization and consolidation programs.

Acquire energy management tools that report power and energy consumption efficiency data according to power usage effectiveness (PUE) metrics as a measure of data center efficiency. The Green Grid's PUE metric is increasingly becoming one of the standard ways to measure the energy efficiency of a data center.
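The PUE calculation itself is straightforward: total facility energy divided by the energy delivered to IT equipment, so 1.0 is the theoretical ideal and higher values indicate more overhead (cooling, power distribution losses and so on). A one-line sketch with illustrative figures:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power usage effectiveness: total facility energy divided by the
    energy consumed by IT equipment. 1.0 is the theoretical ideal."""
    return total_facility_kw / it_equipment_kw


# A site drawing 1,500 kW overall while its IT gear consumes 1,000 kW:
print(pue(1500.0, 1000.0))  # 1.5
```

The figures here are invented for illustration; in practice both inputs come from facility and server instrumentation, measured over the same interval.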

Where appropriate, evaluate and deploy VM energy management tools. Gartner also encourages users to develop operational processes to maximize the relationships among the applications running in VMs and the amount of energy that is used by VMs. For example, VM energy management tools could be used for chargeback or as the basis for application prioritization.

Business Impact: VM energy management tools will provide better management of data center operations costs, and more-granular energy-based and application-specific chargeback.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: 1E; Emerson Network Power-Aperture; HP; IBM; VMware

Recommended Reading: "Pragmatic Guidance on Data Center Energy Issues for the Next 18 Months"

"Green Data Centers: Guidance on Using Energy Efficiency Metrics and Tools"

IT Process Automation Tools

Analysis By: Ronni J. Colville

Definition: IT operations process automation (IT PA) tools (previously called run book automation [RBA]) automate IT operations processes across different IT management disciplines. IT PA products have:

■ An orchestration interface to design, administer and monitor processes

■ A workflow to map and support the processes

■ Integration with IT elements and IT operations tools needed to support processes (e.g., fault to remediation, configuration management and disaster recovery)

IT PA products are used to integrate and orchestrate multiple IT operations management tools to support a process that may need multiple tools, and that may span many IT operations management domains. IT PA tools can focus on a specific process (for example, server provisioning), replacing or augmenting scripts and manual processes, or apply more broadly to processes that cross different domains (for example, cloud automation).
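The orchestration pattern described here — a workflow of steps with embedded decision logic deciding whether remediation runs — can be sketched minimally. The step names and the dictionary used as shared context below are hypothetical, standing in for what a commercial tool expresses through a graphical workflow designer:

```python
def detect_fault(ctx):
    # Decision logic embedded in the workflow: flag high CPU load
    ctx["fault"] = ctx["cpu_load"] > 0.9
    return True

def remediate(ctx):
    # Automated remediation runs only when the fault condition holds
    if ctx["fault"]:
        ctx["actions"].append("restart service")
    return True

def notify(ctx):
    # Hand off to the service desk tool to close the loop
    ctx["actions"].append("update incident ticket")
    return True

def run_workflow(steps, ctx):
    """Execute steps in order; stop if any step reports failure."""
    for name, step in steps:
        if not step(ctx):
            return False
    return True

context = {"cpu_load": 0.95, "actions": []}
workflow = [("detect", detect_fault), ("remediate", remediate),
            ("notify", notify)]
run_workflow(workflow, context)
print(context["actions"])  # ['restart service', 'update incident ticket']
```

A fault-to-remediation runbook like this replaces the scripts and manual handoffs the section describes, while making each execution trackable.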

Position and Adoption Speed Justification: IT operations process automation continues to grow as a key focus for IT organizations looking to improve IT operations efficiencies, and provide a means to track and measure process execution. IT PA tools provide a mechanism to help IT organizations integrate their disparate IT operations tool portfolios to improve process handoffs in support of best practices. The IT PA market continues to attract new players from a wide range of technology focus areas, including event and fault management, change management, configuration management, job scheduling and business process management.


The combination of a growing awareness of tool benefits and large IT management tool vendors embracing IT PA across their own portfolios and for specific scenarios (e.g., incident resolution) is increasing the introduction of new product capabilities (e.g., release management) and new vendors. Success with IT PA tools will continue to accelerate market awareness, and will spur further innovation in vendor solutions and in client use cases. In addition, IT PA tools are being used to support key IT initiatives, such as the management of virtual infrastructures and cloud computing.

IT PA tools continue to be enhanced, especially around scalability, performance and usability, with the most advanced providing embedded decision-making logic in their workflows to allow automatic decisions on process execution. There are no signs that the adoption and visibility of these tools will diminish, as they continue to be used to address some of today's key IT challenges, including reducing IT operational costs, automating virtual infrastructure and supporting private cloud initiatives.

The two biggest inhibitors to more widespread adoption are:

■ A lack of out-of-the-box workflow templates or building blocks that would enable faster time to implement. Without specific content for various scenarios, IT resources are tasked with building workflows and associated execution scripts, which is often time-consuming and requires in-depth tool knowledge.

■ A lack of knowledge of the tasks or activities being automated. Many organizations try to use these tools without the necessary process knowledge, and developing this process design often requires cross-domain expertise and coordination. IT organizations that don't have their processes and task workflows documented often take longer to gain success with these tools.

User Advice: IT PA tools that have a specific orientation (e.g., user provisioning, server provisioning, etc.) and provide a defined (out-of-the-box) process framework can aid in achieving rapid value. When used in this way, the IT PA tools are focused on a specific set of IT operations management processes. However, using a more general-purpose IT PA tool requires more-mature, understood process workflows. Select IT PA tools with an understanding of your process maturity and the tool's framework orientation. Clients should expect to see IT PA tools being positioned and sold to augment and enhance existing IT management products within a single vendor's product portfolio (e.g., cloud management platform solutions).

However, when used to support a broader range of process needs that cross domains and cross multiple processes, clients should develop and document their IT operations management processes before implementing IT PA tools. IT operations managers who understand the challenges and benefits of using IT operations management tools should consider IT PA tools as a way to reduce risk where handoffs occur or to improve efficiencies where multiple tool integrations can establish repeatable best-practice activities. This can only be achieved after the issues of complexity are removed through standardizing processes to improve repeatability and predictability. In addition, IT operations processes that cross different IT management domain areas will require organizational cooperation and support, and the establishment of process owners.

Business Impact: IT PA tools will have a significant effect on running IT operations as a business by providing consistent, measurable and better-quality services at optimal cost. They will reduce the human factor and associated risks by automating safe, repeatable processes, and will increase IT operations efficiencies by integrating and leveraging the IT management tools needed to support IT operations processes across IT domains.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Early mainstream

Sample Vendors: BMC Software; CA Technologies; Cisco; Citrix Systems; HP; iWave Software; IBM; LANDesk; Microsoft; NetIQ; Network Automation; Singularity; UC4 Software; Unisys; VMware

Recommended Reading: "Best Practices for and Approaches to IT Operations Process Automation"

"Run Book Automation Reaches the Peak of Inflated Expectations"

"RBA 2.0: The Evolution of IT Operations Process Automation"

"The Future of IT Operations Management Suites"

"IT Operations Management Framework 2.0"

Application Performance Monitoring

Analysis By: Jonah Kowall; Will Cappelli; Milind Govekar

Definition: Application performance monitoring (APM) tools were previously represented in the Hype Cycle by three technologies: end-user monitoring tools, application management and application transaction profiling. With this consolidation of APM products, we have selected the position accordingly. Gartner's definition of APM is composed of five main functional dimensions:

1. End-user experience monitoring: Tracking the end-user application performance experience, and the quality with which an application is performing, including the use of synthetic transaction-based software robots, network-attached appliance-based packet capture and analysis systems, endpoint instrumentation systems based on classical manager agents, and special-purpose systems targeted at complex Internet Protocol (IP)-based services.

2. User-defined transaction profiling: Following a defined set of transactions as it traverses the application stack and infrastructure elements that support the application. Possible methods employed are automated transaction-centric event correlation and analysis, and transaction tagging.

3. Application component discovery and modeling: Discovery of software and hardware component interrelationships as user-defined transactions are executed.

4. Application component deep-dive monitoring: Technologies that allow for an in-depth view of the critical elements that hold a modern, highly modular application stack together.


5. Application performance management database: Storage, correlation and analysis of collected datasets to yield value.
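The first two dimensions above can be illustrated as a transaction that is tagged and timed as it traverses application tiers, with per-tier latency recorded for problem isolation. The class, tier names and timings below are illustrative assumptions, not any vendor's API:

```python
class TransactionTrace:
    """Tags one user transaction and records latency per tier it crosses."""

    def __init__(self, transaction_id):
        self.transaction_id = transaction_id  # the transaction tag
        self.spans = []                       # (tier, elapsed_seconds)

    def record(self, tier, elapsed):
        self.spans.append((tier, elapsed))

    def total_latency(self):
        # What the end user experienced for this transaction
        return sum(elapsed for _, elapsed in self.spans)

    def slowest_tier(self):
        # The first place to look during root cause analysis
        return max(self.spans, key=lambda s: s[1])[0]


trace = TransactionTrace("txn-42")
trace.record("web", 0.020)
trace.record("app", 0.180)
trace.record("db", 0.450)
print(trace.slowest_tier())  # db
```

Pinpointing the slowest tier from such a trace is the essence of the rapid problem isolation and mean-time-to-repair benefits the section describes.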

Position and Adoption Speed Justification: Gartner has seen a rise in demand from clients investing in these tools, as most enterprises continue their transformations from purely infrastructure management to application management. In their journeys toward IT service management, APM provides valuable IT operations capabilities for the rapid isolation and root cause analysis (RCA) of problems. The increasing adoption of private and public cloud computing is stimulating the desire for more insight into application and user behavior. This journey will require collaboration with, and coordination among, the application development teams and the IT infrastructure and operations teams, which don't always have the same priorities and goals. The visibility of APM tools within this segment increased significantly during the past several years, and is continuing to do so.

User Advice: Enterprises should use these tools to proactively measure application availability and performance. Most enterprises will need to implement several types of technology from the multidimensional model to satisfy the needs of different IT departments, as well as demands of business users and management. This technology is particularly suited for highly complex applications and infrastructure environments. Enterprises should take into consideration that, on many occasions, they will need support from consultants and application development organizations to implement these tools successfully. Many organizations start with the end-user experience monitoring tools to first get a view of end-user or application-level performance.

Business Impact: APM tools are critical in the problem-isolation process, thus shortening mean time to repair and improving service availability. They provide correlated performance data that business users can utilize to answer questions about service levels and user populations, often in the form of easily digestible dashboards. These tools are paramount to improving and understanding service quality, as users interact with applications. They allow for multiple IT groups to share captured data and assist users with advanced analysis needs.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: AppNeta; AppSense; Arcturus Technologies; ASG Software Solutions; Aternity; BMC Software; CA Technologies; Compuware; Correlsense; dynaTrace; Endace; ExtraHop Networks; HP; IBM; Idera; Inetco Systems; InfoVista; Keynote Systems; ManageEngine; Microsoft; Nastel Technologies; NetScout Systems; Network Instruments; Opnet Technologies; OpTier; Oracle; Precise; Progress Software; Quest Software; SL Corp; Triometric

Recommended Reading: "Magic Quadrant for Application Performance Monitoring"

"The Future of Application Performance Monitoring"

"APM in 2011: Top Trends"


"Keep the Five Functional Dimensions of APM Distinct"

"Magic Quadrant for IT Event Correlation and Analysis"

COBIT

Analysis By: George Spafford; Simon Mingay; Tapati Bandopadhyay

Definition: COBIT is an IT control framework used as part of IT governance to ensure that the IT organization meets enterprise requirements. Although originally an IT audit tool, COBIT is increasingly being used by business stakeholders and IT management to identify and create IT control objectives that help mitigate risks, and for high-level self-assessment of IT organizations. Process engineers in IT operations can leverage COBIT to identify controls to embed in processes to better manage risks. Using COBIT may be part of an enterprise-level compliance program or an IT process and quality program. COBIT is organized into four high-level domains (plan and organize, acquire and implement, deliver and support, and monitor and evaluate) that are made up of 34 high-level processes, with a variable number of control objectives for each.

The focus of this high-level framework is on what must be done, not how to do it. For example, the COBIT framework identifies a software release as a control objective, but it doesn't define the processes and procedures associated with the software release. Therefore, IT operations management typically uses COBIT as part of a mandated program in the IT organization, and to provide guidance regarding the kind of controls needed to meet the program's requirements. Process engineers can, in turn, leverage other standards, such as ITIL, for additional design details to pragmatically use.

COBIT 4.1 was released in May 2007 by the Information Systems Audit and Control Association (ISACA) to achieve greater consistency with its Val IT program, and to address some discrepancies in COBIT v.4. Supplementary guidance consists primarily of spreadsheets and mappings to other frameworks, such as ITIL. COBIT v.5 will be released in 2012 to integrate the COBIT, Val IT and Risk IT frameworks, and to provide additional guidance.

Position and Adoption Speed Justification: COBIT will have a steadily increasing effect on IT operations as IT operations organizations begin to increasingly realize its benefits, such as more-predictable operations, and as more enterprises adopt it and issue mandates for IT operations to comply with it. As a control framework, COBIT is well-established. Its indirect effect on IT operations can be significant, but it's unlikely to be a frequent point of reference for IT operations management. As typical IT operations groups become more familiar with the implications of COBIT, and awareness and adoption increase, the framework will progress slowly along the Hype Cycle. We saw an increase in client inquiry calls in 2010, and expect interest to increase as IT operations process engineers increasingly understand how to leverage the framework.

User Advice: IT operations managers who want to assess their controls to better mitigate risks and reduce variations, and are aiming toward clearer business alignments of IT services, should use COBIT in conjunction with other frameworks, including ITIL and ISO 20000. Those IT operations managers who want to gain insight into what auditors will look for, or into the potential implications for compliance programs, should also take a closer look at COBIT. Any operations team facing a demand for wholesale implementation should push back and focus its application in areas where there are specific risks, in the context of its operation.

COBIT is better-positioned than ITIL in terms of managing IT operations' high-level risks and controls; as such, enterprises that wish to put their IT service management program in the broader context of a controls and governance framework should use COBIT. For example, in COBIT, the business triggers the IT services demand scenario, with business goals determining the number of IT goals, and then the IT goals determining the process goals and, subsequently, the activity goals. These links can serve as audit trails to justify the IT activities, processes and services, and can help build business cases around each of them at the different levels of detail as required. The control strengths of COBIT are visible in the measurability of goals and process/service performance (e.g., key goal indicators [KGIs]), defined as lagging indicators to measure whether goals have been achieved, and the lead indicators of key performance indicators (KPIs) for planning and setting targets on processes and services. Each COBIT process has links to business goals to justify what it focuses on, how it plans to achieve the targets and how it can be measured (metrics).

An additional consideration is that service improvement programs that seek to leverage ITIL all too frequently set themselves up as bottom-up, tactical, process engineering exercises, lacking a strategic or business context. While ITIL encourages and provides guidance for a more strategic approach, COBIT can help in achieving that, particularly by drawing business stakeholders into the organizational change.

Business Impact: With a history as an auditing tool, COBIT is primarily concerned with reducing risk. It affects all areas of managing the IT organization, including aspects of IT operations. Management should review how the use of controls can help better manage risks and result in improved performance. COBIT's usefulness has moved a long way beyond a simple audit tool, particularly with the addition of the maturity models and the responsible, accountable, consulted and informed (RACI) charts.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Adolescent

Recommended Reading: "Leveraging COBIT for Infrastructure and Operations"

"Understanding IT Controls and COBIT"

Sliding Into the Trough

IT Service View CMDB

Analysis By: Ronni J. Colville; Patricia Adams

Definition: An IT service view configuration management database (CMDB) is a repository that has four functional characteristics: service modeling and mapping, integration/federation, reconciliation, and synchronization. A CMDB maintains the dependencies among various IT infrastructure components, and visualizes the hierarchical and peer-to-peer relationships that comprise an IT service that's delivered to a business or to IT customers. Thus, a prerequisite for a CMDB is the identification of business-oriented IT services and their associated components and interrelationships.

A CMDB provides a consolidated configuration view of various sources of discovered data (as well as manual data and documentation), which are integrated and reconciled into a single IT service view to assist with the impact analysis of pending changes. There are two approaches to defining the services and their component relationships:

■ Manually input or imported from a static document or source that is manually maintained. This is tedious and often fraught with errors.

■ Using an IT service dependency mapping tool to populate CMDBs with discovered relationship data, which is then reconciled with other data sources to illustrate a broader view of the IT services.

A CMDB also has synchronization capabilities that provide checks and balances to compare configuration data in various IT domain tools with one or more designated "trusted sources," which are used to populate the CMDB. In addition, the CMDB provides federated links in context with other sources — such as problem, incident and asset management, as well as documentation — to enable a deeper level of analysis regarding proposed change impacts. Once the IT services are modeled and mapped, other IT management tools will be able to "leverage" these views to enhance their capabilities (for example, business service management, data center consolidation projects and disaster recovery).
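The reconciliation behavior described above can be sketched as a merge of several discovery sources under a trusted-source precedence rule: when two sources disagree about an attribute, the more trusted source wins. The source names, reconciliation key and attributes below are hypothetical:

```python
def reconcile(sources):
    """Merge configuration items from multiple discovery sources.
    Sources earlier in the list are more trusted, so their attribute
    values override those of less trusted sources."""
    cmdb = {}
    for source in reversed(sources):      # apply least trusted first
        for key, attrs in source.items():  # key = reconciliation identifier
            cmdb.setdefault(key, {}).update(attrs)
    return cmdb


# Agent-based discovery knows the OS but not the business owner;
# the asset database is designated the trusted source for ownership.
agent_discovery = {"web01": {"os": "RHEL 6", "owner": "unknown"}}
asset_db = {"web01": {"owner": "e-commerce team"}}

merged = reconcile([asset_db, agent_discovery])
print(merged["web01"])  # {'os': 'RHEL 6', 'owner': 'e-commerce team'}
```

Real CMDB reconciliation engines add per-attribute precedence rules and identity matching across differing keys, but the trusted-source override is the core mechanism.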

ITIL v.3 introduced a new concept called the configuration management system (CMS). This can be a CMDB or a completely federated repository. The concept of a CMS offers a varied approach to consolidating a view of all the relative information pertaining to an IT service. CMS tooling is actually the same as a CMDB; because federation is still a nonstandard and immature functionality, the reality of a CMS is still not technically viable.

Position and Adoption Speed Justification: Most IT organizations implement a CMDB as they progress along their ITIL journey, which usually begins with a focus on problem, incident and change management, and then evolves into a CMDB. Subsequently, there is often a focus on configuration or IT asset management. Foundational to a successful CMDB are mature change and configuration management processes. IT organizations that do not have a focus on governing change and tracking and maintaining accurate configurations will not be successful with a CMDB implementation.

During the past 12 months, Gartner has seen an increase in successful implementations (approximately 20% to 40%, based on continued polling) that have achieved business value by modeling and mapping at least three IT services. This has improved the ability to assess the impact of a change on one component and others in the IT services. CMDB implementations can take from three to five years to establish, but have no "end date," because they are ongoing projects. CMDB tools have been available since late 2005 and 2006. The combination of maturing tools and maturing IT organization processes, as well as the realization that this type of project does not have a quick ROI, has drawn out the planning, architecting and tool selection time frame. However, even with longer time frames, IT organizations can achieve incremental benefits (for example, data center visibility) while undergoing the implementation. Disaster recovery, business continuity management planning and data center consolidation projects continue to be prevalent business drivers (along with the main driver of improving change impact analysis) for justifying CMDB projects.

CMDB technology is foundational, and will provide input for other IT operational tools to provide a trusted view of IT systems (for example, asset planning). Activities to implement a CMDB require a significant amount of planning, which includes configuration process assessments and organizational alignment. IT organizations must involve a broad set of stakeholders to ensure cross-collaboration in defining the necessary configuration items that comprise a business service view. This effort could take one to three years to complete. Because enterprise infrastructures will include both internal and external resources (hybrid clouds), IT organizations will be challenged to maintain a "living" source of the IT services their users consume, and to manage change activity to reduce and minimize disruption. A CMDB will play a key role in achieving this. As CMS and CMDB technologies mature, they will become a critical part of an enterprise's trusted repository for synchronizing the configuration service model with the real-time service model.

A CMDB is foundational and must be a prerequisite for a CMS. One significant inhibitor for CMS "realization" is federation capability. Today, most successful implementations have fewer than three sources of discovered data for integration and federation, because federation is still not robust enough for true multivendor environments. Without standards of any significance being adopted by CMDB, CMS and the management data repository vendors that are suppliers of federated information, IT organizations should continue to focus on CMDB implementations.

User Advice: Enterprises must have a clear understanding of the goals and objectives of their IT service view CMDB projects, and have several key milestones that must be accomplished for a successful implementation. Enterprises should work now to prevent scope creep, which may result in significant delays. Enterprises lacking change and configuration management processes are likely to establish inventory data stores that don't represent real-time or near-real-time data records. Trusted source data and reconciliation are essential components that require comprehensive processes and organizational changes. IT organizations must know what trusted data they have, and what data will be needed to populate the CMDB and IT service models that will achieve their goals. Only data that has ownership and a direct effect on a goal should be in the CMDB IT service configuration models; everything else should be federated (e.g., financial and contractual data should remain in the IT asset management repository, and incident tickets should remain with the IT service desk).

Enterprises should develop an incremental approach to implementing an IT service view CMDB by focusing on one or two IT services at a time, rather than trying to define them all at once. In many scenarios, an IT service dependency mapping tool is a good place to start establishing a baseline of relationships across applications, servers, databases, middleware, networking components and storage, and to start gaining insights into the current configuration. An IT service dependency mapping tool is also an effective vehicle for automating the data population of the CMDB.


Business Impact: A CMDB affects nearly all areas of IT operations. It will benefit providers (of data) and subscribers (of IT service views). It's a foundation for lowering the total cost of operations and improving the quality of service. An IT service view CMDB implementation improves risk assessment of proposed changes and can assist with root cause analyses. It also facilitates a near-real-time business IT service view.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: BMC Software; CA Technologies; HP; IBM Tivoli

Recommended Reading: "Top Five CMDB and Configuration Management System Market Trends"

"Cloud Environments Need a CMDB and a CMS"

"Four Pitfalls of the Configuration Management Database and Configuration Management System"

Real-Time Infrastructure

Analysis By: Donna Scott

Definition: Real-time infrastructure (RTI) represents a shared IT infrastructure (across customers, business units or applications) in which business policies and SLAs drive dynamic and automatic allocation and optimization of IT resources (that is, services are elastic), so that service levels are predictable and consistent, despite the unpredictable demand for IT services. Where resources are constrained, business policies determine how resources are allocated to meet business goals. RTI may be implemented in private and public cloud architectures, as well as hybrid environments where data center and service policies would drive optimum placement of services and workloads. Moreover, in all these configurations, RTI provides the elasticity functionality, as well as dynamic optimization and tuning, of the runtime environment based on policies and priorities.
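As a rough illustration of policy-driven allocation under constraint (a toy model with hypothetical service names and priorities, not any vendor's implementation), a service governor might rank competing demands by business priority when capacity is scarce:

```python
def allocate(capacity, demands):
    """Allocate constrained capacity by business priority.

    demands: list of (service, priority, requested_units); higher-priority
    services are served first when the pool cannot satisfy everyone.
    """
    allocation = {}
    for service, _, requested in sorted(demands, key=lambda d: -d[1]):
        granted = min(requested, capacity)  # give what remains, at most the ask
        allocation[service] = granted
        capacity -= granted
    return allocation


# 120 units requested against a pool of 100: the policy, not arrival
# order, decides who is shortchanged.
demands = [("batch-reporting", 1, 40), ("order-entry", 3, 50), ("email", 2, 30)]
print(allocate(100, demands))
# order-entry (highest priority) gets its full 50, email 30, batch-reporting 20
```

The real difficulty the section describes lies outside this loop: modeling the services, defining the triggers for elasticity, and executing reallocations against live infrastructure.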

Position and Adoption Speed Justification: This technology is immature from the standpoint of architecting and automating an entire data center and its IT services for RTI. However, point solutions have emerged that optimize specific applications or specific environments, such as dynamically optimizing virtual servers (through the use of performance management metrics and virtual server live-migration technologies) and dynamically optimizing Java Platform, Enterprise Edition (Java EE)-based shared application environments. It is also emerging in cloud solutions, initially for optimizing placement of workloads or services upon startup based on pre-established policies. Moreover, enterprises are implementing shared disaster recovery data centers, whereby they dynamically reconfigure test/development environments to look like the production environment for disaster recovery testing and when disaster strikes. This type of architecture can typically achieve recovery time objectives in the range of one to four hours after a disaster is declared. Because of the advancement in server virtualization, RTI solutions are making some degree of progress in the market, especially for targeted use cases where enterprises write specific automation, such as to scale a website up/down and in/out. However, there is low market penetration, primarily because of the lack of service modeling (inclusive of runtime policies and triggers for elasticity), the lack of standards and the lack of strong service governors/policy engines in the market. This leaves customers that desire dynamic optimization to integrate multiple technologies together and orchestrate analytics with actions.

User Advice: Surveys of Gartner clients indicate that the majority of IT organizations view RTI architectures as desirable for gaining agility, reducing costs and attaining higher IT service quality, and that about 20% of organizations have implemented RTI for some portion of their portfolios. Overall progress is slow for internal deployments of RTI architectures because of many impediments, especially immature IT management processes and technology, but also because of organizational and cultural issues.

It is also slow for public cloud services, where applications may have to be written to a specific and proprietary set of technologies to get dynamic elasticity. We see technology as a significant barrier to RTI, specifically in the areas of root cause analysis (required to determine what optimization actions to take), service governors (the runtime execution engine behind RTI analysis and actions), and integrated IT process/tool architectures and standards. However, RTI has taken a step forward in particular focused areas, such as:

■ Dynamic provisioning of development/testing/staging and production environments

■ Server virtualization and dynamic workload movement

■ Reconfiguring capacity during failure or disaster events

■ Service-oriented architecture (SOA) and Java EE environments for dynamic scaling of application instances

■ Specific and customized automation written for specific use cases, such as scaling up/down or out/in a website that has variable demand

Many IT organizations that have been maturing their IT management processes and using IT process automation tools (aka run book automation) to integrate processes (and tools) together to enable complex, automated actions are moving closer to RTI through these actions. IT organizations desiring RTI should focus on maturing their management processes using ITIL and maturity models (such as Gartner's ITScore for I&O Maturity Model), and their technology architectures (such as through standardization, consolidation and virtualization). They should also build a culture conducive to sharing the infrastructure, and should provide incentives, such as reduced costs for shared infrastructures. Organizations should investigate and consider implementing early RTI solutions in the public or private cloud or across data centers in a hybrid implementation, which can add business value and solve a particular pain point, but should not embark on data-center-wide RTI initiatives.

Business Impact: RTI has three value propositions expressed as business goals:

■ Reduced costs achieved by better, more-efficient resource use, and by reduced IT operations management (labor) costs


■ Improved service levels achieved by dynamic tuning of IT services

■ Increased agility achieved by rapid provisioning of new services or resources, and scaling capacity (up and down) of established services across both internally and externally sourced data centers

Benefit Rating: Transformational

Market Penetration: 5% to 20% of target audience

Maturity: Emerging

Sample Vendors: Adaptive Computing; CA Technologies; IBM Tivoli; NetIQ; Univa; VMware

Recommended Reading: "Provisioning and Configuration Management for Private Cloud Computing and Real-Time Infrastructure"

"Private Cloud Computing Ramps Up in 2011"

"Survey Shows High Interest in RTI, Private Cloud"

"Building Private Clouds With Real-Time Infrastructure Architectures"

"Cloud Services Elasticity Is About Capacity, Not Just Load"

IT Service Dependency Mapping

Analysis By: Ronni J. Colville; Patricia Adams

Definition: IT service dependency mapping tools enable IT organizations to discover, document and track relationships by mapping dependencies among the infrastructure components, such as servers, networks, storage and applications, that form an IT service. These tools are used primarily for applications, servers and databases; however, a few discover network devices (such as switches and routers), mainframe-unique attributes and virtual infrastructures, thereby presenting a complete service map. Some tools offer the capability to track configuration change activity for compliance. Customers are increasingly focused on tracking virtual servers and their relationships to physical and other virtual systems, and this is becoming a differentiator among the tools. Some tools can detect only basic relationships (e.g., parent to child and host to virtual machine), but others can detect the application relationships across virtual and physical infrastructures.

Most tools are agentless, though some are agent-based, and will build either a topographical map or a tabular view of the interrelationships of the various components. Vendors in this category provide sets of blueprints or templates for the discovery of various packaged applications (e.g., WebSphere and Apache) and infrastructure components. The tools provide various methods for IT organizations to develop blueprints or templates of internally developed or custom applications, which can then be discovered with the IT service dependency mapping tools.
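The core of what these tools maintain can be pictured as a directed graph of "depends on" edges, walked in reverse for change impact analysis. The sketch below is a hypothetical simplification (component names invented), not any vendor's data model:

```python
from collections import defaultdict


class ServiceMap:
    """Directed dependency graph supporting reverse-walk impact analysis."""

    def __init__(self):
        self._dependents = defaultdict(set)  # component -> things built on it

    def add_dependency(self, component, depends_on):
        self._dependents[depends_on].add(component)

    def impact_of_change(self, component):
        """All components transitively affected by changing `component`."""
        impacted, stack = set(), [component]
        while stack:
            for dep in self._dependents[stack.pop()]:
                if dep not in impacted:
                    impacted.add(dep)
                    stack.append(dep)
        return impacted


m = ServiceMap()
m.add_dependency("order-entry-service", "websphere-01")  # service runs on app server
m.add_dependency("websphere-01", "srv-01")               # app server runs on host
m.add_dependency("order-entry-service", "oracle-db")     # service reads the database
m.add_dependency("oracle-db", "srv-02")

print(m.impact_of_change("srv-01"))
# a change to host srv-01 ripples up to websphere-01 and order-entry-service
```

Discovery tools populate such a graph automatically; the hard part the section describes is keeping the edges accurate without manual Visio-style upkeep.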

Position and Adoption Speed Justification: Enterprises struggle to maintain an accurate and up-to-date view of the dependencies across IT infrastructure components, usually relying on data manually entered into Visio diagrams and spreadsheets, which may reflect an untimely view of the environment, or none at all. Historically, there was no (near) real-time view of the infrastructure components that made up an IT service or of how these components interrelated. Traditional discovery tools provide insight about individual components and basic peer-to-peer information, but they do not provide the parent/child hierarchical relationship information about how an IT service is configured that is necessary to enable better impact analysis.

The last of the stand-alone dependency mapping vendors was acquired in 2009 (BMC Software acquired Tideway); the rest were acquired from 2004 through 2006 by vendors with IT service view CMDB tools, with a plan to jump-start the data population of the CMDB with the IT service or application service models. However, for many enterprises, these tools still fall short in the area of homegrown or custom applications. Although the tools provide functionality to develop the blueprints that depict the desired state or a known logical representation of an application or IT service, this task remains labor-intensive, which will slow enterprisewide adoption of the tools beyond their primary use for discovery.

Over the last 18 months, two vendors have emerged with IT service dependency discovery capability. One is an IT service management vendor, and one is a new BSM vendor. These new tools have some capability for discovering relationships, but fall short in the depth and breadth of blueprints and types of relationships (e.g., for mainframes and virtualized infrastructures). While they don't compare competitively with the more mature tools, organizations with less complexity might find them to be sufficient. Because no other independent solutions have been introduced, and these tools are (potentially) easier to implement, the new vendors' tools may be an accelerant to adoption.

To meet the demands of today's data center, IT service dependency mapping tools require expanded functionality for breadth and depth of discovery, such as a broad range of storage devices, virtual machines, mainframes and applications that cross into the public cloud. While the tools are expensive, they can be easily justified based on their ability to provide a discovered view of logical and physical relationships for applications and IT services. They offer dramatic improvements compared with the prior manual methods. The adoption of these tools has increased in the last 12 months, because new stakeholders (e.g., disaster recovery planners) and business drivers with growing use cases (e.g., enterprise architecture planning, data center consolidation and migration projects) have emerged. Therefore, market awareness and sales traction have improved.

User Advice: Evaluate IT service dependency mapping tools to address requirements for configuration discovery of IT infrastructure components and software, especially where there is a requirement for hierarchical and relationship discovery. The tools should also be considered as precursors to IT service view CMDB initiatives. If the primary focus is an IT service view, then be aware that if you select one tool, the vendor is likely to try to thrust its companion IT service view CMDB technology on you, especially if the CMDB is part of the underlying architecture of the discovery tool. If the IT service dependency mapping tool you select is different from the CMDB, then ensure that the IT service dependency mapping vendor has an adapter to integrate and federate to the desired or purchased CMDB.

These tools can also be used to augment or supplement other initiatives, such as business service management and application management, and other tasks that benefit from a near-real-time view of the relationships across a data center infrastructure (e.g., configuration management, business continuity). Although most of these tools aren't capable of action-oriented configuration modification, the discovery of the relationships can be used for a variety of high-profile projects in which a near-real-time view of the relationships in a data center is required, including compliance, audit, disaster recovery and data center moves (consolidation and migration), and even in planning activities by integrating with enterprise architecture tools for gap analysis. IT service dependency mapping tools can document what is installed and where, and can provide an audit trail of configuration changes to a server and application.

Large enterprises with change impact analysis as a business driver build IT service view CMDBs (whereas SMBs prefer to use the tools for asset visibility), so the adoption rate of IT service dependency mapping tools parallels that of IT service view CMDBs. Where the business driver is near-real-time data center visibility, we have begun to see greater penetration in midsize and large organizations without an accompanying CMDB project.

Business Impact: These tools will have an effect on high-profile initiatives, such as IT service view CMDB, by establishing a baseline configuration and helping populate the CMDB. IT service dependency mapping tools will also have a less-glamorous, but significant, effect on the day-to-day requirements to improve configuration change control by enabling near-real-time change impact analysis, and by providing missing relationship data critical to disaster recovery initiatives.

The overall value of IT service dependency mapping tools will be to improve quality of service by providing a mechanism for understanding and analyzing the effect of a change to one component on its related components within a service. These tools enable a near-real-time view of relationships that previously would have been maintained manually, with extensive time delays for updates. The value is in the real-time view of the infrastructure, so that the effect of a change can be easily understood prior to release. This level of proactive change impact analysis can create a more stable IT environment, thereby reducing unplanned downtime for critical IT services, which will save money and ensure that support staff are allocated efficiently, rather than fixing preventable problems. Using dependency mapping tools in conjunction with tools that can make configuration-level changes, companies have experienced labor efficiencies that have enabled them to manage their environments more effectively and have improved the stability of their IT services.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: BMC Software-Tideway; CA Technologies; HP; IBM; Neebula; Service-Now; VMware-EMC

Recommended Reading: "Selection Criteria for IT Service Dependency Mapping Vendors"

Mobile Device Management

Analysis By: Leif-Olof Wallin; Terrence Cosgrove; Ronni J. Colville


Definition: Mobile device management (MDM) includes software that provides the following functions: software distribution, policy management, inventory management, security management and service management for smartphones and media tablets. MDM functionality is similar to that of PC configuration life cycle management (PCCLM) tools; however, mobile-platform-specific requirements are often part of MDM suites.

Position and Adoption Speed Justification: Many organizations use MDM tools that are specific to a device platform or that manage only a certain part of the life cycle (e.g., device lock or wipe), resulting in the adoption of fragmented toolsets. We are now beginning to see more focus on and adoption of MDM tools, triggered by the attention garnered by the iPad. While IT organizations vary in their approaches to implementing and owning the tools that manage mobile devices (e.g., the messaging group, some other mobile group, the desktop group, etc.), there are still very few that are managing the full life cycle across multiple device platforms. Organizations are realizing that users are broadening their use of personal devices for business applications. In addition, many organizations are using different ways to deploy MDM to support different management styles. These factors will drive the adoption of tools to manage the full life cycle of mobile devices.

Gartner believes that mobile devices will increasingly be supported by the client computing support group in most organizations, and will become peers with notebooks and desktops from a support standpoint. Indeed, some organizations are already replacing PCs with tablets for niche user groups. An increasing number of organizations are looking for MDM functionality from PCCLM tools.

Gartner has moved the position of MDM back (to the left) this year because new dynamics are affecting both the technology and user adoption. While MDM is not new, and some of the technology used to manage mobile devices is not new, what has changed is that IT organizations will now be looking to manage more types of mobile devices from a single management framework (or tool). Some IT organizations will be able to extend this capability into their PCCLM tools, as many of the functions will be similar; for others, MDM will be a separate tool, due to organizational alignment challenges and the success or failure of their existing PCCLM tool.

User Advice: Organizations already manage notebooks similarly to office PCs; however, the needs of smartphone and media tablet users must be assessed. If your MDM requirements are similar to PCCLM tool capabilities, PCCLM tools should be leveraged wherever possible. Many PCCLM tools do not have strong mobile device support, so third-party tools may be required for at least the next two years to automate those functions and other mobile-device-specific functions, such as device wipe.

Business Impact: As more users rely on mobile computing in their jobs, the number of handheld devices and media tablets used for business purposes is growing, especially with the introduction of the iPad. Therefore, MDM capabilities are likely to become increasingly important. Mobile devices are being used more frequently to support business-critical applications, thus requiring more-stringent manageability to ensure secure user access and system availability. In this regard, MDM tools can deliver material benefits by improving user productivity and device data backup/recovery. Initially, the benefits will be visible mostly in sales force and workforce management deployments, where improved device management can increase availability and productivity, as well as decrease support costs. In the short term, MDM tools may add significant per-user and per-device costs to the IT budget. Companies will be hard-pressed to allocate funds and effort to put under management increasing numbers of devices that seem far less expensive than notebooks and may be owned by the user. The needs for security, privacy and compliance must be understood as factors beyond user choice, and must be recognized as a cost of doing business in a "bring your own device" scenario.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Early mainstream

Sample Vendors: AirWatch; BoxTone; Capricode; Excitor; FancyFon Software; Fiberlink Communications; Fixmo; Fromdistance; Good Technology; Ibelem; McAfee; Mobile Active Defense; MobileIron; Motorola Solutions; Odyssey Software; Smith Micro Software; SOTI; Sybase; Symantec; Tangoe; The Institution; Zenprise

Recommended Reading: "Magic Quadrant for Mobile Device Management Software"

"Mobile Device Management 2010: A Crowd of Vendors Pursue Consumer Devices in the Enterprise"

"Use Managed Diversity to Support Endpoint Devices"

"The Five Phases of the Mobile Device Management Life Cycle"

"Microsoft's Mobile Device Management Solution Could Attain Long-Needed Focus"

"Mobile System Management Vendors Consolidate Across Configuration Markets"

"Toolkit Best Practices: Plan for Convergence of Mobile Security and Device Management"

"Toolkit: Are You Ready for the Convergence of Mobile and Client Computing?"

"Toolkit Decision Framework: Mobile Device Management in the Context of PC Management"

"Toolkit Decision Framework: Selecting Mobile Device Management Vendors"

"PC Life Cycle and Mobile Device Management Will Converge by 2012"

Business Service Management Tools

Analysis By: Debra Curtis; Will Cappelli; Jonah Kowall

Definition: Business service management (BSM) is a category of IT operations management software that dynamically links the availability and performance events from underlying IT infrastructure and application components to the business-oriented IT services that enable business processes. To qualify for the BSM category, a product must support the definition, storage and visualization of IT service topology via an object model that documents and maintains parent-child relationships and other associations among the supporting IT infrastructure components. BSM products must gather real-time operational status data from underlying applications and IT infrastructure components via their services or through established monitoring tools, such as distributed system- and mainframe-based event correlation and analysis (ECA), job scheduling and, in some cases, application performance monitoring. BSM products then process the status data against the object model, using potentially complex service health calculations and weightings, rather than straightforward inheritance, to communicate real-time IT service status. Results are displayed in graphical business service views, sometimes referred to as dashboards.
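As a toy illustration of the weighted health calculations described above (the model, weights and thresholds here are hypothetical, not any product's formula), a service's status can be computed from weighted component statuses rather than inherited from the worst child:

```python
STATUS = {"up": 1.0, "degraded": 0.5, "down": 0.0}


def service_health(components):
    """components: list of (name, status, weight); returns (color, score).

    A weighted average lets a heavily weighted critical component dominate,
    instead of the service simply inheriting any child's worst status.
    """
    total = sum(w for _, _, w in components)
    score = sum(STATUS[s] * w for _, s, w in components) / total
    if score >= 0.9:
        return "green", score
    return ("yellow", score) if score >= 0.6 else ("red", score)


checkout = [
    ("web-farm", "degraded", 2),   # one of several redundant web servers down
    ("app-server", "up", 3),
    ("database", "up", 5),         # weighted highest: most critical component
]
print(service_health(checkout))
```

With simple worst-child inheritance the service would show degraded; the weighting keeps it green because the redundant web tier carries little weight, which is exactly the distinction the definition draws.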

Position and Adoption Speed Justification: Every company wants to assess the impact of the IT infrastructure and applications on its business processes, to match IT to business needs. However, only 10% of large companies have developed their IT operational processes to the point where they're ready to successfully deploy a BSM tool to achieve this. Adoption speed will continue to be slow, but steady, as IT organizations improve their IT management process maturity. BSM is starting to slide toward the Trough of Disillusionment. IT organizations are discovering that BSM tools aren't easy to deploy, because a manual effort is required to identify the IT service relationships and dependencies, or implementation requires that a configuration management database (CMDB) be in place, which is not the case in most companies.

User Advice: Clients should choose BSM tools when they need to present a real-time, business-oriented dashboard display of service status, but only if they already have a mature, service-oriented IT organization. BSM requires that users understand the logical links between IT components and the IT services they enable, as well as have good instrumentation for and monitoring of these components.

Clients should not implement BSM to monitor individual IT infrastructure components or technology domains. At its core, BSM provides the capability to manage technology as a business service, rather than as individual IT silos. Thus, BSM should be used when IT organizations are trying to become more business-aligned in their IT service quality monitoring and reporting.

Business Impact: BSM tools help the IT organization present its business-unit customers with a business-oriented display of how well IT services are performing in support of critical processes. BSM tools identify the IT services affected by IT component problems, helping to prioritize operational tasks and support efforts relative to business impact. By following the visual representation of the dependencies, from IT services to business applications and IT infrastructure components (including servers, storage, networks, middleware and databases), BSM tools can help the IT department determine the root causes of service problems, thus shortening mean time to repair, especially for critical business processes.

Benefit Rating: High

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: ASG; BMC Software; CA Technologies; Compuware; eMite; HP; IBM Tivoli; Interlink Software; Neebula; NetIQ; Quest Software; Tango/04; USU


Recommended Reading: "Business Service Management Versus Application Performance Monitoring: Conflict and Convergence"

"Aligning ECA and BSM to the IT Infrastructure and Operations Maturity Model"

"Toolkit: How to Begin Business Service Management Implementation"

Configuration Auditing

Analysis By: Ronni J. Colville; Mark Nicolett

Definition: Configuration auditing tools provide change detection, configuration assessment (comparing configuration settings with operational or security policies) and the reconciliation of detected changes against approved requests for changes (RFCs), as well as mitigation. Discovered changes can be automatically matched to the approved and documented RFCs that are governed in the IT change management system, or to manually logged changes. Configuration settings are assessed against company-specific policies (for example, the "golden image") or against industry-recognized security configuration assessment templates, which are used for auditing and security hardening (such as those of the U.S. National Institute of Standards and Technology and the Center for Internet Security).

These tools focus on requirements that are specific to servers or PCs, but some can also address network components, applications, databases and virtual infrastructures, including virtual machines (VMs). Some of these tools provide change detection in the form of file integrity monitoring (FIM), which can be used for Payment Card Industry (PCI) compliance, as well as support for other policy templates (such as the U.S. Federal Information Security Management Act [FISMA] or the United States Government Configuration Baseline [USGCB]). Exception reports can be generated, and some tools can automatically return the settings to their desired values, or can block changes based on approvals or specific change windows.
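The detect-then-reconcile workflow the definition describes can be sketched in a few lines (setting names, the baseline and the RFC identifiers below are all hypothetical): detected drift from the golden image is split into changes covered by an approved RFC and unauthorized changes flagged for investigation or rollback.

```python
def detect_changes(baseline, observed):
    """Compare observed settings with the approved baseline ('golden image')."""
    return {k: (baseline.get(k), v)
            for k, v in observed.items() if baseline.get(k) != v}


def reconcile(changes, approved_rfcs):
    """Split detected changes into approved and unauthorized sets."""
    authorized = {k: v for k, v in changes.items() if k in approved_rfcs}
    unauthorized = {k: v for k, v in changes.items() if k not in approved_rfcs}
    return authorized, unauthorized


baseline = {"ssh.PermitRootLogin": "no", "ntp.server": "ntp1", "selinux": "enforcing"}
observed = {"ssh.PermitRootLogin": "yes", "ntp.server": "ntp2", "selinux": "enforcing"}
approved = {"ntp.server": "RFC-1042"}  # only the NTP change went through change management

changes = detect_changes(baseline, observed)
ok, bad = reconcile(changes, approved)
print("unauthorized:", bad)
# unauthorized: {'ssh.PermitRootLogin': ('no', 'yes')}
```

Automatic remediation, where offered, amounts to writing the baseline value back for each unauthorized key; blocking does the same check before the change lands rather than after.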

Position and Adoption Speed Justification: Broad configuration change detection capabilities are needed to guarantee system integrity by ensuring that all unauthorized changes are discovered and potentially remediated. Configuration auditing also has a major external driver (regulatory compliance) and an internal driver (improved availability). Implementation of the technology is gated by the process maturity of the organization. Prerequisites include the ability to define and implement configuration standards. Although a robust, formalized and broadly adopted change management process is desirable, these tools offer significant benefits for tracking configuration change activity even without automating change reconciliation. Without the reconciliation requirement, other tools (e.g., operational configuration tools or security configuration assessment tools) can be considered for configuration auditing, which broadens the vendor landscape.

Configuration auditing continues to be one of the top three drivers for adopting server configuration automation, for three reasons. First, more tools are available with varying levels of capability for configuration auditing (operational-based and security-based). Second, there is a heightened awareness of security vulnerabilities. Third, there continues to be an increase in the number and types of changes being made across an IT infrastructure. Compounding these is the growing number of audits that IT organizations need to be prepared for across a variety of industries. These conditions are compelling IT organizations to implement mechanisms to track changes (to ensure there's no negative impact on availability), and to audit for missing patches or other vulnerabilities.

Configuration auditing tools are most often bought by those in operational system administration roles (e.g., system administrators and system engineers). In some cases, these tools are bought by those responsible for auditing. Security administrators often implement subset functions of configuration auditing (security configuration assessment and/or file integrity monitoring), which are capabilities provided by a variety of security products.

The adoption of configuration auditing tools will continue to accelerate, but point-solution tools will continue to be purchased to address individual auditing and assessment needs. The breadth of platform coverage (e.g., servers, PCs and network devices) and policy support varies greatly among the tools, especially depending on whether they are security-oriented or operations-oriented. Therefore, several tools may end up being purchased throughout an enterprise, depending on the buying center and the specific functional requirements.

User Advice: Develop sound configuration and change management practices in your organization before introducing configuration auditing technology. Greater benefits can be achieved if robust change management processes are also implemented, with the primary goal of becoming proactive (before the change occurs) versus reactive (tracking changes that violate policy, introduce risk or cause system outages). Process development and technology deployment should focus on the systems that are material to the compliance issue being solved; however, broader functional requirements should also be evaluated, because many organizations can benefit from more than one area of focus, and often need to add new functions within 12 months.

Define the specific audit controls that are required before configuration auditing technology is selected, because each configuration auditing tool has a different focus and breadth — for example, security regulation, system hardening, application consistency and operating system consistency. IT system administrators, network administrators or system engineers should evaluate configuration auditing tools to maintain operational configuration standards and provide a reporting mechanism for change activity. Security officers should evaluate the security configuration assessment capabilities of incumbent security technologies to conduct a broad assessment of system hardening and security configuration compliance that is independent of operational configuration auditing tools.

Business Impact: Not all regulations provide a clear definition of what constitutes compliance for IT operations and production support, so businesses must select reasonable and appropriate controls, based on reasonably anticipated risks, and build a case that their controls are correct for their situations. Reducing unauthorized change is part of a good control environment. Define the necessary audit controls before selecting a product, but look broadly across your infrastructure to ensure that the appropriate tool is selected. Although configuration auditing has been tasked individually in each IT domain, as enterprises begin to develop an IT service view, configuration reporting and remediation (as well as broader configuration management capabilities) will ensure reliable and predictable configuration changes and offer policy-based compliance with audit reporting.


Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Mature mainstream

Sample Vendors: BMC Software (BladeLogic); Tripwire; VMware

Recommended Reading: "Server Configuration Baselining and Auditing: Vendor Landscape"

"Market Trends and Dynamics for Server Provisioning and Configuration Management Tools"

"Security Configuration Management Capabilities in Security and Operations Tools"

Advanced Server Energy Monitoring Tools

Analysis By: Rakesh Kumar; John R. Phelps; Jay E. Pultz

Definition: Energy consumption in individual data centers is increasing rapidly, by 8% to 12% per year. The energy is used for powering IT systems (for example, servers, storage and networking equipment) and the facility's components (for example, air-conditioning systems, power distribution units and uninterruptible power supply systems). The increase in energy consumption is driven by users installing more equipment, and by the increasing power requirements of high-density server architectures.

While data center infrastructure management (DCIM) tools monitor and model energy use across the data center, server-based energy management software tools are specifically designed to measure the energy use within server units. They are normally an enhancement to existing server management tools, such as HP Systems Insight Manager (HP SIM) or IBM Systems Director. These software tools are critical to gaining accurate and real-time measurements of the amount of energy a particular server is using. This information can then be fed into a reporting tool or into a broader DCIM toolset. The information will also be an important trigger for the real-time changes that will drive real-time infrastructure. For example, a change in energy consumption may drive a process to move an application from one server to another.

Position and Adoption Speed Justification: Server vendors have developed sophisticated internal energy management tools during the past three years. However, the tools are vendor-specific, and are often seen by the vendors as a source of competitive advantage over rival hardware suppliers. In reality, they provide pretty much the same information, and it's the use of that information in broader system management or DCIM tools that generates enhanced user value. For example, using the energy data to provide the metrics for energy-based chargeback is beginning to resonate with users, but requires not just server-based energy management tools, but also the use of chargeback tools.

User Advice: In general, users should start deploying appropriate tools to measure energy consumption in data centers at a granular level. This includes information at the server, rack and overall site levels. Use this information to manage data center capacity, including floor space layout of new hardware, and for managing costs through virtualization and consolidation programs. Users should acquire energy management tools that report power and energy consumption efficiency according to power usage effectiveness (PUE) metrics as a measure of data center efficiency.
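As a concrete illustration, PUE is total facility energy divided by the energy delivered to IT equipment. The following is a hypothetical calculation of our own, not tied to any vendor's tool:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power usage effectiveness: total facility energy (IT load plus
    cooling, power distribution, etc.) divided by the IT load alone.
    1.0 is the theoretical ideal; lower is better."""
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# A site drawing 1,800 kWh in total for 1,000 kWh of IT load has a PUE of 1.8.
```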

Specifically for servers, users need to ensure that all new systems have sophisticated energy management software tools built into the management console. Users should ensure that the functionality and maturity of these tools are part of the selection process. We also advise users to give more credit to tools that provide output in a standard fashion that is easily used by the DCIM products.

Business Impact: Server-based energy management software tools will evolve in functionality to help companies proactively manage energy costs in data centers. They will continue to become instrumental in managing the operational costs of hardware.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: Dell; HP; IBM

Network Configuration and Change Management Tools

Analysis By: Debra Curtis; Will Cappelli; Jonah Kowall

Definition: Network configuration and change management (NCCM) tools focus on discovering and documenting network device configurations; detecting, auditing and alerting on changes; comparing configurations with the policy or "gold standard" for that device; and deploying configuration updates to multivendor network devices.
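The "gold standard" comparison these tools automate amounts to a structured diff between the approved configuration and what is actually running on the device. A minimal sketch, using only the Python standard library (our own illustration, not a vendor implementation):

```python
import difflib

def audit_config(gold: str, running: str, device: str) -> list:
    """Return unified-diff lines where the running configuration departs
    from the gold standard; an empty result means the device complies."""
    return list(difflib.unified_diff(
        gold.splitlines(), running.splitlines(),
        fromfile=device + " (gold)", tofile=device + " (running)",
        lineterm=""))
```

A production NCCM tool layers vendor-specific parsing, policy rules, alerting and automated rollback on top of this basic comparison.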

Position and Adoption Speed Justification: NCCM has primarily been a labor-intensive, manual process that involves remote access (for example, telneting) to individual network devices and typing commands into vendor-specific command-line interfaces that are fraught with possibilities for human error, or creating homegrown scripts to ease retyping requirements. Enterprise network managers rarely considered rigorous configuration and change management, compliance audits or disaster recovery rollback processes when executing network configuration alterations, even though these are the things that often cause network problems. However, corporate audit and compliance initiatives are forcing a shift in requirements.

A new generation of NCCM vendors has created tools that operate in multivendor environments, enable automated configuration management and bring more-rigorous adherence to the change management process, as well as provide compliance audit capabilities. The market has progressed to the point that many of these startups have been acquired, and new vendors have entered the market using various angles to differentiate themselves, such as appliance-based products, cloud-based alternatives, integration with security, out-of-band device management and free entry-level products to form a basis for upselling.


Nonetheless, NCCM tools are nearing the Trough of Disillusionment, but not because of the tools themselves, which work well and can deliver strong benefits to a network management team. The network configuration management discipline is held back by network managers who are often reluctant to change their standard operating procedures. Network configuration management is often practiced by router gurus who are the only ones familiar with the arcane command-line interfaces for their various network devices, and who may believe that mastery of those arcane commands provides job security. It takes a top-down effort from senior IT management and a change in personnel performance review metrics to convince network managers of the business importance of documented network device configuration policies, rigorous change management procedures and tested disaster recovery capabilities.

User Advice: Replace manual processes with automated NCCM tools to monitor and control network device configurations, thus improving staff efficiency, reducing risk and enabling the enforcement of compliance policies. Prior to investing in tools, establish standard network device configuration policies to reduce complexity and enable more-effective automated change. NCCM tends to be a discipline unto itself; however, in the future, it must increasingly be considered part of the configuration and change management processes for an end-to-end IT service, and viewed as an enabler for the real-time infrastructure (RTI). This will require participation in the strategic, companywide change management process (usually implemented through IT service desk tools) and integration with configuration management tools for other technologies, such as servers and storage. In addition, network managers need to gain trust in automated tools before they let any product go off and perform a corrective action without human oversight. With the cost minimization and service quality maximization promised by new, dynamically virtualized, cloud-based RTI, automation is becoming a prerequisite, because humans can no longer keep up with problems and changes manually.

Business Impact: These tools provide an automated way to maintain network device configurations, offering an opportunity to lower costs, reduce human error and improve compliance with configuration policies.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: AlterPoint; BMC Software-Emprisa Networks; EMC-Voyence; HP-Opsware; IBM-Intelliden; Infoblox-Netcordia; ManageEngine; SolarWinds

Recommended Reading: "MarketScope for Network Configuration and Change Management"

Server Provisioning and Configuration Management

Analysis By: Ronni J. Colville; Donna Scott

Definition: Server provisioning and configuration management is a set of tools focused on managing the configuration life cycle of physical and virtual server environments. Although these tools manage the configuration changes to the virtual server software stack, managing virtual servers adds new dynamics that have not yet been adequately addressed by larger suppliers (for example, physical to virtual [P2V], clones, templates, security configuration and system hardening), but for which point solutions have emerged. Some suppliers offer functionality for the entire life cycle across physical and virtual servers, others offer specific point solutions in one or two areas, and still others focus solely on virtual servers (with no functionality to manage the configuration of physical servers). Moreover, this is foundational technology for public and private cloud computing, as it enables server and application provisioning and maintenance. The main categories for managing servers (physical and virtual) are:

Server provisioning: Provisioning has historically focused on bare-metal installation of a new physical server, as well as on deploying new applications to a server. This is usually performed with imaging, scripting response files for unattended installations or by leveraging the platform vendor's utilities (for example, Solaris's Jumpstart or Linux's KickStart). With virtual servers, you start with provisioning the hypervisor (which, in many respects, is like bare-metal provisioning of an OS). Once the physical server is installed with a hypervisor, virtual machines (VMs) need to be provisioned. Provisioning VMs is done in several ways, including using methods similar to those for the physical server software stack, but there are also new methods (because of the platform capabilities of the hypervisor). For example, you may choose to provision a virtual server from a physical server (P2V migration), VMs can be created from templates or from hardware- and virtualization-independent images, or VMs can be created "empty" and then built layer by layer through unattended installations. Provisioning the new guest can be done either from the hypervisor management console (for example, vCenter), or through an existing server provisioning and configuration management tool (for example, from BMC Software or HP) through APIs to the hypervisor management console.

Application provisioning and configuration management (including patch management): This includes a broad set of multiplatform functionality to discover and provision (that is, package, deploy and install) OSs and application software; these tools also can make ongoing updates to OSs or applications (for example, patches, new versions and new functionality), or they can update configuration settings. This functionality applies to both physical and virtual servers; it often requires a preinstalled agent for continued updates. Moreover, virtual servers have an additional nuance for patch management — specifically, the need for patching offline VMs (in addition to online VMs). Application provisioning and configuration management could be augmented by application release automation (to deploy application artifacts), which is represented by a separate technology profile.

Inventory/discovery, configuration modeling, audit and compliance: This enables the discovery of software, hardware and virtual servers; some tools can discover dependency relationships across servers and applications. Using modeling of application and OS configuration settings (that is, the desired state, or gold standard), these tools can report on, and may be able to remediate, variations by modifying the actual state back to whatever the model requires, or the desired state for applications, as well as security configuration settings, such as those from the U.S. National Institute of Standards and Technology (NIST), the Center for Internet Security (CIS) and the U.S. National Security Agency (NSA). For virtual servers, system-hardening guidelines for the hypervisor would also apply. Moreover, additional requirements for virtual servers include dependencies on the creation or lineage of the VMs, as well as the relationships between the VM and the physical machines on which it runs (which vary due to mobility).
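The desired-state audit-and-remediate loop these tools implement can be reduced to a simple sketch. This is illustrative Python of our own; the setting names merely echo common hardening guidance and are not drawn from any specific NIST or CIS benchmark:

```python
DESIRED = {                      # the "gold standard" model (hypothetical)
    "PermitRootLogin": "no",
    "PasswordAuthentication": "no",
}

def audit(actual: dict) -> dict:
    """Report settings whose actual state deviates from the model."""
    return {k: actual.get(k) for k, v in DESIRED.items() if actual.get(k) != v}

def remediate(actual: dict) -> dict:
    """Modify the actual state back to whatever the model requires."""
    actual.update({k: DESIRED[k] for k in audit(actual)})
    return actual
```

Real products apply the same pattern across hundreds of settings per platform, with reporting and approval workflows in between audit and remediation.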


Position and Adoption Speed Justification: Server provisioning and configuration management tools continue to mature in depth of function, as well as in integration with adjacent technologies (such as change management, dependency mapping and run book automation). However, these tools have been slow to add deep functionality for virtual server configuration management, and new vendors that focus only on virtual servers have emerged in this category. Although the tools are progressing, the configuration policies, organizational structures and processes inside the typical enterprise are causing implementations to move more slowly. There has been an uptick in adoption of minisuites (not the entire life cycle) by both midsize and large enterprises to solve specific problems (e.g., multiplatform provisioning and compliance-driven audits, including improving patch management). Therefore, adoption is improving, but getting real value from these tools requires a level of standardization in the software stack. Many organizations don't have this level of standardization, and build servers on a custom basis. Moreover, even if organizations begin to standardize, for example, in the OS area, they often have different application, middleware and database management system groups that do not employ the same level of rigor in standards. Fortunately, some IT organizations are observing how public cloud providers' level of standardization enables rapid provisioning, and are seeking to internalize some of these attributes. In the meantime, however, server provisioning and configuration management tools are nearing the Trough of Disillusionment — not so much due to the tools, but due to IT organizations' inability to standardize and use the tools broadly across the groups supporting the entire software stack.

User Advice: With an increase in the frequency and number of changes to servers and applications, IT organizations should emphasize the standardization of technologies and processes to improve and increase availability, as well as to succeed in using server provisioning and configuration management tools for physical and virtual servers. Besides providing increased quality, these tools can reduce the overall cost of managing and supporting patching, rapid deployments and VM policy enforcement, as well as provide a mechanism to monitor compliance. Evaluation criteria should include technologies that provide active capability (installation and deployment) and ongoing maintenance, as well as auditing and reporting, and should include the capability to address the unique requirements of virtual servers and VM guests. When standards have emerged, we recommend that organizations implement these tools to automate manual tasks for repeatable, accurate and auditable configuration change control. The tools help organizations gain efficiencies in moving from a monolithic imaging strategy to a dynamic, layered approach to incremental changes. When evaluating products, organizations need to:

■ Evaluate functionality across the life cycle, and not just the particular pain point at hand.

■ Consider physical and virtual server provisioning and configuration management requirements together.

■ Conduct rigorous testing to ensure that functionality is consistent across required platforms.

■ Ensure that tools address a variety of compliance requirements.

Business Impact: Server provisioning and configuration management tools help IT operations automate many server provisioning tasks, thereby lowering the cost of IT operations, increasing application availability and increasing the speed of modifications to software and servers. They also provide a mechanism for enforcing security and operational policy compliance.


Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: BMC Software (BladeLogic); CA Technologies; HP (Opsware); IBM; ManageIQ; Microsoft; Novell; Tripwire; VMware

Recommended Reading: "Server Provisioning Automation: Vendor Landscape"

"Provisioning and Configuration Management for Private Cloud Computing and Real-time Infrastructure"

"Server Configuration Baselining and Auditing: Vendor Landscape"

"Market Trends and Dynamics for Server Provisioning and Configuration Management Tools"

ITIL

Analysis By: George Spafford; Simon Mingay; Tapati Bandopadhyay

Definition: ITIL is an IT service management framework, developed under the auspices of the U.K.'s Office of Government Commerce (OGC), that provides process guidance on the full life cycle of defining, developing, managing, delivering and improving IT services. It is structured into five main books: Service Strategy, Service Design, Service Transition, Service Operation and Continual Service Improvement. ITIL does not provide specific advice on how to implement or measure the success of the implementation; rather, that is something that an organization should adapt to its specific needs.

Position and Adoption Speed Justification: ITIL has been evolving for more than 20 years. It is well-established as the de facto standard in service management, and is embedded in the formal service management standard of ISO 20000. The current release, v.3, was introduced in 2007. Based on our client discussions and conference attendee polls, uptake is slowly growing.

Organizations beginning service improvements are starting with v.3, and many groups that were using the previous version, v.2, are in various phases of transition. As a result, Gartner has consolidated its historical v.2 and v.3 Hype Cycle entries into a single entry for ITIL overall.

The current version of ITIL covers the entire IT service life cycle more comprehensively. The ITIL life cycle begins with the development of strategies relating to the IT services that are needed to enable the business. ITIL then introduces processes concerned with the design of those IT services, their transition into production and ongoing operational support, and then continual service improvement. In general, Service Transition and Service Operation are the most commonly used books. ITIL v.3 has also incorporated more-proactive processes, like event management and knowledge management, in addition to reactive ones, such as the service desk function and the incident management process.


Despite some claims to the contrary, ITIL has a role to play in operational process design, regardless of how it is sourced, even for cloud computing. All solutions require a blending of people, processes and technologies. The question isn't whether processes are relevant; rather, there must be an understanding of what is necessary to properly underpin the services that IT is offering to the business. Based on these objectives, the relevant processes should be identified, designed and transitioned into production, and then subject to continual improvement. ITIL will continue to serve as a source of process design guidance for process engineers to draw from.

Overall, we continue to see a tremendous span of adoption and maturity levels. Some organizations are just embarking on their journey, for a variety of reasons, whereas others are well on their way and pursuing continual improvement, integrating other process improvement frameworks, such as Six Sigma and lean manufacturing. In fact, a combination of process guidance from various sources tends to do a better job of addressing requirements than any framework in isolation.

User Advice: ITIL provides guidance on putting IT service management into a strategic context and provides high-level guidance on reference processes. To optimize service improvements, IT organizations must first define objectives, and then pragmatically leverage ITIL during the design of their own unique processes. ITIL has been widely adopted around the world, with extensive supporting services and technologies. Finding staff that has worked in or been formally trained in ITIL is now relatively easy.

An update to the ITIL guidelines will be forthcoming, either in late 2011 or early 2012. We recommend that groups proceed with pragmatic adoption of the current version, and then leverage the new guidance once it is released.

Business Impact: ITIL provides a framework of processes for the strategy, design, transition, operation and continual improvement of IT services. IT organizations desiring more effective and efficient outcomes should immediately evaluate this framework for applicability. Most IT organizations need to start or continue the transition from their traditional technology and asset focus to a focus on services and service outcomes. IT service management is a critical discipline in achieving that change, and ITIL provides useful reference guidance for IT management to draw from.

Benefit Rating: Transformational

Market Penetration: 20% to 50% of target audience

Maturity: Adolescent

Recommended Reading: "How to Leverage ITIL for Process Improvement"

"Top Six Foundational Steps for Overcoming Resistance to ITIL Process Improvement"

"Don't Just Implement CMMI and ITIL: Improve Services"

"Evolving Roles in the IT Organization: The IT Product Manager"


Hosted Virtual Desktops

Analysis By: Mark A. Margevicius; Ronni J. Colville; Terrence Cosgrove

Definition: A hosted virtual desktop (HVD) is a full, thick-client user environment, which is run as a virtual machine (VM) on a server and accessed remotely. HVD implementations comprise server virtualization software to host desktop software (as a server workload), brokering/session management software to connect users to their desktop environment, and tools for managing the provisioning and maintenance (e.g., updates and patches) of the virtual desktop software stack.

Position and Adoption Speed Justification: An HVD involves the use of server virtualization to support the disaggregation of a thick-client desktop stack that can be accessed remotely by its user. By combining server virtualization software with a brokering/session manager that connects users to their desktop instances (that is, the operating system, applications and data), enterprises can centralize and secure user data and applications, and manage personalized desktop instances centrally. Because only the presentation layer is sent to the accessing device, a thin-client terminal can be used. For most early adopters, the appeal of HVDs has been the ability to "thin" the accessing device without significant re-engineering at the application level (as is usually required for server-based computing).

While customers implementing HVDs cite many reasons for deployments, three important factors have contributed to the increased focus on HVDs: the desire to implement new client computing capabilities in conjunction with Windows 7 migrations, the desire for device choice (in particular, iPad use), and the uptick in adoption of virtualization in data centers, where there is now more capacity for virtualized systems and greater experience in the required skills. Additionally, during the past few years, adoption of virtual infrastructures in enterprise data centers has increased (up from 10% to 20% to about 40% to 70%). With this increase comes both a level of maturity and an understanding of how to better utilize the technology. This awareness helps with implementations of HVD, where desktop engineers and data center administrators come together for a combined effort.

Early adoption was hindered by several factors, a main one being licensing compliance issues for the Windows client operating system, but that has been resolved through Microsoft's Windows Virtual Desktop Access (VDA) licensing offerings. Even with Microsoft's reduced license costs for the Windows OS (offered in mid-2010, by adding it to Software Assurance), enabling an HVD image to be accessed from a primary and a secondary device for a single license fee, other technical issues have hindered mainstream adoption. Improvements in brokering software and remote-access protocols will continue to occur through 2011, extending the range of desktop user scenarios that HVDs can address; yet, adoption will remain limited to a small percentage of the overall desktop installed base.

Since late 2007, HVD deployments have grown steadily, reaching around 6 million at the end of 2010. Because of the constraints previously discussed, broad applicability of HVDs has been limited to specific scenarios, primarily structured-task workers in call centers, and kiosks, trading floors and secure remote access; about 50 million endpoints is still the current target population of the total 700 million desktops. Through the second half of 2011 and into 2012, we expect more-general deployments to begin. Inhibitors to general adoption include the cost of the data center infrastructure that is required to host the desktop images (servers and storage, in particular) and network constraints. Even with the increased adoption of virtual infrastructure, cost-justifying HVD implementations remains a challenge, because of HVD cost comparisons to those of PCs. Additionally, availability of the skills necessary to manage virtual desktops is an ongoing challenge. Furthermore, deploying HVDs to mobile/offline users remains a challenge, despite the promises of offline VMs and advanced synchronization technologies.

Through 2011, broader manageability of HVD VMs will improve, as techniques to reduce HVD storage volumes lead to new mechanisms for provisioning and managing HVD images by segmenting them into more-isolated components (including operating systems, applications, persistent personalization and data). These subsequent manageability improvements will extend the viability of HVD deployments beyond the structured-task worker community — first to desk-based knowledge workers, then to new use cases, such as improved provisioning and deprovisioning, contractors, and offshore developers.

HVD marketing has promised to deliver diminishing marginal per-user costs, due to the high level of standardization and automation required for successful implementation; however, this is currently only achievable for persistent users where images remain intact — a small use case of the overall user population. As other virtualization technologies mature (e.g., brokers and persistent personalization), this restraint will be reduced. This creates a business case for organizations that adopt HVDs to expand their deployments as soon as the technology permits more users to be viably addressed. Enterprises that adopt HVDs aggressively will see later adopters achieve superior results for lower costs, but will also need to migrate to new broker and complementary management software as products mature and standards emerge. This phenomenon is set to further push HVDs into the Trough of Disillusionment in late 2011.

User Advice: Unless your organization has an urgent requirement to deploy HVDs immediately for securing your environment or centralizing data management, wait until late 2011 before initiating deployments for broader (mainstream) desktop user scenarios. Through 2011, all organizations should carefully assess the user types for which this technology is best suited, with broader deployments happening through 2012. Clients that make strategic HVD investments now will gradually build institutional knowledge. These investments will allow them to refine technical architecture and organizational processes, and to grow internal IT staff expertise before IT is expected to support the technology on a larger scale through 2015. You will need to balance the benefits of centralized management with the additional overhead of the infrastructure and resource costs. Customers should recognize that HVDs may resolve some management issues, but they will not become panaceas for unmanaged desktops. In most cases, promised reductions in total cost of ownership will not be significant and will require initial capital expenditures to achieve. The best-case scenario for HVDs continues to be for securing and centralizing data management or for structured-task users.

Organizations must optimize desktop processes, IT staff responsibilities and best practices to fit HVDs, just as organizations did with traditional PCs. Leverage desktop management processes for lessons learned. The range of users and applications that can be viably addressed through HVDs will grow steadily through 2011. Although the user population is narrow, it will eventually include mobile/offline users as well. Organizations that deploy HVDs should plan for growing viability across their user populations, but they should be wary of rolling out deployments too quickly. Diligence should be employed in testing to ensure a good fit of HVD capabilities with management infrastructure and processes, and integration with newer management techniques (such as application virtualization and software streaming). Visibility into future product road maps from suppliers is essential.

Business Impact: HVDs provide mechanisms for centralizing a thick-client desktop PC without re-engineering each application for centralized execution. This appeals to enterprises on the basis of manageability and data security.

Benefit Rating: High

Market Penetration: 1% to 5% of target audience

Maturity: Adolescent

Sample Vendors: Citrix Systems; NEC; Parallels; Quest Software; Red Hat; VMware

PC Application Streaming

Analysis By: Terrence Cosgrove

Definition: Application streaming is a PC application delivery technology that allows users to execute applications as they are delivered. The tools can deliver a shortcut that contains enough of the application's resources to get the user started. As the user requests new functionality, the necessary files are delivered in the background and loaded into memory. Users can also access the application at a remote location, such as a Web portal or a network share. There are several delivery options:

■ No cache — The full application is streamed to the target PC each time it's launched by the user; nothing is cached for subsequent use.

■ Partial cache — Only the application components called by the user are streamed to the target PC. Unused functions and menu components are not sent. This option minimizes network use, but only previously requested functionality is available for offline use.

■ Complete cache — The full application is streamed to the target PC when first requested by the user. The code required to start the application, as well as application settings, is cached locally and remains available the next time the application is launched. This option is optimal for offline use.
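The three cache options above differ only in what is kept locally between launches. A minimal sketch (the `CacheMode` and `StreamedApp` names are invented for illustration; real products operate on file blocks and registry state, not Python sets):

```python
from enum import Enum

class CacheMode(Enum):
    NO_CACHE = "no_cache"        # stream everything on every launch
    PARTIAL_CACHE = "partial"    # keep only blocks the user has requested
    COMPLETE_CACHE = "complete"  # keep the full application after first launch

class StreamedApp:
    """Toy model of the three delivery options (illustrative only)."""

    def __init__(self, blocks, mode):
        self.blocks = set(blocks)  # all blocks that make up the application
        self.mode = mode
        self.cache = set()         # blocks held locally on the target PC

    def launch(self, requested):
        """Return the set of blocks that must cross the network for this launch."""
        requested = set(requested)
        if self.mode is CacheMode.NO_CACHE:
            return set(self.blocks)               # full stream every time, nothing kept
        if self.mode is CacheMode.COMPLETE_CACHE:
            to_stream = self.blocks - self.cache  # full app on first launch only
            self.cache |= self.blocks
            return to_stream
        # PARTIAL_CACHE: stream only the blocks the user actually calls
        to_stream = requested - self.cache
        self.cache |= requested
        return to_stream

    def offline_available(self, requested):
        """Offline use works only for functionality already in the local cache."""
        return set(requested) <= self.cache
```

The sketch makes the trade-off concrete: partial caching minimizes network traffic, but `offline_available` holds only for features the user has already exercised, which is why the complete-cache option is described above as optimal for offline use.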

Typically, during user-initiated events (e.g., logon or application launch), the PC will check for IT-administered application updates. If the application is cached, the user will only pull down the delta difference of the application. If the application is not cached, the user will simply access the newest version of the application. The application streaming delivery model is usually combined with application virtualization, and most products in the market combine both capabilities.


Position and Adoption Speed Justification: Application streaming has historically been a niche application delivery technology. Organizations that virtualized applications have typically pushed them down to users as full objects, rather than stream them. The reasons for this include:

■ Organizations have had concerns about the impact of application streaming on network and application performance, and application availability for offline use.

■ Application streaming has had challenges with scalability.

The most obvious benefit suggested by streaming is the ability to allow applications to be used immediately, rather than requiring users to wait for the application to be fully delivered. While this is useful, it usually doesn't offset the challenges of the technology.

However, several factors are changing this situation. First, products are getting better, improving scale, minimizing network impact and improving application performance. Second, organizations are increasingly using application streaming as a mechanism to better manage desktop application licenses and reduce the amount of labor involved in deploying applications. Finally, hosted virtual desktop projects are leading organizations to look at application streaming as one of the technologies to "layer" applications on top of the base image and core applications.

User Advice: Consider streaming for any business application, especially one that must be updated frequently and/or requires local execution on the target device.

Avoid network and performance issues by implementing some or all of the following measures: local application caching, maintenance windows, configuring users to pull applications from local servers and avoiding streaming applications that are typically in the base image.

Start using application streaming with applications that offer the most license savings potential.

Business Impact: Users can access the same application through multiple PCs. As long as the application does not remain resident on any PC (that is, caches are flushed), one user can access the same application from multiple PCs while paying for only one application license (depending on the licensing terms of the vendor). However, application license management must be done carefully, because software vendors will have different rules regarding usage of their products.

Applications can launch and be used faster (incrementally) than with traditional software distribution architectures, while the rest of the application functions are delivered as needed. Application streaming tools typically do not work independently of PC configuration tools.

Application streaming provides a method to quickly remove applications on the date of expiration (forced license metering).

Application streaming can simplify application updates for intermittently connected users.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent


Sample Vendors: Citrix Systems; Microsoft; Symantec-AppStream

Recommended Reading: "Application Streaming Gains Traction in 2011"

IT Change Management Tools

Analysis By: Kris Brittain; Patricia Adams

Definition: IT change management (ITCM) tool functionality governs documentation, review, approval, coordination, scheduling, monitoring and reporting of requests for change (RFCs). The basic functional requirements begin in the area of case documentation, including the industry-standard assignment capabilities of classification and categorization (such as risk and priority), with advanced functionality to manage "preapproved" or standard RFC-type lists. The tool must include a solid workflow engine to manage embedded workflows (such as standard RFC life cycles), as well as provide escalation and notification capabilities, which can be executed manually or automated via business rules. RFC workflows are presented graphically, and are capable of managing assessment and segmented approval, with the ability to adjust automatically, based on alterations and multitask change records (see "The Functional Basics of an IT Change Management Tool").
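The workflow engine described above can be reduced to a small state machine: an RFC moves through fixed stages, and "preapproved" (standard) changes skip the assessment and approval stages. A hedged sketch — the stage names and the `RFC` class are illustrative, not drawn from any vendor's product:

```python
from dataclasses import dataclass, field

# Illustrative linear RFC life cycle; real tools support branching,
# segmented approval and business-rule-driven escalation on top of this.
STAGES = ["logged", "assessed", "approved", "scheduled", "implemented", "reviewed"]

@dataclass
class RFC:
    summary: str
    risk: str = "low"       # classification (risk/priority) drives routing
    standard: bool = False  # preapproved RFC types bypass CAB approval
    stage: str = "logged"
    history: list = field(default_factory=list)

    def advance(self):
        """Move the RFC to its next stage, recording the transition."""
        if self.stage == STAGES[-1]:
            raise ValueError("RFC already closed")
        nxt = STAGES[STAGES.index(self.stage) + 1]
        # Standard (preapproved) changes jump straight to scheduling.
        if self.standard and nxt in ("assessed", "approved"):
            nxt = "scheduled"
        self.history.append((self.stage, nxt))
        self.stage = nxt
```

For example, an `RFC(..., standard=True)` goes from "logged" directly to "scheduled" on its first `advance()`, which is the throughput gain the text attributes to preapproved change activity.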

To strengthen the ability to handle the large volume of RFC activity, ITCM tools enable multitasked assignment, risk/impact assessment and multiple views of the change schedule, as well as the calendar (including the ability to manage maintenance and freeze windows). Critical integrations with configuration, release and configuration management database (CMDB) technologies are required for change management tool success (see "Aligning Change Process and Tools to Configuration and Release"). For example, the categorization of a configuration item in the case log in ITCM tools becomes essential to identify near-real-time risk, impact and collision analysis capabilities through integration with a CMDB.

Without configuration data, vendor tools will leverage survey mechanisms that can provide a risk context (see "Improve Security Risk Assessments in Change Management With Risk Questionnaires"). Automation and acceleration of change throughput has also added focus to "preapproved" (aka standard) change activity. More tools offer integration among ITCM tools, server configuration, run book automation and virtualization management tools to automate RFC governance and deployment.

Integration with release management tools has exploded in the industry, because the quality and efficiency of the ITCM workflow requires the coordination of RFC grouping and hand-offs to release execution. This integration and functional coordination not only exists with emerging release governance tools, but is also linked with application development change to improve change throughput. ITCM tools can be leveraged to address audit demands, such as the Sarbanes-Oxley Act, by integrating with configuration audit tools. Most ITCM tools are a module in IT service desk suites, offering integration with incident and problem management. ITCM tools must provide metric analysis to deliver management reports covering SLAs, critical success factors (CSFs) and key performance indicators.


Position and Adoption Speed Justification: Nearly 80% of enterprise-scale companies use ITCM tools to govern ITCM process policies. Adoption barriers have predominantly been rigid technical-silo organizational structures, tribal process knowledge and siloed/departmental competing tool strategies. Years of IT technical silo or department-specific processes, such as application development and server management configuration and release processes, become confused with change execution procedures. Another early driver has been the adoption of industry standards — such as ITIL v.3, Capability Maturity Model Integration (CMMI) and Control Objectives for Information and Related Technology (COBIT) 4.0 — which are accelerating the adoption of ITCM tools.

During the past year, organizations have begun to view adoption as a way to address poor service portfolio knowledge and the inability to build business relevance. ITIL process re-engineering may have put the change process in the first phase of process redevelopment; however, with heightened attention on improving continuous development, new methodologies are becoming the standard in application development (AD) that place more emphasis on the mission-critical role of change processes. This translates into change volume activity growth, compounded by more-aggressive delivery timelines, which affects change release windows. With that said, release process adoption is now happening as a later phase of ITIL re-engineering projects, causing IT organizations to reshape their early-phase investments in change and configuration management processes.

Why release would influence change so much is easy to understand. Change workflows navigate changes to a common macro work stage called "schedule and implement" change. Although the change workflow helps technical and change advisory board (CAB) groups review changes, it does not orchestrate as well the subtask activities required to develop the change. Release management workflows are the critical hand-offs from these primary change macro stages, and they feed back change details for the final postimplementation review of changes. Without this last stage in the change process, the system lacks a formal and independent evaluation of the service releases of change. Evaluation checks are essential to determine actual performance and the outcomes of these changes. Lacking this knowledge, management is unable to provide the fundamental analysis of cost, value, efficiency and quality of change.

User Advice: The ITCM process and tools should be the sole source for governing and managing the RFC life cycle. IT organizations looking for closed-loop and "end to end" change management will require that the change tool be integrated with the configuration and release management tools used to build, test, package and push the change into the production environment, including server provisioning and configuration management, and application life cycle management. The vendor market has often confused clients by blending the change and release "operational" governance workflow into a common module. Unfortunately, this may not provide the optimum tool solution, because these tend to be underdeveloped offerings that lead clients to treat change and release, operationally, as one giant workflow. This is the biggest problem the industry faces.

ITCM tools and the IT service desk suites lack process-modeling capabilities similar to business process management tools. There are too many places where constraints can materialize when individuals or process committees "invent" their next-generation processes. Change and release processes need to be managed and refined with some separation of effort. No one should silo the efforts without collaboration, because integration is critical. An appropriate balance is required to break these down into digestible and optimized process "bites," then connect them to produce end-to-end change management.

Other motivations, such as addressing compliance demands (e.g., Sarbanes-Oxley Act or COBIT), can be met by ITCM tools through control procedural documentation and integration with configuration auditing tools to produce a complete view of compliance and noncompliance reporting. Tool adoption will require new responsibilities in the IT organization, such as adding IT change manager and change coordinator roles. In addition, growing service complexity and compliance requirements (i.e., the demand to adhere to governmental and industry regulations) will influence ITCM tool implementation and depth of integration (such as configuration auditing to support compliance reporting). For the tool to be successful, organizational and cultural obstacles need to be addressed.

Business Impact: A service-oriented IT organization needs to develop a business context for investment in ITCM in which the IT and business organizations commit to strategy goals and CSFs, aligning ITCM to service portfolio demands. From a fundamental process perspective, ITCM implemented across all IT departments will deliver discernible benefits in service quality, IT agility, cost reduction and risk management.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Early mainstream

Sample Vendors: BMC Software (Remedy); CA Technologies; HP; IBM Maximo; SAP; Service-now.com

Recommended Reading: "The Functional Basics of an IT Change Management Tool"

"Aligning Change Process and Tools to Configuration and Release"

"Improve Security Risk Assessments in Change Management With Risk Questionnaires"

"Toolkit: IT Change Management Policy Guide, 2010"

IT Asset Management Tools

Analysis By: Patricia Adams

Definition: The IT asset management (ITAM) process entails capturing and integrating inventory, financial and contractual data to manage the IT asset throughout its life cycle. ITAM encompasses the financial management (e.g., asset costs, depreciation, budgeting, forecasting); contract terms and conditions; life cycles; vendor service levels; asset maintenance; ownership; and entitlements associated with IT inventory components, such as software, PCs, network devices, servers, mainframes, storage, mobile devices and telecom assets, such as voice over IP (VoIP) phones. ITAM depends heavily on robust processes, with tools being used to automate manual processes.


Capturing and integrating autodiscovery/inventory, financial and contractual data into a central repository for all asset classes supports and enables the functions necessary to effectively manage and optimize vendors and a software and hardware asset portfolio from requisition through retirement, thereby monitoring the asset's performance throughout its day-to-day management life cycle.
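The central-repository idea above can be sketched as a simple join of the three data feeds on an asset tag. All field names, the sample records and the `build_repository` helper are hypothetical:

```python
# Illustrative sample feeds, keyed by asset tag (invented data).
inventory = {"A-100": {"type": "laptop", "model": "X1"}}
financial = {"A-100": {"cost": 1400, "depreciation_years": 3}}
contracts = {"A-100": {"warranty_ends": "2026-06-30", "vendor": "ExampleCo"}}

def build_repository(inventory, financial, contracts):
    """Merge per-asset records into one central view; a feed that lacks
    a given asset contributes an empty record, flagging a data gap."""
    repo = {}
    for tag in set(inventory) | set(financial) | set(contracts):
        repo[tag] = {
            "inventory": inventory.get(tag, {}),
            "financial": financial.get(tag, {}),
            "contract": contracts.get(tag, {}),
        }
    return repo
```

The point of the sketch is the union over all three feeds: an asset that appears in inventory but has no financial or contract record still surfaces in the repository, which is exactly the kind of gap an ITAM program needs to see before a software audit.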

Position and Adoption Speed Justification: This process, when integrated with tools, is adopted during business cycles that reflect the degree of emphasis that enterprises put on controlling costs and managing the use of IT assets. With an increased focus on software audits, configuration management databases (CMDBs), business service management (BSM), managing virtualized software, developing IT service catalogs and tracking software license use in the cloud, ITAM initiatives are gaining increased visibility, priority and acceptance in IT operations and procurement. ITAM data is necessary to understand the costs associated with a business service. Without this data, companies don't have accurate cost information on which to base decisions regarding service levels that vary by cost or chargeback. We expect ITAM market penetration, currently at 45%, to continue growing during the next five years.

User Advice: Many companies embark on ITAM initiatives in response to specific problems, such as impending software audits (or shortly after an audit), CMDB implementations, virtual software sprawl or OS migrations. Inventory and software usage tools, which feed into an ITAM repository, can help ensure software license compliance and monitor the use of installed applications. However, without ongoing visibility, companies will continue in a reactive firefighting mode, without achieving a proactive position that diminishes the negative effect of an audit or provides the ability to see how effectively the environment is performing.

ITAM has a strong operational focus, with tight linkages to IT service management, on creating efficiencies and effectively using software and hardware assets. ITAM data can easily identify opportunities, whether it is the accurate purchasing of software licenses, the efficient use of all installed software or ensuring that standards are in place to lower support costs. To gain value from an ITAM program, a combination of people, policies, processes and tools needs to be in place. As process maturity occurs, ITAM will focus more on the financial and spending management related to controlling asset investment, and will provide integration to project and portfolio management, and enterprise architecture. In addition, ITAM processes and best practices are playing a role in how operational assets are being managed. Companies should plan for this evolution in thinking.

Business Impact: All IT operations controls, processes and software tools are designed to achieve at least one of three goals: lower costs for IT operations, improved quality of service and agility, and reduced business risks. As more enterprises implement an IT service management strategy, an understanding of the costs to deliver business IT services will become essential. In addition, ensuring that the external vendor contracts are in place to deliver on the specified service levels the business requires is a necessity. Because ITAM financial data is a feed into a CMDB or content management system, the value of ITAM will be more pronounced in organizations that are undertaking these projects.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience


Maturity: Early mainstream

Sample Vendors: BMC Software; CA Technologies; HP; IBM; Provance Technologies; PS'Soft; Staff&Line; Symantec (Altiris)

Recommended Reading: "Applying IT Asset and Configuration Management Discipline to OT"

"How to Build PC ITAM Life Cycle Processes"

PC Application Virtualization

Analysis By: Terrence Cosgrove; Ronni J. Colville

Definition: PC application virtualization is an application packaging and deployment technology that isolates applications from each other and limits the degree to which they interact with the underlying OS. Application virtualization provides an alternative to traditional packaging and installation technologies.

Position and Adoption Speed Justification: PC application virtualization can reduce the time it takes to deploy applications by reducing packaging complexity and the scope for application conflicts typically experienced when using traditional packaging approaches (e.g., MSI). It's an established technology that's receiving high market exposure, primarily due to enterprise focus on planning for Windows 7 migrations and, more recently, for hosted virtual desktop (HVD) projects. PC application virtualization tools are most often adopted as supplements to traditional PC configuration management solutions as a means of addressing application packaging challenges, and most mainstream PC configuration management vendors have either added this capability via acquisition or partnership (OEM and reseller). In addition, increased interest in virtualization technologies has focused attention on this technology and solution sets.

Much of the current interest in PC application virtualization is driven by the promise that this technology will alleviate some of the regression-testing overhead in application deployments and Windows migrations (although it generally cannot be relied on to remediate application compatibility issues with Windows 7). Other benefits include enabling the efficient and rapid deployment of applications that could not be deployed previously, and improving organizations' ability to remove administrator access by enabling them to deploy a greater percentage of the applications needed by users through installation automation.

What continues to impede widespread adoption is that application virtualization cannot be used for 100% of applications, and may never work with many legacy applications, especially those developed in-house. More recently, HVD projects have led to increased interest in application virtualization. Organizations are increasingly using application virtualization to layer on role- or user-specific applications. Gartner believes that application virtualization is critical to making HVD more flexible and suitable for broad user scenarios.

The market continues to be dominated by two main vendors: Microsoft (App-V) and VMware (ThinApp). Citrix Systems also has this technology, which is bundled in XenApp. Symantec also has this technology, and most often sells it with Altiris Client Management Suite, its PC configuration life cycle management (PCCLM) tool. There are also several smaller vendors, such as InstallFree, Endeavors Technologies and Spoon (formerly Xenocode). Because of Microsoft's go-to-market approach (selling at low cost to organizations with software assurance on Windows via Microsoft Desktop Optimization Pack) and VMware's visibility in virtual infrastructures, the viability of smaller players will be at risk in the long term.

User Advice: Implement PC application virtualization to reduce packaging complexity, particularly if you have a large number of applications that are not packaged. Analyze how this technology will interface with established and planned PCCLM tools, to avoid driving up the cost of a new application delivery technology and to ensure that virtualized applications are manageable. Test as many applications as you can during the evaluation, but recognize that some applications probably can't be virtualized.

Consider application virtualization tools for:

■ New application deployment needs where there's no legacy packaging

■ Applications that have not already been packaged, when the overhead (cost and time) of current packaging tools is considered too high, or the number of users receiving the application has been deemed too low to justify packaging

■ Applications that have not previously been successfully packaged and deployed using PCCLM tools, because of application conflicts and the need for elevated user permissions

■ Pooled HVD deployments

However, enterprises must also consider the potential support implications. Not all application vendors will support their applications running in a virtualized manner. Interoperability requirements must also be understood; with some application virtualization products, applications that call another application during runtime must be virtualized together or be manually linked.

Business Impact: PC application virtualization can improve manageability for corporate IT. By isolating applications, IT organizations can gain improvements in the delivery of applications, and reduce (perhaps significantly) testing and outages due to application conflicts.

Benefit Rating: High

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Citrix Systems; Dell KACE; Endeavors Technologies; InstallFree; Microsoft; Spoon; Symantec; VMware

Service-Level Reporting Tools

Analysis By: Debra Curtis; Milind Govekar; Kris Brittain


Definition: Service-level reporting tools incorporate and aggregate multiple types of metrics from various management disciplines. At a minimum, they must include service desk metrics along with IT infrastructure availability and performance metrics. End-user response time metrics (including results from application performance monitoring [APM] tools) can enhance service-level reports, and are sometimes used as a "good enough" proxy for end-to-end IT service quality. However, a comprehensive service-level reporting tool should provide a calendaring function to specify service hours, planned service uptime and scheduled maintenance periods for different classes of service, as well as compare measured results with the service-level targets agreed to between the IT operations organization and the business units to determine success or failure. Very few tools in the market today have this capability.

In addition to comparing measured historical results with service-level targets at the end of the reporting period, more-advanced service-level reporting tools will keep a running, up-to-the-minute total that displays real-time service-level results, and predicts when service levels will not be met. This will forewarn IT operations staff of impending trouble. These tools will increasingly have to deal with on-premises applications and infrastructure, as well as cater to off-premises cloud infrastructures and applications.
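The "running, up-to-the-minute total" can be approximated as a downtime-budget calculation. A sketch, assuming an availability-style SLA measured in minutes; the `sla_status` function and its naive linear forecast are illustrative, not drawn from any product:

```python
def sla_status(target_pct, service_minutes, downtime_so_far, minutes_elapsed):
    """Running view of an availability SLA over one reporting period.

    target_pct      -- agreed availability target, e.g. 99.9
    service_minutes -- total in-scope minutes in the period (excluding
                       scheduled maintenance, per the service calendar)
    downtime_so_far -- unplanned downtime minutes accumulated to date
    minutes_elapsed -- in-scope minutes elapsed so far
    """
    budget = service_minutes * (1 - target_pct / 100.0)  # allowed downtime
    # Naive forecast: project the downtime rate observed so far to period end.
    rate = downtime_so_far / minutes_elapsed if minutes_elapsed else 0.0
    projected = rate * service_minutes
    return {
        "budget_minutes": budget,
        "remaining_minutes": budget - downtime_so_far,
        "breached": downtime_so_far > budget,   # target already missed
        "at_risk": projected > budget,          # on track to miss it
    }
```

The `at_risk` flag is the forewarning the text describes: halfway through a 30-day period, 30 minutes of downtime against a 99.9% target has not yet breached the roughly 43-minute budget, but the projected total already exceeds it.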

Position and Adoption Speed Justification: Just about every IT operations management software vendor offers basic reporting tools that it typically describes as a "service-level management" module or software. In general, these tools do not satisfy all the requirements in Gartner's definition of the service-level reporting category. Thus, the industry suffers from product ambiguity and market confusion, causing this category to be positioned in the Trough of Disillusionment. Most IT operations management tools are tactical tools for specific domains (such as IT service desk, network management and server administration) in which production statistics are collected for component- or process-oriented operational-level agreements, rather than true, business-oriented SLAs.

Only the 5% to 15% of IT organizations that have attained the service-aligned level of Gartner's ITScore maturity model for IT infrastructure and operations have the skills and expectations to demand end-to-end IT service management capabilities from their service-level reporting tools. This circumstance slows the adoption speed and lengthens the time to the Plateau of Productivity. Some cloud computing vendors have developed simplistic service displays for their infrastructures and applications, but they're not heterogeneous and do not include on-premises infrastructures and applications.

User Advice: Clients use many different types of tools to piece together their service-level reports. Although service-level reporting tools can be used to track just service desk metrics or IT infrastructure component availability and performance metrics, they are most valuable when used by clients who have defined business-oriented IT services and SLAs with penalties and incentives. Monitoring alone will not solve service-level problems. IT organizations need to focus on changing workplace cultures and behavior, so that employees are measured, motivated and rewarded based on end-to-end IT service quality. Clients should choose SLA metrics wisely, so that this exercise provides action-oriented results, rather than just becoming a reporting exercise.


Business Impact: SLAs help the IT organization demonstrate its value to the business. Once IT and the business have agreed to IT service definitions and, thus, established a common nomenclature, service-level reporting tools are used as the primary communication vehicles to corroborate that IT service quality is in compliance with business customer requirements. Defining business-oriented IT services with associated SLAs, proactively measuring service levels and reporting on compliance can help IT organizations deliver more-consistent, predictable performance and maintain customers' satisfaction with IT services. By tracking service levels and analyzing historical service-level trends, IT organizations can use service-level reporting tools to predict and prevent problems before they affect business users.

Benefit Rating: Moderate

Market Penetration: 5% to 20% of target audience

Maturity: Adolescent

Sample Vendors: BMC Software; CA Technologies; Compuware; Digital Fuel; HP; IBM Tivoli; Interlink Software; NetIQ

Recommended Reading: "The Challenges and Approaches of Establishing IT Infrastructure Monitoring SLAs in IT Operations"

Climbing the Slope

IT Event Correlation and Analysis Tools

Analysis By: Jonah Kowall; Debra Curtis

Definition: IT event correlation and analysis (ECA) tools support the acceptance of events and alarms from IT infrastructure components; consolidate, filter and correlate events; notify the appropriate IT operations personnel of critical events; and automate corrective actions, when possible.
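The consolidate/filter/correlate pipeline in the definition can be sketched in a few lines. The severity floor and the root-cause suppression table below are invented examples; real ECA tools use far richer rule, topology and time-window models:

```python
from collections import defaultdict

SEVERITY_FLOOR = 3  # drop informational noise below this level (illustrative)

# Invented rule: if the root-cause event fires, suppress its known symptoms.
SUPPRESS = {"router1_down": {"server5_unreachable", "app2_timeout"}}

def correlate(events):
    """events: list of (source, message, severity) tuples.
    Returns the consolidated events an operator should be notified about."""
    # 1. Filter: discard low-severity events.
    significant = [e for e in events if e[2] >= SEVERITY_FLOOR]
    # 2. Consolidate: collapse duplicate events, keeping a count.
    counts = defaultdict(int)
    for src, msg, sev in significant:
        counts[(src, msg, sev)] += 1
    # 3. Correlate: suppress symptom events explained by a known root cause.
    seen_messages = {msg for (_, msg, _) in counts}
    suppressed = set()
    for root, symptoms in SUPPRESS.items():
        if root in seen_messages:
            suppressed |= symptoms
    return [
        {"source": src, "message": msg, "severity": sev, "count": n}
        for (src, msg, sev), n in counts.items()
        if msg not in suppressed
    ]
```

With a burst containing one `router1_down` alarm, repeated `server5_unreachable` symptoms and low-severity chatter, only the root-cause event survives, which is the noise reduction that makes these tools useful for root cause analysis.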

Position and Adoption Speed Justification: These tools have widespread use as the general-purpose event console monitoring IT infrastructure components such as servers (physical and virtual), networks and storage, and are becoming critical to processes such as analyzing the root causes of problems.

User Advice: ECA tools are mature, and a great deal of activity in this market has involved acquisitions and consolidation in recent years. In addition to the vendor consolidation, IT operations organizations are often trying to consolidate their monitoring investments into fewer tools with better integration with other management disciplines, such as service desk and configuration management databases (CMDBs). Some ECA innovation (such as predictive analysis) has been taking place at smaller, startup companies, which have then been acquired by larger vendors, enabling them to take advantage of new market opportunities in an otherwise mature market segment.


It has become increasingly important for ECA tools to show their value as it pertains to supporting the business. This requires the products to provide reports that show how the tools reduce outage time and help avoid outages. With this level of information, IT organizations are able to demonstrate ECA tools' investment value and align the value with increased IT operations efficiencies.

When dealing with large vendors, clients need to understand their strategic direction to ensure continued product support and commitment. When dealing with small vendors, it's important to understand their financial stability, vision, business and product investment strategies. Without this understanding, clients risk being hurt by vendor acquisitions or corporate demise, as well as adopting a product that won't support their IT operations initiatives.

Business Impact: ECA tools help lower the cost of IT operations by reducing the time required toisolate a problem across a heterogeneous IT infrastructure.

Benefit Rating: Moderate

Market Penetration: 20% to 50% of target audience

Maturity: Early mainstream

Sample Vendors: Argent Software; BMC Software; CA Technologies; eG Innovations; EMC; GroundWork Open Source; HP; IBM Tivoli; Interlink Software; Microsoft; NetIQ; Quest Software; Tango/04; uptime software; Zenoss

Recommended Reading: "Magic Quadrant for IT Event Correlation and Analysis"

"Event Correlation and Analysis Market Definition and Architecture Description, 2010"

"Aligning ECA and BSM to the IT Infrastructure and Operations Maturity Model"

Network Performance Management Tools

Analysis By: Debra Curtis; Will Cappelli; Jonah Kowall

Definition: Network performance management tools provide performance and availability monitoring solutions for the data communications network (including network devices and network traffic). They collect performance data over time and include features such as baselining, threshold evaluation, network traffic analysis, service-level reporting, trend analysis, historical reporting and, in some cases, interfaces to billing and chargeback systems.

There are two common methods of monitoring network performance:

■ Polling network devices to collect standard Simple Network Management Protocol (SNMP) Management Information Base (MIB) data for performance reporting and trend analysis

■ Using specialized network instrumentation (such as probes, appliances [including virtual appliances] and NetFlow) to analyze the makeup of the network traffic for performance monitoring and troubleshooting
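The polling method above amounts to a simple collection loop that samples device counters on a schedule and accumulates them for later analysis. The sketch below is illustrative only: `fetch_counters` is a hypothetical stand-in for a real SNMP GET (e.g., via a library such as pysnmp), and the device names are invented.

```python
import time

def fetch_counters(device):
    # Hypothetical stand-in for an SNMP GET of IF-MIB interface
    # counters; a real collector would query the device here.
    return {"ifInOctets": 0, "ifOutOctets": 0}

def poll_once(devices, history):
    """Take one timestamped sample per device and append it to the
    per-device history that baselining and trend analysis consume."""
    now = time.time()
    for device in devices:
        history.setdefault(device, []).append((now, fetch_counters(device)))
    return history

history = poll_once(["router-1", "switch-2"], {})
```

A real collector would run this loop on a fixed interval and persist the history for the reporting features described above.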


The goal of collecting and analyzing performance data is to enable the network manager to become more proactive in recognizing trends, predicting capacity problems and preventing minor service degradations from becoming major problems for users of the network.

Position and Adoption Speed Justification: These tools are widely deployed and are useful foridentifying network capacity use trends.

User Advice: NetFlow instrumentation has grown in popularity as an inexpensive data source, with details about the distribution of protocols and the makeup of application traffic on the network. However, NetFlow summarizes statistics and can't analyze packet contents. Broad NetFlow coverage should be balanced with fine-grained packet capture capabilities for critical network segments. Expect new form factors for traffic analysis, such as virtual appliances and microprobes that piggyback on existing hardware and interfaces in the network fabric, providing the depth of a probe at a much lower cost, while approaching the ubiquity of NetFlow.

Clients should look for network performance management products that not only track performance, but also automatically establish a baseline measurement of "normal" behavior for time of day and day of week, dynamically set warning and critical thresholds as standard deviations off the baseline, and notify the network manager only when an exception condition occurs. A simple static threshold based on an industry average or a "rule of thumb" will generate false alarms.
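The baselining behavior described above can be illustrated with a small sketch: build a per-(day-of-week, hour) baseline from historical samples, then flag only values that fall several standard deviations off that baseline. The bucketing scheme and the threshold of three standard deviations are illustrative assumptions, not any product's actual algorithm.

```python
import statistics
from collections import defaultdict

def build_baseline(samples):
    """samples: iterable of ((weekday, hour), value) observations.
    Returns {bucket: (mean, stdev)} describing 'normal' behavior
    for each time of day and day of week."""
    buckets = defaultdict(list)
    for bucket, value in samples:
        buckets[bucket].append(value)
    return {b: (statistics.mean(v), statistics.pstdev(v))
            for b, v in buckets.items()}

def is_exception(baseline, bucket, value, k=3.0):
    # Alert only when the value is more than k standard deviations
    # off the learned baseline -- not against a static threshold.
    mean, stdev = baseline[bucket]
    return stdev > 0 and abs(value - mean) > k * stdev

# Monday 9:00 utilization samples (illustrative data).
baseline = build_baseline(((0, 9), v) for v in (100, 110, 90, 105, 95))
```

With this baseline, a spike to 500 during the Monday 9:00 bucket raises an exception, while 102 (well within normal variation) stays quiet, which is exactly how dynamic thresholds avoid the false alarms a static limit would generate.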

Clients that are looking for the utmost efficiency should link network performance management processes to network configuration management processes so that bandwidth allocation and traffic prioritization settings are automatically updated based on changing business demands and service-level agreements.

Business Impact: These tools help improve network availability and performance, confirm network service quality and justify network investments. Ongoing capacity use analysis enables the reallocation of network resources to higher-priority users or applications without the need for additional capital investment, using various bandwidth allocation, traffic engineering and quality-of-service techniques. Without an understanding of previous network performance, it's impossible to demonstrate improving service levels after changes, additions or investments have been made. Without a baseline measurement for comparison, a network manager can't detect growth trends and be forewarned of expansion requirements.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: AccelOps; AppNeta; CA Technologies; HP; IBM Tivoli; InfoVista; ManageEngine; NetScout Systems; Network Instruments; Opnet Technologies; Riverbed Technology; SevOne; SolarWinds; Visual Network Systems

Recommended Reading: "The Virtual Switch Will Be the Network Manager's Next Headache"

"Manage Your Videoconferencing Before It Manages You"


IT Service Desk Tools

Analysis By: David M. Coyle; Jarod Greene

Definition: IT service desk (also known as IT help desk) tools document, review, escalate, analyze, close and report incidents and problem records. Foundational functionalities include classification, categorization, business rules, workflow, reporting and search engines. These tools manage the life cycles of incidents and problem records from recording to closing. IT service desk tools automate the process of identifying the individual or group responsible for resolution, and of suggesting possible resolution scenarios and escalation, if necessary, until the service and support request is resolved.
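As a rough illustration of the life cycle management described above, the sketch below models an incident record moving from recording to closing through a fixed set of allowed transitions. The state names and transition rules are hypothetical, not taken from any particular tool or from ITIL.

```python
# Allowed transitions for an incident record (illustrative).
ALLOWED = {
    "new": {"assigned"},
    "assigned": {"in_progress", "escalated"},
    "in_progress": {"resolved", "escalated"},
    "escalated": {"in_progress"},
    "resolved": {"closed", "in_progress"},  # reopen if the fix fails
    "closed": set(),
}

class Incident:
    def __init__(self, description):
        self.description = description
        self.state = "new"
        self.history = ["new"]  # audit trail for reporting

    def transition(self, new_state):
        # Business rules reject transitions the workflow doesn't allow.
        if new_state not in ALLOWED[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state
        self.history.append(new_state)

inc = Incident("email outage reported by end user")
for step in ("assigned", "in_progress", "resolved", "closed"):
    inc.transition(step)
```

The recorded history is what the reporting and search features described above would mine for metrics such as time to resolution.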

Typical IT service desk suites will extend incident and problem management to Web self-service, basic service-level agreements, end-user satisfaction survey functionality, basic knowledge management and service request management. IT service desk suites often integrate with inventory, change, configuration management database (CMDB), asset and PC life cycle configuration management modules. In addition, the process components of the IT service desk are considered important functions in the Information Technology Infrastructure Library (ITIL) best-practice process frameworks. IT service desk tools are still purchased, the majority of the time, with the on-premises perpetual model, but the software-as-a-service (SaaS) model is a fast-growing trend for IT service desk solutions (see "SaaS Continues to Grow in the IT Service Desk Market").

Position and Adoption Speed Justification: Market penetration in midsize to large companies exceeds 98%, and most toolset acquisitions are rip-and-replace endeavors occurring roughly every five years. IT service desks are increasingly being sold as part of a larger IT service management (ITSM) suite purchase, which will make switching just IT service desk vendors more difficult in the future. As companies move from fragmented, department-based implementations to more-controlled and centralized IT service support models, the consolidation of tools within companies will continue.

User Advice: The market is saturated with vendors' tools having similar sets of features. It is important to develop a solid business case and selection criteria prior to tool selection (see "Managing Your IT Service Desk Tool Acquisition: Key Questions That Must Be Addressed During the Acquisition Process"). Evaluation criteria include functional features, prices, integration points with various ITSM modules (such as change management), ease of implementation, available out-of-the-box best practices, reporting and ease of configuration.

Business Impact: IT service desk tools, processes and metrics help improve the quality of IT service and support delivered to end users. They increase business satisfaction, lower the cost of end-user support and increase end-user productivity.

Benefit Rating: High

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream


Sample Vendors: Axios Systems; BMC Software; CA Technologies; FrontRange Solutions; Hornbill; HP; IBM; LANDesk; Numara Software; ServiceNow; VMware

Recommended Reading: "SaaS Continues to Grow in the IT Service Desk Market"

"Magic Quadrant for the IT Service Desk"

"The 2010 IT Service Desk Market Landscape"

"Managing Your IT Service Desk Tool Acquisition: Key Questions That Must Be Addressed During the Acquisition Process"

PC Configuration Life Cycle Management

Analysis By: Terrence Cosgrove; Ronni J. Colville

Definition: PC configuration life cycle management (PCCLM) tools manage the configurations of client systems. Specific functionality includes OS deployment, inventory, software distribution, patch management, software usage monitoring and remote control. Desktop support organizations use PCCLM tools to automate system administration and support functions that would otherwise be done manually. The tools are used primarily to manage PCs, but many organizations use them to manage their Windows servers, smartphones, tablets and non-Windows client platforms (e.g., Mac and Linux). Application virtualization is a major functional capability that many organizations are using or evaluating; they are looking to acquire this capability in PCCLM tools, or looking for products to manage virtualized packages from third-party tools (e.g., Microsoft, Citrix Systems and VMware).

Position and Adoption Speed Justification: PCCLM tools are widely adopted, particularly in large enterprises (i.e., more than 5,000 users). The market has started to commoditize, with few differences among products in some of the core functionality, such as inventory, software distribution and OS deployment. Differences among products are increasingly found in the following areas: the ability to manage security configurations, non-Windows client management, scalability, usability and integration with adjacent products (e.g., service desk, IT asset management and endpoint protection products). Recently, vendors have begun to compete by offering appliance-based or software-as-a-service (SaaS)-based options that better address the needs of small or midsize organizations, and those with highly mobile or distributed users.

User Advice: Users will benefit most from PCCLM tools when standardization and policies are in place before automation is introduced. Although these tools can significantly offset staffing resource costs, they require dedicated resources to define resource groups, package applications, test deployments and maintain policies for updating.

Many factors could make certain vendors more appropriate for your environment than others. For example, evaluate:

■ Ease of deployment and usability

■ Alignment of endpoint security and PCCLM


■ Alignment of service desk and PCCLM

■ Geographic focus

■ Capabilities that meet a specific regulatory requirement

Business Impact: Among IT operations management tools, PCCLM tools have one of the mostobvious ROIs — managing the client environment in an automated, one-to-many fashion, ratherthan on a manual, one-to-one basis.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: BMC Software; CA Technologies; Dell KACE; FrontRange Solutions; HP; IBM-BigFix; LANDesk; Matrix42; Microsoft; Novell; Symantec

Recommended Reading: "Magic Quadrant for PC Configuration Life Cycle Management Tools"

"Emerging PC Life Cycle Configuration Management Vendors"

Entering the Plateau

Network Fault-Monitoring Tools

Analysis By: Debra Curtis; Will Cappelli; Jonah Kowall

Definition: Network fault-monitoring tools indicate the up/down status of network components. In some cases, the tools also discover and visualize the topology map of physical relationships and dependencies among network components as a way to display the up/down status in a context that can be easily understood.
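The value of the topology context described above can be suggested with a tiny sketch: given each device's upstream dependency, a monitor can suppress downstream alarms and surface only the likely root cause. The device names and the single-parent topology are illustrative assumptions, not how any particular product models a network.

```python
# Illustrative single-parent topology: device -> upstream device.
UPSTREAM = {
    "core-router": None,
    "dist-switch": "core-router",
    "access-switch": "dist-switch",
}

def root_causes(down, topology):
    """Among devices reported down, keep those whose upstream is
    still up; devices below a failed parent are just symptoms."""
    return {d for d in down if topology.get(d) not in down}

# The distribution switch fails, taking the access switch with it.
causes = root_causes({"dist-switch", "access-switch"}, UPSTREAM)
```

Without the topology map, both devices would appear as independent red icons; with it, the access switch outage is displayed as a consequence of the distribution switch failure.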

Position and Adoption Speed Justification: These tools have been widely deployed, primarily to address the reactive nature of network monitoring in IT operations.

User Advice: Users should leverage network fault-monitoring tools to assess the status of network components, but work toward improving problem resolution capabilities and aligning network management tools with IT service and business goals. Network fault-monitoring tools are frequently used for "blame avoidance," rather than problem resolution, with the goal of proving that the current problem is not the network's fault. Resolving problems, rather than just avoiding blame, should be the goal.

Business Impact: These tools help an IT organization view its network events through a single "pane of glass." This helps improve the availability of the network infrastructure and shorten the response time for noticing and repairing network issues. Network fault-monitoring tools support day-to-day network administration and provide useful features, but they don't tightly align the network with business goals.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream

Sample Vendors: CA Technologies; EMC; Entuity; HP; IBM Tivoli; Ipswitch; ManageEngine; Nagios; SolarWinds

Job-Scheduling Tools

Analysis By: Milind Govekar

Definition: Job-scheduling tools are used to schedule online or offline production jobs, such as customer bill calculations, and the transfer of data between heterogeneous systems on the basis of events and batch processes within packaged applications. A scheduled job usually has a date, time and frequency, as well as other dependencies, inputs and outputs associated with it.
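The dependency handling mentioned above can be sketched with Python's standard-library topological sorter: before dispatching batch work, a scheduler must order jobs so that each runs only after its prerequisites. The job names below are invented for illustration.

```python
from graphlib import TopologicalSorter

# Illustrative nightly batch: job -> set of prerequisite jobs.
JOBS = {
    "extract_usage": set(),
    "calculate_bills": {"extract_usage"},    # e.g., customer billing
    "transfer_to_erp": {"calculate_bills"},  # cross-system data transfer
    "backup": set(),                          # independent of the rest
}

def run_order(jobs):
    """Return an execution order that respects every dependency."""
    return list(TopologicalSorter(jobs).static_order())

order = run_order(JOBS)
```

A production scheduler layers dates, times, frequencies and event triggers on top of this ordering, but the dependency graph is the core that keeps, say, bill calculation from starting before usage extraction finishes.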

Position and Adoption Speed Justification: Job-scheduling tools are widely used, mature technologies. They support Java and .NET application server platforms, in addition to integration technology. Job-scheduling tools help enterprises in their automation requirements across heterogeneous computing environments. They automate critical batch business processes, such as billing, or IT operational processes, including backups, and they provide event-driven automation and batch application integration (for example, integrating CRM processes with ERP processes). These tools are evolving toward handling dynamic, policy-driven workloads; thus, they are moving toward IT workload automation broker tools (also found on this Hype Cycle).

User Advice: Enterprises should plan to use a single job-scheduling tool that is able to schedule jobs in a heterogeneous infrastructure and application environment, to improve the quality of automation and service, and to lower the total cost of ownership of the environment. Furthermore, enterprises should evaluate the tool's event-based capabilities, in addition to traditional date- and time-scheduling capabilities.

Enterprises looking for policy-driven, dynamic workload management capabilities should consider IT workload automation broker tools.

Business Impact: These tools can automate a batch process to improve the availability and reliability of a business process that depends on it.

Benefit Rating: Moderate

Market Penetration: More than 50% of target audience

Maturity: Mature mainstream


Sample Vendors: Advanced Systems Concepts; Argent Software; ASG Software Solutions; BMC Software; CA Technologies; Cisco (Tidal); Flux; IBM Tivoli; MVP Systems Software; Open Systems Management; Orsyp; Redwood Software; Software and Management Associates; SOS-Berlin; Terracotta; UC4 Software; Vinzant Software

Recommended Reading: "Magic Quadrant for Job Scheduling"

"How to Modernize Your Job Scheduling Environment"

"IT Workload Automation Broker: Job Scheduler 2.0"

"Toolkit: Best Practices for Job Scheduling"

Appendixes


Figure 3. Hype Cycle for IT Operations Management, 2010

[Figure: the 2010 Hype Cycle curve plots expectations against time through the Technology Trigger, Peak of Inflated Expectations, Trough of Disillusionment, Slope of Enlightenment and Plateau of Productivity phases, with each technology from the 2010 edition positioned on the curve and keyed to its years to mainstream adoption (less than 2 years, 2 to 5 years, 5 to 10 years, more than 10 years, or obsolete before plateau). As of July 2010.]

Source: Gartner (July 2010)


Hype Cycle Phases, Benefit Ratings and Maturity Levels

Table 1. Hype Cycle Phases

Technology Trigger: A breakthrough, public demonstration, product launch or other event generates significant press and industry interest.

Peak of Inflated Expectations: During this phase of overenthusiasm and unrealistic projections, a flurry of well-publicized activity by technology leaders results in some successes, but more failures, as the technology is pushed to its limits. The only enterprises making money are conference organizers and magazine publishers.

Trough of Disillusionment: Because the technology does not live up to its overinflated expectations, it rapidly becomes unfashionable. Media interest wanes, except for a few cautionary tales.

Slope of Enlightenment: Focused experimentation and solid hard work by an increasingly diverse range of organizations lead to a true understanding of the technology's applicability, risks and benefits. Commercial off-the-shelf methodologies and tools ease the development process.

Plateau of Productivity: The real-world benefits of the technology are demonstrated and accepted. Tools and methodologies are increasingly stable as they enter their second and third generations. Growing numbers of organizations feel comfortable with the reduced level of risk; the rapid growth phase of adoption begins. Approximately 20% of the technology's target audience has adopted or is adopting the technology as it enters this phase.

Years to Mainstream Adoption: The time required for the technology to reach the Plateau of Productivity.

Source: Gartner (July 2011)


Table 2. Benefit Ratings

Transformational: Enables new ways of doing business across industries that will result in major shifts in industry dynamics.

High: Enables new ways of performing horizontal or vertical processes that will result in significantly increased revenue or cost savings for an enterprise.

Moderate: Provides incremental improvements to established processes that will result in increased revenue or cost savings for an enterprise.

Low: Slightly improves processes (for example, improved user experience) that will be difficult to translate into increased revenue or cost savings.

Source: Gartner (July 2011)

Table 3. Maturity Levels

Embryonic
Status: ■ In labs
Products/Vendors: ■ None

Emerging
Status: ■ Commercialization by vendors ■ Pilots and deployments by industry leaders
Products/Vendors: ■ First generation ■ High price ■ Much customization

Adolescent
Status: ■ Maturing technology capabilities and process understanding ■ Uptake beyond early adopters
Products/Vendors: ■ Second generation ■ Less customization

Early mainstream
Status: ■ Proven technology ■ Vendors, technology and adoption rapidly evolving
Products/Vendors: ■ Third generation ■ More out of box ■ Methodologies

Mature mainstream
Status: ■ Robust technology ■ Not much evolution in vendors or technology
Products/Vendors: ■ Several dominant vendors

Legacy
Status: ■ Not appropriate for new developments ■ Cost of migration constrains replacement
Products/Vendors: ■ Maintenance revenue focus

Obsolete
Status: ■ Rarely used
Products/Vendors: ■ Used/resale market only

Source: Gartner (July 2011)


Recommended Reading

Some documents may not be available as part of your current Gartner subscription.

"Understanding Gartner's Hype Cycles, 2011"

This research is part of a set of related research pieces. See Gartner's Hype Cycle Special Report for 2011 for an overview.


Regional Headquarters

Corporate Headquarters
56 Top Gallant Road
Stamford, CT 06902-7700
USA
+1 203 964 0096

Japan Headquarters
Gartner Japan Ltd.
Atago Green Hills MORI Tower 5F
2-5-1 Atago, Minato-ku
Tokyo 105-6205
JAPAN
+81 3 6430 1800

European Headquarters
Tamesis
The Glanty
Egham
Surrey, TW20 9AW
UNITED KINGDOM
+44 1784 431611

Latin America Headquarters
Gartner do Brazil
Av. das Nações Unidas, 12551
9° andar, World Trade Center
04578-903, São Paulo SP
BRAZIL
+55 11 3443 1509

Asia/Pacific Headquarters
Gartner Australasia Pty. Ltd.
Level 9, 141 Walker Street
North Sydney
New South Wales 2060
AUSTRALIA
+61 2 9459 4600

© 2011 Gartner, Inc. and/or its affiliates. All rights reserved. Gartner is a registered trademark of Gartner, Inc. or its affiliates. This publication may not be reproduced or distributed in any form without Gartner's prior written permission. The information contained in this publication has been obtained from sources believed to be reliable. Gartner disclaims all warranties as to the accuracy, completeness or adequacy of such information and shall have no liability for errors, omissions or inadequacies in such information. This publication consists of the opinions of Gartner's research organization and should not be construed as statements of fact. The opinions expressed herein are subject to change without notice. Although Gartner research may include a discussion of related legal issues, Gartner does not provide legal advice or services and its research should not be construed or used as such. Gartner is a public company, and its shareholders may include firms and funds that have financial interests in entities covered in Gartner research. Gartner's Board of Directors may include senior managers of these firms or funds. Gartner research is produced independently by its research organization without input or influence from these firms, funds or their managers. For further information on the independence and integrity of Gartner research, see "Guiding Principles on Independence and Objectivity" on its website, http://www.gartner.com/technology/about/ombudsman/omb_guide2.jsp.
