TRANSCRIPT
The worst possible outcome of solving any problem is doing so at the expense of introducing a new
one.
I like to refer to the pitfalls of first-generation Hyperconvergence as version 1 dot "Oh," as in "Oh, I didn't realize we couldn't do that," or:
Proof by Contradiction
In logic, proof by contradiction is a form of proof, and more specifically a form of indirect proof, that establishes the truth or validity of a proposition. It starts by positing that the opposite proposition is true, and then shows that such an assumption leads to a contradiction.
I'm certain that if you're a mathematician reading this you'll be quick to point out that my use of this logic to tell the story of how everyone in the industry blew it when they rushed to be first to market with a Hyper-converged Infrastructure (HCI) appliance is not entirely appropriate. I'm in no position to argue otherwise, since mathematical logic is not my strong point and I'm not entirely sure whether a hyphen is required between Hyper and Converged at all times or just some of the time.
Before I get into the problems that first-generation HCI introduced, let's talk about the Data Center, the brain of a large company, by focusing on five big factors that led to the emergence of a $2 billion market in 2016 alone.
1. The Data Center
The Data Center begins as a properly outfitted room, with a large amount of equipment supplying power, cooling, and often automatic fire suppression, that houses computer equipment. It's a place where the most critical business processes run on sophisticated computer hardware hosting a myriad of software applications and operations that turn the cogs of profit and revenue, optimally resulting in growth and success for its owners, employees, partners, and shareholders. The Data Center is the operational responsibility of a company's Information Technology (IT) department, headed by the CIO (Chief Information Officer). In many cases the CIO reports to the CEO of the company, but commonly the CIO reports to the CFO instead. By placing the IT department under finance, the message is loud and clear: technology is purely an operational concern, and the main focus of technology is cost cutting.
2. Business-IT Alignment
Business-to-IT alignment is the correspondence between the business objectives and the IT requirements of an enterprise. These two factors can be contradictory in nature, more so than ever in the modern era of technology-driven change management and its resulting challenges, and their alignment has to be maintained over time in order to ensure the success of an enterprise.
3. Operational Business to IT Alignment
Operational alignment is one key area of Business-IT alignment. Central to this element is the IT group's adoption of an operating model for delivering services and support that meshes with the way the company works as a whole. Simply stated, if strategy-driven alignment (meaning that IT projects and budget can be tied directly to the company strategy) is about what gets done, then operational alignment is all about how it gets done: in particular, how IT services are delivered.
4. The Promise of 'The Cloud'
Simply put, whether you're running an application that shares photos or music files with millions of mobile users or you're supporting mission-critical aspects of your business, the cloud as a platform provides instant access to flexible IT resources at greatly reduced cost. With cloud computing there is no need for large upfront investments in hardware off the pallet, which demands a great deal of heavy lifting and a significant learning curve in managing the new widget. Instead, you can provision the exact type and size of the computing resources you need to empower your consumer offering or conduct your IT department operations with ease and confidence. You can access as many resources as you need, almost instantly, and only pay for what you use. This IT-on-demand model is an attractive alternative to the consumption of on-premise IT assets, as the web service provider owns and maintains the network-connected hardware.
As companies rapidly adopted cloud computing they were introduced to newer and far more efficient IT consumption models. Fast and easy is the underlying theme of adding compute capacity in the cloud, and this new on-demand model created the need for a new operational model once company CFOs began to expect cloud economics for their on-premise infrastructure (hardware they own in the data center).
Cloud customers enjoy a pay-as-you-go (marketing prefers pay-as-you-grow), scale-on-demand economic paradigm that responds adequately to their business needs. These can be the kind of needs that reflect a sense of urgency, like when projects need to move quickly, or the need for an appropriate response when business is slow in down cycles.
This resulted in a new customer demand of its IT vendors: "Don't make me buy Data Center from you in big chunks. Allow me to buy it in a granular method, ramping it up in increments right when I need it, at a pace that suits my growth." This is what the industry refers to as IT moving at the speed of business.
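The economic difference between buying in big chunks and paying as you go comes down to simple arithmetic. A minimal sketch, with entirely invented prices and a made-up bursty usage pattern, makes the point:

```python
# Back-of-the-envelope comparison of pay-as-you-go vs. buying capacity
# "in big chunks". All figures here are invented for illustration.

UPFRONT_CHUNK = 120_000   # buy a rack's worth of peak capacity up front
ON_DEMAND_RATE = 0.50     # hypothetical price per server-hour in the cloud

# A bursty year: light usage for ten months, heavy for two.
monthly_server_hours = [2_000] * 10 + [20_000] * 2

# On demand, you pay only for the hours actually consumed.
on_demand_cost = sum(h * ON_DEMAND_RATE for h in monthly_server_hours)

print(f"on-demand:  ${on_demand_cost:,.0f}")
print(f"big chunks: ${UPFRONT_CHUNK:,.0f}")
```

The upfront buyer pays for peak capacity that sits idle ten months of the year; the on-demand buyer's bill tracks the usage curve.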
5. Virtualization
The fundamental technology that powers cloud computing is virtualization. Virtualization is software that partitions physical infrastructure to create dedicated virtual resources. Virtualization software makes it possible to run multiple operating systems and multiple applications on the same server at the same time. Through a process called abstraction, a software layer known as a hypervisor emulates the underlying hardware, thereby creating shared computing resources on either a single virtualized server or a cluster, or 'pool', of them.
Cloud computing is the delivery of those shared computing resources, software, or data as a service, on demand, through the internet. The cloud often includes virtualization products to deliver the compute service or application experience to network-connected users.
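The "pool" idea is worth making concrete. Here is a minimal sketch, not any real hypervisor's API: a cluster of hosts is presented as one shared capacity, and a VM request is placed first-fit wherever it fits. Host names, sizes, and VM shapes are all made up.

```python
# Toy model of a virtualized resource pool: callers ask the pool for a
# VM and never pick a physical box themselves. All names and capacities
# below are invented for illustration.

class Host:
    def __init__(self, name, cpus, ram_gb):
        self.name = name
        self.free_cpus = cpus
        self.free_ram_gb = ram_gb
        self.vms = []

class Pool:
    """A cluster of hosts presented as a single pool of resources."""
    def __init__(self, hosts):
        self.hosts = hosts

    def provision(self, vm_name, cpus, ram_gb):
        # First-fit placement: the pool decides where the VM lands.
        for host in self.hosts:
            if host.free_cpus >= cpus and host.free_ram_gb >= ram_gb:
                host.free_cpus -= cpus
                host.free_ram_gb -= ram_gb
                host.vms.append(vm_name)
                return host.name
        raise RuntimeError("pool exhausted: add capacity")

pool = Pool([Host("esx-01", cpus=16, ram_gb=128),
             Host("esx-02", cpus=16, ram_gb=128)])
print(pool.provision("web-01", cpus=4, ram_gb=16))   # lands on esx-01
print(pool.provision("db-01", cpus=14, ram_gb=64))   # esx-01 lacks CPUs, so esx-02
```

The consumer of the pool sees one elastic resource; the siloed, per-box view disappears behind the abstraction.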
Putting it all together: as companies attempt to operationally align their business to IT, with a renewed focus on how their IT services are delivered, they now expect to leverage the benefits of cloud computing's IT consumption model, and its cost-effectiveness, to move their IT at the same speed as their business, transforming the architectural complexity of their Data Center into a more flexible and responsive operation that delivers a better ROI to the overall business.
Enter the demand for Hyper-converged Infrastructure
To properly understand hyperconvergence you need to take a step back to its IT predecessor: converged infrastructure (CI), the rack-based packages that combine existing storage, server, and networking components. These systems provide either a complete storage and compute infrastructure at the rack level, selling commercially available components bundled as a single SKU from a single vendor, or a reference architecture from which users or integrators can assemble a complete system from multiple vendors that share in the testing, certification, and support of their joint offering.
What a customer primarily gains from CI is a significant reduction in deployment time and operational expense through the use of a simple, single common management tool for all their physical and virtual server components. In its simplest form, a CI approach in the data center is centered around the acquisition of tightly integrated systems that offer a common management tool, allowing IT to easily install and administer these optimized assets so that they appear as a pool of additional resources and not a new and separate silo. This is ideal for IT, because the traditional approach to keeping pace with the proliferation of business applications and hardware evolution was to deploy a new widget that supported a unique function but required its own management interface and incremental everything: rack space, added power and cooling cost, maintenance and licensing, and, worst of all, a new potential point of failure. Each such widget performed key business functions only at unique times of the day, rendering it underutilized, since its function, whether compute, storage, or something else, sat idle for large chunks of time.
This was a loathsome problem that felt as though it appeared out of nowhere: next-generation systems offered better and better performance, but their return on investment was gauged to be poor. Each sat like a silo, separate from other similarly functioning widgets in the data center, idle at times, adding to maintenance and operating expenses (OPEX) that on average consumed over two-thirds of an organization's technology budget, and unable to be easily incorporated into a virtualization strategy that always led with a common management tool.
A converged infrastructure addresses the problem of siloed architectures and IT sprawl by enabling a
pooling and sharing of IT resources. So instead of acquiring and dedicating a set of separate resources
to a particular computing technology, application or line of business, a converged infrastructure offer-
ing delivers a pool of virtualized servers, storage and networking capacity that is shared by multiple
applications and lines of business.
The result of incorporating CI systems was the realization of modern technical and business efficiencies from the pre-integration, at the factory, of these once-siloed technology components into an offering that can easily be stood up and made available to users quickly. Given the success that cloud computing enjoyed with its instant-gratification model, this became the model for on-premise delivery of new hardware technology, driven by IT's desire to mimic the cloud by offering new pools of resources at the speed and pace of new business needs and users.
To summarize: a dramatic reduction in IT complexity, through the use of pre-integrated hardware with a common set of virtualization and automation management tools, is an important value proposition for converged infrastructure, along with the "cloud ready" nature of a new system that combines server, storage, and network into a single framework capable of handling the enormous data sets that cloud computing can require.
Hyperconvergence: but first, the network!
Since the dot-com bust, and the subsequent financial collapse in 2008, corporate IT budgets shrank, and it became harder over time for technology vendors to convince buyers to purchase anything marketed as 'differentiated' or 'well engineered', as the preference slowly shifted toward software-defined hardware in a commodity wrapper. This approach began at the network, and Cisco was a leader in offering network administrators the ability to control network traffic through policy-enabled workflow automation.
Say What?
The basis of a software-defined network (SDN) is virtualization, the same technology that made cloud computing possible and that now allows data centers to dynamically provision IT resources exactly where and when they are needed, on the fly. To keep up with the speed and complexity of all this split-second processing, the network must also adapt, becoming more flexible and automatically responsive. We can apply the idea of virtualization to the network as well, separating the function of traffic control from the network hardware and giving it to software, resulting in SDN.
The explosion of mobile devices and content, server virtualization, and the advent of cloud services were among the trends driving the networking industry to re-examine traditional network architectures, since legacy networks had serious limitations and their older methods no longer worked in the race to the cloud. As virtualization, cloud, and mobility created more complex environments, networks had to adapt in terms of security, scalability, and manageability. Most enterprise networks, however, rely on siloed boxes and appliances requiring a great deal of manual administration through a native user interface, a key contributor to the condition we already talked about: IT sprawl and complexity. Changing or expanding these networks for new capabilities, applications, or users requires reconfiguration that is time consuming and expensive. In addition, as customers consumed these network devices, a learning curve came into view for their management tools that, again, was out of focus where OPEX objectives were concerned.
Software-defined networks take a lesson from server virtualization and introduce an abstraction layer separating network intelligence and configuration from physical connections and hardware. In this way, SDN offers programmatic control over both physical and virtual network devices that can dynamically respond to changing network conditions. SDN, like CI, is an IT trend focused on reducing complexity and administrative overhead while enabling innovation and substantially increasing return on investment.
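The separation of intent from hardware can be sketched in a few lines. This is a toy model, not any real controller's API: a policy is declared once, centrally, and a controller programs every device, instead of an administrator logging into each box. The device names and the policy format are invented.

```python
# Toy illustration of SDN's control/hardware split: one declarative
# policy, pushed to every device by a central controller. The policy
# schema and switch names are invented for this sketch.

POLICY = [
    {"match": {"vlan": 10}, "action": "allow"},
    {"match": {"port": 23}, "action": "drop"},   # block telnet fabric-wide
]

class Switch:
    """Stand-in for a physical or virtual switch: it just stores rules."""
    def __init__(self, name):
        self.name = name
        self.rules = []

class Controller:
    """Central control plane: owns the intent, programs the devices."""
    def __init__(self, devices):
        self.devices = devices

    def apply(self, policy):
        for device in self.devices:
            device.rules = list(policy)   # one intent, pushed everywhere

fabric = [Switch("leaf-1"), Switch("leaf-2"), Switch("spine-1")]
Controller(fabric).apply(POLICY)
assert all(sw.rules == POLICY for sw in fabric)
```

Changing the network for a new capability becomes an edit to one policy and one `apply` call, rather than a box-by-box reconfiguration.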
Hyperconvergence: second, storage
Software Defined Storage (SDS) places the emphasis on storage-related services rather than storage
hardware. Like SDN, the goal is to provide administrators with flexible management capabilities
through programming.
Storage for virtualization became a priority because desktop and server virtualization adoption grew rapidly. The goal for IT was to find a way to merge multiple, likely siloed, vendor systems into a single manageable pool of storage, giving administrators a more efficient and automated way to deliver new storage through automated policy-based settings, as with SDN.
In a traditional pre-SDS world, a new high-priority user shows up on the scene; the data center is virtualized, so it's easy enough for IT to provision a new server for their use, but not always as easy to provision new storage, depending on what the user's application requires. In a worst-case scenario, it may take a SAN administrator more than a small chunk of time to go through multiple steps, in a specific order, to create a new LUN to support the new user's workloads. In an SDS environment this is automated, versus the undesirable task of doing the work manually.
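What "policy-based" buys you can be sketched as follows. This is a toy model, not a real array's API: a request names a policy, and the volume is carved from the matching pool automatically, replacing the SAN administrator's manual, multi-step LUN workflow. The tiers, pool sizes, and policy names are invented.

```python
# Toy sketch of policy-based storage provisioning: a request is matched
# against a named policy and capacity is carved from the right pool.
# Tiers, capacities, and policy names are invented for illustration.

POLICIES = {
    "gold":   {"tier": "ssd", "replicas": 2},   # fast tier, protected
    "bronze": {"tier": "hdd", "replicas": 1},   # capacity tier
}

pools = {"ssd": 10_000, "hdd": 50_000}   # free capacity in GB

def provision_volume(name, size_gb, policy_name):
    policy = POLICIES[policy_name]
    tier = policy["tier"]
    needed = size_gb * policy["replicas"]
    if pools[tier] < needed:
        raise RuntimeError(f"{tier} pool exhausted")
    pools[tier] -= needed
    # A real system would drive the array's API here; this sketch just
    # returns a record describing the carved volume.
    return {"name": name, "size_gb": size_gb, "tier": tier,
            "replicas": policy["replicas"]}

vol = provision_volume("erp-db", 500, "gold")
print(vol)   # 500 GB on ssd, 2 replicas, 1,000 GB drawn from the pool
```

The requester states intent ("gold storage, 500 GB"); the policy, not a human, decides the tier, the protection level, and the placement.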
A software-defined data center has a path that leads to automated policy-based management and provisioning, and that path is being paved by commodity hardware: the kiss of death for hardware adoption as IT vendors once knew it. This is because the simplicity that a software-defined approach offers completely trumps the value of the underlying hardware. Sure, the underlying hardware must still be capable, high performing, and resilient, the latter more so than ever, but this trend, the precursor to Hyper-converged Infrastructure offerings, is intent on accomplishing its goal with a commoditized set of hardware building blocks to maximize ROI. The commodity hardware is, first, inexpensive and, second, interchangeable with other hardware of its type.
Converged infrastructure as a trend delivered on its promise and was then bolstered as IT ramped from there to software-defined networking and storage, further decoupling the hardware from the software, with advanced capabilities taking advantage of the core benefits of virtualization. As the virtualized data center continually proved itself through greatly reduced OPEX and substantially lowered CAPEX, with the promise of cloud computing coming true in a big way, the stage was set to again do more with less, but more importantly to do so without tossing aside the initial investment in all that integrated hardware and siloed, sophisticated storage and networking. The challenge to IT vendors was: who would do this first, and who would do it best?