SLALOM
TECHNOLOGY
Is Lift & Shift Actually A Quick And Painless Path To The Cloud?
3 reasons why Lift and Shift will cost you more in the end

Deploy All Things Series
Everything you should know to get started with DevOps

Serverless Architecture
Serverless warrants your attention. Here's why.
VOLUME 1 | MARCH 2017
Slalom Technology
WRITING
Brendan Schoch
Bruce Cutler
Ivan Campos
Karl Schwirz
Michael Hodgdon
DESIGN & EDITORIAL STAFF
Jessica Hopkins
Ivan Campos
Karl Schwirz
Find more content from Slalom Boston
Application Developers
slalomtechboston.com
Volume 1
Copyright © 2017 by Slalom Consulting
All rights reserved. No part of this publication may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the publisher, except in the case of brief quotations embodied in critical reviews and certain other noncommercial uses permitted by copyright law. For permission requests, write to the publisher, addressed "Attention: Permissions Coordinator," at the address below.
Slalom Consulting
Attn: Slalom Tech Boston
316 Stuart Street, Suite 300
Boston, MA 02116
617-316-5400
slalomtechboston.com
Printed in the United States of America
CONTENTS

CLOUD COMPUTING
Is Lift & Shift Actually A Quick And Painless Path To The Cloud?
Serverless Architecture: Driving Toward Autonomous Operations
re:Invent 2016 Recap
Microservices & Containers

DEVOPS
Deploy All Things: Stop Wasting Time & Money
Adopt DevOps Today: 5 Ways To Incorporate In Your Delivery Process
How to Choose The Right DevOps Tools: A Step-by-Step Guide

MODERN WEB
Worlds Collide: The Convergence of Modern Web, Classical Web & Mobile
Single Page Application Build And Deployment
CLOUD COMPUTING
Is Lift & Shift Actually A Quick And Painless Path To The Cloud?
3 reasons why Lift and Shift will cost you more in the end
Karl Schwirz & Michael Hodgdon
When our customers approach
us about moving to the cloud,
they are no longer asking why,
they are asking how. In particular, two common themes have arisen over time:
How do we get there?
How quickly can we get there?
This post is not to answer these
questions, but rather to shine a light
on a common pitfall we often see:
assuming a Lift & Shift strategy is a
quicker and cheaper journey to the cloud.
Lift & Shift is exactly what it sounds
like, taking an existing on-premise
environment and applications, and
directly moving to the cloud with no
material change. Simply put, you’re
treating the cloud as just another data
center. This strategy misses out on the differentiating services that the cloud offers.
Organizations typically choose this path because
it’s perceived that re-engineering for the cloud is
time consuming, not secure, more expensive and
not necessary.
While the Lift & Shift strategy has its place as part
of a comprehensive cloud migration approach, it
should be limited to exactly that, a piece. As we’ll
discover in this post, it does have hidden costs
that should be considered.
Top 3 reasons to reconsider a Lift & Shift migration
1. Arguments for Lift & Shift are all debunked
First, let's continue to look at why Lift & Shift might on the surface be appealing: faster to the cloud, less up-front investment, less disruption to developers and the business.
These all sound like perfectly reasonable desires for your organization when making such a monumental change in how your infrastructure is run, right? Let's look at these advantages and break them down. They are not as straightforward as they might seem.
Faster to the cloud
Ask anyone who has gone through a Lift and Shift and they'll tell you how 'quick' and on-time their migration completed. The fact is, legacy applications were not architected for cloud services and in most cases end up requiring partial or complete re-architecting just to meet the previous operating status quo.
Consider this example: we had a client that architected their legacy applications on 32-bit operating systems. Come time for their Lift & Shift migration, they realized mid-stride that only 64-bit was available to them and could not successfully deploy the applications. This left them scrambling for resources to retrofit their applications and significantly extended the timeline.
The following are some common occurrences we've seen surface during Lift & Shift efforts that significantly slow down progress:
- Dynamic IP Addresses that change at any given time
- Bloated legacy applications
- Technical Debt
- Ephemeral Storage
- Session data with load balancing
- Deploying legacy configurations on new (cloud)
hardware.
Less up-front investment
We just demonstrated an example of unexpectedly having to allocate development resources, that were otherwise prioritized, to refactor a fairly common occurrence. Now, let's factor in the other applications and hardware that have been accruing technical debt over the years. The stage has been set to be completely reactive to these issues instead of getting ahead of them. This means more people, more time, and ultimately more cost.
Even if you pull off the Lift & Shift unscathed, it almost always costs more than expected, which we'll describe in more detail later.
Less disruption to developers and the business
As you can see from the previous sections, this
becomes an assumption based on debunked
arguments, rather than a stand-alone reason. We
can clearly see that developers will be constantly
disrupted and stretched thin.
Once deadlines are in danger, extended, or missed, the business must go back to stakeholders and customers to adjust schedules and explain why they're off the mark.
By the time a Lift & Shift is all said and done, many applications could have been not only re-engineered, but optimized for the cloud, in many cases for less time and money than it takes to make a Lift & Shift work.
2. Lift and Shift gives you an infrastructure, but
not a cloud infrastructure. When we look at our
previous experiences and many other case studies
under the Lift & Shift strategy, what it boils down
to is lost opportunity. Sure, as we just discussed, in
theory you can map a copy of your infrastructure
to a cloud provider and it might be quicker than
re-engineering, but it will be just that. A copy.
By doing this, you miss out on the revolutionary capabilities that the cloud provides. For example, scaling to meet the demands of your application consumption. You could have your environment configured to not only guarantee your applications are available to customers almost 100% of the time, but also grow and shrink your compute power based on demand … without your team making a single keystroke.
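That grow-and-shrink behavior boils down to a simple control loop. As a sketch, the thresholds, step size, and fleet bounds below are hypothetical rather than any provider's defaults:

```python
def desired_fleet_size(current_servers, avg_cpu_percent,
                       min_servers=2, max_servers=20,
                       scale_up_at=70, scale_down_at=30):
    """Return the server count a demand-based auto-scaler would target.

    Grows the fleet when average CPU is high, shrinks it when low,
    and always stays within the configured bounds.
    """
    if avg_cpu_percent > scale_up_at:
        target = current_servers + 1      # add capacity under load
    elif avg_cpu_percent < scale_down_at:
        target = current_servers - 1      # release idle capacity
    else:
        target = current_servers          # demand is in the comfort band
    return max(min_servers, min(max_servers, target))
```

A managed auto-scaling service evaluates a policy like this continuously, which is exactly how capacity adjusts without anyone on your team touching a keyboard.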
When you look at Lift and Shift, what it
boils down to is lost opportunity.
You could save on cost and reduce your management needs by deploying your application to services that don't require server maintenance … this is known as serverless architecture.
These are just a few examples that scratch the
surface of the possibilities available to us when
utilizing cloud services to their potential. When
integrated into your systems, you’re no longer
migrating to just another data center, you’ll be
building a cloud infrastructure.
3. Lift and Shift hits your wallet. Among these
examples is a common thread — organizations
are trying to save money and optimize resources.
We’ve already seen several examples of unexpect-
edly bringing on more resources and pushing
out or even missing timelines. All of which have
significant costs.
The very nature of Lift and Shift [means]
you could very well end up spending just
as much or more on the monthly invoice.
Let’s pretend those aren’t factors for a moment
and the Lift & Shift succeeds unimpeded. There
are still unnecessary costs that were hidden in
the old data center. These costs range from over-provisioning infrastructure capacity based on best-guess estimates, to applications being artificially bloated and storage-heavy after years of technical debt. The very nature of Lift and Shift and directly mapping to a cloud infrastructure means you could very well end up spending just as much or more on a monthly invoice, let alone compounding the previous factors.
So if not Lift and Shift, what then?
The bottom line is there is no one strategy you're going to be able to use to port your entire infrastructure and application portfolio. As you plan your migration to the cloud, consider an adoption framework, such as the 6 Rs, while taking an honest look at your systems and breaking down the approach.
Ask questions. Here are some good starters:
- Which applications are business critical and need to be firing on all cylinders at all times?
- Which applications have been architected well and could use a tune-up? Perhaps auto-scaling or cloud storage is a better option than keeping data local?
- Which applications need to be sunset and rewritten?
- Which applications fit the Lift and Shift profiles outlined above?
When discussing migration models, this is a case
where everyone is a unique snowflake and a
Lift & Shift might be a component of the overall
strategy, but it should remain that, a piece of a
larger puzzle.
AWS Competencies
Whether you need support migrating your legacy applications or transforming the way your teams work with DevOps, we can help — and we've got the AWS competencies to prove it.
Serverless Architecture: Driving Toward Autonomous Operations
Here's why serverless architecture warrants your attention.
Ivan Campos
Driverless cars will create more free time, decrease accident rates, and seek to automate away traffic congestion. The same can be said for serverless architecture.
The term “serverless” doesn’t mean
removing servers from your architecture.
It’s about “abstracting users away from
servers, infrastructure, and having to deal
with low-level configuration or the core
operating system,” says A Cloud Guru’s
Peter Sbarski.
Modern cloud computing has become key to meeting today's pressures of quality, speed, and cost. And with the shift away from on-premise or co-located solutions well underway, the ability to architect your cloud-based solution for agility will become a key competitive differentiator.
The benefits
1. It removes the need to manage servers.
Cloud vendors' serverless technology offerings provide high availability, fault tolerance, and auto-scaling by default. Horizontal auto-scaling can grow or shrink your fleet of abstracted servers (relative to demand) in a just-in-time manner without any IT intervention. And autonomous cloud operations can automatically manage capacity planning.
With an infrastructure approaching infinite scalability and availability, there will be a significant drop in on-call support incidents. Delegating your server management also simplifies your physical architecture, because you can treat all of your servers as ephemeral black boxes.
2. It increases focus on what matters to your
business. With server management out of the
picture, developers have more time to focus on
business logic. This focus is further intensified
as functions become the unit of deployed work.
Focusing on deploying independent functions
(Function as a Service or FaaS) leads to evolving
into a service-oriented architecture (SOA) and
microservices when fronted by a serverless API
gateway. With a concerted focus on individual functions, we also introduce de facto best practices, like separating concerns and adhering to the single responsibility principle.
“With server management out of the
picture, developers now have more time
to focus on business logic.”
3. It reduces costs, since you only pay for what
you use. In a serverless architecture, you treat
your Infrastructure as a Service (IaaS) costs like
you would any public utility. Just as you only pay
for water when you run your faucets, you only pay
for your functions when they run in a serverless
manner. The primary benefit of this approach is
that you don’t pay for idle time on cold servers.
And this generates an incentive to write code that
executes as fast as possible.
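To make the function-as-a-service model concrete, here is a minimal sketch shaped like an AWS Lambda handler. The event fields (`items`, `price`, `quantity`) are hypothetical, not part of any AWS contract:

```python
def handler(event, context):
    """A single-responsibility FaaS function: compute an order total.

    The platform invokes this once per event; there is no server to
    manage, and you are billed only for the time the function runs.
    """
    items = event.get("items", [])
    total = sum(item["price"] * item["quantity"] for item in items)
    return {"statusCode": 200, "total": round(total, 2)}
```

Because billing stops the moment the function returns, the faster this body executes, the less it costs.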
Vendor alternatives
Several large cloud providers have already
introduced serverless architecture enablers. Event sources that can trigger serverless compute execution vary by service offering. Examples include: monitoring application logs; database changes; object uploads; and calls to APIs that front our functions.
Being event-driven ensures that our functions only fire when needed. Currently, the most prominent vendor solutions are: Amazon Web Services (AWS) Lambda, Microsoft Azure Functions, Google Cloud Functions, and IBM Bluemix OpenWhisk.
“Just as you only pay for water when you
run your faucets, you only pay for your
functions when they run in a serverless
manner.”
Considerations
We’ve covered several benefits of serverless
architecture, but it’s important to understand that
serverless architecture is a technology trigger
moving toward a peak of inflated expectations.
An example of its inflated expectations is the movement to NoOps. While DevOps is a movement meant to foster communication and collaboration between software developers and employees working in operations, NoOps is when developers can code and let a service deploy, manage, and scale the code. NoOps is a divisive term signifying complete automation of operations — and it's much too early to call for the dissolution of internal operations teams.
"NoOps is a divisive term signifying complete automation of operations — and it's much too early to call for the dissolution of internal operations teams."
It's also important to understand that serverless architecture doesn't fit all use cases. For example: long-running transactions may become an economic liability when you pay for what you use. If you're looking for appropriate applications of serverless architecture, AWS has provided the following reference architectures:
1. Mobile backend: Mobile Backend as a Service (MBaaS) supports all solutions running on mobile devices. Using this blueprint, the cost model, agility, and scalability of a serverless architecture can be harnessed to power mobile client solutions.
2. Real-time file/stream processing: In the event
that you’re being provided files or a stream of
data, you can process what’s being sent over in
real-time solely using AWS managed components
(i.e. Lambda, Simple Storage Service (S3), Simple
Notification Service (SNS), DynamoDB, Kinesis, or
CloudWatch).
3. Web applications: For your browser-based application needs, a serverless architecture bypasses the headaches involved with site availability, scalability, and machine administration. You can simply create a static website using only S3, or a more dynamic application that can store data and derive actionable information.
4. Internet of Things (IoT) backend: As sensors pervade everyday objects, there needs to be a means to capture and analyze the flood of data. If you're looking to automate or gain insight into behavior from sensor data, a serverless architecture can efficiently react to what our connected devices are sending.
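As a sketch of the file/stream-processing pattern above, an event-driven function receives a notification payload and reacts only to what it describes. The payload below mirrors the shape of S3-style upload notifications, but treat the field names as illustrative:

```python
def process_upload_event(event):
    """Event-driven file processing: react to object-upload notifications.

    Walks the records in an S3-style event payload and returns the
    (bucket, key) pairs a real handler would then fetch and process.
    """
    uploads = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        uploads.append((s3["bucket"]["name"], s3["object"]["key"]))
    return uploads
```

The function fires only when an upload actually happens, so no compute is consumed while the pipeline sits idle.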
Lastly, a crucial consideration is vendor lock-in. While you're afforded the freedom to bring your own code, as long as your programming language is supported, there will be a natural tendency to also leverage ancillary serverless technologies of your event-driven compute vendor. As this intentionally occurs in the name of simplicity and time-to-market, your switching costs will become greater, thereby increasing the difficulty of moving your serverless solution to another cloud provider.
Conclusion
When paired with rich front-ends, mobile clients, IoT devices, or even next-generation chatbots, a serverless architecture can serve as a simple yet cost-effective solution for your future projects — or as a convenient approach to breaking up monolithic legacy applications. It could bring a future where all of your infrastructure needs are met in a completely autonomous manner.
re:Invent 2016 Recap
Brendan Schoch
At the annual re:Invent conference, AWS announced innovative new services in the compute, DevOps, Data, and Artificial Intelligence spaces. These services build on the strong set of infrastructure and platform services already offered by AWS. Some of the major announcements at the conference included:
1. Compute: Lambda & Container Services: AWS enhanced services that allow compute provisioning on demand with either container services such as ECS and Blox (a new service announced for scheduling) or Lambda. Even in the EC2 space there were sessions on how to effectively use spot instances to provision compute power as needed. Additionally, AWS seems to be positioning Lambda to provide on-demand compute power for everything from mobile to DevOps pipelines, as well as providing compute power to Snowball or the new Greengrass IoT product.
2. DevOps: Management & Logging Services: OpsWorks now provides a fully managed Chef server that can alleviate the management headache. Rounding out the DevOps pipeline is CodeBuild, a service for managing application build servers. With CodeBuild, you are charged only for the compute resources required, and it can help eliminate the setup time of a Jenkins cluster within a VPC. For enhanced application logging insights, especially with microservice architectures, AWS released X-Ray.
3. Artificial Intelligence: IoT & Alexa: At the partner keynote, AWS had an interesting presentation on IoT. They announced Greengrass to enable embedded devices with compute power using Lambda. In the Alexa space, they announced the Polly and Lex services, the engines that power Alexa on the Echo.
4. Data Solutions: A really great new service announced for S3 is Athena, which allows for querying S3 with SQL. AWS also announced new features to manage data visualizations in QuickSight. Another new service announced for managing ETL jobs is AWS Glue. Along with these new data services, AWS announced new features for Redshift and EMR.
AWS rounded out existing pipelines in both the compute and DevOps spaces with the new services announced at re:Invent. They have made it easier to run a complete pipeline from end to end with both platform and infrastructure services. In addition to the services that enhance classical architectures, pushes into the Artificial Intelligence, SmartHome, and IoT spaces provide new low-cost tools for testing innovative services that can be transformative for businesses.
Microservices & Containers
Ivan Campos
When presented with a large problem,
our first step is to break it down into
small, more manageable pieces.
As system architectures have evolved, we have moved from monolithic/layered to service-oriented architectures. Each step along the evolutionary path has broken down our systems further and further — from monolithic to multi-tiered and on to service-oriented architecture. A modern architecture style that continues this trajectory toward granularity is microservices.
Microservices serve as a means to deconstruct our applications into a suite of small services. Each service is modeled around a single business capability. In doing so, we are placing a fence around our services that marks a boundary around our business context. The term "bounded context" was first introduced in the book Domain-Driven Design as a means to describe this pattern. Each bounded context affords us technical and organizational independence.
From a technical standpoint, we can individually deploy each service. To further isolate our services, we can have each run in its own process. This is typically accomplished through containers. Containers are hermetically sealed compartments that isolate failure in our processes so that they do not overtake the operating system's resources. This failure isolation is akin to the shipping technique of bulkheading. Bulkheads are watertight partitions designed to prevent damage from sinking an entire ship. Containers, like Docker, also pack all of your system's dependencies inside an isolated process. This handles dependency management in a way that enables parity across all deployment environments.
DEVOPS
Deploy All Things With DevOps
Stop wasting time and money — deliver value faster and ship great software with DevOps
Karl Schwirz & Michael Hodgdon

Any software professional will tell you that shipping code is hard work.

Far too many organizations focus solely on how long it will take to build an application, while completely ignoring what happens after the product has been written. And by the time organizations realize that they should have focused on deployment, testing, and quality assurance, it's already too late. Getting applications back on track is far more costly than doing it right from the start.

Has your organization fallen into this common trap? Ask yourself: Do your users typically tell you when you have defects or outages before your systems alert you of the incident? Does a software rollback cause panic across your entire organization, and often put everything else on hold? Are your developers spending exorbitant amounts of time merging code at the end of software
integration cycles? Does the lack of visibility into
your production systems prevent your teams from
solving problems in real-time? Does getting a
code change (e.g. bug, new feature, hotfix) require
days or months for production deployments?
If you answered yes to one or more of these questions, chances are your teams aren't reaching their potential. The key to building and maintaining applications in a way that integrates all functions of the software delivery process is known as DevOps.
Demystifying DevOps, in three parts
In this three-part series, we'll demystify DevOps, provide recommendations for how to introduce DevOps techniques into your organization, and discuss the technology and organizational changes necessary to succeed — resulting in increased reliability and quality in your overall software development process.

Over the course of this series we will demonstrate how you can use DevOps to: increase collaboration, increase visibility into your development process, seamlessly roll out changes to production daily or weekly, and introduce the highest level of quality possible into your software solutions.
So, what is DevOps exactly?
The term DevOps is used interchangeably to describe a lot of different things. Our friends at Wikipedia provide the following definition: "The method [DevOps] acknowledges the interdependence of software development, quality assurance, and IT operations, and aims to help an organization rapidly produce software products and services and to improve operations performance."
Wait, huh? That jumped pretty quickly from "interdependence" to "improving operational performance." There has to be more, right? In fact, there is. When our customers ask what DevOps means, we typically explain it like this:
1. It’s about your people. Culture — Own the
change to drive collaboration and communication.
Lean — Use lean principles to enable faster cycle
time and feedback loops.
2. Use tools that help you move faster and gain
insight. Automation — Take the manual steps
out of your value chain. Monitoring — Measure
everything and use data to refine cycles.
3. Use these insights to work smarter. Sharing — Share experiences to enable others to learn and improve. Improve work — Teams are more proactive in refining their software development process.
DevOps in the wild — and a 95%
jump in velocity
Let’s make this real by looking at how we’ve
introduced these topics with one of our long-term
clients.
They were building a massive data-collection application that ingested streams of information from employee devices, served by a website filled with custom-built dashboards and data discovery tools. It was a high-profile effort that was well-funded and resourced. We were brought in because, despite being set up for success, progress was slow and stakeholders were getting quite nervous.
Team resourcing didn't appear to be the problem. The team of 12 — including web developers, software architects, data specialists, quality assurance leads, user experience designers, and business analysts — seemed to have enough depth to keep things moving. So, we started by taking a hard look at how they were measuring progress.
After examining their story backlog and burndown, we calculated that they were operating at about 30% of capacity.
The team was picking small batches of work and passing them through the stages of completion: first developers would write some code, then QA would test it, then it was deployed, and on to the next. In other words, only one group was really working at a given time while the rest sat idle.
We told them: You can’t just throw people at the
problem; you must change how the solution to the
problem is executed.
Prior to our engagement, the team had several really bad releases that caused major internal political storms. The issues stemmed from conflicts during code merges, so the team reacted by slowing down to ensure quality. As time elapsed, morale waned and people burned out, and it became apparent that this strategy wasn't going to result in building quality software AND delivering at a timely pace.
By the time we completed our assessment and
action plan, they were back on schedule and
delivering features on time with a 95% increase
in velocity. We had shifted how they approached
their build out and enabled their teams to take
control of the delivery process.
By the time we completed our assessment and action plan, they were back on schedule, delivering features on time with a 95% increase in velocity.
So, how did we achieve these results?
We introduced a build and deployment strategy that allowed the team members to work together and in sync. Before, they'd slow down time-to-production to ensure a degree of better quality. However, as our friends at New Relic remind us, "You don't have to choose stability versus new features."

We introduced the ability for anybody to create any version of the software in a new environment at the click of a button. This laid the foundation for the team to become more productive and increase stability. Over time, more and more efficiencies were added as they worked together to further mature their process.
Slalom Overview
Slalom designs and builds strategies and systems to help our clients solve some of their most complex and interesting business challenges. Every individual who joins us becomes part of our fabric, weaving their talents and perspective into the greater whole of who we are. Visit slalom.com to learn more.
Each day Slalom helps 1,000 of the world's most influential organizations bring their strategies to life. We craft custom solutions that help those we serve fulfill their purpose and vision. The real measure of our success is the actual realization of theirs.

What's our key differentiator? We pair competencies with an overarching focus on getting things done. We combine the intimacy of a local boutique partner, the industry and domain savvy of a strategic think-tank, the creativity of a digital agency, the technical acumen of a software developer, and the insights of data analytics.
Why Slalom?
We're a purpose-driven consulting firm that helps companies solve business problems and build for the future. We'll help you find your why, we'll help you embrace change, and we'll partner with you to design and build the solution that's right for you.
Services
Customer Engagement
Delivery Leadership
Experience Design
Information Management & Analytics
Organizational Effectiveness
Strategy and Operations
Technology Enablement
5 Ways To Incorporate DevOps Into Your Software Delivery Process
Start adopting DevOps today
Karl Schwirz & Michael Hodgdon

DevOps is important to any modern software development life cycle and can have a dramatic impact on efficiency and velocity. But now that you've decided to adopt DevOps, where do you start?

Here are five tips to incorporate DevOps into your software delivery process. Remember: adoption won't happen overnight. Anything worth doing requires time and investment, so be patient, hold your teams accountable, and commit to the process.

1. Enable your entire team to work together. You may think you're already collaborating efficiently, but development
teams are often divided into silos: Developers
write code; the operations team deploys the code;
QA tests the code; production support repairs
applications/servers; and so on. There is minimal
communication between the silos and work is
often incorrectly executed or redone.
As our friends over at the GearStream Blog write:
“Breaking down silos and bringing
people together is the MOST IMPORTANT
part of DevOps.”
The reason is simple: aligning your teams to
work together enables them to drive toward the
same goal.
Embracing Agile project management is a major tenet of DevOps culture. Agile works aggressively toward bringing your teams together by restructuring work and introducing feedback along the way. By default, tasks are broken down from larger sets of requirements into manageable chunks, or stories. Work is completed in small iterations, or sprints, that typically run for two weeks.
Defining work in smaller, discrete tasks allows
each team function to work on tasks without
getting tangled. Another benefit: Code is delivered
faster with short iterations, allowing stakeholders
to provide feedback more quickly. The result is
faster integration of required changes at a lower
cost because issues are fixed and improvements
are made closer to their introduction. In other
words, the work is done when it’s relevant.
2. Automate everything! And we mean everything! A truly mature DevOps team has automated everything — from their testing to their deployments to provisioning the machines they deploy to. With the advent of cloud computing this has become a more realistic goal for all.
Before, to provision a new environment, add capacity, or triage a corrupt server, you had to physically add or remove hardware on your network. But now that physical management is left to the cloud provider, you can script and deploy a whole environment stack at the click of a button, and it's ready to use within minutes. Going a step further, automation applies to deployment of the application itself. Through tools and frameworks available today, we can build, deploy, and test code at the click of a button.
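The "whole environment stack as a script" idea can be sketched with nothing more than data: describe the environment programmatically, then hand the result to your provider's deployment service. The resource shapes below follow CloudFormation's naming but are deliberately simplified and hypothetical:

```python
import json

def environment_stack(env_name, instance_count):
    """Describe a whole environment as data, CloudFormation-style.

    The stack becomes a reviewable, version-controlled artifact you
    can deploy repeatedly and identically at the click of a button.
    """
    template = {"Description": "%s environment" % env_name, "Resources": {}}
    for i in range(instance_count):
        template["Resources"]["WebServer%d" % i] = {
            "Type": "AWS::EC2::Instance",
            "Properties": {"InstanceType": "t2.micro",
                           "Tags": [{"Key": "env", "Value": env_name}]},
        }
    return json.dumps(template, indent=2)
```

Because the template is generated, a QA stack and a production stack differ only in the parameters you pass in, never in hand-applied configuration.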
Best-practice blogs, like Mobify's, recommend "treat[ing] your server configuration like developers treat code." Extract environment-specific application properties into XML configuration files that can be stored in source control and applied using a configuration management system. That is the key to automation, and the cornerstone of DevOps.
The only difference between dev and production should really boil down to a set of connection strings and environment variables or, as Humble and Farley put it in Continuous Delivery:

"There are differences between deploying software to production and deploying it to testing environments — not least in the level of adrenaline in the blood of the person performing the releases. However, in technical terms, these differences should be encapsulated in a set of configuration files."
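In code, that encapsulation can be as small as this sketch; `DATABASE_URL` is a conventional variable name used here as an assumption, not a requirement:

```python
import os

def db_connection_string(default="postgres://localhost:5432/app_dev"):
    """Resolve configuration from the environment, not from code.

    Dev and production run identical code; only the environment
    variables differ between the two.
    """
    return os.environ.get("DATABASE_URL", default)
```

A developer's laptop falls back to the local default, while the production deployment sets `DATABASE_URL` and the same code connects to the real database.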
3. Everyone is responsible for production.
Everyone. This one might be a little hard to swal-
low for some. We can all relate to the siloed team
structure that Hiscox describes:
“we had development teams that threw
code over the wall to the release team
that deployed it, and then a separate
team supported the code once it went to
production. In scenarios like this, there
is very little empathy between all those
different teams. It's a bit like a relay race
and passing the baton — 'I've done my
job, now it's your problem.'”
This cannot be. Everyone is responsible for
production, developers and operations alike, and
the reasoning is simple: who better to triage an
issue than the person who wrote the code or
set up the server in the first place? Furthermore,
exposing developers to their “code in the wild”
will encourage them to write patterns and make
decisions that correlate to how their system will
run in production. Simply put, if you don’t task
developers with production duties, they won’t
write production-optimized code.
The Spotify team has been doing it for years:
“developers deploy their code in produc-
tion by themselves, with or without an
ops engineer to hold their hand. This …
encourages the dev in question to think
seriously about traditionally operations-
focused problem areas such as monitor-
ing, logging, packaging, and availability.”
This forces teams to work very closely with each
other because they feel responsible for each
other’s results.
4. Get obsessed with tests, then automate them,
too. We all know the importance of a good suite
of tests for a software project, and every project
starts the same way.
Your goal: 100% code coverage. You start practic-
ing TDD and when the first deadline comes along,
you’re running a little behind, but you promise
yourself that you’ll go back and put in all the
tests after the sprint wraps up. Before you know
it, you're sitting at barely five percent coverage.
Another scenario: A developer notices several
failing tests on his last check-in at the close of
a sprint. He comments out the tests rather than
fixing them, promising to address them in the next
sprint.
To be successful at DevOps, automated tests have
to be written not only for your code coverage,
but for your infrastructure scripts as well. If you
make a change to your configuration management
scripts, there should be a test executed to
make sure it compiles properly and passes tests.
If you’re adding functionality to your application,
you should have tests for every scenario.
This is often a time-consuming exercise, but it
should be one of the more important habits you
maintain. Continuous Delivery asserts that, “If it
hurts, do it more frequently, and bring the pain
forward.” Turn your weakest link into the strongest,
and make sure that as your application grows,
you’re growing the first quadrant of the Agile Test-
ing Quadrants.
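To make that concrete (the spec shape and validation rules below are invented for illustration), a test for infrastructure code can be as simple as validating an environment definition before anything is provisioned:

```typescript
// Hypothetical environment definition checked before provisioning runs.
interface EnvironmentSpec {
  name: string;
  instanceCount: number;
  region: string;
}

// Returns a list of problems; an empty list means the spec is deployable.
function validateSpec(spec: EnvironmentSpec): string[] {
  const errors: string[] = [];
  if (spec.name.trim() === "") errors.push("name must not be empty");
  if (spec.instanceCount < 1) errors.push("instanceCount must be at least 1");
  if (!/^[a-z]+-[a-z]+-\d$/.test(spec.region)) errors.push("region looks malformed");
  return errors;
}
```

Run in the pipeline, a check like this fails a configuration change the same way a unit test fails a code change: early, and before it reaches an environment.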
5. Become comfortable deploying frequently
to production. The hype around production
deployments makes people edgy and extremely
nervous. It shouldn't. If you keep these tips in
mind, deploying to production should be like any
other environment. If you’ve written your tests
and your infrastructure is deployable at the click
of a button, then you shouldn't have a problem
deploying or rolling back at will. Etsy has
this process so under control that they routinely
deploy to their production servers 50 times a day.
They’ve achieved this by fully automating their
entire software stack. As a result, each developer
has a copy of the production environment at their
disposal when making enhancements.
Once the barrier is removed and you can freely de-
ploy to production without hesitation (or nausea),
deployment frequency can increase. Then you can
deploy in more frequent, smaller segments as you
complete your agile stories, instead of deploying
everything at the end of your sprint.
Cultural change takes time. But it’s
worth it
These cultural steps may be the hardest part of
the adoption journey. Don’t try changing overnight.
This kind of change takes thought, planning, and
time for your teams to absorb these shifts in ap-
proach. As you move along this path, it’s important
to measure your progress and use that data to
adjust your strategy to what works for your team.
How To Choose The Right DevOps Tools
Get to know your tools so you can become
more efficient with your time
The DevOps tools market is
flooded with options — and
choosing the right one for your
organization can be daunting.
If you Google “DevOps tools,” you’ll see
endless lists of tools — everything from
agile collaboration platforms to frame-
works that provide you with continuous
delivery capability. What you won’t find,
however, is guidance for picking one.
The fact is, there’s no single DevOps
solution that caters to every organiza-
tion’s unique needs. If you adopt specific
technologies simply because others
have done so, it could end up doing more
harm than good. Here, we’ll walk through
the best process for identifying the right
tools for your organization. We’ll cover
the gotchas and pitfalls, and how to add
valuable pieces to your workflow so you
can make your implementation a success.
Karl Schwirz & Bruce Cutler

As an essential first step, take time to
assess the current state of your delivery pipeline.
This will enable you to identify inefficient
processes or areas that can benefit from the
adoption of DevOps tools. Extended testing times
or slow provisioning of new hardware, for example,
may signal bottlenecks within your system that hurt
productivity and increase feature cycle time.
Bottlenecks within software delivery can appear
in many different forms, including:
- Time-consuming and error-prone manual processes (i.e. code builds and code deployment)
- Manual or non-existent testing strategies
- Manual creation and configuration of any environment
- Failure to properly understand and test the deployment process, resulting in extended deployment times to production
- Time spent waiting for shared resources to become available
Who better to ask about bottlenecks than the
team that interacts with the software delivery pro-
cesses on a daily basis? They’ll be able to provide
valuable insight.
We recommend backing up whatever you discover
with data — things like logs and customer
feedback. This will help you understand the
entire workflow: the customer impact, which tasks
are performed most often, how long each takes,
and how often each fails. Armed with this
information, you can plan and prioritize which pain
points to tackle based on what would
provide the most benefit.
As the concept of DevOps has grown from
buzzword to a necessity in the last few years, the
number of tools available to automate processes
within the area of software delivery has grown
exponentially. These tools can be divided into the
five categories below. Organizations that have suc-
cessfully integrated DevOps principles within their
software delivery pipeline automate tasks using
tools from each of these major tool categories.
A common myth is that version control is
designed to hold source code only. We would like to
dispel this myth: source control should store
everything that encompasses a releasable version
of your software. In short, application code, infrastructure
code, configurations, build mechanisms,
and databases should all be maintained using a
consistent version control strategy.
The initial time investment required to script all
aspects of your software will be far outweighed by
the long-term benefit of being able to view your entire
system as a single, releasable unit. If you're doing
it right, any authorized team member should
be able to re-create any version of the software
system at any point in time.
You likely already have a version control
system in place. If you have the opportunity
to start from scratch, however, consider tools like Git/
GitHub, Subversion, Bitbucket, and Microsoft Team
Foundation Server. These are not the
only version control tools that work well within a
delivery pipeline implementing DevOps principles;
we have yet to find version control software
that doesn't integrate well with other
technologies. The VC tools mentioned above were
highlighted because they've been found to be
flexible and reliable by a number of organizations,
including Slalom.
Some things to consider as you decide on a VC tool are:
- Centralized vs. distributed model
- Team size
- Open source vs. proprietary
- How well it integrates with other parts of the DevOps toolchain
PuppetLabs aptly defines configuration manage-
ment (CM) as “the process of standardizing
resource configurations and enforcing their state
across IT infrastructure in an automated yet agile
manner.” Expanding on this definition, you write
code that describes desired configuration states,
and your chosen CM tool does the heavy lifting
to ensure that this configuration is applied to
desired targets in a consistent manner. Whether
you’re provisioning infrastructure, deploying your
application, enforcing server configurations, or
updating security policies, configuration manage-
ment tools automate tasks which were previously
performed using slow, manual steps.
Only a few years ago, these tasks would take days
or even weeks to complete, with a high possibil-
ity of configuration error given the number of
manual steps. As more and more organizations
shift toward using the scalability of cloud-hosted
infrastructure, configuration management tools’
ability to seamlessly apply desired configuration
states to hundreds or thousands of nodes at once
is extremely beneficial.
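The desired-state model can be sketched in a few lines; this is not any real CM tool's API, just an illustration of how declared state drives actions:

```typescript
// Sketch of the desired-state model behind CM tools: compare the observed
// state of a node to the declared state and emit only the changes needed.
// Running it twice changes nothing the second time (idempotence).
type PackageState = Record<string, string>; // package name -> installed version

function planChanges(current: PackageState, desired: PackageState): string[] {
  const actions: string[] = [];
  for (const pkg of Object.keys(desired)) {
    const version = desired[pkg];
    if (current[pkg] === undefined) {
      actions.push(`install ${pkg}@${version}`);
    } else if (current[pkg] !== version) {
      actions.push(`upgrade ${pkg} to ${version}`);
    }
    // Already at the desired version: do nothing.
  }
  return actions;
}
```

The real tools apply this loop across hundreds or thousands of nodes at once, which is exactly the scalability advantage described above.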
There are many open source configuration
management tools available. Some popular ones
are Chef, Puppet, Ansible, SaltStack, and CFEngine.
When choosing a specific CM tool to adopt, consider:
- Does the tool require the DevOps team to learn a new language?
- How does the tool integrate with other parts of the DevOps stack?
- How complex is the tool to learn, in terms of setup and getting started?
- Push vs. pull: how are updates to nodes triggered?
- Is it straightforward to scale the number of managed instances both up and down?
- How good is the available documentation? Is there active community support?
Each of the aforementioned CM tools has
advantages and disadvantages, so we recommend
taking the time to do your research. Consider
some of the questions we’ve raised along with
the needs and requirements of your organization
before choosing one to use.
Build system software could arguably be the heart
of your software delivery pipeline. From compil-
ing code to orchestrating various levels of testing
suites, your build system will have a hand in some
very important tasks.
Cooperating directly with your chosen version
control software, the build system can be config-
ured to validate the integrity of code checked-in
by developers and report any build errors and unit
test failures. By doing this, the build system acts
as a virtual safety net. If build errors are reported
or testing suites fail, the proposed changes never
make it to the deployment package. The value of
this is immense, allowing organizations to have an
additional degree of confidence when deploying
code to production servers tens or even hundreds
of times per day.
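The safety-net logic reduces to a simple gate; the shape of the check-in result below is invented for illustration:

```typescript
// Sketch of the gate a build system applies on every check-in: only
// changes that compile cleanly and pass all tests reach the
// deployment package.
interface CheckInResult {
  buildErrors: string[];
  failedTests: string[];
}

function canPromote(result: CheckInResult): boolean {
  return result.buildErrors.length === 0 && result.failedTests.length === 0;
}
```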
The decision on which build tool to integrate
within your solution will be based on a number of factors:
- Does it interact well with other members of the toolchain, particularly version control?
- What's the level of support for third-party software via plugin libraries, etc.?
- Written configuration or web interface: how are jobs created and scheduled?
- What is the quality of available documentation?
- User preferences and prior experience with specific technologies
With a build system in place, it’s possible to
further streamline this process using an artifact
repository tool. When we develop an applica-
tion, we commonly use supporting development
libraries from a variety of different sources. These
libraries often get stored in the darkest depths of
your source control system and become difficult
to manage as projects scale in size.
This issue often materializes when multiple teams
require access to different versions of a library
with an ambiguous owner. Thankfully, situations
like this can be avoided through the use of an
artifact repository tool, which provides a central
repository for commonly employed dependencies.
This greatly simplifies the distribution of artifacts
among various project teams and has the added
benefit of versioning these files.
Along with maintaining source code using version
control software, it’s important to also store suc-
cessful software builds, so you can deploy any ver-
sion of your software at any point in time. Maybe
you’re deploying the latest build, or perhaps three
versions ago as part of a rollback. Storing pack-
ages in a repository, like NuGet or Artifactory, will
provide you with the flexibility to fully control both
the what and when of software deployment.
Along with version control and build system
software, deployment tools make up an important
part of a software delivery pipeline, because they
automate the deployment of code to specific
server instances. A number of popular build system
tools (Jenkins, TravisCI, etc.)
also conveniently offer a deployment
component. Using a combined build system/
deployment tool will allow you to consolidate
some of the delivery pipeline processes, but may
lack the flexibility and scripting capabilities of
dedicated deployment tools like Capistrano or
Octopus Deploy.
When choosing a tool for application deployment, consider:
- What steps are required to deploy your application (straightforward vs. complex)?
- Do you require a tool that offers extensive scripting capabilities?
- Does the tool require the DevOps team to learn a new language?
- Ease of use and documentation
- User preferences and prior experience with specific technologies
- Release management: does the tool offer code promotion between environments (Dev >> Test >> Production)?
Organizations that fully integrate DevOps principles
within their software delivery process place
an immense amount of importance on monitoring.
In a previous blog post, we shared how we were
able to achieve a 95 percent increase in velocity
by implementing DevOps principles with a client.
A big piece of that came from monitoring and
alerts setup for the solution — like notifying the
team when performance was falling on a critical
data job for the application. By doing this, the
team had a much better understanding of where
to start addressing problems, vs. finding out via a
user and starting at square one.
When we talk about monitoring, it’s usually in one
of two areas: application or system monitoring.
At the application level, metrics like requests per
second, transactions per second, and response
times are collected to gauge web-level performance.
At the system level, by contrast, metrics
relating to the underlying hardware, such as CPU
utilization and memory usage, are gathered. With
cloud systems, we can also view the state of your
resources — for example, bandwidth utilization
across web servers, table performance on a
database, or custom monitors to give even further
depth into your application’s execution.
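As an illustration (the class, threshold, and alerting rule below are invented, not a real monitoring API), an application-level metric with a notification trigger might look like:

```typescript
// Hypothetical application-level metric: collect response-time samples
// and decide when a notification (e.g. to Slack or JIRA) should fire.
class ResponseTimeMonitor {
  private samples: number[] = [];

  record(ms: number): void {
    this.samples.push(ms);
  }

  average(): number {
    if (this.samples.length === 0) return 0;
    return this.samples.reduce((sum, s) => sum + s, 0) / this.samples.length;
  }

  // True when the average breaches the threshold, i.e. when the team
  // should be alerted before a user reports the problem.
  shouldAlert(thresholdMs: number): boolean {
    return this.average() > thresholdMs;
  }
}
```

The point of wiring a trigger like this into the pipeline is the one made above: the team learns where to start from the alert, instead of starting at square one from a user report.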
When examining available monitoring options, consider:
- Is setup intuitive, and is the presented information easy to understand?
- Is analysis provided on the gathered metrics?
- Is the software open source, and does it offer an API for custom metric creation?
- Are there notifications based on metric triggers? If so, are there third-party integration points for collaboration mediums such as JIRA or Slack?
- Is it straightforward to scale the number of managed instances both up and down?
Fully integrating DevOps principles within an
organization’s software delivery pipeline can be
challenging. For many team members who have
used traditional software delivery techniques for
years, the idea of deploying and releasing code
to production servers multiple times per day may
seem extremely far-fetched.
Considering this, it’s important to pick the right
DevOps tools to integrate within the software de-
livery pipeline. Selecting the right tools will enable
you to demonstrate the benefits of DevOps and
alleviate the fear of change within an organization.
Instead of rushing to introduce four or five new
tools at once, you should begin by introducing
tools that will bring the largest benefit to most
people, as identified by your investigation into the
system bottlenecks.
In addition to demonstrating the enormous ben-
efits that DevOps tools can bring, it’s important to
spend time educating team members about how
the tools apply to the software delivery pipeline.
If a new tool is simply thrown at a team with no
instruction on how to use it, it’s highly likely that
they will reject the tool and regress to their previ-
ous method of working. Instead, DevOps team
members should schedule time to help others
learn about new tools and answer any questions
they may have. Doing so will provide enormous
benefits, greatly increasing the chances that the
adoption of new DevOps tools goes smoothly.
To learn more about DevOps best practices, check
out our list of five things you can start doing right
now that will get you on your way to DevOps and
our post on why DevOps can have a huge impact
on the efficiency of your SDLC.
MODERN WEB
Worlds Collide
The Convergence of Modern Web, Classical
Web and Native Mobile Development Paradigms
For years, I have switched
between building native mobile,
classical web, and modern web
applications. The context switching
was, well, painful, until now.
Recently, I switched from building native
iOS and Android applications to a modern
web application, but this time the switch
was to Angular 2 with TypeScript.
The learning curve no longer felt daunt-
ing. New ECMA standards, TypeScript
and Node improvements have organized
the way modern web applications are
constructed and delivered.
Modern web applications now have simi-
lar implementation paradigms as classi-
cal web and native mobile applications.
Brendan Schoch
Level Setting
To understand this evolution, let’s level set some
terminology.
Native Mobile Application — An application that
runs on the core operating system of a mobile
device, e.g. iOS and Android applications from the
App Store or Play Store
Classical Web Applications — A web applica-
tion with server side code that is processed and
rendered dynamically e.g. Java Spring, .NET MVC/
Webforms
Modern Web Applications — Static, single page
web applications leveraging JavaScript technolo-
gies e.g. Angular, React, Ember
The Evolution
Native mobile applications and classic web
applications have many commonalities when it
comes to the programming details. They are often
built with mature object oriented languages. They
have lifecycle methods. They provide the luxury of
a compiler. And they have well documented devel-
opment patterns. The general structure of these
applications is organized and easy to consume
across language and platform.
In addition to the language feature similarities,
applications written in object-oriented languages
have well documented patterns and practices for
dependency injection and inversion of control (IoC).
Modern web applications on the other hand have
not ofered the same advantages. Node provided a
path and ingenious engineers crafted software to
fill JavaScript language gaps. Using NodeJS, devel-
opers could start to inject module dependencies
using require statements.
The frameworks and libraries that emerged
(Angular, React, etc.) provided a new, streamlined
way to deliver client-side web applications. But the same
organization, structure, patterns, and debugging
tools did not exist. For many organizations, that barrier
was a risk: the learning curve was high, and there
weren't enough skills to embrace the single page
application model completely. It was confusing
to understand exactly how the frameworks
worked together. A lot of less-than-optimal code
was written.
What’s Changed
ECMA and TypeScript
What is different with the emerging standards?
How has the learning curve been shortened?
Well, it starts with the new ECMA 6 standards and
TypeScript.
The new changes take many of the paradigms
for building apps that felt forced in ECMA 5 and
make them fully fledged features in ECMA 6. It all
starts with import and export in ECMA 6 as well as
TypeScript classes.
It has become much easier to create organized
constructs that can be imported or exported
amongst modules. In ECMA 5, the concept of
object-oriented JavaScript was achievable to a
point. But as you pulled off the layers, it never
worked quite as expected or as we would have
liked. TypeScript allows the engineer to write code
in an object-oriented manner that is transpiled
into JavaScript.
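For instance (the module and class names here are illustrative), a class defined in one file can be exported and imported by name elsewhere, much as in classical languages:

```typescript
// logger.ts — a hypothetical module. `export` makes the class available
// to any other module that imports it.
export class Logger {
  // Parameter property: declares and assigns `prefix` in one step.
  constructor(private prefix: string) {}

  format(message: string): string {
    return `[${this.prefix}] ${message}`;
  }
}

// Elsewhere, a consumer would write:
//   import { Logger } from "./logger";
const log = new Logger("app");
// log.format("started") → "[app] started"
```

The dependency is explicit at the top of the consuming file, and the transpiler turns the class and module syntax into plain JavaScript.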
Application Frameworks
The new standards are only part of the solution.
The application frameworks and libraries bring
it all together. For this post, I’ll concentrate on
Angular 2. If you remember, I talked about classi-
cal web applications and lifecycle hooks that are
organized by classes. I also mentioned dependency
injection. These constructs did exist in
Angular 1; however, using them required a good
understanding of JavaScript/Node to know exactly what
was happening.
Let’s look at the syntax of an Angular
2 Component:
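A representative component might look like the following sketch. The names are illustrative, and the @Component decorator here is a self-contained stand-in so the example runs on its own; a real application would import Component from @angular/core:

```typescript
// Stand-in for Angular's @Component decorator, defined inline so this
// sketch is self-contained. A real app would instead write:
//   import { Component } from "@angular/core";
function Component(metadata: { selector: string; template: string }) {
  return function <T extends Function>(target: T): void {
    (target as any).metadata = metadata; // attach the view metadata
  };
}

// A service dependency, to be injected through the constructor.
class UserService {
  currentUser(): string {
    return "ada";
  }
}

@Component({
  selector: "user-greeting",
  template: "<h1>{{ greeting }}</h1>",
})
class UserGreetingComponent {
  greeting = "";

  // Constructor injection, as in classical DI frameworks.
  constructor(private users: UserService) {}

  // Lifecycle hook: the framework calls this after construction.
  ngOnInit(): void {
    this.greeting = `Hello, ${this.users.currentUser()}`;
  }
}
```

The decorator carries the view metadata, the constructor carries the dependencies, and ngOnInit plays the role a viewDidLoad-style lifecycle method plays in a native iOS view controller.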
At first glance, this structure should look similar to
an iOS app written with Xamarin.
It is important to note: there are major differences
in how TypeScript applications and native
mobile applications are compiled and executed.
However, the structure and syntax are converging.
For someone who has experience with classical
languages, the JavaScript world is no longer as
daunting. It is merely new syntax rather than a
complete paradigm shift.
Let’s take a closer look at some notable parts of
this component. The import/export makes it much
easier for the engineer to understand exactly what
dependencies and modules the component is
going to use.
The class structure allows the engineer to create
a self-contained module with public and private
variables and functions. Imported dependencies
are now injected directly into constructors, similar
to dependency injection methods used in classi-
cal applications.
The application really starts to feel like a classical
MVC application. We can use many of the same
principles and patterns that were used in classical
web applications while fine-tuning those patterns
to the nuances of the JavaScript language.
Additional Considerations
I’ve talked a lot about the positives of the new
standards. However, there are some drawbacks.
1. The standard is emerging — these standards
are still being developed as well as being imple-
mented by browsers, so you may deal with some
bugs. Additionally, if you are an organization that
supports legacy browsers, you may not be ready
for these standards.
2. What about the compiler and debugger? — You
are still reliant on console logs or IDEs that sup-
port local debugging. React has done some work
in this space by introducing Flow for Babel. The
build-and-deploy process is streamlined and optimized
with tools such as Webpack.
3. Versioning, Config and Environment Manage-
ment — If you are building a web platform with
multiple environments, versioning and configu-
ration management is an important topic. Like
JavaScript-based web applications of the past, the
configuration constructs are not nearly as formalized
as in classical applications.
Application Build & Deployment
How single page application development has
become easier, faster, and more efficient

Brendan Schoch

I not so nostalgically remember the days of
configuring IIS to deploy an Angular or Backbone
application. It felt like a lot of work to serve static
webpages. Luckily, that has changed. The build
and deployment tools available today have made
single page application build and deployment
much more efficient and much easier. In this post,
we will take a look at compiling an application
with webpack as well as deployment via S3,
CloudFront and NGINX.

Application Build and Bundling with webpack

Webpack is a JavaScript bundler. You specify
configuration files and webpack parses your files,
locates dependencies via the import statements in
your application and constructs a dependency graph.
The advantage of this model is that you can create
bundles for components that contain only the re-
sources required (e.g. CSS, JS, Images) to run that
particular page. In case you glossed over that
sentence, I repeat: at runtime, only the resources
required for a specific page are loaded. Additionally,
we can use plugins to minify and consolidate
common code.
So how does webpack work? It starts with
configuration files. In a configuration file, the entry
points and output files are specified. Webpack
parses the import statements, builds a dependency
graph, and bundles the application file(s).
It is common practice to segment entry points
and outputs; common segments include App for
volatile application source and Vendor for more
stable vendor files. For example, with Angular 2,
after the output files are created they are inserted
into index.html.
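A minimal configuration along those lines might look like the sketch below; the paths and entry names are illustrative, and a real webpack.config.ts would typically build output.path with Node's path.resolve:

```typescript
// webpack.config.ts — sketch of the App/Vendor split described above.
const config = {
  entry: {
    app: "./src/main.ts",      // volatile application source
    vendor: "./src/vendor.ts", // stable third-party imports
  },
  output: {
    path: "/dist",                // webpack requires an absolute path
    filename: "[name].bundle.js", // emits app.bundle.js and vendor.bundle.js
  },
};

export default config;
```

A plugin such as html-webpack-plugin then inserts the emitted bundles into index.html, as described above.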
This is just the tip of the iceberg. Webpack
offers plugins for many common tasks. Check out
the webpack documentation for more information and
examples.
Single Page Application Deployment
in the Cloud
AWS provides a couple of avenues for single page
application deployment. Each has pros and cons
and provides the architecture team flexibility in
managing the application. The quickest way to
get an application deployed is through S3 static
web hosting. Configuring an S3 bucket for static
web hosting takes a few clicks. AWS assigns the
bucket a URL and routes traffic to the correct index
and error pages. S3 can burst up to 800 requests/
second while remaining stable at up to 300
requests/second.
Applications deployed via S3 can be enhanced
by configuring Route53 and CloudFront. With
Route53, a custom domain name can be configured for the
application. With CloudFront, content is cached
and served from edge locations to decrease
latency.
If there are requirements that S3 and CloudFront
do not support, setting up a custom architecture with
NGINX is possible. Within a VPC, AMIs with NGINX
can be deployed and scaled as needed, and the servers
and architecture can be configured
to meet even the most challenging demands.
In Summary
Single page application deployment has come a
long way. It is now possible to build and deploy
a single page application in minutes. Application
deployment has not just gotten easier and
faster; it has gotten more efficient. Bundlers such
as webpack make page loads and dependency
management much more efficient.