
STORAGE
MANAGING THE INFORMATION THAT DRIVES THE ENTERPRISE

MARCH 2017, VOL. 16, NO. 1

Flash, flashier and flashiest: The state of solid state
Making flash even faster has little to do with the media itself and more to do with the infrastructure surrounding it.

EDITOR'S NOTE / CASTAGNA: Solution to all data storage problems is … almost here
STORAGE REVOLUTION / TOIGO: The interocitor, my storage testbed of the (immediate) future
SNAPSHOT 1: SAN and NAS arrays retain primary storage primacy
DEVOPS: Storage catching up with the DevOps revolution
SNAPSHOT 2: Solid state leads the primary storage wish list
CLOUD ADOPTION: What to consider when evaluating cloud storage providers
HOT SPOTS / SINCLAIR: Go hybrid and beat the 'time-to-provision' clock
READ-WRITE / RICKETTS: The cure for the secondary storage services blues


EDITOR'S LETTER / RICH CASTAGNA

Solution to all data storage problems is … almost here

Data classification can make your data smart enough to know what to do with itself.

WHAT'S THE HOLY grail of enterprise storage? The single "thing" that affects every bit of hardware and software that hooks into data center storage? The discovery of the end-all and be-all of not just "storagedom" but the entire IT realm and all the business processes continually gobbling up and spitting data out?

Hyper-converged infrastructure, you say? Interesting, but that could be considered mostly a reshuffling of the data center deck chairs. How about software-defined storage (SDS)? No, SDS really looks more like a shift in focus that de-emphasizes hardware than anything weightier than that (and it's still mostly sold by hardware vendors—go figure). Cloud storage? Nah, that's just another place to put your data. What about object storage, the current darling of the array set? You're getting warmer, because one of the coolest things about object storage is its ability to support extended metadata, and metadata is the underpinning of the classification of data, which is—indeed—the elusive holy grail of storage.

DATA CLASSIFICATION: IGNORED FOR YEARS

Yeah, yeah, I know I've taken up the banner of data classification on more than a few occasions, but convincing people (and vendors) that there's more to storage than zippy flash performance and high-capacity drives ain't easy. Latency, throughput, IOPS—none of that matters all that much if you don't know anything about the data that's being written and read.

And while there is consensus that classification of data is important, it has languished almost as an afterthought in most shops for decades. And it keeps on getting a bad rap. Remember ILM? Information lifecycle management endeavored to bring order to data disarray, but it didn't take long for ILM to become the kiss of death in the storage world. And back in the Stone Age, when mainframes roamed the earth, HSM, or hierarchical storage management, was the methodology for data classification and management. But all that seems to have been tossed to the IT trash heap with "new" storage architectures and infrastructure and the ongoing struggle to keep up with capacity demands, processing requirements, data protection and so on.

But that's actually the best argument for data classification, because it makes all of those things easier—and cheaper—to do. It helps get them done better, too.

KNOW WHAT IT IS TO KNOW WHAT TO DO

Still not convinced? Let’s try a little analogy. We’ll make believe you’re doing some spring cleaning on that catch-all hall closet packed with lots of stuff that didn’t seem to belong anywhere else. Digging through the disarray, you find something way, way in the back behind a tennis racquet with broken strings. What do you do with it? At this point, you have no idea what to do with it, of course, because I haven’t told you what that “thing” is.

It could be a button that fell from a coat hanging above or maybe an attendee badge from the 2009 VMworld conference or maybe it’s a long-lost lottery ticket. You might sew the button back on the coat, deep-six the badge or check to see if you’re a lucky winner and should be shopping for real estate in the south of France rather than reading this column.

The thing is, if you know what the thing is, you’ll know what to do with it. The same goes for data.

CLASSIFICATION IS CLOSE, BUT NO CURE-ALL

Classifying data so you know some basic facts about it—like what's inside the file, why it was created, who created it and who should be able to look at it or not—creates a wealth of information that determines how that piece of data should be handled and cared for. If it's the corporate crown jewels, you may need to back it up multiple times, encrypt it and give limited access. If it's plans for the company Christmas party, less stringent measures are likely in order. But you wouldn't know that without knowing more about the file than most current file systems reveal.

ILM cratered because it was an extra step, a lot of extra steps, actually, requiring a lot of manual intervention and attention. Leaving something like classification of data up to the whims of humans is a pretty effective way of setting it up to fail. But if the process can be automated based on the application creating the file, the person using the application, the group that person belongs to, the security clearance of the file originator and so on, the files themselves will be packed with critical info about their disposition.
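To make the idea concrete, here's a minimal sketch, in Python, of what attribute-driven classification might look like. Every rule, field name and label here is a hypothetical illustration, not a description of any shipping product.

```python
# Illustrative sketch of automated data classification: derive a
# classification label from attributes known at file-creation time
# (application, user, group, clearance), then pack it into extended
# object metadata. All rules and field names are hypothetical.

def classify(app: str, group: str, clearance: int) -> str:
    """Map file-origin attributes to a handling class."""
    if clearance >= 3 or app == "finance-erp":
        return "restricted"   # encrypt, back up multiple times, limit access
    if group in ("engineering", "legal"):
        return "internal"     # standard backup and access controls
    return "general"          # e.g., the company Christmas party plans

def tag_object(metadata: dict, app: str, user: str, group: str,
               clearance: int) -> dict:
    """Attach classification info as extended object metadata."""
    metadata.update({
        "x-class": classify(app, group, clearance),
        "x-created-by": user,
        "x-origin-app": app,
    })
    return metadata

print(tag_object({}, app="finance-erp", user="rcastagna",
                 group="editorial", clearance=1))
# {'x-class': 'restricted', 'x-created-by': 'rcastagna',
#  'x-origin-app': 'finance-erp'}
```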

In a data-centric world, data should do the talking. “Sorry, you can’t copy me to that cloud. … Hey, it’s time to archive me. … No, don’t attach me to an email.”

MOST STORAGE STILL NOT SMART ENOUGH

When you consider how many ways solid classification of data can be leveraged, it’s a wonder that every storage shop isn’t doing it today. But maybe it isn’t so surprising, as so few storage vendors actually build these capabilities into their products. The technology is, however, available in other forms and formats from compliance, security and other product category vendors.


For example, I came across a very useful document called The Definitive Guide to Data Classification, published by Digital Guardian, a data security vendor. Yes, classification is key to effectively securing data, too.

While alternatives are available, it's likely there's some resistance to the vendor lock-in of putting all your data classification eggs in a single vendor's basket. But maybe as interest in object storage grows and becomes more widely implemented, it will encourage vendors to develop some level of metadata standardization as well. That way, applications, OSes and file systems will need just a single vocabulary to act on classified data appropriately.

RICH CASTAGNA is TechTarget’s VP of Editorial.


STORAGE REVOLUTION / JON TOIGO

The interocitor, my storage testbed of the future

Sci-fi flick inspires when turning a hodgepodge of an aging rig into the file storage infrastructure of today, and beyond.

FANS OF CLASSIC science-fiction movies might appreciate the feelings that overwhelmed me this past holiday season when I confronted the challenge of building a fairly complex data storage infrastructure from an assortment of undocumented parts. There I was, a veritable Dr. Cal Meacham from the 1955 classic This Island Earth, working to create a storage platform no less complicated than your proverbial "interocitor," but without so much as a "Metalunan" catalog to document the component parts. I wasn't sure whether my creation would be useful for storing data, let alone performing some sort of "electron sorting" or other exotic workload mentioned in the movie. But I attacked the project with zeal, anyway.

THE BACKSTORY

My test bench, consisting of a couple of DataCore SANsymphony servers connected in a failover cluster, had grown into a hodgepodge of external storage boxes connected with everything from USB to eSATA and Fibre Channel (FC). Every slot of every installed StarTech.com eSATA board was maxed out, their drives virtualized by DataCore into storage pools split evenly between the two servers. Everything on one set of disks was replicated on the other. With my 2016 research projects ending, it was time to rethink the data storage infrastructure, make sense of this jumble and start fresh in the new year.

Around the holiday, a friend mentioned how his shop was retiring a bunch of Promise Technology arrays—three, to be exact—attached via iSCSI, FC and SAS. He said I could use these to consolidate the octopus of data storage infrastructure around each of my servers and that he could save them from the trash heap and deliver them to me for “upcycling” if I wanted. I did want them, of course, and a few days before Christmas, he appeared in my driveway with the gear in tow.

I should have known my life was about to change when he hastily offloaded his salvage and made a quick getaway. Each rig was heavy, apparently containing some terabyte and 500 GB SATA drives of various manufacture, requiring me, my friend and a couple of my teenage daughter's male friends to heave them into my office.



“I will call you next week to see how you are doing,” my friend said hurriedly as he peeled away and down the road.

FIRST REEL: SETUP PHASE

It was almost as though he expected a person of my limited intellectual prowess to fail the interocitor test. But, like Meacham in the golden age of sci-fi flicks, I started with the closest thing I could see to a beginning point: I bought three sturdy equipment shelves and loaded them with the array chassis: 12 bays, another 12 and 16 more. I never realized how much a few hundred terabytes weighed!

Upon visual inspection, there were no iSCSI chassis in the mix, but rather two FC and one SAS. Moreover, powering on each rack produced a cacophony of sound akin to an airplane hangar, much too noisy for my office.

So after deciding how to rack the components, I looked into cleaning fans on power supplies, and then how to provide sufficient power and network connectivity to enable me to place the whole thing in a storage room about 100 feet (and two walls) away from where I work. There was also the issue of connectivity between the racks and the two servers.

SECOND REEL: CHALLENGE PHASE

The plan had been to transfer the contents of the external eSATA, USB and iSCSI storage onto the bigger virtual capacity pools built using the new gear. To do this, I needed to connect the new arrays to the servers, format and pool them with DataCore, and copy the contents of each small storage box so I could retire them.

That was where I encountered the first challenge. I had no additional slots in my servers for additional host bus adapters (HBAs), whether FC or SAS. From what I could find on eBay, I needed a PCIe x16 slot for each HBA. My servers had two, one being used for a video card, the other for a two-port FC adapter being used for failover clustering. The eSATA external port cards were using PCIe x1 slots, leaving a couple of good old-fashioned PCI 32-bit slots. I could buy HBAs dirt-cheap from several vendors belonging to the Association of Service and Computer Dealers, or even on eBay, but they were no good to me if I had no slots.

To make a long story short, it turned out the biggest rig was actually an iSCSI one someone had retrofitted with an FC controller, for whatever reason. I discovered this by chatting with a very helpful Promise Technology support guy just after New Year's, who I imagined shaking his head when asking: "Why don't you just buy the latest VTrak from Promise?"

THIRD REEL: FINAL PHASE

It will require considerable testing to see whether the controller transplant will work. Either way, I am left with both a SAS and an FC rig. We may end up placing the FC controller from the big rig into the SAS kit, converting it to FC. That would allow me to connect each storage device to one of the two FC connectors on the existing HBA in each server. Alternatively, I could buy a used Brocade FC switch, again on the cheap (sub-$80), from one of my sources in the secondary market and just cable everything to that.

Either way, my interocitor is up and running, and shortly, all data storage infrastructure will be virtualized, and all those little four-drive arrays retired. Well, until the next time I need some elbow room.

The next step is to overlay the entire platform with StrongLINK from StrongBox Technologies and add an LTO-5 or better tape storage component running the Linear Tape File System. That way, rarely accessed older data can migrate automatically to tape.

I invested less than a couple hundred dollars and some sweat equity to build a good data storage infrastructure that I can scale over time. That's what I call a special Christmas. Cue the Metalunans.

JON WILLIAM TOIGO is a 30-year IT veteran, CEO and managing principal of Toigo Partners International, and chairman of the Data Management Institute.


FLASH

The state of solid state

Making flash even faster has little to do with the media itself and more to do with the infrastructure surrounding it.

BY GEORGE CRUMP

FLASH STORAGE SYSTEMS changed the enterprise. The move to an all-flash array has almost eliminated storage performance problems, for now. But user expectations and sophistication of applications will quickly catch up, and the need to improve performance will never stop. Storage vendors are not standing still, however, and there are several innovations on the horizon that will allow all-flash systems to stay ahead of users' expectations.

The key to making flash faster has very little to do with the media itself, actually, and more to do with the infrastructure that surrounds the media. For the most part, flash—as density increased—has actually gotten slower, especially on write I/O. That said, performance is still substantially better than hard disk alternatives, and the media remains much faster and lower latency than the components that surround it.

The challenge facing flash vendors is that the media is so fast and so free of latency that the rest of the solid-state package slows it down. Whether it is a flash drive or an all-flash array, vendors need to improve the packaging in order to improve performance.

IT’S THE CPU

Today's flash storage systems are primarily software and most often run on relatively standard Intel server hardware. At the heart of the hardware is the CPU. The faster the CPU, the faster the software executes and the faster the all-flash array appears to be. In fact, most performance upgrades to all-flash arrays over the past three to four years have had much more to do with the power of the CPU than improvements to the media itself.

The problem facing storage software vendors is that the way CPUs are becoming more powerful is not as much from raw speed boosts as it is from increasing core density. Only a few vendors have fully exploited multithreading to correctly leverage the cores in the storage hardware that their software runs on. Those vendors that have exploited multithreading have achieved industry-leading performance with fewer CPUs (since they can leverage all the available cores), providing them with a competitive cost advantage.

MORE EFFICIENT STORAGE SERVICES

Storage systems, by and large, are known for the features they deliver—especially all-flash storage systems. In addition to standard software features like snapshots and replication, most all-flash arrays provide cost-saving features like deduplication and compression. Hybrid flash storage systems, meanwhile, automatically move data between flash and HDD tiers. Eventually, this data movement may happen between multiple types of flash offering different levels of performance.

The problem is each of these features requires computational overhead and, in most cases, adds to the I/O burden. Software vendors are working on making their applications more efficient so they reduce the amount of latency their products add to the overall flash storage system. Obviously, one way to address this is to leverage multicore processors, as described above. In addition, vendors need to improve deduplication and compression efficiency. This improvement will come largely by changing the way the array manages the metadata overhead that each of these features requires.

Are hard drives dead?

WITH ALL THE advancements in flash storage and the publicity that the technology gets, it is fair to ask about the future of the hard disk drive. Most flash vendors now claim price parity with HDD-based systems. If an organization can get a flash array for the same price as a hard drive system, why buy a hard disk?

First, you have to closely examine the first part of that question. Have flash systems really reached price parity with HDD systems? When comparing flash to high-performance hard disk arrays, the answer is yes. But when comparing to capacity HDDs, the answer is, in general, no. Modern object storage systems can safely use 8 TB-plus hard drives, and even apply deduplication to them and maintain a considerable cost advantage over flash systems.

Certainly, there is a significant performance difference, but for data that is being archived or doesn't require the performance of a flash array, these systems are more cost-effective options.



NVMe: FASTER FLASH CONNECTIONS

Another area to explore is the connections within the flash array. Today, most all-flash arrays are essentially servers running storage software. Those servers have CPUs connected to the flash drives, typically through a SAS connection. While SAS has plenty of raw bandwidth, the technology was designed in the hard drive era, not the flash era. That means it uses standard SCSI protocols to attach SAS flash drives.

The SCSI protocol adds latency, so vendors looked for something better, with some even creating their own proprietary protocols. While these proprietary protocols improved performance, if left to continue, every flash vendor offering would require its own driver. In the enterprise, this means that one server would need a flash driver for each flash device it wants to store data on. The vendors would also have to develop drivers for every OS and environment.

What vendors and IT professionals needed was a standard protocol specifically for accessing flash storage systems. The industry responded with nonvolatile memory express (NVMe), a standardized protocol designed specifically for memory-based storage technology.

NVMe streamlines the software I/O stack by reducing the unnecessary overhead introduced by the SCSI stack. It also supports more queues than standard SCSI, increasing queues to 64,000, from the one queue supported by the legacy Advanced Host Controller Interface (AHCI). And since each NVMe queue can support 64,000 commands (up from the 32 commands supported by AHCI in its single queue), it should mean that an NVMe drive is 2x to 3x faster than SAS or SATA connections. Also, since it is an industry standard, NVMe drives from one vendor should interoperate with another vendor's drives.
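The queueing claim above is easy to put in perspective with a quick back-of-the-envelope calculation (a sketch using only the spec maximums quoted in this article; real drives ship with far fewer queues):

```python
# Outstanding-command capacity implied by the numbers cited above.
ahci_slots = 1 * 32              # AHCI: one queue of 32 commands
nvme_slots = 64_000 * 64_000     # NVMe: up to 64K queues x 64K commands

print(f"AHCI : {ahci_slots:>13,} outstanding commands")
print(f"NVMe : {nvme_slots:>13,} outstanding commands")
# AHCI :            32 outstanding commands
# NVMe : 4,096,000,000 outstanding commands
```

The practical 2x to 3x gain quoted above comes from removing SCSI-stack overhead and serialization, not from actually filling four billion command slots.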

After flash

FLASH MEMORY IS not the endgame of memory-based storage technology. Remember that DRAM is still faster (especially on write I/O), and it is more durable. But DRAM's volatility is its biggest weakness. The next step in memory evolution is to add persistence to DRAM. Known as nonvolatile memory, there are several technologies competing for the attention of systems manufacturers and IT professionals.

One of those technologies is Intel's 3D XPoint. Intel claims that these devices will have lower latency, higher write performance and better durability for about double the price of flash memory. But Intel is not the only company offering nonvolatile memory products. Companies like Crossbar, Everspin and others are also bringing products to market.


Flash drive vendors are quickly adopting and implementing NVMe in their drives, while most flash array vendors have either announced or are set to announce NVMe-based versions of their products. The result is the movement of data within the storage system should improve significantly over the next year. For shared storage systems, however, there is still a storage network that needs traversing.

Most major networking vendors, including Brocade and Cisco, have announced support for NVMe over Fabrics, which should be available in both Ethernet and Fibre Channel flavors. This standard will take longer to work its way into the data center, but over the next few years, many data centers will make the transition. The good news is that most products coming to market will support both legacy SCSI-type access and NVMe simultaneously.

For now, most gains in connectivity will come from continuing increases in bandwidth and the more intelligent use of that bandwidth.

FLASH DIMM

Most NVMe products install through the PCIe interface, but there is a faster channel available to storage memory providers: the memory bus itself. While the PCIe bus is a shared bus used for a variety of connections, the only device used in the memory bus is memory. Obviously, the memory bus has primarily been the domain of dynamic RAM (DRAM), but now, flash manufacturers are looking to exploit this high-speed path to the CPU as well. While a flash DIMM is slower than DRAM, it offers a much higher capacity per DIMM and is much less expensive.

Vendors have delivered two forms of flash DIMM technology. In the first form, the flash DIMM looks like a flash drive, and it is used as a high-speed storage device. The DIMM-as-storage option is an ideal place to put very active files like virtual memory paging files.

The other form of flash DIMM technology is to have the flash DIMM act as memory instead of storage. The same advantages (density and cost) apply, and the disadvantage (lower performance than DRAM) is not as significant as you might think. In most designs, the DRAM DIMM acts as a cache to the flash DIMM: New writes are written to DRAM and then destaged to the larger flash area until the data needs to be read again.

The key payoff for flash as system memory is the potential of deploying twice as much memory per server at about half the cost. That combination is ideal for modern scale-out applications like Cassandra, Couchbase, Spark and Splunk. Most of these environments face the challenge of managing node proliferation, but that proliferation is caused by a shortage of memory, not CPU performance.



Another interesting use for flash DIMM is to prevent servers from ever losing data on a system crash. Think of a server that acts like a laptop. It simply goes to sleep if it loses power, instead of losing data. Then, when you restore power, it picks up where it left off.

CONCLUSION

For the first time, enterprises have the opportunity to provide more flash performance than most of their applications and users will need. But this is not true of all applications. In addition, as environments become more virtualized and applications continue to scale, this performance surplus will evaporate quickly.

Vendors remain focused on improving performance, but the next step will be harder than just adding flash to our normal system configurations. Keeping pace will require more efficient software as well as the improved internal and external connectivity outlined in this article.

GEORGE CRUMP is president of Storage Switzerland, an IT analyst firm focused on storage and virtualization.


SNAPSHOT 1

SAN and NAS arrays retain primary storage primacy


What important benefits do you want with your new primary storage purchase?*

77% Increase storage capacity
24% Better IOPS/general performance
18% Consolidate storage footprint
18% Improve performance for certain apps/data
14% Better data management
14% Higher storage utilization rates
9% Rearchitect storage for more virtualization
7% Reduce dependence on specialized SAN skills

Which primary storage systems do you currently have installed?*

68% SAN array
37% NAS array
11% Unified array
11% Converged
10% Hyper-converged

*MULTIPLE SELECTIONS ALLOWED

6x: Nearly six times as many enterprises plan to purchase a SAN (46%) over a unified array (8%) in the next year.


DEVOPS

Storage catching up with the DevOps revolution

Agile DevOps requirements are shifting how enterprises consume and deploy storage resources to a more cloud-focused approach.

BY CHRIS EVANS

THE TERM DevOps, a contraction of development and operations, represents a new way of working to deliver enterprise applications using Agile development methodologies. DevOps transfers responsibility for some of the operational functions of IT to development teams, allowing them to create, develop, amend and deploy applications in a rapid fashion, typically without need for any interaction with the operations teams.

To deliver an Agile or DevOps environment, the way in which resources, including storage, are consumed and deployed changes to a more cloud-focused approach. DevOps development depends on the agility of the IT infrastructure to deliver resources for creating and deploying applications as needed. So developers expect certain features from a DevOps infrastructure that are different from the way the developer community worked in the past. Typically, these differences include the following:

■ On-demand availability of resources: infrastructure resources available on demand for consumption when required in the development process. This may mean, for example, the ability to create a new development environment, complete with seed data, based on both container and virtual machine (VM) components.

■ Automation and workflow: development environments built on demand, and for that building process to be as automated as possible. In most cases, an application development framework will be built from a master template used to deploy the application and contain the needed components for it (e.g., database server, web server and so on).

■ Scale and transience: DevOps developments will often use multiple environments to test many application changes at the same time. Each developer may want their own environment, but only need it for a short length of time. This means DevOps environments should provide the capability to spin up an application and destroy it with regular efficiency.

■ Support for VMs and containers: Almost all DevOps processes rely on the development of applications within either VMs or as container instances. Storage platforms that offer native VM and container support provide an easier management and integration experience.

The use of DevOps as a methodology has introduced a range of new tools and frameworks for implementing a continuous development process. These include release management systems like Jenkins and Computer Sciences Corp.'s Agility; orchestration tools like Kubernetes and Mesosphere; and, of course, virtualization frameworks such as Docker, OpenStack and Vagrant. We are starting to see these platforms integrate storage in order to provide the degree of automation and security required for continuous development. Docker, for example, has extended its platform with a volume API plug-in that provides orchestration for persistent external storage. Kubernetes implements support for persistent volumes that can be provisioned from a range of sources, including traditional block and file interfaces (iSCSI, FC, NFS) or cloud and open source storage.
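As a concrete illustration of that Kubernetes support, here is a sketch that requests a persistent volume claim through the official Kubernetes Python client; the namespace, claim name and size are illustrative, and the cluster's configured storage plug-in does the actual provisioning.

```python
# Sketch: a development environment asking Kubernetes for persistent
# storage. The cluster's volume plug-in (iSCSI, FC, NFS, cloud, etc.)
# satisfies the claim; the developer never touches the array directly.
from kubernetes import client, config

config.load_kube_config()  # assumes a configured kubeconfig

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="dev-env-data"),  # illustrative name
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        resources=client.V1ResourceRequirements(
            requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="dev", body=pvc)
```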

We should also recognize that the public cloud plays a big part in DevOps, with platforms like AWS offering the capability to create and destroy development environments very easily. Storage is typically managed by the cloud platform and not exposed to the developer. As we will discuss later, one problem with using public cloud for continuous development is in the ability to seed environments with test data.

PROVISIONING

Challenges for implementing storage within a DevOps environment parallel the issues seen with creating a private cloud. Storage resources must be provisioned on a much more dynamic basis, offering the ability to create and destroy resources on demand. For example, storage simply needs to be consumable for the orchestration and management frameworks that create a DevOps environment, such as Kubernetes or Mesosphere. This means having automation APIs capable of creating LUNs and volumes, and mapping them to the application as required.

AUTOMATION: The continuous nature of DevOps integration means the automation of storage provisioning is essential. DevOps replaces the human element of storage workflow with automated processes.



Within application deployment frameworks such as OpenStack, the consumption of storage at a low level is achieved using plug-ins that allow vendors to expose their products to automation. The Cinder project of OpenStack covers the ability to dynamically create block-based storage and map it to an instance. There are similar projects for file (Manila) and object (Swift) storage as well. Most storage appliance and software-defined storage (SDS) vendors provide support for Cinder by offering a middleware plug-in to manage the process of orchestration. The middleware driver translates Cinder commands (like Create Volume or Delete Volume) into those for the storage platform, keeping track of these resources and their associated instances.
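For a sense of what those Create Volume and Delete Volume calls look like from the consumer side, here is a sketch using the openstacksdk client; the cloud name, volume name and size are illustrative, and the installed array's Cinder middleware driver handles the rest.

```python
# Sketch: creating and destroying a block volume programmatically,
# the pattern a DevOps workflow repeats at high frequency.
import openstack

conn = openstack.connect(cloud="devcloud")  # assumes a clouds.yaml entry

vol = conn.block_storage.create_volume(size=10, name="devops-scratch")
conn.block_storage.wait_for_status(vol, status="available")
print(f"volume {vol.id} ready")

# Tear-down is just as programmatic, which suits short-lived environments.
conn.block_storage.delete_volume(vol)
```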

Clearly, the rate of change in a development environment is considerably higher than in production. The turnover of resources will be high, and any storage platform should be capable of managing a high rate of configuration change. This can create a problem for legacy storage systems, where configurations were expected to be relatively static. Storage vendors increasingly recognize the need to drive their products "programmatically" using code rather than command-line interfaces and GUIs, and so API support has become an expected feature. These APIs should be capable of processing multiple requests in parallel (even if the resource changes are internally serialized).

MULTI-TENANCY

In general, developers aren't concerned with how their resources are delivered. The developer is concerned that their environment is available for use and working within agreed service levels. This means focusing on delivering multi-tenancy capabilities when providing storage within a DevOps environment. Multi-tenancy defines the ability to provide multiuser access to shared resources without any one user or "tenant" impacting another. Critical for DevOps environments, the multi-tenancy aspect ensures no one application environment can consume too many storage resources, either from a capacity or performance perspective. In fact, admins should limit the amount of resources consumed per environment, especially in a situation where the underlying hardware is shared with production.

RECYCLING: Secondary storage lets you use once-static backup data as the source for seeding DevOps environments. And, by reusing existing hardware, enterprises can save significant money while simultaneously solving the challenge of seeding development systems.




EFFICIENCY

Data optimization represents one area that significantly impacts delivering an efficient development environment. The need for data optimization is clear; you build most development environments from master images or, perhaps, copies of production data. That means features like data deduplication can significantly save on storage capacity.

Deduplication used in tandem with features like snapshots lets you create many test environments quickly and efficiently. Snapshots are particularly beneficial because they allow the cloning of VM instances with a minimum of overhead. Cloning can be much more practical than creating individual VM instances from scratch (and then configuring them), especially where lots of custom configurations have been applied.

SOURCING DATA

Accurate application testing requires using real-world data that reflects as closely as possible the production environment. In most development scenarios, it is typical to take a regular copy or snapshot of production data and use this as the seed for testing. Data does, of course, have to be suitably anonymized to ensure customer information is adequately protected.

In a private cloud environment, creating an image copy of production can be relatively easy and achieved through the use of snapshots, clones or replication. These techniques assume the development platform is compatible with production, allowing you to move a copy of a snapshot to another platform. Alternatively, both production and development could run on the same hardware, with quality of service ensuring the right level of performance for production data.

Sourcing data into the public cloud poses more of a problem, both in the cost of storing the data and in the time taken to replicate that data into the cloud environment. Products such as Avere Systems' vFXT can run on public cloud platforms and extend access from an organization's on-premises data into the cloud while improving accessibility to development data. The advantage of these products is that they only access active data, optimizing storage and networking costs.

ON DEMAND: Developers don't care how their data gets to the test application—they just want it available when they need it. For DevOps, it's about having storage and data on demand all the time.


A WORD ABOUT MONITORING

In a high-turnover environment where resources are created and destroyed on a regular basis, there is always the risk of storage going unused or being overconsumed. Enterprises often create development environments and then abandon or, most typically, forget about them, especially when it is easy to spin up environments on demand. At the other end, it's easy to get storage sprawl, where many development environments are created rather than reused. Monitoring and maintenance help cap capacity and performance growth and identify environments no longer in use. Monitoring is also important for implementing chargeback and needs to be granular enough to work at the level at which environments are being created.
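One simple monitoring pass, sketched below with the same openstacksdk client used earlier, is to flag volumes that are no longer attached to anything as reclamation candidates; the cloud name is illustrative, and a real chargeback report would need per-project granularity.

```python
# Sketch: find sprawl candidates -- volumes provisioned but not attached.
import openstack

conn = openstack.connect(cloud="devcloud")

for vol in conn.block_storage.volumes():
    if vol.status == "available":  # created, currently unattached
        print(f"unattached: {vol.name or vol.id} ({vol.size} GB)")
```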

STORAGE TECHNOLOGIES

The rise of DevOps has seen the emergence of new storage technologies that offer specific features appropriate for Agile development. These include the following:

■ Hyper-convergence: Storage is delivered from the same physical hardware used to run applications (either in a hypervisor or as containers). The hyper-convergence management software hides the view of storage and removes the management work of provisioning storage to new VM instances and containers. A hyper-converged product makes the DevOps process easier because the focus is on creating logical objects like VM instances, rather than physical resource management.

■ VM-aware secondary storage: The term secondary storage applies to all data stored by an enterprise for nonproduction use, including backups and archive. Storage hardware vendors have taken the opportunity to use VM backup interfaces to build systems that implement data protection to disk-based products that can be used for purposes other than backup and restore. The flexible nature of a VM image allows you to clone VMs and entire applications from backup images and run them directly from the secondary storage platform, saving on building out a separate DevOps environment.

■ Software-defined storage: SDS evolved from the first platforms to separate traditional dual-controller storage software from the hardware. Today, there are lots of scale-out SDS offerings for block, file and object. Many of these are also open source, and can be deployed relatively cheaply using commodity hardware. In development environments not focused on high levels of performance, a "self-build" storage product can offer significant savings over purchasing hardware from a vendor.

SDS: Software-defined storage offers a great opportunity to deliver resources for DevOps environments. Products are typically cheap (or open source), run on commodity hardware, and scale out on demand and in a granular fashion.


BUILD OR BUY?

In summary, the requirement for storage in a DevOps environment follows the path being forged by private cloud. Storage is becoming less visible, with automation doing the work done previously by storage administrators, removing the human factor from resource consumption.

Traditional storage is probably the least appealing option for DevOps environments, with modern scale-out products offering more attractive alternatives. You can also choose to build rather than buy, which offers significant cost savings over vendor hardware. Open source products, meanwhile, can reduce the overall cost and—with the pace of feature development—be a good match to the DevOps mantra of continuous development.

CHRIS EVANS is an independent consultant with Langton Blue.


SNAPSHOT 2

Solid state leads the primary storage wish list


How much primary storage do you have installed?

12% Less than 10 TB
17% 10 TB to 49 TB
11% 50 TB to 99 TB
11% 100 TB to 199 TB
7% 200 TB to 299 TB
5% 300 TB to 499 TB
6% 500 TB to 749 TB
4% 750 TB to 999 TB
19% 1 PB to 9 PB
8% 10 petabytes (PB) or more

Which technologies would you like most in your next primary storage purchase?*

46% Solid-state (flash) storage
33% Data reduction (deduplication/compression)
29% Storage virtualization
27% Cloud storage integration
24% Data archiving
21% Thin provisioning
20% Snapshots/replication
17% Automated storage tiering
16% High-speed Ethernet
15% High-speed Fibre Channel
12% Advanced caching
11% Storage resource management

*MULTIPLE SELECTIONS ALLOWED

SOURCE: TECHTARGET RESEARCH


CLOUD ADOPTION

What to consider when evaluating cloud storage providers

Security and compliance lead issues impeding cloud storage adoption, with latency, data movement and backup not far behind.

BY JEFF BYRNE

IF YOU HAVE a new app or use case requiring scalable, on-demand or pay-as-you-go storage, one or more public cloud storage services will probably make your short list. It's likely your development team has at least dabbled with cloud storage, and you may be using cloud storage today to support secondary uses such as backup, archiving or analytics.

While cloud storage has come a long way, its use for production apps remains relatively limited. Taneja Group surveyed enterprises and midsize businesses in 2014 and again in 2016, asking if they are running any business-critical workloads (e.g., ERP, customer relationship management [CRM] or other line-of-business apps) in a public cloud (see figure). Less than half were running one or more critical apps in the cloud in 2014, and that percentage grew to just over 60% in 2016. Though cloud adoption for critical apps has increased significantly, many IT managers remain hesitant about committing production apps and data to public cloud storage providers.

ADOPTION HURDLES

Concerns about security and compliance are big obstacles to public cloud storage adoption, as IT managers balk at having critical data move and reside outside data center walls. Poor application performance, often stemming from unpredictable spikes in network latency, is another top-of-mind issue. And then there's the cost and difficulty of moving large volumes of data in and out of the cloud or within the cloud itself, say when pursuing a multicloud approach or switching providers. Another challenge is the need to reliably and efficiently back up cloud-based data, traditionally not well supported by most public cloud storage providers.

How can you overcome these kinds of issues and ensure your public cloud storage deployment will be successful, including for production workloads? We suggest using a three-step process to assess, compare and contrast providers' key capabilities, service-level agreements (SLAs) and track records so you can make a better informed decision (see: "Three-step approach to cloud storage adoption").

Let's examine specific security, compliance and performance capabilities as well as SLA commitments you should look for when evaluating public cloud storage providers.

SECURITY

Maintaining cloud data storage security is generally understood to operate under a shared responsibility model: The provider is responsible for security of the underlying infrastructure, and you are responsible for data placed on the cloud as well as devices or data you connect to the cloud.

All three major cloud storage infrastructure-as-a-service providers (Amazon Web Services [AWS], Microsoft Azure and Google Cloud) have made significant investments to protect their physical data center facilities and cloud infrastructure, placing a particular emphasis on securing their networks from attacks, intrusions and the like. Smaller and regional players tend also to focus on securing their cloud infrastructure. Still, take the time to review technical white papers and best practices to fully understand available security provisions.

Though you will be responsible for securing the data you connect or move to the cloud, public cloud storage providers offer tools and capabilities to assist. These generally fall into one of three categories of protection: data access, data in transit or data at rest.

Deployments on the rise

Percentage of firms with active/planned deployments of business-critical apps in the public cloud: 2014: 43%; 2016: 61%; 2018: ??%.

SOURCE: TANEJA GROUP RESEARCH


generally fall into one of three categories of protection: data access, data in transit or data at rest.

n Data access: Overall, providers allow you to protect and control access to user accounts, compute instances, APIs and data, just as you would in your own data center. This is accomplished through authentication credentials such as passwords, cryptographic keys, certificates or digital signatures. Specific data access capabilities and policies let you restrict and regulate access to particular storage buckets, objects or files. For example, within Amazon Simple Storage Service (S3), you can use Access Control Lists (ACLs) to grant groups of AWS users read or write access to specific buckets or objects and employ Bucket Policies to enable or disable permissions across some or all of the objects in a given bucket. Check each provider’s credentials and policies to verify they satisfy your internal requirements. Though most make multifactor authentication optional, we recommend enabling it for account logins.
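To make that concrete, here is a minimal sketch in Python using the boto3 AWS SDK that attaches a read-only bucket policy; the bucket name and user ARN are hypothetical placeholders, not defaults from any provider.

import json
import boto3  # AWS SDK for Python

s3 = boto3.client("s3")

# Hypothetical bucket and principal; substitute your own resources.
BUCKET = "example-reports-bucket"
READER_ARN = "arn:aws:iam::111122223333:user/report-reader"

# Policy granting one IAM user read-only access to the bucket's objects.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowReadOnly",
        "Effect": "Allow",
        "Principal": {"AWS": READER_ARN},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))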

n Data in transit: To protect data in transit, public cloud storage providers offer one or more forms of transport-level or client-side encryption. For example, Microsoft recommends using HTTPS to ensure secure transmission of data over the public internet to and from Azure Storage, and offers client-side encryption to encrypt data before it’s transferred to Azure Storage. Similarly, Amazon provides SSL-encrypted endpoints to enable secure uploading and downloading of data between S3 and client endpoints, whether they reside within or outside of AWS. Verify that the encryption approach in each provider’s service is rigorous enough to comply with relevant security or industry-level standards.
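One widely documented way to enforce encrypted transport on S3 is a bucket policy that denies any request arriving without TLS, keyed on the aws:SecureTransport condition. A minimal sketch, again with hypothetical names:

import json
import boto3

s3 = boto3.client("s3")
BUCKET = "example-reports-bucket"  # hypothetical

# Deny any request that arrives over plain HTTP; a bucket holds a single
# policy document, so in practice you'd combine statements like this one
# with your access-control statements before calling put_bucket_policy.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyInsecureTransport",
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:*",
        "Resource": [f"arn:aws:s3:::{BUCKET}", f"arn:aws:s3:::{BUCKET}/*"],
        "Condition": {"Bool": {"aws:SecureTransport": "false"}},
    }],
}

s3.put_bucket_policy(Bucket=BUCKET, Policy=json.dumps(policy))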

n Data at rest: To secure data at rest, some public cloud storage providers automatically encrypt data when it’s stored, while others offer a choice of having them encrypt the data or doing it yourself. Google Cloud Platform services, for instance, always encrypt customer content stored at rest. Google encrypts new data stored in persistent disks using the 256-bit Advanced Encryption Standard (AES-256) and offers you the choice of having Google supply and manage the encryption keys or doing it yourself. Microsoft Azure, on the other hand, enables you to encrypt data using client-side encryption (protecting it both in transit and at rest) or to rely on Storage Service Encryption (SSE) to automatically encrypt data as it is written to Azure Storage. Amazon’s offering for encrypting data at rest in S3 is nearly identical to Microsoft Azure’s.
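On S3, for instance, a single parameter on an upload requests server-side encryption. A minimal boto3 sketch, with hypothetical bucket and key names:

import boto3

s3 = boto3.client("s3")

# Ask S3 to encrypt this object at rest with AES-256 (SSE-S3); swap in
# "aws:kms" to manage keys through AWS KMS instead.
s3.put_object(
    Bucket="example-reports-bucket",  # hypothetical
    Key="2017/q1-report.csv",
    Body=b"region,revenue\nus-east,1200\n",
    ServerSideEncryption="AES256",
)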

Also, check for data access logging, which keeps a record of access requests to specific buckets or objects, and for data disposal (wiping) provisions, to ensure data is fully destroyed if you decide to move it to a new provider’s service.
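Enabling S3 server access logging, for example, takes one call. A minimal sketch, assuming a separate log bucket that has already been granted log-delivery rights (names hypothetical):

import boto3

s3 = boto3.client("s3")

# Deliver access records for one bucket into a separate log bucket.
s3.put_bucket_logging(
    Bucket="example-reports-bucket",
    BucketLoggingStatus={
        "LoggingEnabled": {
            "TargetBucket": "example-access-logs",
            "TargetPrefix": "reports-bucket/",
        }
    },
)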

COMPLIANCE STANDARDS

Your provider should offer resources and controls that allow you to comply with key security standards and industry regulations. For example, depending on your industry, business focus and IT requirements, you may look for help in complying with the Health Insurance Portability and Accountability Act, Service Organization Controls 1 financial reporting, the Payment Card Industry Data Security Standard or FedRAMP security controls for information stored and processed in the cloud. So be sure to check out the list of supported compliance standards, including third-party certifications and accreditations.

PERFORMANCE CAPABILITIES

Unlike security and compliance, for which you can make an objective assessment, application performance is highly dependent on your IT environment, including cloud infrastructure configuration, network connection speeds and the additional traffic running over that connection.

Three-step approach to cloud storage adoption

CUSTOMERS FURTHEST ALONG in adopting public cloud storage tend to follow a systematic, three-step approach to help them select the best provider(s) and optimize their cloud deployments:

1. Begin by documenting business, IT and regulatory requirements for your specific use cases, which serve as a checklist for initial assessment. Include on the list objectives or expectations for data availability, security and application performance, among other things.

2. Next, evaluate public cloud storage providers’ offerings against your requirements and other deployment criteria to determine “on paper” which best meet your needs. Review service descriptions, capability lists and best practices documents to get a good feel for each offering. Look for third-party audited benchmarks or certifications, covering standards, metrics or observed performance in areas such as security, compliance and data durability. Talk to colleagues or peers in other organizations to learn about their experience with the providers on your list, and scan community blogs to get a sense of user satisfaction levels and any other potential issues.

3. Finally, test-drive the selected provider’s services, starting with nonproduction data and transitioning to more critical data and apps as your comfort level increases. For example, if you’re considering using a service such as AWS Kinesis to load, analyze and process streaming data, test the service with recorded data streams and wait to introduce production streams until your test criteria have been met.


If you’re achieving an I/O latency of 5 to 10 milliseconds running with traditional storage on premises, or even better than that with flash storage, you will want to prequalify application performance before committing to a cloud provider. It’s difficult to anticipate how well any latency-sensitive application will perform in a public cloud environment without actually testing it under the kinds of conditions you expect to see in production.
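A simple probe can put numbers on that question before you commit. The sketch below times repeated small-object reads from a bucket and reports median and 95th-percentile latency; the names are hypothetical, and a real test should mirror your production I/O mix.

import statistics
import time
import boto3

s3 = boto3.client("s3")
BUCKET, KEY = "example-reports-bucket", "probe/4kb-object"  # hypothetical

samples = []
for _ in range(100):
    start = time.perf_counter()
    s3.get_object(Bucket=BUCKET, Key=KEY)["Body"].read()
    samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds

samples.sort()
print(f"median: {statistics.median(samples):.1f} ms")
print(f"p95:    {samples[int(len(samples) * 0.95)]:.1f} ms")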

Speed of access is based, in part, on data location, meaning you can expect better performance if you colocate apps and data in the cloud. If you’re planning to store primary data in the cloud but keep production workloads running on premises, evaluate the use of an on-premises cloud storage gateway—such as Azure StorSimple or AWS Storage Gateway—to cache frequently accessed data locally and (likely) compress or deduplicate it before it’s sent to the cloud.

To further address the performance needs of I/O-intensive use cases and applications, major public cloud storage providers offer premium storage capabilities, along with instances that are optimized for such workloads. For example, Microsoft Azure offers Premium Storage, allowing virtual machine disks to store data on SSDs. This helps solve the latency issue by enabling I/O-hungry enterprise workloads such as CRM, messaging and other database apps to be moved to the cloud. As you might expect, these premium storage services come with a higher price tag than conventional cloud storage.

Bottom line on application performance: Try before you buy.

WHAT TO LOOK FOR IN AN SLA

A cloud storage service-level agreement spells out guarantees for minimum uptime during monthly billing periods, along with the recourse you’re entitled to if those commitments aren’t met. Contrary to many customers’ wishes, SLAs do not include objectives or commitments for other important aspects of the storage service, such as maximum latency, minimum I/O performance or worst-case data durability.

In the case of the “big three” providers’ services, the monthly uptime percentage is calculated by subtracting from 100% the average percentage of service requests not fulfilled due to “errors,” with the percentages calculated every five minutes (or one hour in the case of Microsoft Azure Storage) and averaged over the course of the month.

Typically, when the uptime percentage for a provider’s single-region, standard storage service falls below 99.9% during the month, you will be entitled to a service credit. (Though it’s not calculated this way for SLA purposes, 99.9% availability implies no more than 43 minutes of downtime in a 30-day month.) The provider will typically credit 10% of the current monthly charges for uptime levels between 99% and 99.9%, and 25% for uptime levels below 99% (Google Cloud Storage credits up to 50% if uptime falls below 95%). Microsoft Azure Storage counts storage transactions as failures if they exceed a maximum processing time (based on request type), while Amazon S3 and Google Cloud Storage rely on internally generated error codes to measure failed storage requests.

Third parties make cloud storage more effective

IF YOU’RE LOOKING to make your cloud storage deployment work more effectively, check out the third-party offerings in provider marketplaces or ecosystems. Though we’re focusing here on security and performance, provider ecosystems include products in a wide range of other storage-related areas, such as backup, archive, disaster recovery and file transfer (data movement). Look especially for those that have been prequalified or certified for use on a provider’s cloud.

Sample third-party security offerings

n Infrastructure security: To better protect apps and data from cyberattacks and other advanced threats (e.g., Trend Micro Deep Security for AWS or Azure, Palo Alto Networks VM-Series for AWS).

n Access and control: To tighten policy-based access and improve business governance through single sign-on and multifactor authentication (e.g., OneLogin One Cloud).

n Vulnerability assessment: To inspect app deployments for security risks and help remediate vulnerabilities (e.g., Qualys Virtual Scanner Appliance for AWS or Azure).

Also check out Microsoft Azure Security Center, a security monitoring service with hooks to support a broad range of third-party offerings.

Sample third-party performance products

n High-performance file/block storage: Low-latency, high-IOPS/throughput file/block storage (e.g., Zadara Virtual Private Storage Array).

n Hybrid file services/cloud gateways: Hybrid or multicloud file sharing, often accessed via a gateway appliance, to enable improved access for enterprise file sync and sharing, collaboration and so forth across sites or regions (e.g., CTERA Enterprise File Services, Avere Hybrid Cloud NAS, Panzura Cloud Controllers).


Note that the burden is on you as the customer to request a service credit in a timely manner if a monthly uptime guarantee isn’t met.
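The arithmetic behind these figures is easy to verify. Here is a worked sketch based on the typical credit tiers described above; actual SLA terms vary by provider and change over time.

def downtime_budget_minutes(uptime_pct, days=30):
    """Minutes of downtime implied by an uptime percentage over a month."""
    return days * 24 * 60 * (1 - uptime_pct / 100.0)

def service_credit_pct(uptime_pct):
    """Typical single-region credit tiers described in this article."""
    if uptime_pct >= 99.9:
        return 0
    if uptime_pct >= 99.0:
        return 10
    return 25  # some providers credit more, e.g., up to 50% below 95%

print(downtime_budget_minutes(99.9))  # ~43.2 minutes in a 30-day month
print(service_credit_pct(99.5))       # 10 (percent of monthly charges)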

Also, carefully evaluate the SLAs to determine whether they satisfy your availability requirements for both data and workloads. If a single-region service isn’t likely to meet your needs, it may make sense to pay the premium for a multi-region service, in which copies of data are dispersed across multiple geographies. This approach increases data availability, but it won’t protect you from instances of data corruption or accidental deletions, which are simply propagated across regions as data is replicated.

IS CLOUD STORAGE RIGHT FOR YOU?

With these guidelines and caveats in mind, you can better assess whether public cloud storage makes sense for your particular use cases, data and applications. If public cloud storage providers’ service-level commitments and capabilities fall short of meeting your requirements, consider developing a private cloud or taking advantage of managed cloud services.

Though public cloud storage may not be an ideal fit for your production data and workloads, you may find it fits the bill for some of your less demanding use cases.

JEFF BYRNE is a senior analyst and consultant at Taneja Group. He can be reached at [email protected].


HOT SPOTS SCOTT SINCLAIR

Go hybrid and beat the ‘time-to-provision’ clock

Infrastructure provisioning delays slow down IT initiatives and negatively impact revenue and the bottom line.

BETTER ACCESS TO digital information has opened new revenue opportunities in nearly every industry. Whether it involves business intelligence and analytics, mobility or the internet of things, it’s clear the next level of business competitiveness is being built upon a foundation of data availability. None of this is news, of course. And, if you’re reading this column, you are likely already heavily involved in the ongoing battle to ensure that IT resources keep pace with the increasing demands placed on business applications and data.

The good news is IT infrastructure innovation has accelerated as well. Hardware, for example, continues to become quicker and more affordable. As a result, storage systems perform faster, scale higher and hold more capacity than ever. In theory, you’d think the two, (1) growing demands met by (2) ever-more capable infrastructure, would cancel each other out. But that’s not how it works in the real world. While there are a number of reasons for this inconsistency, one that doesn’t get discussed enough is the time to provision new storage capacity.

TIME TO PROVISION

When storage vendors discuss time to provision, they tend to focus on how easy it is to set up and configure an array physically located on site in a rack with adequate power and cooling. Here, setup time is often a very small portion of the entire process. The true time to provision, however, encompasses everything that occurs from the moment you identify a storage resource need to the moment newly acquired resources are made available to applications. The full end-to-end process can take months, and the few minutes or hours it takes to set up the final storage array is only a small part of the overall pain. Meanwhile, application demands continue to increase while the provisioning process plays out.

Delays in provisioning infrastructure deployments not only slow down new IT initiatives. In this era where business competitiveness is often determined by data access, delays can negatively impact revenue opportunities and the bottom line as well. For years, you could address the time-to-provision challenge by simply deploying more storage capacity than immediately necessary, giving the environment room to support near-term growth during the sometimes lengthy process of new storage system procurement. While still considered a best practice by some, having excess infrastructure just sitting around doing nothing adds unnecessary cost, a nonstarter in this age of tighter budgets.

REDUCTION METHODS

One obvious method for reducing the time to provision storage is using public cloud services. While this can provide near-immediate access to new capacity, concerns about performance, security and other business considerations often lead many firms to prudently retain a significant portion of data on premises. The trick here is to achieve cloud-like agility in storage provisioning while maintaining the on-premises capabilities many workloads require. There are a number of options available to help improve, or at least mask, time-to-provision challenges for on-premises infrastructure; these include the following:

n Advanced storage analytics: Storage array management continues to evolve. A number of storage systems offer management tools that enable administrators to forecast capacity growth patterns months in advance, with some allowing architects to play out different what-if scenarios investigating the performance and capacity impacts of new workload deployments on existing infrastructure, helping to size what additions may be needed (a minimal forecasting sketch appears after this list). This added intelligence does more to mask the time-to-provision challenge than reduce it, though. What it does do is enable storage administrators to recognize resource needs sooner and start the infrastructure provisioning process earlier.

n Pay per usage for on-premises infrastructure: In response to the success of public cloud services, several on-premises storage providers have begun offering more flexible payment options for storage systems on site, where you can unlock additional capacity already on the system by paying extra. In addition, a few storage startups now focus on deploying on-site hosted cloud infrastructure using a pay-per-usage model. With this model, storage is owned and managed by the vendor, but the infrastructure is in your data center. Keep in mind, these offerings vary on capabilities, service commitments and what, if any, obligations are required from customers in terms of future growth. Nonetheless, this type of on-premises storage can ease the cost burden of on-site infrastructure provisioning while possibly speeding up the deployment of new capacity.

n The hybrid cloud: A wealth of hybrid cloud offerings has entered the marketplace recently. Some leverage the public cloud predominantly, while placing high-performance caching devices on site. Since the bulk of new capacity is located in the public cloud, they can deliver on-premises performance via the cache while deploying new capacity at the pace of the public cloud. Other hybrid clouds allow you to deploy storage capacity on or off premises with the ability to move data back and forth between them. With these, you can add new capacity quickly on the public cloud side, and data can be moved off on-site resources to make room locally as needed. These hybrid storage offerings greatly reduce the time to provision and improve workload agility.
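As promised above, here is a minimal sketch of the trend-based capacity forecasting that storage analytics tools perform, fitting a straight line to invented monthly consumption figures with NumPy; real products use far richer models.

import numpy as np

# Hypothetical history: used capacity in TB at the end of each month.
months = np.arange(12)
used_tb = np.array([310, 322, 330, 345, 351, 365, 378, 384, 401, 410, 422, 431])

# Fit a linear growth trend and project six months ahead.
slope, intercept = np.polyfit(months, used_tb, 1)
for m in range(12, 18):
    print(f"month {m}: ~{slope * m + intercept:.0f} TB projected")

# If the array holds 500 TB, estimate when the trend crosses capacity.
capacity_tb = 500
print(f"capacity reached around month {(capacity_tb - intercept) / slope:.1f}")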

BOTTOM LINE

IT demands change so rapidly that new resources are often needed immediately, not months down the road. Some organizations look to the public cloud to solve these challenges, but these services alone aren’t right for everyone or every workload. In response, on-premises vendors are offering greater intelligence and more flexibility in payment options to ease the burden of deploying new capacity on site. While there are benefits to this approach, it can still be a challenge to match the agility of the public cloud. For that, hybrid clouds have stepped in as an excellent option to deliver on-premises performance and security while integrating the agility of public cloud infrastructure.

SCOTT SINCLAIR is a storage analyst with Enterprise Strategy Group in Austin, Texas.


READ / WRITE STEVE RICKETTS

The cure for your secondary storage blues

Efficient user-centric storage management eases CDM and object storage uptake while enabling secondary data services to evolve.

ALL LARGE COMPANIES run secondary data storage services such as data protection applications for backup, disaster recovery and archiving. Many also deploy copy data management (CDM) to allow a single “gold” copy of a piece of data to support multiple secondary use cases (e.g., application development and testing, business analytics, data protection and so on). Object storage uptake, in the meantime, has rapidly increased, converging with CDM to provide petabyte-scale storage for capacity-intensive applications such as content repositories and content sharing in geographically distributed environments.

These secondary data storage platforms typically offer mature data services for copying, compressing, migrating and retrieving data, but most enterprises face challenges when it comes to efficiently aligning data services with user-centric storage requirements. Efficient storage management is usually impeded by the inability to fully automate service-level agreement (SLA) compliance, the need to manage secondary data using different storage products and the inability to make well-informed, data-driven decisions.

These limitations often leave IT administrators asking questions, such as the following:

n How do I ensure we are delivering the performance, resiliency, governance and capacity needed for each business group and application workload?

n How do I simplify storage operations across users with a wide variety of data requirements?

n And how do I know we are providing the best storage economics?

To meet the need for more efficient user-centric storage services, secondary data storage vendors need to evolve their products to provide granular data management within converged secondary storage. Specifically, they must move beyond policy-based templates and disparate storage systems and furnish unified, software-defined storage (SDS) products that automate SLA compliance, support multiple secondary data use cases, and offer intelligent data placement, so their products improve user satisfaction, streamline IT operations and reduce storage costs.

The length of this column doesn’t allow me to lay out a detailed description of all the requirements for efficient user-centric secondary data storage services, so I’ve summarized three functional categories I believe are key and identified representative vendors within each area.

n Ability to determine the level of SLA compliance: One of the most critical capabilities for business-centric services is a web-based portal for managing SLA compliance. The portal should foster timely collaboration between business groups and administrators to create user-specific SLA profiles, automate tracking of storage operations, supply visibility through dashboards to ensure SLA compliance, and alert administrators and users when data operations fall outside agreed-upon SLA guidelines. SLA profiles must go beyond defining data recovery and retention objectives to deliver the ability to set parameters such as application-specific requirements, minimum and maximum number of copies, data governance and security considerations, thresholds for data migration and storage costs relative to user budgets. Most CDM vendors let you define SLA requirements using policy templates. Dell EMC’s eCDM goes a step further, providing full-lifecycle SLA compliance that includes monitoring quality of service relative to SLAs to determine compliance level.
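To make the idea concrete, here is a minimal sketch of what a user-specific SLA profile and compliance check might look like; the fields and thresholds are invented for illustration and reflect no vendor’s actual schema.

from dataclasses import dataclass

@dataclass
class SLAProfile:
    """Hypothetical user-specific SLA profile; not any vendor's schema."""
    name: str
    rpo_hours: int           # recovery point objective
    retention_days: int
    min_copies: int
    max_copies: int
    max_monthly_cost: float  # budget threshold, in dollars

def check_compliance(profile, observed):
    """Return alerts for data operations outside agreed SLA guidelines."""
    alerts = []
    if observed["last_backup_age_hours"] > profile.rpo_hours:
        alerts.append("RPO exceeded: most recent backup is stale")
    if not profile.min_copies <= observed["copies"] <= profile.max_copies:
        alerts.append("copy count outside agreed range")
    if observed["monthly_cost"] > profile.max_monthly_cost:
        alerts.append("storage cost over budget")
    return alerts

profile = SLAProfile("finance-dbs", rpo_hours=4, retention_days=365,
                     min_copies=2, max_copies=5, max_monthly_cost=12000.0)
print(check_compliance(profile, {"last_backup_age_hours": 7,
                                 "copies": 2, "monthly_cost": 9800.0}))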

n Unified SDS products that support multiple secondary storage use cases: Depending on the use case, secondary data has different performance and capacity requirements. Also, secondary storage products should seamlessly move data between flash, HDD and cloud storage tiers based on storage policies governed by SLA parameters. For example, a test/dev workload uses random I/O and should be prioritized for flash storage, whereas a backup job uses sequential I/O that should be more focused on HDDs. Content repositories for video, image and audio files have massive, petabyte-scale storage requirements, so these files should be placed in scale-out object storage that delivers linear performance as capacity increases.

In addition, SLA parameters can be used to migrate data to cost-effective cloud storage once usage drops to a certain level. Data efficiency (compression and deduplication) and data security services (encryption) should also be applied in accordance with SLA parameters. CDM players including Actifio and Cohesity integrate policy-driven data migration that spans flash, HDD and cloud-based storage tiers. Cohesity has the advantage of seamlessly supporting large data requirements with its hyper-converged, scale-out, secondary data storage architecture. IBM Spectrum Copy Data Management automates data tiering, and IBM Cloud Object Storage with scale-out storage capability is part of the IBM storage portfolio.
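A minimal sketch of that placement logic, routing workloads to flash, HDD or object tiers by I/O pattern and scale; the thresholds are invented rules of thumb drawn from this discussion, not any product’s policy engine.

def place_workload(io_pattern, size_tb):
    """Pick a storage tier using the rules of thumb described above."""
    if size_tb >= 1000:             # petabyte-scale content repositories
        return "scale-out object storage"
    if io_pattern == "random":      # e.g., test/dev workloads
        return "flash tier"
    return "HDD tier"               # e.g., sequential backup jobs

print(place_workload("random", 2))          # flash tier
print(place_workload("sequential", 40))     # HDD tier
print(place_workload("sequential", 1500))   # scale-out object storage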

n Data indexing and analytics that enable data discovery and data-driven decisions: Most CDM and object storage products can create a metadata index and search and report at the virtual machine disk, storage volume and file name levels.


This kind of reporting is good for understanding storage utilization, usage trends and available storage capacity, but falls short of providing the in-depth data visibility needed for improving security, determining data compliance and finding files based on content. A good example of a product with strong data visibility functionality is the Cohesity Data Platform, CDM with a deep search capability that enables users to search data within files to run custom queries and pattern matching. This full-text search capability is valuable when organizations want to find files based on content, for example, to ensure data compliance by determining the location of files with sensitive data, such as usernames and passwords.
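A toy version of that kind of content-level matching might look like the following sketch, which scans a directory tree for credential-like strings; the patterns and path are simplistic placeholders, far cruder than a real indexing engine.

import re
from pathlib import Path

# Simplistic credential-like patterns; real classifiers are far richer.
PATTERNS = {
    "username": re.compile(r"(?i)\busername\s*[:=]\s*\S+"),
    "password": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def scan_tree(root):
    """Map each matching file to the sensitive-pattern names found in it."""
    hits = {}
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        found = [name for name, rx in PATTERNS.items() if rx.search(text)]
        if found:
            hits[str(path)] = found
    return hits

print(scan_tree("/mnt/secondary/exports"))  # hypothetical mount point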

Secondary data storage applications and services such as CDM have evolved to deliver significant operational improvements and substantial storage cost savings through policy-based automation and secondary data consolidation. For CDM and object storage adoption to increase and for secondary data management to move to the next level of delivering business value, I believe vendors need to fully enable efficient user-centric storage management without constraints that force companies to implement dissimilar data management software and multiple storage platforms. Only then will customers realize the productivity gains and cost savings needed to drive mainstream adoption.

STEVE RICKETTS is a senior analyst at Taneja Group.


TechTarget Storage Media Group

Stay connected! Follow @SearchStorageTT today.

STORAGE MAGAZINE
VP EDITORIAL Rich Castagna
EXECUTIVE EDITOR James Alan Miller
SENIOR MANAGING EDITOR Ed Hannan
CONTRIBUTING EDITORS James Damoulakis, Steve Duplessie, Jacob Gsoedl
DIRECTOR OF ONLINE DESIGN Linda Koury
ASSOCIATE MANAGING EDITOR, E-PRODUCTS Nick Arena

SEARCHSTORAGE.COM
SEARCHCLOUDSTORAGE.COM
SEARCHCONVERGEDINFRASTRUCTURE.COM
EDITORIAL DIRECTOR Dave Raffo
SENIOR NEWS WRITER Sonia R. Lelii
SENIOR WRITER Carol Sliwa
STAFF WRITER Garry Kranz
SENIOR SITE EDITOR Rodney Brown
SENIOR SITE EDITOR Maggie Jones
ASSISTANT SITE EDITOR Erin Sullivan

SEARCHDATABACKUP.COM
SEARCHDISASTERRECOVERY.COM
SEARCHSMBSTORAGE.COM
SEARCHSOLIDSTATESTORAGE.COM
EXECUTIVE EDITOR James Alan Miller
SENIOR MANAGING EDITOR Ed Hannan
STAFF WRITER Garry Kranz
SITE EDITOR Paul Crocetti

STORAGE DECISIONS TECHTARGET CONFERENCES
EDITORIAL EXPERT COMMUNITY COORDINATOR Kaitlin Herbert

SUBSCRIPTIONS
www.SearchStorage.com

STORAGE MAGAZINE
275 Grove Street, Newton, MA 02466
[email protected]

TECHTARGET INC.
275 Grove Street, Newton, MA 02466
www.techtarget.com

©2017 TechTarget Inc. No part of this publication may be transmitted or reproduced in any form or by any means without written permission from the publisher. TechTarget reprints are available through The YGS Group.

About TechTarget: TechTarget publishes media for information technology professionals. More than 100 focused websites enable quick access to a deep store of news, advice and analysis about the technologies, products and processes crucial to your job. Our live and virtual events give you direct access to independent expert commentary and advice. At IT Knowledge Exchange, our social community, you can get advice and share solutions with peers and experts.

COVER IMAGE AND PAGE 8: VLADIMIR_TIMOFEEV/ISTOCK