Energy-efficient computing in the 21st century
BY MATT STANSBERRY

Chapter 3: Data center infrastructure efficiency

Contents: Cooling principles • Supplemental systems • Economizer pros and cons • Power distribution • The direct current debate


Inefficient, sprawling servers are responsible for only 50% of the IT energy crisis. Data center managers are going to have to address the cooling and power distribution associated with servers, where IT and facilities management collide in an often inelegant pairing.

This chapter is a primer on energy-efficient cooling and power distribution principles and offers insight on the latest data center infrastructure technologies and best practices. It will cover the following topics:

• Data center cooling principles and raised-floor cooling

• High-density supplemental systems (such as forced air and liquid cooling)

• The pros and cons of economizers in the data center

• Energy-efficient power distribution

• The direct current (DC) power debate

I. DATA CENTER COOLING PRINCIPLES AND RAISED-FLOOR COOLING

Data center cooling is where the greatest energy-efficiency improvements can be made. And cooling a data center efficiently is impossible without proper floor plan and air-conditioning design.

The fundamental rule in energy-efficient cooling is to keep hot air and cold air separate. The hot-aisle/cold-aisle, raised-floor design has been the cooling standard for many years, yet surprisingly few data centers implement this principle fully or correctly.

Hot aisle/cold aisle is a data center floor plan in which rows of cabinets are configured with air intakes facing the middle of the cold aisle. The cold aisles have perforated tiles that blow cold air from the computer room air-conditioning (CRAC) units up through the floor. The servers blow hot exhaust air out the back of the cabinets into the hot aisles. The hot air is then sucked into the CRAC unit to be cooled and redistributed through the cold aisles.
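A rough way to see why that separation matters is the common airflow sizing rule of thumb, CFM roughly equal to 3.16 times watts divided by delta-T in degrees F: the smaller the temperature difference the cooling units actually see, the more air they must move. The short Python sketch below works the arithmetic for an assumed 5 kW cabinet; the numbers are illustrative, not figures from this chapter.

```python
# A back-of-the-envelope sketch (not from the article) of why mixing hurts:
# the airflow a rack needs scales inversely with the temperature rise the
# cooling system actually sees. Uses the common rule CFM ~= 3.16 * W / dT(F),
# which follows from Q(BTU/hr) = 1.08 * CFM * dT and 1 W = 3.412 BTU/hr.

def required_cfm(it_load_watts: float, delta_t_f: float) -> float:
    """Airflow (cubic feet per minute) needed to carry away it_load_watts."""
    return 3.16 * it_load_watts / delta_t_f

rack_watts = 5000  # assumed 5 kW cabinet
for delta_t in (20, 10):  # clean separation vs. heavy recirculation and mixing
    print(f"dT {delta_t} F: {required_cfm(rack_watts, delta_t):,.0f} CFM")
# Halving the usable delta-T (cold air leaking into the hot aisle) doubles
# the air the CRACs must move for the same heat load.
```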



While this layout is widely accepted as the most efficient data center layout, experts note several examples of companies that have refused to use hot-aisle/cold-aisle design. In fact, according to Peter Panfil, vice president of power engineering at Columbus, Ohio-based Liebert Corp., only 70% of attendees at a recent Emerson Network Power Data Center Users' Group event said that they use hot-aisle/cold-aisle design.

In addition to the companies that flat-out reject hot-aisle/cold-aisle design, an even greater number of data centers design it incorrectly or don't take the principles far enough.

Keeping hot and cold air from mixing means separating airflow in the front of the cabinets from the back. But some data center managers actively sabotage the cooling efficiency of an area. Data center design experts often recount horror stories about clients with highly engineered hot-aisle/cold-aisle layouts, where a data center manager has put perforated or grated tiles in a hot aisle or used fans to direct cold air behind the cabinets.

Air conditioners operate most efficiently when they are cooling the hottest air. By placing perforated tiles in a hot aisle, the hot air isn't as warm as it should be when it gets to the air-conditioning units. Because the return air has been pre-cooled, the thermostat assumes the unit doesn't have to work as hard. The air-conditioning units don't recognize the true load of the room, and this miscue raises the temperature and creates a zone hot spot.

[Figure: The hot-aisle/cold-aisle approach]

Hot aisles are supposed to be hot. If you paid good money for data center design consultants, you really should take their advice. Data center managers can become unnerved by how hot the hot aisle can get, or they capitulate to the complaints of admins who don't want to work in the hot aisle. But mixing cold air into a hot aisle is exactly what you want to avoid.

Data center pros who don't actively thwart their own efficiency efforts still run into problems when they don't apply the hot-aisle/cold-aisle approach fully.

"I have seen people who have read the articles and [have] arranged data centers in hot-aisle/cold-aisle configurations to improve the cooling situation," said Robert McFarlane, data center design expert and principal at New York-based engineering firm Shen Milsom & Wilke Inc. "But odd-sized cabinets, operations consoles and open rack space cause big gaps in the rows of cabinets, allowing hot air to recirculate into the cold aisles and cold air to bypass into the hot aisles."


Raised-Floor Fundamentals

EIGHTEEN INCHES IS the minimum recommended raised-floor height; 24 inches to 30 inches is better, but not realistic for buildings without high ceilings.

• Keep it clean. Get rid of the clutter (unused cables or pipes, for example) under your raised floor. Hire a cleaning service to clean the space periodically. Dust and debris can impede airflow.

• Seal off cable cutouts under cabinets as well as spaces between floor tiles and walls or between poorly aligned floor tiles. Replace missing tiles or superfluous perforated tiles.

• Use a raised-floor system with rubber gaskets under each tile, which allows tiles to fit more snugly onto the frame, minimizing air leakage.

• To seal raised floors, data center practitioners have several product options available to them, including brush grommets, specialized caulking and other widgets.

For more on blocking holes in a raised floor, read Robert McFarlane's SearchDataCenter.com tip "Block those holes!"


Poorly executed hot-aisle/cold-aisle configurations can actually cause more harm than not doing anything at all. The way to avoid this is to block the holes. Fasten blanking panels (metal sheeting that blocks gaps in the racks) over unused rack space. Also, tight ceilings and raised floors are a must. Lastly, use air seals for all the cabling and other openings in the floor.

Despite all these efforts, hot and cold air is going to mix around the tops of cabinets and at the ends of aisles. Data center pros can mitigate these design problems by placing less important equipment, such as patch panels and minor equipment that does not generate a lot of heat, in marginal areas.

So where should you put the sensitive equipment, the energy hogs that need the most cooling? According to McFarlane, the answer is counterintuitive. In almost all cases, under-floor air conditioning units blast out a large volume of air at high velocity. The closer you are to those AC units, the higher the velocity of the air and, therefore, the lower the air pressure. It's called Bernoulli's Law. As a result, the cabinets closest to the air conditioners get the least amount of air. That means you should probably put your sensitive equipment near the middle of the cabinet row, around knee height, rather than right up against the CRAC or the perforated floor. And since the air gets warmer as it rises, don't place your highest heat-generating equipment at the top of a cabinet.
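Bernoulli's relation, static pressure plus one-half rho v squared staying roughly constant along the flow, is easy to put numbers on. The sketch below is a minimal illustration with assumed underfloor velocities (not measurements from McFarlane or any site); it only shows the direction of the effect.

```python
# A minimal sketch (assumed numbers, not the article's data) of the effect
# McFarlane describes: along the underfloor plenum, higher air velocity
# means lower static pressure (Bernoulli), so tiles nearest the CRAC have
# the least pressure available to push air up through them.

RHO_AIR = 1.2  # kg/m^3, typical at data center conditions

def static_pressure_drop(velocity_m_s: float) -> float:
    """Dynamic pressure (Pa) 'spent' on velocity instead of pushing air up."""
    return 0.5 * RHO_AIR * velocity_m_s ** 2

# Assumed underfloor velocities: fast near the CRAC discharge, slower mid-row.
for label, v in (("next to CRAC", 8.0), ("mid-row", 3.0)):
    print(f"{label}: ~{static_pressure_drop(v):.0f} Pa lost to velocity")
# The mid-row tiles keep more static pressure, which is why they often
# deliver more air than the tiles right beside the unit.
```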

It's not rocket science, but it is physics. And according to McFarlane, applying basic physics can make a huge difference in most data centers. "I don't have to get sophisticated," McFarlane said. "Up to 75% [of data center efficiency gains] are easy, the next 5% are a bear, and beyond that is often impossible without a total rebuild."

Common cooling mistakes. Unfortunately, there are no shortcuts in physics, and McFarlane points to areas where data center pros can run into trouble.

Some facilities have opted to put fans under the cabinets to pull higher volumes of air out of the floor. But there is a limited amount of air in the plenum, and if the air volume that all the fans demand exceeds the amount of air under the floor, any cabinets without fans are certainly going to be air starved, and those with fans farther from the air conditioners may also get less air than they need.

Another ill-advised shortcut is putting in lots of wide-open tiles to get better cooling. Traditional raised-floor perforated tiles are only 25% open, but some grate tiles on the market are 56% open. More air is good, right?

Not necessarily. According to McFarlane, if you have too many tiles with too much open area, the first few cabinets will get a lot of air, but the air to the rest will diminish as you get farther from the air conditioners.

"The effect is like knifing a tire or popping a balloon," McFarlane said. "Air takes the path of least resistance, and the data center is a system: If you start fiddling with one thing, you may affect something else." You need to balance the air you have so it's distributed where you need it.

According to Robert Sullivan, a data center cooling expert at the Santa Fe, N.M.-based Uptime Institute Inc., the typical computer room has twice as many perforated tiles installed as it should. Sullivan said having too many tiles can significantly reduce static pressure under the floor. This translates to insufficient airflow in the cold aisles: Cold air gets only about halfway up the cabinets. The servers at the top of the racks are going to get air someplace, and that means they will be sucking hot air out of the top of the room, recirculating exhaust air and deteriorating the reliability of the server.
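A simplified orifice model makes Sullivan's point concrete: if the CRACs deliver a roughly fixed volume of air, doubling the open tile area drops the underfloor static pressure to about a quarter. The discharge coefficient, tile area and flow below are my assumptions for illustration, not Uptime Institute figures.

```python
# A simplified sketch (assumed values, not Uptime Institute data) of why
# doubling the number of perforated tiles collapses underfloor static
# pressure. Model each tile as an orifice, Q = Cd * A * sqrt(2 * dP / rho),
# and hold the total CRAC airflow roughly constant.

RHO = 1.2            # kg/m^3, air density
CD = 0.65            # assumed discharge coefficient for a perforated tile
TILE_OPEN_M2 = 0.09  # ~25% open area of a 2 ft x 2 ft tile (assumed)

def plenum_pressure(total_flow_m3_s: float, n_tiles: int) -> float:
    """Static pressure (Pa) needed to push the flow through n identical tiles."""
    per_tile = total_flow_m3_s / n_tiles
    v_eff = per_tile / (CD * TILE_OPEN_M2)   # effective velocity through the openings
    return 0.5 * RHO * v_eff ** 2

crac_flow = 10.0  # m^3/s total, assumed
for tiles in (40, 80):
    print(f"{tiles} tiles: ~{plenum_pressure(crac_flow, tiles):.1f} Pa static pressure")
# Twice the tiles leaves roughly a quarter of the static pressure, so air
# no longer makes it to the tops of the racks farther down the row.
```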

In data center cooling strategies, one common mistake is to turn the thermostat way down on your AC units. Most air conditioners have an optimum setting at around 72 degrees to 75 degrees Fahrenheit for the return temperature, which is well within the range recommended by server manufacturers. But some data center managers try to keep data centers cooler than a meat locker, wasting energy and wearing down AC units.

When you turn the air conditioner way down, the unit will try to deliver colder air on a continual basis. Constantly keeping the air in the room at, say, 60 degrees is unrealistic. The air conditioners will just keep running full bore until overcooled air returns to the unit, trips the thermostat and turns the AC off for a few minutes, at which point the temperature rises again and the AC unit begins an on/off, on/off pattern known as a short cycle.

"When you do that, you wear the hell out of the air conditioner and completely lose your humidity control," McFarlane said. "That short burst of demand wastes a lot of energy too."

McFarlane said oversized air conditioners do the same thing. "They cool things down too fast, and then they shut off. Things get hot quickly, and they turn back on. If I don't match capacity to the load, I waste energy with start/stop cycles."
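A toy thermostat-and-thermal-mass model shows the short-cycle behavior McFarlane describes. Everything in it (load, thermal mass, deadband, unit sizes) is an assumed, illustrative number rather than anything measured; the point is only that the oversized unit starts and stops far more often.

```python
# A minimal sketch (my assumptions, not measured data) of short cycling: a
# thermostat with hysteresis driving a room modeled as one thermal mass.

def count_cycles(cooling_kw, heat_load_kw=40.0, mass_kj_per_c=5000.0,
                 setpoint_c=22.0, deadband_c=1.0, hours=1.0, dt_s=1.0):
    """Return compressor starts over the simulated period."""
    temp = setpoint_c
    cooling_on = False
    starts = 0
    for _ in range(int(hours * 3600 / dt_s)):
        if cooling_on and temp <= setpoint_c - deadband_c:
            cooling_on = False                 # overcooled air trips the thermostat
        elif not cooling_on and temp >= setpoint_c + deadband_c:
            cooling_on = True                  # room warmed back up, unit restarts
            starts += 1
        net_kw = heat_load_kw - (cooling_kw if cooling_on else 0.0)
        temp += net_kw * dt_s / mass_kj_per_c  # dT = Q * dt / (thermal mass)
    return starts

# A unit roughly matched to the 40 kW load cycles far less often than one
# sized at several times the load, which starts and stops continually.
for size_kw in (50, 200):
    print(f"{size_kw} kW unit: {count_cycles(size_kw)} starts per hour")
```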

An excellent resource for effectively matching data center cooling with the IT load is the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publication Thermal Guidelines for Data Processing Environments.

Modeling the Air Flow in Your Data Center

WHEN YOU MANAGE several thousand square feet of raised floor, supporting dozens of racks of sensitive IT equipment with various cooling demands, it is nearly impossible to determine how to move air handlers or change perforated tiles to run your system more efficiently. Luckily, computational fluid dynamics (CFD) modeling tools are available to convey the impact of what you plan to do.

Lebanon, N.H.-based Fluent Inc. offers a subscription-based, utility computing-style product called CoolSim that allows you to plug in a data center's design specs to do airflow modeling. The other option is to install CFD software, such as TileFlow from Plymouth, Minn.-based Innovative Research Inc., on your own servers.

CFD software is not cheap. That said, you should steer clear of "free" CFD, according to Robert McFarlane, data center design expert and principal at New York-based engineering firm Shen Milsom & Wilke Inc. "Be cautious of people offering free CFD analysis that are selling a specific proprietary solution based on the CFD they run for you."

II. HIGH-DENSITY SUPPLEMENTAL COOLING

Shortcuts won't work, and there is only so much air we can push out of a plenum without blowing the tiles out of the floor. So how do you cool high-density heat loads? Many data center pros are turning to supplemental, high-density cooling.

Supplemental cooling systems like the InfraStruXure InRow from American Power Conversion Corp. (APC), the Liebert XD and other models from Rittal, AFCO Systems and Wright Line put the cooling unit next to or on top of the cabinet, delivering a higher volume of cold air directly to the server intake: The result is more cooling than can possibly be delivered through a raised floor.

High-density cooling systems offer the following advantages:

• They deliver more cooling than raised-floor options.

• They deliver air more evenly up a cabinet.

• They deliver cooling closer to the heat source.

Some units offer further advantage in that they can prevent hot and cold air from mixing by putting an intake on a hot aisle or by directing exhaust air directly into the AC unit. Hot air doesn't have to travel back to CRAC units 40 feet away.

On the downside, these systems can be more expensive and more complex to operate; and in many cases, you still need the raised floor and traditional CRAC design to maintain a baseline of cooling and humidity for the rest of the data center. Additionally, many top-blow systems need to be ducted in order to work, and duct systems can be pricey and take up lots of space.

Ben Stewart, senior vice president of facilities at Miami-based hosting firm Terremark Worldwide Inc., has found a good mix by using supplemental cooling and a traditional raised-floor CRAC layout. His cabinets can handle 160 watts (W) per square foot, but the raised floor covers only 80 W per square foot, which is half the capacity.

Many of Terremark's customers need nowhere near 80 W of cooling, but a Terremark data center can house more than 600 customers, all of which have different infrastructures and needs. So Stewart has to design flexibility into the system to avoid overcooling. For customers drawing more than 80 W per square foot, Terremark installs supplemental half-rack CRACs from APC. "We're moving the cooling closer to the heat source and only putting it in where we need it."

According to Sullivan, these supplemental cooling systems deliver more cooling capacity with less energy, specifically in high-density situations. But he also warned that users can get locked into supplemental systems.

If your needs change, it's hard to get that cooling capacity across the room. "You don't have the flexibility unless you uninstall it and physically move the unit," he said. "Whereas with the larger under-floor units, you can move the perforated tiles based on the load."

Liquid cooling in the data center. So what is the most efficient means for cooling servers? Water is about 3,500 times more efficient than air at removing heat. Server vendors and infrastructure equipment manufacturers alike have lined up to offer all sorts of products, from chilled-water rack add-ons to pumped liquid refrigerants. But evidence suggests that data center managers aren't ready for liquid cooling.
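The 3,500-times figure falls out of the volumetric heat capacities of the two fluids, which is easy to check with textbook constants; the short calculation below is mine, not the article's.

```python
# A quick check (standard physical constants, not the article's own math) of
# where "water is about 3,500 times more efficient than air" comes from:
# volumetric heat capacity, i.e. how much heat a cubic meter of each fluid
# carries per degree of temperature rise.

water = 4186.0 * 998.0   # specific heat (J/kg.K) * density (kg/m^3)
air   = 1005.0 * 1.2     # same, for air at roughly room conditions

print(f"water: {water / 1e6:.2f} MJ per m^3 per K")
print(f"air:   {air / 1e3:.2f} kJ per m^3 per K")
print(f"ratio: ~{water / air:,.0f}x")   # on the order of 3,500
```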

According to SearchDataCenter.com's 2007 data center purchasing intentions survey, only 7.7% of respondents had experimented with liquid cooling to increase data center cooling efficiency.

In fact, 65% of respondents said they would never use liquid cooling in their data centers, an unsurprising finding, said experts. Gordon Haff, a senior analyst at Nashua, N.H.-based Illuminata Inc., said that liquid cooling probably scares mainstream IT managers. He suggested that if increasing server density ultimately requires liquid cooling, companies may be more likely to outsource the task than deal with the complexity.

Other predictions aren't as dire. William DiBella, president of the data center user group AFCOM in Orange, Calif., said that most data center managers aren't enthusiastic about liquid cooling, but he doesn't think they will have a choice if the high-density computing trend continues.

At some major companies, however, data center managers are kicking the tires on liquid cooling. "Our facilities guys are talking to engineers," said Dragan Jankovic, vice president of technology at New York-based financial giant the Goldman Sachs Group Inc. "Blasting air from under the floor is very inefficient. Taking cooling to where it's needed is interesting, but it's young. We're checking the pulse to see where water cooling is headed; it's a tactical approach right now."

Ted Hight, senior technical architect at Minneapolis-based Target Corp., took liquid cooling one step further. For 2007, the retail juggernaut is building a brand-new 45,000-square-foot data center, and Hight said the new building will be able to use chilled water to cool its racks. "But it wouldn't break my heart if it didn't come to anything," Hight said.

Liquid Cooling Options on the Market (coolant type and cooling capacity in kilowatts):

• Hewlett-Packard Co.'s Modular Cooling System: water, 30 kW

• American Power Conversion Corp.'s InfraStruXure InRow RP: water, 70 kW

• Rittal Liquid Cooling Package: water, 30 kW to 37 kW

• IBM Corp.'s Rear Door Heat eXchanger: water, 15 kW

• Liebert Corp.'s XD models: liquid refrigerant (R134a), 15 kW

• ISR Inc.'s SprayCool: liquid refrigerant (Fluorinert), 12 kW

A lot of this debate rages around water, but cooling technology vendor Liebert is quick to note that using pumped refrigerant eliminates a lot of the headaches associated with liquid cooling. Liebert's XD systems use R134a rather than water, a coolant that changes from liquid to gas as it passes through the system.

In data centers, pumped refrigerant has some significant advantages over water:

• Liquid refrigerant takes up a lot less space than do water systems, both in the cooling coils and the piping systems. This presents a major plus for data centers trying to pack cooling into a small space.

• If water leaks, it can damage equipment. If your refrigerant leaks, you won't have gallons seeping onto the floor. This difference is significant for data centers running lines overhead.

• Because a refrigerant changes phase from liquid to gas, it takes less energy to pump than water.

• The plumbing involved with water-based systems makes them less reconfigurable than refrigerant-based cooling that uses tubing and closed-circuit systems.

On the other hand, water is cheaper and easy to replace. Many rooms already have chilled-water lines coming in, and facility engineers are more familiar with water-based systems. Additionally, leaked refrigerants can have an impact on greenhouse gas emissions and harm the environment.
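To put a rough number on the pumping-energy point above, compare how much heat a kilogram of each fluid carries: a boiling refrigerant absorbs its latent heat, while chilled water only picks up sensible heat over its temperature rise. The property values below are rounded textbook figures and the 10 K water rise is an assumption, not a vendor specification.

```python
# A rough sketch (approximate textbook properties, not vendor data) of why a
# phase-change refrigerant needs less pumping than chilled water: heat
# carried per kilogram of fluid moved.

R134A_LATENT_KJ_KG = 200.0   # approx. latent heat of vaporization for R134a
WATER_CP_KJ_KG_K = 4.19      # specific heat of water
WATER_DELTA_T_K = 10.0       # typical chilled-water temperature rise (assumed)

heat_per_kg_refrigerant = R134A_LATENT_KJ_KG
heat_per_kg_water = WATER_CP_KJ_KG_K * WATER_DELTA_T_K

print(f"R134a (boiling):   ~{heat_per_kg_refrigerant:.0f} kJ per kg moved")
print(f"water (10 K rise): ~{heat_per_kg_water:.0f} kJ per kg moved")
print(f"refrigerant carries ~{heat_per_kg_refrigerant / heat_per_kg_water:.0f}x "
      "more heat per kg, so pumps and lines can be smaller")
```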

For the liquid cooling products discussed thus far, liquid is delivered either near or directly to the rack and is then used to cool air locally, which in turn cools the equipment. But some companies are introducing products that cool servers directly by running a coolant over the processor.

In 2007, Liberty Lake, Wash.-based ISR Inc. rolled out its SprayCool M-Series, a system that modifies servers by replacing the heat sinks on the processors with specialized chip components that are cooled by a fine spray of 3M Fluorinert. Fluorinert is an inert liquid: colorless, odorless, nonflammable and safe for direct contact with sensitive electronics. Fluorinert evaporates on a modified processor heat sink, where it changes phase to a gas and is pumped to a heat exchanger at the bottom of the rack, where it is then cooled by a building's chilled-water system.

In 2007 the Richland, Wash.-based Pacific Northwest National Lab (PNNL) at the Department of Energy launched a program to test the SprayCool system on one of its supercomputers. The system is ideal for a static grid of tightly packed, homogeneous servers running intensive computing workloads. According to PNNL, the SprayCool liquid cooling system is thermodynamically more efficient than convection cooling with air, resulting in the need for less energy to remove waste heat. But the lab is still measuring the reliability and total cost of ownership of the SprayCool model.

PNNL is running the experiment on an eight-rack system, a 14-teraflop-peak, 9-sustained-teraflop computer in a very small space: only 800 square feet. The computer will run mainly computational fluid dynamics programs, measuring performance, temperature of the processors and overall room temperature.

"We'll be able to see how much energy savings you can have, if there are any, over air cooling. We'll report on it nationally and will publish results month by month," said Dr. Moe A. Khaleel, the director of the computational sciences and mathematics division at PNNL. "We believe the results will be positive, but we want to quantify things."

An Argument Against Water Cooling

NEIL RASMUSSEN, CTO at American Power Conversion Corp., said that direct water cooling is a bad application for data centers in flux, many of which are indeed in a state of change. "Every day the servers are changing. It's a much more difficult environment to plan a structured cooling system," Rasmussen said. "Furthermore, not everything is a server in a data center. There are routers, patch panels, storage. There is a dynamic hodgepodge, where it would be very impractical to plan water piping."

III. THE PROS AND CONS OF AIR-SIDE AND WATER-SIDE ECONOMIZERS

Over the past year, data center designers have debated and tested the effectiveness of using air-side and water-side economizers as an alternative to traditional HVAC systems. Economizers use outside air temperatures to cool servers directly or to cool chilled water without using a chiller.

Air-side economizers bring large quantities of cold outside air into a computer room with air handlers. The energy savings comes from not using mechanical refrigeration (such as chillers and compressors) to cool the air. Air handlers duct the air in, filter it and expel the waste heat back outside.

Detractors have noted that the appropriate air temperatures and humidity levels are available in only a limited part of the country. But this objection is not a valid reason to dismiss air-side economizers, since Northern California, Oregon and Washington State are often cited as suitable locations for the technology, and the Northwest is one of the fastest-growing regions in the country for data center site selection.

There are reasons to be wary, though, specifically because of particulates and fluctuating humidity levels. Sullivan of the Uptime Institute is not a fan of air-side economizing. Particulates and chemicals are bad news for sensitive electronics, and he worries about the corrosive effect of the salt air in places like Seattle, Portland and San Francisco.

"I have customers that have been burned," Sullivan said. "Some have data centers sitting on the outskirts of town, and when the farmers start plowing the fields, the dust clogs the AC units."

Humidity is also a concern. "When the air outside is dry and cold and you bring it in and heat it up, it becomes really dry, and you have the potential for exposure to electrostatic discharge," Sullivan said. "When it's moist outside, if it's hot, I could get condensation that would promote corrosion in the equipment. You only need 75% humidity for protected steel to rust."

Despite these concerns, in 2007, Lawrence Berkeley National Laboratory (LBNL) in Berkeley, Calif., published a study on the reliability of outside air to cool data centers and found that humidity sensors and filters can mitigate these risks. According to the report, "IT equipment reliability degradation due to outdoor contamination appears to be a poor justification for not using economizers in data centers."

Water-side economizers are substantially less controversial and avoid the issue of particulates and humidity. When the outside air is dry and temperatures are below 45 degrees Fahrenheit, water-side economizers use a cooling tower to cool building water without operating a chiller. The cold water in the cooling tower is used to cool a plate-and-frame heat exchanger (a heat-transfer device constructed of individual plates) that is inserted between the cooling tower and a chilled-water distribution system that runs through the building.

Dry climates can extend free cooling because water in the cooling tower evaporates and cools the heat exchanger. Sullivan said Phoenix has more hours of free cooling per year than Dallas or New York City because of the low humidity.

One barrier in water-side economizing is that in some cases the water returning from the cooling tower is too cold to start the chiller. Users can mitigate this problem by storing warmer condenser water to warm the basin of the cooling tower.
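One way a site might size up its free-cooling potential is simply to count the hours of local weather data that fall below the changeover temperature. The sketch below is a minimal, hypothetical version of that tally; the sample list stands in for a real year of hourly readings, and the 45-degree threshold is the figure cited above.

```python
# A minimal sketch (hypothetical data, not Terremark's or Uptime's) of
# estimating water-side free-cooling potential: count the hours in a year
# of weather data below the temperature at which the plant can bypass the
# chiller (the article cites roughly 45 degrees F).

def free_cooling_hours(hourly_temps_f, threshold_f=45.0):
    """Hours the economizer could carry the load without the chiller."""
    return sum(1 for t in hourly_temps_f if t <= threshold_f)

# In practice hourly_temps_f would come from local weather records; here a
# tiny made-up sample stands in for 8,760 real readings.
sample = [38, 41, 44, 47, 52, 60, 71, 65, 55, 48, 43, 39]
print(f"{free_cooling_hours(sample)} of {len(sample)} sample hours qualify")
# Dry climates do better than this simple dry-bulb cutoff suggests, because
# evaporation in the cooling tower extends free cooling into warmer hours.
```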

Stewart of Terremark is a proponent of free cooling, both water side and air side, but users need to be careful, he said.

Terremark uses air-side economizers in its Santa Clara, Calif., facility. "That is unconditioned air, and its humidity and cleanliness is in question," Stewart said. "You need to carefully monitor humidity and adjust as necessary and filter the air to remove dust and dirt."

"Also, since you are adding air volume to the space, you need to be removing an equal amount of volume somewhere, or you will pressurize your data center and your doors will not close properly and may even blow open," Stewart warned.

Terremark recently constructed a facility in Culpeper, Va., and it will be Terremark's first use of water-side free cooling, according to Stewart. He said the closed system avoids the humidity, contaminant and pressurization issues, but he's had to factor in other concerns, like the addition of glycol to the chilled water so that it doesn't freeze.

As for the energy savings on forgoing mechanical refrigeration, the jury is still out. A lot of it depends on the outside air temperatures, how much more energy the air handlers use to filter huge amounts of air and other factors. But in the coming years, you can expect the EPA and other agencies to begin tracking and quantifying data points.

IV. DATA CENTER POWER DISTRIBUTION

While not as dramatic as removing waste heat, data center power distribution and backup inefficiencies offer significant targets for data center managers.

Raise the voltage, save power. Lately, infrastructure vendors are paying a lot of attention to distribution of power at higher voltages. According to Chris Loeffler, product manager at Eaton Corp., virtually all IT equipment is rated to work with input power voltages ranging from 100 volts (V) to 240 V alternating current (AC). The higher the voltage, the more efficiently the unit operates. However, most equipment is run off lower-voltage power: the traditional 120 V.

According to new research from Eaton, a Hewlett-Packard Co. ProLiant DL380 Generation 5 server, for example, operates at 82% efficiency at 120 V, 84% efficiency at 208 V and 85% at 230 V. A data center could gain that incremental advantage just by changing the input power and the power distribution unit (PDU) in the rack.
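A quick sketch of what those power-supply efficiencies mean for one rack: divide the IT load by the efficiency to get wall draw at each voltage. The rack size and per-server load below are assumptions for illustration; only the 82% and 84% figures come from the Eaton numbers quoted above.

```python
# An illustrative sketch: assumed rack, plus the 120 V / 208 V efficiencies
# Eaton reports for the ProLiant DL380 G5.

def input_watts(it_load_w: float, efficiency: float) -> float:
    """Wall power drawn by power supplies delivering it_load_w to the servers."""
    return it_load_w / efficiency

rack_it_load = 20 * 400.0        # assumed: 20 servers at ~400 W of useful load each
at_120v = input_watts(rack_it_load, 0.82)
at_208v = input_watts(rack_it_load, 0.84)

saved_w = at_120v - at_208v
kwh_year = saved_w * 8760 / 1000
print(f"120 V draw: {at_120v:,.0f} W   208 V draw: {at_208v:,.0f} W")
print(f"~{saved_w:,.0f} W saved, about {kwh_year:,.0f} kWh per rack per year")
# Add roughly as much again for the cooling that no longer has to remove
# that waste heat, before multiplying across every rack on the floor.
```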

Liebert's Panfil agrees that users can get a 2% to 3% efficiency increase using 208 V versus 120 V. "People say that virtually everything is coming at 208 V, but they have lots of equipment coming in at 120 V," he said. "The IT people are more comfortable with 120 V, but there is no safety tradeoff."

McFarlane offers advice for data center pros exploring this approach in the future. "The first step is to look at your servers," he said. "See if they auto-sense 208 volts, and see what you can do about running 208 to your cabinets instead of 120. There are plenty of PDUs that will deliver 208 and 120 to the same strip if you wire it right."

Eaton's research also points to gains on a larger scale. Typically an uninterruptible power supply (UPS) operates at 480 V, and a PDU steps down that power from 480 V to 208 V or 120 V. If you could eliminate that step-down transformer in the PDU by distributing power at 400 V/230 V and operating IT equipment at higher voltages (using technology currently available in Europe), the power chain would be more efficient.

According to Eaton, distributing power at 400 V/230 V can be 3% more efficient in voltage transformation and 2% more efficient in the power supply in the IT equipment. This slight increase in efficiency is still worthwhile; a data center with 1,000 servers could save $40,000 annually.

Loeffler said the main factor holding users back from distributing power at 400 V/230 V is that the equipment to handle these voltages is CE marked (i.e., it contains the manufacturer's seal that it meets the European Union safety standards) but not approved by Underwriters Laboratories, the U.S. product testing and compliance organization.

"The global UPS manufacturers all make 400-volt systems, and we've done a number of Google data centers at 400 volt, bringing in our CE-marked equipment," Loeffler said. "But UL means something for some people, and you would have a tough time looking at this as a partial upgrade."

Modular UPS system design. The biggest energy-loss item in the power chain is the uninterruptible power supply. A double-conversion UPS takes the AC power from the line and converts it to DC; the DC then charges the batteries and goes through a converter that changes it back to AC. All of these steps involve some loss of energy.

Vendors generally claim good efficiency for double-conversion UPS systems, but they usually publish efficiency ratings only at full load. Since large, traditional UPS systems are usually purchased with all the capacity anticipated for the future, they often run well below capacity for a number of years, if not forever, said McFarlane. "Also, good operating practice says you'll never run the UPS at full load because it leaves you no headroom. And the efficiency curve on most of these UPSs drops like a rock as the load level goes down."

McFarlane noted that this problem is exacerbated by the need for redundancy. Three 500 kVA UPSs, for example, would be intended to deliver a maximum of 1,000 kVA in an n+1 redundant configuration, so if one unit fails or is shut down for service, the full design capacity is still available.

Even at full design load, you're running at only 67% of actual system capacity. Now put in two of these systems for a 2n configuration of n+1 UPSs, per Tier 4 of the Uptime Institute's Tier Performance Standards, and you have each UPS system running at less than 50% of its already-less-than-67% potential load.
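The arithmetic behind those percentages is worth seeing once; the sketch below simply restates the article's example (a 1,000 kVA design load on 500 kVA modules) as a small calculation.

```python
# A worked version of the article's own redundancy example: how n+1 and
# 2n configurations drive each UPS module to low, inefficient load levels.

def per_unit_load_fraction(design_load_kva, unit_kva, units_installed):
    """Fraction of each unit's rating carried when the load is shared evenly."""
    return design_load_kva / (unit_kva * units_installed)

design_load = 1000.0   # kVA the room is designed to draw
unit = 500.0           # each UPS module's rating

n_plus_1 = per_unit_load_fraction(design_load, unit, 3)          # three 500 kVA units
two_n_of_n1 = per_unit_load_fraction(design_load / 2, unit, 3)   # load split across two n+1 systems

print(f"n+1 system:        {n_plus_1:.0%} of each unit's capacity")    # ~67%
print(f"2n of n+1 systems: {two_n_of_n1:.0%} of each unit's capacity") # ~33%
# And that is at full design load; a partially built-out room sits lower
# still, on the steep part of the UPS efficiency curve.
```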

Under these circumstances, you could easily be running at 65% efficiency or less. The major UPS manufacturers have taken steps to improve this situation as much as possible, said McFarlane, and new products in the pipeline will address the problem even more effectively.

In the meantime, modular UPS systems are one way to mitigate the problems associated with low efficiency. With careful planning, modular UPS systems can be configured and readily reconfigured to run closer to capacity. Some UPSs on the market are modular and operate in much smaller increments, such as 10-kilowatt (kW) to 25-kW models.

A smaller data center that needs 80-kW capacity, for example, can purchase nine 10-kW modules for 90-kW capacity. If one module breaks down, the system has enough headroom to cover it while running at far higher utilization.
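The same calculation applied to the modular example shows the utilization gap. The "big frame" comparison case below is an assumed configuration added only to contrast with the article's nine-module example.

```python
# A small sketch of why right-sized modules run at healthier utilization
# than a large frame bought for future growth. The 80 kW load and nine
# 10 kW modules come from the article; the big-frame case is assumed.

def utilization(load_kw, module_kw, modules):
    return load_kw / (module_kw * modules)

load = 80.0
print(f"nine 10 kW modules (N+1 headroom): {utilization(load, 10, 9):.0%}")   # ~89%
print(f"two large ~450 kW frames (2N):     {utilization(load, 450, 2):.0%}")  # single digits
# The second case assumes ~450 kW usable per frame and 2N redundancy, purely
# to illustrate the low utilization a 'build it all up front' purchase can
# leave you with for years.
```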

In his SearchDataCenter.com piece "Weighing centralized versus modular UPS in the data center," McFarlane addresses the pros and cons of modular UPS.

V. THE DIRECT CURRENT DEBATE

Engineering experts are lining up on both sides of the direct current (DC) power data center debate, and the feud is as heated as the original between Thomas Edison and George Westinghouse.


Flywheels: Old-School Green Technology

FLYWHEEL ENERGY STORAGE technology has been around for decades. The primary power source spins a heavy disk called a flywheel. This builds up kinetic energy based on the mass of the flywheel and the speed at which it rotates, which can be as fast as 54,000 rotations per minute. When the power goes out, even if it's for a second or two, the flywheel releases the built-up kinetic energy back into the data center until power resumes or a backup generator turns on, which usually takes between 10 seconds and 20 seconds.

In most operations, flywheels work side by side with batteries. Short outages can kill battery life, and according to the Electric Power Research Institute in Palo Alto, Calif., 98% of utility interruptions last less than 10 seconds. If a flywheel can sustain power for that time, it can prolong the life of a string of batteries by reducing how many times they are "cycled."
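The sidebar's ride-through claim can be sanity-checked with the kinetic-energy formula E = 1/2 I w^2. The rotor inertia, speed, usable fraction and load in the sketch below are all assumed values chosen only to be plausible; they are not the specifications of any real flywheel product.

```python
# A rough sketch (generic physics, assumed numbers, not a vendor spec) of
# how much ride-through a flywheel's kinetic energy can buy while the
# backup generator starts: E = 1/2 * I * w^2.

from math import pi

def stored_energy_kj(inertia_kg_m2: float, rpm: float) -> float:
    omega = rpm * 2 * pi / 60.0          # angular speed in rad/s
    return 0.5 * inertia_kg_m2 * omega ** 2 / 1000.0

def ride_through_s(energy_kj: float, load_kw: float, usable_fraction: float = 0.5) -> float:
    """Seconds of support, assuming only part of the energy is extractable."""
    return energy_kj * usable_fraction / load_kw

energy = stored_energy_kj(inertia_kg_m2=1.5, rpm=30000)   # assumed rotor
print(f"stored: ~{energy:,.0f} kJ")
print(f"~{ride_through_s(energy, load_kw=200):.0f} s of support at a 200 kW load")
# Enough to bridge the 10- to 20-second generator start the sidebar mentions,
# which is the point: ride through short events without cycling the batteries.
```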


The idea of powering data center equipment with DC has generated interest in the industry as a way to save energy in the data center, especially since the release of a 2006 study published by LBNL, which indicated that companies could see a 10% to 20% energy savings if they adopt DC power over AC.

In a traditional system, the utility company sends electricity to a data center in AC, which is easier to distribute in that form over long distances. The AC is converted to DC at the power distribution unit, converted back to AC to begin its path to servers and finally converted back again to DC by each individual server.

In a DC system, there is only one conversion, from the utility (AC) to the DC distribution plant and servers. Fewer conversions mean less energy is lost in the course of distribution.
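A toy conversion-chain calculation shows why the single conversion matters: per-stage efficiencies multiply. The stage efficiencies below are assumptions for illustration, not LBNL's measured numbers, but they land in the same 10% to 20% range the study reports.

```python
# A minimal sketch (assumed per-stage efficiencies, not LBNL's measurements)
# of why fewer conversions help: losses multiply through the chain.

def chain_efficiency(stages):
    eff = 1.0
    for s in stages:
        eff *= s
    return eff

# Traditional AC path: UPS rectifier, UPS inverter, PDU transformer, server PSU.
ac_path = chain_efficiency([0.96, 0.94, 0.97, 0.85])
# DC path: one front-end rectification, then a DC-DC stage in the server.
dc_path = chain_efficiency([0.96, 0.92])

print(f"AC chain: {ac_path:.1%} of utility power reaches the load")
print(f"DC chain: {dc_path:.1%}")
print(f"difference: ~{(dc_path - ac_path) * 100:.0f} percentage points")
```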

But the road to DC is rocky; there are myriad potential pitfalls:

• You can't just go plugging servers into racks with DC. Every time you plug something in, it changes the current draw. In fact, experts say you're going to need an electrical engineer on staff to deal with DC in the data center.

• A DC UPS can cost 20% to 40% more than AC.

• Some users say DC equipment is scarce. Sun Microsystems Inc., Cisco Systems Inc. and Rackable Systems Inc. offer a lot of DC products, but HP, IBM Corp. and Hitachi Data Systems are lacking.

In researching this article, one UPS manufacturer suggested that LBNL compared cutting-edge DC with outdated AC technology. But William Tschudi, project leader at LBNL and longtime data center efficiency advocate, put that rumor to rest.

"We were accepting the equipment vendors loaned us," Tschudi said. "We got their best in class, and [DC power] still saw 10% savings against a very efficient [AC] UPS system."

Nonetheless, the general consensus from UPS vendors is that there are a lot of easier ways to save energy in the data center before reverting to DC to save that little bit more: Time, effort and money can be better spent elsewhere.

Tschudi concedes there are issues around voltage sags, connections and grounding that people are worried about. But companies are overcoming these problems in other applications, and as the price of power skyrockets, more data centers and vendors may explore DC power technology.

Prioritizing. For data center managers planning to implement a green strategy, it's important to have short-, mid- and long-term goals. When it comes to mechanical infrastructure efficiency, the alternatives range from the mundane to the experimental. Near-term strategies include auditing hot-aisle/cold-aisle implementation and raised-floor maintenance. Another tactic is to ensure that the voltage from the PDU to the server is running at 208 V and not 120 V. These approaches are low to no cost.

In the midterm, data center managers should investigate high-efficiency supplemental cooling units for high-density server deployments, and smaller UPS systems for modular growth.

And over the long term, when new construction is warranted, more energy efficiency data is available and standards are in place, companies should investigate economizers, liquid cooling and DC power.

Matt Stansberry has been reporting on the convergence of IT, facility management and energy issues since 2003. Since the Web site's launch in January 2005, he has been writing and editing for SearchDataCenter.com. Prior to that, he was the managing editor of Today's Facility Manager magazine and a staff writer for the U.S. Green Building Council. He can be reached at [email protected].

