TRANSCRIPT
Case Study: Energy Systems Integration Facility (ESIF) at the U.S. Department of Energy’s National Renewable Energy Laboratory (NREL) in Golden, Colorado
HIGH BAY
LABORATORIES
DATA CENTER
OFFICE
1
Case Study Roadmap
• Steve: NREL Introduction
• Steve: Performance Specs & Integrated Process Design – Holistic approach to data center design and integration
• Peter: Cooling and Energy Recovery
• Steve: Preliminary (early) data
• All: Panel Discussion
2
NREL Snapshot
• Leading clean-energy innovation for 37 years
• ~2300 total staff in world-class facilities
• Campus is a living model of sustainable energy
• ~34,000 visitors in FY2013
• Located in Golden, Colorado
• Owned by the Department of Energy
• Operated by the Alliance for Sustainable Energy
Only National Laboratory Dedicated Solely to Energy Efficiency and Renewable Energy
3
Scope of Mission
• Energy Efficiency: Residential Buildings, Commercial Buildings, Data Centers, Personal and Commercial Vehicles
• Renewable Energy: Solar, Wind and Water, Biomass, Hydrogen, Geothermal
• Systems Integration: Grid Infrastructure, Distributed Energy Interconnection, Battery and Thermal Storage, Transportation
• Market Focus: Private Industry, Federal Agencies, Defense Dept., State/Local Govt., International
4
ESIF: Energy Systems Integration Facility
• New 185,000 s.f. research facility
– Office space for 220.
– High bay and laboratory space.
– Data center
• Integrated “chips-to-bricks” approach.
• Process: Design build with performance specs.
• LEED Platinum – Significant achievement for a building with large lab and data center loads.
• Planning started in 2006.
5
NREL Data Center
• Showcase Facility
– 10MW, 10,000 s.f.
– Leverage favorable climate
– Use evaporative cooling, NO mechanical cooling.
– Waste heat captured and used to heat labs & offices.
– World’s most energy efficient data center, PUE 1.06!
• High Performance Computing
– Petascale+ HPC Capability in 2013
– 20 year planning horizon
• 5 to 6 HPC generations.
– Insight Center
• Scientific data visualization
• Collaboration and interaction.
Lower CapEx and lower OpEx.
Leveraged expertise in energy-efficient buildings to focus on a showcase data center.
Integrated chips-to-bricks approach.
Critical Data Center Specs
• Warm-water cooling, 75F (24C).
• Water is a much better working fluid than air – pumps trump fans (see the sketch below).
• Utilize high-quality waste heat, 95F (35C) or warmer.
• 90%+ of IT heat load to liquid.
• Up to 10% of IT heat load to air.
• High-voltage power distribution: 480 VAC, eliminate conversions.
• Think outside the box: don’t be satisfied with an energy-efficient data center nestled on campus surrounded by inefficient laboratory and office buildings.
• Innovate, integrate, optimize.
Dashboards report instantaneous, seasonal, and cumulative PUE values.
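To put rough numbers behind “pumps trump fans,” here is a minimal sketch (not from the slides) comparing the volumetric flow needed to move 1 MW of heat with water versus air, each at roughly a 20 deg F temperature rise. The fluid properties are standard textbook values, not NREL data.

```python
# Rough comparison of water vs. air as a heat-transport fluid.
# Property values are typical textbook figures, not NREL measurements.

heat_load_kw = 1000.0          # 1 MW of IT heat
delta_t_c = 11.1               # ~20 deg F temperature rise across the load

# Volumetric heat capacity (kJ per m^3 per K)
water_rho_cp = 998.0 * 4.18    # ~998 kg/m^3 * 4.18 kJ/kg-K
air_rho_cp = 1.2 * 1.005       # ~1.2 kg/m^3 * 1.005 kJ/kg-K

def flow_m3_per_s(load_kw, rho_cp, dt):
    """Volumetric flow needed so that rho * cp * Q * dT carries the load."""
    return load_kw / (rho_cp * dt)

water_flow = flow_m3_per_s(heat_load_kw, water_rho_cp, delta_t_c)
air_flow = flow_m3_per_s(heat_load_kw, air_rho_cp, delta_t_c)

print(f"Water: {water_flow*1000:.1f} L/s (~{water_flow*15850:.0f} gpm)")
print(f"Air:   {air_flow:.0f} m^3/s (~{air_flow*2119:.0f} cfm)")
print(f"Air needs ~{air_flow/water_flow:.0f}x the volumetric flow of water")
```

The three-orders-of-magnitude gap in volumetric heat capacity is why a few small pumps can replace walls of fans once the heat is captured in liquid.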
System Integration – Designing for Energy Reuse
HIGH PERFORMANCE COMPUTING DATA CENTER
• IT Load Grows from 1 MW up to 10 MW
• Achieving Energy Efficiency Goals Requires:
• Evaporative Hydronic Cooling (Not Compressor Based)
• Maximize Direct Heat to Water, Minimize Use of Air for Cooling
• Maximize Energy Reuse in the Form of Low Grade Heating
OFFICE SPACE and HIGH BAY LABORATORIES
• Require Large Volumes of Ventilation Air (Outside Air)
• Achieving Energy Efficiency Goals Requires:
• Maximize Evaporative Cooling, Minimize Compressor Use
• Utilize Energy Recovery to Reduce Operational Energy Use
• Provide for Static Pressure Reset & Exhaust Stack Velocity Turn-Down via Wind Anemometer Control.
MUTUALLY BENEFICIAL RELATIONSHIP
8
COOLING AND ENERGY RECOVERY
Peter starts here
9
Liquid Cooling Technologies – Direct Contact Liquid Cooling
TYPICAL SYSTEM CONFIGURATION
• A Cooling Distribution Unit (CDU) Isolates the Computer Cooling Liquid (Water) from the Central Building Systems
• The CDUs Circulate Water to Liquid-Cooled Local Heat Sinks (Multiple per Server)
• A Distribution Manifold Brings Water to Each Server (Radiant Piping Technology)
• Supports Higher-Density Solutions with a Minimal Amount of Air Cooling Required
• Water Is in Direct Contact with Electronic Equipment (No Heat-Pipe Interface)
• Higher Water Quality Requirements
• Higher Water-Damage Risk Due to Multiple System Connections and Distribution Piping
10
Benefits of Liquid Cooling – Thermal Stability
DESIGN CONSIDERATIONS
• Due to the High and Variable Heat Loads within the Servers, Manufacturers Favor a Stable, Consistent Temperature Profile
• Cooling Distribution Units (CDUs) Act as a Buffer to Central Building Systems and Give the User Control over Operating Set Points (illustrated in the sketch below)
• Dedicated Central Systems Serving the Data Center Need to Provide BOTH Energy Efficiency & Temperature Stability
11
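The CDU’s buffering role can be pictured as a small control loop: the unit modulates how much return water it sends to the facility heat exchanger so the secondary (server-side) supply temperature holds a user-chosen set point regardless of what the central plant is doing. The sketch below is only a toy illustration of that idea; the gains, temperatures, and thermal model are invented for the example and are not the ESIF control sequence.

```python
# Illustrative-only sketch of a CDU holding a stable secondary-loop supply
# temperature while the IT load swings. All constants are made up.

SETPOINT_F = 75.0   # desired secondary (server) supply temperature
KP = 0.05           # proportional gain on the facility-side valve

def cdu_step(supply_temp_f, it_load_kw, valve_pos):
    """One control step: nudge the valve toward holding the set point.

    valve_pos in [0, 1] is the fraction of return water routed through the
    facility heat exchanger (more flow -> more heat rejected).
    """
    error = supply_temp_f - SETPOINT_F
    valve_pos = min(1.0, max(0.0, valve_pos + KP * error))
    heat_in = 0.002 * it_load_kw    # toy model: temperature rises with load
    heat_out = 2.5 * valve_pos      # and falls with heat rejection
    return supply_temp_f + heat_in - heat_out, valve_pos

temp, valve = 75.0, 0.5
for load in [600, 600, 900, 900, 900, 400, 400]:   # stepping IT load, kW
    temp, valve = cdu_step(temp, load, valve)
    print(f"load={load:4d} kW  supply={temp:5.1f} F  valve={valve:.2f}")
```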
NREL – Energy Systems Integration Facility
Design Conditions and Capacity
• Liquid Cooling
– 100% of total capacity
– 75 deg F chilled water supply
– Minimum 95 deg F return temp
• Design Capacity
– Day One: 1.0 MW IT load
– Final: 10 MW IT load
• Air Cooling
– 20% of total capacity (Day One: 10% capacity)
– 80 deg F inlet air / 25% RH minimum / 60% RH (42 deg F dew point) maximum
– 100 deg F return temp
NREL – Energy Systems Integration Facility
14
15
45 deg F Chilled Water System
29% of the Year in Full Water-Side Economizer
16
75 deg F “Chilled” Water System
100% of the Year in Full Water-Side Economizer
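The jump from 29% to 100% economizer hours follows directly from the supply-temperature choice: cooling-tower water can meet the load without a chiller whenever the ambient wet-bulb temperature plus the tower/heat-exchanger approach stays at or below the required supply temperature. The helper below just expresses that test; the approach value is an assumed round number for illustration, not an ESIF design figure.

```python
# Simple water-side economizer feasibility test.
# The approach temperature is an assumed placeholder value.

APPROACH_F = 10.0   # assumed combined tower + heat-exchanger approach

def can_economize(wet_bulb_f, supply_setpoint_f, approach_f=APPROACH_F):
    """Tower water can satisfy the load if wet bulb + approach <= setpoint."""
    return wet_bulb_f + approach_f <= supply_setpoint_f

for wb in [35.0, 55.0, 62.0, 68.0]:
    print(f"wet bulb {wb:4.0f} F -> 45F CHW: {can_economize(wb, 45.0)!s:5}  "
          f"75F CHW: {can_economize(wb, 75.0)}")
```

With a 45 deg F set point the test fails for most of the year; with a 75 deg F set point it passes for essentially all wet-bulb conditions a dry climate like Golden sees.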
System     | Cooling Load (kW, total) | Cooling Load (kW, per rack) | % Water-Cooled | Supply Water Temp (F) | Water Delta T (F)  | Water Press. Drop (PSI) | Air Inlet/Outlet Temp (F)
System I   | 942.9                    | 112                         | 96.5%          | 70-75                 | 25-30 (100 F RWT)  | 20                      | 80 / -
System II  | 398                      | 51                          | 86%            | 75                    | 10 (85 F RWT)      | 15                      | 80 / 87
System III | 216                      | 100                         | 91%            | 75                    | 27.1 (102.1 F RWT) | 21                      | 80 / 96.5
System IV  | 416-506                  | 44.3-55.3                   | 100%           | 75                    | 20+                | 14.5                    | - / -
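As a reader-side sanity check on the table, the water flow each system implies follows from q = m_dot * cp * dT, and the return water temperature is just supply plus delta T. The sketch below recomputes both from the published load, water-cooled fraction, and delta-T columns, taking one representative value where the table gives a range; it is back-of-the-envelope arithmetic, not vendor data.

```python
# Recompute per-system water flow and return temperature from the table
# (q = m_dot * cp * dT). Ranges are collapsed to one representative value.

CP_BTU_PER_LB_F = 1.0        # specific heat of water
LB_PER_GAL = 8.34
KW_TO_BTU_PER_HR = 3412.14

systems = {
    # name: (water-cooled load kW, supply F, delta T F)
    "System I":   (942.9 * 0.965, 75.0, 25.0),
    "System II":  (398.0 * 0.86,  75.0, 10.0),
    "System III": (216.0 * 0.91,  75.0, 27.1),
    "System IV":  (506.0 * 1.00,  75.0, 20.0),
}

for name, (load_kw, supply_f, dt_f) in systems.items():
    btu_hr = load_kw * KW_TO_BTU_PER_HR
    gpm = btu_hr / (60 * LB_PER_GAL * CP_BTU_PER_LB_F * dt_f)
    print(f"{name}: ~{gpm:5.0f} gpm at {dt_f:.1f} F rise "
          f"-> return ~{supply_f + dt_f:.1f} F")
```

The computed return temperatures land on the RWT values quoted in the table (for example, 75 + 27.1 = 102.1 F for System III).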
17
18
19
20
21
Heat of Vaporization & Evaporative Cooling
22
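The point of this slide is that evaporating water absorbs a large amount of heat, roughly 1,000 Btu per pound (about 2.4 MJ/kg) at typical tower conditions. The sketch below estimates the make-up water evaporated to reject 1 MW this way; it is back-of-the-envelope arithmetic under assumed properties, not an ESIF water-use figure.

```python
# How much water does rejecting 1 MW of heat purely by evaporation imply?
# Latent heat is a typical value near cooling-tower operating temperatures.

LATENT_HEAT_KJ_PER_KG = 2400.0   # approx. heat of vaporization of water
KG_PER_GALLON = 3.785

heat_load_kw = 1000.0            # 1 MW rejected through the towers

kg_per_s = heat_load_kw / LATENT_HEAT_KJ_PER_KG      # kJ/s over kJ/kg
gal_per_min = kg_per_s / KG_PER_GALLON * 60
gal_per_day = gal_per_min * 60 * 24

print(f"Evaporation rate: {kg_per_s:.2f} kg/s "
      f"(~{gal_per_min:.1f} gpm, ~{gal_per_day:,.0f} gal/day)")
```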
NREL – Energy Systems Integration Facility
Heat Recovery System
Recovered Heat to Lab and Office AHUs
NREL – Energy Systems Integration Facility
[Chart: “NREL-ESIF Data Center Energy Recovery at 1 MW IT Load” – annual energy (GWh/yr) for the energy targets (PUE 1.06, EUE 0.9) versus the current design (PUE 1.05, EUE 0.7), split into data center equipment load and thermal energy recovered.]
[Chart: “Annual Heat Demand & Recovery” – heat (MBtu) for campus hot water demand, heat for export to campus, and heat recovered, shown at EUE 0.9 and EUE 0.7.]
Energy Usage Effectiveness (EUE) = (Total Data Center Annual Energy – Total Energy Recovered) / Total IT Equipment Annual Energy
“We want the bytes AND the BTUs!”
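Plugging illustrative numbers into the EUE definition above shows how recovering waste heat pulls the metric well below the PUE. The annual-energy figures below are assumed round numbers; the recovery fraction is chosen only so the result lands near the slide’s 0.7 design value, and none of it is measured ESIF data.

```python
# Worked example of the EUE definition on this slide:
# EUE = (total data center annual energy - energy recovered) / IT annual energy
# All inputs are illustrative assumptions.

HOURS_PER_YEAR = 8760

it_load_kw = 1000.0      # steady 1 MW IT load
pue = 1.06               # design power usage effectiveness

it_energy_mwh = it_load_kw * HOURS_PER_YEAR / 1000       # ~8,760 MWh
facility_energy_mwh = it_energy_mwh * pue                 # ~9,286 MWh
recovered_energy_mwh = 0.36 * it_energy_mwh               # assumed recovery

eue = (facility_energy_mwh - recovered_energy_mwh) / it_energy_mwh
print(f"IT energy:       {it_energy_mwh:8.0f} MWh/yr")
print(f"Facility energy: {facility_energy_mwh:8.0f} MWh/yr (PUE {pue})")
print(f"Heat recovered:  {recovered_energy_mwh:8.0f} MWh/yr (assumed)")
print(f"EUE = {eue:.2f}")
```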
STEVE TAKES BACK TO CLOSE
25
ESIF HPC Datacenter PUE Calculation
NREL ESIF – Data Center Energy Efficiency Metrics (Typical for HPC Racks)
[One-line metering diagram. Every branch of the data center electrical distribution carries an electrical power meter (EPM); pump and tower power without a dedicated meter is calculated from runtime hours. Normal power flows from the service entrance switchboard through automatic transfer switches (ATS-U1SB, ATS-LS, ATS-CPLE), the UPS (UPM1 & UPM2), and distribution boards (DSB-U1SB, DSB-MS1, DSB-CP, DSB-CPE, DSB-DC1 through DSB-DC4, DSB-DCU) to five metered categories: lighting & plug power (data center panels ESL2, SL3, SP3); cooling loads (cooling towers CT-602A/B/C/D plus tower filtration unit and pipe trace heaters); pump loads (tower water pumps P-602A/B and energy recovery pumps P-604A/B); HVAC loads (air-handling units AHU-DC1 and AHU-DC2, make-up air-handling unit MAU-DC1 with evaporative cooling pump); and IT equipment power (cooling distribution racks CDU-1 through CDU-6, legacy and HPC power distribution units, and HPC flex racks #9 through #12). A stand-by generator with engine block and fuel heaters serves the stand-by loads, and recovered energy beneficially used outside the data center leaves through heat exchanger HX-605A.]

Power Use Effectiveness (PUE)
Data center energy efficiency is benchmarked using the industry-standard metric of Power Use Effectiveness (PUE), defined as:
PUE = Total Facility Power / IT Equipment Power
Total Facility Power = Lighting & Plug Power + Cooling Loads + Pump Loads + HVAC Loads + IT Equipment Power
IT Equipment Power = total power used to manage, process, store, or route data within the data center

Energy Use Effectiveness (EUE)
Energy Use Effectiveness (EUE) is a metric of recovered energy beneficially used outside of the data center (heating), defined as:
EUE = (Total Facility Energy – Recovered Energy) / Total Facility Energy
Total Facility Energy = Lighting & Plug Energy + Cooling Loads + Pump Loads + HVAC Loads + IT Equipment Energy
IT Equipment Energy = total energy used to manage, process, store, or route data within the data center
Preliminary ESIF HPC Datacenter PUE
NREL ESIF – Data Center Energy Efficiency Metrics (Typical for HPC Racks)
[Same one-line metering diagram and PUE/EUE definitions as the previous slide, here annotated with preliminary measured power for each load category.]
Measured loads (kW): IT Equipment 636; Lighting, Plug Load, Misc. 6; Evaporative Cooling Towers 2; Pumps 34; Fan Walls 7.
PUE = (6 + 2 + 34 + 7 + 636) / 636 = 1.077!
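The same arithmetic as a small sketch, using the measured component loads reported on this slide:

```python
# Preliminary PUE from the metered component loads on this slide (kW).

loads_kw = {
    "IT equipment":               636,
    "Lighting, plug, misc.":        6,
    "Evaporative cooling towers":   2,
    "Pumps":                       34,
    "Fan walls":                    7,
}

total_facility_kw = sum(loads_kw.values())
pue = total_facility_kw / loads_kw["IT equipment"]
print(f"Total facility load: {total_facility_kw} kW")
print(f"PUE = {total_facility_kw} / {loads_kw['IT equipment']} = {pue:.3f}")
```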
Green Data Center Bottom Line
IT load to energy recovery: NO mechanical chillers; waste heat heats ESIF offices, labs, and ventilation air (saves ~$200K / year).

CapEx – No Chillers ($1.5K / ton):
        | Initial Build: 600 tons | 10-Yr. Growth: 2,400 tons | 10-Year Savings
Savings | $0.9M                   | $3.6M                     | $4.5M

OpEx (10 MW IT Load, $1M / MW-year):
          | PUE of 1.3 | PUE of 1.06 | Annual Savings | 10-Year Savings
Utilities | $13M       | $10.6M      | $2.4M          | $24M (excludes heat recovery benefit)

Evaporative water towers cost less to build and cost less to operate.
Comparison is ESIF at PUE 1.06 vs. an efficient PUE 1.3 data center.
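The CapEx and OpEx lines above reduce to two multiplications. A minimal sketch reproducing them from the slide’s stated unit costs ($1.5K per ton of avoided chiller capacity, roughly $1M per MW-year of utility cost):

```python
# Reproduce the slide's bottom-line savings from its stated unit costs.

# CapEx: avoided chiller capacity at $1.5K per ton
CHILLER_COST_PER_TON = 1_500
capex_initial = 600 * CHILLER_COST_PER_TON     # day-one build
capex_growth = 2_400 * CHILLER_COST_PER_TON    # 10-year growth
print(f"Avoided chiller CapEx: ${capex_initial/1e6:.1f}M + "
      f"${capex_growth/1e6:.1f}M = ${(capex_initial + capex_growth)/1e6:.1f}M")

# OpEx: utilities at ~$1M per MW-year for a 10 MW IT load
IT_MW, COST_PER_MW_YEAR = 10, 1_000_000
for pue in (1.30, 1.06):
    annual = IT_MW * pue * COST_PER_MW_YEAR
    print(f"PUE {pue:.2f}: ${annual/1e6:.1f}M utilities per year")

annual_savings = IT_MW * (1.30 - 1.06) * COST_PER_MW_YEAR
print(f"Annual savings ${annual_savings/1e6:.1f}M, "
      f"10-year savings ${annual_savings*10/1e6:.0f}M "
      f"(excludes heat-recovery benefit)")
```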
What’s Next – System Integration, Demand Response, and Energy Management
MAXIMIZE ENERGY EFFICIENCY & REUSE
• Maximize Beneficial Daylighting, Minimize Lighting Loads
• Active Radiant (Chilled) Beams – Perimeter Cooling & Heating
• Underfloor Air, Natural Ventilation, & Solar-Powered Relief Fans
ENERGY MANAGEMENT & DEMAND RESPONSE
• Go beyond power capping
• Energy management
• Alter workload to meet opportunity (see the sketch below).
• Alter workload to minimize impact.
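“Alter workload to meet opportunity” can be read as scheduling deferrable HPC work into the hours when power is cheap or renewable supply is abundant. The greedy scheduler below is only a conceptual sketch of that idea under invented hourly prices; it is not an NREL energy-management system, and the price curve, job size, and per-hour capacity are all assumptions.

```python
# Conceptual sketch of demand-responsive scheduling: place deferrable
# job-hours into the cheapest hours of the day. Prices are invented.

hourly_price = [70, 65, 60, 58, 55, 60, 75, 95,      # $/MWh, assumed
                100, 90, 70, 55, 45, 42, 45, 55,
                80, 120, 135, 125, 110, 95, 85, 75]

deferrable_job_hours = 8      # MW-hours of work that can run at any time
capacity_per_hour = 1         # at most 1 MW of deferrable load per hour

# Greedy: run the deferrable work in the cheapest hours first.
cheapest_hours = sorted(range(24), key=lambda h: hourly_price[h])
schedule = sorted(cheapest_hours[:deferrable_job_hours])

cost = sum(hourly_price[h] * capacity_per_hour for h in schedule)
baseline = sum(hourly_price[:deferrable_job_hours])   # run as soon as submitted
print(f"Run deferrable work in hours {schedule}")
print(f"Cost ${cost} vs ${baseline} if run in the first "
      f"{deferrable_job_hours} hours of the day")
```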
29
30