
Alex Shye, Berkin Ozisikyilmaz, Arindam Mallik, Gokhan Memik, Peter A. Dinda, Robert P. Dick, and Alok N. Choudhary

Northwestern University, EECS

International Symposium on Computer Architecture, June 2008. Beijing, China.

Findings/Contributions

1. User satisfaction is correlated with CPU performance

2. User satisfaction is non-linear, application-dependent, and user-dependent

3. We can use hardware performance counters to learn and leverage user satisfaction, optimizing power consumption while maintaining satisfaction

Claim: Any optimization ultimately exists to satisfy the end user

Claim: Current architectures largely ignore the individual user

1. User-centric applications

2. Architectural trade-offs exposed to the user

3. Optimization opportunity: user variation = optimization potential

[Figure: two plots against performance level: your favorite metric (IPS, throughput, etc.) on one, and user satisfaction on the other, with question marks marking the unknown shape of the satisfaction curve]

[Figure: two-step approach: learn the relationship between user satisfaction and hardware performance, then leverage that knowledge for optimization]

Hardware performance counters are supported on all modern processors

Low overhead; non-intrusive

WinPAPI interface; 100Hz

For each HPC: maximum, minimum, standard deviation, range, and average
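To make the sampling setup concrete, here is a minimal Python sketch of a 100 Hz counter-sampling loop that reduces each counter to the five statistics listed above. The read_counters() helper is a hypothetical stand-in for the WinPAPI calls; its counter names and values are placeholders, not the paper's actual instrumentation.

```python
import random
import statistics
import time

def read_counters():
    # Placeholder for the WinPAPI query used in the paper; returns fake
    # values here so the sketch runs stand-alone.
    return {"TOT_CYC": random.randint(0, 10**7),
            "BTAC_M": random.randint(0, 10**4)}

def sample_hpcs(duration_s, rate_hz=100):
    """Collect HPC samples and reduce each counter to the five statistics
    used per epoch: maximum, minimum, standard deviation, range, average."""
    samples = {}                      # counter name -> list of observed values
    period = 1.0 / rate_hz
    end = time.time() + duration_s
    while time.time() < end:
        for name, value in read_counters().items():
            samples.setdefault(name, []).append(value)
        time.sleep(period)

    stats = {}
    for name, values in samples.items():
        stats[name] = {
            "max": max(values),
            "min": min(values),
            "stdev": statistics.pstdev(values),
            "range": max(values) - min(values),
            "avg": statistics.fmean(values),
        }
    return stats
```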

IBM ThinkPad T43p: Pentium M with Intel SpeedStep, supporting 6 frequencies (800 MHz to 2.2 GHz)

Two user studies, 20 users each: the first to learn about user satisfaction, the second to show we can leverage it

Three multimedia/interactive applications: Java game (a first-person-shooter tank game), Shockwave (a 3D Shockwave animation), and Video (DVD-quality MPEG video)

Goal: Learn the relationship between HPCs and user satisfaction

How: Randomly change performance/frequency, collect HPCs, and ask the user for their satisfaction rating

Compare each set of HPC values with user satisfaction ratings: 360 satisfaction levels collected (20 users, 6 frequencies, 3 applications), with 45 metrics per satisfaction level

Pearson's product-moment correlation coefficient (r): -1 indicates negative linear correlation, +1 indicates positive linear correlation

Strong correlation: 21 of the 45 metrics have r values over 0.7

r_{x,y} = \frac{N\sum xy - (\sum x)(\sum y)}{\sqrt{[N\sum x^2 - (\sum x)^2][N\sum y^2 - (\sum y)^2]}}
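As an illustration, a small Python sketch of the correlation step: it computes Pearson's r from the formula above and flags metrics whose r exceeds 0.7, as in the study. The variable and function names are illustrative, not taken from the authors' code.

```python
from math import sqrt

def pearson_r(x, y):
    """r = (N*sum(xy) - sum(x)*sum(y)) /
           sqrt((N*sum(x^2) - sum(x)^2) * (N*sum(y^2) - sum(y)^2))"""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(a * a for a in x)
    syy = sum(b * b for b in y)
    denom = sqrt((n * sxx - sx * sx) * (n * syy - sy * sy))
    return (n * sxy - sx * sy) / denom

# Example: flag the strongly correlated metrics (r > 0.7), as in the study.
# metric_values[name] would hold the 360 per-sample values for that metric.
# strong = [m for m, vals in metric_values.items()
#           if pearson_r(vals, satisfaction_ratings) > 0.7]
```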

Combine all user data

Fit a neural network with inputs: HPCs and user ID; output: user satisfaction

Observe relative importance factor

The user ID is more than twice as important as the second-most important factor

User satisfaction is highly user-specific!

[Figure: neural network mapping HPCs and user ID to user satisfaction]

User satisfaction is often non-linear

User satisfaction is application-specific

Most importantly, user satisfaction is user-specific

Observations: user satisfaction is non-linear, application-dependent, and user-dependent

All three represent optimization potential!

Based on observations, we construct Individualized DVFS (iDVFS)

Dynamic voltage and frequency scaling (DVFS) is effective for reducing power consumption

Common DVFS schemes (e.g., Windows XP DVFS, the Linux ondemand governor) are based on CPU utilization
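For contrast with iDVFS, here is a rough sketch of the utilization-threshold policy that such governors approximate (not the actual Windows XP or ondemand code). The 20%/80% thresholds and the intermediate frequency steps are assumptions for illustration; only the 800 MHz and 2.2 GHz endpoints come from the slides.

```python
# Six frequency steps; endpoints from the slides, intermediate values illustrative.
AVAILABLE_FREQS_MHZ = [800, 1100, 1400, 1700, 2000, 2200]

def utilization_governor(cpu_utilization, current_freq_mhz):
    """Scale up when the CPU looks busy and down when it is idle,
    with no notion of whether the user actually perceives a difference."""
    i = AVAILABLE_FREQS_MHZ.index(current_freq_mhz)
    if cpu_utilization > 0.80 and i < len(AVAILABLE_FREQS_MHZ) - 1:
        return AVAILABLE_FREQS_MHZ[i + 1]   # busy: raise frequency
    if cpu_utilization < 0.20 and i > 0:
        return AVAILABLE_FREQS_MHZ[i - 1]   # idle: lower frequency
    return current_freq_mhz
```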

[Figure: iDVFS framework. Learning/Modeling Stage: hardware counter statistics and user satisfaction feedback are used to build a correlation network, the user-aware performance prediction model. Runtime Power Management: hardware counter statistics feed the prediction model to drive predictive user-aware dynamic frequency scaling.]

Train per-user and per-application with a small training set

Two modifications to neural network training:

▪ Limit inputs to the two highest-correlation HPCs: BTAC_M average and TOT_CYC average

▪ Repeat the training runs and keep the most accurate network

[Figure: neural network mapping the two HPC statistics to predicted user satisfaction]
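A minimal sketch of this per-user, per-application training step, assuming scikit-learn's MLPRegressor as a stand-in for the authors' neural-network code. The hidden-layer size and the number of restarts are assumptions; the two inputs (BTAC_M average and TOT_CYC average) and the keep-the-most-accurate-network rule follow the slides.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

def train_satisfaction_model(btac_m_avg, tot_cyc_avg, ratings, restarts=10):
    """Train several small networks on the user's ratings and keep the most
    accurate one, mirroring the repeated-training modification above."""
    X = np.column_stack([btac_m_avg, tot_cyc_avg])   # two HPC statistics per epoch
    y = np.asarray(ratings, dtype=float)             # satisfaction rating per epoch

    best_model, best_err = None, float("inf")
    for seed in range(restarts):
        model = MLPRegressor(hidden_layer_sizes=(4,),  # small net; size is assumed
                             max_iter=2000, random_state=seed)
        model.fit(X, y)
        err = np.mean((model.predict(X) - y) ** 2)     # error on the small training set
        if err < best_err:
            best_model, best_err = model, err
    return best_model
```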

ρ: user satisfaction tradeoff threshold; αf: per-frequency threshold; M: maximum user satisfaction

Greedy approach: make a prediction every 500 ms. If the predicted user satisfaction is within αf·ρ of M twice in a row, decrease the frequency; if not, increase the frequency and decrease αf to prevent ping-ponging between frequencies.
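Putting the parameters together, a hedged sketch of the greedy controller. The 500 ms period, the αf·ρ threshold, and the two-in-a-row rule follow the slides; the frequency table, the initial αf values, the halving of αf, and the prediction interface (compatible with the model sketch above) are assumptions.

```python
class GreedyIDVFS:
    def __init__(self, freqs_mhz, model, rho, max_satisfaction):
        self.freqs = freqs_mhz              # available frequency steps
        self.model = model                  # predicts satisfaction from HPC stats
        self.rho = rho                      # satisfaction tradeoff threshold
        self.M = max_satisfaction           # maximum user satisfaction
        self.alpha = [1.0] * len(freqs_mhz) # per-frequency thresholds (assumed init)
        self.level = len(freqs_mhz) - 1     # start at the highest frequency
        self.streak = 0                     # consecutive "satisfied" predictions

    def step(self, hpc_sample):
        """Called every 500 ms with the latest HPC statistics."""
        predicted = self.model.predict([hpc_sample])[0]
        threshold = self.M - self.alpha[self.level] * self.rho

        if predicted >= threshold:          # satisfaction close enough to M
            self.streak += 1
            if self.streak >= 2 and self.level > 0:
                self.level -= 1             # twice in a row: step frequency down
                self.streak = 0
        else:
            self.streak = 0
            self.alpha[self.level] *= 0.5   # shrink alpha_f (assumed halving) to avoid ping-ponging
            if self.level < len(self.freqs) - 1:
                self.level += 1             # step frequency back up
        return self.freqs[self.level]
```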

Goal: Evaluate iDVFS with real users

How: Users use each application with iDVFS and with Windows XP DVFS, in random order

Afterwards, users are asked to rate each one

Frequency logs are maintained throughout the experiments and replayed through a National Instruments DAQ to measure system power

iDVFS can scale frequency effectively based upon user satisfaction

In this case, we slightly decrease power compared to Windows DVFS

iDVFS significantly reduces power consumption; here, CPU utilization is not equal to user satisfaction

No change in user satisfaction, significant power savings

Same user satisfaction, same power savings

Red: Users gave high ratings to lower frequencies

Dashed black: the neural network model performed poorly

Lowered user satisfaction, improved power

Blue: users gave constant ratings during training

Slight increase in ESP; the benefits in energy reduction outweigh the loss in user satisfaction with respect to ESP

We explore user satisfaction relative to actual hardware performance

Show the correlation between HPCs and user satisfaction for interactive applications

Show that user satisfaction is generally non-linear, application-, and user-specific

Demonstrate an example of leveraging user satisfaction to reduce power consumption by over 25%

Questions?

For more information, please visit: http://www.empathicsystems.org
