Mor Harchol-Balter
Carnegie Mellon University
School of Computer Science

Post on 19-Dec-2015

TRANSCRIPT

Page 1:

Mor Harchol-Balter
Carnegie Mellon University
School of Computer Science

Page 2:

"size" = service requirement; load < 1.

[Diagram: the same stream of jobs feeding three single-server queues: SRPT, PS, and FCFS.]

Q: Which minimizes mean response time?
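The question above can be checked with a toy single-server simulation. This is only a minimal sketch, not the talk's analysis: it assumes Poisson arrivals and, for simplicity, exponentially distributed job sizes at load 0.7, stepping between arrival and completion events.

```python
import random

def drain(jobs, policy, dt):
    """Advance the work in the system by dt seconds under the given policy."""
    if not jobs:
        return
    if policy == "PS":
        share = dt / len(jobs)          # every job gets an equal share
        for j in jobs:
            j[0] -= share
    elif policy == "FCFS":
        jobs[0][0] -= dt                # head of the line gets the full server
    else:                               # SRPT
        min(jobs)[0] -= dt              # smallest remaining work gets the server

def simulate(policy, n=40000, lam=0.7, seed=1):
    """Mean response time of a single-server queue under FCFS, PS, or SRPT."""
    rng = random.Random(seed)
    t, next_arr, arrived = 0.0, rng.expovariate(lam), 0
    jobs, resp = [], []                 # jobs: [remaining_work, arrival_time]
    while arrived < n or jobs:
        if jobs:
            # job that would complete next, and when it would complete
            j = jobs[0] if policy == "FCFS" else min(jobs)
            t_done = t + j[0] * (len(jobs) if policy == "PS" else 1)
        else:
            t_done = float("inf")
        if arrived < n and next_arr < t_done:
            drain(jobs, policy, next_arr - t)
            t = next_arr
            jobs.append([rng.expovariate(1.0), t])   # Exp(1) job sizes
            arrived += 1
            next_arr = t + rng.expovariate(lam)
        else:
            drain(jobs, policy, t_done - t)
            t = t_done
            jobs.remove(j)              # j has drained to ~0: it completes now
            resp.append(t - j[1])
    return sum(resp) / len(resp)
```

Even with exponential sizes SRPT comes out ahead on the mean; with heavy-tailed sizes, as in web file sizes, its advantage grows much larger.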

Page 3:

"size" = service requirement; load < 1.

[Diagram: the same stream of jobs feeding three single-server queues: SRPT, PS, and FCFS.]

Q: Which best represents scheduling in web servers?

Page 4:

IDEA: How about using SRPT instead of PS in web servers?

[Diagram: clients 1-3 send "Get File 1", "Get File 2", "Get File 3" over the Internet to a web server (Apache) running on the Linux O.S.]

Page 5:

Many servers receive mostly static web requests ("GET FILE").

For static web requests, we know the file size, so we approximately know the service requirement of the request.

Immediate objections:
1) Can't assume known job size.
2) But the big jobs will starve ...

Page 6:

Outline of Talk

THEORY (M/G/1):
• [Sigmetrics 01] "Analysis of SRPT: Investigating Unfairness"
• [Performance 02] "Asymptotic Convergence of Scheduling Policies ..."
• [Sigmetrics 03*] "Classifying Scheduling Policies wrt Unfairness ..."

IMPLEMENT:
• [TOCS 03] "Size-based Scheduling to Improve Web Performance"
• [ITC 03*, TOIT 06] "Web servers under overload: How scheduling helps"
• [ICDE 04,05,06] "Priority Mechanisms for OLTP and Web Apps" (IBM/CMU Patent)

With: Schroeder, Wierman.

www.cs.cmu.edu/~harchol/

Page 7:

THEORY: SRPT has a long history ...

1966: Schrage & Miller derive the M/G/1/SRPT response time:

  E[T(x)]_SRPT = λ [ ∫₀ˣ t² f(t) dt + x² F̄(x) ] / [ 2(1 − ρ(x))² ] + ∫₀ˣ dt / (1 − ρ(t)),
  where ρ(x) = λ ∫₀ˣ t f(t) dt.

1968: Schrage proves optimality.

1979: Pechinkin & Solovyev & Yashkov generalize.

1990: Schassberger derives the distribution on queue length.

BUT WHAT DOES IT ALL MEAN?

Page 8:

THEORY: SRPT has a long history (cont.)

1990-97: 7-year-long study at Univ. of Aachen under Schreiber. SRPT WINS BIG ON MEAN!

1998, 1999: Slowdown for SRPT under adversary: Rajmohan, Gehrke, Muthukrishnan, Rajaraman, Shaheen, Bender, Chakrabarti, etc. SRPT STARVES BIG JOBS!

Various O.S. books (Silberschatz, Stallings, Tanenbaum) warn about starvation of big jobs ...

Kleinrock's Conservation Law: "Preferential treatment given to one class of customers is afforded at the expense of other customers."

Page 9:

THEORY: Unfairness Question

Let ρ = 0.9. Let G: Bounded Pareto(α = 1.1, max = 10^10).

[Diagram: the same arrival stream feeding two M/G/1 queues, one SRPT and one PS.]

Question: Which queue does the biggest job prefer?
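The Bounded Pareto distribution above is heavy-tailed: a tiny fraction of the largest jobs carries most of the load. A quick inverse-transform-sampling check (the minimum size k = 1 here is a hypothetical choice; the slide fixes only α = 1.1 and max = 10^10):

```python
import random

def bounded_pareto(rng, alpha=1.1, k=1.0, p=1e10):
    """Inverse-transform sample from Bounded Pareto(alpha) on [k, p].
    CDF: F(x) = (1 - (k/x)^alpha) / (1 - (k/p)^alpha)."""
    u = rng.random()
    return k / (1 - u * (1 - (k / p) ** alpha)) ** (1 / alpha)

rng = random.Random(42)
sizes = sorted(bounded_pareto(rng) for _ in range(200_000))
top_1pct = sizes[int(0.99 * len(sizes)):]     # the largest 1% of jobs
share = sum(top_1pct) / sum(sizes)            # fraction of total work they carry
print(f"largest 1% of jobs carry {share:.0%} of the total work")
```

With these parameters the largest 1% of jobs typically accounts for well over half the work, which is exactly why which policy the big jobs "prefer" matters so much.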

Page 10:

Results on Unfairness

Let ρ = 0.9. Let G: Bounded Pareto(α = 1.1, max = 10^10).

[Plot: per-size performance under SRPT vs. PS for this workload.]

Page 11:

Unfairness - General Distribution

All-can-win theorem: For all distributions, if ρ ≤ 1/2, then

  E[T(x)]_SRPT ≤ E[T(x)]_PS for all x.

Page 12:

All-can-win theorem: For all distributions, if ρ ≤ 1/2, then

  E[T(x)]_SRPT ≤ E[T(x)]_PS for all x.

Proof idea: bound the two SRPT terms against the PS total:

  Waiting time (SRPT):  λ [ ∫₀ˣ t² f(t) dt + x² F̄(x) ] / [ 2(1 − ρ(x))² ]
  Residence (SRPT):     ∫₀ˣ dt / (1 − ρ(t))
  Total (PS):           x / (1 − ρ)
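The proof-idea formulas can be evaluated numerically. A sketch for Exp(1) job sizes at load ρ = 0.4 (≤ 1/2, so the theorem applies), using closed forms for the Exp(1) integrals and a trapezoid rule for the residence term:

```python
import math

lam = rho = 0.4                     # Exp(1) sizes: E[S] = 1, so lam = rho

def rho_x(x):
    """Load from jobs of size <= x: lam * int_0^x t e^-t dt (closed form)."""
    return lam * (1 - math.exp(-x) * (1 + x))

def srpt_T(x, steps=2000):
    """E[T(x)] under SRPT = waiting time + residence time."""
    # int_0^x t^2 e^-t dt has the closed form 2 - e^-x (x^2 + 2x + 2)
    m2 = 2 - math.exp(-x) * (x * x + 2 * x + 2)
    wait = lam * (m2 + x * x * math.exp(-x)) / (2 * (1 - rho_x(x)) ** 2)
    h = x / steps                   # trapezoid rule for int_0^x dt / (1 - rho(t))
    res = sum((1 / (1 - rho_x(i * h)) + 1 / (1 - rho_x((i + 1) * h))) * h / 2
              for i in range(steps))
    return wait + res

def ps_T(x):
    """E[T(x)] under PS is exactly x / (1 - rho)."""
    return x / (1 - rho)
```

Sweeping x confirms that at this load every job size, including the largest, does at least as well under SRPT as under PS.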

Page 13:

Classification of Scheduling Policies

[Diagram: a spectrum from Always Unfair through Sometimes Unfair to Always Fair, placing age-based policies (FB), preemptive size-based policies (PSJF), remaining-size-based policies (SRPT), non-preemptive policies (FCFS, LJF, SJF), LRPT, and the fair policies PS, PLCFS, and FSP.] [Sigmetrics 01, 03]

[Sigmetrics 04] Related fairness work:
• Henderson's FSP (Cornell): both FAIR & efficient.
• Levy's RAQFM (Tel Aviv): size + temporal fairness.
• Biersack's, Bonald's flow fairness (France).
• Nunez, Borst TCP/DPS fairness (EURANDOM).

Page 14:

IMPLEMENT: From theory to practice. What does SRPT mean within a Web server?

• Many devices: Where to do the scheduling?
• No longer one job at a time.

Page 15:

IMPLEMENT: Server's Performance Bottleneck

[Diagram: clients 1-3 send "Get File 1/2/3" requests; the web server (Apache on the Linux O.S.) reaches the rest of the Internet through its ISP. The site buys a limited fraction of the ISP's bandwidth.]

We model the bottleneck by limiting bandwidth on the server's uplink.

Page 16:

IMPLEMENT: Network/O.S. insides of a traditional Web server

[Diagram: the Web server writes into Sockets 1-3, which drain through the Network Card (the BOTTLENECK) to Clients 1-3.]

Sockets take turns draining --- FAIR = PS.

Page 17:

IMPLEMENT: Network/O.S. insides of our improved Web server

[Diagram: the Web server writes into Sockets 1-3, which now drain through priority queues (1st, 2nd, 3rd -- S, M, L) before the Network Card (the BOTTLENECK) to Clients 1-3.]

The socket corresponding to the file with the smallest remaining data gets to feed first.
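The "smallest remaining data feeds first" rule can be sketched with a heap keyed on remaining bytes. This is purely illustrative (class and names are hypothetical, not the actual kernel/Apache modifications):

```python
import heapq

class SrptFeeder:
    """Always feed the socket whose file has the least remaining bytes."""
    def __init__(self):
        self._heap = []               # entries: (remaining_bytes, seq, socket_id)
        self._seq = 0                 # tie-breaker so ids are never compared
    def add(self, socket_id, remaining_bytes):
        heapq.heappush(self._heap, (remaining_bytes, self._seq, socket_id))
        self._seq += 1
    def feed_next(self, chunk=1448):  # roughly one TCP segment of payload
        remaining, _, sid = heapq.heappop(self._heap)
        sent = min(chunk, remaining)
        if remaining - sent > 0:      # not finished: re-queue at its new size
            self.add(sid, remaining - sent)
        return sid, sent
```

Note that a nearly-finished large file naturally overtakes a freshly started medium one, which is exactly the "remaining" in SRPT.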

Page 18:

Experimental Setup

Implementation of SRPT-based scheduling:
1) Modifications to the Linux O.S.: 6 priority levels.
2) Modifications to the Apache Web server.
3) Priority algorithm design.

[Diagram: the Apache Web server on the Linux O.S. (priority queues 1, 2, 3), connected via a switch to client machines, each running Linux with a WAN emulator (WAN EMU).]
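The "priority algorithm design" step amounts to choosing size cutoffs that partition requests among the 6 priority levels. One plausible sketch -- the equal-count-quantile rule here is an assumption for illustration, not the paper's actual cutoff algorithm:

```python
import bisect

def quantile_cutoffs(sample_sizes, levels=6):
    """Pick levels-1 size cutoffs so each band gets ~equal request counts."""
    s = sorted(sample_sizes)
    return [s[len(s) * i // levels] for i in range(1, levels)]

def priority_of(size, cutoffs):
    """Map a (remaining) size to a band: 0 = highest priority = smallest files."""
    return bisect.bisect_right(cutoffs, size)
```

Cutoffs would be trained on a sample of observed file sizes, so that the heavy-tailed workload does not dump almost every request into the top band.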

Page 19:

Experimental Setup

[Diagram: the same testbed -- Apache Web server on the Linux O.S., connected via a switch to client machines running Linux with WAN emulators.]

Trace-based workload:
• Number of requests made: 1,000,000.
• Size of file requested: 41 B -- 2 MB.
• Distribution of file sizes requested has the heavy-tailed property.

Variants studied:
• Server: Flash, Apache.
• Clients: WAN EMU, geographically-dispersed clients.
• Uplink: 10 Mbps, 100 Mbps.
• Workload: Surge, trace-based; open system, partly-open.
• Load: load < 1, transient overload.
• Other effects: initial RTO; user abort/reload; persistent connections, etc.

Page 20:

Preliminary Comments

• Job throughput, byte throughput, and bandwidth utilization were the same under SRPT and FAIR scheduling.
• The same set of requests complete.
• No additional CPU overhead under SRPT scheduling. The network was the bottleneck in all experiments.

Page 21:

Results: Mean Response Time (LAN)

[Plot: mean response time (sec) vs. load, FAIR vs. SRPT.]

Page 22:

Mean Response Time vs. Size Percentile (LAN)

[Plot: mean response time (s) vs. percentile of request size, FAIR vs. SRPT, at load = 0.8.]

Page 23:

Transient Overload

Page 24:

Transient Overload - Baseline

[Plot: mean response time under transient overload, SRPT vs. FAIR.]

Page 25:

Transient overload: response time as a function of job size

[Plot: response time vs. job size under FAIR and SRPT -- small jobs win big! big jobs aren't hurt!]

WHY?

Page 26:

FACTORS

Baseline case, varying one factor at a time:
• WAN propagation delays: RTT 0 - 150 ms.
• WAN loss: 0 - 15%.
• WAN loss + delay: loss 0 - 15%, RTT 0 - 150 ms.
• Persistent connections: 0 - 10 requests/conn.
• Initial RTO value: RTO = 0.5 sec - 3 sec.
• SYN cookies: ON/OFF.
• User abort/reload: abort after 3 - 15 sec, with 2, 4, 6, 8 retries.
• Packet length: 536 - 1500 bytes.

Realistic scenario: RTT = 100 ms; loss = 5%; 5 requests/conn.; RTO = 3 sec; packet length = 1500 B; user aborts after 7 sec and retries up to 3 times.

Page 27:

Transient Overload - Realistic

[Plot: mean response time, FAIR vs. SRPT.]

Page 28:

More questions ...

Everything so far in the talk: STATIC web requests.
Current work (ICDE 04,05,06): DYNAMIC web requests.

With: Schroeder, McWherter, Wierman.

Page 29:

Online Shopping

[Diagram: clients 1-3 send "buy" requests over the Internet to a Web Server (e.g. Apache/Linux) backed by a Database (e.g. DB2, Oracle, PostgreSQL).]

• Dynamic responses take much longer -- ~10 sec.
• The database is the bottleneck.

Page 30:

Online Shopping

[Diagram: the same setup, but one client sends "$$$buy$$$" -- a high-value request -- while the others send ordinary "buy" requests.]

Goal: Prioritize requests.

Page 31:

Isn't the "prioritizing requests" problem already solved?

[Diagram: "$$$buy$$$" and "buy" requests flow from the Internet through the Web Server (e.g. Apache/Linux) to the Database (e.g. DB2, Oracle, PostgreSQL).]

No. Prior work is simulation or RTDBMS.

Page 32:

Which resource to prioritize?

[Diagram: "$$$buy$$$" (high-priority client) and "buy" (low-priority clients) requests flow through the Web Server (e.g. Apache/Linux) to the Database, whose internal resources are CPU(s), Disks, and Locks.]

Page 33:

Q: Which resource to prioritize?

[Diagram: the same as the previous slide, with the Locks resource highlighted.]

A: 2PL Lock Queues.

Page 34:

What is the bottleneck resource?

[Plot: breakdown of where transactions spend their time, fixing 10 warehouses with #clients = 10 x #warehouses.]

• IBM DB2 -- lock waiting time (yellow in the plot) is the bottleneck.
• Therefore, we need to schedule the lock queues to have impact.

Page 35:

Existing Lock Scheduling Policies

[Diagram: lock resources 1 and 2, each with a queue of High (H) and Low (L) priority transactions.]

• NP: Non-preemptive. Can't kick out the lock holder.
• NPinherit: NP + priority inheritance.
• Pabort: Preemptively abort. But suffer rollback cost + wasted work.

Page 36:

Results:

[Plots: response time (sec) vs. think time for High- and Low-priority transactions, under the non-preemptive policies and under the preemptive-abort policy.]

New idea: POW (Preempt-On-Wait). Preempt selectively: only preempt lock holders that are themselves waiting.
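POW's selective-preemption rule can be written as a tiny decision function. This is a sketch of the idea as stated on the slide; the function name and the priority convention (higher number = higher priority) are my own:

```python
def pow_decision(requester_prio, holder_prio, holder_is_waiting):
    """Preempt-on-Wait: what to do when a lock request hits a held lock."""
    if requester_prio <= holder_prio:
        return "queue"                 # no priority inversion: just wait in line
    if holder_is_waiting:
        # The holder is itself blocked on another lock: it was not making
        # progress anyway, so aborting it wastes little work.
        return "abort holder"
    # The holder is actively running: let it finish and avoid the
    # rollback cost, as a non-preemptive policy would.
    return "let holder finish"
```

The point is that POW pays the abort cost only where Pabort gains the most (blocked holders), and otherwise behaves like NPinherit.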

Page 37:

Results:

[Plot: response time (sec) vs. think time (sec) for Pabort and NPinherit -- POW: best of both.]

IBM/CMU patent.

Page 38:

External DBMS Scheduling

[Diagram: "$$$buy$$$" and "buy" requests arrive from the Internet at the Web Server; an external QoS scheduling box with High/Low priority queues sits in front of the DBMS (e.g. DB2, Oracle).]

Page 39:

Conclusion

Scheduling is a very cheap solution ...
• No need to buy new hardware.
• No need to buy more memory.
• Small software modifications.

... with a potentially very big win.

Thank you!