Concurrency Control and Recovery
Zachary G. Ives
University of Pennsylvania
CIS 650 – Implementing Data Management Systems
February 7, 2005
Some content on recovery courtesy Hellerstein, Ramakrishnan, Gehrke
Administrivia
No normal office hour this week (out of town)
Upcoming talks of interest:
- George Candea, Stanford, recovery-oriented computing, 2/24
- Mike Swift, U Wash., restartable OS device drivers, 3/1
- Andrew Whitaker, U Wash., paravirtualization, 3/15
- Sihem Amer-Yahia, AT&T Research, text queries over XML, 3/17
- Krishna Gummadi, U Wash., analysis of P2P systems, 3/29
- Muthian Sivathanu, U Wisc., speeding disk access, 3/31
Today’s Trivia Question
Recall the Fundamental Concepts of Updatable DBMSs
Transactions as atomic units of operation:
- always commit or abort
- can be terminated and restarted by the DBMS – an essential property
- typically logged, restartable, recoverable

ACID properties:
- Atomicity: transactions may abort (“rollback”) due to error or deadlock (Mohan+)
- Consistency: guarantee of consistency between transactions
- Isolation: guarantees serializability of schedules (Gray+, Kung & Robinson)
- Durability: guarantees recovery if the DBMS stops running (Mohan+)
Serializability and Concurrent Deposits
Deposit 1               Deposit 2
read(X.bal)             read(X.bal)
X.bal := X.bal + $50    X.bal := X.bal + $10
write(X.bal)            write(X.bal)
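The danger in these two concurrent deposits can be sketched in a few lines of Python (a minimal illustration, not from the lecture): if both transactions read X.bal before either writes, one update is silently lost.

```python
# Lost-update sketch: two deposits on the same account balance.

def serial(balance):
    # Deposit 1 runs to completion, then Deposit 2.
    x = balance          # Deposit 1: read(X.bal)
    balance = x + 50     # Deposit 1: write(X.bal)
    x = balance          # Deposit 2: read(X.bal)
    balance = x + 10     # Deposit 2: write(X.bal)
    return balance

def interleaved(balance):
    # Both transactions read before either writes.
    x1 = balance         # Deposit 1: read(X.bal)
    x2 = balance         # Deposit 2: read(X.bal)
    balance = x1 + 50    # Deposit 1: write(X.bal)
    balance = x2 + 10    # Deposit 2: write(X.bal) -- overwrites Deposit 1!
    return balance

print(serial(100))       # 160: both deposits applied
print(interleaved(100))  # 110: Deposit 1's $50 is lost
```

The interleaved schedule is not equivalent to either serial order, which is exactly what serializability rules out.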
Violations of Serializability
Dirty data: data written by an uncommitted transaction; a dirty read is a read of dirty data (WR conflict)
Unrepeatable read: a transaction reads the same data item twice and gets different values (RW conflict)
Phantom problem: a transaction retrieves a collection of tuples twice and sees different results
Two Approaches to Serializability (or other Consistency Models)
Locking – a “pessimistic” strategy
- First paper (Gray et al.): hierarchical locking, plus ways of compromising serializability for performance

Optimistic concurrency control
- Second paper (Kung & Robinson): allow writes by each session in parallel, then try to substitute them in (or reapply to merge)
Locking
A “lock manager” grants and releases locks on objects
Two basic types of locks:
- Shared locks (read locks): allow other shared locks to be granted on the same item
- Exclusive locks (write locks): do not coexist with any other locks

Generally granted under the two-phase locking (2PL) protocol:
- Growing phase: locks are granted
- Shrinking phase: locks are released (no new locks granted)
- Well-formed, two-phase locking guarantees serializability
- Strict 2PL: the shrinking phase is at the end of the transaction
(Note that deadlocks are possible!)
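A minimal, single-threaded sketch of an S/X lock table under strict 2PL (class and method names are my own, not from the papers): locks accumulate during the transaction and are all released together at commit or abort.

```python
# Toy S/X lock table with strict 2PL release-at-end semantics.

class LockManager:
    def __init__(self):
        # item -> {"mode": "S" or "X", "holders": set of txn ids}
        self.table = {}

    def acquire(self, txn, item, mode):
        entry = self.table.get(item)
        if entry is None:
            self.table[item] = {"mode": mode, "holders": {txn}}
            return True
        if entry["mode"] == "S" and mode == "S":
            entry["holders"].add(txn)        # shared locks coexist
            return True
        if entry["holders"] == {txn}:        # sole holder may upgrade
            entry["mode"] = "X" if "X" in (mode, entry["mode"]) else "S"
            return True
        return False                         # conflict: caller must wait

    def release_all(self, txn):
        # Strict 2PL: release everything only at end of transaction.
        for item in list(self.table):
            e = self.table[item]
            e["holders"].discard(txn)
            if not e["holders"]:
                del self.table[item]

lm = LockManager()
assert lm.acquire("T1", "row7", "S")
assert lm.acquire("T2", "row7", "S")      # S + S are compatible
assert not lm.acquire("T2", "row7", "X")  # blocked while T1 holds S
lm.release_all("T1")
assert lm.acquire("T2", "row7", "X")      # upgrade succeeds once T1 is done
```

A real lock manager would queue blocked requests and detect deadlocks; returning False here stands in for "wait."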
Gray et al.: Granularity of Locks
For performance and concurrency, we want different levels of lock granularity, i.e., a hierarchy of locks:
  database → extent → table → page → row → attribute

But a problem arises:
- What if T1 S-locks a row and T2 wants to X-lock a table?
- How do we easily check whether we should grant the lock to T2?
Intention Locks
Two basic types:
- Intention to Share (IS): a descendant item will be locked with a shared lock
- Intention Exclusive (IX): a descendant item will be locked with an exclusive lock
Locks are granted top-down, released bottom-up

Example:
- T1 grabs IS locks on the table and page, then an S lock on the row
- T2 can’t get an X lock on the table until T1 is done
- But T3 can get an IS or S lock on the table
Lock Compatibility Matrix
      IS   IX   S    SIX  X
IS    Y    Y    Y    Y    N
IX    Y    Y    N    N    N
S     Y    N    Y    N    N
SIX   Y    N    N    N    N
X     N    N    N    N    N
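The matrix above is easy to encode directly; the sketch below (my own encoding) also shows the grant rule: a request is compatible only if it is compatible with every lock currently held on the item.

```python
# Lock compatibility matrix from the slide, row = requested, col = held.

COMPAT = {
    "IS":  {"IS": True,  "IX": True,  "S": True,  "SIX": True,  "X": False},
    "IX":  {"IS": True,  "IX": True,  "S": False, "SIX": False, "X": False},
    "S":   {"IS": True,  "IX": False, "S": True,  "SIX": False, "X": False},
    "SIX": {"IS": True,  "IX": False, "S": False, "SIX": False, "X": False},
    "X":   {"IS": False, "IX": False, "S": False, "SIX": False, "X": False},
}

def can_grant(requested, held_modes):
    # Grant only if the request is compatible with *every* held lock.
    return all(COMPAT[requested][h] for h in held_modes)

# T1 holds IS on the table (it S-locks a row below); T2 wants X on the table.
assert not can_grant("X", ["IS"])
# But T3's IS or S request on the same table is fine.
assert can_grant("IS", ["IS"]) and can_grant("S", ["IS"])
```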
Lock Implementation
Maintain the lock table as a hash table keyed on the items to lock
Lock/unlock are atomic operations, performed in critical sections
A first-come, first-served queue for each locked object:
- All adjacent, compatible requests form a compatible group
- The group’s mode is the most restrictive of its members

What if a transaction wants to convert (upgrade) its lock?
- Should we send it to the back of the queue? No – that will almost assuredly deadlock!
- Handle conversions immediately after the current group
Degrees of Consistency
Full locking, guaranteeing serializability, is generally very expensive
So Gray et al. propose several degrees of consistency as a compromise (these are roughly the SQL isolation levels):
- Degree 0: T doesn’t overwrite the dirty data of other transactions
- Degree 1: the above, plus T does not commit any writes before EOT
- Degree 2: the above, plus T doesn’t read dirty data
- Degree 3: the above, plus other transactions don’t dirty any data T read
Degrees and Locking
Degree 0: short write locks on updated items
Degree 1: long write locks on updated items
Degree 2: long write locks on updated items, short read locks on read items
Degree 3: long write and read locks
Does Degree 3 prevent phantoms? If not, how do we fix this?
What If We Don’t Want to Lock?
Conflicts may be very uncommon – so why incur the overhead of locking?
- Typically hundreds of instructions for every lock/unlock
- Examples: read-mostly DBs; large DBs with few collisions; append-mostly workloads; hierarchical data

Kung & Robinson – break each transaction into three phases:
- Read – and write to a private copy of each page (i.e., copy-on-write)
- Validation – make sure there are no conflicts between transactions
- Write – swap the private copies in for the public ones
Validation
Goal: guarantee that only serializable schedules result from merging Ti’s and Tj’s writes
Approach: find an equivalent serial schedule:
- Assign each transaction a transaction number (TN)
- If TN(Ti) < TN(Tj), then we must satisfy one of:
  1. Ti finishes writing before Tj starts reading (serial)
  2. WS(Ti) disjoint from RS(Tj), and Ti finishes writing before Tj starts writing
  3. WS(Ti) disjoint from RS(Tj), WS(Ti) disjoint from WS(Tj), and Ti finishes its read phase before Tj completes its read phase
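The three conditions can be written as a straightforward predicate over the read/write sets, as in this sketch (the function signature and flag names are my own; the ordering facts are passed in as booleans rather than derived from timestamps).

```python
# Kung & Robinson validation test for a pair with TN(Ti) < TN(Tj).

def validate(ws_i, rs_j, ws_j,
             i_writes_before_j_reads,
             i_writes_before_j_writes,
             i_read_phase_before_j_read_ends):
    # Condition 1: fully serial execution.
    if i_writes_before_j_reads:
        return True
    # Condition 2: no W-R overlap, and Ti's writes precede Tj's writes.
    if not (ws_i & rs_j) and i_writes_before_j_writes:
        return True
    # Condition 3: no W-R and no W-W overlap, and Ti finishes its read
    # phase before Tj completes its read phase.
    if not (ws_i & rs_j) and not (ws_i & ws_j) and i_read_phase_before_j_read_ends:
        return True
    return False

# Overlapping write sets with fully concurrent writes: must abort.
assert not validate({"P1"}, {"P2"}, {"P1"}, False, False, True)
# Disjoint W-R and W-W sets with ordered read phases: condition 3 holds.
assert validate({"P1"}, {"P2"}, {"P3"}, False, False, True)
```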
Why Does This Work?
Condition 1 – obvious, since execution is serial
Condition 2:
- No W-R conflicts, since WS(Ti) and RS(Tj) are disjoint
- In all R-W conflicts, Ti precedes Tj, since Ti reads before it writes (and that’s before Tj writes)
- In all W-W conflicts, Ti precedes Tj
Condition 3:
- No W-R conflicts, since the sets are disjoint
- No W-W conflicts, since the sets are disjoint
- In all R-W conflicts, Ti precedes Tj, since Ti reads before it writes (and that’s before Tj writes)
The Achilles Heel
How do we assign TNs?
- Not optimistically – they are assigned at the end of the read phase
Note that we need to maintain all of the read and write sets for transactions that run concurrently – long-lived read phases cause difficulty here
- Solution: bound the buffer; abort and restart transactions when out of space
- Drawback: starvation – may need to be resolved by locking the whole DB!
Serial Validation
Simple case: writes won’t be interleaved, so test only:
1. Ti finishes writing before Tj starts reading (serial)
2. WS(Ti) disjoint from RS(Tj), and Ti finishes writing before Tj starts writing

Put in a critical section: get a TN; test conditions 1 and 2 against everyone up to that TN; write

A long critical section limits the parallelism of validation, so we can optimize:
- Outside the critical section, get a TN and validate up to there
- Before the write, in the critical section, get a new TN, validate up to that, then write
- Read-only transactions: no need for a TN – just validate up to the highest TN at the end of the read phase (no critical section)
Parallel Validation
For allowing interleaved writes
Track the set of active transactions (finished reading, not yet finished writing); abort if your read/write sets intersect theirs
Validate:
- CRIT: get a TN; copy the active set; add self to the active set
- Check conditions (1) and (2) against everything from start to finish
- Check condition (3) against the entire active set
- If OK, write
- CRIT: increment the TN counter; remove self from the active set
Drawback: might conflict in condition (3) with a transaction that itself gets aborted
Who’s the Top Dog? Optimistic vs. Non-Optimistic

Drawbacks of the optimistic approach:
- Generally requires some sort of global state, e.g., the TN counter
- If there’s a conflict, requires an abort and a full restart

Study by Agrawal et al. comparing optimistic vs. locking:
- Need load control with low resources
- Locking is better with moderate resources
- Optimistic is better with infinite or very high resources

Both of these provide isolation; transactions and policies ensure consistency – but what about atomicity and durability?
Rollback and Recovery
The Recovery Manager provides:
- Atomicity: transactions may abort (“rollback”) to the start or to a “savepoint”
- Durability: what if the DBMS stops running? (Causes?)

Desired behavior after the system restarts:
- T1, T2 & T3 should be durable
- T4 & T5 should be aborted (effects not seen)

[Figure: timeline of transactions T1–T5 relative to the crash point – T1, T2, T3 commit before the crash; T4 and T5 are still active when it occurs]
Assumptions in Recovery Schemes

We’re using concurrency control via locks:
- Strict 2PL, at least at the page level, possibly at the record level
Updates are happening “in place”:
- No shadow pages: data is overwritten on (deleted from) the disk

ARIES: Algorithms for Recovery and Isolation Exploiting Semantics
- Attempts to provide a simple, systematic scheme to guarantee atomicity & durability with good performance

Let’s begin with some of the issues faced by any DBMS recovery scheme…
Managing Pages in the Buffer Pool

The buffer pool is finite, so…
Q: How do we guarantee durability of committed data?
A: We need a policy on what happens when a transaction completes, and on what transactions can do to get more pages

Force writes of buffer pages to disk at commit?
- Provides durability, but poor response time
Steal buffer-pool frames from uncommitted Xacts?
- If not, poor throughput
- If so, how can we ensure atomicity?

            No Steal    Steal
Force       Trivial
No Force                Desired
More on Steal and Force
STEAL (why enforcing Atomicity is hard):
- To steal frame F: the current page in F (say P) is written to disk, while some Xact still holds a lock on P
- What if the Xact with the lock on P aborts?
- Must remember the old value of P at steal time (to support UNDOing the write to page P)

NO FORCE (why enforcing Durability is hard):
- What if the system crashes before a modified page is written to disk?
- Write as little as possible, in a convenient place, at commit time, to support REDOing modifications
Basic Idea: Logging
Record REDO and UNDO information, for every update, in a log:
- Sequential writes to the log (put it on a separate disk)
- Minimal info (the diff) is written to the log, so multiple updates fit in a single log page

Log: an ordered list of REDO/UNDO actions
Each log record contains:
  <XID, pageID, offset, length, old data, new data>
plus additional control info (which we’ll see soon)
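The record layout above maps naturally onto a small data structure; this sketch (field names follow the slide, the rest is my own) also shows how LSNs fall out of append order on an in-memory log tail.

```python
# An update log record and an append-only in-memory log tail.

from dataclasses import dataclass

@dataclass
class UpdateLogRecord:
    xid: int          # transaction id
    page_id: int
    offset: int
    length: int
    old_data: bytes   # before-image, used by UNDO
    new_data: bytes   # after-image, used by REDO

log = []  # the log tail; records receive LSNs in append order

def log_update(rec):
    log.append(rec)
    return len(log) - 1   # this record's LSN

lsn = log_update(UpdateLogRecord(xid=1, page_id=5, offset=0,
                                 length=3, old_data=b"abc", new_data=b"xyz"))
assert lsn == 0 and log[lsn].old_data == b"abc"
```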
Write-Ahead Logging (WAL)
The Write-Ahead Logging protocol:
1. Force the log record for an update before the corresponding data page gets to disk
   - Guarantees Atomicity
2. Write all log records for a Xact before commit
   - Guarantees Durability (can always rebuild from the log)

Is there a systematic way of doing logging (and recovery!)? The ARIES family of algorithms
WAL & the Log
Each log record has a unique Log Sequence Number (LSN):
- LSNs always increase
Each data page contains a pageLSN:
- The LSN of the most recent log record for an update to that page
The system keeps track of flushedLSN:
- The max LSN flushed so far

WAL rule: before a page is written to disk, pageLSN ≤ flushedLSN

[Figure: log records flushed to disk vs. the “log tail” still in RAM; data pages in the DB each carry a pageLSN, and RAM tracks flushedLSN]
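The WAL rule is a one-line comparison once the bookkeeping is in place; this sketch (class and names are my own) shows a buffer page being held back until its log records reach disk.

```python
# WAL check: a page may be flushed only if pageLSN <= flushedLSN.

class Log:
    def __init__(self):
        self.records = []
        self.flushed_lsn = -1          # nothing on disk yet

    def append(self, rec):
        self.records.append(rec)
        return len(self.records) - 1   # the new record's LSN

    def flush(self, up_to_lsn):
        # Simulate forcing the log tail to disk through this LSN.
        self.flushed_lsn = max(self.flushed_lsn, up_to_lsn)

def can_flush_page(page_lsn, log):
    return page_lsn <= log.flushed_lsn

log = Log()
lsn = log.append(("T1", "P5", "update"))
page_lsn = lsn                            # page P5 now carries this pageLSN
assert not can_flush_page(page_lsn, log)  # its log record is not on disk yet
log.flush(lsn)
assert can_flush_page(page_lsn, log)      # WAL rule now satisfied
```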
Log Records
Possible log record types:
- Update
- Commit
- Abort
- End (signifies the end of commit or abort)
- Compensation Log Records (CLRs): to log UNDO actions

LogRecord fields:
- prevLSN, XID, type
- For update records only: length, pageID, offset, before-image, after-image
Other Log-Related State
Transaction Table:
- One entry per active Xact
- Contains XID, status (running/committed/aborted), and lastLSN

Dirty Page Table:
- One entry per dirty page in the buffer pool
- Contains recLSN – the LSN of the log record which first caused the page to be dirty
Normal Execution of an Xact
A series of reads & writes, followed by commit or abort
- We will assume that each write is atomic on disk
- In practice, additional details are needed to deal with non-atomic writes
Strict 2PL
STEAL, NO-FORCE buffer management, with Write-Ahead Logging
Checkpointing
Periodically, the DBMS creates a checkpoint:
- Minimizes recovery time in the event of a system crash
- Writes to the log:
  - begin_checkpoint record: when the checkpoint began
  - end_checkpoint record: the current Xact table and dirty page table

A “fuzzy checkpoint”:
- Other Xacts continue to run, so these tables are accurate only as of the time of the begin_checkpoint record
- No attempt to force dirty pages to disk; the effectiveness of a checkpoint is limited by the oldest unwritten change to a dirty page (so it’s a good idea to periodically flush dirty pages to disk!)

Store the LSN of the checkpoint record in a safe place (the master record)
The Big Picture: What’s Stored Where
[Figure: RAM holds the Xact table (lastLSN, status), the Dirty Page Table (recLSN), flushedLSN, and the log tail; the DB holds the data pages (each with a pageLSN), the LOG (records with prevLSN, XID, type, length, pageID, offset, before- and after-images), and the master record]
Simple Transaction Abort
For now, consider an explicit abort of a Xact – no crash involved

We want to “play back” the log in reverse order, UNDOing updates:
- Get the lastLSN of the Xact from the Xact table
- Follow the chain of log records backward via the prevLSN field
- Before starting the UNDO, write an Abort log record
  - For recovering from a crash during the UNDO!
Abort, cont.
To perform UNDO, we must have a lock on the data!
- No problem – no one else can be locking it

Before restoring the old value of a page, write a CLR:
- You continue logging while you UNDO!!
- A CLR has one extra field: undoNextLSN
  - Points to the next LSN to undo (i.e., the prevLSN of the record we’re currently undoing)
- CLRs are never undone (but they might be redone when repeating history: this guarantees Atomicity!)

At the end of the UNDO, write an “end” log record.
Transaction Commit
Write a commit record to the log
All log records up to the Xact’s lastLSN are flushed:
- Guarantees that flushedLSN ≥ lastLSN
- Note that log flushes are sequential, synchronous writes to disk
- Many log records fit per log page
Commit() returns
Write an end record to the log
Crash Recovery: Big Picture
Start from a checkpoint (found via the master record)

Three phases:
1. Figure out which Xacts committed since the checkpoint, and which failed (Analysis)
2. REDO all actions (repeat history)
3. UNDO the effects of failed Xacts

[Figure: on the log timeline, Analysis (A) runs forward from the last checkpoint to the crash; REDO (R) runs forward from the smallest recLSN in the dirty page table after Analysis; UNDO (U) runs backward to the oldest log record of any Xact active at the crash]
Recovery: The Analysis Phase

Reconstruct state as of the checkpoint, via the end_checkpoint record
Scan the log forward from the checkpoint:
- End record: remove the Xact from the Xact table
- Other records: add the Xact to the Xact table, set lastLSN = LSN, and change the Xact’s status on commit
- Update record: if P is not in the Dirty Page Table, add P to the D.P.T. and set its recLSN = LSN
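The forward scan above can be sketched as a short loop (the record encoding is my own simplification): it rebuilds the Xact table and Dirty Page Table from the log records after the checkpoint.

```python
# ARIES-style Analysis scan over (lsn, record) pairs after the checkpoint.

def analysis(log_from_checkpoint):
    xact_table = {}    # xid -> {"lastLSN": lsn, "status": ...}
    dpt = {}           # page_id -> recLSN
    for lsn, rec in log_from_checkpoint:
        xid, kind = rec["xid"], rec["type"]
        if kind == "end":
            xact_table.pop(xid, None)      # Xact fully finished
        else:
            entry = xact_table.setdefault(xid, {"status": "running"})
            entry["lastLSN"] = lsn
            if kind == "commit":
                entry["status"] = "committed"
            if kind == "update" and rec["page"] not in dpt:
                dpt[rec["page"]] = lsn     # first record to dirty this page
    return xact_table, dpt

log = [(10, {"xid": 1, "type": "update", "page": "P5"}),
       (20, {"xid": 2, "type": "update", "page": "P3"}),
       (30, {"xid": 1, "type": "commit"}),
       (40, {"xid": 1, "type": "end"})]
xt, dpt = analysis(log)
assert 1 not in xt and xt[2]["lastLSN"] == 20   # only T2 is a loser
assert dpt == {"P5": 10, "P3": 20}
```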
Recovery: The REDO Phase

We repeat history to reconstruct the state at the crash:
- Reapply all updates (even of aborted Xacts!), and redo CLRs

Scan forward from the log record containing the smallest recLSN in the D.P.T. For each CLR or update log record with sequence number LSN, REDO the action unless:
- The affected page is not in the Dirty Page Table, or
- The affected page is in the D.P.T., but has recLSN > LSN, or
- pageLSN (in the DB) ≥ LSN

To REDO an action:
- Reapply the logged action
- Set pageLSN to LSN. No additional logging!
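The three skip tests can be combined into one predicate, sketched below (the function signature is my own): an action at `lsn` touching `page` is redone only if none of the tests shows its effect is already on disk.

```python
# REDO decision: apply the logged action unless one of the three
# skip conditions from the slide holds.

def must_redo(lsn, page, dpt, page_lsn_on_disk):
    if page not in dpt:                  # page was flushed before the crash
        return False
    if dpt[page] > lsn:                  # page dirtied only after this record
        return False
    if page_lsn_on_disk.get(page, -1) >= lsn:
        return False                     # update already on the page
    return True

dpt = {"P5": 10}
assert must_redo(10, "P5", dpt, {"P5": 5})       # stale page: redo
assert not must_redo(10, "P5", dpt, {"P5": 10})  # pageLSN >= LSN: skip
assert not must_redo(10, "P3", dpt, {})          # page not dirty: skip
```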
Recovery: The UNDO Phase
ToUndo = { l | l is the lastLSN of a “loser” Xact }

Repeat:
- Choose the largest LSN among ToUndo
- If this LSN is a CLR and undoNextLSN == NULL:
  - Write an End record for this Xact
- If this LSN is a CLR and undoNextLSN != NULL:
  - Add undoNextLSN to ToUndo (Q: what happens to the other CLRs?)
- Else this LSN is an update: undo the update, write a CLR, and add prevLSN to ToUndo
Until ToUndo is empty
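The loop above can be sketched directly (record encoding and output format are my own simplifications; a real implementation would also write the CLRs to the log and restore before-images). Here an End record is emitted once a loser's earliest update has been undone.

```python
# ARIES-style UNDO loop: repeatedly undo the largest remaining LSN of
# any loser transaction, emitting CLRs (and End records) as we go.

def undo(log, losers):
    # log: lsn -> record; losers: {xid: lastLSN}
    to_undo = set(losers.values())
    out = []                       # CLRs / End records we would write
    while to_undo:
        lsn = max(to_undo)         # always undo the largest LSN first
        to_undo.discard(lsn)
        rec = log[lsn]
        if rec["type"] == "CLR":
            if rec["undoNextLSN"] is None:
                out.append(("end", rec["xid"]))
            else:
                to_undo.add(rec["undoNextLSN"])
        else:                      # an update record: undo it, log a CLR
            out.append(("CLR", rec["xid"], lsn))
            if rec["prevLSN"] is not None:
                to_undo.add(rec["prevLSN"])
            else:
                out.append(("end", rec["xid"]))   # nothing left to undo
    return out

log = {10: {"type": "update", "xid": 2, "prevLSN": None},
       20: {"type": "update", "xid": 2, "prevLSN": 10}}
# T2 is a loser with lastLSN 20: undo 20, then 10, then end T2.
assert undo(log, {2: 20}) == [("CLR", 2, 20), ("CLR", 2, 10), ("end", 2)]
```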
Example of Recovery
LSN  LOG
00   begin_checkpoint
05   end_checkpoint
10   update: T1 writes P5
20   update: T2 writes P3
30   T1 abort
40   CLR: Undo T1 LSN 10
45   T1 End
50   update: T3 writes P1
60   update: T2 writes P5
     CRASH, RESTART

(The RAM state – the Xact table with lastLSN/status, the Dirty Page Table with recLSNs, flushedLSN, and the ToUndo set – is rebuilt during restart; each Xact’s records are chained backward via prevLSN pointers.)
Example: Crash During Restart
LSN    LOG
00,05  begin_checkpoint, end_checkpoint
10     update: T1 writes P5
20     update: T2 writes P3
30     T1 abort
40,45  CLR: Undo T1 LSN 10; T1 End
50     update: T3 writes P1
60     update: T2 writes P5
       CRASH, RESTART
70     CLR: Undo T2 LSN 60
80,85  CLR: Undo T3 LSN 50; T3 End
       CRASH, RESTART
90     CLR: Undo T2 LSN 20; T2 End

(Again, the RAM state – Xact table, Dirty Page Table, flushedLSN, ToUndo – is rebuilt on each restart; the CLRs’ undoNextLSN pointers let the second restart pick up the UNDO where the first left off.)
Additional Crash Issues
What happens if the system crashes during Analysis? During REDO?

How do you limit the amount of work in REDO?
- Flush dirty pages asynchronously in the background. Watch “hot spots”!
How do you limit the amount of work in UNDO?
- Avoid long-running Xacts.
Summary of Logging/Recovery
Recovery Manager guarantees Atomicity & Durability
Use WAL to allow STEAL/NO-FORCE w/o sacrificing correctness
LSNs identify log records; linked into backwards chains per transaction (via prevLSN)
pageLSN allows comparison of data page and log records
Summary, Continued
Checkpointing: a quick way to limit the amount of log to scan on recovery.

Recovery works in 3 phases:
- Analysis: forward from the checkpoint
- Redo: forward from the oldest recLSN
- Undo: backward from the end to the first LSN of the oldest Xact alive at the crash

Upon Undo, write CLRs.
Redo “repeats history”: this simplifies the logic!