2. Transactions
• Concurrent execution of user programs is
essential for good DBMS performance.
– Because disk accesses are frequent and relatively
slow, it is important to keep the CPU humming by
working on several user programs concurrently.
• A user’s program may carry out many
operations on the data retrieved from the
database, but the DBMS is only concerned
about what data is read/written from/to the
database.
• A transaction is the DBMS’s abstract view of
a user program: a sequence of reads and
writes.
3. Structure of a DBMS
[Figure: layered DBMS architecture, top to bottom:
Query Optimization and Execution,
Relational Operators,
Files and Access Methods,
Buffer Management,
Disk Space Management,
DB.
These layers must consider concurrency
control and recovery (Transaction, Lock,
Recovery Managers).]
4. Concurrency Control &
Recovery
• Concurrency Control
– Provide correct and highly available data
access in the presence of concurrent
access by many users
• Recovery
– Ensures database is fault tolerant, and
not corrupted by software, system or
media failure
– 24x7 access to mission critical data
• A boon to application authors!
– Existence of CC&R allows applications to be
written without explicit concern for
concurrency and fault tolerance
5. Transactions and Concurrent
Execution
• Transaction (“xact”): DBMS’s abstract view of
a user program (or activity):
– A sequence of reads and writes of database objects.
– Unit of work that must commit or abort as an atomic
unit
• Transaction Manager controls the execution of
transactions.
• User’s program logic is invisible to DBMS!
– Arbitrary computation possible on data fetched from
the DB
– The DBMS only sees data read/written from/to the DB.
• Challenge: provide atomic transactions to
concurrent users!
– Given only the read/write interface.
6. Concurrency in a DBMS
• Users submit transactions, and can think
of each transaction as executing by
itself.
– Concurrency is achieved by the DBMS, which
interleaves actions (reads/writes of DB
objects) of various transactions.
– Each transaction must leave the database in
a consistent state if the DB is consistent
when the transaction begins.
• DBMS will enforce some ICs, depending on the ICs
declared in CREATE TABLE statements.
• Beyond this, the DBMS does not really understand
the semantics of the data. (e.g., it does not
understand how the interest on a bank account is
computed).
7. Atomicity of
Transactions
• A transaction might commit after
completing all its actions, or it could
abort (or be aborted by the DBMS) after
executing some actions.
• A very important property guaranteed by
the DBMS for all transactions is that
they are atomic. That is, a user can
think of a Xact as always executing all
its actions in one step, or not
executing any actions at all.
– DBMS logs all actions so that it can undo
the actions of aborted transactions.
9. ACID properties of Transaction
Executions
• Atomicity: All actions in the Xact happen, or
none happen.
• Consistency: If each Xact is consistent, and
the DB starts consistent, it ends up consistent.
• Isolation: Execution of one Xact is isolated
from that of other Xacts.
• Durability: If a Xact commits, its effects
persist.
10. Atomicity and Durability
• A transaction ends in one of two ways:
– commit after completing all its actions
• “commit” is a contract with the caller of the DB
– abort (or be aborted by the DBMS) after executing
some actions.
• Or system crash while the xact is in progress; treat as abort.
• Two important properties for a transaction:
– Atomicity : Either execute all its actions, or none
of them
– Durability : The effects of a committed xact must
survive failures.
• DBMS ensures the above by logging all actions:
– Undo the actions of aborted/failed transactions.
– Redo actions of committed transactions not yet
propagated to disk when system crashes.
A.C.I.D.
11. Transaction Consistency
• Transactions preserve DB consistency
– Given a consistent DB state, produce another
consistent DB state
• DB Consistency expressed as a set of
declarative Integrity Constraints
– CREATE TABLE/ASSERTION statements
• E.g. Each CS186 student can only register in one project
group. Each group must have 2 students.
– Application-level
• E.g. Bank account total of each customer must stay the
same during a “transfer” from savings to checking account
• Transactions that violate ICs are aborted
– That’s all the DBMS can automatically check!
A.C.I.D.
12. Isolation (Concurrency)
• DBMS interleaves actions of many xacts concurrently
– Actions = reads/writes of DB objects
• DBMS ensures xacts do not “step onto” one another.
• Each xact executes as if it were running by itself.
– Concurrent accesses have no effect on a Transaction’s
behavior
– Net effect must be identical to executing all
transactions for some serial order.
– Users & programmers think about transactions in isolation
• Without considering effects of other concurrent transactions!
A.C.I.D.
13. Example
• Consider two transactions (Xacts):
T1: BEGIN A=A+100, B=B-100 END
T2: BEGIN A=1.06*A, B=1.06*B END
• 1st xact transfers $100 from B’s account to A’s
• 2nd credits both accounts with 6% interest.
• Assume at first A and B each have $1000. What are the
legal outcomes of running T1 and T2?
• T1 ; T2 (A=1166,B=954)
• T2 ; T1 (A=1160,B=960)
• In either case, A+B = $2000 *1.06 = $2120
• There is no guarantee that T1 will execute before T2 or
vice-versa, if both are submitted together.
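The two legal serial outcomes can be replayed with a quick sketch (plain Python, not DBMS code; the account logic comes straight from the slide):

```python
# Sketch: replay the two legal serial orders of T1 and T2.

def t1(a, b):
    """Transfer $100 from B's account to A's."""
    return a + 100, b - 100

def t2(a, b):
    """Credit both accounts with 6% interest."""
    return 1.06 * a, 1.06 * b

a, b = t2(*t1(1000, 1000))                 # T1 ; T2
print(round(a), round(b), round(a + b))    # 1166 954 2120

a, b = t1(*t2(1000, 1000))                 # T2 ; T1
print(round(a), round(b), round(a + b))    # 1160 960 2120
```

Either order preserves the invariant A+B = $2120; only the split between the two accounts differs.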
14. Example (Contd.)
• Consider a possible interleaved schedule:
T1: A=A+100,            B=B-100
T2:           A=1.06*A,           B=1.06*B
This is OK (same as T1;T2). But what about:
T1: A=A+100,                       B=B-100
T2:           A=1.06*A, B=1.06*B
• Result: A=1166, B=960; A+B = 2126, bank loses
$6 !
• The DBMS’s view of the second schedule:
T1: R(A), W(A),                          R(B), W(B)
T2:              R(A), W(A), R(B), W(B)
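The losing interleaving can be replayed directly (a sketch; the step order follows the bad schedule above):

```python
# Sketch: the interleaving T1:A, then all of T2, then T1:B,
# starting from A = B = 1000 as in the slide.

a = b = 1000
a = a + 100       # T1: A = A + 100
a = 1.06 * a      # T2: A = 1.06 * A
b = 1.06 * b      # T2: B = 1.06 * B  (T2 finishes before T1)
b = b - 100       # T1: B = B - 100
print(round(a), round(b), round(a + b))   # 1166 960 2126 -- the bank loses $6
```

T1's update to B happened after T2 computed interest on the old B, so B misses $6 of interest: the result matches no serial order.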
15. Scheduling Transactions:
Definitions
• Serial schedule: no concurrency
– Does not interleave the actions of different transactions.
• Equivalent schedules: same result on any DB state
– For any database state, the effect (on the set of objects in
the database) of executing the first schedule is identical
to the effect of executing the second schedule.
• Serializable schedule: equivalent to a serial
schedule
– A schedule that is equivalent to some serial execution of
the transactions.
(Note: If each transaction preserves consistency,
every serializable schedule preserves consistency. )
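A standard sufficient test for serializability, conflict-serializability via a precedence graph, is not presented in these slides, so treat this sketch as an aside: add an edge Ti -> Tj for every pair of conflicting actions (same object, different xacts, at least one write, Ti first) and check the graph for cycles.

```python
# Sketch: conflict-serializability test on a schedule of
# (xact, action, object) steps.

def precedence_edges(schedule):
    """Yield edges Ti -> Tj for each pair of conflicting actions
    (same object, different xacts, at least one write)."""
    for i, (t1, act1, obj1) in enumerate(schedule):
        for t2, act2, obj2 in schedule[i + 1:]:
            if t1 != t2 and obj1 == obj2 and "W" in (act1, act2):
                yield (t1, t2)

def is_conflict_serializable(schedule):
    edges = set(precedence_edges(schedule))
    nodes = {t for t, _, _ in schedule}
    # Repeatedly remove nodes with no incoming edge (topological sort);
    # the graph is acyclic iff every node can be removed.
    while nodes:
        sources = {n for n in nodes if not any(v == n for _, v in edges)}
        if not sources:
            return False  # cycle found: not conflict-serializable
        nodes -= sources
        edges = {(u, v) for u, v in edges if u in nodes and v in nodes}
    return True

# The "bank loses $6" schedule from the previous slide:
bad = [("T1","R","A"), ("T1","W","A"),
       ("T2","R","A"), ("T2","W","A"), ("T2","R","B"), ("T2","W","B"),
       ("T1","R","B"), ("T1","W","B")]
print(is_conflict_serializable(bad))  # False: T1->T2 on A, but T2->T1 on B
```

Any schedule whose precedence graph is acyclic is serializable; the converse does not hold in general (view serializability is broader), which is why this is only a sufficient test.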
16. Anomalies with Interleaved
Execution
• Reading Uncommitted Data (WR
Conflicts, “dirty reads”):
T1: R(A), W(A),              R(B), W(B), Abort
T2:              R(A), W(A), C
• Unrepeatable Reads (RW Conflicts):
T1: R(A),                    R(A), W(A), C
T2:       R(A), W(A), C
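The WR anomaly can be made concrete with the transfer/interest xacts from the earlier example (a sketch; the "restore the before image" step stands in for the DBMS's UNDO of T1):

```python
# Sketch: T2 reads T1's uncommitted ("dirty") write of A, then T1 aborts.

a = b = 1000
before_a = a      # before image the DBMS logs for T1's write
a = a + 100       # T1: W(A), not yet committed
a = 1.06 * a      # T2: R(A), W(A) -- reads the dirty value
b = 1.06 * b      # T2: R(B), W(B), then T2 commits
a = before_a      # T1 aborts: UNDO restores A's before image
print(a, round(b))   # 1000 1060 -- T2's committed interest on A is lost
```

A+B is now $2060 instead of the legal $2120: undoing T1 silently wiped out the committed work of T2, which is why dirty reads (or the resulting cascading aborts) must be prevented.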
18. Lock-Based Concurrency
Control
• A simple mechanism to allow concurrency but
avoid the anomalies just described…
• Two-phase Locking (2PL) Protocol:
– Always obtain a S (shared) lock on object before reading
– Always obtain an X (exclusive) lock on object before
writing.
– If an Xact holds an X lock on an object, no other Xact
can get a lock (S or X) on that object.
– DBMS internally enforces the above locking protocol
– Two phases: acquiring locks, and releasing them
• No lock is ever acquired after one has been released
• “Growing phase” followed by “shrinking phase”.
• Lock Manager tracks lock requests, grants locks on
database objects when they become available.
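A minimal sketch of the lock-table bookkeeping the Lock Manager performs (class and method names are illustrative, not a real DBMS API; waiting is modeled by simply returning False):

```python
# Sketch of an S/X lock table. Under 2PL a xact stops calling acquire()
# once it starts releasing; under Strict 2PL it releases everything
# only at commit/abort via release_all().

class LockManager:
    def __init__(self):
        self.table = {}   # object -> {"mode": "S" or "X", "holders": set}

    def acquire(self, xact, obj, mode):
        """Grant the lock if compatible, else return False (caller waits)."""
        entry = self.table.get(obj)
        if entry is None:
            self.table[obj] = {"mode": mode, "holders": {xact}}
            return True
        if entry["holders"] == {xact}:            # sole holder: re-grant/upgrade
            if mode == "X":
                entry["mode"] = "X"
            return True
        if mode == "S" and entry["mode"] == "S":  # S is compatible with S
            entry["holders"].add(xact)
            return True
        return False                              # conflict: must wait

    def release_all(self, xact):
        for obj in list(self.table):
            entry = self.table[obj]
            entry["holders"].discard(xact)
            if not entry["holders"]:
                del self.table[obj]

lm = LockManager()
assert lm.acquire("T1", "A", "S")
assert lm.acquire("T2", "A", "S")      # shared locks are compatible
assert not lm.acquire("T2", "A", "X")  # T2 cannot get X while T1 holds S
lm.release_all("T1")
assert lm.acquire("T2", "A", "X")      # after T1 releases, X is granted
```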
19. Aborting a Transaction
• If a transaction Ti is aborted, all its
actions have to be undone. Not only
that, if Tj reads an object last written
by Ti, Tj must be aborted as well!
• Most systems avoid such cascading aborts
by releasing a transaction’s locks only
at commit time.
– If Ti writes an object, Tj can read this
only after Ti commits.
• In order to undo the actions of an
aborted transaction, the DBMS maintains
a log in which every write is recorded.
This mechanism is also used to recover
from system crashes: all active Xacts at
the time of the crash are aborted when
the system comes back up.
20. Strict 2PL
• 2PL allows only serializable schedules
but is subject to cascading aborts.
• Example: rollback of T1 requires
rollback of T2!
• To avoid Cascading aborts, use Strict
2PL
• Strict Two-phase Locking (Strict 2PL)
Protocol:
– Same as 2PL, except:
– A transaction releases no locks until it
completes (commits or aborts)
T1: R(A), W(A),                          Abort
T2:              R(A), W(A), R(B), W(B)
21. Introduction to Crash
Recovery
• Recovery Manager
– Upon recovery from crash:
• Must bring DB to a consistent transactional state
– Ensures transaction Atomicity and Durability
– Undoes actions of transactions that do not
commit
– Redoes lost actions of committed transactions
• lost during system failures or media failures
• Recovery Manager maintains log
information during normal execution of
transactions for use during crash
recovery
22. The Log
• Log consists of “records” that are written
sequentially.
– Stored on a separate disk from the DB
– Typically chained together by Xact id
– Log is often duplexed and archived on stable storage.
• Log stores modifications to the database
– If Ti writes an object, write a log record with:
• the “before image”, if UNDO is required
• the “after image”, if REDO is required.
– Ti commits/aborts: a log record indicating this action.
• Need for UNDO/REDO depends on Buffer Mgr (!!)
– UNDO required if uncommitted data can overwrite stable
version of committed data (STEAL buffer management).
– REDO required if xact can commit before all its updates are
on disk (NO FORCE buffer management).
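A sketch of update logging with before/after images (the record field names are assumptions; the slides only say what each image is for):

```python
# Sketch: log records carry a before image (for UNDO under STEAL)
# and an after image (for REDO under NO FORCE).

log = []

def log_update(xact, obj, before, after):
    log.append({"type": "UPDATE", "xact": xact, "obj": obj,
                "before": before, "after": after})

def undo(db, xact):
    """Walk the log backwards, restoring this xact's before images."""
    for rec in reversed(log):
        if rec["type"] == "UPDATE" and rec["xact"] == xact:
            db[rec["obj"]] = rec["before"]

db = {"A": 1000, "B": 1000}
log_update("T1", "A", db["A"], db["A"] + 100); db["A"] += 100
log_update("T1", "B", db["B"], db["B"] - 100); db["B"] -= 100
undo(db, "T1")      # T1 aborts: before images restore A and B
print(db)           # {'A': 1000, 'B': 1000}
```

Redo is symmetric: scan forward and reapply after images of committed xacts whose updates never reached disk.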
23. Logging Continued
• Write Ahead Logging (WAL) protocol
– Log record must go to disk before the changed
page!
• implemented via a handshake between log
manager and the buffer manager.
– All log records for a transaction (including its
commit record) must be written to disk before
the transaction is considered “Committed”.
• All log related activities are handled
transparently by the DBMS.
– As was true of CC-related activities such as
lock/unlock, dealing with deadlocks, etc.
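The log-before-page handshake can be sketched with simple LSN bookkeeping (the LSN mechanics here are an assumption; the slides state only the ordering rule):

```python
# Sketch of the WAL rule: a dirty page may go to disk only after the
# log record that last changed it (its pageLSN) is on the stable log.

log_on_disk = []     # stable log
log_tail = []        # log records still in memory

def append_log(rec):
    log_tail.append(rec)
    return len(log_on_disk) + len(log_tail) - 1   # this record's LSN

def flush_log_through(lsn):
    while len(log_on_disk) <= lsn:
        log_on_disk.append(log_tail.pop(0))

def flush_page(page):
    """Buffer manager asks the log manager to flush first: the handshake."""
    flush_log_through(page["pageLSN"])
    page["on_disk"] = True

page = {"id": "P1", "pageLSN": None, "on_disk": False}
page["pageLSN"] = append_log({"xact": "T1", "page": "P1", "op": "W(A)"})
flush_page(page)
assert len(log_on_disk) == 1 and page["on_disk"]  # log record hit disk first
```

The same rule at commit time: flush the log through the xact's commit record before reporting "committed" to the caller.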
24. ARIES Recovery
• There are 3 phases in ARIES recovery protocol:
– Analysis: Scan the log forward (from the most recent
checkpoint) to identify all Xacts that were active,
and all dirty pages in the buffer pool at the time of
the crash.
– Redo: Redoes all updates to dirty pages in the buffer
pool, as needed, to ensure that all logged updates are
in fact carried out and written to disk.
– Undo: The writes of all Xacts that were active at
the crash are undone (by restoring the before value of
the update, as found in the log), working backwards in
the log.
• At the end --- all committed updates and only
those updates are reflected in the database.
• Some care must be taken to handle the case of a
crash occurring during the recovery process!
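A toy sketch of the three passes (real ARIES tracks LSNs, a dirty page table, and compensation log records, none of which is modeled here):

```python
# Sketch: Analysis / Redo / Undo over a log of UPDATE and COMMIT records.

def recover(log, db):
    # Analysis: xacts with no COMMIT/ABORT record were active at the crash.
    ended = {r["xact"] for r in log if r["type"] in ("COMMIT", "ABORT")}
    losers = {r["xact"] for r in log if r["type"] == "UPDATE"} - ended

    # Redo: repeat history -- reapply every logged update, winners and losers.
    for r in log:
        if r["type"] == "UPDATE":
            db[r["obj"]] = r["after"]

    # Undo: roll back losers by restoring before images, newest first.
    for r in reversed(log):
        if r["type"] == "UPDATE" and r["xact"] in losers:
            db[r["obj"]] = r["before"]
    return db

log = [{"type": "UPDATE", "xact": "T1", "obj": "A", "before": 1000, "after": 1100},
       {"type": "COMMIT", "xact": "T1"},
       {"type": "UPDATE", "xact": "T2", "obj": "B", "before": 1000, "after": 1060}]
print(recover(log, {}))   # {'A': 1100, 'B': 1000}: T1 redone, T2 undone
```

After recovery the database reflects exactly the committed updates, as the slide states: T1's transfer survives, T2's unfinished work is erased.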
25. Summary
• Concurrency control and recovery are among the
most important functions provided by a DBMS.
• Concurrency control (Isolation) is automatic.
– DBMS issues proper Two-Phase Locking (2PL) requests
– Enforces lock discipline (S & X)
– End result promised to be “serializable”: equivalent to
some serial schedule
• Atomicity and Durability ensured by Write-Ahead
Logging (WAL) and recovery protocol
– used to undo the actions of aborted transactions (no
subatomic stuff visible after recovery!)
– used to redo the lost actions of committed transactions
Editor's Notes
#1:The slides for this text are organized into several modules. Each lecture contains about enough material for a 1.25 hour class period. (The time estimate is very approximate--it will vary with the instructor, and lectures also differ in length; so use this as a rough guideline.) This covers Lecture 1A in Module (6); it is a 1-lecture overview of the material, and is an alternative to Lectures 1 through 4 in this module, which provide more detailed coverage. Note that the text contains enough material for an even more detailed treatment than Lectures 1 through 4, e.g., view serializability, B-tree CC, non-locking approaches.)
Module (1): Introduction (DBMS, Relational Model)
Module (2): Storage and File Organizations (Disks, Buffering, Indexes)
Module (3): Database Concepts (Relational Queries, DDL/ICs, Views and Security)
Module (4): Relational Implementation (Query Evaluation, Optimization)
Module (5): Database Design (ER Model, Normalization, Physical Design, Tuning)
Module (6): Transaction Processing (Concurrency Control, Recovery)
Module (7): Advanced Topics
#4:Allows the DBMS to have concurrent execution and recovery from system failure, esp. involving mission critical data.
Allows users to write applications without having to worry about concurrency control and recovery. Increases programmer productivity and allows new applications to be added more easily and safely to an existing system.
#5:While one transaction is waiting for a page to be read in from disk, CPU can process another transaction. Overlapping I/O and CPU activity reduces the amount of time disk and processors are idle and increases system throughput (useful work completed per unit time)
In serial execution, a short transaction can get stuck behind a long transaction. Interleaving the execution of a short transaction with a long transaction allows the short transaction to complete quickly.
A transaction is defined as any one execution of a user program in a DBMS, and differs from an execution of that program outside the DBMS.
#9:Transaction executions are said to respect the following 4 properties.
#10:
1) Users should not have to worry about the effect of incomplete transactions (say when a system crash occurs)
#12:Sees only the state of a database that could occur if the transaction were the only one running against the database and produce only the results that it could produce if it were running alone.
#13:Sum of balance of A and B should be the same regardless of whether T1/T2 commits/aborts.
#14:T1 followed by T2 …
T2 followed by T1 are both ok. In both cases, A+B is 2120
#15:Executing transactions serially in different orders may produce different results, but all are presumed to be acceptable.
#22:Duplexed: Store at two different disks perhaps at different locations.
Xact id: Log sequence number (LSN). Monotonically increasing number.