DBMS Storage and Indexing
198:541
Disk Storage
Disks and Files
 DBMS stores information on (“hard”) disks.
 This has major implications for DBMS design!
 READ: transfer data from disk to main memory
(RAM).
 WRITE: transfer data from RAM to disk.
 Both are high-cost operations, relative to in-memory operations, so must be planned carefully!
Why Not Store Everything in Main Memory?
 Costs too much.
 Main memory is volatile. We want data
to be saved between runs. (Obviously!)
 Typical storage hierarchy:
 Main memory (RAM) for currently used
data.
 Disk for the main database (secondary
storage).
 Tapes, DVD for archiving older versions
of the data (tertiary storage).
Disks
 Secondary storage device of choice.
 Main advantage over tapes: random access
vs. sequential.
 Data is stored and retrieved in units called
disk blocks or pages.
 Unlike RAM, time to retrieve a disk page
varies depending upon location on disk.
 Therefore, relative placement of pages on
disk has major impact on DBMS performance!
See textbook for in-depth discussion on
disk storage
 Physical storage of files to avoid high I/O delays
 Seek time and rotational delay dominate.
 Seek time varies from about 1 to 20 msec
 Rotational delay varies from 0 to 10 msec
 Transfer rate is about 1 msec per 4KB page
 Key to lower I/O cost: reduce seek/rotation
delays! Hardware vs. software solutions?
 RAID organization
 Reliability
 Redundancy
Buffer Management in a DBMS
 Data must be in RAM for DBMS to operate on it!
 Table of <frame#, pageid> pairs is maintained.
[Figure: buffer pool. Page requests from higher levels are served from frames in MAIN MEMORY (the BUFFER POOL); on a miss, a disk page is read from the DB on DISK into a free frame, with the choice of frame dictated by the replacement policy.]
Buffer Replacement Policy
 Frame is chosen for replacement by a
replacement policy:
 Least-recently-used (LRU), Clock, MRU etc.
 Policy can have big impact on # of I/O’s;
depends on the access pattern.
 Sequential flooding: Nasty situation caused by
LRU + repeated sequential scans.
 # buffer frames < # pages in file means each
page request causes an I/O. MRU much better in
this situation (but not in all situations, of course).
 DBMS buffer policy has specific requirements (e.g., pinning pages and forcing pages to disk), so a DBMS manages its own buffer pool rather than relying on OS virtual memory.
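To make the LRU policy above concrete, here is a minimal buffer-pool sketch in Python; the names (BufferPool, get_page) and the dict standing in for the disk are illustrative, not from any particular DBMS.

```python
from collections import OrderedDict

# Minimal sketch of an LRU buffer pool. Real buffer managers also track pin
# counts and dirty bits; those details are omitted here.
class BufferPool:
    def __init__(self, num_frames, disk):
        self.num_frames = num_frames
        self.disk = disk                      # page_id -> page contents
        self.frames = OrderedDict()           # page_id -> page, kept in LRU order
        self.num_ios = 0

    def get_page(self, page_id):
        if page_id in self.frames:            # hit: refresh recency
            self.frames.move_to_end(page_id)
            return self.frames[page_id]
        if len(self.frames) >= self.num_frames:
            self.frames.popitem(last=False)   # evict LRU page (write-back of dirty pages omitted)
        self.num_ios += 1                     # miss: one disk I/O
        page = self.disk[page_id]
        self.frames[page_id] = page
        return page

# Sequential flooding: repeatedly scanning 11 pages with a 10-frame pool makes
# every request a miss under LRU; MRU does much better on this access pattern.
disk = {i: f"page {i}" for i in range(11)}
pool = BufferPool(10, disk)
for _ in range(3):
    for pid in range(11):
        pool.get_page(pid)
print(pool.num_ios)                           # 33: every single request caused an I/O
```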
Record Organization
Record Formats: Fixed Length
 Information about field types same for all
records in a file; stored in system catalogs.
 Finding i’th field does not require scan of
record.
[Figure: fixed-length record with fields F1 F2 F3 F4 of lengths L1 L2 L3 L4, stored at base address B; the address of field F3 is B + L1 + L2.]
Record Formats: Variable Length
 Two alternative formats (# fields is fixed):
 Second offers direct access to i’th field, efficient storage
of nulls (special don’t know value); small directory overhead.
[Figure: two variable-length formats for a record with field count 4: fields F1..F4 delimited by special symbols ($), vs. an array of field offsets followed by the fields.]
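As an illustration of the second (offset-array) format, here is a hedged Python sketch; the 4-byte offsets and the helper names encode_record/read_field are assumptions for this example, not the textbook's exact byte layout.

```python
import struct

# Sketch of the offset-array format: an array of field offsets followed by the
# field bytes, so the i'th field is found without scanning and NULL can be
# stored as a zero-length field.
def encode_record(fields):                    # fields: list of bytes, None = NULL
    vals = [b"" if f is None else f for f in fields]
    header_len = 4 * (len(vals) + 1)          # n+1 offsets, 4 bytes each
    offsets, pos = [], header_len
    for v in vals:
        offsets.append(pos)
        pos += len(v)
    offsets.append(pos)                       # end offset of the last field
    return struct.pack(f"<{len(offsets)}i", *offsets) + b"".join(vals)

def read_field(record, i):
    start, end = struct.unpack_from("<2i", record, 4 * i)
    return record[start:end]

rec = encode_record([b"Smith", None, b"Sales"])
assert read_field(rec, 2) == b"Sales"         # direct access to the 3rd field
assert read_field(rec, 1) == b""              # the NULL field takes no space
```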
Page Formats: Fixed Length Records
 Record id = <page id, slot #>. In first
alternative, moving records for free space
management changes rid; may not be acceptable.
[Figure: two page layouts for fixed-length records. PACKED: slots 1..N stored contiguously, with the number of records (N) kept in the page footer. UNPACKED, BITMAP: slots 1..M with a bitmap marking which slots are occupied and the number of slots (M) kept in the footer.]
Page Formats: Variable Length Records
 Can move records on page without changing
rid; so, attractive for fixed-length records too.
[Figure: page i holding records with rids (i,1), (i,2), ..., (i,N). A SLOT DIRECTORY at the end of the page has one entry per slot (record offsets such as 20, 16, 24), the number of slots N, and a pointer to the start of free space.]
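The slot-directory idea can be sketched as follows; this Python toy (SlottedPage, with a plain list standing in for the directory and record area) is illustrative only and ignores the byte-level offsets, lengths, and free-space pointer shown in the figure.

```python
# Toy slotted page for variable-length records: rids stay stable because a
# delete only clears the directory entry; the slot number is never reused for
# a different rid's position in the directory.
class SlottedPage:
    def __init__(self, page_id):
        self.page_id = page_id
        self.slots = []                       # slot # -> record bytes, or None if freed

    def insert(self, data):
        for slot, rec in enumerate(self.slots):
            if rec is None:                   # reuse a freed slot
                self.slots[slot] = data
                return (self.page_id, slot)
        self.slots.append(data)
        return (self.page_id, len(self.slots) - 1)    # rid = <page id, slot #>

    def delete(self, rid):
        self.slots[rid[1]] = None             # slot stays, so other rids are unchanged

    def fetch(self, rid):
        return self.slots[rid[1]]

page = SlottedPage(7)
r1 = page.insert(b"a short record")
r2 = page.insert(b"a much longer variable-length record")
page.delete(r1)
assert page.fetch(r2) == b"a much longer variable-length record"   # r2 unaffected
```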
Files of Records
 Page or block is OK when doing I/O, but higher
levels of DBMS operate on records, and files of
records.
 FILE: A collection of pages, each containing a
collection of records. Must support:
 insert/delete/modify record
 read a particular record (specified using record
id)
 scan all records (possibly with some conditions
on the records to be retrieved)
File Organization
Alternative File Organizations
Many alternatives exist, each ideal for some situations,
and not so good in others:
 Heap (random order) files: Suitable when typical
access is a file scan retrieving all records.
 Sorted Files: Best if records must be retrieved in
some order, or only a `range’ of records is needed.
 Indexes: Data structures to organize records via trees
or hashing.
 Like sorted files, they speed up searches for a subset of
records, based on values in certain (“search key”) fields
 Updates are much faster than in sorted files.
Unordered (Heap) Files
 Simplest file structure contains records in no
particular order.
 As file grows and shrinks, disk pages are allocated
and de-allocated.
 To support record level operations, we must:
 keep track of the pages in a file
 keep track of free space on pages
 keep track of the records on a page
 There are many alternatives for keeping track of
this.
Heap File Implemented as a List
 The header page id and Heap file name must be
stored someplace.
 Each page contains 2 `pointers’ plus data.
[Figure: a header page links two doubly linked lists of data pages: the pages with free space and the full pages.]
Heap File Using a Page Directory
 The entry for a page can include the number of
free bytes on the page.
 The directory is a collection of pages; linked list
implementation is just one alternative.
 Much smaller than linked list of all HF pages!
[Figure: DIRECTORY: a set of header pages (possibly linked) whose entries point to Data Page 1, Data Page 2, ..., Data Page N.]
Index Structures
Indexes
 An index on a file speeds up selections on the
search key fields for the index.
 Any subset of the fields of a relation can be the
search key for an index on the relation.
 Search key is not the same as key (minimal set of
fields that uniquely identify a record in a relation).
 An index contains a collection of data entries, and
supports efficient retrieval of all data entries k*
with a given key value k.
 Given data entry k*, we can find record with key k in
at most one disk I/O. (Details soon …)
Alternatives for Data Entry k* in Index
 In a data entry k* we can store:
 Data record with key value k, or
 <k, rid of data record with search key value k>, or
 <k, list of rids of data records with search key k>
 Choice of alternative for data entries is orthogonal to
the indexing technique used to locate data entries
with a given key value k.
 Examples of indexing techniques: B+ trees, hash-based structures
 Typically, index contains auxiliary information that
directs searches to the desired data entries
Alternatives for Data Entries (Contd.)
 Alternative 1:
 If this is used, index structure is a file organization for
data records (instead of a Heap file or sorted file).
 At most one index on a given collection of data
records can use Alternative 1. (Otherwise, data
records are duplicated, leading to redundant storage
and potential inconsistency.)
 If data records are very large, # of pages containing
data entries is high. Implies size of auxiliary
information in the index is also large, typically.
Alternatives for Data Entries (Contd.)
 Alternatives 2 and 3:
 Data entries typically much smaller than data records.
So, better than Alternative 1 with large data records,
especially if search keys are small. (Portion of index
structure used to direct search, which depends on size
of data entries, is much smaller than with Alternative
1.)
 Alternative 3 more compact than Alternative 2, but
leads to variable sized data entries even if search keys
are of fixed length.
B+ Tree Indexes
 Leaf pages contain data entries, and are chained (prev & next)
 Non-leaf pages have index entries; only used to direct searches:
[Figure: an index entry in a non-leaf page has the form <P0, K1, P1, K2, P2, ..., Km, Pm>; non-leaf pages only direct searches, while the leaf pages (sorted by search key) hold the data entries.]
Example B+ Tree
 Find 28*? 29*? All > 15* and < 30*
 Insert/delete: Find data entry in leaf, then change
it. Need to adjust parent sometimes.
 And the change sometimes bubbles up the tree
[Figure: example B+ tree. Root key 17; entries < 17 go to the left child (keys 5, 13) and entries >= 17 to the right child (keys 27, 30); the leaves hold 2* 3* | 5* 7* 8* | 14* 16* | 22* 24* | 27* 29* | 33* 34* 38* 39*. Note how data entries in the leaf level are sorted.]
Hash-Based Indexes
 Good for equality selections.
 Index is a collection of buckets.
 Bucket = primary page plus zero or more overflow
pages.
 Buckets contain data entries.
 Hashing function h: h(r) = bucket in which (data
entry for) record r belongs. h looks at the search
key fields of r.
 No need for “index entries” in this scheme.
Index Classification
 Primary vs. secondary: If search key contains primary
key, then called primary index.
 Unique index: Search key contains a candidate key.
 Clustered vs. unclustered: If order of data records is the
same as, or `close to’, order of data entries, then called
clustered index.
 Alternative 1 implies clustered; in practice, clustered also
implies Alternative 1 (since sorted files are rare).
 A file can be clustered on at most one search key.
 Cost of retrieving data records through index varies greatly
based on whether index is clustered or not!
Clustered vs. Unclustered Index
 Suppose that Alternative (2) is used for data entries, and that
the data records are stored in a Heap file.
 To build clustered index, first sort the Heap file (with some free
space on each page for future inserts).
 Overflow pages may be needed for inserts. (Thus, order of data
recs is `close to’, but not identical to, the sort order.)
[Figure: CLUSTERED vs. UNCLUSTERED. Index entries direct the search for data entries (the index file); data entries point to data records (the data file). In the clustered case the data records are stored in (roughly) the same order as the data entries; in the unclustered case they are not.]
Comparing Storage Techniques
Cost Model for Our Analysis
We ignore CPU costs, for simplicity:
 B: The number of data pages
 R: Number of records per page
 D: (Average) time to read or write disk page
 Measuring number of page I/O’s ignores gains of pre-fetching a sequence of pages; thus, even I/O cost is only approximated.
 Average-case analysis; based on several simplistic
assumptions.
 Good enough to show the overall trends!
Comparing File Organizations
 Heap files (random order; insert at
eof)
 Sorted files, sorted on <age, sal>
 Clustered B+ tree file, Alternative (1),
search key <age, sal>
 Heap file with unclustered B+ tree index on search key <age, sal>
 Heap file with unclustered hash index
on search key <age, sal>
Operations to Compare
 Scan: Fetch all records from disk
 Equality search
 Range selection
 Insert a record
 Delete a record
Assumptions in Our Analysis
 Heap Files:
 Equality selection on key; exactly one match.
 Sorted Files:
 Files compacted after deletions.
 Indexes:
 Alt (2), (3): data entry size = 10% size of record
 Hash: No overflow buckets.
 80% page occupancy => File size = 1.25 data size
 Tree: 67% occupancy (this is typical).
 Implies file size = 1.5 data size
Assumptions (contd.)
 Scans:
 Leaf levels of a tree-index are chained.
 Index data-entries plus actual file
scanned for unclustered indexes.
 Range searches:
 We use tree indexes to restrict the set of
data records fetched, but ignore hash
indexes.
Cost of Operations
Columns: (a) Scan, (b) Equality, (c) Range, (d) Insert, (e) Delete.
Rows: (1) Heap, (2) Sorted, (3) Clustered, (4) Unclustered Tree index, (5) Unclustered Hash index.
(The table is filled in on the next slide.)
 Several assumptions underlie these (rough) estimates!
Cost of Operations
                          (a) Scan          (b) Equality          (c) Range                             (d) Insert     (e) Delete
(1) Heap                  BD                0.5 BD                BD                                    2D             Search + D
(2) Sorted                BD                D log2 B              D(log2 B + # pgs with match recs)     Search + BD    Search + BD
(3) Clustered             1.5 BD            D logF 1.5B           D(logF 1.5B + # pgs w. match recs)    Search + D     Search + D
(4) Unclust. Tree index   BD(R + 0.15)      D(1 + logF 0.15B)     D(logF 0.15B + # pgs w. match recs)   Search + 2D    Search + 2D
(5) Unclust. Hash index   BD(R + 0.125)     2D                    BD                                    Search + 2D    Search + 2D
 Several assumptions underlie these (rough) estimates!
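As a hedged worked example of reading the table, the snippet below plugs sample values into a few of the formulas; B, R, D and the fanout F are made-up numbers, not values from the slides.

```python
from math import log

# Sample values: B = 1000 data pages, R = 100 records/page, D = 15 ms per page
# I/O, F = 100 fanout (all assumed for illustration).
B, R, D, F = 1_000, 100, 0.015, 100

costs = {
    "heap scan":              B * D,
    "unclust. tree scan":     B * D * (R + 0.15),
    "heap equality (avg)":    0.5 * B * D,
    "sorted equality":        D * log(B, 2),
    "clustered equality":     D * log(1.5 * B, F),
    "unclust. tree equality": D * (1 + log(0.15 * B, F)),
    "unclust. hash equality": 2 * D,
}
for name, seconds in costs.items():
    print(f"{name:>24}: {seconds:10.3f} s")
# A heap scan costs ~15 s and an unclustered-tree scan ~1500 s, while a hash or
# tree equality search takes tens of milliseconds: the trend the table shows.
```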
Common Indexing Structures:
B+ Tree
B+ Tree: Most Widely Used Index
 Insert/delete at log F N cost; keep tree height-balanced. (F = fanout, N = # leaf pages)
 Minimum 50% occupancy (except for root). Each
node contains d <= m <= 2d entries. The
parameter d is called the order of the tree.
 Supports equality and range-searches efficiently.
[Figure: index entries (used for direct search) sit above the data entries, which form the "sequence set" at the leaf level.]
Example B+ Tree
 Search begins at root, and key
comparisons direct it to a leaf.
 Search for 5*, 15*, all data entries >=
24* ...
 Based on the search for 15*, we know it is not in the tree!
[Figure: example B+ tree. Root keys 13, 17, 24, 30; leaves 2* 3* | 5* 7* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*.]
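A minimal sketch of the search procedure, assuming a simplified node layout (separator keys plus either child pointers or leaf entries); the Node class and search function are illustrative, not a production structure.

```python
from bisect import bisect_right

# Key comparisons at each internal node direct the search to one child; the
# search always ends at the single leaf that could contain the key.
class Node:
    def __init__(self, keys, children=None, entries=None):
        self.keys = keys                      # internal nodes: separator keys
        self.children = children              # internal nodes only
        self.entries = entries                # leaf nodes only: sorted (key, data entry) pairs

def search(node, key):
    while node.children is not None:
        node = node.children[bisect_right(node.keys, key)]
    return [e for k, e in node.entries if k == key]

leaf1 = Node([], entries=[(2, "2*"), (3, "3*")])
leaf2 = Node([], entries=[(5, "5*"), (7, "7*")])
root = Node([5], children=[leaf1, leaf2])
assert search(root, 5) == ["5*"]
assert search(root, 15) == []                 # an unsuccessful search also ends at one leaf
```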
B+ Trees in Practice
 Typical order: 100
 capacity is 200 entries (min 100 entries per node, except the root)
 Typical fill-factor: 67%.
 average fanout = 133
 Typical capacities:
 Height 4: 133^4 = 312,900,721 records
 Height 3: 133^3 = 2,352,637 records
 Can often hold top levels in buffer pool:
 Level 1 = 1 page = 8 Kbytes
 Level 2 = 133 pages = 1 Mbyte
 Level 3 = 17,689 pages ≈ 138 MBytes
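The capacity figures above follow directly from fanout^height; a quick check (assuming a fanout of 133 and 8 KB pages):

```python
# Quick check of the capacity arithmetic above (assumed fanout of 133).
fanout = 133
print(fanout ** 3, fanout ** 4)               # 2352637 and 312900721 leaf entries
for level in (1, 2, 3):
    print(f"level {level}: {fanout ** (level - 1):>6,} pages")   # 1, 133, 17,689 pages
```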
Inserting a Data Entry into a B+ Tree
 Find correct leaf L.
 Put data entry onto L.
 If L has enough space, done!
 Else, must split L (into L and a new node L2)
 Redistribute entries evenly, copy up middle key.
 Insert index entry pointing to L2 into parent of L.
 This can happen recursively
 To split index node, redistribute entries evenly, but
push up middle key. (Contrast with leaf splits.)
 Splits “grow” tree; root split increases height.
 Tree growth: gets wider or one level taller at top.
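Below is a hedged sketch of this insertion algorithm with copy-up at the leaves and push-up in index nodes; the Node layout, the recursive helper, and the order d = 2 are assumptions made for the example (real implementations also maintain rids, sibling links, and the on-page format).

```python
from bisect import bisect_right, insort

d = 2                                         # every non-root node holds d..2d keys

class Node:
    def __init__(self, keys, children=None):
        self.keys = keys                      # leaf: data-entry keys; internal: separators
        self.children = children              # None for a leaf

def _insert(node, key):
    """Insert key below node; return (separator, new right node) if node split."""
    if node.children is None:                 # leaf
        insort(node.keys, key)
        if len(node.keys) <= 2 * d:
            return None
        mid = len(node.keys) // 2
        right = Node(node.keys[mid:])         # copy up: middle key also stays in a leaf
        node.keys = node.keys[:mid]
        return right.keys[0], right
    i = bisect_right(node.keys, key)
    split = _insert(node.children[i], key)
    if split is None:
        return None
    sep, new_child = split                    # child split: add the new index entry here
    node.keys.insert(i, sep)
    node.children.insert(i + 1, new_child)
    if len(node.keys) <= 2 * d:
        return None
    mid = len(node.keys) // 2
    sep = node.keys[mid]                      # push up: middle key moves to the parent
    right = Node(node.keys[mid + 1:], node.children[mid + 1:])
    node.keys, node.children = node.keys[:mid], node.children[:mid + 1]
    return sep, right

def insert(root, key):
    split = _insert(root, key)
    if split is None:
        return root
    sep, right = split                        # root split: the tree grows one level taller
    return Node([sep], [root, right])

root = Node([2, 3, 5, 7])                     # start from a single leaf
for k in (14, 16, 19, 20, 22, 24, 27, 29):
    root = insert(root, k)
```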
Inserting 8* into Example B+ Tree
[Figure: the example B+ tree before the insert; root keys 13, 17, 24, 30; leaves 2* 3* | 5* 7* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*.]
Inserting 8* into Example B+ Tree
 Observe how minimum occupancy is guaranteed in both leaf and index pg splits.
 Note difference between copy-up and push-up; be sure you understand the reasons for this.
[Figure: the leaf split yields leaves 2* 3* and 5* 7* 8*; the entry 5 is copied up into the parent node (note that 5 continues to appear in the leaf). The index-page split yields nodes 5 13 and 24 30; the entry 17 is pushed up into the parent node and appears only once in the index; contrast this with a leaf split.]
Example B+ Tree After Inserting 8*
 Notice that root was split, leading to increase in height.
 In this example, we can avoid split by re-distributing
entries; however, this is usually not done in practice.
[Figure: the tree after inserting 8*. Root key 17; left child keys 5, 13; right child keys 24, 30; leaves 2* 3* | 5* 7* 8* | 14* 16* | 19* 20* 22* | 24* 27* 29* | 33* 34* 38* 39*.]
Deleting a Data Entry from a B+ Tree
 Start at root, find leaf L where entry belongs.
 Remove the entry.
 If L is at least half-full, done!
 If L has only d-1 entries,
 Try to re-distribute, borrowing from sibling
(adjacent node with same parent as L).
 If re-distribution fails, merge L and sibling.
 If merge occurred, must delete entry (pointing to L
or sibling) from parent of L.
 Merge could propagate to root, decreasing height.
Example Tree After (Inserting 8*, Then)
Deleting 19* and 20* ...
[Figure: the tree after inserting 8* (as above), before the deletions.]
Example Tree After (Inserting 8*, Then)
Deleting 19* and 20* ...
 Deleting 19* is easy.
 Deleting 20* is done with re-distribution.
Notice how middle key is copied up.
[Figure: after deleting 19* and 20*. Root key 17; left child keys 5, 13; right child keys 27, 30; leaves 2* 3* | 5* 7* 8* | 14* 16* | 22* 24* | 27* 29* | 33* 34* 38* 39*. The key 27 was copied up during the redistribution.]
... And Then Deleting 24*
 Must merge.
 Observe `toss’ of
index entry (on
right), and `pull
down’ of index entry
(below).
[Figure: deleting 24* empties its leaf, which is merged with the 27* 29* leaf and the index entry 27 is `tossed' (right). The right index node is then below minimum occupancy, so it is merged with its sibling and the root entry 17 is `pulled down', leaving a root with keys 5, 13, 17, 30 over leaves 2* 3* | 5* 7* 8* | 14* 16* | 22* 27* 29* | 33* 34* 38* 39*.]
Prefix Key Compression
 Important to increase fan-out. (Why?)
 Key values in index entries only `direct traffic’; can
often compress them.
 E.g., If we have adjacent index entries with search key
values Dannon Yogurt, David Smith and Devarakonda
Murthy, we can abbreviate David Smith to Dav. (The
other keys can be compressed too ...)
 In general, while compressing, must leave each index entry
greater than every key value (in any subtree) to its left.
 Insert/delete must be suitably modified.
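A small sketch of the separator idea behind prefix key compression: the index entry only has to be greater than every key in the left subtree and at most the smallest key in the right one. The function shortest_separator and its simple prefix scheme are illustrative, and the second test uses hypothetical keys; the insert/delete adjustments mentioned above are not shown.

```python
def shortest_separator(left_max, right_min):
    """Shortest prefix s of right_min with left_max < s (so left_max < s <= right_min)."""
    for i in range(1, len(right_min) + 1):
        candidate = right_min[:i]
        if candidate > left_max:
            return candidate
    return right_min

assert shortest_separator("Dannon Yogurt", "David Smith") == "Dav"     # the slide's example
assert shortest_separator("Devarakonda Murthy", "Frank Zappa") == "F"  # hypothetical keys
```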
Bulk Loading of a B+ Tree
 If we have a large collection of records, and we want
to create a B+ tree on some field, doing so by
repeatedly inserting records is very slow.
 Bulk Loading can be done much more efficiently.
 Initialization: Sort all data entries, insert pointer to
first (leaf) page in a new (root) page.
[Figure: sorted pages of data entries (3* 4* | 6* 9* | 10* 11* | 12* 13* | 20* 22* | 23* 31* | 35* 36* | 38* 41* | 44*), not yet in the B+ tree; the new root page holds a pointer to the first leaf page.]
Bulk Loading (Contd.)
 Index entries for leaf pages always entered into right-most index page just above leaf level. When this fills up, it splits. (Split may go up right-most path to the root.)
 Much faster than repeated inserts, especially when one considers locking!
[Figure: two snapshots of the build over the sorted data entry pages. First, the right-most index page just above the leaf level accumulates the entries 6, 10, 12, 20, 23, 35 while later data entry pages are not yet in the B+ tree; when that page fills, it splits and the split propagates up the right-most path (the second snapshot shows a root with key 20 over further index pages containing 6, 10, 12, 23, 35, 38).]
Summary of Bulk Loading
 Option 1: multiple inserts.
 Slow.
 Does not give sequential storage of leaves.
 Option 2: Bulk Loading
 Has advantages for concurrency control.
 Fewer I/Os during build.
 Leaves will be stored sequentially (and
linked, of course).
 Can control “fill factor” on pages.
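A minimal sketch of Option 2, assuming the data entries are already sorted; the bulk_load function, its dict-based nodes, and the fanout of 4 are illustrative choices, not the textbook's algorithm verbatim.

```python
# Leaves are packed left to right and each index level is built bottom-up, so
# every page is written once and the leaves end up stored sequentially.
def bulk_load(sorted_entries, fanout=4):
    # each element of `level` is (smallest key in the subtree, node)
    level = [(page[0], {"leaf": page})
             for page in (sorted_entries[i:i + fanout]
                          for i in range(0, len(sorted_entries), fanout))]
    while len(level) > 1:                                # build the next index level up
        parents = []
        for i in range(0, len(level), fanout):
            group = level[i:i + fanout]
            node = {"keys": [k for k, _ in group[1:]],   # separators between children
                    "children": [n for _, n in group]}
            parents.append((group[0][0], node))
        level = parents
    return level[0][1]                                   # the root

root = bulk_load([3, 4, 6, 9, 10, 11, 12, 13, 20, 22, 23, 31, 35, 36, 38, 41, 44])
# 17 entries -> 5 leaves -> 2 index nodes -> 1 root, with every page written once.
```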
A Note on `Order’
 Order (d) concept replaced by physical space
criterion in practice (`at least half-full’).
 Index pages can typically hold many more entries than
leaf pages.
 Variable sized records and search keys mean different
nodes will contain different numbers of entries.
 Even with fixed length fields, multiple records with the
same search key value (duplicates) can lead to
variable-sized data entries (if we use Alternative (3)).
Summary
 Tree-structured indexes are ideal for range-searches, also good for
equality searches.
 B+ tree is a dynamic structure.
 Inserts/deletes leave tree height-balanced; log F N cost.
 High fanout (F) means depth rarely more than 3 or 4.
 Almost always better than maintaining a sorted file.
 Typically, 67% occupancy on average.
 Usually preferable to ISAM, modulo locking considerations; adjusts to
growth gracefully.
 If data entries are data records, splits can change rids!
 Key compression increases fanout, reduces height.
 Bulk loading can be much faster than repeated inserts for creating a
B+ tree on a large data set.
 Most widely used index in database management systems because
of its versatility. One of the most optimized components of a DBMS.
Common Indexing Structures:
Hash Table
Introduction
 As for any index, 3 alternatives for data entries k*:
 Data record with key value k
 <k, rid of data record with search key value k>
 <k, list of rids of data records with search key k>
 Choice orthogonal to the indexing technique
 Hash-based indexes are best for equality selections.
Cannot support range searches.
 Static and dynamic hashing techniques exist.
Static Hashing
 # primary pages fixed, allocated
sequentially, never de-allocated; overflow
pages if needed.
 h(k) mod M = bucket to which data entry
with key k belongs. (M = # of buckets)
[Figure: a key is hashed by h; h(key) mod N selects one of the primary bucket pages 0, 1, 2, ..., N-1, each of which may have a chain of overflow pages.]
Static Hashing (Contd.)
 Buckets contain data entries.
 Hash fn works on search key field of record r. Must
distribute values over range 0 ... M-1.
 h(key) = (a * key + b) usually works well.
 a and b are constants; lots known about how to tune
h.
 Long overflow chains can develop and degrade
performance.
 Extendible and Linear Hashing: Dynamic techniques
to fix this problem.
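A tiny sketch of static hashing with an h(key) = a*key + b style function, taking the result mod M to pick a bucket; the constants a, b and M below are arbitrary choices for the example, and Python lists stand in for primary pages plus their overflow chains.

```python
M, a, b = 8, 2_654_435_761, 17                # M buckets; a, b chosen arbitrarily here

def bucket_of(key):
    return (a * key + b) % M

buckets = [[] for _ in range(M)]              # bucket = primary page + overflow pages

def insert(key, rid):
    buckets[bucket_of(key)].append((key, rid))   # long chains here are what degrades performance

def search(key):
    return [rid for k, rid in buckets[bucket_of(key)] if k == key]

insert(42, "rid1"); insert(42, "rid2"); insert(7, "rid3")
assert search(42) == ["rid1", "rid2"]
assert search(99) == []
```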
Extendible Hashing
 Situation: Bucket (primary page) becomes full. Why
not re-organize file by doubling # of buckets?
 Reading and writing all pages is expensive!
 Idea: Use directory of pointers to buckets, double # of
buckets by doubling the directory, splitting just the
bucket that overflowed!
 Directory much smaller than file, so doubling it is much
cheaper. Only one page of data entries is split. No
overflow page!
 Trick lies in how hash function is adjusted!
Example
 Directory is array of size 4.
 To find bucket for r, take
last `global depth’ # bits
of h(r); we denote r by
h(r).
 If h(r) = 5 = binary
101, it is in bucket
pointed to by 01.
 Insert: If bucket is full, split it (allocate new page, re-distribute).
 If necessary, double the directory. (As we will see, splitting a
bucket does not always require doubling; we can tell by
comparing global depth with local depth for the split bucket.)
[Figure: GLOBAL DEPTH 2 directory with entries 00, 01, 10, 11 pointing to Buckets A-D (each with LOCAL DEPTH 2) in the data pages: Bucket A holds 4* 12* 32* 16*, Bucket B holds 1* 5* 21* 13*, Bucket C holds 10*, Bucket D holds 15* 7* 19*.]
Insert h(r)=20 (Causes Doubling)
[Figure: inserting 20* splits Bucket A into A (32* 16*) and its split image A2 (4* 12* 20*), both now with LOCAL DEPTH 3. Since the local depth exceeds the old global depth, the directory doubles to entries 000-111 (GLOBAL DEPTH 3); only the entry 100 is redirected to A2, while Buckets B (1* 5* 21* 13*), C (10*) and D (15* 7* 19*) are unchanged.]
Points to Note
 20 = binary 10100. Last 2 bits (00) tell us r
belongs in A or A2. Last 3 bits needed to tell
which.
 Global depth of directory: Max # of bits needed to
tell which bucket an entry belongs to.
 Local depth of a bucket: # of bits used to determine
if an entry belongs to this bucket.
 When does bucket split cause directory doubling?
 Before insert, local depth of bucket = global depth.
Insert causes local depth to become > global depth;
directory is doubled by copying it over and `fixing’
pointer to split image page.
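The global-depth/local-depth rules above can be sketched as follows; this Python toy (ExtendibleHash, bucket capacity 4, identity hashing of small integer keys) is an assumption-laden illustration, not the full algorithm (no deletes, and duplicate overflow is not handled).

```python
BUCKET_CAPACITY = 4

class Bucket:
    def __init__(self, local_depth):
        self.local_depth = local_depth
        self.keys = []

class ExtendibleHash:
    def __init__(self):
        self.global_depth = 1
        self.directory = [Bucket(1), Bucket(1)]

    def _bucket(self, key):
        return self.directory[key & ((1 << self.global_depth) - 1)]  # last global_depth bits

    def insert(self, key):
        bucket = self._bucket(key)
        if len(bucket.keys) < BUCKET_CAPACITY:
            bucket.keys.append(key)
            return
        if bucket.local_depth == self.global_depth:       # split forces directory doubling
            self.directory = self.directory + self.directory
            self.global_depth += 1
        bucket.local_depth += 1                           # split the full bucket
        image = Bucket(bucket.local_depth)
        high_bit = 1 << (bucket.local_depth - 1)
        for i, b in enumerate(self.directory):            # re-point half of its directory entries
            if b is bucket and i & high_bit:
                self.directory[i] = image
        old_keys, bucket.keys = bucket.keys, []
        for k in old_keys:                                # re-distribute the old entries
            self._bucket(k).keys.append(k)
        self.insert(key)                                  # retry; may split again

eh = ExtendibleHash()
for k in (4, 12, 32, 16, 1, 5, 21, 13, 10, 15, 7, 19, 20):  # the slide's example keys
    eh.insert(k)
assert eh.global_depth == 3                                  # inserting 20 caused the doubling
```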
Comments on Extendible Hashing
 If directory fits in memory, equality search answered
with one disk access; else two.
 100MB file, 100 bytes/rec, 4K pages contains 1,000,000
records (as data entries) and 25,000 directory
elements; chances are high that directory will fit in
memory.
 Directory grows in spurts, and, if the distribution of
hash values is skewed, directory can grow large.
 Multiple entries with same hash value cause problems!
 Delete: If removal of data entry makes bucket
empty, can be merged with `split image’. If each
directory element points to same bucket as its split
image, can halve directory.
Summary
 Hash-based indexes: best for equality searches,
cannot support range searches.
 Static Hashing can lead to long overflow chains.
 Extendible Hashing avoids overflow pages by splitting
a full bucket when a new data entry is to be added to
it. (Duplicates may require overflow pages.)
 Directory to keep track of buckets, doubles periodically.
 Can get large with skewed data; additional I/O if this
does not fit in main memory.
 For hash-based indexes, a skewed data distribution is
one in which the hash values of data entries are not
uniformly distributed!
Choosing a File Organization
Understanding the Workload
 For each query in the workload:
 Which relations does it access?
 Which attributes are retrieved?
 Which attributes are involved in selection/join conditions?
How selective are these conditions likely to be?
 For each update in the workload:
 Which attributes are involved in selection/join conditions?
How selective are these conditions likely to be?
 The type of update (INSERT/DELETE/UPDATE), and the
attributes that are affected.
Choice of Indexes
 What indexes should we create?
 Which relations should have indexes? What
field(s) should be the search key? Should we
build several indexes?
 For each index, what kind of an index should
it be?
 Clustered? Hash/tree?
Choice of Indexes (Contd.)
 One approach: Consider the most important queries
in turn. Consider the best plan using the current
indexes, and see if a better plan is possible with an
additional index. If so, create it.
 Obviously, this implies that we must understand how a
DBMS evaluates queries and creates query evaluation
plans!
 For now, we discuss simple 1-table queries.
 Before creating an index, must also consider the
impact on updates in the workload!
 Trade-off: Indexes can make queries go faster, updates
slower. Require disk space, too.
System Catalogs
 For each index:
 structure (e.g., B+ tree) and search key fields
 For each relation:
 name, file name, file structure (e.g., Heap file)
 attribute name and type, for each attribute
 index name, for each index
 integrity constraints
 For each view:
 view name and definition
 Plus statistics, authorization, buffer pool size, etc.
 Catalogs are themselves stored as relations!
Index Selection Guidelines
 Attributes in WHERE clause are candidates for index keys.
 Exact match condition suggests hash index.
 Range query suggests tree index.
 Clustering is especially useful for range queries; can also help on
equality queries if there are many duplicates.
 Multi-attribute search keys should be considered when a
WHERE clause contains several conditions.
 Order of attributes is important for range queries.
 Such indexes can sometimes enable index-only strategies for
important queries.
 For index-only strategies, clustering is not important!
 Try to choose indexes that benefit as many queries as
possible. Since only one index can be clustered per
relation, choose it based on important queries that would
benefit the most from clustering.
Examples of Clustered Indexes
 B+ tree index on E.age can be
used to get qualifying tuples.
 How selective is the condition?
 Is the index clustered?
 Consider the GROUP BY query.
 If many tuples have E.age >
10, using E.age index and
sorting the retrieved tuples
may be costly.
 Clustered E.dno index may be
better!
 Equality queries and
duplicates:
 Clustering on E.hobby helps!
SELECT E.dno
FROM Emp E
WHERE E.age > 40

SELECT E.dno, COUNT(*)
FROM Emp E
WHERE E.age > 10
GROUP BY E.dno

SELECT E.dno
FROM Emp E
WHERE E.hobby = 'Stamps'
Indexes with Composite Search Keys
 Composite Search Keys: Search
on a combination of fields.
 Equality query: Every field value is
equal to a constant value. E.g. wrt
<sal,age> index:
 age=20 and sal =75
 Range query: Some field value is
not a constant. E.g.:
 age =20; or age=20 and sal > 10
 Data entries in index sorted by
search key to support range
queries.
 Lexicographic order, or
 Spatial order.
[Figure: examples of composite key indexes using lexicographic order. Data records, sorted by name:
  name  age  sal
  bob   12   10
  cal   11   80
  joe   12   20
  sue   13   75
Data entries in an index sorted by <age,sal>: <11,80>, <12,10>, <12,20>, <13,75>; sorted by <sal,age>: <10,12>, <20,12>, <75,13>, <80,11>; sorted by <age>: 11, 12, 12, 13; sorted by <sal>: 10, 20, 75, 80.]
Composite Search Keys
 To retrieve Emp records with age=30 AND
sal=4000, an index on <age,sal> would be better
than an index on age or an index on sal.
 Choice of index key orthogonal to clustering etc.
 If condition is: 20<age<30 AND
3000<sal<5000:
 Clustered tree index on <age,sal> or <sal,age> is
best.
 If condition is: age=30 AND 3000<sal<5000:
 Clustered <age,sal> index much better than
<sal,age> index!
 Composite indexes are larger, updated more
often.
Index-Only Plans
 A number of queries can be answered without retrieving any tuples from one or more of the relations involved if a suitable index is available.

SELECT E.dno, COUNT(*)
FROM Emp E
GROUP BY E.dno
-- index-only with <E.dno>

SELECT E.dno, MIN(E.sal)
FROM Emp E
GROUP BY E.dno
-- index-only with <E.dno, E.sal>; tree index!

SELECT AVG(E.sal)
FROM Emp E
WHERE E.age = 25 AND E.sal BETWEEN 3000 AND 5000
-- index-only with <E.age, E.sal> or <E.sal, E.age>; tree index!
Index-Only Plans (Contd.)
 Index-only plans are possible if the key is <dno,age> or we have a tree index with key <age,dno>
 Which is better?
 What if we consider the second query?

SELECT E.dno, COUNT(*)
FROM Emp E
WHERE E.age = 30
GROUP BY E.dno

SELECT E.dno, COUNT(*)
FROM Emp E
WHERE E.age > 30
GROUP BY E.dno
Index-Only Plans (Contd.)
 Index-only plans can also be found for queries involving more than one table; more on this later.

SELECT D.mgr
FROM Dept D, Emp E
WHERE D.dno = E.dno
-- index-only with <E.dno>

SELECT D.mgr, E.eid
FROM Dept D, Emp E
WHERE D.dno = E.dno
-- index-only with <E.dno, E.eid>
Summary
 Many alternative file organizations exist, each
appropriate in some situation.
 If selection queries are frequent, sorting the file
or building an index is important.
 Hash-based indexes only good for equality search.
 Sorted files and tree-based indexes best for range
search; also good for equality search. (Files rarely
kept sorted in practice; B+ tree index is better.)
 Index is a collection of data entries plus a way to
quickly find entries with given key values.
Summary (Contd.)
 Data entries can be actual data records, <key,
rid> pairs, or <key, rid-list> pairs.
 Choice orthogonal to indexing technique used to
locate data entries with a given key value.
 Can have several indexes on a given file of data
records, each with a different search key.
 Indexes can be classified as clustered vs.
unclustered, primary vs. secondary, and dense vs.
sparse. Differences have important consequences
for utility/performance.
Summary (Contd.)
 Understanding the nature of the workload for the
application, and the performance goals, is essential to
developing a good design.
 What are the important queries and updates? What
attributes/relations are involved?
 Indexes must be chosen to speed up important queries
(and perhaps some updates!).
 Index maintenance overhead on updates to key fields.
 Choose indexes that can help many queries, if possible.
 Build indexes to support index-only strategies.
 Clustering is an important decision; only one index on a
given relation can be clustered!
 Order of fields in composite index key can be important.