
Hashing RPK


Analysis of Algorithms

Hashing
• Is an effective way to store elements in a data structure.
• It reduces the number of comparisons needed.
• It can be used to access a stored record directly.
• 2 Concepts of hashing
– Hash Table
– Hash function

2
Hash Table
• Data structure used for storing and retrieving data very quickly.
• Insertion of data uses a key value; hence every entry in the hash table is associated with some key.
• Eg: when storing employee records in the hash table, the employee ID works as the key.
• Using the hash key, the required data can be searched in the hash table with a few key comparisons. Searching time then depends upon the size of the hash table.
• A dictionary can be represented effectively using a hash table.
• Dictionary entries (key & value pairs) can be placed in the hash table using hash functions.
• Find items with keys matching a given search key.

[Figure: employee table keyed by Employee ID (1000, 1001, 1022, 1023), each record holding Name, Phone No., email, Salary]
3
Hash Function
• Is a function used to put the data in the hash table.
• Hence one can use the same hash key to retrieve the data from the hash table.
• Thus the hash function is used to implement the hash table.
• The integer returned by the hash function is called the hash key.
• H(key) = key % 10

[Figure: array indices [0]–[3] mapping Employee IDs 1000, 1001, 1022, 1023 to records with fields Name, Phone No., email, Salary]
4
Hashing Techniques
• Various types of hash functions are used to place records in the hash table:
1. Division method
2. Mid square method
3. Multiplicative hash function
4. Digit folding
5. Digit analysis

5
1.Division method
• The hash function depends upon the remainder of division:
  H(key) = record % table_size
• Eg: {54, 72, 89, 37} is to be placed in a hash table of size 10:
  54 % 10 = 4, 72 % 10 = 2, 89 % 10 = 9, 37 % 10 = 7

[Figure: array of size 10 with 72 at index 2, 54 at index 4, 37 at index 7, 89 at index 9]
6
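The division-method example above can be sketched in Python (table size 10, as on the slide):

```python
def h(key, m=10):
    """Division method: h(key) = key % table_size."""
    return key % m

table = [None] * 10
for record in [54, 72, 89, 37]:
    table[h(record)] = record   # 54 -> 4, 72 -> 2, 89 -> 9, 37 -> 7
```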
2.Mid square method

• The key is squared and the middle part of the result is used as the index.
• If the key is a string, it has to be processed to produce a number.
• Eg: to place the record 11 in a hash table of size 10:
  11² = 121, so H(11) = 2 (the middle digit).
• For 3111 and a hash table of size 1000:
  3111² = 9678321, so H(3111) = 783 (the middle three digits).
7
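The mid-square computation can be sketched as a small helper (digit extraction via strings, for clarity):

```python
def mid_square(key, num_digits):
    """Mid-square method: take the middle num_digits digits of key squared."""
    s = str(key * key)
    start = (len(s) - num_digits) // 2
    return int(s[start:start + num_digits])

mid_square(11, 1)     # 11^2 = 121 -> middle digit 2
mid_square(3111, 3)   # 3111^2 = 9678321 -> middle three digits 783
```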
3.Multiplicative hash function

• The given record is multiplied by some constant value.
• The formula for computing the hash key is
  H(key) = floor(p × frac(key × A))
  where
  – p is an integer constant
  – A is a constant real number
  – A ≈ 0.6180339887 (Donald Knuth’s constant, (√5 − 1)/2)
  – frac(x) denotes the fractional part of x
• If key = 107 and p = 50:
  107 × 0.6180339887 = 66.1296…, so frac = 0.1296…
  H(107) = floor(50 × 0.1296…) = floor(6.4818) = 6
• The record 107 will be placed at location 6 in the hash table.
8
4.Digit folding

• The key is divided into separate parts.
• Using some simple operation, these parts are combined to produce the hash key.
• Eg: Consider the record 12365412. It is divided into the parts 123, 654, 12, and these are added together:
  H(key) = 123 + 654 + 12 = 789
• The record will be placed at location 789 in the hash table.

9
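Digit folding is easy to sketch; the helper below splits the key's digits into groups of three (the grouping used on the slide) and sums them:

```python
def fold(key, part_len=3):
    """Digit folding: split the key's digits into groups of part_len and add them."""
    s = str(key)
    parts = [int(s[i:i + part_len]) for i in range(0, len(s), part_len)]
    return sum(parts)

fold(12365412)   # 123 + 654 + 12 = 789
```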
5. Digit analysis

• Is used in situations where all the identifiers are known in advance.
• First transform the identifiers into numbers using some radix r.
• Then examine the digits of each identifier.
• Digits having the most skewed distributions are deleted.
• Deleting digits continues until the number of remaining digits is small enough to give an address in the range of the hash table.
• These digits are then used to calculate the hash address.

10
Applications
• Keeping track of customer account information at
a bank
– Search through records to check balances and perform
transactions
• Keep track of reservations on flights
– Search to find empty seats, cancel/modify reservations
• Search engine
– Looks for all documents containing a given word

11
Special Case: Dictionaries
• Dictionary = data structure that supports mainly
two basic operations: insert a new item and
return an item with a given key
• Queries: return information about the set S:
– Search (S, k)
– Minimum (S), Maximum (S)
– Successor (S, x), Predecessor (S, x)
• Modifying operations: change the set
– Insert (S, k)
– Delete (S, k) – not very often

12
Direct Addressing
• Assumptions:
– Key values are distinct
– Each key is drawn from a universe U = {0, 1, . . . , m - 1}
• Idea:
– Store the items in an array, indexed by keys

• Direct-address table representation:
– An array T[0 . . . m - 1]
– Each slot, or position, in T corresponds to a key in U
– For an element x with key k, a pointer to x (or x itself) will be placed
in location T[k]
– If there are no elements with key k in the set, T[k] is empty,
represented by NIL

13
Direct Addressing (cont’d)

[Figure: direct-address table T, with each key k in U indexing slot T[k]]
14
Operations

Alg.: DIRECT-ADDRESS-SEARCH(T, k)
return T[k]

Alg.: DIRECT-ADDRESS-INSERT(T, x)
T[key[x]] ← x

Alg.: DIRECT-ADDRESS-DELETE(T, x)
T[key[x]] ← NIL
• Running time for these operations: O(1)
15
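The three O(1) operations above translate directly; a minimal Python sketch (the Item type is hypothetical, standing in for any element that carries its own key):

```python
from dataclasses import dataclass

@dataclass
class Item:
    key: int
    data: str

class DirectAddressTable:
    def __init__(self, m):
        self.T = [None] * m    # T[k] holds the element with key k, or None (NIL)

    def search(self, k):       # DIRECT-ADDRESS-SEARCH: O(1)
        return self.T[k]

    def insert(self, x):       # DIRECT-ADDRESS-INSERT: O(1)
        self.T[x.key] = x

    def delete(self, x):       # DIRECT-ADDRESS-DELETE: O(1)
        self.T[x.key] = None
```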
Comparing Different Implementations

• Implementing dictionaries using:
– Direct addressing
– Ordered/unordered arrays
– Ordered/unordered linked lists

Insert Search
direct addressing O(1) O(1)
ordered array O(N) O(lgN)
ordered list O(N) O(N)
unordered array O(1) O(N)
unordered list O(1) O(N)
16
Examples Using Direct Addressing
Example 1: [figure]

Example 2: [figure]

17
Hash Tables
• When K is much smaller than U, a hash table
requires much less space than a direct-address
table
– Can reduce storage requirements to |K|
– Can still get O(1) search time, but on the average
case, not the worst case

18
Hash Tables
Idea:
– Use a function h to compute the slot for each key
– Store the element in slot h(k)

• A hash function h transforms a key into an index in a hash table T[0…m-1]:
  h : U → {0, 1, . . . , m - 1}
• We say that k hashes to slot h(k)
• Advantages:
– Reduce the range of array indices handled: m instead of |U|
– Storage is also reduced

19
Example: HASH TABLES
[Figure: keys k1…k5 from the universe U (actual keys K) mapped by h into slots 0…m-1; h(k2) = h(k5)]
20
Revisit Example 2

21
Do you see any problems
with this approach?
[Figure: same mapping as before; k2 and k5 hash to the same slot, h(k2) = h(k5): Collisions!]

22
Collisions
• Two or more keys hash to the same slot!!
• For a given set K of keys
– If |K| ≤ m, collisions may or may not happen,
depending on the hash function
– If |K| > m, collisions will definitely happen (i.e., there
must be at least two keys that have the same hash
value)
• Avoiding collisions completely is hard, even with
a good hash function

23
Handling Collisions
• We will review the following methods:
– Chaining
– Open addressing
• Linear probing
• Quadratic probing
• Double hashing

• We will discuss chaining first, and ways to build “good” hash functions.

24
Handling Collisions Using Chaining
• Idea:
– Put all elements that hash to the same slot into a
linked list

25
Collision with Chaining - Discussion
• Choosing the size of the table
– Small enough not to waste space
– Large enough such that lists remain short
– Typically 1/5 or 1/10 of the total number of elements

• How should we keep the lists: ordered or not?
– Not ordered!
• Insert is fast
• Can easily remove the most recently inserted elements

26
Insertion in Hash Tables
Alg.: CHAINED-HASH-INSERT(T, x)
insert x at the head of list T[h(key[x])]

• Worst-case running time is O(1)
• Assumes that the element being inserted isn’t already in the list
• It would take an additional search to check if it
was already inserted
27
Deletion in Hash Tables
Alg.: CHAINED-HASH-DELETE(T, x)
delete x from the list T[h(key[x])]

• Need to find the element to be deleted.
• Worst-case running time:
– Deletion depends on searching the corresponding list

28
Searching in Hash Tables

Alg.: CHAINED-HASH-SEARCH(T, k)
search for an element with key k in list T[h(k)]

• Running time is proportional to the length of the list of elements in slot h(k)

29
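The three chained-hash operations can be sketched together in Python; each slot holds a Python list playing the role of the linked chain (a real linked list would make head insertion O(1) — the list is used here for brevity):

```python
class ChainedHashTable:
    def __init__(self, m, h):
        self.T = [[] for _ in range(m)]   # one chain of (key, value) pairs per slot
        self.h = h                        # hash function h: key -> 0..m-1

    def insert(self, key, value):
        # CHAINED-HASH-INSERT: put the pair at the head of chain T[h(key)]
        self.T[self.h(key)].insert(0, (key, value))

    def search(self, key):
        # CHAINED-HASH-SEARCH: scan the chain in slot h(key)
        for k, v in self.T[self.h(key)]:
            if k == key:
                return v
        return None

    def delete(self, key):
        # CHAINED-HASH-DELETE: remove the pair from its chain
        chain = self.T[self.h(key)]
        for i, (k, _) in enumerate(chain):
            if k == key:
                del chain[i]
                return
```

For example, with h(k) = k % 10 the keys 54 and 64 both hash to slot 4 and end up in the same chain.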
Analysis of Hashing with Chaining:
Worst Case
• How long does it take to search for an element with a given key?
• Worst case:
– All n keys hash to the same slot
– Worst-case time to search is Θ(n), plus time to compute the hash function

[Figure: table T where every key chains into a single slot]

30
Analysis of Hashing with Chaining:
Average Case
• Average case
– depends on how well the hash function distributes the n keys among the m slots
• Simple uniform hashing assumption:
– Any given element is equally likely to hash into any of the m slots (i.e., probability of collision Pr(h(x)=h(y)) is 1/m)
• Length of the list at slot j: nj, for j = 0, 1, . . . , m – 1
• Number of keys in the table: n = n0 + n1 + · · · + nm-1
• Average value of nj: E[nj] = α = n/m

[Figure: table T with chains of lengths n0 = 0, n2, n3, …, nm-1 = 0]
31
Load Factor of a Hash Table
• Load factor of a hash table T:
  α = n/m
– n = # of elements stored in the table
– m = # of slots in the table = # of linked lists
• α encodes the average number of elements stored in a chain
• α can be <, =, or > 1

[Figure: table T of m slots, each slot pointing to a chain]

32
Case 1: Unsuccessful Search
(i.e., item not stored in the table)
Theorem
An unsuccessful search in a hash table takes expected time Θ(1 + α) under the assumption of simple uniform hashing
(i.e., probability of collision Pr(h(x)=h(y)) is 1/m)
Proof
• Searching unsuccessfully for any key k
– need to search to the end of the list T[h(k)]
• Expected length of the list:
– E[nh(k)] = α = n/m
• Expected number of elements examined in an unsuccessful search is α
• Total time required is:
– O(1) (for computing the hash function) + α = Θ(1 + α)
33
Case 2: Successful Search

[Figure: derivation shown as an image; a successful search likewise takes expected time Θ(1 + α) under simple uniform hashing]
34
Analysis of Search in Hash Tables

• If m (# of slots) is proportional to n (# of elements in the table):
– n = O(m)
– α = n/m = O(m)/m = O(1)
 Searching takes constant time on average

35
Hash Functions
• A hash function transforms a key into a table
address
• What makes a good hash function?
(1) Easy to compute
(2) Approximates a random function: for every input,
every output is equally likely (simple uniform hashing)
• In practice, it is very hard to satisfy the simple
uniform hashing property
– i.e., we don’t know in advance the probability
distribution that keys are drawn from

36
Good Approaches for Hash Functions

• Minimize the chance that closely related keys hash to the same slot
– Strings such as pt and pts should hash to different slots
• Derive a hash value that is independent from any patterns that may exist in the distribution of the keys

37
The Division Method
• Idea:
– Map a key k into one of the m slots by taking
the remainder of k divided by m
h(k) = k mod m
• Advantage:
– fast, requires only one operation
• Disadvantage:
– Certain values of m are bad, e.g.,
• power of 2
• non-prime numbers

38
Example - The Division Method
• If m = 2^p, then h(k) is just the p least significant bits of k
– p = 1, m = 2 ⇒ h(k) ∈ {0, 1}: least significant 1 bit of k
– p = 2, m = 4 ⇒ h(k) ∈ {0, 1, 2, 3}: least significant 2 bits of k
⇒ Choose m to be a prime, not close to a power of 2

[Figure: table of sample keys with columns k mod 97 (column 2) and k mod 100 (column 3), contrasting a prime modulus with m = 100]
39
The Multiplication Method
Idea:
• Multiply key k by a constant A, where 0 < A < 1
• Extract the fractional part of kA
• Multiply the fractional part by m
• Take the floor of the result
  h(k) = ⌊m (kA mod 1)⌋
where kA mod 1 = kA − ⌊kA⌋ is the fractional part of kA

• Disadvantage: Slower than the division method
• Advantage: Value of m is not critical, e.g., typically 2^p
40
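A minimal sketch of the multiplication method, using Knuth's suggested constant A = (√5 − 1)/2:

```python
import math

A = (math.sqrt(5) - 1) / 2          # ~0.6180339887, Knuth's suggested constant

def mult_hash(k, m):
    """Multiplication method: h(k) = floor(m * (k*A mod 1))."""
    frac = (k * A) % 1.0            # fractional part of k*A
    return math.floor(m * frac)

mult_hash(107, 50)  # frac(107*A) ~ 0.1296, so h = floor(50 * 0.1296) = 6
```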
Example – Multiplication Method

41
Universal Hashing
• In practice, keys are not randomly distributed
• Any fixed hash function might yield Θ(n) time
• Goal: hash functions that produce random
table indices irrespective of the keys
• Idea:
– Select a hash function at random, from a designed
class of functions at the beginning of the execution

42
Universal Hashing

[Figure: a hash function is selected at random from a family of hash functions, at the beginning of the execution]

43
Definition of Universal Hash Functions

H={h(k): U(0,1,..,m-1)}

44
How is this property useful?

If h is chosen uniformly at random from a universal class H, then for any pair of distinct keys x, y: Pr(h(x) = h(y)) ≤ 1/m

45
Universal Hashing – Main Result

With universal hashing, the chance of collision between distinct keys k and l is no more than the 1/m chance of collision that would result if locations h(k) and h(l) were randomly and independently chosen from the set {0, 1, …, m – 1}.

46
Designing a Universal Class
of Hash Functions
• Choose a prime number p large enough so that every possible key k is in the range [0 ... p – 1]
  Zp = {0, 1, …, p - 1} and Zp* = {1, …, p - 1}
• Define the following hash function, with a ∈ Zp* and b ∈ Zp:
  ha,b(k) = ((ak + b) mod p) mod m
• The family of all such hash functions is
  Hp,m = {ha,b : a ∈ Zp* and b ∈ Zp}
• The class Hp,m of hash functions is universal
• a, b: chosen randomly at the beginning of execution
47
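The random choice of a and b can be sketched as a factory function that draws the two coefficients once, at the start of execution:

```python
import random

def make_universal_hash(p, m):
    """Draw h_{a,b}(k) = ((a*k + b) mod p) mod m from the universal
    family H_{p,m}, with a in Zp* and b in Zp chosen at random."""
    a = random.randint(1, p - 1)    # a in Zp* = {1, ..., p-1}
    b = random.randint(0, p - 1)    # b in Zp  = {0, ..., p-1}
    return lambda k: ((a * k + b) % p) % m
```

With p = 17, m = 6 and the fixed choice a = 3, b = 4 this reproduces the slide's worked example h3,4(8) = 5.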
Example: Universal Hash Functions

E.g.: p = 17, m = 6

ha,b(k) = ((ak + b) mod p) mod m

h3,4(8) = ((38 + 4) mod 17) mod 6

= (28 mod 17) mod 6

= 11 mod 6
=5

48
Advantages of Universal Hashing

• Universal hashing provides good results on average, independently of the keys to be stored
• Guarantees that no input will always elicit the
worst-case behavior
• Poor performance occurs only when the random
choice returns an inefficient hash function – this
has small probability

49
Open Addressing
• If we have enough contiguous memory to store all the keys (m > N) ⇒ store the keys in the table itself (e.g., insert 14)
• No need to use linked lists anymore
• Basic idea:
– Insertion: if a slot is full, try another one, until you find an empty one
– Search: follow the same sequence of probes
– Deletion: more difficult ... (we’ll see why)
• Search time depends on the length of the probe sequence!
50
Generalize hash function notation:
• A hash function now takes two arguments: (i) the key value, and (ii) the probe number
  h(k,p), p = 0,1,...,m-1
• Probe sequences
  <h(k,0), h(k,1), ..., h(k,m-1)>
– Must be a permutation of <0,1,...,m-1>
– There are m! possible permutations
– Good hash functions should be able to produce all m! probe sequences

[Example probe sequence for inserting 14: <1, 5, 9>]
51
Common Open Addressing Methods

• Linear probing
• Quadratic probing
• Double hashing

• Note: None of these methods can generate more than m² different probe sequences!

52
Linear probing: Inserting a key
• Idea: when there is a collision, check the next available position in the table (i.e., probing)
  h(k,i) = (h1(k) + i) mod m, i = 0,1,2,...
• First slot probed: h1(k)
• Second slot probed: h1(k) + 1
• Third slot probed: h1(k) + 2, and so on (wrapping around the table)
  probe sequence: < h1(k), h1(k)+1, h1(k)+2, .... >
• Can generate at most m probe sequences — why? (the whole sequence is determined by the first probe h1(k))
53
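Linear-probing insertion can be sketched as follows (None marks an empty slot; the function name is illustrative):

```python
def linear_probe_insert(T, key, h1):
    """Insert key into open-addressed table T using linear probing:
    probe (h1(key) + i) mod m for i = 0, 1, 2, ... until an empty slot."""
    m = len(T)
    for i in range(m):
        j = (h1(key) + i) % m       # wraps around the table
        if T[j] is None:
            T[j] = key
            return j                # slot where the key landed
    raise RuntimeError("hash table overflow")
```

For example, with h1(k) = k % 10, inserting 54 then 64 into an empty table of size 10 puts 54 in slot 4 and pushes the colliding 64 to slot 5.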
Linear probing: Searching for a key
• Three cases:
(1) Position in table is occupied with an element of equal key
(2) Position in table is empty
(3) Position in table is occupied with a different element
• Case 3: probe the next higher index until the element is found or an empty position is found
• The process wraps around to the beginning of the table

[Figure: hash table with keys k1–k5, showing the probe path]
54
Linear probing: Deleting a key
• Problems
– Cannot mark the slot as empty
– Impossible to retrieve keys inserted after that slot was occupied
• Solution
– Mark the slot with a sentinel value DELETED
• The deleted slot can later be used for insertion
• Searching will be able to find all the keys
55
Primary Clustering Problem
• Some slots become more likely than others
• Long chunks of occupied slots are created
⇒ search time increases!!

[Figure: initially all slots have probability 1/m of receiving the next key; as a run of occupied slots grows, slot b reaches probability 2/m, slot d 4/m, slot e 5/m]
56
Quadratic probing

h(k,i) = (h1(k) + c1·i + c2·i²) mod m, i = 0,1,2,..., where c1 and c2 are nonzero auxiliary constants

57
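The quadratic probe sequence can be sketched as below; c1 = c2 = 1 is an illustrative choice of constants (for arbitrary m the sequence is not guaranteed to visit every slot):

```python
def quadratic_probe_sequence(key, m, h1, c1=1, c2=1):
    """Slots probed by h(k,i) = (h1(k) + c1*i + c2*i*i) mod m, for i = 0..m-1."""
    return [(h1(key) + c1 * i + c2 * i * i) % m for i in range(m)]
```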
Double Hashing
(1) Use one hash function to determine the first slot
(2) Use a second hash function to determine the
increment for the probe sequence
h(k,i) = (h1(k) + i·h2(k)) mod m, i = 0,1,...
• Initial probe: h1(k)
• Each subsequent probe is offset by h2(k) (mod m), and so on ...
• Advantage: avoids clustering
• Disadvantage: harder to delete an element
• Can generate up to m² probe sequences

58
Double Hashing: Example
h1(k) = k mod 13
h2(k) = 1 + (k mod 11)
h(k,i) = (h1(k) + i·h2(k)) mod 13
• Insert key 14:
  h(14,0) = h1(14) = 14 mod 13 = 1 → occupied (79)
  h(14,1) = (h1(14) + h2(14)) mod 13 = (1 + 4) mod 13 = 5 → occupied (98)
  h(14,2) = (h1(14) + 2·h2(14)) mod 13 = (1 + 8) mod 13 = 9 → empty: insert 14 at slot 9

[Figure: table of size 13 with 79 in slot 1, 69 in slot 4, 98 in slot 5, 72 in slot 7, 14 in slot 9, 50 in slot 11]
59
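The example's probe sequence is easy to reproduce directly from the two hash functions:

```python
def h1(k): return k % 13
def h2(k): return 1 + (k % 11)

def h(k, i):
    """Double hashing: h(k,i) = (h1(k) + i*h2(k)) mod 13."""
    return (h1(k) + i * h2(k)) % 13

[h(14, i) for i in range(3)]   # probe sequence for key 14: [1, 5, 9]
```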
Analysis of Open Addressing

• For an unsuccessful search under uniform hashing, the expected number of probes is at most
  sum over k ≥ 0 of α^k = 1/(1 − α)    (α = n/m is the load factor, α < 1)
60
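The closed form above is a one-liner to evaluate; it reproduces the unsuccessful-search figures on the next slide:

```python
def expected_unsuccessful_probes(alpha):
    """Expected probes in an unsuccessful search: 1 / (1 - alpha), for alpha < 1."""
    return 1.0 / (1.0 - alpha)

expected_unsuccessful_probes(0.5)   # 2.0
expected_unsuccessful_probes(0.9)   # ~10
```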
Analysis of Open Addressing (cont’d)

Example (similar to Exercise 11.4-4, page 244)

Unsuccessful retrieval:
  α = 0.5: E(#steps) = 2
  α = 0.9: E(#steps) = 10

Successful retrieval:
  α = 0.5: E(#steps) = 3.387
  α = 0.9: E(#steps) = 3.670
61
