tech talk @ ferret
Andrii Gakhov
PROBABILISTIC DATA STRUCTURES
ALL YOU WANTED TO KNOW BUT WERE AFRAID TO ASK
PART 4: SIMILARITY
SIMILARITY
Agenda:
▸ Locality-Sensitive Hashing (LSH)
▸ MinHash
▸ SimHash
THE PROBLEM
• To find clusters of similar documents in the
document set
• To find duplicates of a document in the
document set
LOCALITY SENSITIVE HASHING
LOCALITY SENSITIVE HASHING
• Locality Sensitive Hashing (LSH) was introduced by Indyk and
Motwani in 1998 as a family of functions with the following property:
• similar input objects (from the domain of such functions) have a
higher probability of colliding in the range space than non-similar
ones
• LSH differs from conventional and cryptographic hash functions because
it aims to maximize the probability of a “collision” for similar
items.
• Intuitively, LSH is based on the simple idea that, if two points are close
together, then after a “projection” operation these two points will remain
close together.
LSH: PROPERTIES
• One general approach to LSH is to “hash” items several
times, in such a way that similar items are more likely to
be hashed to the same bucket than dissimilar items are.
We then consider any pair that hashed to the same
bucket for any of the hashings to be a candidate pair.
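A minimal Python sketch of this approach is shown below. It is only an illustration: the items dictionary and the family of hash functions are assumptions, not any particular library's API.

from collections import defaultdict
from itertools import combinations

def candidate_pairs(items, hash_funcs):
    """items: dict of name -> feature vector; hash_funcs: a family of LSH functions."""
    candidates = set()
    for h in hash_funcs:                       # hash every item several times
        buckets = defaultdict(list)
        for name, features in items.items():
            buckets[h(features)].append(name)  # similar items tend to land in the same bucket
        for bucket in buckets.values():        # any pair sharing a bucket becomes a candidate pair
            candidates.update(combinations(sorted(bucket), 2))
    return candidates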
LSH: PROPERTIES
• Examples of LSH functions are known for resemblance,
cosine and Hamming distances.
• LSH functions do not exist for certain commonly used
similarity measures in information retrieval, such as the Dice
coefficient and the Overlap coefficient (Moses S. Charikar,
2002)
LSH: PROBLEM
• Given a query point, we wish to find the points in a large
database that are closest to the query. We wish to guarantee
with a high probability equal to 1 − δ that we return the
nearest neighbor for any query point.
• Conceptually, this problem is easily solved by iterating through each point in the
database and calculating the distance to the query object. However, our database
may contain billions of objects — each object described by a vector that contains
hundreds of dimensions.
• LSH proves useful to identify nearest neighbors quickly even
when the database is very large.
LSH: FINDING DUPLICATE PAGES ON THE WEB
How to select features for the webpage?
• document vector from page content
• Compute “document-vector” by case-folding, stop-word removal, stemming, computing
term-frequencies and finally, weighting each term by its inverse document frequency (IDF)
• shingles from page content
• Consider a document as a sequence of words. A shingle is a hash value of a k-gram, i.e. a
sub-sequence of k consecutive words (a small code sketch follows this list).
• Hashes for shingles can be computed using Rabin’s fingerprints
• connectivity information
• The idea is that similar web pages have several incoming links in common.
• anchor text and anchor window
• The idea is that similar documents should have similar anchor text. The words in the anchor
text or anchor window are folded into the document-vector itself.
• phrases
• The idea is to identify phrases and compute a document-vector that includes phrases as terms.
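A minimal sketch of the k-gram shingling mentioned above, assuming a simple whitespace tokenizer and Python's built-in hash() as a stand-in for Rabin's fingerprints:

def shingles(text, k=4):
    """Return the set of hashed k-grams (shingles) of a document."""
    words = text.lower().split()
    return {hash(tuple(words[i:i + k])) for i in range(len(words) - k + 1)}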
LSH: FINDING DUPLICATE PAGES ON THE WEB
• The Web contains many duplicate pages, partly because content is duplicated
across sites and partly because there is more than one URL that points to the
same page.
• If we use shingles as features, each shingle represents a portion of a Web page
and is computed by forming a histogram of the words found within that portion
of the page.
• We can test to see if a portion of the page is duplicated elsewhere on the Web
by looking for other shingles with the same histogram. If the shingles of the
new page match shingles from the database, then it is likely that the new page
bears a strong resemblance to an existing page.
• Because Web pages are surrounded by navigational and other information that
changes from site to site, it is useful to use a nearest-neighbor solution that can
employ LSH functions.
LSH: RETRIEVING IMAGES
• LSH can be used in image retrieval as an object recognition
tool. We compute a detailed metric for many different
orientations and configurations of an object we wish to
recognize.
• Then, given a new image we simply check our database to see
if a precomputed object’s metrics are close to our query. This
database contains millions of poses and LSH allows us to
quickly check if the query object is known.
LSH: RETRIEVING MUSIC
• In music retrieval conventional hashes and robust features are typically used to find
musical matches.
• The features can be fingerprints, i.e., representations of the audio signal that are robust to
common types of abuse that are performed on audio before it reaches our ears.
• Fingerprints can be computed, for instance, by noting the peaks in the spectrum (because they are
robust to noise) and encoding their position in time and space. The user then just has to query the
database for the same fingerprint.
• However, to find similar songs we cannot use fingerprints because these are
different when a song is remixed for a new audience or when a different artist performs the
same song. Instead, we can use several seconds of the song— a snippet—as a shingle.
• To determine if two songs are similar, we need to query the database and see if a large enough number of
the query shingles are close to one song in the database.
• Although closeness depends on the feature vector, we know that long shingles provide
specificity. This is particularly important because we can eliminate duplicates to improve
search results and to link recommendation data between similar songs.
LSH: PYTHON
• https://github.com/ekzhu/datasketch
datasketch gives you probabilistic data structures that
can process very large amounts of data
• https://github.com/simonemainardi/LSHash
LSHash is a fast Python implementation of locality
sensitive hashing with persistence support
• https://github.com/go2starr/lshhdc
LSHHDC: Locality-Sensitive Hashing based High
Dimensional Clustering
LSH: READ MORE
• http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/IndykM-curse.pdf
• http://infolab.stanford.edu/~ullman/mmds/book.pdf
• http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/CharikarEstim.pdf
• http://www.matlabi.ir/wp-content/uploads/bank_papers/g_paper/g15_Matlabi.ir_Locality-Sensitive%20Hashing%20for%20Finding%20Nearest%20Neighbors.pdf
• https://en.wikipedia.org/wiki/Locality-sensitive_hashing
MINHASH
MINHASH: RESEMBLANCE SIMILARITY
• While talking about the relation of two documents A and B, we are mostly interested in the
concept of "roughly the same", which can be mathematically formalized as their
resemblance (or Jaccard similarity).
• Every document can be seen as a set of features, so the problem reduces to a set
intersection problem (finding the similarity between sets).
• The resemblance r(A,B) of two documents is a number between 0 and 1, such that it is
close to 1 for documents that are "roughly the same":
• For the duplicate detection problem we can reformulate it as a task to find pairs of
documents whose resemblance r(A,B) exceeds a certain threshold R.
• For the clustering problem we can use d(A,B) = 1 - r(A,B) as a metric to design
algorithms intended to cluster a collection of documents into sets of closely resembling
documents.
J(A,B) = r(A,B) = |A ∩ B| / |A ∪ B|
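A small sketch of this definition in Python, treating documents as sets of words:

def resemblance(a, b):
    """Jaccard similarity of two feature sets."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

r = resemblance("python is a programming language".split(),
                "java is a programming language".split())   # 4/6 ≈ 0.67, "roughly the same"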
MINHASH
• MinHash (minwise hashing algorithm) has been proposed by Andrei
Broder in 1997
• The minwise hashing family applies a random permutation π: Ω → Ω to the
given set S, and stores only the minimum value after the permutation
mapping.
• Formally MinHash is defined as:
h_π^min(S) = min(π(S)); with k permutations the signature is (min(π1(S)), …, min(πk(S)))
MINHASH: MATRIX REPRESENTATION
• A collection of sets can be visualised as a characteristic matrix CM.
• The columns of the matrix correspond to the sets, and the rows
correspond to elements of the universal set from which elements of the
sets are drawn.
• CM(r, c) =1 if the element for row r is a member of the set for
column c, otherwise CM(r,c) =0
• NOTE: The characteristic matrix is unlikely to be the way the data is
stored, but it is useful as a way to visualize the data.
• One reason not to store the data as a matrix is that, in practice, these matrices are
almost always sparse (they have many more 0’s than 1’s).
MINHASH: MATRIX REPRESENTATION
• Consider the following documents (represented as bags of words):
• S1 = {python, is, a, programming, language}
• S2 = {java, is, a, programming, language}
• S3 = {a, programming, language}
• S4 = {python, is, a, snake}
• We can assume that our universal set is limited to the words from these 4 documents. The
characteristic matrix of these sets is:

element      row  S1 S2 S3 S4
a             0    1  1  1  1
is            1    1  1  0  1
java          2    0  1  0  0
language      3    1  1  1  0
programming   4    1  1  1  0
python        5    1  0  0  1
snake         6    0  0  0  1
MINHASH: INTUITION
• The minhash value of any column of the characteristic
matrix is the number of the first row, in the
permuted order, in which the column has a 1.
• To minhash a set represented by such columns, we need
to pick a permutation of the rows.
MINHASH: ONE STEP EXAMPLE
• Consider the characteristic matrix below, with its rows listed in permuted order;
the last column gives each row's original number:

row  S1 S2 S3 S4  original row
 1    1  0  1  0       2
 2    1  0  0  0       5
 3    0  0  1  1       6
 4    1  0  0  1       1
 5    0  0  1  1       4
 6    0  1  1  0       3

• Record, for each column, the original row number of the first “1”:
m(S1) = 2, m(S2) = 3, m(S3) = 2, m(S4) = 6
• Estimate the Jaccard similarity: J(Si,Sj) = 1 if m(Si) = m(Sj), or 0 otherwise:
J(S1,S3) = 1, J(S1,S4) = 0
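The permutation-based definition can be reproduced directly for such a tiny matrix (a sketch only; in practice the permutation is simulated by hash functions, as described next):

import random

def minhash_with_permutation(columns, n_rows, seed=42):
    """columns: dict of set name -> set of row numbers that contain a 1."""
    order = list(range(1, n_rows + 1))
    random.Random(seed).shuffle(order)               # a random permutation of the rows
    # the minhash value is the number of the first row, in permuted order, with a 1
    return {name: next(r for r in order if r in rows) for name, rows in columns.items()}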
MINHASH: INTUITION
• It is not feasible to permute a large characteristic
matrix explicitly.
• Even picking a random permutation of millions of rows is time-
consuming, and the necessary sorting of the rows would take even
more time.
• Fortunately, it is possible to simulate the effect of a
random permutation by a random hash function that
maps row numbers to as many buckets as there are rows.
• So, instead of picking n random permutations of rows, we pick n
randomly chosen hash functions h1, h2, . . . , hn on the rows
MINHASH: PROPERTIES
• Connection between minhash and resemblance (Jaccard)
similarity of the sets that are minhashed:
• The probability that the minhash function for a random permutation
of rows produces the same value for two sets equals the Jaccard
similarity of those sets
• Minhash(π) of a set is the number of the row (element) with the first non-
zero value in the permuted order π.
• MinHash is an LSH for resemblance (Jaccard) similarity, which is defined
over binary vectors.
MINHASH: ALGORITHM
• Pick k randomly chosen hash functions h1, h2, . . . , hk
• From the column representing set S, construct the minhash signature for S - the
vector [h1(S), h2(S), . . . , hk(S)]:
• Build the characteristic matrix CM for the sets.
• Construct a signature matrix SIG, where SIG(i, c) corresponds to the i-th hash
function and column c. Initially, set SIG(i, c) = ∞ for each i and c.
• On each row r:
• Compute h1(r), h2(r), . . . , hk(r)
• For each column c:
• If CM(r,c) = 1, then set SIG(i, c) = min{SIG(i,c), hi(r)} for each i = 1 . . . k
• Estimate the resemblance (Jaccard) similarities of the underlying sets from the
final signature matrix
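A direct transcription of this algorithm, assuming the characteristic matrix is given as a list of rows (a sketch, not an optimized implementation). With h1(x) = (x + 1) mod 7 and h2(x) = (3x + 1) mod 7 it reproduces the worked example that follows.

INF = float("inf")

def minhash_signature(cm, hash_funcs):
    """cm: characteristic matrix as a list of rows, each row a list of 0/1 values per column."""
    n_cols = len(cm[0])
    sig = [[INF] * n_cols for _ in hash_funcs]        # SIG(i, c) = ∞ initially
    for r, row in enumerate(cm):
        hashes = [h(r) for h in hash_funcs]           # h1(r), ..., hk(r)
        for c, bit in enumerate(row):
            if bit == 1:
                for i, hr in enumerate(hashes):
                    sig[i][c] = min(sig[i][c], hr)    # SIG(i, c) = min{SIG(i, c), hi(r)}
    return sig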
MINHASH: EXAMPLE
• Consider the same set of 4 documents and a universal set that consists of 7 words
• Consider 2 hash functions:
• h1(x) = (x + 1) mod 7
• h2(x) = (3x + 1) mod 7

element      row  S1 S2 S3 S4  h1(row)  h2(row)
a             0    1  1  1  1     1        1
is            1    1  1  0  1     2        4
java          2    0  1  0  0     3        0
language      3    1  1  1  0     4        3
programming   4    1  1  1  0     5        6
python        5    1  0  0  1     6        2
snake         6    0  0  0  1     0        5
MINHASH: EXAMPLE
• Initially the signature matrix has all ∞:

     S1  S2  S3  S4
h1    ∞   ∞   ∞   ∞
h2    ∞   ∞   ∞   ∞

• First, consider row 0: h1(0) and h2(0) are both 1. So, for all sets with a 1 in
row 0 we set the value h1(0) = h2(0) = 1 in the signature matrix:

     S1  S2  S3  S4
h1    1   1   1   1
h2    1   1   1   1

• Next, consider row 1: h1(1) = 2 and h2(1) = 4. In this row only S1, S2
and S4 have 1’s, so we set the appropriate cells to the minimum between the
existing values and 2 for h1 or 4 for h2 (the signature matrix does not change):

     S1  S2  S3  S4
h1    1   1   1   1
h2    1   1   1   1
MINHASH: EXAMPLE
• Continue with the other rows; after row 5 the signature matrix is:

     S1  S2  S3  S4
h1    1   1   1   1
h2    1   0   1   1

• Finally, consider row 6: h1(6) = 0 and h2(6) = 5. In this row only S4
has a 1, so we set the appropriate cells to the minimum between the existing
values and 0 for h1 or 5 for h2. The final signature matrix is:

     S1  S2  S3  S4
h1    1   1   1   0
h2    1   0   1   1

• Document S1 is similar to S3 (identical signature columns), so the estimated similarity is 1 (the true
value is J = 0.6)
• Documents S2 and S4 have no signature rows in common, so the estimated similarity is 0 (the true
value is J ≈ 0.28)
MINHASH: PYTHON
• https://github.com/ekzhu/datasketch
datasketch gives you probabilistic data structures that
can process very large amounts of data
• https://github.com/anthonygarvan/MinHash
MinHash is an effective pure Python implementation of
MinHash
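A hedged usage sketch of the datasketch library (the MinHash(num_perm=...), update(), and jaccard() calls are assumed from its documentation):

from datasketch import MinHash

m1, m2 = MinHash(num_perm=128), MinHash(num_perm=128)
for w in "python is a programming language".split():
    m1.update(w.encode("utf8"))
for w in "java is a programming language".split():
    m2.update(w.encode("utf8"))

print(m1.jaccard(m2))   # estimated resemblance, close to the true value 4/6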
MINHASH: READ MORE
• http://infolab.stanford.edu/~ullman/mmds/book.pdf
• http://www.cs.cmu.edu/~guyb/realworld/slidesS13/minhash.pdf
• https://en.wikipedia.org/wiki/MinHash
• http://www2007.org/papers/paper570.pdf
• https://www.cs.utah.edu/~jeffp/teaching/cs5955/L4-Jaccard+Shingle.pdf
B-BIT MINHASH
B-BIT MINHASH
• A modification of minwise hashing, called b-bit minwise
hashing, was proposed by Ping Li and Arnd Christian König
in 2010.
• The method of b-bit minwise hashing provides a simple
solution by storing only the lowest b bits of each hashed
value, reducing the dimensionality of the (expanded)
hashed data matrix to just 2^b · k.
• b-bit minwise hashing is simple and requires only minimal
modification to the original minwise hashing algorithm.
B-BIT MINHASH
• Intuitively, using fewer bits per sample will increase the estimation
variance, compared to the original MinHash, at the same "sample size" k.
Thus, we have to increase k to maintain the same accuracy.
• Interestingly, the theoretical results demonstrate that, when resemblance is not
too small (e.g. R ≥ 0.5 as a common threshold), we do not have to increase
k much.
• This means that, compared to the earlier approach, the b-bit minwise hashing can be used to improve
estimation accuracy and significantly reduce storage requirement at the same time.
• For example, for b=1 and R=0.5, the estimation variance will increase at
most by a factor of 3.
• So, in order not to lose accuracy, we have to increase the sample size k by a factor of 3.
• If we originally stored each hashed value using 64 bits, the improvement in storage by using
b=1 will be 64/3 ≈ 21.3
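A minimal sketch of the storage idea behind b-bit minwise hashing: keep only the lowest b bits of each of the k minhash values. Note that the plain fraction of matching b-bit values overestimates R, because unrelated values can agree in their lowest bits by chance; the estimator of Li and König corrects for this, and that correction is omitted here for brevity.

def lowest_b_bits(minhash_values, b=1):
    """Keep only the lowest b bits of each minhash value."""
    mask = (1 << b) - 1
    return [v & mask for v in minhash_values]

def matched_fraction(sig_a, sig_b, b=1):
    """Fraction of positions where the b-bit values collide (uncorrected)."""
    a_bits, b_bits = lowest_b_bits(sig_a, b), lowest_b_bits(sig_b, b)
    return sum(x == y for x, y in zip(a_bits, b_bits)) / len(a_bits)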
B-BIT MINHASH: READ MORE
• https://www.microsoft.com/en-us/research/publication/b-bit-minwise-hashing/
• https://www.microsoft.com/en-us/research/publication/theory-and-applications-of-b-bit-minwise-hashing/
SIMHASH
SIMHASH
• The SimHash (a sign normal random projection) algorithm was
proposed by Moses S. Charikar in 2002.
• SimHash originates from the concept of sign random projections (SRP):
• In short, for a given vector x SRP utilizes a random vector w with each component
generated from an i.i.d. normal distribution, i.e. wi ~ N(0,1), and only stores the sign of the
projected data.
• Currently, SimHash is the only known LSH for cosine similarity:

sim(A,B) = (A ⋅ B) / (‖A‖ ⋅ ‖B‖) = Σᵢ aᵢbᵢ / ( √(Σᵢ aᵢ²) ⋅ √(Σᵢ bᵢ²) ), i = 1 … n
SIMHASH
• In fact, it is a dimensionality reduction technique that maps high-
dimensional vectors to ƒ-bit fingerprints, where ƒ is small (e.g. 32 or 64).
• The SimHash algorithm considers a fingerprint as a "hash" of the document's properties
and assumes that similar documents have similar hash values.
• This assumption requires the hash functions to be LSH and, as we already know, this
isn't as trivial as it might seem at first.
• If we consider 2 documents that differ in just a single byte and apply such
popular hash functions as SHA-1 or MD5, we get two completely
different hash values that aren't close. This is why such an approach requires a
special rule for defining hash functions that have the required property.
• An attractive feature of the SimHash hash functions is that the output is
a single bit (the sign)
SIMHASH: FINGERPRINTS
• To generate an ƒ-bit fingerprint, maintain an ƒ-dimensional vector V (all
dimensions are 0 at the beginning).
• A feature is hashed into an ƒ-bit hash value (using a hash function) and can be
seen as a binary vector of ƒ bits.
• These ƒ bits (unique to the feature) increment/decrement the ƒ components of
the vector by the weight of that feature as follows:
• if the i-th bit of the hash value is 1 => the i-th component of V is incremented by the
weight of that feature
• if the i-th bit of the hash value is 0 => the i-th component of V is decremented by the
weight of that feature.
• When all features have been processed, components of V are positive or
negative.
• The signs of the components determine the corresponding bits of the final fingerprint.
SIMHASH: FINGERPRINTS EXAMPLE
• Consider a document after POS tagging: {(“ferret”, NOUN), (“go”, VERB)}
• decide to have weights for features based on their POS tags: {“NOUN”: 1.0, “VERB”: 0.9}
• h(x) - some hash function, and we will calculate 16-bit (ƒ=16) fingerprint F
• Maintain V = (0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0) for the document
• h(“ferret”) = 0101010101010111
• h(“ferret”)[1] == 0 => V[1] = V[1] - 1.0 = -1.0
• h(“ferret”)[2] == 1 => V[2] = V[2] + 1.0 = 1.0
• …
• V = (-1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, -1.0, 1.0, 1.0, 1.0)
• h(“go”) = 0111011100100111
• h(“go”)[1] == 0 => V[1] = V[1] - 0.9 = -1.9
• h(“go”)[2] == 1 => V[2] = V[2] + 0.9 = 1.9
• …
• V = (-1.9, 1.9, -0.1, 1.9, -1.9, 1.9, -0.1, 1.9, -1.9, 0.1, -0.1, 0.1, -1.9, 1.9, 1.9, 1.9)
• sign(V) = (-, +, -, +, -, +, -, +, -, +, -, +, -, +, +, +)
• fingerprint F = 0101010101010111
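A sketch of the fingerprinting procedure above, using md5 only as a convenient stand-in for the per-feature hash function:

import hashlib

def simhash(features, f=64):
    """features: iterable of (token, weight) pairs; returns an f-bit fingerprint."""
    v = [0.0] * f
    for token, weight in features:
        # truncate the feature hash to f bits
        h = int.from_bytes(hashlib.md5(token.encode("utf8")).digest(), "big") & ((1 << f) - 1)
        for i in range(f):
            bit = (h >> (f - 1 - i)) & 1           # i-th bit of the f-bit hash value
            v[i] += weight if bit else -weight     # increment/decrement by the feature weight
    # the sign of each component determines the corresponding fingerprint bit
    return sum(1 << (f - 1 - i) for i in range(f) if v[i] > 0)

fp = simhash([("ferret", 1.0), ("go", 0.9)])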
SIMHASH: FINGERPRINTS
After converting documents into SimHash fingerprints, we face the following
design problem:
• Given an ƒ-bit fingerprint of a document, how to quickly discover other
fingerprints that differ from it in at most k bit-positions?
SIMHASH: DATA STRUCTURE
• The SimHash data structure consists of t tables: T1, T2, … Tt. With each table
Ti are associated 2 quantities:
• an integer pi (number of top bit-positions used in comparisons)
• a permutation πi over the ƒ bit-positions
• Table Ti is constructed by applying permutation πi to each existing
fingerprint F; the resulting set of permuted ƒ-bit fingerprints is sorted.
• To keep such a structure in main memory, it is possible to compress each
table (which can decrease its size by approximately half).
• Example:
DF = {0110, 0010, 0010, 1001, 1110}
πi = {1→2, 2→4, 3→3, 4→1}
• πi(0110) = 0011, πi(0010) = 0010,
πi(1001) = 1100, πi(1110) = 0111
• Ti (sorted permuted fingerprints) = [0010, 0010, 0011, 0111, 1100]
SIMHASH: ALGORITHM
• Actually, the algorithm follows the general approach we saw for LSH
• Given a fingerprint F and an integer k we probe tables from the data structure in
parallel:
• For each table Ti we identify all permuted fingerprints in Ti whose top pi bit-
positions match the top pi bit-positions of πi(F).
• After that, for each such permuted fingerprint, we check if it differs from
πi(F) in at most k bit-positions (in fact, comparing the Hamming distance
between such fingerprints).
• Identification of the first fingerprint in table Ti whose top pi bit-positions match
the top pi bit-positions of πi(F) (Step 1) can be done in O(pi) steps by binary
search.
• If we assume that each fingerprint is a truly random bit sequence, interpolation
search shrinks the run time to O(log pi) steps in expectation.
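A simplified sketch of probing a single table, assuming it stores (permuted fingerprint, original fingerprint) pairs sorted by the permuted value; the binary/interpolation search over the sorted table is replaced by a linear scan for brevity:

def hamming(a, b):
    return bin(a ^ b).count("1")

def probe_table(sorted_table, query_permuted, pi, k, f=64):
    """Return original fingerprints whose permuted form matches the query's top pi bits
    and differs from it in at most k bit-positions."""
    prefix_mask = ((1 << pi) - 1) << (f - pi)                     # top pi bit-positions
    matches = []
    for permuted_fp, original_fp in sorted_table:
        if (permuted_fp ^ query_permuted) & prefix_mask == 0:     # Step 1: prefix match
            if hamming(permuted_fp, query_permuted) <= k:         # Step 2: Hamming check
                matches.append(original_fp)
    return matches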
SIMHASH: PARAMETERS
• For a repository of 8B web-pages, ƒ=64 (64-bit simhash fingerprints)
and k = 3 are reasonable. (Gurmeet Singh Manku et al., 2007)
• For the given ƒ and k, to find a reasonable combination of t (number of
tables) and pi (number of top bit-positions used in comparisons) we have
to note that they are not independent of each other:
• Increasing the number of tables t increases pi and hence reduces the query time
(large values of pi let us avoid checking too many fingerprints in Step 2).
• Decreasing the number of tables t reduces storage requirements, but reduces pi
and hence increases the query time (a small set of permutations lets us avoid a space
blowup).
SIMHASH: PARAMETERS
• Consider ƒ = 64 (64-bit fingerprints) and k = 3, so documents whose fingerprints
differ in at most 3 bit-positions will be considered similar (duplicates).
• Assume we have 8B = 2^34 existing fingerprints and decide to use t = 10 tables:
• For instance, t = 10 = C(5,2), so we can use 5 blocks and choose 2 out of them in
10 ways.
• Split the 64 bits into 5 blocks having 13, 13, 13, 13 and 12 bits respectively.
• For each such choice, permutation π corresponds to making the bits lying in the
chosen blocks the leading bits.
• pi is the total number of bits in the chosen blocks, thus pi = 25 or 26.
• On average, a probe retrieves at most 2^(34−25) = 512 (permuted) fingerprints.
• If we seek all (permuted) fingerprints which match the top pi bits of a given (permuted)
fingerprint, we expect 2^(d−pi) fingerprints as matches.
• The total number of probes for 64-bit fingerprints and k = 3 is C(64,3) = 41664 probes.
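A quick check of the numbers used above (math.comb requires Python 3.8+):

from math import comb

assert comb(5, 2) == 10          # t = 10 tables, choosing 2 blocks out of 5
assert 2 ** (34 - 25) == 512     # expected fingerprints retrieved per probe
assert comb(64, 3) == 41664      # number of probes quoted for f = 64, k = 3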
SIMHASH: PYTHON
• https://github.com/leonsunliang/simhash
A Python implementation of Simhash.
• http://bibliographie-trac.ub.rub.de/browser/simhash.py
Implementation of Charikar's simhashes in Python
SIMHASH: READ MORE
• http://www.cs.princeton.edu/courses/archive/spr04/cos598B/bib/CharikarEstim.pdf
• http://jmlr.org/proceedings/papers/v33/shrivastava14.pdf
• http://www.wwwconference.org/www2007/papers/paper215.pdf
• http://ferd.ca/simhashing-hopefully-made-simple.html
▸ @gakhov
▸ linkedin.com/in/gakhov
▸ www.datacrucis.com
THANK YOU