Tabulation hashing
In computer science, tabulation hashing is a method for constructing universal families of hash functions by combining table lookup with exclusive or operations. It was first studied in the form of Zobrist hashing for computer games; later work by Carter and Wegman extended this method to arbitrary fixed-length keys. Generalizations of tabulation hashing have also been developed that can handle variable-length keys such as text strings.
Despite its simplicity, tabulation hashing has strong theoretical properties that distinguish it from some other hash functions. In particular, it is 3-independent: every 3-tuple of keys is equally likely to be mapped to any 3-tuple of hash values. However, it is not 4-independent. More sophisticated but slower variants of tabulation hashing extend the method to higher degrees of independence.
Because of its high degree of independence, tabulation hashing is usable with hashing methods that require a high-quality hash function, including linear probing, cuckoo hashing, and the MinHash technique for estimating the size of set intersections.
Method
Let p denote the number of bits in a key to be hashed, and q denote the number of bits desired in the output hash values. Choose another number r, less than or equal to p; this choice is arbitrary, and controls the tradeoff between the time and memory usage of the hashing method: smaller values of r use less memory but cause the hash function to be slower. Compute t by rounding p/r up to the next integer; this gives the number of r-bit blocks needed to represent a key. For instance, if r = 8, then an r-bit number is a byte, and t is the number of bytes per key. The key idea of tabulation hashing is to view a key as a vector of t r-bit numbers, use a lookup table filled with random values to compute a hash value for each of the r-bit numbers representing a given key, and combine these values with the bitwise exclusive or operation.[1] The choice of r should be made in such a way that this table is not too large; e.g., so that it fits into the computer's cache memory.[2]
The initialization phase of the algorithm creates a two-dimensional array T of dimensions t by 2^r, and fills the array with random q-bit numbers. Once the array T is initialized, it can be used to compute the hash value h(x) of any given key x. To do so, partition x into r-bit values, where x0 consists of the low-order r bits of x, x1 consists of the next r bits, and so on. For example, with the choice r = 8, xi is just the ith byte of x. Then, use these values as indices into T and combine them with the exclusive or operation:[1]
- h(x) = T[0][x0] ⊕ T[1][x1] ⊕ T[2][x2] ⊕ ...
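For illustration, the following is a minimal sketch of this method in Python (not taken from the cited sources), assuming 32-bit keys (p = 32), byte-sized blocks (r = 8, so t = 4), and 32-bit hash values (q = 32); the names init_table and tab_hash are illustrative:

```python
import random

P, R, Q = 32, 8, 32           # key bits p, bits per block r, hash bits q
T_BLOCKS = (P + R - 1) // R   # t = ceiling(p / r), the number of blocks per key

def init_table(rng):
    """Fill a t-by-2^r table with independent random q-bit values."""
    return [[rng.getrandbits(Q) for _ in range(1 << R)]
            for _ in range(T_BLOCKS)]

def tab_hash(table, x):
    """Simple tabulation hashing: XOR one table entry per r-bit block of x."""
    h = 0
    for i in range(T_BLOCKS):
        block = (x >> (i * R)) & ((1 << R) - 1)   # the ith r-bit block of x
        h ^= table[i][block]
    return h

table = init_table(random.Random(1))
print(hex(tab_hash(table, 0xDEADBEEF)))
```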
History
The first instance of tabulation hashing is Zobrist hashing, a method for hashing positions in abstract board games such as chess, named after Albert Lindsey Zobrist, who published it in 1970.[3] In this method, a random bitstring is generated for each game feature, such as a combination of a chess piece and a square of the chessboard. Then, to hash any game position, the bitstrings for the features present in that position are combined by a bitwise exclusive or. The resulting hash value can then be used as an index into a transposition table. Because each move typically changes only a small number of game features, the Zobrist value of the position after a move can be updated quickly from the value of the position before the move, without needing to loop over all of the features of the position.[4]
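The incremental update can be illustrated by the following Python sketch (the piece and square encodings are hypothetical, not Zobrist's original code), which hashes a whole position and then updates the hash for a non-capturing move:

```python
import random

PIECES, SQUARES = 12, 64           # 6 piece types in 2 colors, 8x8 board
rng = random.Random(0)
# One random 64-bit string per (piece, square) feature.
Z = [[rng.getrandbits(64) for _ in range(SQUARES)] for _ in range(PIECES)]

def zobrist(position):
    """position maps square -> piece; the hash XORs one bitstring per feature."""
    h = 0
    for square, piece in position.items():
        h ^= Z[piece][square]
    return h

def move(h, piece, src, dst):
    """Update the hash for a non-capturing move without rescanning the board."""
    return h ^ Z[piece][src] ^ Z[piece][dst]

before = {0: 3, 12: 7}             # piece 3 on square 0, piece 7 on square 12
after = {8: 3, 12: 7}              # piece 3 has moved from square 0 to square 8
assert move(zobrist(before), piece=3, src=0, dst=8) == zobrist(after)
```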
Tabulation hashing in greater generality, for arbitrary binary values, was later rediscovered by Carter & Wegman (1979) and studied in more detail by Pătraşcu & Thorup (2012).
Universality
Carter & Wegman (1979) define a randomized scheme for generating hash functions to be universal if, for any two keys, the probability that they collide (that is, that they are mapped to the same value as each other) is 1/m, where m is the number of possible hash values. They defined a stronger property in the subsequent paper Wegman & Carter (1981): a randomized scheme for generating hash functions is k-independent if, for every k-tuple of distinct keys and each possible k-tuple of values, the probability that those keys are mapped to those values is 1/m^k. 2-independent hashing schemes are automatically universal, and any universal hashing scheme can be converted into a 2-independent scheme by storing a random number x as part of the initialization phase of the algorithm and adding x to each hash value. Thus, universality is essentially the same as 2-independence. However, k-independence for larger values of k is a stronger property, held by fewer hashing algorithms.
As Pătraşcu & Thorup (2012) observe, tabulation hashing is 3-independent but not 4-independent. For any single key x, the table value T[0][x0] is equally likely to take on any hash value, and taking the exclusive or of T[0][x0] with the remaining table values does not change this property. For any two keys x and y, x is equally likely to be mapped to any hash value as before, and there is at least one position i where xi ≠ yi; the table value T[i][yi] is used in the calculation of h(y) but not in the calculation of h(x), so even after the value of h(x) has been determined, h(y) is equally likely to be any valid hash value. Similarly, for any three keys x, y, and z, at least one of the three keys, say z, has a position i where its block zi differs from the corresponding blocks of the other two keys, so that even after the values of h(x) and h(y) are determined, h(z) is equally likely to be any valid hash value.[5]
However, this reasoning breaks down for four keys because there are sets of keys w, x, y, and z in which every block value of each key is shared with at least one of the other keys. For instance, if the keys have two bytes each, and w, x, y, and z are the four keys whose byte values are all either zero or one, then each byte value in each position is shared by exactly two of the four keys. For these four keys, the hash values computed by tabulation hashing always satisfy the equation h(w) ⊕ h(x) ⊕ h(y) ⊕ h(z) = 0, whereas for a 4-independent hashing scheme the same equation would be satisfied only with probability 1/m. Therefore, tabulation hashing is not 4-independent.[5]
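This cancellation can be checked directly. The following self-contained Python sketch (with illustrative names, using two-byte keys as in the example above) verifies that the four hash values XOR to zero no matter how the table is filled:

```python
import random

rng = random.Random(2)
# Two-byte keys: one 256-entry column of random 32-bit values per byte position.
T = [[rng.getrandbits(32) for _ in range(256)] for _ in range(2)]

def h(key):
    return T[0][key & 0xFF] ^ T[1][(key >> 8) & 0xFF]

# The four keys whose bytes are all either zero or one.
hw, hx, hy, hz = h(0x0000), h(0x0001), h(0x0100), h(0x0101)
# Each table entry is used by exactly two of the four keys, so every entry
# cancels in the combined XOR, whatever random values the table holds.
assert hw ^ hx ^ hy ^ hz == 0
```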
Application
Because tabulation hashing is a universal hashing scheme, it can be used in any hashing-based algorithm in which universality is sufficient. For instance, in hash chaining, the expected time per operation is proportional to the sum of collision probabilities, which is the same for any universal scheme as it would be for truly random hash functions, and is constant whenever the load factor of the hash table is constant. Therefore, tabulation hashing can be used to compute hash functions for hash chaining with a theoretical guarantee of constant expected time per operation.[6]
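For example (an illustrative sketch, not a specific published implementation), a chained hash table can use simple tabulation hashing reduced modulo the number of buckets:

```python
import random

class ChainedHashTable:
    """Separate chaining with a simple tabulation hash over 32-bit integer keys."""

    def __init__(self, num_buckets=1024, seed=0):
        rng = random.Random(seed)
        # One 256-entry column of random 32-bit values per byte of the key.
        self.T = [[rng.getrandbits(32) for _ in range(256)] for _ in range(4)]
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key):
        h = 0
        for i, column in enumerate(self.T):
            h ^= column[(key >> (8 * i)) & 0xFF]
        return h % len(self.buckets)

    def insert(self, key, value):
        self.buckets[self._bucket(key)].append((key, value))

    def lookup(self, key):
        for k, v in self.buckets[self._bucket(key)]:
            if k == key:
                return v
        return None

table = ChainedHashTable()
table.insert(42, "answer")
print(table.lookup(42))   # prints "answer"
```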
However, universal hashing is not strong enough to guarantee the performance of some other hashing algorithms. For instance, for linear probing, 5-independent hash functions are strong enough to guarantee constant time operation, but there are 4-independent hash functions that fail.[7] Nevertheless, despite only being 3-independent, tabulation hashing provides the same constant-time guarantee for linear probing.[8]
Cuckoo hashing, another technique for implementing hash tables, guarantees constant time per lookup (regardless of the hash function). Insertions into a cuckoo hash table may fail, causing the entire table to be rebuilt, but such failures are sufficiently unlikely that the expected time per insertion (using either a truly random hash function or a hash function with logarithmic independence) is constant. With tabulation hashing, on the other hand, the best bound known on the failure probability is higher, high enough that insertions cannot be guaranteed to take constant expected time. Nevertheless, tabulation hashing is adequate to ensure the linear-expected-time construction of a cuckoo hash table for a static set of keys that does not change as the table is used.[8]
Extensions
Although tabulation hashing as described above ("simple tabulation hashing") is only 3-independent, variations of this method can be used to obtain hash functions with much higher degrees of independence. Siegel (2004) uses the same idea of using exclusive or operations to combine random values from a table, with a more complicated algorithm based on expander graphs for transforming the key bits into table indices, to define hashing schemes that are k-independent for any constant or even logarithmic value of k. However, the number of table lookups needed to compute each hash value using Siegel's variation of tabulation hashing, while constant, is still too large to be practical, and the use of expanders in Siegel's technique also makes it not fully constructive. Thorup (2013) provides a scheme based on tabulation hashing that reaches high degrees of independence more quickly, in a more constructive way. He observes that using one round of simple tabulation hashing to expand the input keys to six times their original length, and then a second round of simple tabulation hashing on the expanded keys, results in a hashing scheme whose independence number is exponential in the parameter r, the number of bits per block in the partition of the keys into blocks.
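A minimal sketch of this two-round idea follows (the function names are illustrative, and Thorup's construction fixes additional details not reproduced here), assuming 32-bit keys, byte-sized blocks, and 32-bit hash values:

```python
import random

R = 8                                        # bits per block

def make_table(num_blocks, out_bits, rng):
    """One 2^r-entry column of random out_bits-bit values per block position."""
    return [[rng.getrandbits(out_bits) for _ in range(1 << R)]
            for _ in range(num_blocks)]

def simple_tab(table, key):
    """One round of simple tabulation hashing."""
    h = 0
    for i, column in enumerate(table):
        h ^= column[(key >> (R * i)) & ((1 << R) - 1)]
    return h

rng = random.Random(0)
# First round: expand a 4-block (32-bit) key into a 24-block (192-bit) key.
T1 = make_table(num_blocks=4, out_bits=192, rng=rng)
# Second round: hash the expanded 24-block key down to 32 bits.
T2 = make_table(num_blocks=24, out_bits=32, rng=rng)

def double_tab_hash(key):
    return simple_tab(T2, simple_tab(T1, key))

print(hex(double_tab_hash(0xCAFEBABE)))
```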
Simple tabulation is limited to keys of a fixed length, because a different table of random values needs to be initialized for each position of a block in the keys. Lemire (2012) studies variations of tabulation hashing suitable for variable-length keys such as character strings. The general type of hashing scheme studied by Lemire uses a single table T indexed by the value of a block, regardless of its position within the key. However, the values from this table may be combined by a more complicated function than bitwise exclusive or. Lemire shows that no scheme of this type can be 3-independent. Nevertheless, he shows that it is still possible to achieve 2-independence. In particular, a tabulation scheme that interprets the values T[xi] (where xi is, as before, the ith block of the input) as the coefficients of a polynomial over a finite field and then takes the remainder of the resulting polynomial modulo another polynomial, gives a 2-independent hash function.
Notes
1. Morin (2014); Mitzenmacher & Upfal (2014).
2. Mitzenmacher & Upfal (2014).
3. Thorup (2013).
4. Zobrist (1970).
5. Pătraşcu & Thorup (2012); Mitzenmacher & Upfal (2014).
6. Carter & Wegman (1979).
7. For the sufficiency of 5-independent hashing for linear probing, see Pagh, Pagh & Ružić (2009). For examples of weaker hashing schemes that fail, see Pătraşcu & Thorup (2010).
8. Pătraşcu & Thorup (2012).
References
- Secondary sources
- Morin, Pat (February 22, 2014), "Section 5.2.3: Tabulation hashing", Open Data Structures (in pseudocode) (0.1Gβ ed.), pp. 115–116, retrieved 2016-01-08.
- Mitzenmacher, Michael; Upfal, Eli (2014), "Some practical randomized algorithms and data structures", in Tucker, Allen; Gonzalez, Teofilo; Diaz-Herrera, Jorge, Computing Handbook: Computer Science and Software Engineering (3rd ed.), CRC Press, pp. 11-1 – 11-23, ISBN 9781439898529. See in particular Section 11.1.1: Tabulation hashing, pp. 11-3 – 11-4.
- Primary sources
- Carter, J. Lawrence; Wegman, Mark N. (1979), "Universal classes of hash functions", Journal of Computer and System Sciences, 18 (2): 143–154, doi:10.1016/0022-0000(79)90044-8, MR 532173.
- Lemire, Daniel (2012), "The universality of iterated hashing over variable-length strings", Discrete Applied Mathematics, 160: 604–617, arXiv:1008.1715, doi:10.1016/j.dam.2011.11.009, MR 2876344.
- Pagh, Anna; Pagh, Rasmus; Ružić, Milan (2009), "Linear probing with constant independence", SIAM Journal on Computing, 39 (3): 1107–1120, doi:10.1137/070702278, MR 2538852.
- Pătraşcu, Mihai; Thorup, Mikkel (2010), "On the k-independence required by linear probing and minwise independence" (PDF), Proceedings of the 37th International Colloquium on Automata, Languages and Programming (ICALP 2010), Bordeaux, France, July 6-10, 2010, Part I, Lecture Notes in Computer Science, 6198, Springer, pp. 715–726, doi:10.1007/978-3-642-14165-2_60, MR 2734626.
- Pătraşcu, Mihai; Thorup, Mikkel (2012), "The power of simple tabulation hashing", Journal of the ACM, 59 (3): Art. 14, arXiv:1011.5200, doi:10.1145/2220357.2220361, MR 2946218.
- Siegel, Alan (2004), "On universal classes of extremely random constant-time hash functions", SIAM Journal on Computing, 33 (3): 505–543, doi:10.1137/S0097539701386216, MR 2066640.
- Thorup, M. (2013), "Simple tabulation, fast expanders, double tabulation, and high independence", Proceedings of the 54th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2013), pp. 90–99, doi:10.1109/FOCS.2013.18, MR 3246210.
- Wegman, Mark N.; Carter, J. Lawrence (1981), "New hash functions and their use in authentication and set equality", Journal of Computer and System Sciences, 22 (3): 265–279, doi:10.1016/0022-0000(81)90033-7, MR 633535.
- Zobrist, Albert L. (April 1970), A New Hashing Method with Application for Game Playing (PDF), Tech. Rep. 88, Madison, Wisconsin: Computer Sciences Department, University of Wisconsin.