
Hash substring collisions

In my never-ending quest to figure out how to make things faster and more efficient, I've been playing around with an (arguably) terrible idea. A common scenario: You have to uniquely identify some data within an application. A common and perfectly fine solution: Use a secure hash function. SHA-1 is fairly popular and, while more secure alternatives exist, still seems to be doing just fine even in amazingly large systems.

However, secure hash functions tend to have a downside: The hashes are 160 or 192 bits wide, or even wider for the SHA-2 family. When programming in C, you can easily embed such hashes in structs and other data structures. But passing around a hash is done by reference, and that comes down to passing a 64 bit pointer to a 160 bit hash. More than efficient enough for most purposes, but not quite optimal. Things get much, much worse when you start looking at higher-level languages: To efficiently store a hash in PostgreSQL, for example, you'd use a byte array. Such a field is variable-length, so in addition to being passed around by reference, it also carries a length specification with it. Totally unnecessary, because a hash always has a fixed width. Things get worse still in even higher-level languages, but those tend to be quite slow anyway, so I'll skip that.
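To make the C side concrete, here's a minimal sketch of what I mean: a fixed-width digest embedded directly in a struct, versus handing it around through a pointer. The struct and function names are made up for illustration.

    /* Sketch: a SHA-1 digest stored inline in a struct as a fixed-size
     * array. No pointer, no length field; the width is known at compile time. */
    #include <stdint.h>
    #include <string.h>

    #define SHA1_LEN 20                 /* SHA-1 is always 160 bits = 20 bytes */

    struct manpage {                    /* hypothetical application struct */
        uint8_t hash[SHA1_LEN];         /* digest embedded directly */
        /* ...other fields... */
    };

    /* Passing a digest around on its own, though, still means passing a
     * 64 bit pointer to a 160 bit value: */
    static int hash_equal(const uint8_t *a, const uint8_t *b) {
        return memcmp(a, b, SHA1_LEN) == 0;
    }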

Anyway, back to my point of optimizing everything. In practice, you only work with a relatively small data set. Say, a million items or so at most. Assuming there are no collisions, a 32 bit integer is more than sufficient to identify every piece of data in such a case. Why, then, are we working with 160 or 192 bit wide hashes? Can't we just use the first few bits of the hash as identification, and throw everything else away?

Well, yes, of course we can. It is, however, quite tricky, since every bit you remove increases the likelihood of getting a hash collision somewhere. But what probabilities are we talking about in practice? I've been experimenting with that a bit.
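As a concrete illustration of "the first few bits", here's a small standalone sketch that packs the first 8 bytes of a 20-byte SHA-1 digest into a 64 bit integer. The input digest is just the well-known SHA-1 of the empty string, not anything from a real data set.

    /* Sketch: interpret the first 8 bytes of a SHA-1 digest as a big-endian
     * 64 bit identifier. Any fixed 8 bytes would do; the prefix is simply
     * the obvious choice. */
    #include <stdint.h>
    #include <stdio.h>

    static uint64_t hash_prefix64(const uint8_t digest[20]) {
        uint64_t id = 0;
        for (int i = 0; i < 8; i++)
            id = (id << 8) | digest[i];
        return id;
    }

    int main(void) {
        /* SHA-1 of the empty string, as an example input. */
        const uint8_t digest[20] = {
            0xda,0x39,0xa3,0xee,0x5e,0x6b,0x4b,0x0d,0x32,0x55,
            0xbf,0xef,0x95,0x60,0x18,0x90,0xaf,0xd8,0x07,0x09
        };
        printf("64 bit id: %016llx\n", (unsigned long long)hash_prefix64(digest));
        return 0;
    }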

Take Manned.org. It has a little over one million unique man pages, identified internally with a SHA-1 hash. I've compiled some statistics on the number of hash collisions within the first few bytes of the hashes (query):

     #bytes | collisions
    --------+------------
          0 |    1095303
          1 |    1095303
          2 |    1095303
          3 |      69052
          4 |        304
          5 |          0
          6 |          0
          7 |          0
          8 |          0

While a 32 bit integer wouldn't have worked for unique identification, I could have safely used a 64 bit integer in this case. But what are the actual probabilities of getting a collision if I use a 64 bit integer? That can be answered by looking at the Birthday problem. I'll leave the calculations as an exercise to the reader, but suffice it to say that even in that case the probability of a collision is quite small, yet significantly higher than with a full 160 bit hash. Is it worth it? I suppose that depends on your application. For Manned.org, probably not.
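For the impatient, here's a rough sketch of that calculation, using the usual birthday approximation P(collision) ~ 1 - exp(-n(n-1)/(2d)), with n taken from the table above and d = 2^64 possible 64 bit identifiers. Nothing in it is specific to Manned.org besides the item count.

    /* Sketch: birthday-problem estimate for n items mapped onto d possible
     * identifiers, P(collision) ~ 1 - exp(-n*(n-1)/(2*d)). */
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double n = 1095303.0;        /* item count from the table above */
        double d = ldexp(1.0, 64);   /* 2^64 possible 64 bit identifiers */
        /* -expm1(-x) computes 1 - e^(-x) without losing precision for small x */
        double p = -expm1(-n * (n - 1.0) / (2.0 * d));
        printf("P(collision) ~ %.2e\n", p);  /* roughly 3e-8 */
        return 0;
    }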

(there's another problem that I've happily glossed over: If the data you're hashing originates from an untrusted source, an attacker can, with modern hardware, intentionally generate a hash substring collision. Using a secret salt may help a bit, but this is still a serious issue you should consider before throwing away precious bits from your hashes.)
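For completeness, a hedged sketch of the salting idea, using OpenSSL's SHA-1 routines. The salt value and function name are made up, and this only stops an attacker who doesn't know the salt from precomputing prefix collisions offline; as said above, it's not a complete fix.

    /* Sketch: derive a truncated identifier from SHA-1 over a secret salt
     * followed by the untrusted data. Build with -lcrypto. */
    #include <stdint.h>
    #include <stddef.h>
    #include <openssl/sha.h>

    static uint64_t salted_id(const uint8_t *data, size_t len) {
        static const uint8_t salt[] = "not-a-real-salt";  /* keep this secret */
        uint8_t digest[SHA_DIGEST_LENGTH];
        SHA_CTX ctx;
        SHA1_Init(&ctx);
        SHA1_Update(&ctx, salt, sizeof(salt));
        SHA1_Update(&ctx, data, len);
        SHA1_Final(digest, &ctx);

        uint64_t id = 0;                 /* keep only the first 8 bytes */
        for (int i = 0; i < 8; i++)
            id = (id << 8) | digest[i];
        return id;
    }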

