# Probability of getting a duplicate value when calling GetHashCode() on strings

I want to know the probability of getting duplicate values when calling the GetHashCode() method on string instances. For instance, according to this blog post, blair and brainlessness have the same hashcode (1758039503) on an x86 machine.

## Answers

**Large.**

(Sorry Jon!)

The probability of getting a hash collision among short strings is *extremely large*. Given a set of only ten thousand distinct short strings drawn from common words, the probability of there being at least one collision in the set is approximately 1%. If you have eighty thousand strings, the probability of there being at least one collision is over 50%.

For a graph showing the relationship between set size and probability of collision, see my article on the subject:

http://blogs.msdn.com/b/ericlippert/archive/2010/03/22/socks-birthdays-and-hash-collisions.aspx
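The figures above follow from the standard birthday-problem approximation. A quick sketch (in Python rather than C#; the function name is mine, and a uniform 32-bit hash is assumed):

```python
import math

def collision_probability(n, bits=32):
    """Approximate probability of at least one collision among n
    uniformly hashed items with 2**bits possible hash codes
    (the birthday-problem approximation 1 - exp(-n(n-1)/2N))."""
    slots = 2 ** bits
    return 1 - math.exp(-n * (n - 1) / (2 * slots))

print(collision_probability(10_000))   # roughly 0.01
print(collision_probability(80_000))   # just over 0.5
```

Those two values match the 1% and over-50% figures quoted above.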

**Small** - if you're talking about the chance of any two arbitrary unequal strings having a collision. (It will depend on just how "arbitrary" the strings are, of course - different contexts will be using different strings.)

**Large** - if you're talking about the chance of there being *at least one* collision in a large pool of arbitrary strings. The small individual probabilities are no match for the birthday problem.

That's about all you need to know. There are definitely cases where there will be collisions, and there *have* to be given that there are only 2^32 possible hash codes, and more than that many strings - so the pigeonhole principle proves that at least one hash code must have more than one string which generates it. However, you should trust that the hash has been designed to be pretty reasonable.

You **can** rely on it as a pretty good way of narrowing down the possible matches for a particular string. It would be an unusual set of naturally-occurring strings which generated a *lot* of collisions - and even when there are *some* collisions, obviously if you can narrow a candidate search set down from 50K to fewer than 10 strings, that's a pretty big win. But you **must not** rely on it as a unique value for any string.
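That narrowing-down pattern can be sketched like this (Python standing in for C#; `bucket_by_hash`, `find_matches`, and the sample words are illustrative, not from the answer):

```python
from collections import defaultdict

def bucket_by_hash(strings):
    """Group strings by hash code; each bucket holds the candidates
    that a lookup would still have to compare for real equality."""
    buckets = defaultdict(list)
    for s in strings:
        buckets[hash(s)].append(s)
    return buckets

def find_matches(target, buckets):
    # The hash narrows the search to one (usually tiny) bucket;
    # the equality check then confirms a true match.
    return [s for s in buckets.get(hash(target), []) if s == target]

words = ["blair", "brainlessness", "apple", "banana"]
buckets = bucket_by_hash(words)
print(find_matches("apple", buckets))  # ['apple']
```

This is essentially what a hash table does internally: the hash code selects a bucket, and equality comparison within the bucket resolves any collisions.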

Note that the algorithm used in .NET 4 differs between x86 and x64, so that example probably *isn't* valid on both platforms.

I think all that's possible to say is "small, but finite and definitely not zero" -- in other words you *must not* rely on GetHashCode() ever returning unique values for two different instances.

To my mind, hashcodes are best used when you want to tell quickly if two instances are different -- not if they're the same.

In other words, if two objects have different hash codes, you *know* they are different and need not do a (possibly expensive) deeper comparison.

However, if the hash codes for two objects are the same, you *must* go on to compare the objects themselves to see if they're actually the same.
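The two-step pattern described above (cheap hash check first, deep comparison only on a hash match) might look like this in outline; `expensive_compare` is a stand-in for whatever costly comparison applies:

```python
def expensive_compare(a, b):
    # Stand-in for a costly deep comparison (illustrative only;
    # for plain strings this is just equality).
    return a == b

def equal_via_hash(a, b):
    """Use the hash as a fast negative test: different hashes prove
    inequality; equal hashes still require a full comparison."""
    if hash(a) != hash(b):
        return False                 # definitely different
    return expensive_compare(a, b)   # hashes match: verify for real

print(equal_via_hash("blair", "blair"))      # True
print(equal_via_hash("blair", "brainless"))  # False
```

The key point is the asymmetry: a hash mismatch is a proof of inequality, but a hash match proves nothing on its own.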

I ran a test on a database of 466k English words and got 48 collisions with string.GetHashCode(). MurmurHash gives slightly better results. More results are here: https://github.com/jitbit/MurmurHash.net

In case your question is about the probability of a collision within a group of strings:

For n available slots and m occupying items:

- Probability of no collision on the 1st insertion: 1
- Probability of no collision on the 2nd insertion: (n − 1) / n
- Probability of no collision on the 3rd insertion: (n − 2) / n
- ...
- Probability of no collision on the mth insertion: (n − (m − 1)) / n

The probability of no collision after m insertions is the product of the above values: (n - 1)!/((n - m)! * n^(m - 1)).

which can equivalently be written as n! / ((n − m)! · n^m) — the number of ordered m-permutations of n slots, divided by the n^m possible assignments.
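The exact product above can be evaluated directly and checked against the birthday approximation; a sketch with n = 2^32 slots and m = 10,000 items (illustrative values, assuming a uniform hash):

```python
import math

def prob_no_collision(n, m):
    """Exact probability of no collision after m insertions into n slots:
    the product (n/n) * ((n-1)/n) * ... * ((n-(m-1))/n)."""
    p = 1.0
    for k in range(m):
        p *= (n - k) / n
    return p

n = 2 ** 32
m = 10_000
exact = 1 - prob_no_collision(n, m)
approx = 1 - math.exp(-m * (m - 1) / (2 * n))
print(exact, approx)  # both about 0.0116
```

For m much smaller than n the two agree closely, which is why the approximation is the one usually quoted.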

And everybody is right: you can't assume zero collisions, so while saying the probability is "low" may be true, it doesn't allow you to assume there will be none. If you're using a hashtable, the usual rule of thumb is that significant collisions start causing trouble once the table is about two-thirds full.

The probability of a collision between two randomly chosen strings is 1 / 2^(bits in hash code) for a perfectly uniform hash — something real hash functions only approximate.