Why probabilistic matching is not a black box

by Matthew Harris on 27th July 2017

What is probabilistic matching and who cares?

If you are a developer then you probably (!) already know that it is possible to trace back everything that happens in a probabilistic algorithm and why it happens. If you are a business lead interested only in business outcomes then perhaps you don't care. However, if you are an architect or technical lead responsible for project delivery, you may be unclear on what probabilistic matching entails, and a little distrustful of its outcomes as a result. If that is the case then this blog is for you, and for anyone else who wants to understand the principles of how this type of matching works.

Probabilistic matching operates on the basis that if a pair of records from a larger set share enough information, then they can be deemed to be the same.

Probabilistic matching in the context of person data

For the purposes of this article, we’ll be using person data as the example of choice. Within a large group of people, two of them may be deemed to be the same if they have the same name, date of birth and address, for instance. However, there may be many other attributes for the person, such as phone number, nickname, email, passport, driving license and biometric information held on the record.

Why not use deterministic matching?

One could go through exhaustive combinations of these attributes and create deterministic rules; however, those rules would not be able to cope with things such as edit-distance differences, phonetic matches or nickname lookups. (For more information on the difference between probabilistic and deterministic matching see this IBM article.) The deterministic rules could, in theory, be expanded to include provisions for these variations, but the number of combinations would balloon rapidly and become intractable.

Probabilistic matching & information theory

Another approach to ascertaining whether two records match is to look at the amount of shared information the records have, and to decide whether that shared (or unshared) information is strong enough evidence of a match. Information Theory, a branch of applied mathematics and computer science, provides key concepts that allow us to quantify how much information each piece of data within a dataset carries. By using Self Information we can work out how much information an outcome with a known probability has.

The method to the probabilistic madness

One way to explain Self Information is to use word lengths and letter frequencies in the English language. There are approximately 170,000 words in the English language. If you randomly chose two words, then the chance that they are the same is:

1/1 × 1/170,000 = 0.0005882%

This is really, really unlikely. (Note that the probability for the first word is 1, since it doesn't matter what that word is; you need only match the second word to it.)

If, however, we know that both words have five letters, then we can use the table below to find that our chances of the words being the same are:

1/1 × 1/(170,000 × 5.2%) = 0.0113122%

A significant improvement on the purely random chance (although still very small).

Table 1: word-length frequencies in English (five-letter words make up roughly 5.2% of words)

What if, instead of knowing the length, we just knew that both words contained the letter ‘d’?

We can use the frequency table below (which gives the frequency of each letter across all words) as a stand-in for the percentage of words that contain each letter. If we chose two words randomly but were told that both contained the letter 'd', the chance of them being the same word would be:

1/1 × 1/(170,000 × 4.25%) = 0.0138408%

This is better than our five-letter words because the pool of potential candidates is smaller. However, it is still a very small chance.

Table 2: letter frequencies in English (the letter 'd' accounts for roughly 4.25%)

What if we know that both words are five letters long and both contain the letter 'd'? This would be:

1/1 × 1/(170,000 × 5.2% × 4.25%) = 0.2661698%

This is much more likely than either of the two options individually. It is still small though: roughly the same chance as two people independently picking the same day of the year.
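
To make the arithmetic concrete, here is a minimal Python sketch of the calculations above, using the figures quoted in this article (170,000 words, 5.2% of words being five letters long, 4.25% containing a 'd'); the helper name chance_same is purely illustrative.

TOTAL_WORDS = 170_000   # approximate number of English words, as quoted above
P_LENGTH_5 = 0.052      # share of words that are five letters long (Table 1)
P_CONTAINS_D = 0.0425   # share of words containing the letter 'd' (Table 2)

def chance_same(*feature_probabilities):
    # Chance that two randomly chosen words are identical, given that both
    # are known to share the listed features (treated as independent).
    candidates = TOTAL_WORDS
    for p in feature_probabilities:
        candidates *= p
    return 1 / candidates

print(f"{chance_same():.7%}")                           # 0.0005882%
print(f"{chance_same(P_LENGTH_5):.7%}")                 # 0.0113122%
print(f"{chance_same(P_CONTAINS_D):.7%}")               # 0.0138408%
print(f"{chance_same(P_LENGTH_5, P_CONTAINS_D):.7%}")   # 0.2661698%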

Some information components are more valuable than others

The extra information we were getting, the length of the word or the fact that it contained a certain letter, helped us to evaluate the likelihood that any two words were a match. You could extend this to include more clues, such as "the second letter is an 'f'", which would further narrow down the search space. With enough of these pieces of information, you could say with certainty that two words are the same (or at least are spelt the same; let's forget about homonyms).

The interesting part here is that the total value of the information, in terms of being able to match, depends on the value of each individual piece of it. For instance, knowing the word begins with 'a' gives you a lot less information than knowing it begins with 'z', since there are many fewer words beginning with 'z' than with 'a'.

Self Information, I, for an outcome w that has probability P(w), is formalised as:
I(w) = -log(P(w))

A graph (with exponentially decreasing probabilities on the horizontal axis) is given below. The amount of self information tends to 0 when an event is certain – since there is no information if the event takes place because you are certain it will anyway – and increases linearly as the probability gets exponentially less likely.

Self Information Graph
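
As a quick sketch of that formula (using base-10 logarithms, to match the worked numbers later in this article), self-information can be computed directly from a probability, and it grows as the probability shrinks:

import math

def self_information(probability, base=10):
    # I(w) = -log(P(w)), written as log(1/P) so a certain event gives exactly 0.
    return math.log(1.0 / probability, base)

for p in (1.0, 0.5, 0.052, 0.0425, 0.0001):
    print(f"P = {p}: I = {self_information(p):.4f}")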

How logarithms can help

A logarithm function can be used to assign a value independently to the information from each of two events that take place. For instance, the probability of choosing a word that is five letters long and contains the letter 'd' is:

P(length 5 and 'd') = P(length 5) × P('d')
= 5.2% × 4.25%
= 0.221%

The information within this event can easily be calculated, since:
log(x × y) = log(x) + log(y)

So to assign a value to the information, you only need to know the probability of each event occurring individually, not the probability of both occurring at once. In this case, the probabilistic value of the information is:
-log(5.2% × 4.25%) = -(log(0.052) + log(0.0425))
= 2.655607726
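
A two-line check of that identity with the numbers above (base-10 logs again): the information in the joint event equals the sum of the information in the individual events, up to floating-point rounding.

import math

joint = -math.log10(0.052 * 0.0425)                  # information in the combined event
summed = -(math.log10(0.052) + math.log10(0.0425))   # sum of the individual contributions
print(joint, summed)                                 # both approximately 2.655607726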

Scoring

Now that we’ve seen how probability can be used to assign value to the information associated with an event, we can introduce the estimation of match weights for particular attributes.

There are 2 concepts involved (and more details on those can be found here):

Match probability (M) – probability that a field agrees given that the pair of records is a true match
Unmatch probability (U) – probability that a field agrees given that the pair of records is NOT a true match. (Often simplified as the chance that two records will randomly match).

The match probability (M-probability)

This is field specific, and will apply to all values within a field. For instance, M(surname) = 95% indicates that when two records truly match, 95% of the time they have the same surname. (The reason it may not be 100% is because of issues such as poor data quality, missing data or people changing their name when they marry.) However, while the M-probability for surname may be high, the one for phone number may be lower, since people may use different phone numbers on different records depending on the context, e.g. a work record versus a personal record. M-probabilities are found through Research and Development per algorithm, and are normally internal to the algorithm implementation.

The unmatch probability (U-probability)

This will be value specific: in its simplest form, it is the chance that 2 records will agree on a value at random. This is where the example from above becomes important, as it is exactly those probabilities that would be used to determine the U-probability. For instance, if one part of the algorithm scored matching words on length, then the U-probability for the value 5 would be 5.2%. When used in an actual algorithm, the U-probability is generated per data set, which means a frequency analysis is performed on the entire data set to find the probability of each value that the algorithm will use.

In practice, a full frequency list is unwieldy, so an 80-20 rule is applied: roughly 20% of the distinct values make up 80% of the total records, so only the frequencies of those 20% of distinct values are kept. The remaining 80% of distinct values are grouped together and given a single probability.
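
As a rough sketch of that idea (the field, the cut-off and the treatment of the tail are illustrative assumptions, not any particular engine's implementation), per-value U-probabilities can be built from a frequency count, with the long tail of rare values pooled under a single default:

from collections import Counter

def u_probabilities(values, head_fraction=0.2):
    # Keep per-value frequencies for the most common 20% of distinct values;
    # pool the rest under one probability (here their average frequency;
    # a real engine may make a different choice).
    counts = Counter(values)
    total = len(values)
    ranked = counts.most_common()
    head_size = max(1, int(len(ranked) * head_fraction))
    head = {value: count / total for value, count in ranked[:head_size]}
    tail = ranked[head_size:]
    if tail:
        pooled = sum(count for _, count in tail) / total / len(tail)
    else:
        pooled = 1.0 / total
    return head, pooled

surnames = ["SMITH", "SMITH", "JONES", "SMITH", "PATEL", "JONES",
            "HARRIS", "NGUYEN", "OKAFOR", "SMITH"]
head, pooled = u_probabilities(surnames)
print(head)    # {'SMITH': 0.4} - common values keep their own U-probability
print(pooled)  # 0.12 - single probability shared by the remaining values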

Putting M and U together

With the M- and U-probabilities in hand, weights can be generated for each scoring criterion in the algorithm (a short code sketch follows the two formulas below). The weights are:

For agreeing weights: log(M/U)
For disagreeing weights: log((1-M)/(1-U))
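
A minimal sketch of those two formulas in Python (base-10 logarithms, which is what the worked example below uses; the values passed in at the end are illustrative, with M(surname) = 0.95 taken from above and U = 0.01 assumed):

import math

def agreement_weight(m, u):
    # Weight contributed when a field agrees: log(M / U).
    return math.log10(m / u)

def disagreement_weight(m, u):
    # Weight contributed when a field disagrees: log((1 - M) / (1 - U)).
    return math.log10((1 - m) / (1 - u))

print(agreement_weight(0.95, 0.01))     # about +1.98: agreement pushes towards a match
print(disagreement_weight(0.95, 0.01))  # about -1.30: disagreement pushes away from one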

We can now use the log formula from above (log(x) + log(y)) to add independent scores together to get a single value for how similar, or different, 2 records are. Going back to our word example, it's clear that for two words to match they must have the same length and be spelled the same, which would give the M-probability for both as 100% (1.00).

Instead, let's assume that these are words typed by a human, so each criterion only has a 99% (0.99) M-probability, on both length and whether the word contains the letter, since a human may mistype it. The weights for a pair of words, each of length 5 (5.2%) and containing the letter 'd' (4.25%), are given as:

log(0.99/0.052) + log(0.99/0.0425) = log(19.038) + log(23.294)
= 1.280 + 1.367
= 2.647

On the other hand, let’s say we do not have a match on either of those criteria, but instead have some other word that is of length 4 and does not contain any ‘d’. The weight between such words would be:

log((1-0.99)/(1-0.052)) + log((1-0.99)/(1-0.0425)) = log(0.011) + log(0.010)
= -1.977 - 1.981
= -3.958
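
For completeness, a short sketch reproducing both totals with base-10 logarithms (M = 0.99 for each criterion, U-probabilities of 5.2% and 4.25% as above):

import math

m = 0.99
u_length_5, u_contains_d = 0.052, 0.0425

agree = math.log10(m / u_length_5) + math.log10(m / u_contains_d)
disagree = math.log10((1 - m) / (1 - u_length_5)) + math.log10((1 - m) / (1 - u_contains_d))

print(round(agree, 3))     # 2.647  (both criteria agree)
print(round(disagree, 3))  # -3.958 (both criteria disagree)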

So our example gives a positive score when the algorithm criteria match and a negative score when they do not.

Additional note

It should be noted that the ability to add the scoring components (the log terms from above) is valid because the criteria being added are independent. If, on the other hand, there were some dependency between parts of the algorithm, for instance if house address and landline number were related, then it would not be valid simply to add these scores together, since that would, in essence, double count the same piece of information. In this case, the engine/algorithm should remove this relationship 'behind the scenes'.

Final thoughts

There is additional proprietary Research and Development that goes into any Probabilistic Matching Engine (PME) aimed at making the engine and algorithm more understandable to humans. For instance, the above centres on attributes agreeing, and only touches on what happens when attributes disagree, such as different names. Most PMEs will penalise such differences, so that disagreements can offset otherwise matching data (e.g. a different name may offset a matching address between a mother and son).

Hopefully this has been helpful – it’s based on lots of practical experience of using these PMEs and figuring out how they work. If you want to know more about how Probabilistic Matching could help achieve valuable outcomes for your business please come and talk to us at Entity Group.