
Kullback-Leibler Divergence

by 임은천 2015. 1. 8.

from: stackexchange.com


The Kullback-Leibler Divergence is not a metric proper, since it is not symmetric and it does not satisfy the triangle inequality. So the "roles" played by the two distributions are different, and it is important to assign these roles according to the real-world phenomenon under study.

When we write (the OP has calculated the expression using base-2 logarithms)

$$K(P\|Q) = \sum_i \log_2\!\left(\frac{p_i}{q_i}\right) p_i$$

we consider the P distribution to be the "target distribution" (usually considered to be the true distribution), which we approximate by using the Q distribution.
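To make the formula concrete, here is a minimal Python sketch of this sum (the distributions p and q below are made-up example values, not the ones from the original question, and kl_divergence is just an illustrative helper name):

```python
import numpy as np

def kl_divergence(p, q):
    """K(P||Q) = sum_i p_i * log2(p_i / q_i), in bits.

    Assumes p and q are probability vectors on the same support and that
    q_i > 0 wherever p_i > 0 (otherwise the divergence is infinite).
    """
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0            # terms with p_i = 0 contribute 0 by convention
    return float(np.sum(p[mask] * np.log2(p[mask] / q[mask])))

# Made-up example distributions (not the ones from the question)
p = [0.8, 0.1, 0.1]
q = [0.4, 0.4, 0.2]

print(kl_divergence(p, q))  # ~0.5
print(kl_divergence(q, p))  # ~0.6 -- different: K(P||Q) != K(Q||P) in general
```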

Now,

$$\sum_i \log_2\!\left(\frac{p_i}{q_i}\right) p_i = \sum_i \log_2(p_i)\, p_i - \sum_i \log_2(q_i)\, p_i = -H(P) - E_P\left(\log_2(Q)\right)$$

where H(P) is the Shannon entropy of distribution P and −E_P(log2(Q)) is called the "cross-entropy of P and Q" (also non-symmetric).

Writing

$$K(P\|Q) = H(P,Q) - H(P)$$

(here too, the order in which we write the distributions in the expression of the cross-entropy matters, since it too is not symmetric), permits us to see that KL-Divergence reflects an increase in entropy over the unavoidable entropy of distribution P.
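Continuing the sketch above with the same made-up p and q, this decomposition can be checked numerically (entropy and cross_entropy are again just illustrative helper names):

```python
import numpy as np

def entropy(p):
    """Shannon entropy H(P) = -sum_i p_i * log2(p_i), in bits."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def cross_entropy(p, q):
    """Cross-entropy H(P,Q) = -sum_i p_i * log2(q_i), in bits (non-symmetric)."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(-np.sum(p[mask] * np.log2(q[mask])))

p = [0.8, 0.1, 0.1]
q = [0.4, 0.4, 0.2]

# K(P||Q) = H(P,Q) - H(P): the excess over the unavoidable entropy of P
print(entropy(p))                        # ~0.92 bits
print(cross_entropy(p, q))               # ~1.42 bits
print(cross_entropy(p, q) - entropy(p))  # ~0.5, same value as kl_divergence(p, q) above
```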

So, no, the KL-divergence is better not interpreted as a "distance measure" between distributions, but rather as a measure of the entropy increase caused by using an approximation to the true distribution rather than the true distribution itself.

So we are in Information Theory land. To hear it from the masters (Cover & Thomas):

...if we knew the true distribution P of the random variable, we could construct a code with average description length H(P). If, instead, we used the code for a distribution Q, we would need H(P)+K(P||Q) bits on the average to describe the random variable.
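As a rough illustration of this description-length claim, the following sketch uses idealized (non-integer) code lengths of −log2(q_i) bits per symbol and the same made-up p and q as before; an actual code would have to round these lengths up to integers, so the numbers below are the theoretical averages only.

```python
import numpy as np

p = np.array([0.8, 0.1, 0.1])  # true distribution of the source symbols
q = np.array([0.4, 0.4, 0.2])  # distribution the code was (wrongly) designed for

len_code_for_p = -np.log2(p)   # idealized code lengths matched to P
len_code_for_q = -np.log2(q)   # idealized code lengths matched to Q

avg_right = np.sum(p * len_code_for_p)  # = H(P)
avg_wrong = np.sum(p * len_code_for_q)  # = H(P,Q) = H(P) + K(P||Q)

print(avg_right)               # ~0.92 bits per symbol
print(avg_wrong)               # ~1.42 bits per symbol
print(avg_wrong - avg_right)   # ~0.5 bits of overhead = K(P||Q)
```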

The same wise people say

...it is not a true distance between distributions since it is not symmetric and does not satisfy the triangle inequality. Nonetheless, it is often useful to think of relative entropy as a “distance” between distributions.

But this latter approach is useful mainly when one attempts to minimize KL-divergence in order to optimize some estimation procedure. For the interpretation of its numerical value per se, it is not useful, and one should prefer the "entropy increase" approach.

For the specific distributions of the question (always using base-2 logarithms)

$$K(P\|Q) = 0.49282, \qquad H(P) = 1.9486$$

In other words, you need 25% more bits to describe the situation if you are going to use Q while the true distribution is P. This means longer code lines, more time to write them, more memory, more time to read them, higher probability of mistakes, etc. It is no accident that Cover & Thomas say that the KL-Divergence (or "relative entropy") "measures the inefficiency caused by the approximation."
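The 25% figure is simply the ratio of the extra bits to the unavoidable bits:

$$\frac{K(P\|Q)}{H(P)} = \frac{0.49282}{1.9486} \approx 0.2529 \approx 25\%$$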
