
Pareto Distribution Example. Suppose that we would like to generate observations from a Pareto distribution with parameters \(\alpha\) and \(\theta\), so that \(F(x) = 1 - \left(\frac{\theta}{x+\theta} \right)^{\alpha}\). To compute the inverse transform, we can use the following steps:
\begin{eqnarray*}
y = F(x) &\Leftrightarrow& 1-y = \left(\frac{\theta}{x+\theta} \right)^{\alpha} \\
&\Leftrightarrow& \left(1-y\right)^{-1/\alpha} = \frac{x+\theta}{\theta} = \frac{x}{\theta} + 1 \\
&\Leftrightarrow& \theta \left((1-y)^{-1/\alpha} - 1\right) = x = F^{-1}(y) .
\end{eqnarray*}
Thus, \(X = \theta \left((1-U)^{-1/\alpha} - 1\right)\) has a Pareto distribution with parameters \(\alpha\) and \(\theta\).
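
As a quick illustration, here is a minimal Python sketch of this recipe using NumPy; the function name `rpareto`, the parameter values, and the mean check are illustrative choices, not part of the original derivation.

```python
import numpy as np

def rpareto(n, alpha, theta, rng=None):
    """Simulate n draws from the Pareto distribution with
    F(x) = 1 - (theta / (x + theta))**alpha via the inverse transform."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.uniform(size=n)                              # U ~ Uniform(0, 1)
    return theta * ((1.0 - u) ** (-1.0 / alpha) - 1.0)   # X = F^{-1}(U)

# Rough sanity check (illustrative parameter values): for alpha > 1 the mean
# of this Pareto distribution is theta / (alpha - 1).
x = rpareto(100_000, alpha=3.0, theta=2000.0, rng=np.random.default_rng(42))
print(x.mean(), 2000.0 / (3.0 - 1.0))
```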

Inverse Transform Justification

Why does the random variable \(X = F^{-1}(U)\) have distribution function \(F\)? This is easy to establish in the continuous case.

Because \(U\) is a uniform random variable on \((0,1)\), we know that \(\Pr(U \le y) = y\) for \(0 \le y \le 1\). Thus,
\begin{eqnarray*}
\Pr(X \le x) &=& \Pr(F^{-1}(U) \le x) \\
&=& \Pr(F(F^{-1}(U)) \le F(x)) \\
&=& \Pr(U \le F(x)) = F(x)
\end{eqnarray*}
as required.

The key step is that \(F(F^{-1}(u)) = u\) for each \(u\), which is clearly true when \(F\) is strictly increasing.
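
The identity \(\Pr(X \le x) = F(x)\) can also be checked numerically. The sketch below reuses the Pareto example; the parameter values \(\alpha = 3\), \(\theta = 2000\) and the evaluation points are arbitrary illustrative choices. It compares the empirical proportion of simulated values at or below \(t\) with the theoretical \(F(t)\).

```python
import numpy as np

# Numerical illustration that X = F^{-1}(U) has distribution function F,
# using the Pareto example (alpha and theta are illustrative values).
alpha, theta = 3.0, 2000.0
rng = np.random.default_rng(0)

u = rng.uniform(size=200_000)
x = theta * ((1.0 - u) ** (-1.0 / alpha) - 1.0)      # X = F^{-1}(U)

def F(t):
    """Theoretical Pareto CDF, F(t) = 1 - (theta / (t + theta))**alpha."""
    return 1.0 - (theta / (t + theta)) ** alpha

# The empirical proportion of draws with X <= t should be close to F(t)
# at each (arbitrarily chosen) evaluation point t.
for t in (500.0, 1000.0, 2000.0, 5000.0):
    print(t, (x <= t).mean(), F(t))
```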
