Change of Random Variables

Sources:

  1. Joseph K. Blitzstein & Jessica Hwang. (2019). Conditional probability. In *Introduction to Probability* (2nd ed., pp. 369-375). CRC Press.

Notation

| Symbol | Type | Description |
| --- | --- | --- |
| $X$ | Random variable | Original continuous random variable |
| $f_X(x)$ | Function | Probability density function (PDF) of $X$ |
| $Y$ | Random variable | Transformed continuous random variable |
| $f_Y(y)$ | Function | Probability density function (PDF) of $Y$ |
| $g(x)$ | Function | Transformation function, assumed to be differentiable and strictly monotonic |
| $g^{-1}(y)$ | Function | Inverse of the transformation function $g(x)$ |
| $\mathbf{X}$ | Vector | Original random vector $(X_1, X_2, \ldots, X_n)$ |
| $f_{\mathbf{X}}(\mathbf{x})$ | Function | Joint PDF of $\mathbf{X}$ |
| $\mathbf{Y}$ | Vector | Transformed random vector $(Y_1, Y_2, \ldots, Y_n)$ |
| $f_{\mathbf{Y}}(\mathbf{y})$ | Function | Joint PDF of $\mathbf{Y}$ |
| $\partial \mathbf{x}/\partial \mathbf{y}$ | Matrix | Jacobian matrix of partial derivatives of $g^{-1}$ |
| $\det(\partial \mathbf{x}/\partial \mathbf{y})$ | Scalar | Determinant of the Jacobian matrix |
| $A_0, B_0$ | Sets | Open subsets of $\mathbb{R}^n$, representing the domain and range of $g$, respectively |

Abbreviations

| Abbreviation | Description |
| --- | --- |
| PDF | Probability density function |
| CDF | Cumulative distribution function |
| r.v. | Random variable |
| Jacobian | Matrix of partial derivatives |

Change of variables in one dimension

Theorem (change of variables in one dimension): Let $X$ be a continuous r.v. with PDF $f_X$, and let $Y = g(X)$, where $g$ is differentiable and strictly increasing (or strictly decreasing). Then the PDF of $Y$ is given by

$$f_Y(y) = f_X(x) \left| \frac{dx}{dy} \right|,$$

where $x = g^{-1}(y)$. The support of $Y$ is the set of all $g(x)$ with $x$ in the support of $X$.
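
For instance (a standard illustration, not drawn from the cited pages): take $X \sim \mathcal{N}(0, 1)$ and $Y = e^X$, so that $g(x) = e^x$ is strictly increasing, $x = g^{-1}(y) = \log y$, and $\frac{dx}{dy} = \frac{1}{y}$ for $y > 0$. The theorem then gives the log-normal PDF

$$f_Y(y) = f_X(\log y)\,\frac{1}{y} = \frac{1}{y\sqrt{2\pi}}\, e^{-(\log y)^2/2}, \qquad y > 0.$$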

Proof:

Let $g$ be strictly increasing. The CDF of $Y$ is

$$F_Y(y) = P(Y \le y) = P(g(X) \le y) = P(X \le g^{-1}(y)) = F_X(g^{-1}(y)) = F_X(x),$$

so by the chain rule, the PDF of $Y$ is

$$f_Y(y) = \frac{dF_Y(y)}{dy} = \frac{dF_X(x)}{dy} = \frac{dF_X(x)}{dx}\cdot\frac{dx}{dy} = f_X(x)\,\frac{dx}{dy}.$$

The proof for $g$ strictly decreasing is analogous; there $F_Y(y) = P(X \ge g^{-1}(y)) = 1 - F_X(x)$, so the PDF ends up as $-f_X(x)\,\frac{dx}{dy}$, which is nonnegative since $\frac{dx}{dy} < 0$ when $g$ is strictly decreasing. Using $\left|\frac{dx}{dy}\right|$, as in the statement of the theorem, covers both cases.
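
The same log-normal example can be checked numerically. The sketch below (assuming NumPy and SciPy are available; it is an illustration, not part of the cited text) simulates $Y = e^X$ and compares an empirical density estimate against the density $f_X(\log y)/y$ predicted by the theorem:

```python
import numpy as np
from scipy import stats

# Sanity check of f_Y(y) = f_X(g^{-1}(y)) |dx/dy| for Y = exp(X), X ~ N(0, 1).
rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
y = np.exp(x)  # g(x) = e^x is strictly increasing

# Density predicted by the theorem: f_X(log y) * (1/y)
grid = np.linspace(0.2, 4.0, 40)
pdf_theorem = stats.norm.pdf(np.log(grid)) / grid

# Empirical density of the simulated Y values, evaluated on the same grid
hist, edges = np.histogram(y, bins=400, range=(0.0, 5.0), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf_empirical = np.interp(grid, centers, hist)

# Agreement up to Monte Carlo / binning error
print(np.max(np.abs(pdf_theorem - pdf_empirical)))
```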

Change of variables in multiple dimensions

Theorem: Let $\mathbf{X} = (X_1, \ldots, X_n)$ be a continuous random vector with joint PDF $f_{\mathbf{X}}(\mathbf{x})$. Let $g : A_0 \to B_0$ be an invertible function, where $A_0$ and $B_0$ are open subsets of $\mathbb{R}^n$. Suppose $A_0$ contains the support of $\mathbf{X}$, and $B_0$ is the range of $g$.

Define $\mathbf{Y} = g(\mathbf{X})$, with $\mathbf{y} = g(\mathbf{x})$. Since $g$ is invertible, we have:

$$\mathbf{X} = g^{-1}(\mathbf{Y}) \quad \text{and} \quad \mathbf{x} = g^{-1}(\mathbf{y}).$$

Assuming that all the partial derivatives $\frac{\partial x_i}{\partial y_j}$ exist and are continuous, the Jacobian matrix of the inverse transformation $g^{-1}$ is:

$$\frac{\partial \mathbf{x}}{\partial \mathbf{y}} =
\begin{bmatrix}
\dfrac{\partial x_1}{\partial y_1} & \dfrac{\partial x_1}{\partial y_2} & \cdots & \dfrac{\partial x_1}{\partial y_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial x_n}{\partial y_1} & \dfrac{\partial x_n}{\partial y_2} & \cdots & \dfrac{\partial x_n}{\partial y_n}
\end{bmatrix}$$

If $\det\left(\frac{\partial \mathbf{x}}{\partial \mathbf{y}}\right) \neq 0$ everywhere, the joint PDF of $\mathbf{Y}$ is:

$$f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}\left(g^{-1}(\mathbf{y})\right)\,\left| \det\left( \frac{\partial \mathbf{x}}{\partial \mathbf{y}} \right) \right|.$$

This holds for all $\mathbf{y} \in B_0$.
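
As a concrete special case (an illustration, not from the cited source): for an invertible linear map $\mathbf{Y} = A\mathbf{X}$ we have $\mathbf{x} = A^{-1}\mathbf{y}$, the Jacobian $\partial \mathbf{x}/\partial \mathbf{y} = A^{-1}$ is constant, and the theorem gives $f_{\mathbf{Y}}(\mathbf{y}) = f_{\mathbf{X}}(A^{-1}\mathbf{y})\,|\det A|^{-1}$. The sketch below (assuming NumPy and SciPy; the matrix $A$ is an arbitrary choice) checks this against the known Gaussian density of $A\mathbf{X}$ when $\mathbf{X} \sim \mathcal{N}(\mathbf{0}, I)$:

```python
import numpy as np
from scipy import stats

# Change of variables for the invertible linear map g(x) = A x, so g^{-1}(y) = A^{-1} y
# and the Jacobian dx/dy = A^{-1} is constant in y.
A = np.array([[2.0, 1.0],
              [0.5, 1.5]])       # det(A) = 2.5 != 0, so g is invertible
A_inv = np.linalg.inv(A)

f_X = stats.multivariate_normal(mean=np.zeros(2), cov=np.eye(2)).pdf  # X ~ N(0, I)

def f_Y(y):
    # f_Y(y) = f_X(g^{-1}(y)) * |det(dx/dy)|, with dx/dy = A^{-1}
    return f_X(A_inv @ y) * abs(np.linalg.det(A_inv))

# Y = A X is Gaussian with covariance A A^T, so compare against its known joint PDF.
f_Y_exact = stats.multivariate_normal(mean=np.zeros(2), cov=A @ A.T).pdf

y = np.array([0.7, -1.2])
print(f_Y(y), f_Y_exact(y))  # the two values agree
```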