Random Variables: Notes

2.6. Notes

Our exposition focuses on random variables that are either discrete or continuous. Random variables that are neither discrete nor continuous (for example, a mixture of a discrete RV and a continuous RV: with some probability the value is drawn from a discrete RV and otherwise from a continuous RV) require more careful treatment. They have neither a pdf nor a pmf, although every RV has a cdf. A unified exposition is possible through the use of measure theory and Lebesgue integration (see Chapters E and F).
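
As a small illustration (the specific distributions are assumptions chosen for this sketch, not taken from the text), consider a mixed RV $X$ that equals 0 with probability 1/2 and is otherwise drawn from an exponential distribution with rate 1. Its cdf is $F(t)=0.5+0.5(1-e^{-t})$ for $t\ge 0$: the jump of size 0.5 at $t=0$ rules out a pdf, while the continuous exponential part rules out a pmf. The Python sketch below estimates this cdf by simulation and compares it to the closed form.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Mixed random variable (illustrative choice, not from the text):
# X = 0 with probability 1/2, otherwise X ~ Exponential(1).
is_atom = rng.random(n) < 0.5
x = np.where(is_atom, 0.0, rng.exponential(scale=1.0, size=n))

# The cdf always exists: F(t) = P(X <= t).
# For t >= 0 it equals 0.5 + 0.5 * (1 - exp(-t)); the jump of size 0.5
# at t = 0 (the atom) means X has no pdf, and the continuous
# exponential part means X has no pmf either.
for t in [0.0, 0.5, 1.0, 2.0]:
    empirical = np.mean(x <= t)
    exact = 0.5 + 0.5 * (1 - np.exp(-t))
    print(f"F({t}) ~ {empirical:.4f}  (exact {exact:.4f})")
```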

Note that we derive the expectation from the probability function. The indicator function $I_A(x)$ (which equals 1 if $x\in A$ and 0 otherwise) may be used to reverse this direction, since $\E(I_A(X))=1\cdot P(X\in A)+0 \cdot P(X\not\in A)=P(X\in A)$. Specifically, given an expectation operator $\E$, we can define $P(X\in A)$ to be $\E(I_A(X))$.
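
For instance (a minimal sketch under assumed choices of $X$ and $A$, not taken from the text), with $X\sim N(0,1)$ and $A=[1,\infty)$, a Monte Carlo average of the indicator $I_A(X)$ approximates $\E(I_A(X))$ and hence $P(X\in A)$:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Illustrative choices (not from the text): X ~ N(0, 1), A = [1, infinity).
x = rng.standard_normal(1_000_000)
indicator = (x >= 1.0).astype(float)  # I_A(X): 1 if X is in A, else 0

# The sample mean of I_A(X) estimates E(I_A(X)) = P(X in A).
print("E(I_A) estimate:", indicator.mean())
print("P(X >= 1)      :", norm.sf(1.0))  # exact tail probability for comparison
```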

More information on random variables is available in nearly any probability textbook. An elementary exposition that avoids measure theory appears in most undergraduate probability textbooks, for example (Weiss, 2005) or (Dasgupta, 2010). A rigorous measure-theoretic treatment is available in (Breiman, 1992), (Billingsley, 1995), (Ash, 1999), or (Resnick, 1999).