Sunday, June 19, 2011

Section 2.5 - Random Variables, Mean and the Expected Value

A random variable maps outcomes from a sample space to numbers, making the outcomes easier to manipulate. For example, there isn’t much to be done with outcomes from the coin-flip sample space { H, T }, but mapping outcomes to integers with the random variable X = { (H, 0), (T,1) } allows mathematical manipulations. The mapping used depends on what’s being done. X's previous definition might be useful when counting the number of tails, while the definition X = { (H, 1), (T, -1) } is more suited for keeping track of wins and losses.
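The two coin-flip mappings above can be written down directly as dictionaries (a small illustrative sketch; the variable names are mine, not the text's):

```python
# Two ways to map coin-flip outcomes to numbers, per the section's examples.
X_count_tails = {"H": 0, "T": 1}  # suited to counting tails
X_win_loss = {"H": 1, "T": -1}    # suited to tracking wins and losses

flips = ["H", "T", "T", "H", "T"]
tails = sum(X_count_tails[o] for o in flips)  # number of tails seen
net = sum(X_win_loss[o] for o in flips)       # net wins minus losses
print(tails, net)  # 3 -1
```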

Given a random variable X over a sample space, the probability that X equals a particular value k is the sum of the probabilities of the outcomes that X maps to k: $Pr(X = k) = \sum_{X(o) = k} Pr(o)$ For example, let the random variable X map the outcome i of a single die roll to i. The probability that X is even is $Pr(X \mathrm{\ is\ even}) = Pr(X = 2) + Pr(X = 4) + Pr(X = 6) = 1/6 + 1/6 + 1/6 = 1/2$

Computing the mean (or average, or expected value) of a random variable is a common mathematical manipulation. A random variable’s mean is the sum of the values in the range, with each value weighted by its probability: $\mu = \sum_{k\in range(X)} k Pr(X = k)$ The mean is often represented by $$\mu$$. To continue the die-roll example, X's mean is
$\begin{eqnarray} & & 1Pr(X = 1) + 2Pr(X = 2) + \cdots + 6Pr(X = 6) \\ &=& 1\cdot1/6 + 2\cdot1/6 + \cdots + 6\cdot1/6 \\ &=& (1 + 2 + \cdots + 6)\cdot1/6 \\ &=& 21\cdot1/6 \\ &=& 3.5 \end{eqnarray}$

A random variable’s mean is not necessarily a member of its range.
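The die-roll mean can be computed as a weighted sum over a distribution (a minimal sketch; the `mean` helper is my own name, not from the text):

```python
from fractions import Fraction

def mean(distribution):
    """Mean of a random variable given as a {value: probability} mapping."""
    return sum(k * p for k, p in distribution.items())

# X maps a die roll i to i; each outcome has probability 1/6.
die = {i: Fraction(1, 6) for i in range(1, 7)}
print(mean(die))  # 7/2, i.e. 3.5 -- not itself in X's range
```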

Friday, June 17, 2011

Exercise 2.2-16

How many distinct three-letter combinations (“words”) can you make from the letters of Mississippi?
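A brute-force enumeration can sanity-check an answer to this exercise (my own sketch, not the book's solution method):

```python
from itertools import permutations

# Enumerate every ordered three-letter selection from MISSISSIPPI's letters,
# then deduplicate: repeated letters make many selections identical.
letters = "MISSISSIPPI"
words = set(permutations(letters, 3))
print(len(words))  # 53
```

The count agrees with a case analysis on letter multiplicities (all distinct, one pair, one triple): 24 + 27 + 2 = 53.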

Tuesday, May 31, 2011

Exercise 2.2-3

What is the probability of cutting two cards of the same rank from a well-shuffled deck?
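Because the deck is small, the sample space of unordered pairs can be enumerated exactly (a sketch of one way to check an answer, not the book's solution):

```python
from fractions import Fraction
from itertools import combinations

# Model each card as a (rank, suit) pair; cutting two cards is an
# unordered pair drawn from the 52-card deck.
deck = [(rank, suit) for rank in range(13) for suit in range(4)]

pairs = list(combinations(deck, 2))
same_rank = sum(1 for a, b in pairs if a[0] == b[0])
print(Fraction(same_rank, len(pairs)))  # 1/17
```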

Monday, May 30, 2011

Section 2.4 - The Binomial Distribution—Bernoulli Trials

A Bernoulli trial produces an outcome which satisfies a criterion with probability $$p$$, and so fails to satisfy the criterion with complement probability $$q = 1 - p$$. What is the probability that $$n$$ successive, independent Bernoulli trials result in exactly $$k$$ satisfactory outcomes?

The compound outcome is an arrangement of $$k$$ satisfactory outcomes and $$n - k$$ unsatisfactory outcomes. Because the Bernoulli-trial outcomes are independent, the outcome probabilities are multiplied to get the combined-outcome probability, and because multiplication is commutative, any individual compound outcome has probability $$p^kq^{n-k}$$. There are $$C(n, k)$$ such compound outcomes, and the probability that $$n$$ successive, independent Bernoulli trials result in exactly $$0 \leq k \leq n$$ satisfactory outcomes when satisfactory outcomes have probability $$p$$ is $b(k; n, p) = C(n, k)p^kq^{n - k}$

The compound-outcome sample space is not uniform. Even if $$p = 1/2$$, $$C(n, k)$$ varies with $$k$$; the variation becomes more complex when $$p \neq q$$.
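The formula translates directly into code; the function name below follows the section's $$b(k; n, p)$$ notation (a sketch using Python's `math.comb` for $$C(n, k)$$):

```python
from math import comb

def b(k, n, p):
    """Probability of exactly k successes in n independent Bernoulli trials."""
    q = 1 - p
    return comb(n, k) * p**k * q**(n - k)

# Four fair coin flips: probability of exactly two heads.
print(b(2, 4, 0.5))  # 0.375
# The distribution is not uniform even when p = q = 1/2:
print([b(k, 4, 0.5) for k in range(5)])  # [0.0625, 0.25, 0.375, 0.25, 0.0625]
```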

Sunday, May 29, 2011

Section 2.3 - Combinations

Given $$n$$ distinct items, there are $$P(n, k)$$ permutations of $$0 \leq k \leq n$$ items. Each permutation of $$k$$ items can be re-arranged $$k!$$ ways. If item order doesn’t matter, there are
$\frac{P(n, k)}{k!} = \frac{n!}{k!(n-k)!} = C(n, k)$
orderless permutations, or combinations. Sanity check: $$C(n, n) = n!/(n!(n - n)!) = 1/0! = 1$$.

Combinations are fun to play with. First, $$C(n, k) = C(n, n - k)$$. Then
$\begin{eqnarray} C(n, k + 1) & = & \frac{n!}{(k + 1)!(n - k - 1)!} \\ & = & \frac{n!(n - k)}{(k + 1)k!(n - k)(n - k - 1)!} \\ & = & \frac{n - k}{k + 1}C(n, k) \end{eqnarray}$
$$C(n, k + 1)$$ is largest when $$(n - k)/(k + 1)$$ is closest to 1, which occurs when $$k$$ is closest to $$(n - 1)/2$$. Similarly
$\begin{eqnarray} C(n + 1, k) & = & \frac{(n + 1)!}{k!(n + 1 - k)!} \\ & = & \frac{(n + 1)n!}{k!(n + 1 - k)(n - k)!} \\ & = & \frac{n + 1}{n + 1 - k}C(n, k) \end{eqnarray}$
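All three identities are easy to spot-check numerically (a sketch against Python's `math.comb`; the choice of $$n = 10$$, $$k = 3$$ is arbitrary):

```python
from fractions import Fraction
from math import comb

n, k = 10, 3

# C(n, k) = C(n, n - k)
assert comb(n, k) == comb(n, n - k)

# C(n, k + 1) = (n - k)/(k + 1) * C(n, k)
assert comb(n, k + 1) == Fraction(n - k, k + 1) * comb(n, k)

# C(n + 1, k) = (n + 1)/(n + 1 - k) * C(n, k)
assert comb(n + 1, k) == Fraction(n + 1, n + 1 - k) * comb(n, k)

print("identities hold for n = 10, k = 3")
```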

Saturday, May 28, 2011

Section 2.2 - Permutations

How to count the outcomes in a sample space.

$$n$$ distinct items sampled with replacement $$k$$ times produces a sample space of size $$n^k$$. Ordered sampling without replacement is a permutation, and produces a sample space of size
$P(n, k) = n(n - 1)(n - 2)\cdots(n - k + 1) = (n)_k$
As a sanity check, $$P(n, 1) = n$$, and $$P(n, n + 1) = 0$$ because the product includes the factor $$n - (n + 1) + 1 = 0$$.

Multiplying the falling factorial $$(n)_k$$ by $$(n - k)!$$ turns it into the full factorial $$n!$$. Equivalently, multiplying $$P(n, k)$$ by 1 in the form $$(n - k)!/(n - k)!$$ gives an alternate formula for permutation counting:
$P(n, k) = \frac{n!}{(n - k)!}$
For more sanity, $$P(n, n) = n!/(n - n)! = n!/0! = n!$$ and $$P(n, 0) = n!/(n - 0)! = n!/n! = 1$$.
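Python's `math.perm` computes $$P(n, k)$$ directly, so the falling-factorial and factorial forms can be checked against it (a small sketch; the `falling` helper is my own):

```python
from math import factorial, perm

def falling(n, k):
    """(n)_k = n(n - 1)...(n - k + 1), the falling factorial."""
    result = 1
    for i in range(k):
        result *= n - i
    return result

n, k = 6, 3
assert falling(n, k) == factorial(n) // factorial(n - k) == perm(n, k)
print(perm(6, 3))  # 120

# Sanity checks from the text:
print(perm(6, 6), perm(6, 0))  # 720 1
```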

Friday, May 27, 2011

Section 2.1 - Introduction

We avoid Peirce’s observation through careful modeling, consistent methods, developed intuition, result analysis, and reasonableness tests. We must also prepare for non-rational probabilities and non-uniform probability distributions.

Uniform probability assignments allow computations to be either the ratio of satisfactory outcomes over all outcomes, or the sum of individual satisfactory outcome probabilities. Non-uniform probability calculations can only be carried out by summing individual probabilities. Where you end up depends on the direction you start walking. “Random” without modifier is the code-word for uniform probability assignments.

We now develop tools for computing probabilities over finite discrete sample spaces, the related uniform solution methods, and the intuition to guide their use, in part by appealing to the frequency interpretation of probability.