Probability and Random Variables

We define the probability of an event A to be its “long-run frequency”: the fraction of times A would occur if the underlying random situation were repeated a very large number of times.


Example: The Truth about Cats and Dogs.  Suppose we conduct a very large survey of EMBA students and record whether each one owns a dog, a cat, both, or neither.  The results of one such study are summarized in the 2×2 contingency table shown below.  The position of each cell indicates the particular ownership state, and the percentage in the cell indicates the fraction of EMBAs who fall into that state.  What must these four percentages add up to?


             Cats       No Cats
Dogs         7.5%       41%
No Dogs      11.5%      40%

To simplify the notation, let C = event an EMBA owns a Cat, D = event an EMBA owns a Dog. 

1.  What is the probability that an EMBA owns a cat (= Pr(C))?                          (Ans. 19%)
2.  What is the probability that an EMBA owns a dog (= Pr(D))?                          (Ans. 48.5%)
3.  What is the probability that an EMBA owns both a cat and a dog (= Pr(C∩D))?
    (Note: the symbol ∩ means “and,” so C∩D means “cats and dogs”)                      (Ans. 7.5%)
4.  Given that the person is a cat owner, what is the probability that they own a dog?
                                                                                        (Ans. 7.5/[7.5+11.5] = 39.5%)
5.  Given that the person is a dog owner, what is the probability that they own a cat?
                                                                                        (Ans. 7.5/[7.5+41] = 15.5%)
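
To see the arithmetic behind questions 1–3, here is a short Python sketch (an illustration of the calculation, not part of the original survey; the dictionary keys and variable names are my own choices).  It stores the four cells of the table as joint probabilities and recovers the two marginal probabilities and the joint probability:

    # Joint probabilities from the 2x2 table, keyed by (owns_dog, owns_cat).
    # The four cells cover every possible ownership state exactly once.
    joint = {
        (True,  True):  0.075,   # Dogs and Cats
        (True,  False): 0.410,   # Dogs, no Cats
        (False, True):  0.115,   # No Dogs, Cats
        (False, False): 0.400,   # No Dogs, no Cats
    }

    # The four percentages must add up to 100% (the probabilities sum to 1).
    assert abs(sum(joint.values()) - 1.0) < 1e-9

    # Q1: marginal probability of owning a cat = sum of the "Cats" column.
    pr_c = joint[(True, True)] + joint[(False, True)]       # 0.19

    # Q2: marginal probability of owning a dog = sum of the "Dogs" row.
    pr_d = joint[(True, True)] + joint[(True, False)]       # 0.485

    # Q3: joint probability of owning both = the single "Dogs and Cats" cell.
    pr_c_and_d = joint[(True, True)]                        # 0.075

    print(f"Pr(C)       = {pr_c:.3f}")         # 0.190
    print(f"Pr(D)       = {pr_d:.3f}")         # 0.485
    print(f"Pr(C and D) = {pr_c_and_d:.3f}")   # 0.075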

The first probability is called the marginal probability of owning a cat (=19%).  The second is called the marginal probability of owning a dog (= 48.5%).  The third probability is called the joint probability of owning a dog and a cat because it depends on two random events occurring jointly (dog ownership and cat ownership).  In general, marginal probabilities capture the probability of one random event (e.g., cat ownership) without reference to any other random event (e.g., dog ownership).  In contrast, a joint probability captures the likelihood of two (or more) random events occurring jointly.

The fourth and fifth probabilities are called conditional probabilities (or posterior probabilities) because they are based on, or “conditioned” on, some other set of information.  A conditional probability can be thought of as the relative probability of something happening restricted to a particular subset of possibilities.  For example, the probability of owning a dog given the person is a cat owner, denoted by Pr(D|C), is the relative percentage of dog owners among cat owners (the percentage of D’s out of the C’s).  You could probably figure this out by brute force; it’s 7.5/[7.5+11.5] = 39.5%.  The general formula is Pr(D|C) = Pr(D∩C)/Pr(C) = 7.5/[7.5+11.5] = 39.5%.

Similarly, the probability of owning a cat given the person owns a dog is Pr(C|D) = Pr(C∩D)/Pr(D).  This is the relative percentage of cat owners among dog owners (the percentage of C’s out of the D’s).  Using either the formula or brute force, you can calculate this to be 7.5/[7.5 + 41] = 15.5%.  Conditional probabilities are important tools in marketing, especially when you are trying to identify consumers who are more apt to buy a product.
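
To make the formula concrete, here is a small Python check of both conditional probabilities, continuing the sketch above (the variable names are illustrative, not anything defined in these notes):

    # Conditional probabilities via the formula, using the numbers from the table.
    pr_c = 0.19          # marginal probability of owning a cat
    pr_d = 0.485         # marginal probability of owning a dog
    pr_c_and_d = 0.075   # joint probability of owning both

    # Pr(D|C) = Pr(D∩C)/Pr(C): restrict attention to cat owners,
    # then ask what fraction of them also own a dog.
    pr_d_given_c = pr_c_and_d / pr_c     # 0.075 / 0.19  ≈ 0.395

    # Pr(C|D) = Pr(C∩D)/Pr(D): restrict attention to dog owners,
    # then ask what fraction of them also own a cat.
    pr_c_given_d = pr_c_and_d / pr_d     # 0.075 / 0.485 ≈ 0.155

    print(f"Pr(D|C) = {pr_d_given_c:.1%}")   # 39.5%
    print(f"Pr(C|D) = {pr_c_given_d:.1%}")   # 15.5%

Notice that Pr(D|C) and Pr(C|D) are not the same number: both divide the same joint probability, 7.5%, but by different marginals (19% versus 48.5%).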