Probability and Computing: Chapter 2 Exercises

Exercise 2.5: If X is a B(n,1/2) random variable with n \geq 1, show that the probability that X is even is 1/2.

Pr(X=j) = \binom{n}{j}p^{j}(1-p)^{n-j}=\binom{n}{j}\left(\frac{1}{2}\right)^{n} \qquad (\text{since } p=1-p=\tfrac{1}{2}).

Since Pr(X = j) depends on j only through \binom{n}{j}, it suffices to prove that \sum_{\text{i even}}\binom{n}{i} = \sum_{\text{j odd}}\binom{n}{j}.

Indeed, by the binomial theorem, 0= (1-1)^n=\sum_{k=0}^{n}\binom{n}{k}(-1)^{k}=\sum_{\text{i even}}\binom{n}{i} - \sum_{\text{j odd}}\binom{n}{j}.

Hence each sum equals 2^{n-1}, and Pr(X \text{ is even}) = 2^{n-1}\left(\frac{1}{2}\right)^{n} = \frac{1}{2}.
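As a quick numeric check (not part of the proof), we can sum the even-index terms of the B(n, 1/2) pmf directly for a few values of n:

```python
# Numeric check: for X ~ B(n, 1/2), Pr(X is even) should equal 1/2.
from math import comb

def prob_even(n: int) -> float:
    """Pr(X is even) for X ~ B(n, 1/2), summed directly from the pmf."""
    return sum(comb(n, i) for i in range(0, n + 1, 2)) / 2**n

for n in (1, 2, 5, 10, 37):
    assert prob_even(n) == 0.5
```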

Exercise 2.6: Suppose that we independently roll two standard six-sided dice. Let X_{1} be the number that shows on the first die, X_{2} the number on the second die, and X the sum of the two dice.

(a) E[X\mid X_{1} \text{ is even}] = \sum_{i}i\,Pr(X=i\mid X_{1} \text{ is even})

= 3\cdot\frac{1}{18}+4\cdot\frac{1}{18}+5\cdot \frac{2}{18}+6\cdot\frac{2}{18}+7\cdot\frac{3}{18}+8\cdot\frac{3}{18}+9\cdot\frac{2}{18}+10\cdot \frac{2}{18}+11\cdot\frac{1}{18} +12\cdot\frac{1}{18} = \frac{135}{18} = 7.5.

(b) Given X_{1} = X_{2}, the six doubles are equally likely, so X = 2X_{1} takes each of the values 2, 4, \dots, 12 with probability \frac{1}{6}:

E[X\mid X_{1} = X_{2}] = \sum_{i}i\,Pr(X=i \mid X_{1} = X_{2}) = \sum_{k=1}^{6}2k\cdot \frac{1}{6} = 7.


(c) Given X = 9, the four pairs (3,6), (4,5), (5,4), (6,3) are equally likely, so X_{1} is uniform on \{3,4,5,6\}:

E[X_{1}\mid X=9] = \sum_{i}i\,Pr(X_{1}=i \mid X=9) = (3+4+5+6) \cdot \frac{1}{4}=4.5.

(d) By symmetry, X_{1} and X_{2} have the same conditional distribution given X = k, so E[X_{1}-X_{2}\mid X=k]= 0 for every k \in \{2,3,\dots,12\}.
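All four conditional expectations can be verified by brute force over the 36 equally likely outcomes of the two dice; a minimal sketch:

```python
# Brute-force check of parts (a)-(d) over all 36 equally likely outcomes.
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all (x1, x2) pairs

def cond_exp(value, condition):
    """E[value(x1, x2) | condition(x1, x2)] under the uniform distribution."""
    hits = [(x1, x2) for (x1, x2) in outcomes if condition(x1, x2)]
    return sum(value(x1, x2) for (x1, x2) in hits) / len(hits)

assert cond_exp(lambda a, b: a + b, lambda a, b: a % 2 == 0) == 7.5   # (a)
assert cond_exp(lambda a, b: a + b, lambda a, b: a == b) == 7.0       # (b)
assert cond_exp(lambda a, b: a,     lambda a, b: a + b == 9) == 4.5   # (c)
# (d): E[X1 - X2 | X = k] = 0 for every attainable sum k
for k in range(2, 13):
    assert cond_exp(lambda a, b: a - b, lambda a, b: a + b == k) == 0
```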

Exercise 2.7: Let X and Y be independent geometric random variables, where X has parameter p and Y has parameter q.

(a) Pr(X=Y)=\sum_{i=1}^{\infty}Pr(X=Y=i) = \sum_{i=1}^{\infty}(1-p)^{i-1}p(1-q)^{i-1}q

= pq\sum_{i=0}^{\infty}[(1-p)(1-q)]^{i}=pq\cdot \frac{1}{1-(1-p)(1-q)} = \frac{pq}{p+q-pq}.
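The closed form can be sanity-checked by truncating the series numerically (the p, q values below are arbitrary test choices):

```python
# Compare a long truncation of sum_i (1-p)^{i-1} p (1-q)^{i-1} q
# with the closed form pq / (p + q - pq).
def pr_equal_series(p: float, q: float, terms: int = 10_000) -> float:
    return sum((1 - p)**(i - 1) * p * (1 - q)**(i - 1) * q
               for i in range(1, terms + 1))

for p, q in [(0.5, 0.5), (0.3, 0.7), (0.05, 0.2)]:
    closed = p * q / (p + q - p * q)
    assert abs(pr_equal_series(p, q) - closed) < 1e-9
```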

(b) To compute E[max(X,Y)], we first calculate Pr(max(X,Y) = i) and Pr(max(X,Y) \geq i).

Pr(max(X,Y) = i) = Pr(X=i)Pr(Y \leq i) + Pr(Y=i)Pr(X <i)

= (1-p)^{i-1}p\sum_{j=1}^{i}(1-q)^{j-1}q + (1-q)^{i-1}q\sum_{j=1}^{i-1}(1-p)^{j-1}p

= (1-p)^{i-1}pq\sum_{j=0}^{i-1}(1-q)^{j}+(1-q)^{i-1}pq\sum_{j=0}^{i-2}(1-p)^{j}


=(1-p)^{i-1}p[1-(1-q)^{i}] +(1-q)^{i-1}q[1-(1-p)^{i-1}]


= (1-p)^{i-1}p+(1-q)^{i-1}q- (1-p)^{i-1}(1-q)^{i-1}[p(1-q)+q]

= (1-p)^{i-1}p+(1-q)^{i-1}q- (1-p)^{i-1}(1-q)^{i-1}(p+q-pq) .

Summing this pmf over j \geq i (note the minus sign on the cross term):

Pr(max(X,Y) \geq i) = \sum_{j=i}^{\infty}(1-p)^{j-1}p +\sum_{j=i}^{\infty}(1-q)^{j-1}q - \sum_{j=i}^{\infty}(p+q-pq)[(1-p)(1-q)]^{j-1}

= p\sum_{j=i-1}^{\infty}(1-p)^{j} + q\sum_{j=i-1}^{\infty}(1-q)^{j} - (p+q-pq)\sum_{j=i-1}^{\infty}[(1-p)(1-q)]^{j}

= p\cdot \frac{(1-p)^{i-1}}{p} + q\cdot \frac{(1-q)^{i-1}}{q} - (p+q-pq)\cdot \frac{[(1-p)(1-q)]^{i-1}}{p+q-pq}

= (1-p)^{i-1} + (1-q)^{i-1} - [(1-p)(1-q)]^{i-1}.

Because max(X,Y) is a discrete random variable that takes on only nonnegative integer values, we have:

E[max(X,Y)] = \sum_{i=1}^{\infty}Pr(max(X,Y) \geq i)

= \sum_{i=1}^{\infty}[(1-p)^{i-1}+(1-q)^{i-1}-(1-p-q+pq)^{i-1}]

= \sum_{i=0}^{\infty}[(1-p)^{i}+(1-q)^{i}-(1-p-q+pq)^{i}]

= \frac{1}{1-(1-p)}+\frac{1}{1-(1-q)}-\frac{1}{1-(1-p-q+pq)}

= \frac{1}{p}+\frac{1}{q}-\frac{1}{p+q-pq}.

This is consistent with E[max(X,Y)] = E[X]+E[Y]-E[min(X,Y)], since by part (c) min(X,Y) is geometric with parameter p+q-pq.
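An independent way to see the final minus sign: Pr(max(X,Y) \geq i) = 1 - Pr(X < i)Pr(Y < i) = (1-p)^{i-1}+(1-q)^{i-1}-[(1-p)(1-q)]^{i-1}, and summing this tail gives \frac{1}{p}+\frac{1}{q}-\frac{1}{p+q-pq}. A truncated-series check:

```python
# Truncated-series check that E[max(X, Y)] = 1/p + 1/q - 1/(p + q - pq),
# using E[Z] = sum_{i>=1} Pr(Z >= i) with
# Pr(max >= i) = 1 - Pr(X < i) * Pr(Y < i).
def e_max_series(p: float, q: float, terms: int = 10_000) -> float:
    return sum((1 - p)**(i - 1) + (1 - q)**(i - 1)
               - ((1 - p) * (1 - q))**(i - 1)
               for i in range(1, terms + 1))

for p, q in [(0.5, 0.5), (0.3, 0.7), (0.1, 0.25)]:
    closed = 1 / p + 1 / q - 1 / (p + q - p * q)
    assert abs(e_max_series(p, q) - closed) < 1e-6
```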

(c) Pr(min(X,Y)=k)= Pr(X= k)Pr(Y \geq k) + Pr(Y=k)Pr(X > k)

= (1-p)^{k-1}p\sum_{i=k-1}^{\infty}(1-q)^{i}q+(1-q)^{k-1}q\sum_{i=k}^{\infty}(1-p)^{i}p

= (1-p)^{k-1}pq\frac{(1-q)^{k-1}}{q}+(1-q)^{k-1}pq\frac{(1-p)^{k}}{p}


= (1-p)^{k-1}(1-q)^{k-1}(p+q-pq).

Since (1-p)(1-q) = 1-(p+q-pq), this shows that min(X,Y) is itself geometric with parameter p+q-pq.
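Because (1-p)(1-q) = 1 - (p+q-pq), the derived pmf should coincide term by term with the geometric pmf with parameter r = p+q-pq; a small check:

```python
# Check that the pmf (1-p)^{k-1} (1-q)^{k-1} (p+q-pq) matches the
# geometric pmf r (1-r)^{k-1} with r = p + q - pq.
def min_pmf_gap(p: float, q: float, kmax: int = 50) -> float:
    """Largest absolute difference between the two pmfs over k = 1..kmax."""
    r = p + q - p * q
    return max(abs((1 - p)**(k - 1) * (1 - q)**(k - 1) * r
                   - (1 - r)**(k - 1) * r)
               for k in range(1, kmax + 1))

assert min_pmf_gap(0.3, 0.7) < 1e-12
assert min_pmf_gap(0.05, 0.2) < 1e-12
```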

(d) E[X\mid X \leq Y] = \sum_{k=1}^{\infty}k\,Pr(X=k \mid X \leq Y).

By independence, Pr(X=k, X \leq Y) = Pr(X=k)Pr(Y \geq k) = (1-p)^{k-1}p\,(1-q)^{k-1}, and summing over k gives Pr(X \leq Y) = \frac{p}{p+q-pq}. Hence

Pr(X=k \mid X \leq Y) = (p+q-pq)[(1-p)(1-q)]^{k-1},

that is, conditioned on X \leq Y, X is geometric with parameter p+q-pq, so E[X\mid X \leq Y] = \frac{1}{p+q-pq}.
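Conditioned on X \leq Y, X turns out to be geometric with parameter p+q-pq, so its mean is \frac{1}{p+q-pq}. A Monte Carlo sketch (the p, q values below are arbitrary test choices):

```python
# Monte Carlo check that E[X | X <= Y] = 1 / (p + q - pq) for
# independent geometrics X ~ Geom(p), Y ~ Geom(q).
import random

def geometric(prob: float) -> int:
    """Trials up to and including the first success of a Bernoulli(prob)."""
    n = 1
    while random.random() >= prob:
        n += 1
    return n

random.seed(1)
p, q, trials = 0.3, 0.5, 200_000
samples = []
while len(samples) < trials:
    x, y = geometric(p), geometric(q)
    if x <= y:          # keep only the conditioning event X <= Y
        samples.append(x)
avg = sum(samples) / len(samples)
assert abs(avg - 1 / (p + q - p * q)) < 0.05  # target is about 1.538
```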

Exercise 2.13:

(a) Consider the following variation of the coupon collector's problem. Each box of cereal contains one of 2n different coupons. The coupons are organized into n pairs, so that coupons 1 and 2 are a pair, coupons 3 and 4 are a pair, and so on. Once you have obtained one coupon from every pair, you can obtain a prize. Assuming that the coupon in each box is chosen independently and uniformly at random from the 2n possibilities, what is the expected number of boxes you must buy before you can claim the prize?

(b) Generalize the result of the problem in part (a) for the case where there are kn different coupons, organized into n disjoint sets of k coupons, so that you need one coupon from every set.

(a) Let X be the total number of boxes we must buy before we have at least one coupon from every pair, and X_{i} the number of boxes we buy to get a coupon from a new pair when we already have coupons from exactly i-1 pairs.
Certainly we have X = \sum_{i=1}^{n}X_{i}.
The probability of getting a coupon from a new pair when we already have coupons from i-1 pairs is p_{i}=\frac{2n-2(i-1)}{2n}=\frac{n-i+1}{n}.
Since each X_{i} is a geometric random variable with parameter p_{i}=\frac{n-i+1}{n}, its expected value is E[X_{i}]=\frac{1}{p_{i}}=\frac{n}{n-i+1}.

Thus E[X] = E[\sum_{i=1}^{n}X_{i}]= n\sum_{i=1}^{n}\frac{1}{i}.

Using the same technique described in the text, we can show that E[X] = nH(n) = n\ln n+\Theta (n).

(b) Things are essentially the same when the kn coupons are organized into n disjoint sets of k. The probability of drawing a coupon from a new set when i-1 sets are already covered is p_{i}=\frac{k(n-i+1)}{kn}=\frac{n-i+1}{n}, exactly as in part (a). Hence the expected number of boxes we must buy before having at least one coupon from every set is again nH(n) = n\ln n +\Theta (n), independent of k.
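The conclusion can be checked by simulation; the sketch below (the coupon-to-set mapping is chosen for convenience) simulates the paired version of part (a) and compares the sample mean with nH(n):

```python
# Monte Carlo check of the paired coupon collector: the expected number
# of boxes should be n * H(n), where H(n) is the nth harmonic number.
import random

def boxes_needed(n: int, k: int = 2) -> int:
    """Buy boxes of kn coupon types until every one of the n size-k sets
    (here set j consists of coupons jk, ..., jk + k - 1) is represented."""
    covered, boxes = set(), 0
    while len(covered) < n:
        boxes += 1
        covered.add(random.randrange(k * n) // k)  # set index of the coupon
    return boxes

def harmonic(n: int) -> float:
    return sum(1 / i for i in range(1, n + 1))

random.seed(0)
n, trials = 20, 20_000
avg = sum(boxes_needed(n) for _ in range(trials)) / trials
# expected value is n * H(n), about 71.95 for n = 20; allow sampling error
assert abs(avg - n * harmonic(n)) < 2.0
```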



