Conditional expectations

• February 20th, 2008, 07:13 AM
EconMax
Conditional expectations
If X and Y are two independent random variables that take real values, is it true that
E(X) <= E(X|X>=Y) ?
Independence must be crucial, because if X and Y are not independent it is easy to build examples in which the statement is false.
Can anyone help me?
• February 20th, 2008, 10:39 AM
Demen Tolden
Is your notation correct? You didn't make a typo, did you?
• February 20th, 2008, 12:22 PM
EconMax
I think I didn't. The question is:
If X and Y are two independent random variables that take real values, is it true that:
E(X) is less than or equal to E(X given that X is greater than or equal to Y)?
With several distributions (e.g. Fréchet, Weibull, Pareto, uniform) it is easy to verify that this holds, just by doing the computation. I am wondering whether it is also true in general, given independence of X and Y. (If they are not independent, in fact, I am sure that the statement is false.)
[E is the expectations operator]
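For example, here is the sort of check I mean, as a rough Monte Carlo sketch (the particular distributions and parameters below are arbitrary choices):

```python
# Monte Carlo check of E(X) <= E(X | X >= Y) for independent X and Y.
# Rough sketch only; the distributions and parameters are arbitrary choices.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

cases = {
    "uniform / uniform": (rng.uniform(0, 1, n), rng.uniform(0, 1, n)),
    "weibull / pareto":  (rng.weibull(1.5, n), rng.pareto(3.0, n)),
    "normal / normal":   (rng.normal(0, 1, n), rng.normal(2, 1, n)),
}

for name, (x, y) in cases.items():
    cond = x[x >= y]                     # samples where X >= Y
    print(f"{name}: E(X) = {x.mean():.4f}, E(X | X>=Y) = {cond.mean():.4f}")
```

Every case I have tried comes out with E(X | X ≥ Y) at least as large as E(X).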
• February 20th, 2008, 04:19 PM
Demen Tolden
bit4bit or serpico may have to take this question. I don't know what an expectations operator is, and my web searches have revealed nothing.
• February 20th, 2008, 08:19 PM
bit4bit
It's beyond me too. Something to do with stats/probability, I think. serpicojr probably knows...
• February 21st, 2008, 10:52 AM
serpicojr
I think I've found a way. My probability is shaky (partly because I'm a pure mathematician and tend to reach for real analysis language when doing probability, although I'll refrain from that here), so let me know if I slip up anywhere.

So I'm starting with the assumption that:

(1) E(X|X≥Y) = E(X*χ<sub>{X≥Y}</sub>)/P(X≥Y)

where χ<sub>{X≥Y}</sub> is the indicator function of the set {X≥Y}. We're trying to show:

(2) E(X|X≥Y) ≥ E(X)

By my assumption (1), this is the same as showing:

(3) E(X*χ<sub>{X≥Y}</sub>) ≥ E(X) P(X≥Y)

We can show this if, for each real number y, we can show:

(4) E(X*χ<sub>{X≥y}</sub>) ≥ E(X) P(X≥y)

If this is true, then we obtain (3) by integrating (4) over y against the probability density function of Y. This fact seems pretty intuitive. There are three cases:

i. P(X≥y) = 0. Then both sides are 0.

ii. P(X≥y) = 1. Then both sides are equal to E(X).

iii. P(X≥y) and P(X<y) are both nonzero. Then we certainly have:

(5) E(X|X≥y) ≥ y ≥ E(X|X<y)

Using an analog to (1) with Y replaced by y, we have:

(6) E(X*χ<sub>{X≥y}</sub>)/P(X≥y) ≥ E(X*χ<sub>{X<y}</sub>)/P(X<y)

Doing some algebra:

(7) E(X*χ<sub>{X≥y}</sub>) P(X<y) ≥ E(X*χ<sub>{X<y}</sub>) P(X≥y)

And adding E(X*χ<sub>{X≥y}</sub>) P(X≥y) to both sides:

(8) E(X*χ<sub>{X≥y}</sub>)(P(X<y) + P(X≥y)) ≥ (E(X*χ<sub>{X≥y}</sub>) + E(X*χ<sub>{X<y}</sub>))P(X≥y)

And since P(X<y) + P(X≥y) = 1 and E(X*χ<sub>{X≥y}</sub>) + E(X*χ<sub>{X<y}</sub>) = E(X), we have (4).
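
If it helps, here is a quick numerical sanity check of the key inequality (4) (just a sketch; taking X ~ N(1,1) is an arbitrary choice):

```python
# Check step (4): E(X * chi_{X>=y}) >= E(X) * P(X>=y) for several fixed y.
# Sketch only; X ~ N(1, 1) is an arbitrary test distribution.
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(1.0, 1.0, 1_000_000)

for y in (-1.0, 0.0, 1.0, 2.0):
    lhs = (x * (x >= y)).mean()          # E(X * chi_{X>=y})
    rhs = x.mean() * (x >= y).mean()     # E(X) * P(X>=y)
    print(f"y = {y:+.1f}: {lhs:.4f} >= {rhs:.4f} -> {lhs >= rhs}")
```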

Does this look good?
• February 22nd, 2008, 08:20 AM
EconMax
That looks wrong. The result is in fact not true if X and Y are not independent, and it seems to me you never used independence there. Let me give a counterexample.
Suppose that X takes values 3 and 5 with equal probability and Y takes values 2 and 6 with equal probability. Then E(X) = 4. Now suppose that if X=3 then Y=2, and if X=5 then Y=6 (note that X and Y are not independent). Then there is only one case in which X ≥ Y, namely (3,2); therefore E(X|X≥Y) = 3 < 4 = E(X), contrary to your result.
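You can check this by brute-force enumeration (a minimal sketch):

```python
# Enumerate the dependent counterexample: the only (X, Y) outcomes are
# (3, 2) and (5, 6), each with probability 1/2.
pairs = [(3, 2), (5, 6)]
e_x = sum(x for x, _ in pairs) / len(pairs)      # E(X) = 4
above = [x for x, y in pairs if x >= y]          # only (3, 2) has X >= Y
e_cond = sum(above) / len(above)                 # E(X | X >= Y) = 3
print(e_x, e_cond)                               # 4.0 3.0
```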
If you tell me how to write integrals or how to upload files, I will show where I am stuck with the proof. So far, I have been able to prove the result only in two special cases: when Y is a constant (i.e., a degenerate random variable) and when E(X) > E(Y). But I think the result should always hold, provided that X and Y are independent.
Serpico, is there anyone teaching probability theory in your department?
• February 22nd, 2008, 08:50 AM
serpicojr
I did use independence, although not explicitly, and I apologize for that. I used it in justifying that we can derive (3) from (4). For example, consider the left-hand sides. (Since I don't want to format things, I'm going to let a set denote its own indicator function.)

E(X{X≥y})

is an expectation against X's probability distribution. Integrating over y against Y's probability distribution, we get:

E(E(X{X≥y}))

The outer E is an integral over Y's distribution, the inner over X's. Independence allows me to conclude that this is equal to:

E(X{X≥Y})

where this is an expectation against the joint probability distribution of X and Y. Independence is equivalent to the fact that the joint distribution is the product of the individual distributions. Independence also allows me to calculate that E(P(X≥y)) = P(X≥Y).
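
If it helps, this step is easy to test numerically (a sketch; the exponential and normal distributions are arbitrary choices):

```python
# Check that averaging E(X * chi_{X>=y}) over y drawn from Y's distribution
# matches E(X * chi_{X>=Y}) when X and Y are sampled independently.
# Sketch only; the exponential/normal choices are arbitrary.
import numpy as np

rng = np.random.default_rng(2)
n = 200_000
x = rng.exponential(1.0, n)
y = rng.normal(0.5, 1.0, n)              # drawn independently of x

# Iterated expectation: inner average over X for each sampled y,
# then average over a fresh sample of y's.
y_sample = rng.normal(0.5, 1.0, 500)
iterated = np.mean([(x * (x >= yy)).mean() for yy in y_sample])

joint = (x * (x >= y)).mean()            # expectation against the joint law
print(f"iterated: {iterated:.4f}, joint: {joint:.4f}")
```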

If this is your only complaint, I've addressed it, and I think the rest stands.

------------------------------

How did you prove it in the specific cases? It seems like it shouldn't be too hard to construct a general proof from a few specific arguments; there must be a common thread among them. In any case, you can do some rudimentary formatting by using < sup > and < sub >, and you can find the integral sign floating around in some of our other topics (e.g., the one called "Chapter 8...").
• May 22nd, 2008, 10:19 AM
EconMax
Serpico, thanks a lot for your help, and sorry for the late reply. Your argument looks correct. Indeed, it can even be simplified somewhat. I am only missing one key point:
how do you prove that

E(E(X{X≥y}))=E(X{X≥Y}) for X and Y independent?

I get lost when I write out and try to solve the two integrals. Could you go over all the steps? Sorry to bother you again, and thanks in advance.
• May 22nd, 2008, 11:53 AM
serpicojr
Independence means that the joint probability distribution of X and Y is just the product of the individual distributions of X and of Y. So the expression:

E(E(X{X≥y}))

really means:

∫∫ x·χ<sub>{x≥y}</sub> dP<sub>X</sub>(x) dP<sub>Y</sub>(y)

where P<sub>X</sub> and P<sub>Y</sub> are the distributions of X and Y (the inner integral is over x, the outer over y).

Now let A be the set in the plane {(x,y): x ≥ y}. Then:

∫∫ x·χ<sub>{x≥y}</sub> dP<sub>X</sub>(x) dP<sub>Y</sub>(y) = ∫∫<sub>A</sub> x d(P<sub>X</sub> × P<sub>Y</sub>)(x,y)

By independence:

∫∫<sub>A</sub> x d(P<sub>X</sub> × P<sub>Y</sub>)(x,y) = ∫∫<sub>A</sub> x dP<sub>(X,Y)</sub>(x,y) = E(X·χ<sub>{X≥Y}</sub>)
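
As a numerical illustration, here is the same identity checked with quadrature against a Monte Carlo estimate (a sketch; taking X and Y to be independent standard normals is an arbitrary choice):

```python
# Compare the iterated integral of x over the region A = {x >= y}, taken
# against the product of the marginal densities, with a Monte Carlo
# estimate of E(X * chi_{X>=Y}). Sketch; X, Y ~ N(0,1) independent.
import numpy as np
from scipy import integrate
from scipy.stats import norm

iterated, _ = integrate.dblquad(
    lambda x, y: x * norm.pdf(x) * norm.pdf(y),  # inner variable x, outer y
    -8.0, 8.0,                # outer limits for y (density negligible beyond)
    lambda y: y,              # inner lower limit: x >= y, i.e. the region A
    lambda y: 8.0,            # inner upper limit
)

rng = np.random.default_rng(3)
x, y = rng.normal(size=(2, 1_000_000))
mc = (x * (x >= y)).mean()    # Monte Carlo estimate of E(X * chi_{X>=Y})
print(f"iterated integral: {iterated:.4f}, Monte Carlo: {mc:.4f}")
```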