The Gauss-Markov Theorem (pt. 3)

Today, we will finish up proving the \hat \beta_2 case of the Gauss-Markov Theorem. Recall that this theorem goes as follows:

Theorem: Under the assumptions described in this post, the estimators \hat \beta_1 and \hat \beta_2 are the best linear unbiased estimators of \beta_1 and \beta_2 respectively. That is, within the class of linear unbiased estimators, they have minimum variance.

So where are we in our journey of proving this theorem? I have already shown that \hat \beta_2 is linear and that it is unbiased, meaning the expected value of \hat \beta_2 is equal to \beta_2. Today, I show that it has minimum variance.

Proof: Suppose we have some linear unbiased estimator of \beta_2, not necessarily \hat \beta_2; call it \beta*. Since \beta* is a linear estimator, \beta* = \sum w_i Y_i for some weights w_i. Then E(\beta*) = E(\sum w_i Y_i) = \sum w_i E(Y_i) = \sum w_i E(\beta_1 + \beta_2 X_i + u_i) = \sum w_i(\beta_1 + \beta_2 X_i + E(u_i)). Since E(u_i) = 0 is one of our assumptions listed here, it follows that E(\beta*) = \beta_1\sum w_i + \beta_2\sum w_iX_i. For \beta* to be unbiased, this must equal \beta_2 no matter what values \beta_1 and \beta_2 take, so \sum w_i = 0 and \sum w_iX_i = 1.
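To make this concrete, here is a quick numerical sanity check in Python (a sketch with made-up X values, assuming the OLS weights k_i = \frac{x_i}{\sum x_i^2} with x_i = X_i - \bar X from the earlier posts in this series): the OLS weights themselves satisfy the two unbiasedness constraints.

```python
import numpy as np

# Hypothetical regressor values; any non-constant X works.
X = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
x = X - X.mean()            # deviations from the mean, x_i = X_i - X-bar
k = x / np.sum(x**2)        # OLS weights k_i for beta_2-hat (assumed from earlier posts)

print(np.isclose(np.sum(k), 0.0))      # sum of the weights is 0
print(np.isclose(np.sum(k * X), 1.0))  # sum of w_i * X_i is 1
```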

So what about the variance?

var(\beta*) = var(\sum w_i Y_i) = \sum var(w_i Y_i) = \sum w_i^2 var(Y_i) = \sigma^2 \sum w_i^2, since cov(Y_i, Y_j) = 0 for i \neq j (the independence condition for linear regression) and var(Y_i) = var(u_i) = \sigma^2 (the constant-variance condition for linear regression).
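Here is a small Monte Carlo sketch of this variance formula (all of the numbers below, including the weights, are made up for illustration): for any fixed weights w_i, the sample variance of \sum w_i Y_i across simulated datasets should land close to \sigma^2 \sum w_i^2.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: fixed regressors, true parameters, and arbitrary fixed weights.
X = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
beta1, beta2, sigma = 2.0, 0.5, 1.5
w = np.array([0.3, -0.1, 0.05, -0.2, 0.1])   # any fixed weights w_i

# Simulate many datasets Y_i = beta1 + beta2*X_i + u_i with independent,
# mean-zero, constant-variance errors, and compute sum(w_i * Y_i) for each.
u = rng.normal(0.0, sigma, size=(200_000, X.size))
Y = beta1 + beta2 * X + u
estimates = Y @ w

print(estimates.var())              # simulated var(beta*)
print(sigma**2 * np.sum(w**2))      # theoretical sigma^2 * sum(w_i^2)
```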

So, var(\beta*) = \sigma^2\sum w_i^2. Writing x_i = X_i - \bar X for the deviation of X_i from the sample mean, we can add and subtract \frac{x_i}{\sum x_i^2} inside the sum: var(\beta*) = \sigma^2\sum (w_i - \frac{x_i}{\sum x_i^2} + \frac{x_i}{\sum x_i^2})^2 = \sigma^2\sum ((w_i - \frac{x_i}{\sum x_i^2}) + \frac{x_i}{\sum x_i^2})^2 = \sigma^2\sum (w_i-\frac{x_i}{\sum x_i^2})^2 + 2\sigma^2 \sum (w_i - \frac{x_i}{\sum x_i^2})(\frac{x_i}{\sum x_i^2}) + \sigma^2 \sum (\frac{x_i}{\sum x_i^2})^2 = \sigma^2\sum (w_i-\frac{x_i}{\sum x_i^2})^2 + 2\sigma^2 \sum (w_i - \frac{x_i}{\sum x_i^2})(\frac{x_i}{\sum x_i^2}) + \sigma^2 \frac{\sum x_i^2}{(\sum x_i^2)^2}.

Fascinatingly, 2\sigma^2 \sum (w_i - \frac{x_i}{\sum x_i^2})(\frac{x_i}{\sum x_i^2}) = 0: \sum(w_i - \frac{x_i}{\sum x_i^2})(\frac{x_i}{\sum x_i^2}) = (\frac{1}{\sum x_i^2})(\sum w_ix_i - \sum \frac{x_i^2}{\sum x_i^2}) = (\frac{1}{\sum x_i^2})(\sum w_i(X_i - \bar X) - \frac{\sum x_i^2}{\sum x_i^2}) = (\frac{1}{\sum x_i^2})(\sum w_iX_i - \bar X \sum w_i - 1) = (\frac{1}{\sum x_i^2})(1 - 0 - 1) = 0, where the last step uses the unbiasedness conditions \sum w_iX_i = 1 and \sum w_i = 0 derived above.

So, var(\beta*) = \sigma^2\sum (w_i - \frac{x_i}{\sum x_i^2})^2 + \sigma^2 \frac{1}{\sum x_i^2}. The second term does not depend on the w_i at all, and the first term is a sum of squares, so it is nonnegative and equals zero exactly when w_i = \frac{x_i}{\sum x_i^2} for every i. But that is exactly what k_i is for \hat \beta_2. Thus, the variance of any linear unbiased estimator of \beta_2 is at least \sigma^2 \frac{1}{\sum x_i^2}, and \hat \beta_2 is the estimator that achieves this minimum!
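As a final sketch (again with made-up X values, and with an arbitrary competing set of weights w_i built by perturbing k_i in a direction orthogonal to both the constant vector and X so that the unbiasedness constraints still hold), we can check numerically that the cross term vanishes, that the decomposition above holds, and that var(\beta*) never drops below \sigma^2 \frac{1}{\sum x_i^2}:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical setup, consistent with the sketches above.
X = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
sigma = 1.5
x = X - X.mean()
k = x / np.sum(x**2)                      # OLS weights k_i

# Build arbitrary weights w satisfying the unbiasedness constraints:
# start from k and add a perturbation d orthogonal to both the all-ones
# vector and x, so that sum(w) stays 0 and sum(w * X) stays 1.
d = rng.normal(size=X.size)
ones = np.ones_like(X)
d -= (d @ ones) / (ones @ ones) * ones    # remove the constant component
d -= (d @ x) / (x @ x) * x                # remove the x-component (and hence the X-component)
w = k + d

print(np.isclose(np.sum(w), 0.0), np.isclose(np.sum(w * X), 1.0))

# The cross term vanishes ...
print(np.isclose(np.sum((w - k) * k), 0.0))

# ... so the decomposition holds, and var(beta*) >= var(beta_2-hat).
var_w = sigma**2 * np.sum(w**2)
var_k = sigma**2 / np.sum(x**2)
print(np.isclose(var_w, sigma**2 * np.sum((w - k)**2) + var_k))
print(var_w >= var_k)
```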

So there you have it! Proof that, among all linear unbiased estimators in our simple linear regression model, \hat \beta_2 has the smallest possible variance.

The case for \beta_1 follows a similar train of thought; however, I believe I've spent enough time posting about this theorem. If you'd like to show the \beta_1 case, feel free to comment!
