Today, we will finish up proving the $\hat{\beta}_1$ case of the Gauss-Markov Theorem. Recall that this theorem goes as follows:

**Theorem:** Under the assumptions described in this post, the estimators $\hat{\beta}_0$ and $\hat{\beta}_1$, in the class of unbiased linear estimators, are best linear unbiased estimators of $\beta_0$ and $\beta_1$ respectively. That is, they are linear unbiased estimators that have minimum variance.

So where are we in our journey of proving this theorem? I have shown that $\hat{\beta}_1$ is linear and that it is unbiased: the expected value of $\hat{\beta}_1$ is equal to $\beta_1$. Today, I show that it has minimum variance.

**Proof:** Suppose we have some unbiased linear estimator of $\beta_1$, not necessarily $\hat{\beta}_1$. Let's call it $\tilde{\beta}_1$. Well, since $\tilde{\beta}_1$ is a linear estimator, $\tilde{\beta}_1 = \sum_{i=1}^n c_i Y_i$ for some weights $c_1, \dots, c_n$. Additionally, since $\tilde{\beta}_1$ is an unbiased estimator of $\beta_1$, $E[\tilde{\beta}_1] = \beta_1$. Since $E[Y_i] = \beta_0 + \beta_1 x_i$ is one of our assumptions listed here, it follows that $E[\tilde{\beta}_1] = \sum_{i=1}^n c_i (\beta_0 + \beta_1 x_i) = \beta_0 \sum_{i=1}^n c_i + \beta_1 \sum_{i=1}^n c_i x_i$. This means that if $\tilde{\beta}_1$ is unbiased, then $\sum_{i=1}^n c_i = 0$ and $\sum_{i=1}^n c_i x_i = 1$.
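As a quick numerical sanity check (a sketch, assuming the usual least-squares weights $k_i = (x_i - \bar{x}) / \sum_j (x_j - \bar{x})^2$ for $\hat{\beta}_1$; the predictor values `x` here are made up), the least-squares weights themselves satisfy these two constraints:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=20)  # hypothetical predictor values

# Least-squares weights for the slope: k_i = (x_i - xbar) / sum_j (x_j - xbar)^2
k = (x - x.mean()) / ((x - x.mean()) ** 2).sum()

# Any unbiased linear estimator's weights must satisfy these constraints,
# and the least-squares weights do:
print(k.sum())        # approximately 0
print((k * x).sum())  # approximately 1
```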

So what about the variance?

$\mathrm{Var}(\tilde{\beta}_1) = \mathrm{Var}\left(\sum_{i=1}^n c_i Y_i\right) = \sum_{i=1}^n c_i^2 \mathrm{Var}(Y_i) = \sigma^2 \sum_{i=1}^n c_i^2$, since $\mathrm{Cov}(Y_i, Y_j) = 0$ for $i \neq j$ (the independence condition for linear regression) and $\mathrm{Var}(Y_i) = \sigma^2$ (the constant variance condition for linear regression).
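This variance formula is easy to check by simulation. Here is a minimal sketch, with made-up values for $\beta_0$, $\beta_1$, $\sigma$, and $x$, and an arbitrary fixed choice of weights $c_i$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, b0, b1, sigma = 15, 1.0, 3.0, 2.0  # hypothetical parameters
x = rng.normal(size=n)
c = rng.normal(size=n)                # arbitrary fixed weights

# Simulate many datasets Y_i = b0 + b1*x_i + eps_i with independent noise,
# then compare the empirical variance of sum_i c_i Y_i to sigma^2 * sum_i c_i^2.
reps = 200_000
Y = b0 + b1 * x + sigma * rng.normal(size=(reps, n))
print((Y @ c).var())              # empirical variance of the linear estimator
print(sigma**2 * (c ** 2).sum())  # theoretical variance from the formula
```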

So, write $c_i = k_i + d_i$, where $k_i = \frac{x_i - \bar{x}}{\sum_{j=1}^n (x_j - \bar{x})^2}$ are the weights of $\hat{\beta}_1$ and $d_i = c_i - k_i$. Then

$$\sum_{i=1}^n c_i^2 = \sum_{i=1}^n k_i^2 + 2\sum_{i=1}^n k_i d_i + \sum_{i=1}^n d_i^2$$

Fascinatingly, $\sum_{i=1}^n k_i d_i = 0$:

$$\sum_{i=1}^n k_i d_i = \frac{\sum_{i=1}^n (x_i - \bar{x}) d_i}{\sum_{j=1}^n (x_j - \bar{x})^2} = \frac{\sum_{i=1}^n d_i x_i - \bar{x} \sum_{i=1}^n d_i}{\sum_{j=1}^n (x_j - \bar{x})^2} = 0,$$

since $\sum_{i=1}^n d_i = \sum_{i=1}^n c_i - \sum_{i=1}^n k_i = 0$ and $\sum_{i=1}^n d_i x_i = \sum_{i=1}^n c_i x_i - \sum_{i=1}^n k_i x_i = 1 - 1 = 0$ (the weights $k_i$ satisfy the same two unbiasedness constraints as the $c_i$).

So, $\mathrm{Var}(\tilde{\beta}_1) = \sigma^2 \sum_{i=1}^n d_i^2 + \sigma^2 \sum_{i=1}^n k_i^2$. But the latter term of this sum is a constant. Thus, the only way to minimize the variance is to let $d_i = 0$ for every $i$, i.e., $c_i = k_i$. But those are exactly the weights of $\hat{\beta}_1$! Thus, if there is a linear unbiased estimator of $\beta_1$ that has minimum variance, it has to be $\hat{\beta}_1$!
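The decomposition can be seen concretely in a short sketch. Here the setup is assumed as above: $k_i$ are the least-squares weights, and the perturbation $d$ is built so that $c = k + d$ still satisfies the unbiasedness constraints:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 20
x = rng.normal(size=n)
k = (x - x.mean()) / ((x - x.mean()) ** 2).sum()

# Build d with sum d_i = 0 and sum d_i x_i = 0 by projecting a random vector
# onto the orthogonal complement of span{1, x}; then c = k + d is still a
# valid set of unbiased-estimator weights.
A = np.column_stack([np.ones(n), x])
z = rng.normal(size=n)
d = z - A @ np.linalg.lstsq(A, z, rcond=None)[0]
c = k + d

print(c.sum(), (c * x).sum())  # unbiasedness constraints: ~0 and ~1
print((k * d).sum())           # the cross term vanishes: ~0
# So sum c_i^2 = sum k_i^2 + sum d_i^2, minimized exactly when d = 0:
print((c ** 2).sum() - ((k ** 2).sum() + (d ** 2).sum()))  # ~0
```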

So there you have it! Proof that our estimator $\hat{\beta}_1$ of $\beta_1$ is as close as we can get in our simple linear regression models.

The case for $\hat{\beta}_0$ follows a similar train of thought; however, I believe I've spent enough time posting about this theorem. If you'd like to show the $\hat{\beta}_0$ case, feel free to comment!