# Chapter 5 - Properties of our Estimators

Learning Objectives

• Demonstrate the concept of sampling error
• Derive the best linear unbiased (BLUE) properties of the ordinary least-squares (OLS) estimator
• Develop the formula for the standard error of the OLS coefficient
• Describe the consistency property of the OLS estimator
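The first objective, sampling error, can be demonstrated with a short simulation. This is an illustrative sketch, not the book's example: the population parameters (intercept 2, slope 0.5, the noise level, and the population size) are all hypothetical values chosen here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population: y = 2 + 0.5*x + noise, with 100,000 members.
N = 100_000
x_pop = rng.normal(10, 3, N)
y_pop = 2.0 + 0.5 * x_pop + rng.normal(0, 2, N)

def ols_slope(x, y):
    """OLS slope estimate: cov(x, y) / var(x), computed in deviation form."""
    xd = x - x.mean()
    return (xd @ (y - y.mean())) / (xd @ xd)

# Two different random samples of 50 give two different slope estimates;
# the gap between each estimate and the true slope 0.5 is sampling error.
for _ in range(2):
    idx = rng.choice(N, size=50, replace=False)
    print(round(ols_slope(x_pop[idx], y_pop[idx]), 3))
```

Running OLS on the full population recovers a slope very close to 0.5, while each small sample misses it by a random amount.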

What We Learned

• An unbiased estimator gets the right answer on average across samples.
• Larger samples produce more accurate estimates (smaller standard error) than smaller samples.
• Under assumptions CR1-CR3, OLS is the best linear unbiased estimator (BLUE).
• We can use our sample data to estimate the accuracy of our sample coefficient as an estimate of the population coefficient.
• Consistency means that the estimator converges to the true population value as the sample size grows; applied to the whole population, it would get the right answer.
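The first two points above can be checked by Monte Carlo: draw many samples, estimate the slope in each, and compare the distribution of estimates across sample sizes. A minimal sketch, with a hypothetical true slope of 0.5 and errors drawn independently of X (so the CR-style assumptions hold by construction):

```python
import numpy as np

rng = np.random.default_rng(1)
beta = 0.5  # hypothetical true population slope

def slope_estimates(n, reps=2000):
    """Draw `reps` independent samples of size n; return the OLS slope from each."""
    est = np.empty(reps)
    for r in range(reps):
        x = rng.normal(10, 3, n)
        eps = rng.normal(0, 2, n)   # errors independent of x by construction
        y = 2.0 + beta * x + eps
        xd = x - x.mean()
        est[r] = (xd @ (y - y.mean())) / (xd @ xd)
    return est

small, large = slope_estimates(20), slope_estimates(200)
print(round(small.mean(), 2), round(large.mean(), 2))  # both close to 0.5: unbiased
print(small.std() > large.std())   # larger samples -> smaller standard error
```

The average estimate is near 0.5 at both sample sizes (unbiasedness does not require a large n), while the spread of the estimates, the Monte Carlo analogue of the standard error, shrinks as n grows.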

Math Error in the Book (pg 73)

We specify a population model in which the X variables are uncorrelated with the errors, i.e., $$cov[X_i, \varepsilon_i] = 0$$. However, to prove unbiasedness of OLS, we need $$cov \left [ \frac{x_i}{\sum_{j=1}^n x^2_j}, \varepsilon_i \right ] = 0$$. In many practical settings, the condition $$cov[X_i, \varepsilon_i] = 0$$ implies $$cov \left [ \frac{x_i}{\sum_{j=1}^n x^2_j}, \varepsilon_i \right ] = 0$$ and therefore that OLS is unbiased. In many other settings, for example when the sample is large or the model is correctly specified, it implies that OLS is approximately unbiased. Figure 5.2 in the book shows that OLS is unbiased in our schools example even with a sample size of 20. Nonetheless, $$cov[X_i, \varepsilon_i] = 0$$ doesn't guarantee unbiasedness in every setting, especially if the model is misspecified.
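A sketch of the standard decomposition behind this condition, writing $$x_i$$ for the deviation of $$X_i$$ from its sample mean: substituting the population model into the OLS slope formula gives

$$\hat{\beta} = \frac{\sum_{i=1}^n x_i y_i}{\sum_{i=1}^n x_i^2} = \beta + \sum_{i=1}^n \frac{x_i}{\sum_{j=1}^n x_j^2} \, \varepsilon_i$$

so $$E[\hat{\beta}] = \beta$$ exactly when each weighted error term $$\frac{x_i}{\sum_{j=1}^n x_j^2} \, \varepsilon_i$$ has expectation zero. The weight on $$\varepsilon_i$$ involves every observation's x, not just $$X_i$$, which is why zero covariance between $$X_i$$ and $$\varepsilon_i$$ alone need not suffice.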