2. MLE in Simple Regression Model

$$\varepsilon_i \sim N(0, \sigma^2) \;\Rightarrow\; y_i \sim \text{Normal}$$
$$E(Y_i) = E[\beta_0 + \beta_1 x_i + \varepsilon_i] = \beta_0 + \beta_1 x_i$$
$$V(Y_i) = V[\beta_0 + \beta_1 x_i + \varepsilon_i] = V(\varepsilon_i) = \sigma^2$$

Likelihood for a single observation:

$$L(y_i \mid \beta_0, \beta_1, \sigma^2) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-(y_i - \beta_0 - \beta_1 x_i)^2 / 2\sigma^2} = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\varepsilon_i^2 / 2\sigma^2} = L(\varepsilon_i)$$

Joint density for $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_n$ or $y_1, y_2, \ldots, y_n$:

$$\left(\frac{1}{\sqrt{2\pi}\,\sigma}\right)^{n} e^{-\sum_{i=1}^{n} \varepsilon_i^2 / 2\sigma^2}$$

Note: the joint density is maximized when we minimize

$$\sum_{i=1}^{n} \varepsilon_i^2$$
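
A quick numerical sketch of the setup above (assuming NumPy and SciPy are available; the parameter values are made up for illustration): the joint likelihood is just the product of the per-observation normal densities, and it depends on the data only through $\sum \varepsilon_i^2$.

```python
# Illustrative check: the joint likelihood is the product of the individual
# normal densities and depends on the data only through sum(eps**2).
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
beta0, beta1, sigma = 2.0, 0.5, 1.5      # made-up parameter values
x = rng.uniform(0, 10, size=5)
eps = rng.normal(0, sigma, size=5)
y = beta0 + beta1 * x + eps
n = len(y)

# Product of the per-observation densities L(y_i | beta0, beta1, sigma^2)
lik_product = np.prod(norm.pdf(y, loc=beta0 + beta1 * x, scale=sigma))

# Closed form: (1 / (sqrt(2*pi) * sigma))^n * exp(-sum(eps^2) / (2*sigma^2))
lik_closed = (1 / (np.sqrt(2 * np.pi) * sigma)) ** n * np.exp(-np.sum(eps**2) / (2 * sigma**2))

print(lik_product, lik_closed)  # the two values agree
```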

Log-Likelihood
Note: the log-likelihood can be used to estimate $\sigma^2$, $\beta_1$, and $\beta_0$. Take the partial derivative with respect to $\sigma^2$, not $\sigma$.

$$\ell = \frac{n}{2}\ln\!\left[\frac{1}{2\pi\sigma^2}\right] - \frac{1}{2\sigma^2}\sum\left[y_i - \beta_0 - \beta_1 x_i\right]^2$$
$$\frac{\partial \ell}{\partial \beta_0} = \frac{1}{\sigma^2}\sum\left[y_i - \beta_0 - \beta_1 x_i\right] \overset{\text{want}}{=} 0$$
$$\frac{\partial \ell}{\partial \beta_1} = \frac{1}{\sigma^2}\sum\left[y_i - \beta_0 - \beta_1 x_i\right] x_i \overset{\text{want}}{=} 0$$
$$\frac{\partial \ell}{\partial \sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}\sum \varepsilon_i^2 \overset{\text{want}}{=} 0$$

Setting the first two partial derivatives to zero gives us the normal equations.
So the least-squares estimator is the MLE.
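
A minimal numerical sketch of this equivalence (assuming NumPy and SciPy; the simulated data and starting values are illustrative): the $\beta_0$, $\beta_1$ found by numerically maximizing the log-likelihood match the closed-form least-squares solution.

```python
# Illustrative check: numerically maximizing the normal log-likelihood
# recovers the same beta0, beta1 as closed-form least squares.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, size=n)
y = 2.0 + 0.5 * x + rng.normal(0, 1.5, size=n)   # made-up true parameters

# Closed-form least-squares solution
X = np.column_stack([np.ones(n), x])
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# Negative log-likelihood in (beta0, beta1, log sigma^2); parameterizing by
# log sigma^2 keeps the variance positive during optimization
def negloglik(theta):
    b0, b1, log_s2 = theta
    s2 = np.exp(log_s2)
    resid = y - b0 - b1 * x
    return 0.5 * n * np.log(2 * np.pi * s2) + np.sum(resid**2) / (2 * s2)

theta_mle = minimize(negloglik, x0=[0.0, 0.0, 0.0]).x
print("least squares:", beta_ols)
print("MLE          :", theta_mle[:2])   # essentially identical to beta_ols
```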

Note:

$$\ell = \frac{n}{2}\ln\!\left[\frac{1}{2\pi\sigma^2}\right] - \frac{1}{2\sigma^2}\sum\left[y_i - \beta_0 - \beta_1 x_i\right]^2 = \frac{n}{2}\ln[1] - \frac{n}{2}\ln\left[2\pi\sigma^2\right] - (\sigma^2)^{-1}\,\frac{1}{2}\sum\left[y_i - \beta_0 - \beta_1 x_i\right]^2$$

After differentiation we get:

$$\hat{\sigma}^2 = \frac{\sum \varepsilon_i^2}{n} = \frac{SSE}{n}$$

This is a biased estimate. The unbiased estimate is $\dfrac{SSE}{n-2}$.
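
A small simulation sketch of that bias (assuming NumPy; the sample size and true $\sigma^2$ are arbitrary): on average $SSE/n$ underestimates $\sigma^2$, while $SSE/(n-2)$ is approximately unbiased.

```python
# Illustrative simulation of the bias: on average SSE/n underestimates
# sigma^2, while SSE/(n-2) is approximately unbiased.
import numpy as np

rng = np.random.default_rng(1)
n, sigma2, reps = 20, 4.0, 20000          # arbitrary choices for illustration
x = np.linspace(0, 10, n)
X = np.column_stack([np.ones(n), x])

mle_estimates, unbiased_estimates = [], []
for _ in range(reps):
    y = 1.0 + 2.0 * x + rng.normal(0, np.sqrt(sigma2), size=n)
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    sse = np.sum((y - X @ beta_hat) ** 2)
    mle_estimates.append(sse / n)
    unbiased_estimates.append(sse / (n - 2))

print("true sigma^2        :", sigma2)
print("mean of SSE/n       :", np.mean(mle_estimates))       # biased downward
print("mean of SSE/(n - 2) :", np.mean(unbiased_estimates))  # ~ sigma^2
```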

Gauss-Markov Theorem

If the error terms in a linear regression model are uncorrelated, have equal variances (homoscedasticity), and have expectation 0, then the ordinary least squares estimator has the lowest sampling variance within the class of linear unbiased estimators.

$$E[\varepsilon_i] = 0 \qquad \operatorname{Var}(\varepsilon_i) = \sigma^2 < \infty \;\; \forall i \qquad \operatorname{Cov}(\varepsilon_i, \varepsilon_j) = 0, \;\; i \neq j$$
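
An illustrative simulation of the theorem (assuming NumPy; the competing estimator is just one hypothetical example of a linear unbiased estimator): both the OLS slope and the slope through the first and last observations are unbiased, but the OLS slope has the smaller sampling variance, as Gauss-Markov predicts.

```python
# Illustrative simulation: under these assumptions, both the OLS slope and a
# competing linear unbiased estimator (the slope through the first and last
# observations, a hypothetical alternative) are unbiased, but OLS has the
# smaller sampling variance.
import numpy as np

rng = np.random.default_rng(2)
n, reps = 15, 20000                        # arbitrary choices for illustration
x = np.linspace(0, 10, n)
X = np.column_stack([np.ones(n), x])

ols_slopes, endpoint_slopes = [], []
for _ in range(reps):
    y = 1.0 + 2.0 * x + rng.normal(0, 1.0, size=n)   # iid, equal-variance errors
    beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
    ols_slopes.append(beta_hat[1])
    endpoint_slopes.append((y[-1] - y[0]) / (x[-1] - x[0]))

print("mean OLS slope      :", np.mean(ols_slopes))       # ~ 2, unbiased
print("mean endpoint slope :", np.mean(endpoint_slopes))  # ~ 2, also unbiased
print("var  OLS slope      :", np.var(ols_slopes))        # smaller
print("var  endpoint slope :", np.var(endpoint_slopes))   # larger
```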