Cross-Section and Panel Data Analysis, Chapter 3 (截面与面板数据法分析-CH3.ppt)


Chap 3. Multiple Regression Analysis: Estimation

Advantages of multiple regression analysis
- It builds better models for predicting the dependent variable, e.g. by generalizing the functional form (think of the marginal propensity to consume).
- It is more amenable to ceteris paribus analysis.
Key assumption: E(u | educ, exper) = 0. Implication: other factors affecting wage are not related on average to educ and exper.
The multiple linear regression model: y = β0 + β1x1 + β2x2 + … + βkxk + u.

OLS Estimator
OLS minimizes the sum of squared residuals, Σ(yi − β̂0 − β̂1xi1 − … − β̂kxik)².
Ceteris paribus interpretation: holding x2, …, xk fixed, Δŷ = β̂1Δx1. Thus we have controlled for the variables x2, …, xk when estimating the effect of x1 on y.

Holding Other Factors Fixed
The power of multiple regression analysis is that it provides this ceteris paribus interpretation even though the data have not been collected in a ceteris paribus fashion. It allows us to do in non-experimental environments what natural scientists are able to do in a controlled laboratory setting: keep other factors fixed.

OLS and Ceteris Paribus Effects
β̂1 measures the effect of x1 on y after x2, …, xk have been partialled out (netted out).
There are two special cases in which the simple regression of y on x1 produces the same OLS estimate on x1 as the regression of y on x1 and x2:
- the partial effect of x2 on y is zero in the sample, i.e. β̂2 = 0;
- x1 and x2 are uncorrelated in the sample.
Example (data1: 1,832 rural households):
reg consum laborage
reg consum laborage financialK
corr laborage financialK
reg consum laborage
reg consum laborage laboredu
corr laborage laboredu
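The "partialling out" interpretation above can be illustrated with a short simulation: the multiple-regression coefficient on x1 equals the slope from regressing y on the residuals of x1 after x2 has been netted out. This is a minimal sketch with made-up coefficients, not the slide's household data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x2 = rng.normal(size=n)
x1 = 0.5 * x2 + rng.normal(size=n)       # x1 is correlated with x2
y = 1.0 + 2.0 * x1 - 1.0 * x2 + rng.normal(size=n)

# Full OLS of y on (1, x1, x2)
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Partial out x2 from x1: residuals r1 from regressing x1 on (1, x2)
Z = np.column_stack([np.ones(n), x2])
g = np.linalg.lstsq(Z, x1, rcond=None)[0]
r1 = x1 - Z @ g

# Slope of y on r1 alone (r1 has mean ~0, so no intercept is needed)
b1_partial = (r1 @ y) / (r1 @ r1)

# The two routes agree exactly (up to floating-point error)
assert np.allclose(beta[1], b1_partial)
```

This is the sample counterpart of the formula β̂1 = Σ r̂i1 yi / Σ r̂i1² discussed in Section 3.2.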

Goodness-of-Fit
R² also equals the squared correlation coefficient between the actual and the fitted values of y.
R² never decreases, and it usually increases, when another independent variable is added to a regression. The factor that should determine whether an explanatory variable belongs in a model is whether it has a nonzero partial effect on y in the population.

The Expectation of the OLS Estimator
Assumptions MLR.1-MLR.4:
- Linear in parameters.
- Random sampling.
- Zero conditional mean: E(u | x1, …, xk) = 0.
- No perfect collinearity: none of the independent variables is constant, and there are no exact linear relationships among the independent variables.
Theorem (Unbiasedness): under the four assumptions above, E(β̂j) = βj for j = 0, 1, …, k.

Notice 1: Zero Conditional Mean
Exogenous vs. endogenous regressors. The assumption fails under:
- misspecification of the functional form (Chap 9), e.g. omitting a quadratic term, or using the level rather than the log of a variable;
- omission of important factors that are correlated with any independent variable;
- measurement error (Chap 15, IV);
- simultaneous determination of one or more of the x's together with y (Chap 16).
Try to use exogenous variables (geography, history).

Omitted Variable Bias: The Simple Case
The true population model: y = β0 + β1x1 + β2x2 + u.
The underspecified OLS line: ỹ = β̃0 + β̃1x1.
The expectation of β̃1 is E(β̃1) = β1 + β2δ̃1, where δ̃1 is the slope coefficient from the regression of x2 on x1. (Note: in Section 3.2 above, it was x1 that was regressed on x2.) So β̃1 is unbiased in only two cases:
- β2 = 0, i.e. x2 does not appear in the true model;
- δ̃1 = 0, i.e. x2 and x1 are uncorrelated in the sample.
Omitted variable bias: Bias(β̃1) = β2δ̃1.
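The decomposition behind the omitted-variable-bias formula, β̃1 = β̂1 + β̂2δ̃1, holds exactly in any sample, not just in expectation. A numerical check with illustrative coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 400
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)        # the omitted variable, correlated with x1
y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

# "Long" regression: y on (1, x1, x2)
X = np.column_stack([np.ones(n), x1, x2])
b0, b1, b2 = np.linalg.lstsq(X, y, rcond=None)[0]

# "Short" (underspecified) regression: y on x1 alone
S = np.column_stack([np.ones(n), x1])
b1_short = np.linalg.lstsq(S, y, rcond=None)[0][1]

# delta1: slope from regressing the omitted x2 on x1
delta1 = np.linalg.lstsq(S, x2, rcond=None)[0][1]

# Exact in-sample identity: short slope = long slope + b2 * delta1
assert np.allclose(b1_short, b1 + b2 * delta1)
```

Here β2 > 0 and δ̃1 > 0, so the short regression overstates the effect of x1, matching the bias table's "positive bias" case.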

Notice 2: No Perfect Collinearity
This is an assumption only about the explanatory variables; it says nothing about the relationship between u and the x's. Assumption MLR.4 does allow the independent variables to be correlated; they just cannot be perfectly correlated. If we did not allow for any correlation among the independent variables, then multiple regression would not be very useful for econometric analysis.
How to deal with a collinearity problem? Drop one of the correlated variables (e.g. when the pairwise correlation reaches about 0.7).

Notice 3: Over-Specification
Inclusion of an irrelevant variable does not affect the unbiasedness of the OLS estimators, but it can have undesirable effects on their variances.

Variance of the OLS Estimators
Assumption MLR.5, homoskedasticity: Var(u | x1, …, xk) = σ².
The Gauss-Markov assumptions (for cross-sectional regression) are MLR.1-MLR.5: linear in parameters, random sampling, zero conditional mean, no perfect collinearity, and homoskedasticity.
Theorem (Sampling variance of the OLS estimators): under the five assumptions above,
Var(β̂j) = σ² / [SSTj(1 − Rj²)], j = 1, …, k.

More about Var(β̂j)
These are the statistical properties of the regression of y on x = (x1, x2, …, xk).
- The error variance σ²: the only way to reduce it is to add more explanatory variables, which is not always possible or desirable (multicollinearity).
- The total sample variation in xj, SSTj: increase the sample size.
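The component form of the sampling variance, σ²/[SSTj(1 − Rj²)], agrees exactly with the matrix form σ²(X′X)⁻¹. A quick check, with an assumed true error variance σ² = 4:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
x1 = rng.normal(size=n)
x2 = 0.8 * x1 + rng.normal(size=n)        # some collinearity with x1
X = np.column_stack([np.ones(n), x1, x2])
sigma2 = 4.0                              # assumed (known) error variance

# Matrix form: Var(beta_hat) = sigma^2 * (X'X)^{-1}; pick the x1 entry
V = sigma2 * np.linalg.inv(X.T @ X)
var_b1_matrix = V[1, 1]

# Component form: sigma^2 / (SST1 * (1 - R1^2))
SST1 = np.sum((x1 - x1.mean()) ** 2)
Z = np.column_stack([np.ones(n), x2])     # regress x1 on the other regressors
g = np.linalg.lstsq(Z, x1, rcond=None)[0]
resid = x1 - Z @ g
R1sq = 1 - np.sum(resid**2) / SST1
var_b1_formula = sigma2 / (SST1 * (1 - R1sq))

assert np.allclose(var_b1_matrix, var_b1_formula)
```

The component form makes the three levers visible at a glance: smaller σ², larger SSTj, or smaller Rj² all shrink Var(β̂j).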

Multicollinearity
Rj² measures the linear relationships among the independent variables: it is the goodness of fit from regressing xj on the other explanatory variables (including an intercept), i.e. the proportion of the total variation in xj that can be explained by them. If k = 2, R1² is simply the squared sample correlation between x1 and x2. High (but not perfect) correlation between two or more of the independent variables is called multicollinearity.

Small Sample Size
A small sample size means a low SSTj. One thing is clear: everything else being equal, for estimating βj it is better to have less correlation between xj and the other variables.

Notice: The Influence of Multicollinearity
A high degree of correlation between certain independent variables can be irrelevant to how well we can estimate other parameters in the model. High correlation between x2 and x3 does not directly affect the variance of the coefficient on x1; the extreme case is x1 being uncorrelated with both x2 and x3. And as noted earlier, adding a variable does not change unbiasedness. Under multicollinearity the estimates are still unbiased, and the variance of the coefficient we care about has no direct relation to collinearity among the other variables; even where the variance does change, as long as the t statistic remains significant, collinearity is not a big problem.
How to "solve" multicollinearity? By dropping some variables? If we drop a variable that belongs in the population model, we may introduce endogeneity (omitted variable bias).
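The point that collinearity between x2 and x3 need not hurt the coefficient on x1 shows up directly in the variance inflation factor 1/(1 − Rj²). A sketch with simulated data, where x1 is unrelated to the nearly collinear pair (x2, x3):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2000
x1 = rng.normal(size=n)                   # unrelated to x2 and x3
x2 = rng.normal(size=n)
x3 = x2 + 0.1 * rng.normal(size=n)        # x3 nearly collinear with x2

def r_squared_on_others(target, others):
    """R^2 from regressing `target` on the other regressors (with intercept)."""
    Z = np.column_stack([np.ones(len(target))] + others)
    coef = np.linalg.lstsq(Z, target, rcond=None)[0]
    resid = target - Z @ coef
    sst = np.sum((target - target.mean()) ** 2)
    return 1 - np.sum(resid**2) / sst

R1sq = r_squared_on_others(x1, [x2, x3])  # near 0: x1 is almost orthogonal
R2sq = r_squared_on_others(x2, [x1, x3])  # near 1: severe collinearity

vif1 = 1 / (1 - R1sq)   # variance inflation for beta_1: about 1 (none)
vif2 = 1 / (1 - R2sq)   # variance inflation for beta_2: large

assert vif1 < 1.1 and vif2 > 10
```

Var(β̂1) is essentially unaffected by the x2-x3 collinearity, while Var(β̂2) is inflated by two orders of magnitude, exactly as the note above argues.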

Estimating σ²: Standard Errors of the OLS Estimators
df = number of observations − number of estimated parameters = n − (k + 1).
Theorem 3.3 (Unbiased estimation of σ²): under the Gauss-Markov assumptions MLR.1-MLR.5, E(σ̂²) = σ², where σ̂² = SSR/(n − k − 1).
While the presence of heteroskedasticity does not cause bias in the β̂j, it does lead to bias in the usual formula for Var(β̂j), which then invalidates the standard errors. This is important because every regression package computes equation (3.58) as the default standard error for each coefficient.

The Gauss-Markov assumptions (for cross-sectional regression):
1. Linear in parameters
2. Random sampling
3. Zero conditional mean
4. No perfect collinearity
5. Homoskedasticity
Violating any one of assumptions 1-4 leads to biased coefficient estimates. Violating assumption 5 does not bias the estimates, but it biases the computed standard errors of the coefficients and therefore distorts statistical inference about their significance. Further issues in multiple regression:
6. Heteroskedasticity: the usual standard errors of the coefficients are invalid.
7. Small samples: SSTj is small, so the variances are not at their minimum.
8. Multicollinearity: Rj² is large, so the variances are not at their minimum.

Efficiency of OLS: The Gauss-Markov Theorem
Under MLR.1-MLR.5, the OLS estimators are BLUEs: Best (smallest variance) Linear Unbiased Estimators.
Implications of the theorem:
1. There is no need to search among other linear combinations for a better unbiased estimator.
2. If any one of the Gauss-Markov assumptions fails, the BLUE property no longer holds.

Implications
- Use theory and the right functional form.
- Include the variables that are necessary; do not omit them, especially those included in the existing literature.
- Get good measures of the variables.
- Use exogenous variables.
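As a closing check on Theorem 3.3, a small Monte Carlo (with an assumed true σ² = 1) showing that SSR/(n − k − 1) is approximately unbiased for σ², while the naive SSR/n is biased downward because it ignores the degrees of freedom used up by estimation:

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 50, 2          # 2 slopes + intercept, so df = n - k - 1 = 47
sigma2 = 1.0          # assumed true error variance
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta_true = np.array([1.0, 2.0, -1.0])

reps = 2000
est_unbiased, est_naive = [], []
for _ in range(reps):
    y = X @ beta_true + rng.normal(scale=np.sqrt(sigma2), size=n)
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    ssr = np.sum((y - X @ beta) ** 2)
    est_unbiased.append(ssr / (n - k - 1))  # Theorem 3.3: divide by df
    est_naive.append(ssr / n)               # biased: ignores lost df

mean_unbiased = np.mean(est_unbiased)   # close to sigma2
mean_naive = np.mean(est_naive)         # close to sigma2 * (n-k-1)/n

assert abs(mean_unbiased - sigma2) < 0.05
assert mean_naive < mean_unbiased
```

The gap between the two averages is roughly the factor (n − k − 1)/n, which is why the df correction matters most in small samples.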
