
Introduction

Logistic regression is widely used to model the outcomes of a categorical dependent variable. For categorical variables it is inappropriate to use linear regression because the response values are not measured on a ratio scale and the error terms are not normally distributed. In addition, the linear regression model can generate as predicted values any real number ranging from negative to positive infinity, whereas a categorical variable can only take on a limited number of discrete values within a specified range.

The theory of generalized linear models of Nelder and Wedderburn [9] identifies a number of key properties that are shared by a broad class of distributions. This has allowed for the development of modeling techniques that can be used for categorical variables in a way roughly analogous to that in which the linear regression model is used for continuous variables. Logistic regression has proven to be one of the most versatile techniques in the class of generalized linear models.

Whereas linear regression models equate the expected value of the dependent variable to a linear combination of independent variables and their corresponding parameters, generalized linear models equate the linear component to some function of the probability of a given outcome on the dependent variable. In logistic regression, that function is the logit transform: the natural logarithm of the odds that some event will occur. In linear regression, parameters are estimated by the method of least squares, which minimizes the sum of squared deviations of the predicted values from the observed values. This involves solving a system of linear equations, one equation and one unknown for each parameter to be estimated, which is usually an algebraically straightforward task. For logistic regression, least squares estimation does not yield minimum variance unbiased estimators of the parameters. Instead, maximum likelihood estimation is used to solve for the parameters that best fit the data.
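
To make the contrast concrete, the two model forms discussed above can be written side by side. Here $Y$ denotes the dependent variable, $x_1, \ldots, x_K$ the independent variables with parameters $\beta_0, \ldots, \beta_K$, and $\pi$ the probability of the event of interest; this notation is used here only for illustration and is defined formally in the next section.

\[
  E(Y) = \beta_0 + \beta_1 x_1 + \cdots + \beta_K x_K,
  \qquad
  \hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{N}
    \bigl(y_i - \beta_0 - \beta_1 x_{i1} - \cdots - \beta_K x_{iK}\bigr)^2
\]

\[
  \operatorname{logit}(\pi) = \ln\!\left(\frac{\pi}{1-\pi}\right)
  = \beta_0 + \beta_1 x_1 + \cdots + \beta_K x_K
\]

The first display is the linear regression model together with its least squares criterion; the second equates the natural logarithm of the odds, rather than the expected value itself, to the linear component.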

In the next section, we specify the logistic regression model for a binary dependent variable and show how its parameters are estimated using maximum likelihood. The model is then generalized to a dependent variable with more than two categories. In the final section, we outline a generic implementation of the algorithm used to estimate logistic regression models.


