Poisson Regression in R


Regression models are pivotal in statistics for understanding the relationships between variables. One such regression model, designed specifically for count data, is the Poisson regression. This article provides an in-depth look into Poisson Regression, its assumptions, its application in R, and some challenges that one might encounter. We’ll also see how to evaluate the model’s goodness of fit.

What is Poisson Regression?

Poisson Regression is a type of generalized linear model (GLM) used for modeling count data. This is especially useful when the dependent variable is a count, such as the number of times an event occurs.

Assumptions of Poisson Regression

  1. Mean Equals Variance: Conditional on the predictors, the mean of the dependent variable equals its variance (equidispersion).
  2. Independence: Observations are independent of each other.
  3. Linearity in Log: The log of the expected counts is a linear combination of the predictors.
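As a quick sanity check of the first assumption, you can compare the sample mean and variance of your count variable. A minimal sketch on simulated data (the counts here are hypothetical, drawn from a true Poisson distribution):

```r
# Simulated Poisson counts: mean and variance should roughly agree
set.seed(42)
y <- rpois(1000, lambda = 4)  # true mean (and variance) is 4

mean(y)  # close to 4
var(y)   # close to mean(y) under equidispersion
```

If the sample variance is much larger than the mean, that is an early warning of overdispersion, discussed below.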

Setting Up in R

Before delving into running a Poisson Regression, you need to make sure you have R and, optionally, RStudio installed. The glm() function used below ships with base R, so no extra package is required; the MASS package is needed only later, for negative binomial regression.


Running Poisson Regression in R

Assuming you have a dataset named data and you want to predict the count Y based on predictor variables X1 and X2:

model <- glm(Y ~ X1 + X2, data=data, family=poisson(link="log"))

The output will provide you with coefficients, standard errors, z-values, and p-values for each predictor.
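As a self-contained illustration, the model can be fit end to end on simulated data. The dataset and true coefficients below are hypothetical, chosen so the fit is easy to verify:

```r
# Simulate count data from a known log-linear model, then fit it
set.seed(123)
n  <- 500
X1 <- rnorm(n)
X2 <- rnorm(n)
Y  <- rpois(n, lambda = exp(0.5 + 0.2 * X1 - 0.3 * X2))
data <- data.frame(Y, X1, X2)

model <- glm(Y ~ X1 + X2, data = data, family = poisson(link = "log"))
summary(model)  # estimates should land near 0.5, 0.2 and -0.3
```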

Interpretation of Coefficients

For Poisson regression, a coefficient describes the change in the log of the expected count for a one-unit change in the predictor, holding the other predictors fixed. To interpret it as a multiplicative change in the count, exponentiate the coefficient.

For example, if the coefficient for X1 is 0.2:

Rate ratio = exp(0.2) ≈ 1.22. This indicates roughly a 22% increase in the expected count for a one-unit increase in X1.
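Worked directly in R (0.2 is a hypothetical coefficient value, as above):

```r
b1 <- 0.2               # hypothetical coefficient for X1 on the log scale
rate_ratio <- exp(b1)   # multiplicative effect on the expected count
rate_ratio              # approximately 1.22
(rate_ratio - 1) * 100  # approximately a 22% increase per unit of X1
```

On a fitted model, exp(coef(model)) applies the same transformation to every coefficient at once.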

Checking Overdispersion

A common challenge with Poisson regression is overdispersion, where the variance is greater than the mean. This can invalidate standard errors and p-values. One way to check for it is to compare the sum of squared Pearson residuals to the residual degrees of freedom:

overdispersion_stat <- sum(residuals(model, type="pearson")^2)
df <- df.residual(model)
overdispersion_stat / df

If this ratio is substantially greater than 1, it suggests overdispersion.
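To see the check flag a real problem, here is a sketch on simulated overdispersed counts: negative binomial data deliberately fit with a Poisson model (all names are hypothetical):

```r
# Negative binomial counts have variance mu + mu^2/size, well above the mean
set.seed(7)
x <- rnorm(300)
y <- rnbinom(300, mu = exp(1 + 0.5 * x), size = 1)

fit   <- glm(y ~ x, family = poisson)
ratio <- sum(residuals(fit, type = "pearson")^2) / df.residual(fit)
ratio  # substantially greater than 1, flagging overdispersion
```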

Addressing Overdispersion: Negative Binomial Regression

When overdispersion is present, one can turn to negative binomial regression, which adds a dispersion parameter to absorb the extra variance. In R, it is implemented by the glm.nb function in the MASS package.

model_nb <- glm.nb(Y ~ X1 + X2, data=data)
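A runnable sketch on simulated overdispersed counts (the variable names and true parameters are hypothetical):

```r
library(MASS)  # provides glm.nb

# Simulate negative binomial counts with dispersion parameter (size) of 1
set.seed(7)
x <- rnorm(300)
y <- rnbinom(300, mu = exp(1 + 0.5 * x), size = 1)
dat <- data.frame(y, x)

fit_nb <- glm.nb(y ~ x, data = dat)
summary(fit_nb)  # reports the extra dispersion parameter, theta
fit_nb$theta     # should be near the true size of 1
```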

Model Evaluation

The summary() function provides most of the crucial statistics. But for further diagnostics:

1. Residual Plots: Plotting residuals can help identify patterns that might suggest issues like non-linearity.

plot(model$fitted.values, residuals(model, type="pearson"))
abline(h=0, lty=2)  # residuals should scatter evenly around zero

2. AIC and BIC: AIC and BIC values can be used for model selection, with lower values indicating a better fit.
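As a sketch, AIC() and BIC() can compare a Poisson fit against a negative binomial fit on the same data; on simulated overdispersed counts (hypothetical names), the negative binomial should win:

```r
library(MASS)  # provides glm.nb

# Simulated overdispersed counts: the negative binomial should fit better
set.seed(7)
x <- rnorm(300)
y <- rnbinom(300, mu = exp(1 + 0.5 * x), size = 1)

fit_pois <- glm(y ~ x, family = poisson)
fit_nb   <- glm.nb(y ~ x)

AIC(fit_pois, fit_nb)  # lower AIC favors the negative binomial here
BIC(fit_pois, fit_nb)
```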



Poisson regression provides a powerful means of analyzing count data, with R offering easy and robust tools for its execution. Understanding the assumptions and being aware of potential pitfalls like overdispersion can ensure your analysis is both accurate and meaningful. And as always, the key to a successful statistical analysis lies not just in the execution but in the interpretation. So, ensure you understand the implications of your findings and their relevance in the broader context of your research or problem domain.
