
Introduction
The Durbin-Watson (DW) test is a statistical test that is used to detect the presence of autocorrelation (a relationship between values separated from each other by a given time lag) in the residuals (prediction errors) of a regression analysis.
When developing a regression model, one of the key assumptions is that the residuals are not autocorrelated. Autocorrelation occurs when the residuals are not independent of each other; in other words, when the residual at one observation is not independent of the residual at the previous observation. Autocorrelation can lead to unreliable and inefficient estimates of the regression coefficients.
The Durbin-Watson test can help us check this assumption by producing a test statistic that ranges from 0 to 4. The closer the statistic is to 2, the less evidence there is of autocorrelation; a value below 2 suggests positive autocorrelation, while a value above 2 suggests negative autocorrelation.
This article will guide you on how to perform the Durbin-Watson test in Python using the StatsModels library.
Data Preparation
Let’s first load and prepare our data. In this example, we will use the Boston Housing dataset. Note that the load_boston helper was deprecated in scikit-learn 1.0 and removed in version 1.2, so we load the data directly from its original source instead:
import numpy as np
import pandas as pd
# Load the Boston housing dataset from its original source
# (scikit-learn removed load_boston in version 1.2)
data_url = "http://lib.stat.cmu.edu/datasets/boston"
raw_df = pd.read_csv(data_url, sep=r"\s+", skiprows=22, header=None)
data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, :2]])
target = raw_df.values[1::2, 2]
feature_names = ["CRIM", "ZN", "INDUS", "CHAS", "NOX", "RM", "AGE",
                 "DIS", "RAD", "TAX", "PTRATIO", "B", "LSTAT"]
# Prepare DataFrame
boston_df = pd.DataFrame(data, columns=feature_names)
boston_df['MEDV'] = target
This dataset contains information collected by the U.S. Census Service concerning housing in the area of Boston, Massachusetts. It was originally part of the UCI Machine Learning Repository and has been used extensively in the literature to benchmark algorithms. The dataset has 506 instances, 13 numerical/categorical attributes, and a target variable MEDV, the median value of owner-occupied homes in $1000s.
Fitting a Linear Regression Model
Before performing the Durbin-Watson test, we first need to fit a linear regression model to our data. We’ll use the ‘RM’ feature (average number of rooms per dwelling) to predict ‘MEDV’ (Median value of owner-occupied homes).
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
# Feature and target
X = boston_df[['RM']]
y = boston_df['MEDV']
# Split data into training set and test set
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
# Train the model
lm = LinearRegression()
lm.fit(X_train, y_train)
Once we have fitted the model, we can predict values for the test set and calculate the residuals:
# Predicting values
y_pred = lm.predict(X_test)
# Calculating residuals
residuals = y_test - y_pred
Performing the Durbin-Watson Test
To perform the Durbin-Watson test in Python, we can use the durbin_watson function from the StatsModels library:
from statsmodels.stats.stattools import durbin_watson
# Perform Durbin-Watson test
dw_result = durbin_watson(residuals)
print(f'Durbin-Watson statistic: {dw_result}')
The Durbin-Watson test statistic is approximately equal to 2(1 - r), where r is the sample autocorrelation of the residuals at lag 1. Therefore, when r = 0, indicating no serial correlation, the test statistic equals 2. The statistic always lies between 0 and 4: the closer it is to 0, the stronger the evidence for positive serial correlation, and the closer it is to 4, the stronger the evidence for negative serial correlation.
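As a sanity check, the statistic can also be computed directly from its definition: the sum of squared successive differences of the residuals divided by the sum of squared residuals. The sketch below (the variable names are illustrative) compares a manual computation against StatsModels on synthetic, independent residuals:

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(0)
e = rng.normal(size=200)  # independent synthetic residuals, so DW should be near 2

# DW = sum((e_t - e_{t-1})^2) / sum(e_t^2)
dw_manual = np.sum(np.diff(e) ** 2) / np.sum(e ** 2)

print(dw_manual, durbin_watson(e))  # the two values agree
```

Because the residuals here are independent by construction, both numbers should come out close to 2.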
Interpretation
The Durbin-Watson statistic will always lie between 0 and 4.
- A value of 2.0 means there is no autocorrelation detected in the sample.
- Values below 2 indicate positive autocorrelation, and values above 2 indicate negative autocorrelation.
- A value of 0 indicates perfect positive autocorrelation, while a value of 4 indicates perfect negative autocorrelation.
In the context of the Durbin-Watson statistic, “positive” autocorrelation is serial correlation where high (low) values follow high (low) values. “Negative” autocorrelation is serial correlation where high (low) values follow low (high) values.
For example, if the Durbin-Watson statistic is significantly less than 2, there may be evidence to suggest that there is a positive autocorrelation. On the other hand, if the Durbin-Watson statistic is significantly greater than 2, there may be evidence to suggest a negative autocorrelation.
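To see these interpretation rules in action, we can simulate residuals with a known degree of autocorrelation and check where the statistic lands. This is a minimal sketch using a hand-rolled AR(1) process (the ar1_series helper is illustrative, not part of StatsModels):

```python
import numpy as np
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(42)

def ar1_series(phi, n=500):
    """Simulate e_t = phi * e_{t-1} + noise; phi controls the autocorrelation."""
    e = np.zeros(n)
    noise = rng.normal(size=n)
    for t in range(1, n):
        e[t] = phi * e[t - 1] + noise[t]
    return e

dw_pos = durbin_watson(ar1_series(0.8))        # positive autocorrelation: well below 2
dw_none = durbin_watson(rng.normal(size=500))  # independent residuals: near 2
dw_neg = durbin_watson(ar1_series(-0.8))       # negative autocorrelation: well above 2
print(dw_pos, dw_none, dw_neg)
```

Since the statistic is approximately 2(1 - r), a first-order autocorrelation of about 0.8 should push it down toward roughly 0.4, and about -0.8 should push it up toward roughly 3.6.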
Conclusion
The Durbin-Watson test is a vital tool in regression analysis for checking for autocorrelation in the residuals. The residuals of a well-specified model should be randomly distributed and show no patterns. Autocorrelation violates this randomness assumption and thus needs to be detected and addressed.
In this article, we have walked through how to perform the Durbin-Watson test in Python using the StatsModels library. In the world of data analysis and modeling, it is essential to understand your data thoroughly, and one of the ways to ensure this is by testing the underlying assumptions, such as the absence of autocorrelation in regression analysis.