Power regression is a type of regression analysis used to model curvilinear relationships that follow a power law, where the dependent variable is proportional to a power of the independent variable. In situations where linear regression does not aptly capture the relationship between the dependent and independent variables, power regression can be a viable alternative.
In this detailed guide, we will introduce the concept of power regression, its applications, and step-by-step instructions on how to perform it in the R programming language.
Introduction to Power Regression
A power regression can be represented by the equation:

Y = a * X^b

Where:
- Y is the dependent variable.
- X is the independent variable.
- a and b are constants.
To linearize this relationship and make it suitable for linear regression techniques, we take the logarithm of both sides:

ln(Y) = ln(a) + b * ln(X)

This transforms the power regression problem into a linear regression problem: the slope is the exponent b, and the intercept is ln(a).
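As a quick sanity check, the identity behind this transformation can be verified numerically in R (the constants below are arbitrary illustration values):

```r
# log(a * x^b) should equal log(a) + b * log(x) for positive a and x
a <- 2
b <- 1.5
x <- c(1, 10, 100)
all.equal(log(a * x^b), log(a) + b * log(x))  # TRUE
```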
When to Use Power Regression?
Power regression is ideal for:
- Modeling phenomena that follow a power law.
- Situations where the rate of change in the dependent variable is not constant.
- Curvilinear relationships that don’t fit well with other types of regression models.
Performing Power Regression in R
Sample Data:
For illustration purposes, let’s assume we have data that seems to follow a power-law relationship:
# Generate sample data
set.seed(123)
X <- seq(1, 100, by = 2)
Y <- 2 * X^1.5 + rnorm(length(X), 0, 10)
data <- data.frame(X, Y)
Visualizing the Data:
Use ggplot2 to visualize the data and understand its structure:
library(ggplot2)
ggplot(data, aes(x = X, y = Y)) +
  geom_point() +
  ggtitle("Scatterplot of Y against X")

Transforming the Data:
Since power regression is inherently non-linear, we’ll first transform our data to make it linear. Note that log() is only defined for positive values, so drop any non-positive Y values first (with this seed, the noise makes the first Y value negative):
data <- subset(data, Y > 0)  # log() requires positive values
data$ln_X <- log(data$X)
data$ln_Y <- log(data$Y)
Fitting a Linear Regression Model to the Transformed Data:
Now, fit a linear regression model to the transformed data:
linear_model <- lm(ln_Y ~ ln_X, data = data)
summary(linear_model)
The coefficient of ln_X provides an estimate for b in our power regression equation.
Back-transforming to Obtain the Power Regression Equation:
From the linear regression output, determine the constants a and b:
- b is the coefficient of ln_X.
- a is exp(intercept), since the intercept of the linearized model estimates ln(a).
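Putting the back-transformation into code (a self-contained sketch that rebuilds the fit from the sample data above; the subset step guards against non-positive Y values, which log() cannot handle):

```r
set.seed(123)
X <- seq(1, 100, by = 2)
Y <- 2 * X^1.5 + rnorm(length(X), 0, 10)
data <- subset(data.frame(X, Y), Y > 0)  # log() requires positive values
data$ln_X <- log(data$X)
data$ln_Y <- log(data$Y)
linear_model <- lm(ln_Y ~ ln_X, data = data)

a <- exp(coef(linear_model)[[1]])  # intercept estimates ln(a)
b <- coef(linear_model)[[2]]       # slope estimates the exponent b
cat("Estimated model: Y =", round(a, 3), "* X ^", round(b, 3), "\n")
```

With this simulated data, a and b should come out close to the true values of 2 and 1.5 used to generate it.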

Visualizing the Power Regression Fit:
Overlay the power regression curve on the original scatterplot:
ggplot(data, aes(x = X, y = Y)) +
  geom_point() +
  stat_function(fun = function(x) exp(coef(linear_model)[1]) * x^coef(linear_model)[2],
                color = "red") +
  ggtitle("Power Regression of Y on X")

Making Predictions:
To predict Y for new values of X:
new_data <- data.frame(X = c(105, 110))
new_data$Y_pred <- exp(predict(linear_model, newdata = data.frame(ln_X = log(new_data$X))))
print(new_data)
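Beyond point predictions, predict() can also return prediction intervals on the log scale; exponentiating the bounds yields (asymmetric) intervals on the original Y scale. A self-contained sketch, rebuilding the model from the sample data above:

```r
set.seed(123)
X <- seq(1, 100, by = 2)
Y <- 2 * X^1.5 + rnorm(length(X), 0, 10)
data <- subset(data.frame(X, Y), Y > 0)  # log() requires positive values
data$ln_X <- log(data$X)
data$ln_Y <- log(data$Y)
linear_model <- lm(ln_Y ~ ln_X, data = data)

# Intervals are computed on the log scale, then exponentiated
log_pred <- predict(linear_model,
                    newdata = data.frame(ln_X = log(c(105, 110))),
                    interval = "prediction")
exp(log_pred)  # columns: fit, lwr, upr on the original Y scale
```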
Caveats and Considerations:
- Assumptions: Ensure that the transformed data meets the assumptions of linear regression – linearity, independence, homoscedasticity, and normality of residuals.
- Overfitting: As with any regression model, there’s a risk of overfitting, especially with a limited data set.
- Model Validity: Always validate the model with out-of-sample data to ensure its reliability and robustness.
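One further caveat worth noting: the log-transform also changes the error structure, so the linearized fit effectively minimizes relative (multiplicative) error. When the noise is additive on the original scale, as in the sample data used here, fitting Y = a * X^b directly with nls() is a common alternative (a sketch; the starting values are rough guesses, and nls() can fail to converge if they are far off):

```r
set.seed(123)
X <- seq(1, 100, by = 2)
Y <- 2 * X^1.5 + rnorm(length(X), 0, 10)
data <- data.frame(X, Y)

# Fit the power model directly by nonlinear least squares
nls_model <- nls(Y ~ a * X^b, data = data, start = list(a = 1, b = 1))
coef(nls_model)  # estimates of a and b
```

Comparing the nls() estimates with those from the log-linear fit is a useful check on how much the transformed error structure matters for your data.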
Conclusion:
Power regression is a valuable tool in the statistician’s toolbox, especially when modeling curvilinear relationships that follow a power law. R, with its powerful suite of functions and libraries, makes the implementation of power regression straightforward. However, as with all statistical models, understanding the underlying assumptions and potential pitfalls ensures that you derive meaningful and valid insights from your data.