## SVM Regression

The SVM algorithm is versatile: it supports not only linear and nonlinear classification but also linear and nonlinear regression. To use SVMs for regression instead of classification, the trick is to reverse the objective. Instead of trying to fit the largest possible street between two classes while limiting margin violations, SVM Regression tries to fit as many instances as possible *on* the street while limiting margin violations (i.e. instances off the street). The width of the street is controlled by a hyperparameter, ε (`epsilon`).
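To see ε in action, here is a small sketch (on a synthetic dataset, not the housing data used below) that fits `SVR` with increasingly wide ε-tubes. Instances that fall strictly inside the tube incur no loss, so widening ε leaves fewer support vectors:

```python
from sklearn.svm import SVR
from sklearn.datasets import make_regression

# small synthetic 1-D regression problem (illustration only)
X, y = make_regression(n_samples=200, n_features=1, noise=10.0, random_state=42)

# widening the epsilon-tube tolerates larger errors, so fewer instances
# end up on or outside the street (i.e. fewer support vectors)
epsilons = (0.1, 10.0, 50.0)
sv_counts = []
for eps in epsilons:
    svr = SVR(kernel="linear", epsilon=eps).fit(X, y)
    sv_counts.append(len(svr.support_))
print(dict(zip(epsilons, sv_counts)))
```

Adding more training instances inside the tube does not affect the model's predictions; the model is said to be ε-insensitive.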

## How to train an SVM model for Regression?

Let’s read a dataset to work with.

```
import pandas as pd
import numpy as np
from sklearn import datasets
housing = datasets.fetch_california_housing()
X = pd.DataFrame(housing.data, columns=housing.feature_names)
y = housing.target
X.head()
```

Now split the data into a training and test set.

```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```

Next we will use scikit-learn’s **LinearSVR** class to perform linear SVM Regression. We will also scale the features with `StandardScaler`, since SVMs are sensitive to the scale of the features.

```
from sklearn.svm import LinearSVR
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
# create a Linear SVM Regression model with feature scaling
svm_reg = make_pipeline(StandardScaler(), LinearSVR())
# train it on the training set
svm_reg.fit(X_train, y_train)
# make predictions on the test set
y_pred = svm_reg.predict(X_test)
# measure error
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
rmse
```

```
# output
0.767459121315096
```
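The RMSE of about 0.77 comes from `LinearSVR`’s defaults (ε = 0, C = 1); both hyperparameters are worth tuning. Below is one possible sketch of a grid search over them, shown on a synthetic stand-in dataset so the snippet runs on its own (swap in `X_train`/`y_train` from above to tune the housing model):

```python
from sklearn.svm import LinearSVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_regression

# stand-in data so this snippet is self-contained
X, y = make_regression(n_samples=500, n_features=8, noise=15.0, random_state=42)

pipe = make_pipeline(StandardScaler(), LinearSVR(max_iter=10000, random_state=42))
param_grid = {
    "linearsvr__epsilon": [0.0, 0.5, 1.0],
    "linearsvr__C": [0.1, 1.0, 10.0],
}
# score by (negated) RMSE so that higher is better, as GridSearchCV expects
search = GridSearchCV(pipe, param_grid, cv=3, scoring="neg_root_mean_squared_error")
search.fit(X, y)
print(search.best_params_)
```

Note that `max_iter` is raised here because `LinearSVR` can emit convergence warnings at its default of 1000 iterations.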

To tackle nonlinear regression tasks, you can use a kernelized SVM model. We can do this with scikit-learn’s **SVR** class, which supports the kernel trick. Here we use a 3rd-degree polynomial kernel.

```
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
# create a kernelized SVR model (3rd-degree polynomial kernel) with feature scaling
svm_reg = make_pipeline(StandardScaler(), SVR(kernel='poly', degree=3))
# train it on the training set
svm_reg.fit(X_train, y_train)
# make predictions on the test set
y_pred = svm_reg.predict(X_test)
# measure error
mse = mean_squared_error(y_test, y_pred)
rmse = np.sqrt(mse)
rmse
```

```
# output
1.002334866987825
```
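Notice that the polynomial kernel actually did worse than the linear model on this dataset; kernel choice matters. As a quick illustration of comparing kernels (again on a synthetic nonlinear stand-in dataset, not the housing data), the sketch below contrasts linear, polynomial, and RBF kernels:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error
from sklearn.datasets import make_friedman1
from sklearn.model_selection import train_test_split

# synthetic data with a known nonlinear relationship
X, y = make_friedman1(n_samples=600, noise=0.5, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

# fit one scaled SVR per kernel and record the test RMSE
results = {}
for kernel in ("linear", "poly", "rbf"):
    model = make_pipeline(StandardScaler(), SVR(kernel=kernel, C=10))
    model.fit(X_tr, y_tr)
    results[kernel] = np.sqrt(mean_squared_error(y_te, model.predict(X_te)))
print(results)
```

On data like this, the RBF kernel typically wins, which is why it is a common default starting point; in practice you would tune the kernel and its hyperparameters (C, ε, gamma, degree) with cross-validation, as shown for the linear model above.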