## Kernel Trick in SVM

In our previous post we learned how to train a linear SVM model. Although linear SVM classifiers are efficient and work well in many cases, many datasets are not even close to being linearly separable. One approach to handling a nonlinear dataset is to add more features, such as polynomial features. Adding polynomial features works with many Machine Learning algorithms, but it has a few problems: at a low polynomial degree this method cannot deal with very complex datasets, while at a high polynomial degree it creates a huge number of features, making the model too slow.
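To make the feature blow-up concrete, here is a rough illustration using Scikit-Learn's `PolynomialFeatures` (a transformer the rest of this post does not otherwise use) applied to 30 input features, the same number as in the dataset below:

```
from sklearn.preprocessing import PolynomialFeatures
import numpy as np

# a single dummy sample with 30 features
X = np.zeros((1, 30))

for degree in (2, 3, 5):
    n_out = PolynomialFeatures(degree=degree).fit_transform(X).shape[1]
    print(degree, n_out)  # 2 -> 496, 3 -> 5456, 5 -> 324632
```

The count grows combinatorially with the degree, which is exactly the explosion the kernel trick avoids.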

We can solve this problem using the kernel trick. The kernel trick makes it possible to get the same result as if you had added many polynomial features, even at very high degrees, without actually adding them. Because no features are actually added, there is no combinatorial explosion in the number of features. In Scikit-Learn, this trick is implemented by the SVC class.
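The idea behind the trick is that a polynomial kernel computes the dot product the model would get in the expanded feature space, without ever building that space. A minimal numeric sketch for two 2-D vectors and a degree-2 kernel (the explicit feature map `phi` below is just for illustration):

```
import numpy as np

def phi(v):
    # explicit degree-2 feature map for a 2-D vector (a, b):
    # (1, sqrt(2)*a, sqrt(2)*b, a^2, sqrt(2)*a*b, b^2)
    a, b = v
    s = np.sqrt(2)
    return np.array([1, s * a, s * b, a * a, s * a * b, b * b])

x = np.array([1.0, 2.0])
y = np.array([3.0, 0.5])

explicit = phi(x) @ phi(y)   # dot product in the expanded 6-D space
kernel = (x @ y + 1) ** 2    # polynomial kernel: no expansion needed
print(explicit, kernel)      # both equal 25.0
```

The kernel evaluates one dot product and one power, no matter how large the expanded space would be.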

### How to apply the Kernel Trick in SVM?

Let’s read in the breast cancer dataset to illustrate it.

```
import pandas as pd
import numpy as np
url = 'https://raw.githubusercontent.com/bprasad26/lwd/master/data/breast_cancer.csv'
df = pd.read_csv(url)
df.head()
```

Next, split the data into a training set and a test set.

```
from sklearn.model_selection import train_test_split
X = df.drop('diagnosis', axis=1)
y = df['diagnosis']
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```

Next, we will train an SVM classifier using a third-degree polynomial kernel.

```
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.metrics import accuracy_score
# create an SVC model with a polynomial kernel
svm_clf = make_pipeline(StandardScaler(), SVC(kernel='poly', degree=3))
# train it on training set
svm_clf.fit(X_train, y_train)
# make predictions on the test set
y_pred = svm_clf.predict(X_test)
# measure accuracy
accuracy_score(y_test, y_pred)
```

```
# output
0.868421052631579
```