## What is Randomized Search?

In our previous post, we learned how to do hyperparameter optimization with grid search. Grid search works well when you are exploring only a few combinations, but when the hyperparameter search space is large, it is often better to use RandomizedSearchCV instead. You use RandomizedSearchCV much like GridSearchCV, but instead of trying every possible combination, it evaluates a fixed number of random combinations, selecting a random value for each hyperparameter at every iteration.
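To make the sampling idea concrete, here is a minimal sketch (not from the original post) of how random hyperparameter combinations can be drawn from scipy distributions, which is essentially what RandomizedSearchCV does at each iteration:

```python
from scipy.stats import reciprocal, uniform

# Distributions to sample hyperparameter values from
gamma_dist = reciprocal(0.001, 0.1)  # log-uniform between 0.001 and 0.1
C_dist = uniform(1, 10)              # uniform between 1 and 11 (loc=1, scale=10)

# Draw 5 random hyperparameter combinations, as randomized search would
for i in range(5):
    params = {"gamma": gamma_dist.rvs(random_state=i),
              "C": C_dist.rvs(random_state=i)}
    print(params)
```

Because each value is drawn independently from a continuous distribution, random search can explore values a fixed grid would never contain.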

### How to do Hyperparameter Tuning or Optimization with Randomized Search in Scikit-Learn?

Let’s read a dataset to work with.

```
import pandas as pd
import numpy as np
url = 'https://raw.githubusercontent.com/bprasad26/lwd/master/data/breast_cancer.csv'
df = pd.read_csv(url)
df.head()
```

Now, split the data into training and test set.

```
from sklearn.model_selection import train_test_split
X = df.drop('diagnosis', axis=1).copy()
y = df['diagnosis'].copy()
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
```

Now perform the randomized search. We define a distribution to sample each hyperparameter from: a reciprocal (log-uniform) distribution for `gamma` and a uniform distribution for `C`.

```
from sklearn.svm import SVC
from sklearn.model_selection import RandomizedSearchCV
from scipy.stats import reciprocal, uniform
# create a SVC model
svm_clf = SVC()
# create hyperparameter distributions
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
# create randomized search
rnd_search_cv = RandomizedSearchCV(svm_clf, param_distributions, cv=5, n_iter=50)
rnd_search_cv.fit(X_train, y_train)
```

Let’s check the best estimator.

`rnd_search_cv.best_estimator_`

```
# output
SVC(C=1.3700329477896391, gamma=0.0010837735334468638)
```

We can also check the best score.

`rnd_search_cv.best_score_`

```
# output
0.9164835164835164
```

We can also make predictions using the best estimator like this.

```
# make predictions on test set
y_pred = rnd_search_cv.best_estimator_.predict(X_test)
```
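To gauge how the tuned model does on the held-out data, you can score these predictions with `accuracy_score`. Below is a minimal self-contained sketch, using scikit-learn's built-in breast cancer dataset as a stand-in for the CSV above, with a smaller `n_iter` and `cv` to keep it quick:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split, RandomizedSearchCV
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score
from scipy.stats import reciprocal, uniform

# Built-in dataset as a stand-in for the CSV used in the post
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Same distributions as above, fewer iterations for speed
param_distributions = {"gamma": reciprocal(0.001, 0.1), "C": uniform(1, 10)}
rnd_search_cv = RandomizedSearchCV(
    SVC(), param_distributions, cv=3, n_iter=10, random_state=42)
rnd_search_cv.fit(X_train, y_train)

# Score the best estimator's predictions on the test set
y_pred = rnd_search_cv.best_estimator_.predict(X_test)
print("test accuracy:", accuracy_score(y_test, y_pred))
```

Note that with the default `refit=True`, calling `rnd_search_cv.predict(X_test)` is equivalent, since the search refits the best estimator on the full training set after tuning.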