Overview
Optimizing hyperparameters is a crucial step in building machine learning models. Hyperparameters play a key role in determining model performance, and because they are set before training begins, choosing them well is essential for getting the best out of a model. There are different ways of optimizing hyperparameters. This blog covers the two most commonly used techniques: Grid Search and Random Search.
Introduction
Hyperparameters include the learning rate, number of iterations, number of hidden layers, number of hidden units, batch size, regularization type, choice of activation function, and choice of optimizer. Which values work best depends on the use case and the dataset. They are called hyperparameters because they control how the model parameters, such as weights and biases, are learned.
Model parameters, on the other hand, are adjusted by the model itself during training. We initialize them randomly, to zero, or with a principled initialization scheme. Examples of model parameters include weights, biases, and cluster centroids.
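To make the distinction concrete, here is a minimal sketch (assuming scikit-learn and a toy synthetic dataset, neither of which is from the original post) showing hyperparameters fixed before training and model parameters learned during it:

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy dataset purely for illustration
X, y = make_classification(n_samples=200, n_features=4, random_state=42)

# Hyperparameters: chosen by us *before* training
model = LogisticRegression(C=1.0, max_iter=200, solver="lbfgs")

# Model parameters: learned by the model *during* training
model.fit(X, y)
print("Learned weights:", model.coef_)     # weights
print("Learned bias:", model.intercept_)   # bias
```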
Grid Search
The Grid Search approach trains a model on every possible combination of a specified set of values for each hyperparameter. It works well for a small collection of hyperparameters, but because it evaluates all combinations, it becomes computationally costly as the number of hyperparameters grows.
For example, if we are tuning the hyperparameters of a support vector machine (SVM) model,
C: [0.1, 1]
kernel: ['linear', 'rbf']
gamma: [0.01, 0.1, 1]
All 2 × 2 × 3 = 12 possible combinations of hyperparameters will be evaluated, and the combination giving the highest accuracy will be tracked.
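Here is a minimal sketch of this search using scikit-learn's GridSearchCV, assuming the iris dataset as a stand-in (the original post does not specify a dataset):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # stand-in dataset for illustration

param_grid = {
    "C": [0.1, 1],
    "kernel": ["linear", "rbf"],
    "gamma": [0.01, 0.1, 1],
}

# Trains an SVM on every one of the 2 x 2 x 3 = 12 combinations
grid = GridSearchCV(SVC(), param_grid, cv=5)
grid.fit(X, y)

print("Best parameters:", grid.best_params_)
print("Best cross-validated accuracy:", grid.best_score_)
```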
Advantages of Grid Search:
- It is quite simple to implement and parallelize.
- It searches the full search space and determines which set of specified hyperparameters works best together.
Random Search
Random search is an alternative technique in which hyperparameter values are sampled at random from a predefined search space. It is often faster than grid search because it evaluates only a subset of the possible hyperparameter combinations. However, random search is not guaranteed to find the best hyperparameters and may need more iterations to converge on a good configuration.
For example, if we are tuning the hyperparameters of a support vector machine (SVM) model,
C: [0.1, 1, 0.01, 0.12, 0.0015]
kernel: ['linear', 'rbf']
gamma: [0.01, 0.1, 1]
From the search space above, a fixed number of combinations is sampled at random for the tuning process, and the best-performing set of parameters is tracked.
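A minimal sketch of this using scikit-learn's RandomizedSearchCV, again assuming the iris dataset as a stand-in:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import RandomizedSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)  # stand-in dataset for illustration

param_distributions = {
    "C": [0.1, 1, 0.01, 0.12, 0.0015],
    "kernel": ["linear", "rbf"],
    "gamma": [0.01, 0.1, 1],
}

# Samples only n_iter of the 5 x 2 x 3 = 30 possible combinations
search = RandomizedSearchCV(
    SVC(), param_distributions, n_iter=10, cv=5, random_state=42
)
search.fit(X, y)

print("Best parameters:", search.best_params_)
print("Best cross-validated accuracy:", search.best_score_)
```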
Advantages of Random Search:
- It only considers a portion of the combinations of hyperparameters, so it is faster than grid search.
- When the search space is vast and the hyperparameters are not highly correlated, it can be more productive than grid search.
When to use Grid Search vs. Randomized Search?
Grid search is a feasible option when there are few hyperparameters and the search space is manageable. It is also helpful when the hyperparameters strongly depend on one another.
Random search, on the other hand, is a suitable option when the search space is broad and the hyperparameters do not strongly depend on one another. It is also helpful when there is not enough time or computational budget to evaluate every combination of hyperparameters.
Conclusion
The choice of tuning method also depends on the nature of the problem and the dataset used for training. It is recommended to try both methods and compare their results to choose the one that works best for the given problem.
Drop a query if you have any questions regarding hyperparameter tuning, and we will get back to you quickly.
About CloudThat
CloudThat is an award-winning company and the first in India to offer cloud training and consulting services worldwide. As a Microsoft Solutions Partner, AWS Advanced Tier Training Partner, and Google Cloud Platform Partner, CloudThat has empowered over 850,000 professionals through 600+ cloud certifications, winning global recognition for its training excellence, including 20 MCT Trainers in Microsoft's Global Top 100 and an impressive 12 awards in the last 8 years. CloudThat specializes in Cloud Migration, Data Platforms, DevOps, IoT, and cutting-edge technologies like Gen AI & AI/ML. It has delivered over 500 consulting projects for 250+ organizations in 30+ countries as it continues to empower professionals and enterprises to thrive in the digital-first world.
FAQs
1. Why do we need hyperparameter optimization in model building?
ANS: – It helps the model generalize better and improves its accuracy.
2. What is a hyperparameter?
ANS: – A hyperparameter is a setting, such as the learning rate or batch size, that is defined before model training and controls how model parameters like weights and biases are learned. Hyperparameters influence a model's training and performance and are crucial for optimizing model outcomes.

WRITTEN BY Ganesh Raj
Ganesh Raj V works as a Sr. Research Associate at CloudThat. He is a highly analytical, creative, and passionate individual with experience in Data Science, Machine Learning algorithms, and Cloud Computing. In a quest to learn and work with recent technologies, he strives to stay updated on advanced technologies while efficiently solving problems analytically.