Hyperparameter optimization
It's not always Grid Search or Random Search
Grid Search
Grid search is the most widely used and the most straightforward hyperparameter optimization (HPO) technique: it evaluates every combination of the candidate values and returns the best-performing hyperparameter set. It works well when the number of hyperparameters is small (one or two). Its main drawback is cost: the number of combinations grows exponentially with the number of hyperparameters, so tuning quickly becomes too slow.
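A minimal sketch of grid search with scikit-learn's GridSearchCV; the model, dataset, and parameter values below are illustrative assumptions, not a prescribed setup.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Two hyperparameters with three candidate values each: 3 x 3 = 9
# combinations, every one of which is evaluated with cross-validation.
param_grid = {
    "n_estimators": [50, 100, 200],
    "max_depth": [3, 5, 10],
}

search = GridSearchCV(RandomForestClassifier(random_state=42), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Adding a third hyperparameter with three values would already triple the work to 27 fits (times the cross-validation folds), which is the exponential growth mentioned above.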
Random Search
Random search evaluates only a random subset of all combinations, which drastically reduces tuning time while still sampling across the entire search space. It works well when not all hyperparameters are equally important, which is usually the case in practice. Because it is fast and delivers good results, it makes a strong baseline method for hyperparameter tuning.
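A minimal sketch with scikit-learn's RandomizedSearchCV; the distributions and the budget of 10 samples are illustrative assumptions.

```python
from scipy.stats import randint
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

# Instead of enumerating a grid, draw 10 random combinations
# from the distributions below.
param_distributions = {
    "n_estimators": randint(50, 300),
    "max_depth": randint(3, 15),
}

search = RandomizedSearchCV(
    RandomForestClassifier(random_state=42),
    param_distributions,
    n_iter=10,  # the fixed evaluation budget, independent of space size
    cv=5,
    random_state=42,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

The key design difference from grid search is that the budget (`n_iter`) is fixed by you, so adding more hyperparameters does not multiply the cost.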
Bayesian Optimization
Grid search and random search do not learn from previous iterations. Bayesian optimization does: it fits a probabilistic (surrogate) model that maps hyperparameter values to their scores and uses it to decide which values to evaluate next. This lets it find strong parameters efficiently, without exhausting the whole search space as grid search does, or sampling a random subset as random search does, where the ideal parameters may be missed entirely. It therefore usually needs fewer evaluations to optimize the parameters, at the cost of some extra time spent computing the next set of hyperparameters to run.
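A minimal sketch using hyperopt's TPE algorithm, one common Bayesian-style optimizer; the objective, search space, and evaluation budget are illustrative assumptions.

```python
from hyperopt import STATUS_OK, Trials, fmin, hp, tpe
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

def objective(params):
    model = RandomForestClassifier(
        n_estimators=int(params["n_estimators"]),
        max_depth=int(params["max_depth"]),
        random_state=42,
    )
    score = cross_val_score(model, X, y, cv=5).mean()
    # fmin minimizes, so return the negative accuracy as the loss.
    return {"loss": -score, "status": STATUS_OK}

space = {
    "n_estimators": hp.quniform("n_estimators", 50, 300, 10),
    "max_depth": hp.quniform("max_depth", 3, 15, 1),
}

# Trials records every (hyperparameters, score) pair, which is exactly
# the history the probabilistic model learns from.
trials = Trials()
best = fmin(fn=objective, space=space, algo=tpe.suggest,
            max_evals=20, trials=trials)
print(best)
```

Unlike the two methods above, each of the 20 evaluations here is informed by all the evaluations that came before it.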
#mlopszoomcamp