Almost all machine learning (ML) algorithms have additional parameters that need to be fine-tuned to your dataset. This is an important step in the ML process, and the best settings vary across datasets, algorithms and evaluation methods.
In this talk, we will discuss optimising these hyperparameters for classic ML algorithms (e.g. SVM), and answer questions including:
– What are hyperparameters and what do they do?
– Can I just use default values?
– This sounds like overfitting; how do I avoid it?
– Are you sure I need to know this…won’t it be automated soon?
This talk will be light on math and heavy on intuitive visual examples.
The audience will find out what hyperparameters are, which algorithms have them and why it can be useful to optimise them. Optimisation strategies covered include grid search, random search and Bayesian methods. We’ll also look at best-practice approaches: nested cross-validation and choosing evaluation metrics.
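As a taster of the first two strategies mentioned above, here is a minimal, dependency-free sketch (not from the talk) contrasting grid search and random search. The `validation_score` function is a made-up stand-in for a cross-validated accuracy surface over two hypothetical SVM-style hyperparameters, `C` and `gamma`.

```python
import itertools
import math
import random

def validation_score(C, gamma):
    # Hypothetical smooth score surface peaking near C=1.0, gamma=0.1;
    # a stand-in for real cross-validated model performance.
    return math.exp(-(math.log10(C) ** 2 + (math.log10(gamma) + 1) ** 2))

# Grid search: exhaustively evaluate every combination on a fixed grid.
grid_C = [0.01, 0.1, 1, 10, 100]
grid_gamma = [0.001, 0.01, 0.1, 1]
best_grid = max(itertools.product(grid_C, grid_gamma),
                key=lambda p: validation_score(*p))

# Random search: spend a similar budget sampling points log-uniformly,
# which often finds good values with far fewer evaluations in practice.
rng = random.Random(0)
samples = [(10 ** rng.uniform(-2, 2), 10 ** rng.uniform(-3, 0))
           for _ in range(20)]
best_rand = max(samples, key=lambda p: validation_score(*p))

print("grid best:", best_grid)
print("random best:", best_rand)
```

In real use you would swap `validation_score` for an actual cross-validation loop (e.g. scikit-learn’s `GridSearchCV` and `RandomizedSearchCV` wrap exactly this pattern), and wrap the whole search in an outer validation split to avoid the overfitting problem the talk discusses.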
A basic understanding of (classic) machine learning is assumed; no math background is required.
You can view Kate’s slides below: