How to Fine-Tune Machine Learning Algorithms with Hyperparameter Optimisation

Almost all machine learning (ML) algorithms have additional parameters that need to be fine-tuned to your dataset. This is an important step in the ML process, and the best settings vary across datasets, algorithms and evaluation methods.

In this talk, we will discuss optimising these hyperparameters for classic ML algorithms (e.g. SVM), and answer questions including:

– What are hyperparameters and what do they do?

– Can I just use default values?

– This sounds like overfitting; how do I avoid it?

– Are you sure I need to know this… won’t it be automated soon?

This talk will be light on math and heavy on intuitive visual examples.

Objective of the talk

The audience will find out what hyperparameters are, which algorithms have them and why it can be useful to optimise them. Optimisation strategies covered include grid search, random search and Bayesian methods. We’ll also look at best-practice approaches such as nested cross-validation and the choice of evaluation metrics.
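To give a flavour of the first two strategies, here is a minimal, self-contained sketch of grid search versus random search. The `validation_score` function is a made-up stand-in for a model's cross-validated score over two hyperparameters (named `c` and `gamma` by analogy with an SVM); in practice you would train and evaluate a real model at each setting.

```python
import itertools
import random

def validation_score(c, gamma):
    # Toy score landscape (an assumption for illustration only):
    # it peaks at c=1.0, gamma=0.1. A real score would come from
    # cross-validating a model trained with these hyperparameters.
    return 1.0 - (c - 1.0) ** 2 - (gamma - 0.1) ** 2

def grid_search(c_values, gamma_values):
    # Exhaustively evaluate every combination on the grid and
    # keep the best-scoring one.
    best = max(itertools.product(c_values, gamma_values),
               key=lambda p: validation_score(*p))
    return best, validation_score(*best)

def random_search(c_range, gamma_range, n_trials, seed=0):
    # Sample hyperparameter settings uniformly at random from the
    # given ranges; often competitive with grid search at the same
    # budget, especially when only a few hyperparameters matter.
    rng = random.Random(seed)
    trials = [(rng.uniform(*c_range), rng.uniform(*gamma_range))
              for _ in range(n_trials)]
    best = max(trials, key=lambda p: validation_score(*p))
    return best, validation_score(*best)

grid_best, grid_score = grid_search([0.1, 1.0, 10.0], [0.01, 0.1, 1.0])
rand_best, rand_score = random_search((0.1, 10.0), (0.01, 1.0), n_trials=9)
print("grid:", grid_best, round(grid_score, 3))
print("random:", rand_best, round(rand_score, 3))
```

Both searches spend nine evaluations here; the grid happens to contain the optimum exactly, while random search lands wherever its samples fall. Note this sketch reuses one score for both selection and reporting, which is exactly the optimistic bias that nested cross-validation (covered in the talk) is designed to remove.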

Required audience experience

A basic understanding of (classic) machine learning. No math background required.

You can view Kate’s slides below:

Kate Kilgour – Tuning Hyperparameters in Machine Learning

Track 1
Location: Mountbatten
Date: September 30, 2019
Time: 12:15 pm - 1:00 pm
Speaker: Kate Kilgour, University of Dundee