As data scientists attempt to solve increasingly complex problems with deep learning, and to train on ever-growing datasets, they place a huge strain on the GPU. Increasingly, developers and engineers are looking for ways to overcome the bounds of GPU memory, which limit both the size of models and the scale to which datasets can grow.
In this session we’ll discuss the GPU memory limitation and IBM’s infrastructure-based approach to solving the problem. We’ll cover various techniques for overcoming the challenge to improve model accuracy and performance with large datasets, looking specifically at the Large Model Support (LMS) implementation for TensorFlow.
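To make the core idea concrete: large model support works by swapping tensors between limited GPU memory and the much larger host memory on demand. The sketch below is a minimal, self-contained illustration of that concept only; the `ActivationOffloader` class and all names in it are hypothetical and are not IBM’s TFLMS API.

```python
from collections import OrderedDict

class ActivationOffloader:
    """Toy illustration of the idea behind large model support:
    when the simulated 'GPU' memory is full, the least-recently-used
    activation is swapped out to 'host' memory, and swapped back in
    when a later (e.g. backward-pass) computation needs it."""

    def __init__(self, device_capacity_bytes):
        self.capacity = device_capacity_bytes
        self.device = OrderedDict()  # name -> tensor bytes on the "GPU"
        self.host = {}               # swapped-out tensors on the "host"
        self.swap_outs = 0
        self.swap_ins = 0

    def _used(self):
        return sum(len(t) for t in self.device.values())

    def store(self, name, tensor):
        # Evict least-recently-used tensors to host memory until it fits.
        while self.device and self._used() + len(tensor) > self.capacity:
            old_name, old_tensor = self.device.popitem(last=False)
            self.host[old_name] = old_tensor
            self.swap_outs += 1
        self.device[name] = tensor

    def fetch(self, name):
        if name in self.device:
            self.device.move_to_end(name)  # mark as recently used
            return self.device[name]
        tensor = self.host.pop(name)       # swap back in from host memory
        self.swap_ins += 1
        self.store(name, tensor)           # may evict something else
        return tensor

# Forward pass: store four 4-byte activations in an 8-byte "GPU".
mem = ActivationOffloader(device_capacity_bytes=8)
for i in range(4):
    mem.store(f"layer{i}", b"xxxx")

# Backward pass: fetch activations in reverse layer order,
# transparently swapping the evicted ones back in.
for i in reversed(range(4)):
    mem.fetch(f"layer{i}")
```

The computation sees every activation it asks for, while only a fraction of them resides in device memory at any one time; the cost is the extra swap traffic, which real implementations overlap with computation.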
This talk aims to illustrate the impact of the GPU memory limitation on machine learning and deep learning workloads, including some worked examples from the medical imaging domain.
Knowledge of TensorFlow and machine learning fundamentals