Optimization of hyper-parameters of deep learning models

by Guillaume Lacharme - March 23rd, 2021 - 1pm

The optimization of hyper-parameters is an old problem that has come back in force with deep learning. Indeed, deep neural networks have many hyper-parameters to optimize, in particular the architecture (number of layers, layer sizes, layer types, arrangement...) and the training parameters associated with this architecture. The literature suggests several methods for this difficult task, such as grid search, random search, Bayesian optimization and evolutionary algorithms. These methods can be improved both in terms of network quality and computation time. The objective of this work is to propose an alternative optimization method combining the skills of the ROOT (Operational Research, Scheduling and Transport) and RFAI (Pattern Recognition and Image Analysis) teams of the LIFAT (Laboratory of Fundamental and Applied Computer Science of Tours) in order to improve both the quality of the results and the computation times. The method will first be applied to the generation of deep neural network models, and it can be extended to other deep learning models such as deep forests; a minimal sketch of one of the baseline methods mentioned above follows.
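
To illustrate one of the baseline methods named above, here is a minimal sketch of random search over hyper-parameters in Python. The search space, the placeholder objective and the budget are illustrative assumptions for this sketch, not part of the proposed method; in practice the objective would train and validate a deep network for each configuration.

import random

# Illustrative hyper-parameter space (assumed for this sketch):
# number of layers, layer width and learning rate of a neural network.
SEARCH_SPACE = {
    "num_layers": [2, 3, 4, 5],
    "layer_size": [32, 64, 128, 256],
    "learning_rate": [1e-4, 1e-3, 1e-2],
}

def sample_configuration(space, rng):
    """Draw one configuration uniformly at random from the space."""
    return {name: rng.choice(values) for name, values in space.items()}

def evaluate(config):
    """Placeholder objective: in a real setting, train the network
    described by `config` and return its validation error."""
    return (config["num_layers"] * config["layer_size"]) ** 0.5 \
        * config["learning_rate"]

def random_search(space, budget, seed=0):
    """Evaluate `budget` random configurations and keep the best one."""
    rng = random.Random(seed)
    best_config, best_score = None, float("inf")
    for _ in range(budget):
        config = sample_configuration(space, rng)
        score = evaluate(config)
        if score < best_score:
            best_config, best_score = config, score
    return best_config, best_score

best, score = random_search(SEARCH_SPACE, budget=20)
print(best, score)

Despite its simplicity, random search is a standard baseline here: each evaluation is independent, so it parallelizes trivially, which is precisely the computation-time dimension the proposed method aims to improve on.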