The advent of deep neural networks is one of the most exciting developments of the past five years in machine learning; they have achieved astounding results on several complex decision problems, some beyond human ability. However, despite their great empirical success, the mathematical analysis of neural networks is still in its infancy. With this project we aim to approach the difficult problem of a mathematical analysis of the training phase of neural networks, which, in our current understanding, requires an enormous amount of data and significant computational effort. We claim that higher-order differentiation allows for a fast learning procedure with few data, and thus for significant improvements over stochastic gradient descent and backpropagation. However, since differentiation only considers local information, this raises the question of whether we could also take global information into account. This leads us to the so-called collective dynamics methods. We shall investigate the effectiveness of collective dynamics methods, mean-field control and mean-field evolutive games for standard machine learning tasks such as robust regression and, eventually, for training deep neural networks. Furthermore, neural networks can be understood as iterative algorithms of the same form as iterative thresholding algorithms, which are used for minimizing certain functionals. We shall investigate whether this minimizing property of a neural network can be used to guide the training phase.
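To illustrate the intuition behind the claim about higher-order differentiation, consider the following toy sketch (our own illustration, not the project's method): on a least-squares loss, a Newton step, which uses second derivatives through the Hessian, reaches the minimizer in a single iteration, whereas first-order gradient descent is still far from it after many steps.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))          # only 20 data points
w_true = np.array([1.0, -2.0, 0.5])
b = A @ w_true                            # noiseless targets for clarity

def grad(w):                              # gradient of 0.5 * ||A w - b||^2
    return A.T @ (A @ w - b)

H = A.T @ A                               # Hessian of the loss (constant here)

# one Newton step from the origin solves the quadratic problem exactly
w_newton = -np.linalg.solve(H, grad(np.zeros(3)))

# first-order gradient descent from the origin: not converged after 100 steps
w_gd = np.zeros(3)
for _ in range(100):
    w_gd -= 0.01 * grad(w_gd)
```

The example is deliberately quadratic, where the second-order model is exact; for general losses the Newton step only gives local quadratic convergence, but the contrast with a first-order method conveys the point.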
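As a hedged sketch of what a collective dynamics method looks like, the following implements a minimal consensus-based optimization (CBO) loop: a swarm of particles shares global information through a weighted consensus point and drifts toward it with multiplicative exploration noise, requiring no derivatives of the objective. All parameter values and names here are illustrative choices of ours, not the project's; the objective is kept simple for clarity, although the derivative-free character of CBO is aimed at nonconvex problems.

```python
import numpy as np

def cbo_minimize(f, dim, n_particles=200, n_steps=300, dt=0.05,
                 lam=1.0, sigma=0.7, beta=30.0, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))   # initial swarm
    for _ in range(n_steps):
        fx = f(X)
        w = np.exp(-beta * (fx - fx.min()))               # stabilized Gibbs weights
        m = (w[:, None] * X).sum(axis=0) / w.sum()        # consensus point (global info)
        X = (X - lam * dt * (X - m)
               + sigma * np.sqrt(dt) * (X - m) * rng.standard_normal(X.shape))
    return m

# smooth objective with minimizer at `target`, optimized without any gradients
target = np.array([1.0, -0.5])
def f(X):
    return ((X - target) ** 2).sum(axis=-1)

x_star = cbo_minimize(f, dim=2)
```

The Gibbs weights concentrate the consensus point on the currently best particles, so every particle is steered by information from the whole swarm rather than by a local derivative at its own position.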
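The structural analogy with iterative thresholding can be made concrete in a few lines (again our illustration): one iteration of the iterative soft-thresholding algorithm (ISTA) for the lasso functional 0.5*||Ax - b||^2 + lam*||x||_1 is exactly an affine map followed by a pointwise nonlinearity, i.e. the same form as one layer of a neural network, with soft-thresholding in the role of the activation function.

```python
import numpy as np

def soft_threshold(v, t):
    # pointwise nonlinearity, the analogue of an activation function
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista_layer(x, A, b, lam, step):
    # "weights" W = I - step*A^T A and "bias" step*A^T b, then activation
    return soft_threshold(x - step * A.T @ (A @ x - b), step * lam)

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[2], x_true[7] = 1.5, -2.0              # sparse ground truth
b = A @ x_true

step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1/L, L = Lipschitz const. of the gradient
x = np.zeros(10)
for _ in range(500):                          # unrolled iterations play the role of layers
    x = ista_layer(x, A, b, lam=0.1, step=step)
```

Unrolling the iterations as above yields a deep network whose forward pass provably decreases the lasso functional, which is the kind of minimizing property the project proposes to exploit during training.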