Past

On Sparsity in Overparametrised Shallow ReLU Networks

The analysis of neural network training beyond the linearisation regime remains an outstanding open question, even in the simplest setup of a single hidden layer. The limit of infinitely wide networks provides an appealing route forward through the mean-field perspective, but a key challenge is to bring learning guarantees back to the finite-neuron setting, where practical algorithms operate. Towards closing this gap, and focusing on shallow neural networks, in this work we study the ability of different regularisation strategies to yield solutions requiring only a finite number of neurons, even in the infinitely wide regime. Specifically, we consider (i) a form of implicit regularisation obtained by injecting noise into the training targets [Blanc et al. 2019], and (ii) variation-norm regularisation [Bach 2017], which is compatible with the mean-field scaling. Under mild assumptions on the activation function (satisfied, for instance, by ReLUs), we establish that both schemes admit minimisers expressible with only a finite number of neurons, irrespective of the amount of overparametrisation. We study the consequences of this property and describe the settings where one form of regularisation is favourable over the other.
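To make scheme (i) concrete, below is a minimal NumPy sketch of SGD on a shallow ReLU network where Gaussian noise is injected into the training targets at each step, in the spirit of the implicit regularisation referenced above. All dimensions, hyperparameters, and the synthetic data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic 1-D regression data (illustrative).
n, m = 64, 512                        # samples, hidden neurons (overparametrised)
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(3.0 * X[:, 0])

# Shallow ReLU network: f(x) = sum_j a_j * relu(w_j x + b_j),
# with outer weights at 1/m scale as a nod to the mean-field scaling.
W = rng.normal(size=(m, 1))
b = rng.normal(size=m)
a = rng.normal(size=m) / m

lr, sigma = 0.05, 0.1                 # step size, target-noise level (assumed)
for step in range(2000):
    # Implicit regularisation via noise injected into the training targets.
    y_noisy = y + sigma * rng.normal(size=n)

    pre = X @ W.T + b                 # (n, m) pre-activations
    act = np.maximum(pre, 0.0)        # ReLU activations
    pred = act @ a                    # (n,) network outputs
    r = pred - y_noisy                # residuals against the *noisy* targets

    # Gradients of the squared loss (1/2n) * sum_i r_i^2.
    mask = (pre > 0).astype(float)    # ReLU derivative
    grad_pre = mask * np.outer(r, a) / n
    a -= lr * (act.T @ r) / n
    W -= lr * (grad_pre.T @ X)
    b -= lr * grad_pre.sum(axis=0)
```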

UCLA Math 32A: Calculus of Several Variables - Fall 2020

The course 32A treats topics related to differential calculus in several variables, including curves in the plane, curves and surfaces in space, various coordinate systems, partial differentiation, tangent planes to surfaces, and directional derivatives. The course culminates in the solution of optimization problems by the method of Lagrange multipliers. TA for Sections 1C and 1D. (Taught by [Peter Spaas](https://www.math.ucla.edu/~pspaas/)). Link to the [CCLE site](https://ccle.ucla.edu/course/view/20F-MATH32A-1).
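For a flavour of that final topic, a standard worked example (illustrative, not taken from the course materials): to maximise $f(x,y) = xy$ subject to the constraint $g(x,y) = x + y - 1 = 0$, set $\nabla f = \lambda \nabla g$ and solve:

$$(y,\, x) = \lambda\,(1,\, 1) \;\Longrightarrow\; x = y, \quad x + y = 1 \;\Longrightarrow\; x = y = \tfrac{1}{2}, \qquad f\!\left(\tfrac{1}{2}, \tfrac{1}{2}\right) = \tfrac{1}{4}.$$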

UCLA Math 131A: Analysis - Fall 2020

Math 131AB is the core undergraduate course sequence in mathematical analysis. The aim of the course is to cover the basics of calculus, rigorously. TA for Section 2A. (Taught by [James Cameron](https://www.math.ucla.edu/~jcameron/)). Link to the [CCLE site](https://ccle.ucla.edu/course/view/20F-MATH131A-1).