The course 32A treats topics related to differential calculus in several variables, including curves in the plane, curves and surfaces in space, various coordinate systems, partial differentiation, tangent planes to surfaces, and directional derivatives. The course culminates with the solution of optimization problems by the method of Lagrange multipliers.
TA for Sections 1A, 1B. (Taught by [Suzuki Fumiaki](https://www.math.ucla.edu/~suzuki/)). Link to the [CCLE site](https://ccle.ucla.edu/course/view/21W-MATH32A-1).
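As a quick illustration of the culminating topic of 32A: to extremize $f$ subject to a constraint $g = c$, the method of Lagrange multipliers seeks points where

$$\nabla f = \lambda \, \nabla g, \qquad g = c.$$

For example, maximizing $f(x, y) = xy$ on the circle $x^2 + y^2 = 1$ gives $(y, x) = \lambda (2x, 2y)$, hence $y = \pm x$, with maximum value $1/2$ at $x = y = \pm 1/\sqrt{2}$.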

The course 32B treats topics related to integration in several variables, culminating in the theorems of Green, Gauss and Stokes (stated explicitly below). Each of these theorems asserts that an integral over some domain is equal to an integral over the boundary of the domain. In the case of Green's theorem the domain is a region in the plane, in the case of Gauss's theorem it is a solid region in three-dimensional space, and in the case of Stokes' theorem it is a surface in three-dimensional space. These theorems generalize the fundamental theorem of calculus, which corresponds to the case where the domain is an interval on the real line. They play an important role in electrostatics, fluid mechanics, and other areas of engineering and physics where conservative vector fields arise.
TA for Sections 1A, 1B. (Taught by [March Boedihardjo](https://www.math.ucla.edu/~march/)). Link to the [CCLE site](https://ccle.ucla.edu/course/view/21W-MATH32B-1).
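In standard notation, the three theorems read

$$\oint_{\partial D} P\,dx + Q\,dy = \iint_D \left( \frac{\partial Q}{\partial x} - \frac{\partial P}{\partial y} \right) dA \quad \text{(Green)},$$

$$\iint_{\partial W} \mathbf{F} \cdot d\mathbf{S} = \iiint_W (\nabla \cdot \mathbf{F}) \, dV \quad \text{(Gauss)},$$

$$\oint_{\partial S} \mathbf{F} \cdot d\mathbf{r} = \iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S} \quad \text{(Stokes)},$$

each equating an integral over a domain with an integral over its boundary, exactly as described above.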

(The talk is aimed at early graduate students.)
Decoupling estimates were introduced by Wolff [1] to improve local smoothing estimates for the wave equation. Since then, they have found many applications in analysis, from PDEs and restriction theory to additive number theory, where Bourgain, Demeter and Guth [2] used decoupling-type estimates to prove the main conjecture in Vinogradov's mean value theorem for degrees d > 3.
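To fix ideas, here is the shape such estimates take. Writing $f = \sum_\theta f_\theta$, where each $f_\theta$ has Fourier support near a $\delta^{1/2}$-cap $\theta$ of the truncated paraboloid, the $\ell^2$-decoupling theorem of Bourgain and Demeter states (schematically, suppressing the precise cap geometry) that

$$\|f\|_{L^p(\mathbb{R}^n)} \lesssim_{\varepsilon} \delta^{-\varepsilon} \left( \sum_{\theta} \|f_\theta\|_{L^p(\mathbb{R}^n)}^2 \right)^{1/2} \quad \text{for } 2 \le p \le \frac{2(n+1)}{n-1}.$$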

The analysis of neural network training beyond the linearization regime remains an outstanding open question, even in the simplest setup of a single hidden layer. The limit of infinitely wide networks provides an appealing route forward through the mean-field perspective, but a key challenge is to bring learning guarantees back to the finite-neuron setting, where practical algorithms operate. Towards closing this gap, and focusing on shallow neural networks, in this work we study the ability of different regularisation strategies to capture solutions requiring only a finite number of neurons, even in the infinitely wide regime. Specifically, we consider (i) a form of implicit regularisation obtained by injecting noise into training targets [Blanc et al. '19], and (ii) the variation-norm regularisation [Bach '17], compatible with the mean-field scaling. Under mild assumptions on the activation function (satisfied, for instance, by ReLUs), we establish that both schemes are minimised by functions having only a finite number of neurons, irrespective of the amount of overparametrisation. We study the consequences of this property and describe the settings where one form of regularisation is favourable over the other.
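As a purely illustrative sketch (not the paper's code; the network, scalings, and hyperparameters below are our own assumptions), here is roughly what the two regularisation schemes look like for a width-$m$ shallow ReLU network:

```python
import torch

torch.manual_seed(0)
n, d, m = 256, 2, 512            # samples, input dim, hidden width (overparametrised)
X = torch.randn(n, d)
y = torch.sin(X[:, :1])          # toy regression target

# Shallow ReLU network f(x) = sum_i a_i * relu(<w_i, x> + b_i),
# with the mean-field 1/m scaling on the outer weights.
W = (torch.randn(m, d) / d ** 0.5).requires_grad_()
b = torch.zeros(m, requires_grad=True)
a = (torch.randn(m, 1) / m).requires_grad_()

def forward(X):
    return torch.relu(X @ W.T + b) @ a

lam, sigma = 1e-3, 0.1           # illustrative hyperparameters
opt = torch.optim.SGD([W, b, a], lr=0.1)
for step in range(2000):
    opt.zero_grad()
    # (i) implicit regularisation: noise injected into the training targets
    noisy_y = y + sigma * torch.randn_like(y)
    mse = ((forward(X) - noisy_y) ** 2).mean()
    # (ii) explicit penalty: sum_i |a_i| * ||w_i|| as a standard
    # finite-width surrogate for the variation norm
    vnorm = (a.abs().squeeze() * W.norm(dim=1)).sum()
    (mse + lam * vnorm).backward()
    opt.step()

# The paper's result suggests minimisers are supported on finitely many
# neurons; here we simply count units with non-negligible contribution.
with torch.no_grad():
    active = (a.abs().squeeze() * W.norm(dim=1) > 1e-3).sum().item()
print(f"active units: {active} / {m}")
```

In the paper the two schemes are analysed separately; this sketch merely shows both mechanisms side by side in one training loop.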

The course 32A treats topics related to differential calculus in several variables, including curves in the plane, curves and surfaces in space, various coordinate systems, partial differentiation, tangent planes to surfaces, and directional derivatives. The course culminates with the solution of optimization problems by the method of Lagrange multipliers.
TA for Sections 1C, 1D. (Taught by [Peter Spaas](https://www.math.ucla.edu/~pspaas/)). Link to the [CCLE site](https://ccle.ucla.edu/course/view/20F-MATH32A-1).

Math 131AB is the core undergraduate course sequence in mathematical analysis. The aim of the course is to cover the basics of calculus, rigorously.
TA for Section 2A. (Taught by [James Cameron](https://www.math.ucla.edu/~jcameron/)). Link to the [CCLE site](https://ccle.ucla.edu/course/view/20F-MATH131A-1).
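As a taste of that rigor, the course makes precise statements such as the $\varepsilon$-$\delta$ definition of the limit:

$$\lim_{x \to a} f(x) = L \quad \Longleftrightarrow \quad \forall \varepsilon > 0 \ \exists \delta > 0 : \ 0 < |x - a| < \delta \implies |f(x) - L| < \varepsilon.$$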
