"False"
Hoppa direkt till innehållet
printicon
Huvudmenyn dold.

On the rate of convergence of deep neural network regression estimates

Time: Monday 20 September 2021, 13:15 - 14:15
Venue: Zoom

Recent results in nonparametric regression show that deep learning, i.e., neural network estimates with many hidden layers, can circumvent the so-called curse of dimensionality provided that suitable restrictions on the structure of the regression function hold. Under a general composition assumption on the regression function, a key feature of the neural networks used in these results is that their architecture carries a further constraint, namely network sparsity. In this talk we show that similar results can also be obtained for least squares estimates based on simple fully connected neural networks with ReLU activation functions. Here either the number of neurons per hidden layer is fixed and the number of hidden layers tends to infinity suitably fast as the sample size tends to infinity, or the number of hidden layers is bounded by a logarithmic factor in the sample size and the number of neurons per hidden layer tends to infinity suitably fast as the sample size tends to infinity.

In a second result we show that deep neural networks (DNNs) achieve a dimensionality reduction when the regression function has locally low dimensionality. Consequently, the rate of convergence of the estimate does not depend on the input dimension d but on the local dimension d*, and the DNNs are able to circumvent the curse of dimensionality whenever d* is much smaller than d.
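To make the class of estimates concrete, the following is a minimal sketch of a least squares regression estimate based on a simple fully connected ReLU network. It is not the speaker's implementation; the width, depth, sample size, regression function and optimizer settings below are illustrative assumptions only.

```python
import math
import torch
import torch.nn as nn

# Illustrative (assumed) parameters: in the talk, either the width is fixed
# and the depth grows with the sample size, or the depth is logarithmic in
# the sample size and the width grows.
d = 5        # input dimension
width = 20   # neurons per hidden layer
depth = 6    # number of hidden layers
n = 1000     # sample size

# Synthetic data Y = m(X) + noise, with an arbitrary smooth m for illustration.
torch.manual_seed(0)
X = torch.rand(n, d)
m = lambda x: torch.sin(2 * math.pi * x[:, 0]) + x[:, 1] * x[:, 2]
Y = m(X) + 0.1 * torch.randn(n)

# Fully connected ReLU network: `depth` hidden layers of constant width.
layers = [nn.Linear(d, width), nn.ReLU()]
for _ in range(depth - 1):
    layers += [nn.Linear(width, width), nn.ReLU()]
layers.append(nn.Linear(width, 1))
net = nn.Sequential(*layers)

# Least squares fit: minimize the empirical L2 risk over the network weights.
optimizer = torch.optim.Adam(net.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()
for epoch in range(2000):
    optimizer.zero_grad()
    loss = loss_fn(net(X).squeeze(-1), Y)
    loss.backward()
    optimizer.step()

print(f"final empirical L2 risk: {loss.item():.4f}")
```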

Join Zoom Meeting: https://umu.zoom.us/j/62887038685

Event type: Seminar

Speaker: Dr. Sophie Langer, TU Darmstadt, Germany

Contact person
Jun Yu