HDI Lab seminar: A generalisation bound for nearly linear networks
This Friday, 2 August, at 14:40. The talk will be given by Eugene Golikov (EPFL).
We derive a novel generalisation bound for neural networks that becomes nonvacuous as the activation functions approach linearity. To the best of our knowledge, our bound is the first nonvacuous generalisation bound that is computable prior to training. Our analysis is based on a novel concept of "proxy models", which exploit the weights of trained models with their activation functions removed.
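As a rough illustration of the "nearly linear" regime, the following sketch (not from the talk; the construction and all names here are assumptions) compares a small network using a leaky-ReLU activation with slope `alpha` against a "proxy" that keeps the same weights but drops the activations entirely, i.e. a single linear map. As `alpha` approaches 1, the activation approaches the identity and the two outputs coincide:

```python
import numpy as np

rng = np.random.default_rng(0)

def leaky_relu(x, alpha):
    # nearly linear when alpha is close to 1; exactly linear at alpha = 1
    return np.where(x >= 0, x, alpha * x)

# Weights of a small 3-layer network (random here purely for illustration;
# in the setting described above they would come from a trained model).
W1 = rng.standard_normal((5, 4)) / np.sqrt(4)
W2 = rng.standard_normal((5, 5)) / np.sqrt(5)
W3 = rng.standard_normal((1, 5)) / np.sqrt(5)

def network(x, alpha):
    h = leaky_relu(W1 @ x, alpha)
    h = leaky_relu(W2 @ h, alpha)
    return W3 @ h

def proxy(x):
    # Same weights with the activations removed: one linear map.
    return W3 @ (W2 @ (W1 @ x))

x = rng.standard_normal(4)
for alpha in (0.5, 0.9, 0.99):
    gap = float(np.abs(network(x, alpha) - proxy(x)))
    print(f"alpha={alpha}: |network - proxy| = {gap:.4f}")
```

The printed gap shrinks as `alpha` moves toward 1, which is the regime in which a bound tied to the linear proxy can remain informative.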
The talk is based on this work.