
HDI Lab Seminar: How benign is benign overfitting?

January 31 at 16:20

Speaker: Dmitry Lishudi, Research Intern, International Laboratory of Stochastic Algorithms and Multivariate Data Analysis, Department of Computer Science.

Subject: "How benign is benign overfitting?"

Abstract: On many datasets, neural networks trained with SGD achieve nearly zero error on the training data and generalize well to test data, even when the training labels are noisy. This effect is called benign overfitting. Nevertheless, such models are vulnerable to adversarial attacks. The first reason for this vulnerability is poor-quality data: theoretical and empirical analysis shows that label noise is one cause of adversarial vulnerability, and that robust models fail to achieve zero training error. However, removing incorrectly labeled examples is not enough to achieve robustness to adversarial attacks. The paper suggests that suboptimal representation learning is also at fault: a simple theoretical statement shows that the choice of representations can significantly affect adversarial robustness.
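
The phenomenon in the abstract can be reproduced in a few lines. Below is a minimal, hypothetical sketch (not from the paper, which experiments on image benchmarks such as CIFAR-10): an over-parameterized network is trained to interpolate noisy labels on a toy 2-D dataset and then probed with a single-step FGSM attack. The dataset, architecture sizes, 15% label-flip rate, and attack budget are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Toy data: two Gaussian blobs, with a fraction of labels flipped
# to simulate label noise (an assumption for illustration).
torch.manual_seed(0)
n = 400
X = torch.cat([torch.randn(n, 2) + 2.0, torch.randn(n, 2) - 2.0])
y = torch.cat([torch.zeros(n), torch.ones(n)]).long()
flip = torch.rand(2 * n) < 0.15            # flip 15% of the training labels
y_noisy = torch.where(flip, 1 - y, y)

# Small over-parameterized MLP: enough capacity to interpolate the noisy labels.
model = nn.Sequential(nn.Linear(2, 256), nn.ReLU(), nn.Linear(256, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for _ in range(2000):                      # train toward zero training error
    opt.zero_grad()
    loss_fn(model(X), y_noisy).backward()
    opt.step()

def accuracy(inputs, labels):
    return (model(inputs).argmax(1) == labels).float().mean().item()

# FGSM: one signed-gradient step of size eps on the inputs,
# maximizing the loss with respect to the true labels.
def fgsm(inputs, labels, eps=0.3):
    x = inputs.clone().requires_grad_(True)
    loss_fn(model(x), labels).backward()
    return (x + eps * x.grad.sign()).detach()

print(f"train acc on noisy labels: {accuracy(X, y_noisy):.3f}")   # close to 1.0: benign overfitting
print(f"clean acc on true labels:  {accuracy(X, y):.3f}")
print(f"adversarial acc (FGSM):    {accuracy(fgsm(X, y), y):.3f}") # typically noticeably lower
```

Fitting the flipped labels forces a jagged decision boundary around the memorized points, which is exactly what a small adversarial perturbation exploits; this mirrors, in miniature, the link between label noise and adversarial vulnerability that the talk discusses.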

Link to article: https://arxiv.org/pdf/2007.04028.pdf