Deep neural networks currently define the state of the art on problems such as speech recognition and computer vision. Simonyan & Zisserman (2014) recently reported that the deeper a network is, the better accuracy it achieves after training. However, the more neurons a network contains, the more memory is needed to store it and the more computational resources are required to train it. The best current architectures have reached the limits of memory and CPU/GPU performance of personal computers.
A few years ago, Oseledets (2011) proposed the Tensor Train (TT) format. This technique constructs a compact representation of a tensor (a multidimensional array) that allows efficient application of linear algebra operations. The TT-format is only efficient when applied to tensors with many dimensions (say, 5 or more), but by reshaping matrices and vectors into such tensors one can achieve tremendous memory and computation savings.
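To make the TT-format concrete, here is a minimal NumPy sketch of the TT-SVD procedure from Oseledets (2011), which splits a d-dimensional tensor into a chain of 3-dimensional "cores" by sequential SVDs. The uniform rank cap `max_rank` is a simplifying assumption for illustration; the paper instead truncates each SVD to meet a prescribed accuracy.

```python
import numpy as np

def tt_svd(tensor, max_rank):
    """Decompose a d-dimensional array into TT cores via sequential SVDs.

    Sketch of TT-SVD; `max_rank` caps every TT rank (a simplification --
    the original algorithm truncates by a per-mode accuracy threshold).
    """
    shape = tensor.shape
    d = len(shape)
    cores = []
    rank = 1
    # Unfold: first mode (times current rank) vs. all remaining modes.
    remainder = tensor.reshape(rank * shape[0], -1)
    for k in range(d - 1):
        u, s, vt = np.linalg.svd(remainder, full_matrices=False)
        new_rank = min(max_rank, len(s))
        # The left factor becomes the k-th TT core.
        cores.append(u[:, :new_rank].reshape(rank, shape[k], new_rank))
        # Carry the rest forward and fold in the next mode.
        remainder = (np.diag(s[:new_rank]) @ vt[:new_rank]).reshape(
            new_rank * shape[k + 1], -1)
        rank = new_rank
    cores.append(remainder.reshape(rank, shape[d - 1], 1))
    return cores
```

Contracting the cores in order (summing over the rank indices) recovers the original tensor, exactly when the ranks are not truncated and approximately otherwise.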
The goal of this project is to replace the weight matrices of the fully connected layers with matrices in the TT-format, achieving a representation of the fully connected layer that is several orders of magnitude more compact while keeping comparable accuracy. Another research direction is to use far more hidden units than is feasible on today's computers while keeping a moderate number of parameters to tune, which will hopefully increase the accuracy of the network. This idea is inspired by the results of Ba & Caruana (2014), who report that the more hidden units one uses, the better accuracy one obtains.
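The scale of the promised compression can be checked with simple arithmetic. A TT-matrix stores, for each pair of mode sizes, a core with rank * row-mode * column-mode * rank parameters instead of the full dense product. The layer size (1024 x 1024), mode factorization (five modes of size 4 on each side), and uniform rank r = 8 below are hypothetical choices for illustration only:

```python
# Parameter count of a TT-matrix vs. a dense fully connected layer.
# Hypothetical setup: a 1024x1024 weight matrix whose row and column
# dimensions are each factored as 4*4*4*4*4, with all TT ranks r = 8.
row_modes = [4, 4, 4, 4, 4]
col_modes = [4, 4, 4, 4, 4]
r = 8
ranks = [1] + [r] * (len(row_modes) - 1) + [1]  # boundary ranks are 1

dense_params = 1024 * 1024
tt_params = sum(ranks[k] * row_modes[k] * col_modes[k] * ranks[k + 1]
                for k in range(len(row_modes)))
print(dense_params, tt_params, dense_params / tt_params)
```

Under these assumed shapes the TT-matrix needs 3,328 parameters against 1,048,576 for the dense matrix, a compression factor of roughly 315; larger layers and smaller ranks push this well past three orders of magnitude.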
Simonyan, K. and Zisserman, A. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
Oseledets, I. V. Tensor-Train decomposition. SIAM J. Scientific Computing, 33(5):2295–2317, 2011.
Ba, J. and Caruana, R. Do deep nets really need to be deep? Advances in Neural Information Processing Systems, pp. 2654-2662, 2014.