[1] Erhan D, Courville A, Bengio Y. Understanding representations learned in deep architectures[R]. Technical Report 1355. Montreal, Canada: Université de Montréal, 2010.
[2] Hinton G E, Salakhutdinov R R. Reducing the dimensionality of data with neural networks[J]. Science, 2006, 313(5786): 504-507. DOI:10.1126/science.1127647.
[3] Krizhevsky A, Hinton G. Learning multiple layers of features from tiny images[R]. Toronto, Ontario, Canada: University of Toronto, 2009.
[4] Yu D, Deng L. Deep learning and its applications to signal and information processing [exploratory DSP][J]. IEEE Signal Processing Magazine, 2011, 28(1): 145-154. DOI:10.1109/MSP.2010.939038.
[5] Krizhevsky A, Sutskever I, Hinton G E. ImageNet classification with deep convolutional neural networks[C]//Advances in Neural Information Processing Systems 25. Curran Associates Inc., 2012: 1097-1105.
[6] Springenberg J T, Dosovitskiy A, Brox T, et al. Striving for simplicity: The all convolutional net[J]. arXiv preprint arXiv:1412.6806, 2015.
[7] Zagoruyko S, Komodakis N. Wide residual networks[J]. arXiv preprint arXiv:1605.07146, 2016.
[8] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 2818-2826.
[9] Simonyan K, Zisserman A. Very deep convolutional networks for large-scale image recognition[J]. arXiv preprint arXiv:1409.1556, 2014.
[10] Lin M, Chen Q, Yan S. Network in network[J]. arXiv preprint arXiv:1312.4400, 2013.