[1] Ji, R., Lin, S., Chao, F., Wu, Y. and Huang, F. (2018) A Survey of Deep Neural Network Compression and Acceleration. Journal of Computer Research and Development, 55(9), 1871.
[2] Zhang, C., Tian, J., Wang, Y. and Liu, H. (2018) A Survey of Neural Network Model Compression Methods. Proceedings of the 22nd Annual Conference on New Network Technologies and Applications, Network Application Branch of the China Computer Users Association, Suzhou, 2018, 5.
[3] Han, S., Mao, H. and Dally, W.J. (2015) Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding.
[4] Yoon, J. and Hwang, S.J. (2017) Combined Group and Exclusive Sparsity for Deep Neural Networks. International Conference on Machine Learning, Sydney, 6 August 2017, 3958-3966.
[5] Liu, Z., Li, J., Shen, Z., Huang, G., Yan, S. and Zhang, C. (2017) Learning Efficient Convolutional Networks through Network Slimming. 2017 IEEE International Conference on Computer Vision, Venice, 22-29 October 2017, 2736-2744. https://doi.org/10.1109/ICCV.2017.298
[6] He, Y., Zhang, X. and Sun, J. (2017) Channel Pruning for Accelerating Very Deep Neural Networks. Proceedings of the IEEE International Conference on Computer Vision, Venice, 22-29 October 2017, 1389-1397. https://doi.org/10.1109/ICCV.2017.155
[7] Sun, X., Ren, X., Ma, S. and Wang, H. (2017) meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting.
[8] Denton, E., Zaremba, W., Bruna, J., LeCun, Y. and Fergus, R. (2014) Exploiting Linear Structure within Convolutional Networks for Efficient Evaluation. Advances in Neural Information Processing Systems 27 (NIPS 2014), Montréal, 8 December 2014, 1269-1277.
[9] Lebedev, V., Ganin, Y., Rakhuba, M., Oseledets, I. and Lempitsky, V. (2014) Speeding-Up Convolutional Neural Networks Using Fine-Tuned CP-Decomposition.
[10] Howard, A., Zhu, M., Chen, B., Kalenichenko, D., Wang, W., Weyand, T., Adam, H., et al. (2017) MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications.
[11] Sandler, M., Howard, A., Zhu, M., Zhmoginov, A. and Chen, L. (2018) MobileNetV2: Inverted Residuals and Linear Bottlenecks. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, 18-23 June 2018, 4510-4520. https://doi.org/10.1109/CVPR.2018.00474
[12] Zhang, X., Zhou, X., Lin, M. and Sun, J. (2018) ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, 18-23 June 2018, 6848-6856. https://doi.org/10.1109/CVPR.2018.00716
[13] Hinton, G., Vinyals, O. and Dean, J. (2015) Distilling the Knowledge in a Neural Network.
[14] Kim, J., Park, S. and Kwak, N. (2018) Paraphrasing Complex Network: Network Compression via Factor Transfer. Advances in Neural Information Processing Systems, Montréal, 3 December 2018, 2760-2769.
[15] Passalis, N. and Tefas, A. (2018) Learning Deep Representations with Probabilistic Knowledge Transfer. Proceedings of the European Conference on Computer Vision (ECCV), Munich, 8-14 September 2018, 283-299. https://doi.org/10.1007/978-3-030-01252-6_17
[16] Park, W., Kim, D., Lu, Y. and Cho, M. (2019) Relational Knowledge Distillation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, 15-20 June 2019, 3967-3976. https://doi.org/10.1109/CVPR.2019.00409
[17] Peng, B., Jin, X., Liu, J., Li, D., Wu, Y., Liu, Y., Zhang, Z., et al. (2019) Correlation Congruence for Knowledge Distillation. Proceedings of the IEEE International Conference on Computer Vision, Seoul, 27 October-2 November 2019, 5007-5016. https://doi.org/10.1109/ICCV.2019.00511
[18] Tang, R., Lu, Y., Liu, L., Mou, L., Vechtomova, O. and Lin, J. (2019) Distilling Task-Specific Knowledge from BERT into Simple Neural Networks.
[19] Romero, A., Ballas, N., Kahou, S.E., Chassang, A., Gatta, C. and Bengio, Y. (2014) FitNets: Hints for Thin Deep Nets.
[20] Zagoruyko, S. and Komodakis, N. (2016) Paying More Attention to Attention: Improving the Performance of Convolutional Neural Networks via Attention Transfer.
[21] Huang, Z. and Wang, N. (2017) Like What You Like: Knowledge Distill via Neuron Selectivity Transfer.
[22] Yim, J., Joo, D., Bae, J. and Kim, J. (2017) A Gift from Knowledge Distillation: Fast Optimization, Network Minimization and Transfer Learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, 21-26 July 2017, 4133-4141. https://doi.org/10.1109/CVPR.2017.754
[23] Heo, B., Lee, M., Yun, S. and Choi, J.Y. (2019) Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons. Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 3779-3787. https://doi.org/10.1609/aaai.v33i01.33013779
[24] Heo, B., Lee, M., Yun, S. and Choi, J.Y. (2019) Knowledge Distillation with Adversarial Samples Supporting Decision Boundary. Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 33, 3771-3778. https://doi.org/10.1609/aaai.v33i01.33013771
[25] Zhang, L., Song, J., Gao, A., Chen, J., Bao, C. and Ma, K. (2019) Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation. Proceedings of the IEEE International Conference on Computer Vision, Seoul, 27 October-2 November 2019, 3713-3722. https://doi.org/10.1109/ICCV.2019.00381
[26] Zhang, Y., Xiang, T., Hospedales, T.M. and Lu, H. (2018) Deep Mutual Learning. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, 18-23 June 2018, 4320-4328. https://doi.org/10.1109/CVPR.2018.00454
[27] Meng, F., Cheng, H., Li, K., Xu, Z., Ji, R., Sun, X. and Lu, G. (2020) Filter Grafting for Deep Neural Networks. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, 13-19 June 2020, 6599-6607. https://doi.org/10.1109/CVPR42600.2020.00663
[28] Xu, Z., Hsu, Y.C. and Huang, J. (2017) Training Shallow and Thin Networks for Acceleration via Knowledge Distillation with Conditional Adversarial Networks.