[1] Rohrbach, M., Ebert, S. and Schiele, B. (2013) Transfer Learning in a Transductive Setting. NIPS 2013, Lake Tahoe, 5-8 December 2013, 46-54.

[2] Fu, Y., Hospedales, T.M., Xiang, T., et al. (2015) Transductive Multi-View Zero-Shot Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 2332-2345. https://doi.org/10.1109/TPAMI.2015.2408354

[3] Guo, Y., Ding, G., Jin, X., et al. (2016) Transductive Zero-Shot Recognition via Shared Model Space Learning. AAAI-16: Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, 12-17 February 2016, 3434-3500.

[4] Xu, Y., Han, C., Qin, J., et al. (2021) Transductive Zero-Shot Action Recognition via Visually Connected Graph Convolutional Networks. IEEE Transactions on Neural Networks and Learning Systems, 32, 3761-3769. https://doi.org/10.1109/TNNLS.2020.3015848

[5] Wang, Q. and Chen, K. (2020) Multi-Label Zero-Shot Human Action Recognition via Joint Latent Ranking Embedding. Neural Networks, 122, 1-23. https://doi.org/10.1016/j.neunet.2019.09.029

[6] Wang, H., Oneata, D., Verbeek, J.J., et al. (2016) A Robust and Efficient Video Representation for Action Recognition. International Journal of Computer Vision, 119, 219-238. https://doi.org/10.1007/s11263-015-0846-5

[7] Kong, Y. and Fu, Y. (2018) Human Action Recognition and Prediction: A Survey. CoRR, abs/1806.11230.

[8] Simonyan, K. and Zisserman, A. (2014) Two-Stream Convolutional Networks for Action Recognition in Videos. NIPS 2014, Montreal, 8-13 December 2014, 568-576.

[9] Tran, D., Bourdev, L.D., Fergus, R., et al. (2015) Learning Spatiotemporal Features with 3D Convolutional Networks. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 7-13 December 2015, 4489-4497. https://doi.org/10.1109/ICCV.2015.510

[10] Ji, S., Xu, W., Yang, M., et al. (2013) 3D Convolutional Neural Networks for Human Action Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 221-231. https://doi.org/10.1109/TPAMI.2012.59

[11] Carreira, J. and Zisserman, A. (2017) Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. Proceedings CVPR’17, Honolulu, 21-26 July 2017, 4724-4733. https://doi.org/10.1109/CVPR.2017.502

[12] Soomro, K., Zamir, A.R. and Shah, M. (2012) UCF101: A Dataset of 101 Human Actions Classes From Videos in the Wild. CoRR, abs/1212.0402.

[13] Kuehne, H., Jhuang, H., Garrote, E., et al. (2011) HMDB: A Large Video Database for Human Motion Recognition. ICCV 2011, Barcelona, 6-13 November 2011, 2556-2563. https://doi.org/10.1109/ICCV.2011.6126543

[14] Xian, Y., Lorenz, T., Schiele, B., et al. (2018) Feature Generating Networks for Zero-Shot Learning. Proceedings CVPR’18, Salt Lake City, 18-22 June 2018, 5542-5551. https://doi.org/10.1109/CVPR.2018.00581

[15] Verma, V.K. and Rai, P. (2017) A Simple Exponential Family Framework for Zero-Shot Learning. ECML/PKDD, Skopje, 18-22 September 2017, Vol. 10535, 792-808. https://doi.org/10.1007/978-3-319-71246-8_48

[16] Tran, D., Wang, H., Torresani, L., et al. (2018) A Closer Look at Spatiotemporal Convolutions for Action Recognition. Proceedings CVPR’18, Salt Lake City, 18-22 June 2018, 6450-6459. https://doi.org/10.1109/CVPR.2018.00675

[17] Niebles, J.C., Chen, C. and Li, F. (2010) Modeling Temporal Structure of Decomposable Motion Segments for Activity Classification. ECCV 2010: 11th European Conference on Computer Vision, Heraklion, 5-11 September 2010, 392-405. https://doi.org/10.1007/978-3-642-15552-9_29

[18] Wang, L., Qiao, Y. and Tang, X. (2015) Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 7-12 June 2015, 4305-4314. https://doi.org/10.1109/CVPR.2015.7299059

[19] Tsochantaridis, I., Joachims, T., Hofmann, T., et al. (2005) Large Margin Methods for Structured and Interdependent Output Variables. Journal of Machine Learning Research, 6, 1453-1484.

[20] Song, J., Shen, C., Yang, Y., et al. (2018) Transductive Unbiased Embedding for Zero-Shot Learning. Proceedings CVPR’18, Salt Lake City, 18-22 June 2018, 1024-1033. https://doi.org/10.1109/CVPR.2018.00113

[21] Gao, J., Zhang, T. and Xu, C. (2019) I Know the Relationships: Zero-Shot Action Recognition via Two-Stream Graph Convolutional Networks and Knowledge Graphs. The Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Honolulu, 27 January-1 February 2019, 8303-8311. https://doi.org/10.1609/aaai.v33i01.33018303

[22] Akata, Z., Reed, S.E., Walter, D., et al. (2015) Evaluation of Output Embeddings for Fine-Grained Image Classification. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 7-12 June 2015, 2927-2936. https://doi.org/10.1109/CVPR.2015.7298911

[23] Xu, X., Hospedales, T.M. and Gong, S. (2015) Semantic Embedding Space for Zero-Shot Action Recognition. 2015 IEEE International Conference on Image Processing, ICIP 2015, Quebec City, 27-30 September 2015, 63-67. https://doi.org/10.1109/ICIP.2015.7350760

[24] Xu, X., Hospedales, T.M. and Gong, S. (2016) Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation. ECCV 2016, Amsterdam, 8-16 October 2016, 343-359. https://doi.org/10.1007/978-3-319-46475-6_22

[25] Li, Y., Hu, S. and Li, B. (2016) Recognizing Unseen Actions in a Domain-Adapted Embedding Space. 2016 IEEE International Conference on Image Processing, Phoenix, 25-28 September 2016, 4195-4199. https://doi.org/10.1109/ICIP.2016.7533150

[26] Xu, X., Hospedales, T.M. and Gong, S. (2017) Transductive Zero-Shot Action Recognition by Word-Vector Embedding. International Journal of Computer Vision, 123, 309-333. https://doi.org/10.1007/s11263-016-0983-5

[27] Qin, J., Liu, L., Shao, L., et al. (2017) Zero-Shot Action Recognition with Error-Correcting Output Codes. Proceedings CVPR’17, Honolulu, 21-26 July 2017, 1042-1051. https://doi.org/10.1109/CVPR.2017.117

[28] Mishra, A., Verma, V.K., Reddy, M.S.K., et al. (2018) A Generative Approach to Zero-Shot and Few-Shot Action Recognition. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, 12-15 March 2018, 372-380. https://doi.org/10.1109/WACV.2018.00047

[29] Zhu, Y., Long, Y., Guan, Y., et al. (2018) Towards Universal Representation for Unseen Action Recognition. Proceedings CVPR’18, Salt Lake City, 18-22 June 2018, 9436-9445. https://doi.org/10.1109/CVPR.2018.00983

[30] Zhang, C. and Peng, Y. (2018) Visual Data Synthesis via GAN for Zero-Shot Video Classification. Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, 13-19 July 2018, 1128-1134. https://doi.org/10.24963/ijcai.2018/157

[31] Kodirov, E., Xiang, T., Fu, Z., et al. (2015) Unsupervised Domain Adaptation for Zero-Shot Learning. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 7-13 December 2015, 2452-2460. https://doi.org/10.1109/ICCV.2015.282

[32] Wang, Q. and Chen, K. (2017) Zero-Shot Visual Recognition via Bi-directional Latent Embedding. International Journal of Computer Vision, 124, 356-383. https://doi.org/10.1007/s11263-017-1027-5
[32]
|
Wang, Q. and Chen, K. (2017) Zero-Shot Visual Recognition via Bi-directional Latent Embedding. International Journal of Computer Vision, 124, 356-383. https://doi.org/10.1007/s11263-017-1027-5
|
[33]
|
Rohrbach, M., Ebert, S. and Schiele, B. (2013) Transfer Learning in a Transductive Setting. 2013 NIPS Workshops, Lake Tahoe, 5-8 December 2013, 46-54.
|
[34]
|
Fu, Y., Hospedales, T.M., Xiang, T., et al. (2015) Transductive Multi-View Zero-Shot Learning. IEEE Transactions on Pattern Analysis and Machine Intelligence, 37, 2332-2345. https://doi.org/10.1109/TPAMI.2015.2408354
|
[35]
|
Guo, Y., Ding, G., Jin, X., et al. (2016) Transductive Ze-ro-Shot Recognition via Shared Model Space Learning. AAAI-16: Thirtieth AAAI Conference on Artificial Intelligence, Phoenix, 12-17 February 2016, 3434-3500.
|
[36]
|
Xu, Y., Han, C., Qin, J., et al. (2021) Transductive Zero-Shot Action Recognition via Visually Connected Graph Convolutional Networks. IEEE Transactions on Neural Networks and Learning Systems, 32, 3761-3769.
https://doi.org/10.1109/TNNLS.2020.3015848
|
[37]
|
Wang, Q. and Chen, K. (2020) Multi-Label Zero-Shot Human Action Recognition via Joint Latent Ranking Embedding. Neural Networks, 122, 1-23. https://doi.org/10.1016/j.neunet.2019.09.029
|
[38]
|
Wang, H., Oneata, D., Verbeek, J.J., et al. (2016) A Robust and Efficient Video Representation for Action Recognition. International Journal of Computer Vision, 119, 219-238. https://doi.org/10.1007/s11263-015-0846-5
|
[39]
|
Kong, Y. and Fu, Y. (2018) Human Action Recognition and Pre-diction: A Survey. CoRR, abs/1806.11230.
|
[40]
|
Simonyan, K. and Zisserman, A. (2014) Two-Stream Convolutional Networks for Action Recognition in Videos. 2014 NIPS Workshops, Nevada, December 2014, 568-576.
|
[41]
|
Tran, D., Bourdev, L.D., Fergus, R., et al. (2015) Learning Spatiotemporal Features with 3D Convolutional Networks. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 7-13 December 2015, 4489-4497.
https://doi.org/10.1109/ICCV.2015.510
|
[42]
|
Ji, S., Xu, W., Yang, M., et al. (2013) 3D Convolutional Neural Networks for Human Action Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35, 221-231. https://doi.org/10.1109/TPAMI.2012.59
|
[43]
|
Carreira, J. and Zisserman, A. (2017) Quo Vadis, Action Recognition? A New Model and the Kinetics Dataset. Proceedings CVPR’17, Honolulu, 21-26 July 2017, 4724-4733. https://doi.org/10.1109/CVPR.2017.502
|
[44]
|
Soomro, K., Zamir, A.R. and Shah, M. (2012) UCF101: A Dataset of 101 Human Actions Classes From Videos in the Wild.
|
[45]
|
Kuehne, H., Jhuang, H., Garrote, E., et al. (2011) HMDB: A Large Video Database for Human Motion Recognition. ICCV 2011, Barcelona, 6-13 November 2011, 2556-2563. https://doi.org/10.1109/ICCV.2011.6126543
|
[46]
|
Xian, Y., Lorenz, T., Schiele, B., et al. (2018) Fea-ture Generating Networks for Zero-Shot Learning. Proceedings CVPR’18, Salt Lake City, 18-22 June 2018, 5542-5551. https://doi.org/10.1109/CVPR.2018.00581
|
[47]
|
Verma, V.K. and Rai, P. (2017) A Simple Exponential Family Framework for Zero-Shot Learning. ECML/PKDD, Skopje, 18-22 September 2017, Vol. 10535, 792-808. https://doi.org/10.1007/978-3-319-71246-8_48
|
[48]
|
Tran, D., Wang, H., Torresani, L., et al. (2018) A Closer Look at Spatiotemporal Convolutions for Action Recognition. Proceedings CVPR’18, Salt Lake City, 18-22 June 2018, 6450-6459. https://doi.org/10.1109/CVPR.2018.00675
|
[49]
|
Niebles, J.C., Chen, C. and Li, F. (2010) Modeling Temporal Structure of Decomposable Motion Segments for Activity Classification. ECCV 2010: 11th European Confer-ence on Computer Vision, Heraklion, 5-11 September 2010, 392-405.
https://doi.org/10.1007/978-3-642-15552-9_29
|
[50]
|
Wang, L., Qiao, Y. and Tang, X. (2015) Action Recognition with Trajectory-Pooled Deep-Convolutional Descriptors. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 7-12 June 2015, 4305-4314.
https://doi.org/10.1109/CVPR.2015.7299059
|
[51]
|
Tsochantaridis, I., Joachims, T., Hofmann, T., et al. (2005) Large Margin Methods for Structured and Interdependent Output Variables. Journal of Machine Learning Research, 6, 1453-1484.
|
[52]
|
Song, J., Shen, C., Yang, Y., et al. (2018) Transductive Unbiased Embedding for Zero-Shot Learning. Proceedings CVPR’18, Salt Lake City, 18-22 June 2018, 1024-1033. https://doi.org/10.1109/CVPR.2018.00113
|
[53]
|
Gao, J., Zhang, T. and Xu, C. (2019) I Know the Relationships: Zero-Shot Action Recognition via Two-Stream Graph Convolutional Networks and Knowledge Graphs. The Thir-ty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Honolulu, 27 January-1 February 2019, 8303-8311. https://doi.org/10.1609/aaai.v33i01.33018303
|
[54]
|
Akata, Z., Reed, S.E., Walter, D., et al. (2015) Evaluation of Output Embeddings for Fine-Grained Image Classification. 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, 7-12 June 2015, 2927-2936. https://doi.org/10.1109/CVPR.2015.7298911
|
[55]
|
Xu, X., Hospedales, T.M. and Gong, S. (2015) Semantic Embedding Space for Zero-Shot Action Recognition. 2015 IEEE International Conference on Image Processing, ICIP 2015, Quebec City, 27-30 September 2015, 63-67.
https://doi.org/10.1109/ICIP.2015.7350760
|
[56]
|
Xun, X., Hospedales, T.M. and Gong, S.G. (2016) Multi-Task Zero-Shot Action Recognition with Prioritised Data Augmentation. ECCV 2016, Amsterdam, 8-16 October 2016, 343-359. https://doi.org/10.1007/978-3-319-46475-6_22
|
[57]
|
Li, Y., Hu, S. and Li, B. (2016) Recognizing Unseen Actions in a Domain-Adapted Embedding Space. 2016 IEEE International Conference on Image Processing, Phoenix, 25-28 September 2016, 4195-4199.
https://doi.org/10.1109/ICIP.2016.7533150
|
[58]
|
Xu, X., Hospedales, T.M. and Gong, S. (2017) Transductive Ze-ro-Shot Action Recognition by Word-Vector Embedding. International Journal of Computer Vision, 123, 309-333. https://doi.org/10.1007/s11263-016-0983-5
|
[59]
|
Qin, J., Liu, L., Shao, L., et al. (2017) Zero-Shot Action Recogni-tion with Error-Correcting Output Codes. Proceedings CVPR’17, Honolulu, 21-26 July 2017, 1042-1051. https://doi.org/10.1109/CVPR.2017.117
|
[60]
|
Mishra, A., Verma, V.K., Reddy, M.S.K., et al. (2018) A Generative Approach to Zero-Shot and Few-Shot Action Recognition. 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, 12-15 March 2018, 372-380. https://doi.org/10.1109/WACV.2018.00047
|
[61]
|
Zhu, Y., Long, Y., Guan, Y., et al. (2018) Towards Universal Representation for Unseen Action Recognition. Proceedings CVPR’18, Salt Lake City, 18-22 June 2018, 9436-9445. https://doi.org/10.1109/CVPR.2018.00983
|
[62]
|
Zhang, C. and Peng, Y. (2018) Visual Data Synthesis via GAN for Zero-Shot Video Classification. Proceedings of the Twen-ty-Seventh International Joint Conference on Artificial Intelligence, Stockholm, 13-19 July 2018, 1128-1134.
https://doi.org/10.24963/ijcai.2018/157
|
[63]
|
Kodirov, E., Xiang, T., Fu, Z., et al. (2015) Unsupervised Domain Adaptation for Zero-Shot Learning. 2015 IEEE International Conference on Computer Vision (ICCV), Santiago, 7-13 December 2015, 2452-2460.
https://doi.org/10.1109/ICCV.2015.282
|
[64]
|
Wang, Q. and Chen, K. (2017) Zero-Shot Visual Recognition via Bi-directional Latent Embedding. International Journal of Computer Vision, 124, 356-383. https://doi.org/10.1007/s11263-017-1027-5
|