Abstract
The objective of Multi-Task Learning (MTL) is to boost learning performance by simultaneously learning multiple related tasks. Identifying and modeling the relationships among tasks is essential for multi-task learning. Most previous works assume that related tasks share a common structure. However, this assumption is often too restrictive: in some real-world applications, related tasks share knowledge only partially at the feature level, i.e., the sets of relevant features of related tasks may only partially overlap. In this paper, we propose a new MTL approach that exploits this partial relationship among tasks, selectively sharing information across tasks while producing a task-specific sparsity pattern for each task. This increased flexibility allows the model to capture complex structure among tasks. An efficient alternating optimization algorithm is developed to fit the model. Experimental studies on real-world data demonstrate that the proposed method significantly improves learning performance by exploiting the partial relationship across tasks at the feature level.
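The abstract describes the model only at a high level. As a rough, hedged illustration of the partial-sharing idea, the sketch below assumes a commonly used decomposition of the task weight matrix into a row-sparse shared component plus an element-sparse task-specific component, fitted by alternating proximal-gradient updates on a squared loss. The decomposition, regularizers, step sizes, and all function and variable names are illustrative assumptions, not the authors' actual formulation.

import numpy as np

def soft_threshold(x, tau):
    # Element-wise soft-thresholding (proximal operator of the l1 norm).
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def row_soft_threshold(X, tau):
    # Row-wise group soft-thresholding (proximal operator of the l2,1 norm).
    norms = np.linalg.norm(X, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return X * scale

def partially_shared_mtl(Xs, ys, lam_shared=0.1, lam_task=0.1,
                         step=1e-2, n_iter=200):
    # Illustrative alternating proximal-gradient scheme for W = P + Q,
    # with P row-sparse (features shared across tasks) and
    # Q element-sparse (task-specific features).
    d = Xs[0].shape[1]
    T = len(Xs)
    P = np.zeros((d, T))   # shared component (feature-level sparsity)
    Q = np.zeros((d, T))   # task-specific component (element-level sparsity)
    for _ in range(n_iter):
        # Gradient of the averaged squared loss w.r.t. W = P + Q, per task.
        G = np.zeros((d, T))
        for t in range(T):
            w_t = P[:, t] + Q[:, t]
            G[:, t] = Xs[t].T @ (Xs[t] @ w_t - ys[t]) / len(ys[t])
        # Step 1: update the shared part with an l2,1 proximal step (Q fixed).
        P = row_soft_threshold(P - step * G, step * lam_shared)
        # Step 2: recompute the gradient and update the task-specific part
        # with an l1 proximal step (P fixed).
        for t in range(T):
            w_t = P[:, t] + Q[:, t]
            G[:, t] = Xs[t].T @ (Xs[t] @ w_t - ys[t]) / len(ys[t])
        Q = soft_threshold(Q - step * G, step * lam_task)
    return P, Q

# Toy usage: three related regression tasks with overlapping relevant features.
rng = np.random.default_rng(0)
d, n, T = 20, 100, 3
Xs = [rng.standard_normal((n, d)) for _ in range(T)]
w_shared = np.zeros(d)
w_shared[:5] = 1.0                                   # features shared by all tasks
ys = []
for t in range(T):
    w_t = w_shared.copy()
    w_t[5 + t] = 2.0                                 # one private feature per task
    ys.append(Xs[t] @ w_t + 0.1 * rng.standard_normal(n))
P, Q = partially_shared_mtl(Xs, ys)
print("nonzero rows of shared part:", np.flatnonzero(np.abs(P).sum(axis=1) > 1e-3))

In this kind of decomposition, the row support of the shared part marks features reused across tasks, while the task-specific part lets each task keep or drop additional features, which is one concrete way to obtain the partially overlapping sparsity patterns the abstract refers to.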
Acknowledgments
The work described in this paper was partially supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China [Project No. CityU 11300715], and a grant from City University of Hong Kong [Project No. 7004674].
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Liu, C., Cao, WM., Zheng, CT., Wong, HS. (2017). Learning with Partially Shared Features for Multi-Task Learning. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, ES. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol 10638. Springer, Cham. https://doi.org/10.1007/978-3-319-70139-4_10
DOI: https://doi.org/10.1007/978-3-319-70139-4_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-70138-7
Online ISBN: 978-3-319-70139-4