Gradient Coordination for Quantifying and Maximizing Knowledge Transference in Multi-Task Learning
Published In
- General Chairs: Hsin-Hsi Chen, Wei-Jou (Edward) Duh, Hen-Hsen Huang
- Program Chairs: Makoto P. Kato, Josiane Mothe, Barbara Poblete
Publisher
Association for Computing Machinery
New York, NY, United States
Qualifiers
- Short-paper