Abstract
Multi-task learning is a subfield of machine learning in which a shared model is trained to solve multiple tasks simultaneously. Instead of training a separate model for each task, multi-task learning requires only a single model with shared parameters. This greatly reduces the number of parameters in machine learning models and thus lowers their computational and storage requirements. When multi-task learning is applied to deep neural networks (DNNs), the model must be compressed further, since the size of even a single DNN remains a critical challenge for many computing systems, especially edge platforms. However, when model compression is applied to multi-task learning, it is challenging to maintain the performance of all tasks. To address this challenge, we propose a min-max optimization framework for training highly compressed multi-task DNN models. The framework automatically adjusts learnable weighting factors for the different tasks, ensuring that the task with the worst performance across all tasks is optimized.
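To illustrate the min-max idea described above, the sketch below alternates gradient descent on the shared model parameters with gradient ascent on softmax-parameterized task weights, so that tasks with larger losses receive larger weights. This is a minimal PyTorch-style sketch under our own assumptions (the network, losses, optimizers, and all names such as `MultiTaskNet` and `weight_logits` are hypothetical placeholders), not the paper's actual implementation; in particular, the compression/pruning step is omitted.

```python
import torch
import torch.nn as nn

# Hypothetical shared backbone with one head per task (illustrative only).
class MultiTaskNet(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_tasks):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(hidden_dim, 1) for _ in range(num_tasks)]
        )

    def forward(self, x):
        z = self.backbone(x)
        return [head(z) for head in self.heads]

num_tasks = 3
model = MultiTaskNet(in_dim=16, hidden_dim=32, num_tasks=num_tasks)

# Learnable logits parameterize the task weights on the probability simplex.
weight_logits = torch.zeros(num_tasks, requires_grad=True)

opt_model = torch.optim.SGD(model.parameters(), lr=1e-2)  # min player
opt_weights = torch.optim.SGD([weight_logits], lr=1e-2)   # max player
criterion = nn.MSELoss()

for step in range(100):
    # Dummy batch; real training would draw from per-task data loaders.
    x = torch.randn(8, 16)
    targets = [torch.randn(8, 1) for _ in range(num_tasks)]

    task_losses = torch.stack(
        [criterion(out, tgt) for out, tgt in zip(model(x), targets)]
    )
    weights = torch.softmax(weight_logits, dim=0)
    weighted_loss = (weights * task_losses).sum()

    opt_model.zero_grad()
    opt_weights.zero_grad()
    weighted_loss.backward()

    # Min step: gradient descent on the shared model parameters.
    opt_model.step()

    # Max step: gradient ascent on the weighting factors; negating the
    # gradient turns the SGD descent step into an ascent step, pushing
    # weight toward the currently worst-performing task.
    weight_logits.grad.neg_()
    opt_weights.step()
```

Parameterizing the weights through a softmax keeps them on the probability simplex without an explicit projection step, and alternating the two updates approximates the inner maximization, steering optimization effort toward the worst-case task.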
| Original language | English |
|---|---|
| Title of host publication | Proceedings - IEEE International Symposium on Circuits and Systems |
| Place of Publication | USA |
| Publisher | Institute of Electrical and Electronics Engineers Inc. |
| ISBN (Electronic) | 9798350330991 |
| DOIs | |
| State | Published - Jan 1 2024 |
| Event | 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024 - Singapore, Singapore. Duration: May 19 2024 → May 22 2024 |
Conference
| Conference | 2024 IEEE International Symposium on Circuits and Systems, ISCAS 2024 |
|---|---|
| Country/Territory | Singapore |
| City | Singapore |
| Period | 05/19/24 → 05/22/24 |
Keywords
- deep learning
- model compression
- multi-task learning
- weight pruning