
Forget less, count better: a domain-incremental self-distillation learning benchmark for lifelong crowd counting


Frontiers of Information Technology & Electronic Engineering

Abstract

Crowd counting has important applications in public safety and pandemic control. A robust and practical crowd counting system must be capable of continuously learning from newly incoming domain data in real-world scenarios instead of fitting only one domain. Off-the-shelf methods have several drawbacks when handling multiple domains: (1) performance on old domains degrades (sometimes dramatically) after the model is trained on images from new domains, owing to discrepancies in the intrinsic data distributions of different domains; this is known as catastrophic forgetting; (2) a model well trained on a specific domain performs imperfectly on other, unseen domains because of domain shift; (3) storage overhead grows linearly, whether all data are mixed for training or a separate model is simply trained for each new domain as it becomes available. To overcome these issues, we investigate a new crowd counting task in an incremental-domain training setting, called lifelong crowd counting. Its goal is to alleviate catastrophic forgetting and improve generalization ability using a single model that is updated with the incremental domains. Specifically, we propose a self-distillation learning framework as a benchmark (forget less, count better, or FLCB) for lifelong crowd counting, which helps the model leverage previously learned, meaningful knowledge in a sustainable manner for better crowd counting and mitigates forgetting when new data arrive. A new quantitative metric, normalized Backward Transfer (nBwT), is developed to evaluate the degree of forgetting of the model in the lifelong learning process. Extensive experimental results demonstrate the superiority of the proposed benchmark in achieving a low catastrophic forgetting degree and strong generalization ability.




Author information

Corresponding authors

Correspondence to Fei-Yue Wang (王飞跃) or Junping Zhang (张军平).

Additional information

Project supported by the National Natural Science Foundation of China (Nos. 62176059, 62101136, and U1811463), the Shanghai Municipal Science and Technology Major Project (No. 2018SHZDZX01), Zhangjiang Lab, the Shanghai Municipal Science and Technology Project (No. 20JC1419500), the Shanghai Sailing Program (No. 21YF1402800), the Natural Science Foundation of Shanghai (No. 21ZR1403600), and the Shanghai Center for Brain Science and Brain-inspired Technology.

Contributors

Jiaqi GAO designed the research and drafted the paper. Jingqi LI contributed ideas for the experiments and analysis. Jingqi LI, Hongming SHAN, Yanyun QU, James Z. WANG, Fei-Yue WANG, and Junping ZHANG helped organize and revise the paper. Jiaqi GAO, Hongming SHAN, and Junping ZHANG finalized the paper.

Compliance with ethics guidelines

Jiaqi GAO, Jingqi LI, Hongming SHAN, Yanyun QU, James Z. WANG, Fei-Yue WANG, and Junping ZHANG declare that they have no conflict of interest.

Data availability

The data that support the findings of this study are available from the corresponding authors upon reasonable request.

List of supplementary materials

1 Domain concept and gaps of different datasets

2 Effect of different training orders

Fig. S1 Data distributions of four benchmark datasets

Table S1 Forgetting degree comparison results with different training orders

Table S2 Generalization comparison results with different training orders on the unseen JHU-Crowd++ dataset



About this article


Cite this article

Gao, J., Li, J., Shan, H. et al. Forget less, count better: a domain-incremental self-distillation learning benchmark for lifelong crowd counting. Front Inform Technol Electron Eng 24, 187–202 (2023). https://doi.org/10.1631/FITEE.2200380


