DOI: 10.1145/3461353.3461359
research-article
Open access

Optical Flow Estimation with Foreground Attention Guided Network

Published: 04 September 2021

Abstract

Optical flow describes the variations between adjacent images of a sequence. Although pixels belonging to the same object have similar displacements, most existing optical flow estimation methods process every pixel indiscriminately and neglect the semantic information of pixels. Considering that foreground objects usually have large displacements and are salient to the viewer, in this paper we propose a foreground attention guided network that strengthens foreground target features for optical flow estimation. Specifically, we first adopt a foreground attention network to obtain a map of foreground objects. The foreground attention map is then used to strengthen features at multiple scales via 3D convolution during optical flow estimation. Finally, the strengthened features are concatenated with the original feature maps in a multi-scale deconvolution operation to produce the final optical flow. To this end, we pretrain our proposed framework on the Flying Chairs dataset and then conduct comparison experiments on the MPI Sintel and KITTI benchmark datasets. The experimental results verify that our proposed framework is comparable with state-of-the-art methods while using a compact network.
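The core idea sketched in the abstract, a foreground attention map used to emphasize encoder features, which are then concatenated with the originals before decoding, can be illustrated in miniature. The following is a minimal NumPy sketch of that general mechanism, not the paper's implementation: the function names, the `alpha` gain, and the `(C, H, W)` shapes are assumptions for illustration, and the paper's multi-scale 3D convolutions and deconvolutions are omitted.

```python
import numpy as np

def foreground_strengthen(features, attention, alpha=1.0):
    """Emphasize feature responses at foreground pixels.

    features:  (C, H, W) feature map from an encoder stage.
    attention: (H, W) foreground map in [0, 1] (1 = foreground).
    alpha:     gain controlling how strongly foreground is boosted.
    """
    # Broadcasting: (H, W) expands across the channel axis.
    return features * (1.0 + alpha * attention)

def fuse_with_original(original, strengthened):
    """Concatenate strengthened and original features along channels,
    as a decoder stage would before the next deconvolution."""
    return np.concatenate([original, strengthened], axis=0)

# Toy example: a single foreground pixel at (2, 2).
feats = np.ones((4, 8, 8), dtype=np.float32)
attn = np.zeros((8, 8), dtype=np.float32)
attn[2, 2] = 1.0

boosted = foreground_strengthen(feats, attn, alpha=0.5)
fused = fuse_with_original(feats, boosted)
```

Here foreground responses are scaled up while background features pass through unchanged, and the channel-wise concatenation preserves the original features alongside the strengthened ones, matching the fusion step the abstract describes at each scale.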




        Published In

        ICIAI '21: Proceedings of the 2021 5th International Conference on Innovation in Artificial Intelligence
        March 2021
        246 pages
        ISBN:9781450388634
        DOI:10.1145/3461353

        Publisher

        Association for Computing Machinery

        New York, NY, United States


        Author Tags

        1. attention
        2. foreground guided
        3. optical flow

        Qualifiers

        • Research-article
        • Research
        • Refereed limited

        Funding Sources

        • China Postdoctoral Science Foundation under Grant
• National Natural Science Foundation of China under Grant
        • National Postdoctoral Innovative Talents Support Program
        • Nature Foundation of Liaoning Province of China under Grant

        Conference

        ICIAI 2021
