DOI: 10.1007/978-3-642-33885-4_60
Technical demonstration on model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes

Published: 07 October 2012

Abstract

In this technical demonstration, we will show our framework for the automatic modeling, detection, and tracking of arbitrary texture-less 3D objects with a Kinect. Detection is based mainly on the recent template-based LINEMOD approach [1], while the automatic template learning from reconstructed 3D models, the fast pose estimation, and the quick and robust removal of false positives are novel additions.
We will demonstrate each step of our pipeline, starting with the fast reconstruction of arbitrary 3D objects, followed by the automatic template learning and the robust detection and pose estimation of the reconstructed objects in real time. As we will show, this makes our framework suitable for object manipulation, e.g., in robotics applications.
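The template-matching core referenced above (LINEMOD [1]) scores a template by comparing quantized image-gradient orientations at a sparse set of object pixels. The following is a minimal sketch of that scoring idea, assuming a simple 8-bin orientation quantization over NumPy arrays; the function names and binning scheme are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def quantize_orientations(grad_angle_deg, num_bins=8):
    """Map gradient directions (degrees) to discrete bins, ignoring sign
    (orientations are taken modulo 180 degrees, as in LINEMOD-style matching)."""
    bin_width = 180.0 / num_bins
    return np.floor((grad_angle_deg % 180.0) / bin_width).astype(int) % num_bins

def template_similarity(template_oris, image_oris):
    """Fraction of valid template pixels whose quantized orientation matches
    the image patch; -1 marks template pixels outside the object silhouette."""
    valid = template_oris >= 0
    matches = (template_oris == image_oris) & valid
    return matches.sum() / max(valid.sum(), 1)

# Toy example: a 2x2 template matched against a perfectly aligned patch.
template = np.array([[0, 4], [-1, 7]])  # quantized orientations, -1 = background
patch    = np.array([[0, 4], [3, 7]])   # background pixel differs but is ignored
score = template_similarity(template, patch)  # -> 1.0
```

In the full system, such a score would be evaluated densely over the image for many templates (one per sampled viewpoint of the reconstructed 3D model), with the best-scoring matches passed on to pose estimation and false-positive removal.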

References

[1]
Hinterstoisser, S., Holzer, S., Cagniart, C., Ilic, S., Konolige, K., Navab, N., Lepetit, V.: Multimodal Templates for Real-Time Detection of Texture-Less Objects in Heavily Cluttered Scenes. In: ICCV (2011).
[2]
Newcombe, R.A., Izadi, S., Hilliges, O., Molyneaux, D., Kim, D., Davison, A.J., Kohli, P., Shotton, J., Hodges, S., Fitzgibbon, A.: KinectFusion: Real-Time Dense Surface Mapping and Tracking. In: ISMAR (2011).
[3]
Anonymous, Authors: Anonymous Title. In: submitted to ACCV (2012).
[4]
Pan, Q., Reitmayr, G., Drummond, T.: ProFORMA: Probabilistic Feature-based On-line Rapid Model Acquisition. In: BMVC (2009).
[5]
Weise, T., Wismer, T., Leibe, B., Van Gool, L.: In-hand Scanning with Online Loop Closure. In: International Workshop on 3-D Digital Imaging and Modeling (2009).
[6]
Newcombe, R.A., Lovegrove, S.J., Davison, A.J.: DTAM: Dense Tracking and Mapping in Real-Time. In: ICCV (2011).

Published In

ECCV'12: Proceedings of the 12th International Conference on Computer Vision - Volume Part III
October 2012, 682 pages
ISBN: 9783642338847
Editors: Andrea Fusiello, Vittorio Murino, Rita Cucchiara

Sponsors

• Adobe
• TOYOTA
• Google Inc.
• IBM Research
• Microsoft Research

Publisher

Springer-Verlag, Berlin, Heidelberg
