LEGO: Learning EGOcentric Action Frame Generation via Visual Instruction Tuning
Recommendations
Visual Event-Based Egocentric Human Action Recognition
Pattern Recognition and Image Analysis. This paper lies at the intersection of three research areas: human action recognition, egocentric vision, and visual event-based sensors. The main goal is the comparison of egocentric action recognition performance under either of two visual ...
Geometrical Cues in Visual Saliency Models for Active Object Recognition in Egocentric Videos
PIVP '14: Proceedings of the 1st International Workshop on Perception Inspired Video Processing. In the problem of "human sensing", videos recorded with wearable cameras give an "egocentric" view of the world, capturing details of human activities. In this paper we continue research on visual saliency for this kind of content with the goal of ...
Egocentric Activity Recognition and Localization on a 3D Map
Computer Vision – ECCV 2022. Given a video captured from a first-person perspective and the environment context of where the video is recorded, can we recognize what the person is doing and identify where the action occurs in 3D space? We address this challenging problem ...
Information
Published In
Publisher: Springer-Verlag, Berlin, Heidelberg
Qualifiers: Article