Nothing Special

Xiaohao Sun | 孙小皓

Stay humble, trust your instincts. Most importantly, act. When you come to a fork in the road, take it.

I'm a second-year Ph.D. student in computer science at Simon Fraser University, advised by Professor Angel Xuan Chang. Prior to this, I received my Master of Applied Science degree from the Electrical Engineering Department at the University of Windsor, and my Bachelor of Science degree in Mathematical and Physics Basic Science from the School of Mathematical Sciences at the University of Electronic Science and Technology of China. I am also a research assistant in the GrUVi Lab at SFU, working with Professor Angel Xuan Chang. My research interests are in 3D vision and language, generative models, and artificial intelligence.

News

  • Aug 2022 - One paper accepted at 3DV 2022
  • Jan 2022 - Joined the GrUVi Lab and started working with Prof. Angel Xuan Chang
  • Sep 2021 - Started my Ph.D. in CS at SFU

Xiaohao Sun
xiaohao_sun {at} sfu {dot} ca

PhD Student
Sep 2021 - Present
Simon Fraser University
Research Assistant (GrUVi & 3DLG)
Google Scholar
Twitter

Publications


OPDMulti: Openable Part Detection for Multiple Objects

Xiaohao Sun*, Hanxiao Jiang*, Angel X. Chang, Manolis Savva
arXiv

Openable part detection is the task of detecting the openable parts of an object in a single-view image, and predicting corresponding motion parameters. Prior work investigated the unrealistic setting where all input images only contain a single openable object. We generalize this task to scenes with multiple objects each potentially possessing openable parts, and create a corresponding dataset based on real-world scenes. We then address this more challenging scenario with OPDFormer: a part-aware transformer architecture. Our experiments show that the OPDFormer architecture significantly outperforms prior work. The more realistic multiple-object scenarios we investigated remain challenging for all methods, indicating opportunities for future work.

[Paper] [Project] [Code]


Articulated 3D Human-Object Interactions from RGB Videos: An Empirical Analysis of Approaches and Challenges

Sanjay Haresh, Xiaohao Sun, Hanxiao Jiang, Angel X. Chang, Manolis Savva
3DV 2022

Human-object interactions with articulated objects are common in everyday life. Despite much progress in single-view 3D reconstruction, it is still challenging to infer an articulated 3D object model from an RGB video showing a person manipulating the object. We canonicalize the task of articulated 3D human-object interaction reconstruction from RGB video, and carry out a systematic benchmark of four methods for this task: 3D plane estimation, 3D cuboid estimation, CAD model fitting, and free-form mesh fitting. Our experiments show that all methods struggle to obtain high accuracy results even when provided ground truth information about the observed objects. We identify key factors which make the task challenging and suggest directions for future work on this challenging 3D computer vision task.

[Paper] [Project] [Code]


Reading Line Classification Using Eye-trackers

Xiaohao Sun, Balakumar Balasingam
IEEE TIM

Eye-tracking while reading is an emerging application where the goal is to track the progression of reading. The challenges for accurate tracking of the reading progression are due to the measurement noise of the eye-tracker and the rapid and uncertain movement of the eye gaze. Solutions to this problem developed in the recent past suffer from many limitations, such as the need to know the text context and the need to have a batch of one page of data for classification. In this article, we relax these assumptions and develop a novel, real-time line classification approach. The proposed solution consists of an improved slip-Kalman smoother (slip-KS) that is designed to detect new line returns and to reduce the variance in the eye-gaze measurements. After preprocessing of the data by the slip-KS, a classification approach is employed to track the lines being read in real-time. Two such classifiers are demonstrated in this article; one is based on Gaussian discriminants, and the other is based on support vector machines. The proposed approaches were tested using realistic eye-gaze data from seven participants. Analysis based on the collected data using the proposed algorithms shows significantly improved performance over existing methods.

[Paper]
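For readers curious how the pipeline described above might look in code, here is a minimal, hypothetical sketch, not the paper's implementation: a simple constant-gain recursive smoother stands in for the slip-Kalman smoother, and an SVM then assigns each smoothed vertical gaze sample to a text line. The synthetic data, function names, and parameters are all illustrative assumptions.

```python
# Hypothetical sketch of a reading-line classification pipeline
# (illustrative only; not the implementation from the paper).
import numpy as np
from sklearn.svm import SVC

def smooth_gaze(y, gain=0.3):
    """Constant-gain recursive smoother over vertical gaze samples.
    A stand-in for the slip-Kalman smoother described in the paper."""
    smoothed = np.empty_like(y, dtype=float)
    est = y[0]
    for i, measurement in enumerate(y):
        est = est + gain * (measurement - est)  # blend previous estimate with new measurement
        smoothed[i] = est
    return smoothed

# Synthetic gaze data: 4 text lines, 100 noisy samples per line.
rng = np.random.default_rng(0)
line_ids = np.repeat(np.arange(4), 100)                       # ground-truth line labels
line_y = 50.0 + 30.0 * line_ids                               # nominal y-position of each line (pixels)
gaze_y = line_y + rng.normal(scale=12.0, size=line_y.shape)   # simulated eye-tracker noise

# Preprocess, then train and evaluate an SVM line classifier.
features = smooth_gaze(gaze_y).reshape(-1, 1)
idx = rng.permutation(len(gaze_y))
train, test = idx[:300], idx[300:]

clf = SVC(kernel="rbf", C=1.0)
clf.fit(features[train], line_ids[train])
print("held-out accuracy:", clf.score(features[test], line_ids[test]))
```

The same preprocess-then-classify structure applies whether the classifier is an SVM or a Gaussian discriminant, which is the comparison carried out in the papers.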


Algorithms for Reading Line Classification

Xiaohao Sun, Balakumar Balasingam
IEEE SMC

Eye-tracking has been emerging as a useful tool in human-computer interaction. However, the state of the art in eye-tracking applications suffers from a significant amount of measurement noise. Also, the inherent nature of the eye-gaze movement adds to the difficulty of obtaining valuable information from eye-gaze measurements. In this paper, a novel classification approach is proposed to classify the lines being read based on eye-gaze measurements. The proposed approach consists of a novel Kalman smoother-based preprocessing procedure to separate eye-gaze data corresponding to different text lines and to reduce variance. The preprocessed data is then used to train two different classifiers, one based on Gaussian discriminants and the other based on support vector machines. The resulting line-classification approach is shown to be superior in performance compared to other recent approaches.

[Paper]