Hsien-Tzu Cheng
M.S. Student, Research Assistant
Department of Electrical Engineering
National Tsing Hua University, Taiwan

Publications


Cube Padding for Weakly-Supervised Saliency Prediction in 360° Videos
Hsien-Tzu Cheng, Chun-Hung Chao, Jin-Dong Dong, Hao-Kai Wen, Tyng-Luh Liu, Min Sun
Automatic saliency prediction in 360° videos is critical for viewpoint guidance applications (e.g., Facebook 360 Guide). We propose a spatial-temporal network that is (1) trained with weak supervision and (2) tailor-made for the 360° viewing sphere. Note that most existing methods rely on annotated saliency maps for training. Most importantly, they convert the 360° sphere to equirectangular or separate Normal Field-of-View (NFoV) images, which introduces distortion and image boundaries. In contrast, we propose a simple and effective Cube Padding (CP) technique that introduces no image boundary and is applicable to entire Convolutional Neural Network (CNN) structures. Our method outperforms baseline methods in both speed and quality.
CVPR 2018
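The core idea of Cube Padding, filling each cube face's border with pixels from its adjacent faces so that convolutions never see an artificial image boundary, can be sketched roughly as follows. This is an illustrative simplification covering only the ring of four lateral faces (the top/bottom faces require per-face rotations, omitted here); the function name and array layout are my own, not from the paper.

```python
import numpy as np

def ring_pad(faces, p=1):
    """Simplified Cube Padding sketch for the four lateral cube faces.

    faces: array of shape (4, H, W, C), ordered as a ring
           (front, right, back, left).
    Each face's left/right borders are filled with the neighboring
    face's edge pixels, so a subsequent convolution sees continuous
    image content instead of an artificial boundary.
    Returns an array of shape (4, H, W + 2*p, C).
    """
    left_nbr = np.roll(faces, shift=1, axis=0)    # left_nbr[i] == faces[i-1]
    right_nbr = np.roll(faces, shift=-1, axis=0)  # right_nbr[i] == faces[i+1]
    # Prepend the left neighbor's rightmost p columns,
    # append the right neighbor's leftmost p columns.
    return np.concatenate(
        [left_nbr[:, :, -p:], faces, right_nbr[:, :, :p]], axis=2)
```

In the full method, padding of this kind would be applied before every convolution and pooling layer, which is what makes the technique applicable throughout an entire CNN rather than only at the input.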
Deep 360 Pilot: Learning a Deep Agent for Piloting through 360° Sports Videos
Yen-Chen Lin*, Hou-Ning Hu*, Ming-Yu Liu, Hsien-Tzu Cheng, Yung-Ju Chang, Min Sun
(* indicates equal contribution)
Watching a 360° sports video requires a viewer to continuously select a viewing angle, either through a sequence of mouse clicks or head movements. To relieve the viewer from this "360 piloting" task, we propose "deep 360 pilot", a deep-learning-based agent that pilots through 360° sports videos automatically. At each frame, the agent observes a panoramic image and has knowledge of previously selected viewing angles.
CVPR 2017 (Oral)
Tell Me Where to Look: Investigating Ways for Assisting Focus in 360° Video
Yen-Chen Lin, Yung-Ju Chang, Hou-Ning Hu, Hsien-Tzu Cheng, Chi-Wen Huang, Min Sun
360° videos give viewers a spherical view and an immersive experience of their surroundings. However, one challenge of watching 360° videos is continuously focusing and re-focusing on intended targets. To address this challenge, we developed two Focus Assistance techniques: Auto Pilot (directly bringing viewers to the target) and Visual Guidance (indicating the direction of the target).
CHI 2017 (Full Paper)

Selected Projects


Computer Vision for Visual Effects
Course Project
Implemented and experimented with multiple computer vision algorithms used in visual effects.
Project Website
Mar. 2016 - June 2016
Dense Trajectories Video Recognition and Acceleration
CUDA acceleration of a video motion descriptor (dense trajectories) applied to recognition tasks such as action recognition.
June 2015 - May 2016
Light-Field View Synthesis
Bachelor Project
Utilized depth maps to synthesize 2D projection images from different viewing angles for 2.5D reconstruction.
Mar. 2014 - Jan. 2015

Work Experience


M.S. Student, Research Assistant
Vision Science Lab, National Tsing Hua University, Hsinchu, Taiwan
Work on research projects related to state-of-the-art computer vision, deep learning, and human-computer interaction.
May 2015 - Present
Software Developer Intern
RE'FLEKT GmbH, Munich, Germany
Developed video object tracking and image processing algorithms for Unity-based applications in the R&D team.
Apr. 2017 - Aug. 2017