Time Slice Video Synthesis by Robust Video Alignment

SIGGRAPH 2017
Zhaopeng Cui¹, Oliver Wang², Ping Tan¹, Jue Wang²
¹Simon Fraser University, ²Adobe Research

Abstract

Time slice photography is a popular effect that visualizes the passing of time by aligning and stitching together multiple images of the same scene, captured at different times, into a single composite image. Extending this effect to video is a difficult problem, and one where existing solutions have had only limited success. In this paper, we propose an easy-to-use and robust system for creating time slice videos from a wide variety of consumer videos. The main technical challenge we address is how to align videos taken at different times with substantially different appearances, in the presence of moving objects and moving cameras with slightly different trajectories. To achieve a temporally stable alignment, we perform a mixed 2D-3D alignment, in which a rough 3D reconstruction is used to generate sparse constraints that are then integrated into a pixelwise 2D registration. We apply our method to a number of challenging scenarios and show that we achieve higher-quality registration than prior work. We also propose a 3D user interface that allows the user to easily specify how multiple videos should be composited in space and time. Finally, we show that our alignment method can be applied to more general video editing and compositing tasks, such as object removal.
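The abstract above outlines the core alignment idea: sparse correspondences derived from a rough 3D reconstruction guide a dense, pixelwise 2D registration. The sketch below illustrates one simple way such sparse constraints could be propagated into a dense 2D warp field; it is not the authors' implementation, and the grid size, smoothness weight, and toy correspondences are illustrative assumptions.

# A minimal sketch (not the paper's pipeline) of turning sparse displacement
# constraints into a dense 2D warp: the warp should match the constraints
# where they exist and vary smoothly everywhere else.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def dense_warp_from_sparse(h, w, pts, disps, lam=1.0):
    """Interpolate sparse displacements disps (N x 2), given at integer pixel
    locations pts (N x 2 as (row, col)), into a dense h x w x 2 warp field,
    using first-order (Laplacian-style) smoothness as the regularizer."""
    n = h * w
    idx = lambda r, c: r * w + c

    # Smoothness term: differences between horizontally/vertically adjacent pixels.
    rows, cols, vals = [], [], []
    eq = 0
    for r in range(h):
        for c in range(w):
            if c + 1 < w:
                rows += [eq, eq]; cols += [idx(r, c), idx(r, c + 1)]; vals += [1.0, -1.0]; eq += 1
            if r + 1 < h:
                rows += [eq, eq]; cols += [idx(r, c), idx(r + 1, c)]; vals += [1.0, -1.0]; eq += 1
    A_smooth = sp.coo_matrix((vals, (rows, cols)), shape=(eq, n))
    b_smooth = np.zeros(eq)

    # Data term: soft equality constraints at the sparse correspondence locations.
    A_data = sp.coo_matrix(
        (np.full(len(pts), lam), (np.arange(len(pts)), [idx(r, c) for r, c in pts])),
        shape=(len(pts), n))

    warp = np.zeros((h, w, 2))
    for d in range(2):  # solve x- and y-displacements independently
        A = sp.vstack([A_smooth, A_data]).tocsr()
        b = np.concatenate([b_smooth, lam * disps[:, d]])
        sol = spla.lsqr(A, b)[0]
        warp[..., d] = sol.reshape(h, w)
    return warp

# Toy usage: three sparse correspondences on a 20 x 30 grid (illustrative values).
pts = np.array([[2, 3], [10, 15], [18, 27]])
disps = np.array([[1.0, -0.5], [0.5, 0.0], [-1.0, 1.0]])
warp = dense_warp_from_sparse(20, 30, pts, disps)
print(warp.shape)  # (20, 30, 2)

In the paper's full system the dense term comes from photometric registration between the two videos; here a plain smoothness prior stands in for it so the example stays self-contained.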

paper (17.8 MB) | paper, small version (1.19 MB) | supplementary materials (all results) (624 MB)

Main Supplemental Video


Acknowledgements

We thank the Flickr user Miguel Mendez, whose photograph we use under a Creative Commons license. We are grateful to Shuaicheng Liu and Kaimo Lin for providing the results of their methods for our comparisons, and to Renjiao Yi for her help in capturing the data. We would also like to thank all the reviewers for their constructive comments. This study is partially supported by Canada NSERC Discovery Grant 31-611664, Discovery Accelerator Supplement 31-611663, and a gift grant from Adobe.