Welcome to GrUVi @ CS.SFU!

We are an interdisciplinary team of researchers working in visual computing, in particular computer graphics and computer vision. Current areas of focus include 3D and robotic vision, 3D printing and content creation, animation, AR/VR, geometric and image-based modelling, machine learning, natural phenomena, and shape analysis. Our research frequently appears in top venues such as SIGGRAPH, CVPR, and ICCV (we rank #11 in the world in terms of top publications in visual computing, as of 7/2020), and we collaborate widely with industry and academia (e.g., Adobe Research, Google, MSRA, Princeton, Stanford, and Washington). Our faculty and students have won numerous honours and awards, including FRSC, the Alain Fournier Best Thesis Award, a Google Faculty Award, TR35@Singapore, an NSERC Discovery Accelerator, and several best paper awards from ECCV, SCA, SGP, etc. GrUVi alumni have gone on to take up faculty positions in Canada, the US, and Asia, while others now work at companies including Apple, EA, Facebook, Google, IBM, and Microsoft.


Talk by Yotam Nitzan from Tel Aviv University

May 5th, 2022

Title: MyStyle: A Personalized Generative Prior

Time: Friday, May 6th at 11:00 AM PST

Abstract: Deep generative models have proved to be successful for many image-to-image applications. Such models hallucinate information based on their large and diverse training datasets. Therefore, when enhancing or editing a portrait image, the model produces a generic and plausible output, but often it isn’t the person who actually appears in the image. In this talk, I’ll present our latest work, MyStyle, which introduces the notion of a personalized generative model. Trained on ~100 images of the same individual, MyStyle learns a personalized prior, customized to their unique appearance. This prior is then leveraged to solve ill-posed image enhancement and editing tasks such as super-resolution, inpainting, and changing the head pose.
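For intuition, here is a minimal numpy sketch of how a generative prior can solve an ill-posed task like inpainting. It is purely illustrative, not MyStyle's actual method (which fine-tunes StyleGAN on a person's photos): a toy linear "generator" stands in for the personalized model, and fitting its latent code to the visible pixels alone lets it fill in the missing region.

```python
import numpy as np

# Toy stand-in for a personalized generator: a fixed linear map from a
# 4-D latent code w to a 16-pixel "image". Hypothetical and illustrative only.
rng = np.random.default_rng(1)
G = rng.normal(size=(16, 4))

mask = np.ones(16)
mask[:6] = 0.0                        # the first 6 pixels are missing

def inpaint(x_observed, steps=5000, lr=0.01):
    """Fit the latent code to the visible pixels only, then decode everything."""
    w = np.zeros(G.shape[1])
    for _ in range(steps):
        residual = mask * (G @ w - x_observed)   # error on observed pixels only
        w -= lr * 2.0 * G.T @ residual           # gradient of the masked squared error
    return G @ w                                 # the generator also fills the hole

w_true = rng.normal(size=4)
x_full = G @ w_true                   # ground-truth "image"
x_holed = mask * x_full               # what we actually observe
x_recon = inpaint(x_holed)
```

Because the generator can only produce images in its (here 4-dimensional) range, matching the observed pixels pins down the whole image, masked region included — the essence of using a prior to resolve an ill-posed problem.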

MyStyle Paper

Yotam Nitzan’s personal webpage



Gruviers Receive Awards at Graphics Interface 2022.

May 4th, 2022

Congratulations to Hao (Richard) Zhang and Manolis Savva for receiving awards at Graphics Interface 2022. Richard received the 2022 CHCCS/SCDHM Achievement Award of the Canadian Human-Computer Communications Society, recognizing his numerous high-impact contributions to computer graphics, including geometric modeling, shape analysis, geometric deep learning, and computational design and fabrication. Manolis received the 2022 Early Career Researcher Award; he has established himself as a central figure in topics at the intersection of computer graphics, 3D sensing, and machine learning. To learn more about the research contributions of Richard and Manolis, please check out here and here.



Gruviers have 12 Accepted Papers at CVPR 2022.

March 2nd, 2022

Congratulations to all Gruviers who are publishing their work at CVPR 2022. CVPR is the premier conference on computer vision and will be held in New Orleans this year. To see a sample of the work that GrUVi will be presenting, check out here and here.



We Wish Everyone a Very Happy New Year.

Dec 20th, 2021

We wrap up 2021 with great achievements and look forward to the new year ahead. In 2021, Gruviers published their work at many first-tier conferences: CVPR (12 papers), SIGGRAPH and SIGGRAPH Asia (4 papers), ICCV (4 papers), Eurographics, and NeurIPS. Congratulations to all Gruviers for their hard work.



Talk by Or Perel from Tel Aviv University

Oct 19, 2021

Title: SAPE: Spatially-Adaptive Progressive Encoding for Neural Optimization

Time: Wednesday, Nov 3, 1:30 PM

Abstract: Multilayer perceptrons (MLPs) are known to struggle with learning functions of high frequencies, particularly in cases with wide frequency bands. We present a spatially adaptive progressive encoding (SAPE) scheme for input signals of MLP networks, which enables them to better fit a wide range of frequencies without sacrificing training stability or requiring any domain-specific preprocessing. SAPE gradually unmasks signal components with increasing frequencies as a function of time and space. The progressive exposure of frequencies is monitored by a feedback loop throughout the neural optimization process, allowing changes to propagate at different rates among local spatial portions of the signal space. We demonstrate the advantage of SAPE on a variety of domains and applications, including regression of low dimensional signals and images, representation learning of occupancy networks, and a geometric task of mesh transfer between 3D shapes.
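The core idea — progressively unmasking the frequencies of a positional encoding, at a per-location rate — can be sketched in a few lines of numpy. This is a simplified illustration under assumed names and a simple linear schedule; the paper's actual schedule and feedback loop are more involved.

```python
import numpy as np

def fourier_encoding(x, num_freqs):
    """Standard positional encoding: [sin(2^k pi x), cos(2^k pi x)] per frequency k."""
    feats = []
    for k in range(num_freqs):
        feats.append(np.sin(2.0**k * np.pi * x))
        feats.append(np.cos(2.0**k * np.pi * x))
    return np.stack(feats, axis=-1)                  # shape (..., 2 * num_freqs)

def progressive_mask(t, num_freqs, tau=0.25):
    """Soft mask that exposes higher frequencies as progress t in [0, 1] grows.
    Frequency k ramps from 0 to 1 once t passes k / num_freqs (illustrative schedule)."""
    k = np.repeat(np.arange(num_freqs), 2)           # each frequency has a sin and a cos
    return np.clip((t - k / num_freqs) / tau, 0.0, 1.0)

def sape_encoding(x, t_spatial, num_freqs=4):
    """Spatially adaptive version: each input location carries its own progress t,
    so different regions of the signal expose different frequency bands."""
    enc = fourier_encoding(x, num_freqs)                                   # (N, 2F)
    m = np.stack([progressive_mask(t, num_freqs) for t in t_spatial])      # (N, 2F)
    return enc * m

x = np.linspace(0.0, 1.0, 5)
t = np.array([0.0, 0.25, 0.5, 0.75, 1.0])            # per-location progress
feats = sape_encoding(x, t, num_freqs=4)
```

Locations with t = 0 see a fully masked (all-zero) encoding, while locations with t = 1 see the full frequency band — in SAPE, that progress is driven per region by a feedback loop on the optimization.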

SAPE Paper

Or Perel's personal webpage


Zhiqin Chen Receives Google PhD Fellowship.

September 24th, 2021

Congratulations to Zhiqin Chen for receiving a PhD Fellowship from Google. The Google PhD Fellowship Program was created to recognize outstanding graduate students doing exceptional and innovative research in areas relevant to computer science and related fields. Fellowships support promising PhD candidates of all backgrounds who seek to influence the future of technology. To learn more about Zhiqin's research please visit here.


Akshay Gadi Patil Receives ICCV 2021 Outstanding Reviewer Award.

August 31st, 2021

Congratulations to Akshay for receiving the Outstanding Reviewer Award at ICCV 2021. To learn more about Akshay's work please visit here.


Talk by Tel Aviv students Or Patashnik and Yuval Alaluf

Aug 5, 2021

Title: Recent Advancements in StyleGAN Inversion

Time: Wednesday, August 11, 10AM

Abstract: StyleGAN has recently been established as the state-of-the-art unconditional generator, synthesizing images of phenomenal realism and fidelity. With its rich semantic space, many works have attempted to understand and control StyleGAN’s latent representations with the goal of performing image manipulations. To perform manipulations on real images, however, one must learn to “invert” the GAN and encode a given image into StyleGAN’s latent space, which remains an open challenge. In this talk, we will discuss recent techniques and advancements in GAN Inversion and explore their importance for real image editing applications. In addition, going beyond the inversion task, we demonstrate how StyleGAN can be used for performing a wide range of image-to-image translation tasks.
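To make the inversion task concrete, here is a minimal sketch with a toy linear "generator". It is purely illustrative — real StyleGAN inversion optimizes in the W/W+ latent space against a deep network (or trains an encoder) with perceptual losses — but the optimization loop has the same shape: given only an image, recover a latent code whose generated output reproduces it.

```python
import numpy as np

# Toy stand-in for a pretrained generator: a fixed linear map from a 4-D
# latent code to a 16-D "image". Hypothetical; only the loop structure
# mirrors optimization-based GAN inversion.
rng = np.random.default_rng(0)
G = rng.normal(size=(16, 4))

def generate(w):
    return G @ w

def invert(x_target, steps=2000, lr=0.01):
    """Optimization-based inversion: gradient descent on ||G(w) - x||^2."""
    w = np.zeros(G.shape[1])
    for _ in range(steps):
        residual = generate(w) - x_target
        w -= lr * 2.0 * G.T @ residual    # analytic gradient of the squared error
    return w

w_true = rng.normal(size=4)
x = generate(w_true)                      # the "real image" we want to edit
w_hat = invert(x)                         # encode it back into latent space
```

Once `w_hat` is found, edits are applied in latent space and decoded back through the generator — which is why faithful inversion is the prerequisite for real-image editing that the talk discusses.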

Bios: Or and Yuval are both graduate students studying Computer Science at Tel-Aviv University under the supervision of Professor Daniel Cohen-Or and have collaborated on numerous works in the past year. Their main interests lie in the field of Computer Vision with recent work centered around image generation and manipulation.

https://orpatashnik.github.io/

https://yuval-alaluf.github.io/


Nelson Nauata Receives Borealis AI 2021 Fellowship.

July 23rd, 2021

Congratulations to Nelson Nauata for receiving a Borealis AI 2021 Fellowship. Only ten students from the top academic institutions in Canada received fellowships this year, all with the goal of contributing to the advancement of artificial intelligence and machine learning. To learn more about the award and Nelson’s contributions check out here.


GrUVi making waves at CVPR 2021

Jun 19, 2021

CVPR, the premier conference on computer vision, will be held virtually this year (June 19-25). GrUVi lab will once again have an incredible showing at CVPR, with 16 technical papers, 2 invited talks, 4 co-organized workshops, and 1 hosted challenge!



Workshop co-organization

GrUViers will co-organize 4 workshops featuring state-of-the-art research and will host one challenge:

Invited workshop talks

Yasutaka Furukawa will give a talk at the “Computer Vision in the Built Environment” workshop, while Manolis Savva will give a talk at the “3D Vision and Robotics” workshop.

Technical Papers and GrUVi authors

Congratulations to all authors of the accepted papers! The full list of papers featured at CVPR 2021 can be accessed here.


Talk by Yang Wang

April 23, 2021

Talk title: Self-Adaptive Visual Learning

Time: April 23, from 3:30 to 4:30PM (PST)

Abstract: There have been significant advances in computer vision in the past few years. Despite the success, current computer vision systems are still hard to use or deploy in many real-world scenarios. In particular, current computer vision systems usually learn a generic model. But in real-world applications, a single generic model is often not powerful enough to handle the diverse scenarios. In this talk, I will introduce some of our recent work on self-adaptive visual learning. Instead of learning and deploying one generic model, our goal is to learn a model that can effectively adapt itself to different environments during testing. I will present several computer vision applications, such as crowd counting, anomaly detection, and personalized highlight detection.

Bio: Yang Wang is an associate professor in the Department of Computer Science, University of Manitoba. He is currently on leave and working as the Chief Scientist in Computer Vision at Noah's Ark Lab, Huawei Technologies Canada. He received his PhD from Simon Fraser University, his MSc from the University of Alberta, and his BEng from the Harbin Institute of Technology. Before joining UManitoba, he worked as an NSERC postdoctoral fellow at the University of Illinois at Urbana-Champaign. His research focuses on computer vision and machine learning. He received the 2017 Falconer Emerging Researcher Rh Award in applied science at the University of Manitoba. He currently holds the inaugural Faculty of Science research chair in fundamental science at UManitoba.


Talk by Unnat Jain

April 16, 2021

Talk title: AI Agents that can Collaborate and Communicate in Virtual Visual Worlds

Time: April 16, from 3:30 to 4:30PM (PST)

Abstract: The past decade in artificial intelligence, particularly computer vision, has been about hammering passively collected datasets with massive deep learning models. As the race to boost metrics on them is saturating, researchers like me are working on visual or embodied AI agents that draw inspiration from how toddlers acquire intelligence, i.e., by exploring, interacting, and navigating in their environments.
Particularly, I am excited to study how visual embodied agents can learn key skills of social intelligence – collaboration and communication. In this talk, I’ll discuss how we are building AI Agents that can collaborate and communicate in virtual visual worlds. Moreover, I’ll discuss how simplistic gridworlds and visual worlds can be connected with a ‘GridToPix’ methodology. The relevant papers can be found on my webpage.

Bio: Unnat Jain is a Ph.D. student in Computer Science at UIUC working with Alex Schwing and Svetlana Lazebnik. His research is focused on developing collaborative and communicative visual agents. He has worked as a research intern at DeepMind, Facebook AI Research, and Allen Institute for AI. He has won many awards including the Director’s Gold Medal (IIT Kanpur), Cadence Gold Medal for best engineering thesis (IIT Kanpur), David J. Kuck Outstanding MS Thesis Award (UIUC), Siebel Scholars, and was a finalist of Qualcomm Innovation Fellowship 2019.


Talk by Kwang Moo Yi

April 9, 2021

Talk title: Towards Machines that Understand Geometry

Time: April 9, from 3:30 to 4:30PM (PST)

Abstract: Understanding how the world looks and interacting with the environment is a core ability of an intelligent being. Naturally, it has been a long-lasting research topic in Computer Vision. The capacity of machines to figure out surrounding geometry has increased dramatically over the last decade, so much so that self-driving cars and autonomous drones are not a distant future. However, the “last mile” has proven more difficult than anticipated, delaying the arrival of these machines. Machine learning, as in many other applications, has very recently started to help in this regard, again creating a leap from what it could do just a couple of years back.
In this talk, I will introduce our journey towards machines that understand geometry. I will show that by combining our knowledge about the physical world with machine learning, we can achieve much more than a black-box solution. Specifically, I will show how we use non-differentiable components within deep networks and still train as a whole; how we constrain the network to follow physics via deep network architectures and formulations; how we can tailor architectures for solving image correspondence problems; and how we simplify the role of machine learning by turning the problem into hypothesis testing.

Bio: Kwang Moo Yi is an assistant professor in the Department of Computer Science at the University of British Columbia (UBC), and a member of the Computer Vision Lab, CAIDA, and ICICS at UBC. Before that, he was an assistant professor at the University of Victoria, where he is currently an adjunct professor. Prior to becoming a professor, he worked as a post-doctoral researcher at the Computer Vision Lab at École Polytechnique Fédérale de Lausanne (EPFL, Switzerland), working with Prof. Pascal Fua and Prof. Vincent Lepetit. He received his Ph.D. from Seoul National University under the supervision of Prof. Jin Young Choi, and his B.Sc. from the same university. He serves as an area chair for top Computer Vision conferences (CVPR, ICCV, and ECCV), as well as AAAI, and is part of the organizing committee for CVPR 2023.

