June 21, 2022

A Spotlight on Snap Research at CVPR 2022

Snap’s Research team is kicking off the week in New Orleans at the 2022 Conference on Computer Vision and Pattern Recognition (CVPR). This year, alongside the world’s leading researchers, our team will share seven new academic papers that show breakthroughs across image, video, and object synthesis and manipulation methods.

We’ve worked closely with interns and external academic institutions on this work to make significant gains in video synthesis technology. These developments can ultimately inform what we bring to our community of Snapchatters around the world. 

The work presented in our papers builds on two main developments. First, our team has built implicit video representations that achieve state-of-the-art video synthesis on a variety of tasks while maintaining modest computational requirements. Second, we introduce two new problems in the domain: multimodal video synthesis and playable environments.
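To give a sense of what an implicit video representation is, here is a minimal illustrative sketch (a hypothetical toy, not the architecture from our papers): a small MLP maps a continuous (x, y, t) coordinate to an RGB value, so a frame at any resolution and any timestamp can be rendered simply by querying the network on a coordinate grid.

```python
# Illustrative sketch only (hypothetical, not the model from our papers):
# an MLP maps a continuous (x, y, t) coordinate to an RGB value.
import torch
import torch.nn as nn

class ImplicitVideo(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) rows of (x, y, t), each normalized to [0, 1]
        return self.mlp(coords)

# Render a 64x64 frame at an arbitrary, even fractional, timestamp t = 0.37.
ys, xs = torch.meshgrid(
    torch.linspace(0, 1, 64), torch.linspace(0, 1, 64), indexing="ij"
)
t = torch.full_like(xs, 0.37)
coords = torch.stack([xs, ys, t], dim=-1).reshape(-1, 3)
frame = ImplicitVideo()(coords).reshape(64, 64, 3)  # untrained, so output is noise
```

Because time is a continuous input rather than a discrete frame index, a representation of this kind is not tied to a fixed frame rate, which is part of what keeps the computational requirements modest.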

For example, the CLIP-NeRF paper was a collaborative research effort to study the manipulation of neural radiance fields. Neural radiance fields make it possible to render objects using neural networks, without needing sophisticated graphics pipelines. Findings from this work can help inform improvements to the ways digital assets are created for use in augmented reality experiences. The PartGlot paper, meanwhile, explores how machines can better understand the shapes and objects around us using language models.
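For readers unfamiliar with neural radiance fields, the sketch below is a minimal illustrative toy (not the CLIP-NeRF model itself): an MLP maps a 3D point and a viewing direction to a color and a volume density, and a volume renderer composites those samples along camera rays to form an image.

```python
# Illustrative toy of a neural radiance field (not the CLIP-NeRF model):
# an MLP maps a 3D point plus a viewing direction to a color and a volume
# density; a renderer would composite these samples along camera rays.
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    def __init__(self, hidden: int = 128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # (r, g, b, sigma)
        )

    def forward(self, points: torch.Tensor, view_dirs: torch.Tensor):
        out = self.mlp(torch.cat([points, view_dirs], dim=-1))
        rgb = torch.sigmoid(out[..., :3])   # color in [0, 1]
        sigma = torch.relu(out[..., 3:])    # non-negative volume density
        return rgb, sigma

# Query 128 sample points along one camera ray; a volume renderer would
# alpha-composite the (rgb, sigma) pairs to produce the final pixel color.
points = torch.rand(128, 3)
view_dirs = torch.randn(128, 3)
view_dirs = view_dirs / view_dirs.norm(dim=-1, keepdim=True)
rgb, sigma = TinyRadianceField()(points, view_dirs)
```

CLIP-NeRF's contribution is in editing such a learned field with text or image prompts; the sketch above only shows the underlying representation being queried.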

We’re excited about the potential of this work to unlock the creativity of our community and creators across our products and platforms in the future.

GOING TO CVPR?  

Our team will be on site, so come say hello! If you’d like to learn more about our papers, team, and products, stop by booth #1322 during the Expo (June 21 - June 23) or email conferences@snap.com.

2022 CVPR PAPERS

Written by and in collaboration with Snap Research

Playable Environments: Video Manipulation in Space and Time

Willi Menapace, Stéphane Lathuilière, Aliaksandr Siarohin, Christian Theobalt, Sergey Tulyakov, Vladislav Golyanik, Elisa Ricci

Poster Session: Tuesday, June 21, 2022 | 2:30 PM – 5:00 PM

Paper ID: 2345 | Poster ID: 99b

Show Me What and Tell Me How: Video Synthesis via Multimodal Conditioning

Ligong Han, Jian Ren, Hsin-Ying Lee, Francesco Barbieri, Kyle Olszewski, Shervin Minaee, Dimitris Metaxas, Sergey Tulyakov

Poster Session: Tuesday, June 21, 2022 | 2:30 PM – 5:00 PM

Paper ID: 3594 | Poster ID: 102b

CLIP-NeRF: Text-and-Image Driven Manipulation of Neural Radiance Fields

Can Wang, Menglei Chai, Mingming He, Dongdong Chen, Jing Liao

Poster Session: Tuesday, June 21, 2022 | 2:30 PM – 5:00 PM

Paper ID: 6311 | Poster ID: 123b

StyleGAN-V: A Continuous Video Generator with the Price, Image Quality and Perks of StyleGAN2 

Ivan Skorokhodov, Sergey Tulyakov, Mohamed Elhoseiny

Poster Session: Tuesday, June 21, 2022 | 2:30 PM – 5:00 PM

Paper ID: 5802 | Poster ID: 103b

Diverse Image Outpainting via GAN Inversion

Yen-Chi Cheng, Chieh Hubert Lin, Hsin-Ying Lee, Jian Ren, Sergey Tulyakov, Ming-Hsuan Yang

Poster Session: Thursday, June 23, 2022 | 10:00 AM – 12:30 PM

Paper ID: 5449 | Poster ID: 79a

PartGlot: Learning Shape Part Segmentation from Language Reference Games 

Ian Huang, Juil Koo, Panos Achlioptas, Leonidas Guibas, Minhyuk Sung

Poster Session: Friday, June 24, 2022 | 8:30 AM – 10:18 AM

Paper ID: 3830 | Poster ID: 49a

Are Multimodal Transformers Robust to Missing Modality?

Mengmeng Ma, Jian Ren, Long Zhao, Davide Testuggine, Xi Peng

Poster Session: Friday, June 24, 2022 | 10:00 AM – 12:30 PM

Paper ID: 7761 | Poster ID: 212a
