19 June 2023

Snap Showcases Research Advancement at CVPR 2023

Our leading research across generative AI, computer vision and augmented reality shapes Snap's products and reaches our global community

This week, Snap is heading to the Conference on Computer Vision and Pattern Recognition (CVPR) to spotlight our leading research and product innovation across generative AI, computer vision and augmented reality.

At Snap, our approach to research is to transform bold ideas into breakthrough innovations, and then turn state-of-the-art technologies into products that our community can use to express their creativity.

Our R&D advancements shape Snap's features across Snapchat, our AR development tool Lens Studio, Spectacles, and even our new AR Enterprise Services and AR Mirrors.

These innovative products have led Snap to become one of the largest AR platforms in the world: More than 750 million people use Snapchat every month, more than 300,000 AR creators and developers have built AR experiences in Lens Studio, and businesses use our Sponsored AR ads and AR Enterprise Services to drive better business results.


Say Hi to Snap at CVPR


Snap team members will present twelve papers, one tutorial and two demos at CVPR this year, including one highlighted by the conference. We're excited to achieve a 70% paper acceptance rate this year, which is a testament to the advances made by our team.


Here are five top Snap sessions at CVPR you won't want to miss:

Tuesday, June 20th

DisCoScene: Spatially Disentangled Generative Radiance Fields for Controllable 3D-aware Scene Synthesis

Yinghao Xu, Menglei Chai, Zifan Shi, Sida Peng, Ivan Skorokhodov, Aliaksandr Siarohin, Ceyuan Yang, Yujun Shen, Hsin-Ying Lee, Bolei Zhou, Sergey Tulyakov

4:30 - 6:30pm | #26

This work presents DisCoScene, a 3D-aware generative model for high-quality and controllable scene synthesis.


Unsupervised Volumetric Animation

Aliaksandr Siarohin, Willi Menapace, Ivan Skorokhodov, Kyle Olszewski, Jian Ren, Hsin-Ying Lee, Menglei Chai, Sergey Tulyakov

4:30 - 6:30pm | #50

This paper proposes a novel approach for unsupervised 3D animation of non-rigid deformable objects. Our method learns the 3D structure and dynamics of objects solely from single RGB videos and can decompose them into semantically meaningful parts that can be tracked and animated.


3DAvatarGAN: Bridging Domains for Personalized Editable Avatars

Rameen Abdal, Hsin-Ying Lee, Peihao Zhu, Menglei Chai, Aliaksandr Siarohin, Peter Wonka, Sergey Tulyakov

4:30 - 6:30pm | #40

This contribution enables the generation, editing and animation of personalized 3D avatars trained on artistic datasets.


Affection: Learning Affective Explanations for Real-World Visual Data

Panos Achlioptas, Maks Ovsjanikov, Leonidas Guibas, Sergey Tulyakov

4:30 - 6:30pm | #240

In this work, we explore the emotional reactions that real-world images tend to induce by using natural language as the medium to express the rationale behind an affective response to a given visual stimulus.


Wednesday, June 21st

Real-Time Neural Light Field on Mobile Devices

Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun Fu, Denys Makaviichuk, Sergey Tulyakov, Jian Ren

10:30 AM - 12:30 PM | #10

In this work, we propose an efficient network that runs in real-time on mobile devices for neural rendering.


Stop by booth #923 to meet our team, try out Lens Studio, Spectacles and our AR Mirror, and learn more about career opportunities at Snap.

Join the Snap Team

We're hiring talented researchers, engineers and interns who specialize in machine learning, computer vision and natural language processing. Sign up if you'd like to be contacted about current and future computer vision & machine learning roles at Snap or check out all of our current full-time openings at careers.snap.com.

We can't wait to meet you at CVPR!

A full schedule of Snap sessions at CVPR.
