This week, Snap is heading to the Computer Vision and Pattern Recognition Conference (CVPR) to spotlight our leading research and product innovation across generative AI, computer vision, and augmented reality.
At Snap, our approach to research is to transform bold ideas into breakthrough innovations, and then turn state-of-the-art technologies into products that our community can use to express their creativity.
Our R&D advancements shape Snap’s features across Snapchat, our AR development tool Lens Studio, Spectacles, and even our new AR Enterprise Services and AR Mirrors.
These innovative products have helped Snap become one of the largest AR platforms in the world: more than 750 million people use Snapchat every month, more than 300,000 AR creators and developers have built AR experiences in Lens Studio, and businesses use our Sponsored AR ads and AR Enterprise Services to drive better results.
Say Hi to Snap at CVPR
Snap team members will present twelve papers, one tutorial, and two demos at CVPR this year, including one paper highlighted by the conference. We’re proud to have achieved a 70% paper acceptance rate this year, a testament to the advances made by our team.
Here are five top Snap sessions at CVPR that you won’t want to miss:
Tuesday, June 20th
DisCoScene: Spatially Disentangled Generative Radiance Fields for Controllable 3D-aware Scene Synthesis
Yinghao Xu, Menglei Chai, Zifan Shi, Sida Peng, Ivan Skorokhodov, Aliaksandr Siarohin, Ceyuan Yang, Yujun Shen, Hsin-Ying Lee, Bolei Zhou, Sergey Tulyakov
4:30 - 6:30 PM | #26
This work presents DisCoScene, a 3D-aware generative model for high-quality and controllable scene synthesis.
Unsupervised Volumetric Animation
Aliaksandr Siarohin, Willi Menapace, Ivan Skorokhodov, Kyle Olszewski, Jian Ren, Hsin-Ying Lee, Menglei Chai, Sergey Tulyakov
4:30 - 6:30 PM | #50
This paper proposes a novel approach for unsupervised 3D animation of non-rigid deformable objects. Our method learns the 3D structure and dynamics of objects solely from single-view RGB videos, and can decompose them into semantically meaningful parts that can be tracked and animated.
3DAvatarGAN: Bridging Domains for Personalized Editable Avatars
Rameen Abdal, Hsin-Ying Lee, Peihao Zhu, Menglei Chai, Aliaksandr Siarohin, Peter Wonka, Sergey Tulyakov
4:30 - 6:30 PM | #40
This contribution allows the generation, editing, and animation of personalized artistic 3D avatars on artistic datasets.
Affection: Learning Affective Explanations for Real-World Visual Data
Panos Achlioptas, Maks Ovsjanikov, Leonidas Guibas, Sergey Tulyakov
4:30 - 6:30 PM | #240
In this work, we explore the emotional reactions that real-world images tend to induce by using natural language as the medium to express the rationale behind an affective response to a given visual stimulus.
Wednesday, June 21st
Real-Time Neural Light Field on Mobile Devices
Junli Cao, Huan Wang, Pavlo Chemerys, Vladislav Shakhrai, Ju Hu, Yun Fu, Denys Makoviichuk, Sergey Tulyakov, Jian Ren
10:30 AM - 12:30 PM | #10
In this work, we propose an efficient network for neural rendering that runs in real time on mobile devices.
Stop by booth #923 to meet our team, try out Lens Studio, Spectacles, and our AR Mirror, and learn more about career opportunities at Snap.
Join the Snap Team
We’re hiring talented researchers, engineers, and interns who specialize in machine learning, computer vision, and natural language processing. Sign up if you’d like to be contacted about current and future computer vision & machine learning roles at Snap, or check out all of our current full-time openings at careers.snap.com.
We can’t wait to meet you at CVPR!