Snap Research Introduces a New Text-to-Image Diffusion Model for Generative AI 

In a new paper, Snap Research presents the fastest published on-device text-to-image model, generating images in under two seconds.
At Snap, we’re inspired by the new features and products that enhance creativity and bring imaginations to life, all enabled by generative AI technology. While there’s huge interest in these experiences, their complex technical architecture demands tremendous time, resources, and processing power, particularly on mobile.
That is why today, we’re excited to share that Snap Research has developed a new model called SnapFusion that cuts the runtime from text input to generated image on mobile to under two seconds, the fastest time published to date by the academic community.
Snap Research achieved this breakthrough by optimizing the network architecture and the denoising process, making them highly efficient while maintaining image quality. As a result, the model can generate crisp, clear images from text prompts in mere seconds on mobile, rather than the minutes or hours required by prior approaches.
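SnapFusion itself is not publicly released, but the general principle behind the speedup can be sketched with the open-source diffusers library: a standard text-to-image diffusion pipeline gets dramatically faster when the number of denoising steps is reduced (the paper distills sampling down to eight steps). The model name and timings below are illustrative assumptions, not Snap’s actual configuration:

# Illustrative sketch only: SnapFusion weights are not publicly available.
# This demonstrates how fewer denoising steps shorten generation time in a
# standard diffusion pipeline; the stand-in model here is an assumption.
import time

import torch
from diffusers import StableDiffusionPipeline

# Hypothetical stand-in model; SnapFusion uses its own efficient UNet.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a corgi surfing a wave at sunset, digital art"

# 50 steps is a common default; 8 echoes the few-step distilled setting.
for steps in (50, 8):
    start = time.perf_counter()
    image = pipe(prompt, num_inference_steps=steps).images[0]
    print(f"{steps} denoising steps: {time.perf_counter() - start:.1f}s")

image.save("corgi.png")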
While it is still early days for this model, this work has the potential to supercharge high-quality generative AI experiences on mobile in the future. To read more about this breakthrough, please check out our detailed paper here.