Snap Hints At Future AR Glasses Powered By Generative AI
Snap CEO Evan Spiegel said that, initially, generative AI could be used to do things like improve the resolution and clarity of a Snap after the user captures it, or even for “more extreme transformations,” such as editing images or creating Snaps based on text input. (We should note that generative AI, at least as the term is being thrown around today, is not necessarily required to improve photo resolution.)

Spiegel didn’t pin any time frames to these developments or announce specific products Snap had in the works, but he said the company was thinking about how to integrate AI tools into its existing Lens Studio technology for AR developers. “We saw a lot of success integrating Snap ML tools into Lens Studio, and it’s really enabled creators to build some incredible things. We now have 300,000 creators who built more than 3 million lenses in Lens Studio,” Spiegel told investors. “So, the democratization of these tools, I think, will also be very powerful,” he added, referring to future integrations of AI technology.
Perhaps most interesting was the brief insight Spiegel offered into how Snap foresees the potential for AI in AR glasses. Though Snap’s Spectacles have not broken any sales records, the company continues to develop the product. The most recent version, the Spectacles 3, expands beyond recording standard photos and video with new tools like 3D filters and AR graphics. Spiegel suggested that AI could have an impact on this product as well, thanks to its ability to improve the process of building for AR. “We can use generative AI to help build more of these 3D models very quickly, which can really unlock the full potential of AR and help people make their imagination real in the world,” Spiegel added.