
The idea of people experiencing their favorite mobile apps as immersive 3D environments took a step closer to reality with a new Google-funded research initiative at Georgia Tech.
A new approach proposed by Georgia Tech researcher Yalong Yang uses generative artificial intelligence (GenAI) to convert almost any mobile or web-based app into a 3D environment.
That includes application software from Microsoft and Adobe as well as social media (TikTok), entertainment (Spotify), banking (PayPal), and food service (Uber Eats) apps, and everything in between.
Yang aims to make the 3D environments compatible with augmented and virtual reality (AR/VR) headsets and smart glasses. He believes his research could be a breakthrough in spatial computing and change how humans interact with their favorite apps and computer systems in general.
“We’ll be able to turn around and see things we want, and we can grab them and put them together,” said Yang, an assistant professor in the School of Interactive Computing. “We’ll no longer use a mouse to scroll or the keyboard to type, but we can do more things like physical navigation.”
Yang’s proposal recently earned him recognition as a 2025 Google Research Scholar. Along with converting popular social apps, his platform will be able to instantly render Photoshop, MS Office, and other workplace applications in 3D for AR/VR devices.
“We have so many applications installed in our machines to complete all the various types of work we do,” he said. “We use Photoshop for photo editing, Premiere Pro for video editing, Word for writing documents. We want to create an AR/VR ecosystem that has all these things available in one interface with all apps working cohesively to support multitasking.”
Filling the Gap With AI
Just as Google’s Veo and OpenAI’s Sora use generative AI to create video clips, Yang believes the technology can be used to create interactive, immersive environments for any Android or Apple app.
“A critical gap in AR/VR is that we do not have all those existing applications, and redesigning all those apps will take forever,” he said. “It’s urgent that we have a complete ecosystem in VR to enable us to do the work we need to do. Instead of recreating everything from scratch, we need a way to convert these applications into immersive formats.”
[Photo: School of Interactive Computing Assistant Professor Yalong Yang, a 2025 Google Research Scholar, is developing a generative AI tool that converts mobile apps into 3D immersive environments for virtual headsets. Photo by Terence Rushin/College of Computing.]
The Google Play Store boasts 3.5 million apps for Android devices, while the Apple App Store offers 1.8 million apps for iOS users.
Meanwhile, there are fewer than 10,000 apps available on the latest Meta Quest 3 headset, leaving a gap of millions of apps that will need 3D conversion.
“We envision a one-click app, and the output will be a Meta APK (Android Package Kit) file that you can install on your Meta Quest 3,” he said.
Yang said major tech companies like Apple have the resources to redesign their apps in 3D formats. However, the small- to mid-sized companies behind many apps either do not have that ability or would take years to complete the redesign.
That’s where generative AI can help. Yang plans to use it to convert source code from web-based and mobile apps into WebXR.
WebXR is a set of application programming interfaces (APIs) that enables developers to create AR/VR experiences within web browsers.
“We start with web-based content,” he said. “A lot of things are already based on the web, so we want to convert that user interface into WebXR.”
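For readers unfamiliar with the format, the sketch below shows what minimal WebXR code looks like. It is a generic illustration of the standard WebXR Device API, not code from Yang’s project: it checks for headset support and starts an immersive session, leaving all rendering out. (In practice, browsers require the session request to come from a user gesture such as a button click, and the TypeScript typings assume the @types/webxr package.)

```typescript
// Minimal WebXR session setup (generic illustration, not the project's code).
// Assumes a WebXR-capable browser and the @types/webxr typings.

async function enterImmersiveVR(): Promise<void> {
  // navigator.xr is only present in WebXR-capable browsers.
  if (!navigator.xr) {
    console.log("WebXR is not available in this browser.");
    return;
  }

  // Ask whether the device can run a fully immersive VR session.
  const supported = await navigator.xr.isSessionSupported("immersive-vr");
  if (!supported) {
    console.log("Immersive VR is not supported on this device.");
    return;
  }

  // Start the session. A renderer (WebGL/WebGPU) would then draw the
  // converted app's interface into the headset's view each frame.
  const session = await navigator.xr.requestSession("immersive-vr");
  session.addEventListener("end", () => console.log("Session ended."));
}
```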
Building New Worlds
The process for converting mobile apps would be similar.
“Android uses an XML description file to define its user interface (UI) elements. It’s very much like HTML on a web page,” Yang said. “We believe we can use that as our input and map the elements to their desired location in a 3D environment. AI is great at translating one language to another — JavaScript to C#, for example — so that can help us in this process.”
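To make that mapping step concrete, here is a hypothetical TypeScript sketch. It assumes the UI elements have already been parsed out of an Android XML layout into plain objects and simply assigns each one a position on a panel floating in front of the user. The types, spacing values, and placement rule are illustrative assumptions, not the project’s actual pipeline.

```typescript
// Hypothetical sketch of mapping parsed Android UI elements into 3D space.
// Everything here (types, distances, layout rule) is an illustrative assumption.

interface UiElement {
  id: string;   // e.g., the android:id of a Button or TextView
  kind: string; // widget type taken from the XML tag name
  order: number; // position in the original 2D layout
}

interface PlacedElement extends UiElement {
  position: [number, number, number]; // x, y, z in meters
}

// Place each element on a flat panel floating 1.5 m in front of the user,
// stacked top to bottom the way a vertical LinearLayout would render.
function mapToPanel(elements: UiElement[]): PlacedElement[] {
  const rowHeight = 0.12; // assumed spacing between rows, in meters
  return elements.map((el) => ({
    ...el,
    position: [0, 1.6 - el.order * rowHeight, -1.5],
  }));
}

// Example: two widgets parsed from a hypothetical media-player layout.
const placed = mapToPanel([
  { id: "title", kind: "TextView", order: 0 },
  { id: "play", kind: "Button", order: 1 },
]);
console.log(placed);
```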
If generative AI can create the environments, the next step is to create a seamless user experience.
“In a normal desktop or mobile application, we can only see one thing at a time, and it’s the same for a lot of VR headsets, with one application occupying everything,” he said. “To live in a multitasking environment, we can’t just focus on one thing because we need to keep switching tasks. So how do we break all the elements down, let them float around, and create a spatial view of them surrounding the user?”
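One simple way to picture that spatial view is the hypothetical TypeScript sketch below, which arranges app panels in a ring around the user at eye height so several tasks stay visible at once. The app names, radius, and eye-height value are assumptions made for illustration, not details from Yang’s design.

```typescript
// Illustrative sketch (not from the article): app panels arranged in a
// ring around the user so multiple tasks stay visible at once.

interface Panel {
  app: string;
  position: [number, number, number]; // x, y, z in meters
  yawRadians: number; // rotation so the panel faces the user
}

function arrangeAroundUser(apps: string[], radius = 2): Panel[] {
  return apps.map((app, i) => {
    const angle = (2 * Math.PI * i) / apps.length;
    return {
      app,
      // Panels sit at eye height (~1.6 m) on a circle around the user.
      position: [radius * Math.sin(angle), 1.6, -radius * Math.cos(angle)],
      yawRadians: -angle, // turn each panel back toward the center
    };
  });
}

// Example: three apps spaced evenly around the user.
console.log(arrangeAroundUser(["Photoshop", "Word", "Spotify"]));
```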
Along with Assistant Professor Cindy Xiong, Yang is one of two researchers in the School of IC to be named a 2025 Google Research Scholar.
Four researchers from the College of Computing have received the award. The other two are Ryan Shandler from the School of Cybersecurity and Privacy and Victor Fung from the School of Computational Science and Engineering.