Rethinking Digital Twins

The idea of digital twins has long been conceptually important to immersive technology and related concepts like the metaverse – not to mention their practical use, particularly in enterprise. However, the term doesn’t necessarily mean what it sounds like, and so far the actual technology has had only limited usefulness for spatial computing.

Advances in immersive technology itself are opening up more nuanced and exciting applications for digital twins as fully-featured virtual artifacts, environments, and interfaces to the point that even experts who have been working with digital twins for decades are starting to rethink the concept.

Understanding Digital Twins

What exactly constitutes a digital twin still differs somewhat from company to company. ARPost defines a digital twin as “a virtual version of something that also exists as a physical object.” This basic definition includes iterations of the technology that are now arguably antiquated and wouldn’t be of much interest to the immersive tech crowd.

Strictly speaking, a digital twin does not have to be interactive, dynamic, or even visually representative of the physical twin. In academia and enterprise where this concept has been practically employed for decades, a digital twin might be a spreadsheet or a database.

We often think of the metaverse as being like The Matrix – but specifically the way Neo experiences the Matrix, from within. In that same analogy, digital twins are the Matrix as Tank and Dozer experience it – endless streams of numbers that look like nothing but numbers to the uninitiated, yet paint detailed pictures for those in the know.

While that version certainly continues to have its practical applications, it’s not exactly what most readers will have in mind when they encounter the term.

The Shifting View of Digital Twins

“The traditional view of a digital twin is a row in a database that’s updated by a device,” Nstream founder and CEO Chris Sachs told ARPost. “I don’t think that view is particularly interesting or particularly useful.”

Nstream is “a vertically integrated streaming data application platform.” Their work includes digital twins in the conventional sense, but also more nuanced uses that build on the conventional approach and stretch it into new fields. That’s why companies aren’t just comparing definitions – they’re also rethinking how they use these terms internally.

“How Unity talks about digital twins – real-time 3D in industry – I think we need to revamp what that means as we go along,” Unity VP of Digital Twins Rory Armes told ARPost. “We’ve had digital twins for a while […] our evolution or our kind of learning is the visualization of that data.”

This evolution naturally has a lot to do with technological advances, but Armes hypothesizes that it’s also the result of a generational shift. People who have lived their whole lives as regular computer users and gamers have a different approach to technology and its applications.

“There’s a much younger group coming into the industry […] the way they think and the way they operate is very different,” said Armes. “Their ability to digest data is way beyond anything I could do when I was 25.”

Data doesn’t always sound interesting and it doesn’t always look exciting. That is, until you remember that the metaverse isn’t just a collection of virtual worlds – it also means augmenting the physical world. That means lots of data – and doing new things with it.

Digital Twins as a User Interface

“If you have a virtual representation of a thing, you can run software on that representation as though it was running on the thing itself. That’s easier, it’s more usable, it’s more agile,” said Sachs. “You can sort of program the world by programming the digital twin.”

This approach allows limited hardware to provide minimal input to the digital twin, which in turn sends minimal output back to the devices – creating an automated, more affordable, more responsive Internet of Things.

“You create a kind of virtual world […] whatever they decide in the virtual world, they send it back to the real world,” said Sachs. “You can create a smarter world […] but you can’t do it one device at a time. You have to get them to work together.”
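
To make the loop Sachs describes concrete, here is a minimal, hypothetical sketch – not Nstream’s actual API, just an illustrative Python example with made-up names – of a device pushing a small state update to its twin, the decision logic running on the twin, and only a compact command flowing back to the hardware:

    from dataclasses import dataclass, field

    @dataclass
    class ThermostatTwin:
        """Hypothetical digital twin of one thermostat: it holds state and decides."""
        device_id: str
        target_c: float = 21.0
        last_reading_c: float | None = None
        commands: list[str] = field(default_factory=list)

        def ingest(self, reading_c: float) -> None:
            # Minimal input from limited hardware: a single temperature sample.
            self.last_reading_c = reading_c
            self._decide()

        def _decide(self) -> None:
            # The "program" runs on the twin, not on the physical device itself.
            if self.last_reading_c < self.target_c - 0.5:
                self.commands.append("HEAT_ON")   # minimal output back to the device
            elif self.last_reading_c > self.target_c + 0.5:
                self.commands.append("HEAT_OFF")

    # The physical device only reports a number and receives a short command back.
    twin = ThermostatTwin(device_id="thermo-42")
    twin.ingest(19.8)
    print(twin.commands)  # ['HEAT_ON']

In a real deployment, the same pattern would run across huge numbers of twins exchanging state with one another – the “get them to work together” problem Sachs is pointing at.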

This virtual world can be managed from the backend in VR. It can also be navigated as a user interface in AR.

“In AR, you can kind of intuit what’s happening in the world. That’s such a boost to understanding this complex technical world that we’ve built,” said Sachs. “Google and Niantic haven’t solved it, they’ve solved the photon end of it, the rendering of it, but they haven’t solved the interactivity of it […] the problem is the fabric of the web. It doesn’t work.”

To Sachs, this process of creating connected digital twins of essentially every piece of infrastructure and utility on earth isn’t just the next thing that we do with the internet – it’s how the next generation of the internet comes about.

“The world wide web was designed as a big data repository. The problem is that not everything is a document,” said Sachs. “We’re trying to upgrade the web so everything, instead of being a web page is a web agent, […] instead of a document, everything is a process.”
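
As a rough illustration of the “web agent” idea – again with hypothetical names only, not the actual web or Nstream APIs – the difference is roughly a static document you fetch once versus a long-lived process that keeps its own state and reacts to every new event:

    import asyncio

    # A "document": fetched once, identical until someone edits it by hand.
    STATIC_PAGE = "<html><body>Pump 7: status unknown</body></html>"

    class PumpAgent:
        """Hypothetical web agent: a long-lived process that keeps live state."""
        def __init__(self, name: str) -> None:
            self.name = name
            self.pressure_kpa: float | None = None

        async def run(self, events: asyncio.Queue) -> None:
            # React to each telemetry event as it arrives; None is the shutdown sentinel.
            while (reading := await events.get()) is not None:
                self.pressure_kpa = reading
                print(f"{self.name}: pressure now {self.pressure_kpa} kPa")

    async def main() -> None:
        events: asyncio.Queue = asyncio.Queue()
        for reading in (101.3, 99.8, 250.0, None):  # a short telemetry stream plus sentinel
            events.put_nowait(reading)
        await PumpAgent("pump-7").run(events)       # the agent is a process, not a page

    asyncio.run(main())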

Rebuilding the World Wide Web

While digital twins can be part of reinventing the internet, many of the tools used to build them weren’t made for that particular task. That doesn’t mean they can’t do the job – it just means that providers and the people using those services have to get creative about it.

“The Unity core was never developed for these VR training and geospatial data uses. […] Old-school 3D modeling like Maya was never designed for [spatial data],” said Armes. “That’s where the game engine starts.”

Unity – which is a game engine at heart – isn’t shying away from digital twins. The company works with groups, particularly in industry, to use Unity’s visual resources in these more complex use cases – often behind the scenes on internal projects.

“There are tons of companies that have bought Unity and are using it to visualize their data in whatever form,” said Armes. “People don’t necessarily use Unity to bring a product to the community, they’re using it as an asset process and that’s what Unity does really well.”

While Unity “was never developed for” those use cases, the toolkit can do it and do it well.

“We have a large geospatial model, we slap it into an engine, we’re running that engine,” said Armes. “We’re now bringing multiple layers to a world and being able to render that out.”

“Bringing the Worlds Together”

A digital twin of the real world powered by real-time data – a combination of the worlds described by Armes and Sachs – has huge potential as a way of both understanding and managing the world.

“We’re close to bringing the worlds together, in a sense,” said Armes. “Suddenly now, we’re starting to bring the pieces together […] we’re getting to that space.”

The Orlando Economic Partnership (OEP) is working on just such a platform, with a prototype already on offer. I was fortunate enough to see a presentation of the Orlando digital twin at the Augmented World Expo. The plan is for the twin to one day show real-time information on the city in a room-scale experience accessible to planners, responders, and the government.

“It’s going to become a platform for the city to build on,” said Justin Braun, OEP Director of Marketing and Communications.

Moving Toward Tomorrow

Digital twins have a lot of potential. But many are stuck between thinking about how digital twins have always worked and thinking about the ways that we would like them to work. The current reality is somewhere in the middle – but, like everything else in the world of emerging and converging technology, it’s moving in an interesting direction.

Niantic and 8th Wall Explore New Monetization Strategies

Historically, Niantic has made much of its money through in-app purchases on its free-to-play games like Pokémon Go. However, recent announcements from the company suggest that it’s exploring new monetization strategies, including through web-based experiences powered by 8th Wall.

Niantic Pioneers AR Ads In-App

The Cannes Lions Festival recently took place in the South of France (and in Virbela, if you got a golden ticket from PwC). Niantic took the opportunity to announce a new ad format coming to its AR games.

“Rewarded AR ads is a revolutionary new ad product from Niantic, which uses the smartphone camera to immerse players within branded content in the real world around them,” said a release shared with ARPost. “Players engage with interactive experiences within these ad units while they move around in the real world to unlock rewards within the game.”

This might sound like it would disrupt the game or pose an undue bother to players. However, that isn’t necessarily the case. If done thoughtfully, this ad format could be a way to introduce players to branded immersive content that they might otherwise be interested in anyway.

“Ad” might leave a bad taste – like a commercial that interrupts the video you’re watching. But tastefully executed branded immersive experiences often feel less like ads and more like opportunities for consumers to participate in brands that they care about. Companies like Coca-Cola create branded immersive experiences that are actively sought after by fans.

“AR offers an exciting new way to engage people powered by fresh innovation in spatial computing,” Niantic VP of Sales and Global Operations, Erin Schaefer, said in the release. “Audiences can engage with Rewarded AR ads to have immersive and enjoyable brand experiences, discover new products, or engage with interactive features.”

What about immersion? AR is built on the user’s physical surroundings. Artistically done location-based advertising might play into the blending of real and imagined worlds, rather than interrupt it. So far, there have only been limited pilot programs, so we have yet to see for ourselves.

Get Out Your Virtual Wallet

Ads tested in AR apps directed players toward a physical point of sale from within their game – and lured them with the promise of in-game rewards. But WebAR is where most branded immersive experiences currently take place, and Niantic has a big stake in that world since purchasing 8th Wall last year.

In addition to being a larger established ad market, WebAR is less limited to actions and interactions within a given application. It’s easier to do things like conduct e-commerce through the web than through an app, and rewards for customers aren’t confined to a given application.

That’s increasingly true given the advent of Web3 – an era of the internet in which users access online experiences not through individual profiles and accounts, but through one “wallet” that maintains a digital identity across experiences. SmartMedia Technologies, a “Web3 engagement and loyalty platform,” announced such a wallet integrated into 8th Wall.

“By combining our expertise in Web3-enabled mobile wallets with Niantic’s AR technology, we aim to create innovative experiences that enhance user engagement and drive brand loyalty,” SmartMedia Technologies CEO, Tyler Moebius, said in a release shared with ARPost.

As with AR ads, branded experiences through WebAR linked with a user’s wallet have proved a promising proposition to users who might already seek out branded experiences. These experiences now have the potential to exist in other areas of the users’ online life.

“This opens up a new frontier of creativity for brands and the opportunity to redefine how they engage with their target audiences,” Niantic Director of Product Management, Tom Emrich, said in the release. “Our collaboration with SmartMedia Technologies adds a new dimension to WebAR experiences for brands by giving consumers ways to build and activate their digital collections.”

A World of Augmented Ads?

The film Ready Player One gave us an instant-classic scene in which executives try to decide exactly how much of a player’s field of view can safely be taken up by advertising. It doesn’t have to be that way, as AR ads can blend into the virtual world just as they so often blend into the physical world. Niantic isn’t a bad group to be leading the charge.

Apple Vision Pro: A Catalyst for the Growth of the XR Industry

Sponsored content

Sponsored by VR Vision

The recent introduction of Apple’s Vision Pro has ignited a fresh wave of excitement in the extended reality landscape, with industry experts and enthusiasts alike anticipating a surge in the growth and evolution of the XR industry.

This immersive technology (which Apple calls “spatial computing”), encompassing virtual reality, augmented reality, and mixed reality, is set to experience a significant boost from Apple’s entry into the field.

A New Era in Immersive Technology

The Vision Pro’s unveiling at Apple’s Worldwide Developer Conference (WWDC) generated a buzz in the XR world. It has triggered both commendations and criticisms from the global XR community, with its future potential and implications for the broader XR landscape hotly debated.

Apple’s Vision Pro is a spatial computer that seamlessly blends digital content with the physical world, marking a significant step forward in immersive technology.

According to the company, it uses a “fully three-dimensional user interface controlled by the most natural and intuitive inputs possible – a user’s eyes, hands, and voice.” This marks a departure from traditional interaction methods, offering a more immersive experience for users.

A panel of global executives from the immersive tech industry weighed in on the device, discussing its potential use cases, and how it would impact the global XR community. The consensus was that the Vision Pro represented a significant leap forward in the development of XR technology, setting the stage for exciting advancements in the field.

The Potential of the Vision Pro

The Vision Pro’s introduction has been described as one of the “watershed moments” for the VR and AR industry. With enormous potential, the device is poised to breathe new life into the XR space, as two of the world’s largest tech giants, Apple and Meta Platforms (formerly Facebook), now vie for market share.

The Vision Pro’s announcement has spurred conversations and expectations that “spatial computing” will become an integral part of everyday life, much like how other Apple devices have seamlessly integrated into our daily personal and professional lives.

Apple has a remarkable track record of introducing technology that resonates with individuals on a personal level. The company’s knack for creating products that enhance individuals’ lives, work, and well-being has been a crucial factor in their widespread adoption.

Vision Pro: Design and Features

The Vision Pro comes with a clean, sleek design, and high-quality features – a standard we’ve come to expect from Apple. The device is controlled using our hands, rather than external controllers, making it more intuitive and user-friendly.

Apple has prioritized its use cases within its existing ecosystem of apps and content. This strategic move sets Vision Pro apart from its competitors, providing a unique selling proposition.

The device’s hardware is impressive, but its real strength lies in the software experience it offers. Vision Pro introduces a new dimension to personal computing by transforming how users interact with their favorite apps, enhancing productivity and entertainment experiences.

The Impact on the XR Market

The Vision Pro’s introduction has the potential to reshape the XR market. Apple’s entry into the XR space is expected to boost confidence, incite competition, and accelerate advancements in other headsets. This would lead to more people using mixed reality headsets in their day-to-day lives, accelerating use cases for enterprises and industries.

On the other hand, the device’s high price point suggests that it will initially find more success among corporate entities and developers. Companies could use the Vision Pro to create immersive experiences at events, while developers could use it to build innovative apps and content for the device.

At VR Vision, for example, we see enormous potential in the application of virtual reality training for enterprise applications, and the Vision Pro will only enable further innovation in that sector.

For training, it is much safer and more cost-effective to operate heavy machinery in the virtual world than in the real world. This has applicability across a wide array of industries and use cases, and it will be interesting to see just how impactful it truly becomes.

The Vision Pro’s Presentation

Apple’s presentation of the Vision Pro was impressive, ticking many boxes. It showcased significant advancements in hardware and software, demonstrating how the device could offer a hands-free, intuitive experience. The demonstration also highlighted how spatial computing and the new user experience could spur creative content development.

However, some critics felt that the presentation didn’t fully demonstrate the range of VR activities that Vision Pro could achieve. There was a focus on ‘looking and clicking’ functions, which could also be performed on a smartphone. More emphasis could have been placed on the device’s potential for workplace and communication applications.

The Target Audience and Use Cases

The Vision Pro’s high price point suggests that its target audience will initially be businesses and developers. The device could revolutionize workplace training and education, enhancing engagement with learning materials, and streamlining work processes.

For developers, the Vision Pro represents an opportunity to experiment and innovate. Apple’s established App Store and developer community provide a strong launchpad for the creation of apps and content for Vision Pro. These early adopters may not create polished work initially, but their experiments and ideas will likely flourish in the coming years.

The Role of Vision Pro in the XR Market

Apple’s history of developing proprietary technology and working internally suggests that the Vision Pro will likely follow a similar path. The company’s commitment to quality control, unique design processes, and product development control has given Apple devices their distinctive look and feel.

While it’s difficult to predict the future, interoperability between headsets will likely mirror the landscape of Android and Apple smartphones or Mac and Windows computers. The Vision Pro will likely stand out in the market for its unique feel, best-in-class visuals and technology, and intuitive user experiences, maintaining the overall cohesion between various Apple devices.

Enhancing App Development With Unity

The integration of Unity’s development platform with Vision Pro enables developers to leverage the device’s capabilities and create compelling AR experiences.

Unity’s robust toolset offers a wide range of features, including real-time rendering, physics simulation, and advanced animation systems, all optimized for the Vision Pro’s hardware.

This seamless integration allows developers to focus on unleashing their creativity and designing immersive experiences that blur the line between the physical and virtual worlds.

The Vision Pro holds immense potential for a wide range of industries. From gaming and entertainment to education, healthcare, and industrial training, the device opens up avenues for innovative applications. Imagine interactive virtual tours of historical sites, immersive educational experiences, or real-time collaborative design and engineering projects. The Vision Pro’s spatial computing capabilities pave the way for a future where digital content seamlessly blends with our physical reality, transforming the way we learn, work, and entertain ourselves.

Apple’s Vision Pro: A Boost for Meta

Apple’s entry into the XR market could be a boon for Meta. Despite the criticisms and challenges Meta has faced, its headsets have consistently offered the best value in their class, with excellent hardware and a great game library, all at an attractive price.

The introduction of the Vision Pro could force Meta to step up its game, enhancing its software offerings and improving its user experience. The competition from Apple could ultimately lead to better products from Meta, benefiting users and developers alike.

Conclusion

The introduction of the Apple Vision Pro represents a significant milestone in the XR industry. Its potential impact extends beyond its impressive hardware and software features, setting the stage for exciting advancements in the field.

With Apple now a major player in the XR space, the industry is poised for a surge in growth and evolution. The Vision Pro’s introduction could lead to more investment in R&D, a flourishing supply chain, and an influx of developers eager to create innovative experiences for the device.

Undoubtedly, the Vision Pro marks the beginning of a new era in immersive technology, and its impact on the XR industry will be felt for years to come.

Written by Lorne Fade, COO at VR Vision

“Netflix for AR Content”: ARVision’s ARiddle, an AR Gaming Platform With Multiplayer Feature

(A)Riddle me this, what do you get when you create a gaming platform that facilitates the distribution of AR content and improves the multiplayer gaming experience? The start of a new era of interactive entertainment.

ARVision Games, an innovator in AR gaming, takes interactive and collaborative gaming experiences to the next level with the recent launch of its newest multiplayer gaming platform, ARiddle. Unveiled at the recent VivaTech Conference 2023, this cutting-edge platform is set to enable full AR immersion of gameplay.

ARVision’s ARiddle: The Netflix for AR Games

The new platform developed by ARVision Games is set to transform how we access and use AR content. Instead of downloading dozens of apps for individual games, users need to download only one app—ARiddle—to access a vast library of AR games.

“What we are trying to do is like Netflix for AR content. So today, there are platforms to play movies, to play videos, but there is no platform to play AR content,” said Christian Ruiz, CCO of ARVision Games, in a recent interview at VivaTech.

So, the team decided to develop the ARiddle platform, where users will be able to find dozens of AR games, both developed by the company itself and by other developers. “We are going to open the platform to other studios, for them to be able to create games and then put them on our platform,” Ruiz said. 

Another innovative aspect of the AR platform ARiddle is the multiplayer feature. As Ruiz explains, “We are really innovating with the multiplayer feature, because making multiplayer in VR, or any kind of game, is quite usual, but in AR is more complicated.”

ARiddle will also be home to AR escape games developed by ARVision Games. Captivating players with their immersive narratives and intricate puzzles, escape rooms have gained immense popularity in recent years.

The Montreal-based company takes this concept to new heights by bringing escape rooms into the realm of augmented reality. Through its ARiddle platform, players can now experience the thrill of escape rooms wherever they are. All they need is a mobile device to project the virtual environment into their physical surroundings.

In the ARiddle AR escape games, players are challenged to analyze their environment, search for clues, and solve a myriad of enigmas and riddles to complete the mission. With the exhilarating blend of AR technology and engaging gameplay, the ARiddle app guarantees an unforgettable adventure that will challenge your intellect and immerse you in a world of mystery.

Save the Cup: AR Escape Game That Lets Children Virtually Escape From the Hospital Bed

To create more meaningful AR experiences, ARVision Games has collaborated with 1 Maillot Pour La Vie, a charitable organization that strives to fulfill the dreams of children facing adversity. It coordinates with top athletes, famous personalities, and sports clubs to give children opportunities to experience something different from their daily hospital life.

Together, ARVision Games and 1 Maillot Pour La Vie have embarked on a mission to create augmented reality multiplayer escape rooms that provide unique and engaging experiences for children. This collaboration aims to transport children into a world of imagination and wonder, where the challenges and obstacles of their reality are momentarily set aside.

“This is a great project that we have with 1 Maillot Pour La Vie. This association takes care of kids who have been in the hospital for a long time,” said Ruiz. “So we had the idea to make a special escape game for them to be able to play from their bed and transport them, thanks to augmented reality, to a chocolate factory, to an Egyptian pyramid…”

The first AR game developed through this collaboration is the Save the Cup series. This immersive AR experience combines captivating storytelling, challenging puzzles, and collaborative gameplay for children of all ages.

The game comprises a series of three episodes, each with unique challenges that bring players closer to finding the lost World Cup. Through AR, children are able to momentarily escape from their sick beds, go on amazing adventures, and achieve a sense of accomplishment as they complete the game’s mission.

Continued AR Innovation for Creating Meaningful Experiences

The Save the Cup series shows us how AR can empower and inspire users. By enabling children in hospitals and confined spaces to enjoy the same exciting escape room adventures as those in physical locations, the AR application becomes much more than a game. Innovations like these take AR beyond gaming and entertainment.

ARVision Games at VivaTech Paris in June 2023

Knowing this, ARVision Games is constantly innovating and is continuously iterating its platforms to provide more AR use cases that create meaningful experiences for users. It is driven to grow ARiddle into a centralized hub for AR content. By embracing the transformative potential of AR, ARVision Games is shaping the future of immersive content. With the fusion of technology and compassion at the heart of its mission, it will undoubtedly continue to inspire and bring joy to gamers around the world.

MetaGate – International Metaverse Summit Set to Take Place in Istanbul, Türkiye in September 2023

Press release

Istanbul, Türkiye – EUMENA Events, the event consultancy and management specialist, is proud to announce the launch of MetaGate – International Metaverse Summit, a three-day business-focused hybrid summit that will take place in September 2023 in Istanbul, Türkiye.

MetaGate will bring together international experts, technologists, digital builders, entrepreneurs, investors, business leaders, and enterprises to discover the power of the metaverse, identify its real value, and discuss its utility for businesses.

The summit will feature sessions, keynotes, fireside talks, and panels exploring the latest trends and developments in the metaverse world and how companies can develop and implement a metaverse strategy to enhance their business processes, increase competitiveness, and support their marketing efforts.

MetaGate will take place both in person and virtually in a custom 3D metaverse venue where attendees can create a unique avatar and walk around to explore exhibition areas, attend sessions, and chat with speakers, sponsors, startups, and investors.

“We are thrilled to be launching MetaGate – International Metaverse Summit,” said Lara Daoud, Founder of EUMENA Events. “The metaverse is the future, and this summit will provide an opportunity for businesses to capture the real business value of the Metaverse, meet and network with global tech and business leaders, prepare their business to enter the Metaverse, explore the latest metaverse solutions and technologies, and discover the real-world adoption of web3.0 and metaverse.”

MetaGate will offer an immersive experience where every detail reflects the collective pursuit of excellence, featuring 800+ physical delegates, 2,500+ virtual attendees, 30+ world-class industry speakers, and 20+ content-rich sessions.

The summit will highlight how businesses in various industries – such as E-commerce & Retail, Healthcare, Manufacturing, Logistics, Banking and Finance, Real Estate, Governments, Automotive, Education, Telecom, Tourism, Events, Media & Entertainment, and Travel – are seeing tremendous opportunities in metaverse technologies.

MetaGate will also include a startup pitch competition designed to discover the next generation of metaverse innovators and builders. Qualified entrepreneurs will pitch their businesses live on stage in front of an experienced jury made up of investors and executives for the chance to win awards and potential partnership deals.

The summit will offer a series of networking events, including meetups, tours, and a Gala Dinner, providing engaging and networking opportunities among MetaGate Summit panelists, speakers, sponsors, startups, investors, and delegates.

“Our mission is to serve as a catalyst for positive change by fostering discussions about the metaverse’s proper use, innovation acceleration, productivity increase, and adaptability to change, all of which can greatly benefit any business looking to successfully launch its products and better serve its existing clients,” added Lara Daoud.

For more information about MetaGate – International Metaverse Summit, visit the summit’s official website at www.metagatesummit.com.

About EUMENA:

EUMENA is an event consultancy and management specialist that offers turnkey planning and comprehensive operational services. The company’s experience ranges from delivering mass events to large B2B, B2G, and B2C exhibitions and trade shows, corporate meetings, game-changing summits & conferences, brand activations, and innovative networking activities that deliver measurable return on investment to its clients.

Contact:

Email: [email protected]events.com

Phone: +90 536 338 4777

Inspirit Launches Affordable XR STEM Education Platform for Middle and High School Students

XR STEM education has taken a leap forward with the official launch of Inspirit’s Innovative Learning Hub. The digital platform provides educators with affordable access to a premium library of virtual reality and augmented reality experiences designed specifically for middle and high school students. Focusing on enhancing learning outcomes and increasing engagement, Inspirit is revolutionizing the way STEM subjects are taught worldwide.

Breaking Down Barriers With Immersive Learning

Inspirit is a research-driven EdTech startup that pioneers immersive XR experiences for STEM education. The company’s Innovative Learning Hub stands as the premier choice for immersive XR STEM education, encompassing diverse subjects such as mathematics, physics, chemistry, biology, and vocational training.

Through XR experiences, Inspirit’s platform provides students with experiential learning opportunities. By engaging in simulations and exploring 3D models, students gain a deeper understanding of complex STEM concepts.

The advantages of VR education have long been embraced by both teachers and students, who have found immense value in its experiential approach. But with Inspirit’s XR expertise and easy-to-use technology, the platform bridges the gap between theoretical concepts and real-world applications, providing students with a deeper understanding and fostering engagement.

Renowned for its commitment to rigorous research, Inspirit collaborates with Stanford University researchers to unlock the full potential of XR learning. The result is a unified platform that seamlessly integrates into schools, improving learning outcomes and providing teachers with an intuitive system to embed into their curriculum.

Experts in the field, like Jeremy Bailenson, founding director of the Stanford Virtual Human Interaction Lab and professor of education, recognize the impact of Inspirit’s approach, emphasizing the importance of teacher professional development and curriculum alignment for successful integration and long-term usage in the classroom.

“Inspirit is unique in that it is led by a VR pioneer who puts ‘education first’, with a huge amount of experience in the world of STEM,” said Bailenson, in a press release shared with ARPost.

Unparalleled Access to Immersive XR Content

The Innovative Learning Hub boasts a comprehensive library of age-appropriate XR experiences that align with educational standards. From engaging simulations to interactive lessons, students have the opportunity to explore and study complex concepts, making learning tangible and enjoyable. This cutting-edge content ensures that students receive the highest-quality educational experiences.

Cross-Platform Compatibility for Seamless Learning

Flexibility is a key advantage of Inspirit’s Innovative Learning Hub. Students can access the library of XR content from various devices, including laptops, Chromebooks, and most VR headsets designed for educational use.

This compatibility maximizes schools’ existing hardware investments while expanding learning capabilities. By eliminating the need for costly subscriptions and one-off purchases, Inspirit promotes inclusivity and accessibility, allowing all students to benefit from a comprehensive STEM curriculum.

XR STEM Education: Inspiring Students and Shaping Futures

As a firm believer in the transformative power of immersive technology, Aditya Vishwanath, co-founder and CEO of Inspirit, actively champions its potential for revolutionizing XR STEM education.

The Innovative Learning Hub serves as a platform that grants middle and high school students the opportunity to engage with exceptional XR content. “Our research-based methodology ensures all middle and high school students have an opportunity to access top-notch XR content that enhances their learning experience, prepares them for the future, and inspires them to pursue their dreams,” said Vishwanath.

Unleash Your Creativity With Subway Studio: Subway Surfers Introduces In-Game AR Feature

SYBO, the game studio responsible for the popular mobile game Subway Surfers, has pushed the boundaries of gaming with its first in-game augmented reality feature known as Subway Studio.

The landmark expansion contributes to the growing appeal of AR gaming. The Subway Surfers AR feature lets players bring the exciting Subway Surfers world into their own lives, giving them a way to use their imagination and creativity.

Subway Studio gives players the ability to connect, share stories, and make their creations using their beloved Subway Surfers characters in their real-life surroundings. “Subway Studio puts the power of creativity and virality in our players’ hands, allowing them to interact, tell stories, and create content with their favorite characters in their homes, backyards, workplaces, you name it,” said Mathias Gredal Nørvig, CEO of SYBO, in a press release shared with ARPost.

Unveiling the Augmented Reality Marvel of Subway Studio

Subway Studio represents a technical marvel, incorporating cutting-edge mobile augmented reality and camera tracking technologies to create a realistic and immersive experience. This advanced feature enables players to interact with Subway Surfers characters in real life, regardless of their device’s AR capabilities.

SYBO’s Technical Director Murari Vasudevan praised the team for their remarkable accomplishment in creating a highly advanced feature that offers endless creative and artistic opportunities. Through the use of innovative technologies, Subway Studio guarantees that players experience an authentic connection with their favorite characters as if they were right there with them in the real world.

“We’re constantly looking to give our players new ways to engage with the game, and this new technology does just that,” Murari said.

Subway Studio launches as part of the Subway Surfers Fantasy Fest update, available until July 16. Players have a selection of 40 existing Subway Surfers characters to interact with. They can create viral content that seamlessly merges with their in-game experiences, amplifying their impact within the gaming community. Additionally, SYBO plans to introduce more characters through future updates, ensuring a continuous stream of fresh content and experiences for players.

Where Buzz and Creativity Converge

Recognizing the incredible creative potential within their community, SYBO took notice of the hashtag #SubwaySurfers on TikTok, which has garnered an astonishing 34 billion views, mainly from user-generated videos.

Motivated by this, SYBO has launched a special TikTok filter that enables players to further engage with the game by interacting with one of the iconic characters, Jake, within the TikTok platform, thus expanding the reach of the Subway Surfers AR experience.

Prepare to unlock your creative potential and embark on an exceptional augmented reality journey as Subway Surfers marks its 11th anniversary this year. With this milestone, it is evident that SYBO is committed to upholding its reputation as an innovative force within the gaming industry. Subway Studio exemplifies its commitment to ingenuity and providing creative ways for players to engage with the game.

Dive into a realm where imagination and reality merge, bringing your beloved characters to life in your chosen settings. Subway Surfers has taken the gaming experience to unprecedented levels, and as SYBO continues to push boundaries, the future of mobile gaming holds immense promise.

The Team Behind Subway Surfers and Subway Studio

Based in Copenhagen, SYBO is a prominent game development company renowned for its remarkable accomplishments in bringing Subway Surfers to life. This widely celebrated and immensely popular running game, recognized as the top downloaded game in 2022, has garnered an impressive four billion downloads thus far.

Is Apple Vision Pro Ready for Mainstream Use?

The long wait for a mixed reality headset from Apple is nearly over. Earlier this month, the company unveiled its highly anticipated XR headset, the Apple Vision Pro, at the WWDC 2023 event. The device is set to hit US Apple stores in early 2024.

As Apple’s first major hardware launch in almost a decade, the Vision Pro is expected to be received with great enthusiasm. While it’s an undoubtedly powerful device packed with state-of-the-art features, the question remains: is the Apple Vision Pro truly ready for mainstream use?

To delve deeper into how this development impacts the future of XR, we asked some experts to share their insights on Apple Vision Pro.

Apple Vision Pro: Pushing the Boundaries of Mixed Reality Technology

Compared with other available AR and MR headsets, Apple Vision Pro has raised the bar in several aspects. For Dominik Angerer, CEO of headless CMS Storyblok, this launch could potentially be another “‘iPhone moment’ for Apple, pushing the boundaries of how we perceive and interact with digital content.”

Nathan Robinson, CEO of Gemba, finds the technology sleek, responsive, comfortable, and highly performant. According to him, Apple’s user-centric design philosophy is evident in the Vision Pro’s external battery pack, wide articulated headband, and visual passthrough capabilities—all ensuring comfort and convenience even for extended use.

Michael Hoffman, Mesmerise Head of Platform and CEO of IQXR, also highlights the unparalleled ergonomics of the Vision Pro. For him, the Fit Dial that enables adjustment for a precise fit, the Light Seal that creates a tight yet comfortable fit, and multiple size options will all be crucial to the success of the product.

Performance-wise, experts agree that Vision Pro is powerful. Emma Ridderstad, CEO of Warpin Reality, believes that the use of two chips, R1 and M2, will improve real-time processing, reducing the amount of lag time experienced while using the headset.

However, some experts aren’t that impressed. Eric Alexander, founder and CEO of Soundscape VR, thinks that the Vision Pro is strong for a mobile headset but still pales in comparison to PC VR. “The sprawling, highly-detailed, 3D rendered worlds we build here at Soundscape won’t be possible on the Vision Pro yet as their M2 chip has less than 10% of the rendering horsepower of an Nvidia GPU,” he told us.

For Joseph Toma, CEO of the virtual meetings and events platform Jugo, the Vision Pro’s hardware can be overkill, no matter how powerful it is. He notes that advances in spatial AI, augmented reality, and mixed reality AI make bulky hardware unnecessary. “Apple’s Vision Pro may not be the product that ushers in this new era. While the tech is great, the future is about building something that includes everyone and can deliver mixed reality experiences without the constraints of bulky hardware,” Toma said.

Is the Apple Vision Pro Truly Ready for Mainstream Use?

While the Apple Vision Pro represents a significant leap forward in mixed reality technology, experts have varying opinions on its readiness for mainstream adoption.

Some argue that its current price point and the need for continuous advancements in software and content might limit its appeal. Others point out that existing platforms already offer immersive experiences without the need for bulky hardware, and Apple might face challenges in convincing the masses to invest in the Vision Pro.

Retailing at $3,499, the Apple Vision Pro costs roughly seven times as much as the $499 Meta Quest 3. For Robinson, this prohibitive price will be a large contributing factor to a slow adoption curve. However, he believes that as the price falls and the number of applications grows over time, this technology will gain a much wider audience.

While Hoffman also sees the need for more cost-effective options, he believes that Vision Pro is ready for mainstream adoption. “Vision Pro is absolutely ready for mainstream adoption, especially because it’s made by Apple,” he said. “Once Apple launches a product, users typically flock to it.”

Still, some experts believe that Vision Pro isn’t ready for mainstream adoption yet. While initially impressed with the headset, Ridderstad noticed features that were centered around “looking and clicking” rather than 3D VR interactions. “I do think that Vision Pro won’t be ready for mainstream adoption until there’s been a few iterations of the headset,” she told us. “We’ll need to see some evolution from Apple in order to make mixed reality truly mainstream.”

For Alexander, the mainstream adoption of Vision Pro is still a few years out. Although he doesn’t see the price point being a hindrance to adoption, he believes that developers need time to build compelling apps that give people something to do on these devices outside of the novelty factor.

Toma, sharing a similar sentiment, said that, even though “the merging of the tangible and virtual worlds is an impending reality,” we’re still far from seeing these tools adopted on a massive scale by consumers and businesses. “The Vision Pro’s success depends on whether consumers will embrace a bulky, expensive piece of hardware they don’t need for the immersive experience Apple is promoting,” he said.

However, as Angerer points out, “Every technological leap comes with its share of skepticism.” While he understands why there are those who argue that Apple’s headset is not ready for mainstream adoption because of its size, he believes it’s important to remember that Apple has consistently placed high importance on balancing aesthetics with practicality. “Existing platforms may offer similar experiences, but Apple’s unique selling proposition often lies in its seamless user experience and integration across devices, which could give Vision Pro an edge,” he said.

Reshaping Industries: Applications of Apple Vision Pro and Other MR Headsets

Regardless of their readiness for mainstream use, mixed reality headsets like the Apple Vision Pro have the potential to transform various industries. Experts foresee numerous applications in fields such as healthcare, education, architecture, and entertainment.

In healthcare, for instance, mixed reality can aid in surgical simulations and remote medical consultations. In education, immersive learning experiences can enhance student engagement and comprehension. Architects can utilize mixed reality to visualize designs in real-world environments, while the entertainment industry can create entirely new levels of interactive experiences for consumers.

According to Hoffman, Vision Pro will be a game changer that unlocks high-value enterprise use cases. “Collaboration is essential for most scenarios that merge the physical and virtual. To be viable, eye contact is key for co-located participants, and faithfully conveying gaze and facial expressions is key for remote participants,” he explained. “Apple masterfully tackles both, making it possible to collaborate with any combination of co-located and remote participants where everyone wears a device. This combining of the physical and virtual worlds is critical for so many scenarios: task guidance, IoT digital twins, skills training, AI-enhanced inspections, augmented surgery, logistics, and space planning.”

A Promising Outlook for Apple Vision Pro and Mixed Reality Technology

As industry experts have highlighted, factors such as pricing, content availability, and competing platforms could influence its widespread acceptance. Nonetheless, Vision Pro and other mixed reality headsets are set to reshape industries and open new possibilities. The future of mixed reality holds immense promise with continued advancements and a growing ecosystem, and the Apple Vision Pro stands at the forefront of this transformative journey.

AI to Help Everyone Unleash Their Inner Creator With Masterpiece X

Empowering independent creators is an often-touted benefit of AI in XR. We’ve seen examples from professional development studios with little to no public offering, but precious few examples of AI-powered authoring tools for individual users. Masterpiece Studio is adding one more, “Masterpiece X”, to help everyone “realize and elevate more of their creative potential.”

“A New Form of Literacy”

Masterpiece Studio doesn’t just want to release an app – they want to start a movement. The team believes that “everyone is a creator” but the modern means of creation are inaccessible to the average person – and that AI is the solution.

“As our world increasingly continues to become more digital, learning how to create becomes a crucial skill: a new form of literacy,” says a release shared with ARPost.

Masterpiece Studio has already been in the business of 3D asset generation for over eight years now. The company took home the 2021 Auggie Award for Best Creator and Authoring Tool, and is a member of the Khronos Group and the Metaverse Standards Forum.

So, what’s the news? A new AI-powered asset generation platform called Masterpiece X, currently available as a beta application through a partnership with Meta.

The Early Days of Masterpiece X

Masterpiece X is already available on the Quest 2, and it’s already useful if you have your own 3D assets to import. There’s a free asset library, but it only contains sample content at the moment. The big feature of the app – creating 3D models from text prompts – is still rolling out and will (hopefully) result in a more highly populated asset library.

“Please keep in mind that this is an ‘early release’ phase of the Masterpiece X platform. Some features are still in testing with select partners,” reads the release.

That doesn’t mean that it’s too early to bother getting the app. It’s already a powerful tool. Creators that download and master the app now will be better prepared to unlock its full potential when it’s ready.

Creating an account isn’t a lengthy process, but it’s a bit clunky – it can’t be done entirely online or entirely in-app, which means switching between a desktop and the VR headset to enter URLs and passwords. After that, you can take a brief tutorial or experiment on your own.

The app already incorporates a number of powerful tools into the entirely spatial workflow. Getting used to the controls might take some work, though people who already have experience with VR art tools might have a leg up. Users can choose a beginner menu with a cleaner look and fewer tools, or an expert menu with more options.

So far, tools allow users to change the size, shape, color, and texture of assets. Some of these are simple objects, while others come with rigged skeletons that can take on a variety of animations.

I Had a Dream…

For someone like me who isn’t very well-versed in 3D asset editing, now is the moment to spend time in Masterpiece X – honing my skills until the day that asset creation on the platform is streamlined by AI. Maybe then I can finally make a skateboarding Gumby-shaped David Bowie to star in an immersive music video for “Twinkle Song” by Miley Cyrus. Maybe.

How Cities Are Taking Advantage of AR Tech and How Apple’s Vision Pro Could Fuel Innovation

Apple unveiled its first mixed reality headset, the Vision Pro, at this year’s Worldwide Developers Conference (WWDC) on June 5, 2023. The company’s “first spatial computer” will enable users to interact with digital content like never before by leveraging a new 3D interface to deliver immersive spatial experiences.

The Vision Pro marks a new era for immersive technologies, and it can potentially be used to bolster efforts in using such technologies to improve communities.

How the Vision Pro Headset Can Strengthen Efforts to Transform Orlando

Cities around the world are starting to apply new technologies to help improve their communities. City, University of London, for instance, has launched an initiative that will bring about the UK’s largest AR, VR, and metaverse training center. London has also been mapped in 3D, allowing locals and visitors to have an immersive view of the city.

In 2021, Columbia University started a project called the “Hybrid Twins for Urban Transportation”, which creates a digital twin of New York’s key intersections to help optimize traffic flows.

Using New Technologies to Enhance Orlando’s Digital Twin Initiative

With Orlando, Florida, being designated as the metaverse’s MetaCenter, new MR headsets like Apple’s Vision Pro can help create radical changes to bolster the city’s digital twin efforts, which can accelerate Orlando’s metaverse capabilities.

In an interview with ARPost, Tim Giuliani, the President and CEO of the Orlando Economic Partnership (OEP), shared that emerging technologies like the digital twin enable them to showcase the region to executives who are planning to relocate their companies to Orlando.

Moreover, the digital twin helps local leaders ensure that the city has a robust infrastructure to support its residents, thus positively impacting the city’s economy and prosperity.

The digital twin’s physical display is currently housed at the OEP’s headquarters in downtown Orlando. However, Giuliani shared that AR headsets can make it more accessible.

“We can use the headset’s technology to take our digital twin to trade shows or whenever it goes out to market to companies,” said Giuliani. According to Giuliani, utility companies and city planners can use the 3D model to access a holographic display when mapping out proposed infrastructure improvements. Stakeholders can also use it to create 3D models using their own data for simulations like climate change and infrastructure planning.

He added that equipment like the Vision Pro can help make VR, AR, and 3D simulation more widespread. According to Giuliani, while the Vision Pro is the first to come out, other new devices will follow in the coming years, and the competition will turn these headsets into consumer devices.

“Apple’s announcement cements the importance of the MetaCenter. The Orlando region has been leading in VR and AR and 3D simulation for over a decade now. So, all the things that we have been saying of why we are the MetaCenter, this hardware better positions us to continue leading in this territory,” he told us.

Leveraging the Vision Pro and MR to Usher in New Innovations

Innovate Orlando CEO and OEP Chief Information Officer David Adelson noted that aside from companies, ordinary individuals who aren’t keenly interested in immersive tech for development or work can also use devices like the Vision Pro to help Orlando with its effort to become the MetaCenter.

“These new devices are one of the hardware solutions that this industry has been seeking. Through these hardware devices, the software platforms, and simulation market that has been building for decades, will now be enabled on a consumer and a business interface,” said Adelson.

Adelson also shared that Orlando has been leading in the spatial computing landscape and that the emergence of a spatial computing headset like the Vision Pro brings this particular sector into the spotlight.

How can businesses leverage the new Vision Pro headset and other MR technologies to usher in new developments?

According to Giuliani, businesses can use these technologies to provide a range of services, such as consulting services, as well as help increase customer engagement, cut costs, and make informed decisions faster.

“AR can be a powerful tool to provide remote expertise, and remote assistance with AR helps move projects forward and provide services that would otherwise require multiple site visits. This is what we are taking advantage of with the digital twin,” said Giuliani.

Giuliani also noted that such technologies can be a way for companies to empower both employees and customers by enhancing productivity, improving services, and fostering better communication.

Potential Drawbacks of Emerging Technologies

Given that these are still relatively new pieces of technology, it’s possible that they’ll have some drawbacks. However, according to Adelson, these can be seen as a positive movement that can potentially change the Web3 landscape. Giuliani echoes this sentiment.

“We like to focus on the things that can unite us and help us move forward to advance broad-based prosperity and this means working with the new advancements created and finding ways to make them work and facilitate the work we all do,” he told us.

Virtual Reality Enhances Ketamine Therapy Sessions With Immersive Experiences

Recently, TRIPP PsyAssist completed its Phase 1 Feasibility Study to demonstrate its use as a pretreatment tool for patients undergoing ketamine therapy. This VR solution is an example of emerging technologies that facilitate the development of more accessible and transformative mental health care solutions.

Globally, more than 500 million people live with anxiety or depression – more than half of the estimated total of people living with some form of mental illness. However, only a third receive adequate mental health care. Solutions like TRIPP PsyAssist help mental health clinics provide the care their patients need.

Treating Mental Health Disorders With Ketamine Therapy

Many methods are available to treat depression, anxiety, and similar mental health conditions. Using psychedelics, like ketamine, is gaining ground as a fast-acting, non-invasive treatment option. Low doses of ketamine, a dissociative psychedelic, are administered intravenously over several minutes in a clinical setting while the patient is observed. Patients typically go through several rounds of these treatments.

While numerous clinical studies support ketamine therapy’s effectiveness, there remains a need to manage the pre-treatment experience. Patients often experience anxiety before their ketamine therapy sessions, and alleviating that distress helps usher in a more relaxed onboarding and treatment session. There’s also a need to integrate their experiences after the ketamine therapy treatment, both at the clinic and at home.

Using VR to Improve Ketamine Therapy Pre-Treatment Experience

TRIPP is a California-based company pioneering XR wellness technologies for consumers, enterprises, and clinics. Their research-based platform is available across VR, AR, and mobile to help facilitate a deeper self-connection and create collective well-being.

TRIPP PsyAssist for ketamine therapy 2

TRIPP is best known for its award-winning consumer platform that creates beautiful meditative VR spaces where users can spend time calming their minds and centering their being. Staying true to its mission of using technology to transform the mind, the company introduced TRIPP PsyAssist, its clinical offering aimed at helping medical institutions use XR to improve their practices.

“At TRIPP, we are dedicated to empowering individuals on their path to healing,” said TRIPP’s CEO and founder Nanea Reeves. They believe that virtual reality has the power to enhance therapeutic interventions, and their research encourages them to explore new frontiers in mental health treatment.

The main objective of Phase 1 of the TRIPP PsyAssist study was to assess whether guided, meditative imagery, which was provided through VR using the Pico Neo 3 Pro Eye headset, could be successfully implemented as a pre-treatment program in an actual clinical setting. The study also aimed to evaluate the level of acceptance of this approach.

The study participants were undergoing ketamine therapy for anxiety or depression at Kadima Neuropsychiatry Institute in San Diego. Kadima’s President, David Feifel, MD, PhD, was excited to partner with TRIPP and have this important feasibility study conducted among its patients.

“VR technology has great potential to enhance mental wellness, and TRIPP PsyAssist is at the forefront of translating that potential into reality,” Feifel said. “This study represents an important step in that direction.”

Improving Patient Experience with TRIPP PsyAssist

The results of the feasibility study were very promising. Eighty percent of the users wanted to use the system frequently, while all of them found the different functions well-integrated. Likewise, 100% of the users felt very confident in using the system.

TRIPP’s Clinical Director of Operations, Sunny Strasburg, LMFT, was delighted with the success of the preliminary results of the feasibility study.

TRIPP PsyAssist for ketamine therapy

“These findings inspire us to forge ahead in uncovering new frontiers within clinical settings where technology and psychedelic medicine converge,” she said. Strasburg and her team look forward to expanding their study to explore various TRIPP PsyAssist applications in clinical settings.

With Phase 1 of the study completed, TRIPP PsyAssist is set to discover new ways of integrating innovative VR technology into mainstream clinical practices.

Reeves and Strasburg are also attending the MAPS Psychedelic Science Conference, which is taking place this week, where they are showcasing their research and discussing the impact of emerging technologies on mental health treatment. A Kadima booth will also be present to give attendees a demonstration of the transformative potential of the TRIPP platform.

Final Thoughts

Significant advances in research have elevated our knowledge about mental health. However, it remains a critical global health concern: the number of people affected keeps escalating while the available resources remain sparse.

But there’s a light at the end of the tunnel. Initiatives like TRIPP PsyAssist prove that emerging technology can play a significant role in alleviating mental health problems. This gives us the confidence that the future is bright and that our challenges have a solution at hand.


the-intersections-of-artificial-intelligence-and-extended-reality

The Intersections of Artificial Intelligence and Extended Reality

It seems like just yesterday it was all AR this, VR that, metaverse, metaverse, metaverse. Now all anyone can talk about is artificial intelligence. Is that a bad sign for XR? Some people seem to think so. However, people in the XR industry understand that it’s not a competition.

In fact, artificial intelligence has a huge role to play in building and experiencing XR content – and it’s been part of high-level metaverse discussions for a very long time. I’ve never claimed to be a metaverse expert and I’m not about to claim to be an AI expert, so I’ve been talking to the people building these technologies to learn more about how they help each other.

The Types of Artificial Intelligence in Extended Realities

For the sake of this article, there are three main branches of artificial intelligence: computer vision, generative AI, and large language models. The field is more complicated than that, but this breakdown helps get us started talking about how AI relates to XR.

Computer Vision

In XR, computer vision helps apps recognize and understand elements in the physical environment. That understanding is what lets apps place virtual elements in the environment and, sometimes, have those elements react to it. Computer vision is also increasingly being used to streamline the creation of digital twins of physical items or locations.
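
To make the idea a little more concrete, here is a minimal sketch of one classic computer vision workflow for anchoring virtual content: detecting a printed fiducial marker and estimating its pose so a renderer can draw a 3D object on top of it. This is an illustrative example of the general pattern, not how Niantic’s scene understanding works, and it assumes opencv-contrib-python 4.7 or newer plus a camera that has already been calibrated.

```python
import cv2
import numpy as np

# Sketch: detect an ArUco marker and estimate its pose so virtual content can be
# anchored to it. Assumes opencv-contrib-python >= 4.7 and pre-computed calibration
# values (camera_matrix, dist_coeffs) for the camera in use.
MARKER_SIZE_M = 0.05  # physical side length of the printed marker, in meters

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

def find_anchor_pose(frame, camera_matrix, dist_coeffs):
    """Return (rotation_vec, translation_vec) for the first detected marker, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = detector.detectMarkers(gray)
    if ids is None:
        return None  # nothing recognized in this frame

    # 3D coordinates of the marker's corners in its own coordinate system.
    half = MARKER_SIZE_M / 2
    object_points = np.array(
        [[-half, half, 0], [half, half, 0], [half, -half, 0], [-half, -half, 0]],
        dtype=np.float32,
    )
    image_points = corners[0].reshape(4, 2).astype(np.float32)

    ok, rvec, tvec = cv2.solvePnP(object_points, image_points, camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```

A rendering engine would then use that pose every frame to keep the virtual element locked to the physical one.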

Niantic is one of XR’s big world-builders using computer vision and scene understanding to realistically augment the world. 8th Wall – an acquisition that runs its own projects but also serves as Niantic’s WebXR division – uses some AI itself and is compatible with other AI tools, as teams showcased in a recent Innovation Lab hackathon.

“During the sky effects challenge in March, we saw some really interesting integrations of sky effects with generative AI because that was the shiny object at the time,” Caitlin Lacey, Niantic’s Senior Director of Product Marketing told ARPost in a recent interview. “We saw project after project take that spin and we never really saw that coming.”

The winner used generative AI to create the environment that replaced the sky through a recent tool developed by 8th Wall. While some see artificial intelligence (that “shiny object”) as taking the wind out of immersive tech’s sails, Lacey sees this as an evolution rather than a distraction.

“I don’t think it’s one or the other. I think they complement each other,” said Lacey. “I like to call them the peanut butter and jelly of the internet.”

Generative AI

Generative AI takes a prompt and turns it into some form of media, whether an image, a short video, or even a 3D asset. In VR experiences, it is often used to create “skyboxes” – the backdrop image wrapped around the virtual landscape where players have their actual interactions. However, as AI gets stronger, it is increasingly used to create the virtual assets and environments themselves.

Artificial Intelligence and Professional Content Creation

Talespin makes immersive XR experiences for training soft skills in the workplace. The company has been using artificial intelligence internally for a while now and recently rolled out a whole AI-powered authoring tool for their clients and customers.

A release shared with ARPost calls the platform “an orchestrator of several AI technologies behind the scenes.” That includes developing generative AI tools for character and world building, but it also includes work with other kinds of artificial intelligence that we’ll explore further in the article, like LLMs.

“One of the problems we’ve all had in the XR community is that there’s a very small contingent of people who have the interest and the know-how and the time to create these experiences, so this massive opportunity is funneled into a very narrow pipeline,” Talespin CEO Kyle Jackson told ARPost. “Internally, we’ve seen a 95-97% reduction in time to create [with AI tools].”

Talespin isn’t introducing these tools to put themselves out of business. On the contrary, Jackson said that his team is able to be even more involved in helping companies workshop their experiences because his team is spending less time building the experiences themselves. Jackson further said this is only one example of a shift happening to more and more jobs.

“What should we be doing to make ourselves more valuable as these things shift? … It’s really about metacognition,” said Jackson. “Our place flipped from needing to know the answer to needing to know the question.”

Artificial Intelligence and Individual Creators

DEVAR launched MyWebAR in 2021 as a no-code authoring tool for WebAR experiences. In the spring of 2023, that platform became more powerful with a neural network for AR object creation.

In creating a 3D asset from a prompt, the network determines the necessary polygon count and replicates the texture. The resulting 3D asset can exist in AR experiences and serve as a marker itself for second-layer experiences.

“A designer today is someone who can not just draw, but describe. Today, it’s the same in XR,” DEVAR founder and CEO Anna Belova told ARPost. “Our goal is to make this available to everyone … you just need to open your imagination.”

Blurring the Lines

“From strictly the making a world aspect, AI takes on a lot of the work,” Mirrorscape CEO Grant Anderson told ARPost. “Making all of these models and environments takes a lot of time and money, so AI is a magic bullet.”

Mirrorscape is looking to “bring your tabletop game to life with immersive 3D augmented reality.” Of course, much of the beauty of tabletop games comes from the fact that players are creating their own worlds and characters as they go along. While the roleplaying element has been reproduced by other platforms, Mirrorscape is bringing in that individual creativity through AI.

“We’re all about user-created content, and I think in the end AI is really going to revolutionize that,” said Anderson. “It’s going to blur the lines around what a game publisher is.”

Even professional builders who are independent or just starting out can use artificial intelligence – whether to create assets or just for ideation – to help level the playing field. That was a theme of a recent Zapworks workshop, “Can AI Unlock Your Creating Potential? Augmenting Reality With AI Tools.”

“AI is now giving individuals like me and all of you sort of superpowers to compete with collectives,” Zappar executive creative director Andre Assalino said during the workshop. “If I was a one-man band, if I was starting off with my own little design firm or whatever, if it’s just me freelancing, I now will be able to do so much more than I could five years ago.”

NeRFs

Neural Radiance Fields (NeRFs) weren’t included in the introduction because they can be seen as a combination of generative AI and computer vision. A NeRF starts with a special kind of neural network called a multilayer perceptron (MLP). A “neural network” is any artificial intelligence modeled loosely on the human brain, and an MLP is … well, look at it this way:

If you’ve ever taken an engineering course, or even a high school shop class, you’ve been introduced to drafting. Technical drawings represent a 3D structure as a series of 2D images, each showing a different angle of the structure. Over time, you can get pretty good at visualizing the complete structure from these flat images. An MLP can do the same thing.

The difference is the output. When a human does this, the output is a thought – a spatial understanding of the object in your mind’s eye. When an MLP does this, the output is a NeRF – a 3D rendering generated from the 2D images.
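
As a rough sketch of what that looks like in code – illustrative layer sizes, not any specific paper’s architecture, and assuming PyTorch – the MLP at the heart of a NeRF maps a 3D sample position and a viewing direction to a color and a density, which a renderer then integrates along camera rays to produce new views of the scene:

```python
import torch
import torch.nn as nn

# Minimal sketch of a NeRF-style MLP: position + viewing direction in,
# color + density out. Real systems add positional encoding, more layers,
# and a volume-rendering step that integrates samples along each camera ray.
class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)       # how "solid" space is at this point
        self.color_head = nn.Sequential(               # color also depends on viewing angle
            nn.Linear(hidden + 3, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, position, direction):
        features = self.backbone(position)
        density = torch.relu(self.density_head(features))
        color = self.color_head(torch.cat([features, direction], dim=-1))
        return color, density

# Usage: query the field at sample points along one camera ray.
model = TinyNeRF()
points = torch.rand(16, 3)      # 16 sample positions along the ray
view_dirs = torch.rand(16, 3)   # viewing direction at each sample
rgb, sigma = model(points, view_dirs)
```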

Early on, this meant feeding countless images into the MLP. However, in the summer of 2022, Apple and the University of British Columbia developed a way to do it with one video. Their approach was specifically interested in generating 3D models of people from video clips for use in AR applications.

Whether a NeRF recreates a human or an object, it’s quickly becoming the fastest and easiest way to make digital twins. Of course, the one downside is that a NeRF can only create digital models of things that already exist in the physical world.

Digital Twins and Simulation

Digital twins can be built with or without artificial intelligence. However, some use cases of digital twins are powered by AI. These include simulations for optimization and disaster readiness. For example, a digital twin of a real campus can be created and then modified on a computer to maximize production or minimize risk across different simulated scenarios.

“You can do things like scan in areas of a refinery, but then create optimized versions of that refinery … and have different simulations of things happening,” MeetKai co-founder and executive chairwoman Weili Dai told ARPost in a recent interview.

A recent suite of authoring tools launched by the company (which started in AI before branching into XR solutions) includes AI-powered tools for creating virtual environments from the physical world. These can be left as exact digital twins, or they can be edited to streamline the production of more fantastic virtual worlds by providing a foundation built in reality.

Large Language Models

Large language models take in language prompts and return language responses. This is one of the AI interactions that run largely under the hood so that, ideally, users don’t realize they’re interacting with AI at all. For example, large language models could be the future of NPC interactions and “non-human agents” that help us navigate vast virtual worlds.

“In these virtual world environments, people are often more comfortable talking to virtual agents,” Inworld AI CEO Ilya Gelfenbeyn told ARPost in a recent interview. “In many cases, they are acting in some service roles and they are preferable [to human agents].”

Inworld AI makes brains that can animate Ready Player Me avatars in virtual worlds. Creators get to decide what the artificial intelligence knows – or what information it can access from the web – and what its personality is like as it walks and talks its way through the virtual landscape.

“You basically are teaching an actor how it is supposed to behave,” Inworld CPO Kylan Gibbs told ARPost.
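
As a minimal sketch of what “teaching an actor” can look like in practice – this is a generic pattern, not Inworld’s actual API, and the call_llm() helper below is a hypothetical stand-in for whichever large language model a project uses – an LLM-driven NPC usually boils down to a persona-defining system prompt plus a running conversation history:

```python
# Generic LLM-driven NPC loop. call_llm() is a hypothetical placeholder for a
# real model client; the system prompt is where the creator "teaches the actor"
# who it is, what it knows, and how it should behave.

def call_llm(messages: list[dict]) -> str:
    """Hypothetical stand-in for a real large language model call."""
    raise NotImplementedError

NPC_PERSONA = (
    "You are Mira, a shopkeeper in a fantasy market. "
    "You only know about your wares and local rumors. "
    "Stay in character and keep replies under two sentences."
)

def npc_reply(history: list[dict], player_line: str) -> str:
    """Send the persona, prior conversation, and the player's new line to the model."""
    messages = [{"role": "system", "content": NPC_PERSONA},
                *history,
                {"role": "user", "content": player_line}]
    reply = call_llm(messages)
    # Remember the exchange so the NPC stays consistent across the conversation.
    history.append({"role": "user", "content": player_line})
    history.append({"role": "assistant", "content": reply})
    return reply
```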

Large language models are also used by developers to speed up back-end processes like generating code.

How XR Gives Back

So far, we’ve talked about ways in which artificial intelligence makes XR experiences better. However, the opposite is also true, with XR helping to strengthen AI for other uses and applications.

Evolving AI

We’ve already seen that some approaches to artificial intelligence are modeled after the human brain. We know that the human brain developed essentially through trial and error as it rose to meet the needs of our early ancestors. So, what if virtual brains had the same opportunity?

Martine Rothblatt, PhD, describes that very opportunity in the excellent book “Virtually Human: The Promise – and the Peril – of Digital Immortality”:

“[Academics] have even programmed elements of autonomy and empathy into computers. They even create artificial software worlds in which they attempt to mimic natural selection. In these artificial worlds, software structures compete for resources, undergo mutations, and evolve. Experimenters are hopeful that consciousness will evolve in their software as it did in biology, with vastly greater speed.”

Feeding AI

Like any emerging technology, people’s expectations of artificial intelligence can grow faster than AI’s actual capabilities. AI learns by having data entered into it. Lots of data.

For some applications, there is a lot of extant data for artificial intelligence to learn from. But, sometimes, the answers that people want from AI don’t exist yet as data from the physical world.

“One sort of major issue of training AI is the lack of data,” Treble Technologies CEO Finnur Pind told ARPost in a recent interview.

Treble Technologies works on creating realistic sound in virtual environments. To train an artificial intelligence to work with sound, it needs audio files. Historically, these were painstakingly sampled – recording how different things create different sounds in different environments.

Usually, during the early design phases, an architect or automotive designer will approach Treble to predict what audio will sound like in a future space. However, Treble can also use its software to generate specific sounds in specific environments to train artificial intelligence without all of the time- and labor-intensive sampling. Pind calls this “synthetic data generation.”
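
As a hedged illustration of that idea – the simulate_impulse_response() function below is a hypothetical stand-in for an acoustic simulator like Treble’s, not a real API – synthetic data generation means rendering labeled training examples instead of recording them:

```python
import random

def simulate_impulse_response(room_size_m, absorption, source_pos, mic_pos):
    """Hypothetical stand-in for an acoustic simulator that renders how a given
    room would sound; a real implementation would return audio samples."""
    raise NotImplementedError

def generate_synthetic_dataset(num_examples):
    """Render (audio, room-parameters) pairs covering rooms no one ever had to record."""
    dataset = []
    for _ in range(num_examples):
        room = {
            "room_size_m": [random.uniform(3.0, 30.0) for _ in range(3)],  # width, depth, height
            "absorption": random.uniform(0.05, 0.9),                       # soft vs. hard surfaces
            "source_pos": [random.uniform(0.5, 2.5) for _ in range(3)],
            "mic_pos": [random.uniform(0.5, 2.5) for _ in range(3)],
        }
        audio = simulate_impulse_response(**room)
        dataset.append((audio, room))  # the simulation settings double as the label
    return dataset
```

The appeal is that the label comes for free: the model trains on exactly the conditions the simulator was asked to render.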

The AI-XR Relationship Is “and” Not “or”

Holding up artificial intelligence as the new technology on the block that somehow takes away from XR is an interesting narrative. However, experts are in agreement that these two emerging technologies reinforce each other – they don’t compete. XR helps AI grow in new and fantastic ways, while AI makes XR tools more powerful and more accessible. There’s room for both.
