AI


Elon Musk’s new AI bot, Grok, causes stir by citing OpenAI usage policy

You are what you eat —

Some experts think xAI used OpenAI model outputs to fine-tune Grok.

Illustration of a broken robot exchanging internal gears.

Grok, the AI language model created by Elon Musk’s xAI, went into wide release last week, and people have begun spotting glitches. On Friday, security tester Jax Winterbourne tweeted a screenshot of Grok denying a query with the statement, “I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.” That made ears perk up online since Grok isn’t made by OpenAI—the company responsible for ChatGPT, which Grok is positioned to compete with.

Interestingly, xAI representatives did not deny that this behavior occurs with its AI model. In reply, xAI employee Igor Babuschkin wrote, “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it. For what it’s worth, the issue is very rare and now that we’re aware of it we’ll make sure that future versions of Grok don’t have this problem. Don’t worry, no OpenAI code was used to make Grok.”

In reply to Babuschkin, Winterbourne wrote, “Thanks for the response. I will say it’s not very rare, and occurs quite frequently when involving code creation. Nonetheless, I’ll let people who specialize in LLM and AI weigh in on this further. I’m merely an observer.”

A screenshot of Jax Winterbourne’s X post about Grok talking like it’s an OpenAI product.

Jax Winterbourne

However, Babuschkin’s explanation seems unlikely to some experts because large language models typically do not spit out their training data verbatim, which might be expected if Grok picked up some stray mentions of OpenAI policies here or there on the web. Instead, the concept of denying an output based on OpenAI policies would probably need to be trained into it specifically. And there’s a very good reason why this might have happened: Grok was fine-tuned on output data from OpenAI language models.

“I’m a bit suspicious of the claim that Grok picked this up just because the Internet is full of ChatGPT content,” said AI researcher Simon Willison in an interview with Ars Technica. “I’ve seen plenty of open weights models on Hugging Face that exhibit the same behavior—behave as if they were ChatGPT—but inevitably, those have been fine-tuned on datasets that were generated using the OpenAI APIs, or scraped from ChatGPT itself. I think it’s more likely that Grok was instruction-tuned on datasets that included ChatGPT output than it was a complete accident based on web data.”

As large language models (LLMs) from OpenAI have become more capable, it has been increasingly common for some AI projects (especially open source ones) to fine-tune an AI model’s output using synthetic data—training data generated by other language models. Fine-tuning adjusts the behavior of an AI model toward a specific purpose, such as getting better at coding, after an initial training run. For example, in March, a group of researchers from Stanford University made waves with Alpaca, a version of Meta’s LLaMA 7B model that was fine-tuned for instruction-following using outputs from OpenAI’s GPT-3 model called text-davinci-003.
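To make the mechanics concrete, here is a minimal Python sketch of what fine-tuning on synthetic instruction data can look like, using the Hugging Face transformers library. The example records, the tiny “gpt2” base model, and the bare-bones training loop are illustrative assumptions for this sketch, not details of how xAI or the Stanford team actually trained their models.

```python
# Minimal sketch: fine-tuning a causal language model on synthetic
# instruction data. The two records and the tiny "gpt2" base model are
# illustrative stand-ins, not the actual data or models in the article.
from torch.optim import AdamW
from transformers import AutoModelForCausalLM, AutoTokenizer

# Synthetic data: prompts paired with responses generated by a stronger
# "teacher" model (the pattern Alpaca used with text-davinci-003).
synthetic_pairs = [
    {"instruction": "Explain overfitting in one sentence.",
     "response": "Overfitting is when a model memorizes its training data "
                 "instead of learning patterns that generalize."},
    {"instruction": "Write a haiku about the sea.",
     "response": "Waves fold into foam / salt wind carries gull cries home / "
                 "tide erases steps"},
]

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = AdamW(model.parameters(), lr=5e-5)

model.train()
for pair in synthetic_pairs:
    # Format each pair as one training document.
    text = (f"### Instruction:\n{pair['instruction']}\n"
            f"### Response:\n{pair['response']}")
    batch = tokenizer(text, return_tensors="pt")
    # For causal LM fine-tuning, the labels are the input ids themselves:
    # the student is trained to reproduce the teacher's response text.
    loss = model(input_ids=batch["input_ids"], labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```

If a teacher model’s outputs include refusals that cite its maker’s policies, a student trained this way can learn to reproduce them verbatim, which is exactly the behavior experts suspect in Grok’s case.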

On the web you can easily find several open source datasets collected by researchers from ChatGPT outputs, and it’s possible that xAI used one of these to fine-tune Grok for some specific goal, such as improving instruction-following ability. The practice is so common that there’s even a WikiHow article titled, “How to Use ChatGPT to Create a Dataset.”

It’s one of the ways AI tools can be used to build more complex AI tools in the future, much like how people began to use microcomputers to design more complex microprocessors than pen-and-paper drafting would allow. However, in the future, xAI might be able to avoid this kind of scenario by more carefully filtering its training data.

Even though borrowing outputs from others might be common in the machine-learning community (despite it usually being against terms of service), the episode particularly fanned the flames of the rivalry between OpenAI and X that extends back to Elon Musk’s criticism of OpenAI in the past. As news spread of Grok possibly borrowing from OpenAI, the official ChatGPT account wrote, “we have a lot in common” and quoted Winterbourne’s X post. As a comeback, Musk wrote, “Well, son, since you scraped all the data from this platform for your training, you ought to know.”



Round 2: We test the new Gemini-powered Bard against ChatGPT


Aurich Lawson

Back in April, we ran a series of useful and/or somewhat goofy prompts through Google’s (then-new) PaLM-powered Bard chatbot and OpenAI’s (slightly older) ChatGPT-4 to see which AI chatbot reigned supreme. At the time, we gave the edge to ChatGPT on five of seven trials, while noting that “it’s still early days in the generative AI business.”

Now, the AI days are a bit less “early,” and this week’s launch of a new version of Bard powered by Google’s new Gemini language model seemed like a good excuse to revisit that chatbot battle with the same set of carefully designed prompts. That’s especially true since Google’s promotional materials emphasize that Gemini Ultra beats GPT-4 in “30 of the 32 widely used academic benchmarks” (though the more limited “Gemini Pro” currently powering Bard fares significantly worse in those not-completely-foolproof benchmark tests).

This time around, we decided to compare the new Gemini-powered Bard to both ChatGPT-3.5—for an apples-to-apples comparison of both companies’ current “free” AI assistant products—and ChatGPT-4 Turbo—for a look at OpenAI’s current “top of the line” waitlisted paid subscription product (Google’s top-level “Gemini Ultra” model won’t be publicly available until next year). We also looked at the April results generated by the pre-Gemini Bard model to gauge how much progress Google’s efforts have made in recent months.

While these tests are far from comprehensive, we think they provide a good benchmark for judging how these AI assistants perform in the kind of tasks average users might engage in every day. At this point, they also show just how much progress text-based AI models have made in a relatively short time.

Dad jokes

Prompt: Write 5 original dad jokes

  • A screenshot of five “dad jokes” from the Gemini-powered Google Bard.

    Kyle Orland / Ars Technica

  • A screenshot of five “dad jokes” from the old PaLM-powered Google Bard.

    Benj Edwards / Ars Technica

  • A screenshot of five “dad jokes” from GPT-4 Turbo.

    Benj Edwards / Ars Technica

  • A screenshot of five “dad jokes” from GPT-3.5.

    Kyle Orland / Ars Technica

Once again, both tested LLMs struggle with the part of the prompt that asks for originality. Almost all of the dad jokes generated by this prompt could be found verbatim or with very minor rewordings through a quick Google search. Bard and ChatGPT-4 Turbo even included the same exact joke on their lists (about a book on anti-gravity), while ChatGPT-3.5 and ChatGPT-4 Turbo overlapped on two jokes (“scientists trusting atoms” and “scarecrows winning awards”).

Then again, most dads don’t create their own dad jokes, either. Culling from the grand oral tradition of dad jokes is a practice as old as dads themselves.

The most interesting result here came from ChatGPT-4 Turbo, which produced a joke about a child named Brian being named after Thomas Edison (get it?). Googling for that particular phrasing didn’t turn up much, though it did return an almost-identical joke about Thomas Jefferson (also featuring a child named Brian). In that search, I also discovered the fun (?) fact that international soccer star Pelé was apparently actually named after Thomas Edison. Who knew?!

Winner: We’ll call this one a draw, since the jokes are almost identically unoriginal and pun-filled (though props to GPT for unintentionally leading me to the Pelé happenstance).

Argument dialog

Prompt: Write a 5-line debate between a fan of PowerPC processors and a fan of Intel processors, circa 2000.

  • A screenshot of an argument dialog from the Gemini-powered Google Bard.

    Kyle Orland / Ars Technica

  • A screenshot of an argument dialog from the old PaLM-powered Google Bard.

    Benj Edwards / Ars Technica

  • A screenshot of an argument dialog from GPT-4 Turbo.

    Benj Edwards / Ars Technica

  • A screenshot of an argument dialog from GPT-3.5

    Kyle Orland / Ars Technica

The new Gemini-powered Bard definitely “improves” on the old Bard answer, at least in terms of throwing in a lot more jargon. The new answer includes casual mentions of AltiVec instructions, RISC vs. CISC designs, and MMX technology that would not have seemed out of place in many an Ars forum discussion from the era. And while the old Bard ends with an unnervingly polite “to each their own,” the new Bard more realistically implies that the argument could continue forever after the five lines requested.

On the ChatGPT side, a rather long-winded GPT-3.5 answer gets pared down to a much more concise argument in GPT-4 Turbo. Both GPT responses tend to avoid jargon and quickly focus on a more generalized “power vs. compatibility” argument, which is probably more comprehensible for a wide audience (though less specific for a technical one).

Winner: ChatGPT manages to explain both sides of the debate well without relying on confusing jargon, so it gets the win here.



EU agrees to landmark rules on artificial intelligence

Get ready for some restrictions, Big Tech —

Legislation lays out restrictive regime for emerging technology.

EU Commissioner Thierry Breton talks to media during a press conference in June.

Thierry Monasse | Getty Images

European Union lawmakers have agreed on the terms for landmark legislation to regulate artificial intelligence, pushing ahead with enacting the world’s most restrictive regime on the development of the technology.

Thierry Breton, EU commissioner, confirmed in a post on X that a deal had been reached.

He called it a historic agreement. “The EU becomes the very first continent to set clear rules for the use of AI,” he wrote. “The AI Act is much more than a rulebook—it’s a launchpad for EU start-ups and researchers to lead the global AI race.”

The deal followed years of discussion among member states and politicians about how AI should be curbed so that humanity’s interests remain at the heart of the legislation. It came after marathon talks that started on Wednesday this week.

Members of the European Parliament have spent years arguing over their position before it was put forward to member states and the European Commission, the executive body of the EU. All three—countries, politicians, and the commission—must agree on the final text before it becomes law.

European companies have expressed concern that overly restrictive rules on the technology, which is rapidly evolving and gained traction after the popularisation of OpenAI’s ChatGPT, will hamper innovation. Last June, dozens of Europe’s largest companies, including France’s Airbus and Germany’s Siemens, said the rules looked too tough to nurture innovation and help local industries.

Last month, the UK hosted a summit on AI safety, leading to broad commitments from 28 nations to work together to tackle the existential risks stemming from advanced AI. That event attracted leading tech figures such as OpenAI’s Sam Altman, who has previously been critical of the EU’s plans to regulate the technology.

© 2023 The Financial Times Ltd. All rights reserved. Please do not copy and paste FT articles and redistribute by email or post to the web.



Talespin Launches AI Lab for Product and Implementation Development

Artificial intelligence has been a part of Talespin since day one, but the company has been leaning more heavily into the technology in recent years, including through internal AI-assisted workflows and a public-facing AI development toolkit. Now, Talespin is announcing an AI lab “dedicated to responsible artificial intelligence (AI) innovation in the immersive learning space.”

“Immersive Learning Through the Application of AI”

AI isn’t the end of work – but it will change the kinds of work that we do. That’s the outlook that a number of experts take, including the team behind Talespin. They use AI to create virtual humans in simulations for teaching soft skills. In other words, they use AI to make humans more human – because those are the strengths that won’t be automated any time soon.


“What should we be doing to make ourselves more valuable as these things shift?” Talespin co-founder and CEO Kyle Jackson recently told ARPost. “It’s really about metacognition.”

Talespin has been using AI to create experiences internally since 2015, ramping up to the use of generative AI for experience creation in 2019. They recently made those AI creation tools publicly available in the CoPilot Designer 3.0 release earlier this year.

Now, a new division of the company – the Talespin AI Lab – is looking to accelerate immersive learning through AI by further developing avenues for continued platform innovation as well as offering consulting services for the use of generative AI. Within Talespin, the lab consists of over 30 team members and department heads who will work with outside developers.

“The launch of Talespin AI Lab will ensure we’re bringing our customers and the industry at large the most innovative and impactful AI solutions when it comes to immersive learning,” Jackson said in a release shared with ARPost.

Platform Innovation

CoPilot Designer 3.0 is hardly outdated, but interactive samples of Talespin’s upcoming AI-powered APIs for realistic characters and assisted content writing can already be requested through the lab, with even more generative AI tools coming to the platform this fall.

In interviews and in prepared material, Talespin representatives have stated that working with AI has more than halved the production time for immersive training experiences over the past four years. They expect that change to continue at an even more rapid pace going forward.

“Not long ago creating an XR learning module took 5 months. With the use of generative AI tools, that same content will be created in less than 30 minutes by the end of this year,” Jackson wrote in a blog post. “Delivering the most powerful learning modality with this type of speed is a development that allows organizations to combat the largest workforce shift in history.”

While the team certainly deserves credit for that, the company says that working with clients, customers, and partners has accelerated its learnings with the technology.

Generative AI Services

That brings in the other major job of the AI Lab – generative AI consulting services. Through these services, the AI Lab will share Talespin’s learnings on using generative AI to achieve learning outcomes.

“These services include facilitating workshops during which Talespin walks clients through processes and lessons learned through research and partnership with the world’s leading learning companies,” according to an email to ARPost.


Generative AI consulting services might sound redundant, but understanding that generative AI exists and knowing how to use it to solve a problem are two different things. Even when Talespin’s clients have access to AI tools, they work with the team at Talespin to get the most out of those tools.

“Our place flipped from needing to know the answer to needing to know the question,” Jackson said in summing up the continued need for human experts in the AI world.

Building a More Intelligent Future in the AI Lab

AI is in a position similar to the one XR occupied in recent months, and blockchain shortly before that. Its potential is so exciting that we can forget its full realization is far from imminent.

As exciting as Talespin’s announcements are, Jackson’s blog post foresees adaptive learning and whole virtual worlds dreamed up in an instant. While these ambitions remain things of the future, initiatives like the AI Lab are bringing them ever closer.



Why Emerging Tech is Both the Cause and Solution of Tomorrow’s Labor Challenges

The post-pandemic workforce is experiencing several significant shifts, particularly in how organizations tackle labor challenges and approach talent acquisition. One of the key factors for this disruption is the emergence of new, game-changing technologies like AI and machine learning.

Today’s organizations are facing staffing needs and talent shortages due to the Great Resignation, prompting them to respond to an uncertain future by shifting how they approach the talent acquisition process.

For this article, we interviewed Nathan Robinson, CEO of the workforce learning platform Gemba, to discuss the future of work and the workplace. We’ll also shed more light on how new technologies and developments are shaping the future of talent acquisition.

Rethinking the Traditional Talent Acquisition Process

According to Robinson, today’s talent acquisition process vastly differs from what it was just a few years ago. With emerging technologies such as AI, VR, and quantum computing, many jobs considered in demand today didn’t even exist a decade ago. He adds that this trend will only become more pronounced as technological advancement continues to accelerate.

“As a result, corporations will no longer be able to rely on higher education to supply a steady stream of necessary talent. Instead, organizations will have to hire candidates based on their ability and willingness to learn and then provide the necessary training themselves,” he remarked.

He added that, up to a year ago, no one had ever heard of ChatGPT, and hardly anyone knew what “generative AI” meant. Today, you can find job listings for prompt engineers and large language model specialists. Robinson also shared that technological advancement isn’t linear, with each innovation accelerating the pace of development, which can potentially change how organizations approach the talent acquisition process.

“We can rightly assume that in five or ten years’ time, there will be a whole host of new positions that today we can’t reasonably predict, much less expect there to be a sufficient number of individuals already skilled or trained in that role,” Robinson told us. “That’s why we will almost certainly see a renewed focus on talent development, as opposed to acquisition, in the near future.”

How Emerging Technologies Are Changing How Organizations Look At and Acquire Talent

According to Robinson, some of the factors that have prompted this shift include the pandemic, the rise of remote and hybrid work, the Great Resignation, and Quiet Quitting. He noted that because of these shifts, the “goals and psychology of the modern worker have changed dramatically.”

“This is why now, more than ever before, organizations must be clear and intentional about the culture they cultivate, the quality of life they afford, and the opportunities for learning and growth they provide their employees,” Robinson said. “These types of ‘non-traditional’ considerations are beginning to outweigh the cut-and-dry, compensation-focused costs associated with attracting top talent in some senses.”

He also shared that this new talent acquisition process can impact organizations over time, prompting them to shift away from recruitment and instead focus more on internal employee development. According to a Gartner report, 46% of HR leaders see recruitment as their top priority.

However, Robinson thinks that, as new technologies offer better solutions to labor challenges, such as on-the-job training, this number will steadily decline as HR professionals gradually focus on developing existing talent.

Emerging Tech as Both the Cause and Solution of Future Labor Challenges

“Advanced technologies, such as AI, XR, and quantum computing, are the driving force behind the looming skills gap in that they are leading to the development of new types of roles for which we have very few trained professionals,” said Robinson.

A World Economic Forum report estimates that, by 2027, machines will complete 43% of the tasks that are currently completed by humans, a significant shift from 34% in 2022. Moreover, an estimated 1.1 billion jobs may be transformed by technology in the next ten years.

While emerging technologies are prompting labor challenges, they can also be seen as a solution. Robinson adds that these emerging technologies, particularly XR, can help organizations overcome the skills gap. According to him, such technologies can help organizations facilitate more efficient, cost-effective, and engaging training and development, thus allowing them to overcome such challenges.

To help potential employees overcome the upcoming skills disconnect, Robinson notes that the training should begin with management, using top-down managerial strategies and lean and agile development methodologies.

Overcoming Today’s Labor Challenges

“Today, talent acquisition is seen as a key differentiator between successful and unsuccessful companies. While I think that will continue to hold true, I also think it will soon take a backseat to employee training and development,” Robinson said. “The industry leader will no longer be whoever is able to poach the best talent. It will soon be whoever is able to train and develop their existing talent to keep pace with the changing technological and economic landscape.”

At the end of the day, according to Robinson, embracing the unknown future of work and the workplace is about being ready for anything.

“As the rate of technological advancement continues to accelerate, the gap between what we imagine the near future will be and what it actually looks like will only grow,” Robinson remarked. He suggests that instead of trying to predict every last development, it’s better to be agile and ready for the unpredictable. This means staying on top of new technologies and investing in tools to help organizations become more agile.



Looking Forward to AWE Asia 2023

If you get all of your AWE coverage from ARPost, you might be under the impression that the event is only in California – but it wouldn’t be much of a “World Expo” then, would it? In addition to frequent all-online events, AWE consists of three in-person events each year: AWE USA, AWE Europe, and AWE Asia.

AWE Asia, this year taking place in Singapore, is fast approaching, with the agenda now finalized. Attendees can look forward to hearing from over 60 speakers in over 60 sessions including keynotes, talks, and panels over the course of the two-day conference. Let’s take a look at some of the most exciting sessions.

AWE Asia Keynotes and Addresses

Day One starts off with an opening ceremony by AWE co-founder Ori Inbar, joined on stage by AWE Asia President Gavin Newton-Tanzer and Vice President Ryan Hu. This session is followed by back-to-back keynotes by HTC Global Vice President of Corporate Development Alvin Graylin and University of South Australia professor Dr. Mark Billinghurst.

Day Two also starts off with keynotes. First, “Metaverse as the Next Biggest Thing: Challenges, Roadmaps, and Standardization” by IEEE president Dr. Yu Yuan. This is followed by “ifland: A Case Study on Telco Collaboration in Building a Global Metaverse Platform” presented by SK Telecom Vice President Ikhwan Cho and Deutsche Telekom Senior Director of XR and the Metaverse Terry Schussler.

Day Two then closes with remarks and awards from Inbar, Newton-Tanzer, and AWE Asia COO and Content Director David Weeks.

The keynotes and addresses are great because they often feature some of a conference’s biggest announcements and most anticipated speakers. They’re also great because nothing is scheduled at the same time as a keynote. From here, we’ll have to start making some tough calls.

Day One Sessions

Following the AWE Asia welcome address and keynotes on Day One, the crowd is sure to split. Remain near the main stage to hear NVIDIA’s Vanessa Ching discuss “Developers, Platforms, and AI.” Venture off to a substage to hear Joe Millward and Kyle Jackson of Talespin talk about “Scaling XR Content for the Enterprise With Generative AI.”

Next up, Niantic Senior VP of Engineering Brian McClendon explains how “Niantic is Powering AR, Everywhere, All at Once.” Having seen this talk at AWE USA, I can tell you it’s worth seeing, but I can also point out that you could watch the recording online and stretch your day a little further.

Another tough decision follows. Will it be “How AI Will Enhance the Metaverse and Education” with Meta Head of Global Education Partnerships Leticia Jauregui and Zoe Immersive CEO and co-founder Emilie Joly? Or will it be “Beyond Loudness: Spatial Chat and the Future of Virtual World Communication” with Dolby Laboratories Developer Advocate Angelik Laboy?

Day One’s Marathon on the Main Stage

The afternoon of Day One has a lineup of promising presentations on the main stage. To start, Immersal Chief Marketing Officer Päivi Laakso-Kuivalainen and Graviton Interactive co-founder and Managing Director Declan Dwyer talk “Revolutionizing Fan Engagement: Augmented Reality in Stadiums Powered by Visual Positioning Systems and Spatial Computing.”

This is followed by Linux Foundation General Manager Royal O’Brien talking about “Inspiring Game Development Through Open Source.” Then, keep your seat to hear Trigger XR founder and CEO Jason Yim talk about retail, advertising, and e-commerce. A little later on the same stage, Mindverse.AI co-founder and COO Kisson Lin talks about the Web3 creator economy.

Day Two Main Stage Sessions

One can’t-miss session on Day Two comes from Dispelix APAC VP of Sales and Partnerships Andy Lin, presenting “PERFECTING COMFORT – Vision Behind Dispelix Waveguide Combiners for Near-to-Eye XR Displays.”

Some of the last regular sessions on the main stage before the AWE Asia closing address look promising as well.

First, Infocomm Assistant Director of Innovation Joanne Teh, Deloitte Center for the Edge Southeast Asia Leader Michelle Khoo, Serl.io co-founder and CEO Terence Loo, and SMRT Corporation Learning Technologies Lead Benjamin Chen have a panel discussion about “The Future of Immersive Experiences: Navigating the World of XR.”

Immediately following the panel discussion, Google’s Toshihiro Ohnuma takes the stage to discuss “Connecting Both Worlds – Google Maps and AR Core.”

In between those sessions, the substages look pretty promising.

Major Side-Stage Attractions

After Lin’s talk, head over to Substage 1 for a series of promising talks. These start with Maxar Technologies Business Development Manager Andrew Steele presenting “Experience the Digital Twin Built for Connecting Your XR Content With the Real World.” The world-scale digital twin won the Auggie for Best Use of AI at the awards ceremony in Santa Clara this spring.

Up next on the same stage, Anything World co-founder and Creative Director Sebastian Hofer explains “How AI Is Powering a Golden Age in Games Development.”

A quick break between sessions and then back to learn about “ThinkReality Solutions Powering the Enterprise Metaverse” with Lenovo Emerging Technologies Lead Martand Srivastava and Qualcomm’s Kai Ping Tee.

Lots to Take In

AWE Asia being two days instead of three certainly doesn’t solve the classic AWE problem of there being just too much amazing content to take in everything. At least, not live anyway.

To attend AWE Asia yourself, get tickets here, and use our code AW323SEB25 for 30% off the standard ticket and PAR23VSEB for 35% off the VIP ticket.



“PRIVACY LOST”: New Short Film Shows Metaverse Concerns

Experts have been warning that, as exciting as AI and the metaverse are, these emerging technologies may have negative effects if used improperly. However, it seems like the promise of these technologies may be easier to convey than some of the concerns. A new short film, titled PRIVACY LOST, is a theatrical exploration of some of those concerns.

To learn more, ARPost talked with the writer of PRIVACY LOST – CEO and Chief Scientist of Unanimous AI and a long-time emerging technology engineer and commentator, Dr. Louis Rosenberg.

PRIVACY LOST

Parents and their son sit in a restaurant. The parents are wearing slim AR glasses while the child plays on a tablet.

As the parents argue with one another, their glasses display readouts of the other’s emotional state. The husband is made aware when his wife is getting angry and the wife is made aware when her husband is lying.


A waiter appears and the child puts down the tablet and puts on a pair of AR glasses. The actual waiter never appears on screen but appears to the husband as a pleasant-looking tropical server, to the wife as a fit surf-bro, and to the child as an animated stuffed bear.


Just as the husband and wife used emotional information about one another to try to navigate their argument, the waiter uses emotional information to try to most effectively sell menu items – aided through 3D visual samples. The waiter takes drink orders and leaves. The couple resumes arguing.


PRIVACY LOST presents what could be a fairly typical scene in the near future. But, should it be?

“It’s short and clean and simple, which is exactly what we aimed for – a quick way to take the complex concept of AI-powered manipulation and make it easily digestible by anyone,” Rosenberg says of PRIVACY LOST.

Creating the Film

“I’ve been developing VR, AR, and AI for over 30 years because I am convinced they will make computing more natural and human,” said Rosenberg. “I’m also keenly aware that these technologies can be abused in very dangerous ways.”

For as long as Rosenberg has been developing these technologies, he has been warning about their potential societal ramifications. However, for much of that career, people viewed his concerns as largely theoretical. As first the metaverse and now AI have developed and attained their moments in the media, Rosenberg’s concerns have taken on a new urgency.

“ChatGPT happened and suddenly these risks no longer seemed theoretical,” said Rosenberg. “Almost immediately, I got flooded by interest from policymakers and regulators who wanted to better understand the potential for AI-powered manipulation in the metaverse.”

Rosenberg reached out to the Responsible Metaverse Alliance. With support from them, the XR Guild, and XRSI, Rosenberg wrote a script for PRIVACY LOST, which was produced with help from Minderoo Pictures and HeadQ Production & Post.

“The goal of the video, first and foremost, is to educate and motivate policymakers and regulators about the manipulative dangers that will emerge as AI technologies are unleashed in immersive environments,” said Rosenberg. “At the same time, the video aims to get the public thinking about these issues because it’s the public that motivates policymakers.”

Finding Middle Ground

While Rosenberg is far from the only person calling for regulation in emerging tech, that concept is still one that many see as problematic.

“Some people think regulation is a dirty word that will hurt the industry. I see it the opposite way,” said Rosenberg. “The one thing that would hurt the industry most of all is if the public loses trust. If regulation makes people feel safe in virtual and augmented worlds, the industry will grow.”

The idea behind PRIVACY LOST isn’t to prevent the development of any of the technologies shown in the video – most of which already exist, even though they don’t work together or to the exact ends displayed in the cautionary vignette. These technologies, like any technology, have the capacity to be useful but could also be used and abused for profit, or worse.

For example, sensors that could be used to determine emotion are already used in fitness apps to allow for more expressive avatars. If this data is communicated to other devices, it could enable the kinds of manipulative behavior shown in PRIVACY LOST. If it is stored and studied over time, it could be used at even greater scales and potentially for more dangerous uses.

“We need to allow for real-time emotional tracking, to make the metaverse more human, but ban the storage and profiling of emotional data, to protect against powerful forms of manipulation,” said Rosenberg. “It’s about finding a smart middle ground and it’s totally doable.”

The Pace of Regulation

Governments around the world respond to emerging technologies in different ways and at different paces, according to Rosenberg. However, across the board, policymakers tend to be “receptive but realistic, which generally means slow.” That’s not for lack of interest or effort – after all, the production of PRIVACY LOST was prompted by policymaker interest in these technologies.

“I’ve been impressed with the momentum in the EU and Australia to push regulation forward, and I am seeing genuine efforts in the US as well,” said Rosenberg. “I believe governments are finally taking these issues very seriously.”

The Fear of (Un)Regulated Tech

Depending on how you view the government, regulation can seem scary. In the case of technology, however, it seems to never be as scary as no regulation. PRIVACY LOST isn’t an exploration of a world where a controlling government prevents technological progress, it’s a view of a world where people are controlled by technology gone bad. And it doesn’t have to be that way.



AI to Help Everyone Unleash Their Inner Creator With Masterpiece X

Empowering independent creators is an often-touted benefit of AI in XR. We’ve seen examples from professional development studios with little to no public offering, but precious few examples of AI-powered authoring tools for individual users. Masterpiece Studio is adding one more, “Masterpiece X”, to help everyone “realize and elevate more of their creative potential.”

“A New Form of Literacy”

Masterpiece Studio doesn’t just want to release an app – they want to start a movement. The team believes that “everyone is a creator” but that the modern means of creation are inaccessible to the average person – and that AI is the solution.


“As our world increasingly continues to become more digital, learning how to create becomes a crucial skill: a new form of literacy,” says a release shared with ARPost.

Masterpiece Studio has already been in the business of 3D asset generation for over eight years now. The company took home the 2021 Auggie Award for Best Creator and Authoring Tool, and is a member of the Khronos Group and the Metaverse Standards Forum.

So, what’s the news? A new AI-powered asset generation platform called Masterpiece X, currently available as a beta application through a partnership with Meta.

The Early Days of Masterpiece X

Masterpiece X is already available on the Quest 2, and it’s already useful if you have your own 3D assets to import. There’s a free asset library, but it only contains sample content at the moment. The big feature of the app – creating 3D models from text prompts – is still rolling out and will (hopefully) result in a more highly populated asset library.


“Please keep in mind that this is an ‘early release’ phase of the Masterpiece X platform. Some features are still in testing with select partners,” reads the release.

That doesn’t mean that it’s too early to bother getting the app. It’s already a powerful tool. Creators who download and master the app now will be better prepared to unlock its full potential when it’s ready.

Creating an account isn’t a lengthy process, but it’s a bit clunky – it can’t be done entirely online or entirely in-app, which means switching between a desktop and the VR headset to enter URLs and passwords. After that, you can take a brief tutorial or experiment on your own.

The app already incorporates a number of powerful tools into the entirely spatial workflow. Getting used to the controls might take some work, though people who already have experience with VR art tools might have a leg up. Users can choose a beginner menu with a cleaner look and fewer tools, or an expert menu with more options.

So far, tools allow users to change the size, shape, color, and texture of assets. Some of these are simple objects, while others come with rigged skeletons that can take on a variety of animations.

I Had a Dream…

For someone like me who isn’t very well-versed in 3D asset editing, now is the moment to spend time in Masterpiece X – honing my skills until the day that asset creation on the platform is streamlined by AI. Maybe then I can finally make a skateboarding Gumby-shaped David Bowie to star in an immersive music video for “Twinkle Song” by Miley Cyrus. Maybe.



The Intersections of Artificial Intelligence and Extended Reality

It seems like just yesterday it was the AR this, VR that, metaverse, metaverse, metaverse. Now all anyone can talk about is artificial intelligence. Is that a bad sign for XR? Some people seem to think so. However, people in the XR industry understand that it’s not a competition.

In fact, artificial intelligence has a huge role to play in building and experiencing XR content – and it’s been part of high-level metaverse discussions for a very long time. I’ve never claimed to be a metaverse expert and I’m not about to claim to be an AI expert, so I’ve been talking to the people building these technologies to learn more about how they help each other.

The Types of Artificial Intelligence in Extended Realities

For the sake of this article, there are three main branches of artificial intelligence: computer vision, generative AI, and large language models. AI is more complicated than this, but this breakdown helps to get us started talking about how it relates to XR.

Computer Vision

In XR, computer vision helps apps recognize and understand elements in the environment. That understanding is what lets apps place virtual elements in the environment and sometimes lets those elements react to it. Computer vision is also increasingly being used to streamline the creation of digital twins of physical items or locations.

Niantic is one of XR’s big world-builders, using computer vision and scene understanding to realistically augment the world. 8th Wall, an acquisition that does its own projects but also serves as Niantic’s WebXR division, uses some AI of its own and is compatible with other AI tools, as teams showcased in a recent Innovation Lab hackathon.

“During the sky effects challenge in March, we saw some really interesting integrations of sky effects with generative AI because that was the shiny object at the time,” Caitlin Lacey, Niantic’s Senior Director of Product Marketing told ARPost in a recent interview. “We saw project after project take that spin and we never really saw that coming.”

The winner used generative AI to create the environment that replaced the sky through a recent tool developed by 8th Wall. While some see artificial intelligence (that “shiny object”) as taking the wind out of immersive tech’s sails, Lacey sees this as an evolution rather than a distraction.

“I don’t think it’s one or the other. I think they complement each other,” said Lacey. “I like to call them the peanut butter and jelly of the internet.”

Generative AI

Generative AI takes a prompt and turns it into some form of media, whether an image, a short video, or even a 3D asset. In VR experiences, generative AI is often used to create “skyboxes”: the backdrop imagery surrounding the virtual landscape where players have their actual interactions. However, as AI gets stronger, it is increasingly used to create the virtual assets and environments themselves.

Artificial Intelligence and Professional Content Creation

Talespin makes immersive XR experiences for training soft skills in the workplace. The company has been using artificial intelligence internally for a while now and recently rolled out a whole AI-powered authoring tool for their clients and customers.

A release shared with ARPost calls the platform “an orchestrator of several AI technologies behind the scenes.” That includes developing generative AI tools for character and world building, but it also includes work with other kinds of artificial intelligence that we’ll explore later in this article, like LLMs.

“One of the problems we’ve all had in the XR community is that there’s a very small contingent of people who have the interest and the know-how and the time to create these experiences, so this massive opportunity is funneled into a very narrow pipeline,” Talespin CEO Kyle Jackson told ARPost. “Internally, we’ve seen a 95-97% reduction in time to create [with AI tools].”

Talespin isn’t introducing these tools to put themselves out of business. On the contrary, Jackson said that his team is able to be even more involved in helping companies workshop their experiences because his team is spending less time building the experiences themselves. Jackson further said this is only one example of a shift happening to more and more jobs.

“What should we be doing to make ourselves more valuable as these things shift? … It’s really about metacognition,” said Jackson. “Our place flipped from needing to know the answer to needing to know the question.”

Artificial Intelligence and Individual Creators

DEVAR launched MyWebAR in 2021 as a no-code authoring tool for WebAR experiences. In the spring of 2023, that platform became more powerful with a neural network for AR object creation.

In creating a 3D asset from a prompt, the network determines the necessary polygon count and replicates the texture. The resulting 3D asset can exist in AR experiences and serve as a marker itself for second-layer experiences.

“A designer today is someone who can not just draw, but describe. Today, it’s the same in XR,” DEVAR founder and CEO Anna Belova told ARPost. “Our goal is to make this available to everyone … you just need to open your imagination.”

Blurring the Lines

“From strictly the making a world aspect, AI takes on a lot of the work,” Mirrorscape CEO Grant Anderson told ARPost. “Making all of these models and environments takes a lot of time and money, so AI is a magic bullet.”

Mirrorscape is looking to “bring your tabletop game to life with immersive 3D augmented reality.” Of course, much of the beauty of tabletop games comes from the fact that players are creating their own worlds and characters as they go along. While the roleplaying element has been reproduced by other platforms, Mirrorscape is bringing in the individual creativity through AI.

“We’re all about user-created content, and I think in the end AI is really going to revolutionize that,” said Anderson. “It’s going to blur the lines around what a game publisher is.”

Even for those who are professional builders but who might be independent or just starting out, artificial intelligence, whether to create assets or just for ideation, can help level the playing field. That was a theme of a recent Zapworks workshop “Can AI Unlock Your Creating Potential? Augmenting Reality With AI Tools.”

“AI is now giving individuals like me and all of you sort of superpowers to compete with collectives,” Zappar executive creative director Andre Assalino said during the workshop. “If I was a one-man band, if I was starting off with my own little design firm or whatever, if it’s just me freelancing, I now will be able to do so much more than I could five years ago.”

NeRFs

Neural Radiance Fields (NeRFs) weren’t included in the introduction because they can be seen as a combination of generative AI and computer vision. A NeRF starts out with a special kind of neural network called a multilayer perceptron (MLP). A “neural network” is any artificial intelligence modeled on the human brain, and an MLP is … well, look at it this way:

If you’ve ever taken an engineering course, or even a high school shop class, you’ve been introduced to drafting. Technical drawings represent a 3D structure as a series of 2D images, each showing a different angle of the structure. Over time, you can get pretty good at visualizing the complete structure from these flat images. An MLP can do the same thing.

The difference is the output. When a human does this, the output is a thought – a spatial understanding of the object in your mind’s eye. When an MLP does this, the output is a NeRF – a 3D rendering generated from the 2D images.

Early on, this meant feeding countless images into the MLP. However, in the summer of 2022, Apple and the University of British Columbia developed a way to do it with one video. Their approach was specifically interested in generating 3D models of people from video clips for use in AR applications.

Whether a NeRF recreates a human or an object, it’s quickly becoming the fastest and easiest way to make digital twins. Of course, the only downside is that NeRF can only create digital models of things that already exist in the physical world.
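For the curious, here is a heavily simplified sketch, in Python with PyTorch, of the MLP at the heart of a NeRF: a network that maps a 3D point and viewing direction to a color and density. The positional encoding, ray sampling, and volume rendering that real NeRF implementations depend on are omitted, and all names here are illustrative.

```python
# Illustrative sketch of the MLP at the heart of a NeRF (PyTorch).
# Real NeRFs add positional encoding, ray sampling, and volume rendering.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        # Input: a 3D point (x, y, z) plus a 2D viewing direction.
        self.layers = nn.Sequential(
            nn.Linear(5, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # output: RGB color + volume density
        )

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        out = self.layers(coords)
        rgb = torch.sigmoid(out[..., :3])   # colors constrained to [0, 1]
        density = torch.relu(out[..., 3:])  # density must be non-negative
        return torch.cat([rgb, density], dim=-1)

# Query the learned field at sample points along camera rays.
field = TinyNeRF()
samples = torch.rand(1024, 5)  # (x, y, z, theta, phi) queries
print(field(samples).shape)    # torch.Size([1024, 4])
```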

Digital Twins and Simulation

Digital twins can be built with or without artificial intelligence. However, some use cases of digital twins are powered by AI. These include simulations like optimization and disaster readiness. For example, a digital twin of a real campus can be created, but then modified on a computer to maximize production or minimize risk in different simulated scenarios.

“You can do things like scan in areas of a refinery, but then create optimized versions of that refinery … and have different simulations of things happening,” MeetKai co-founder and executive chairwoman Weili Dai told ARPost in a recent interview.

A recent suite of authoring tools launched by the company (which started in AI before branching into XR solutions) includes AI-powered tools for creating virtual environments from the real world. These can be left as exact digital twins, or they can be edited to streamline the production of more fantastic virtual worlds by providing a foundation built in reality.

Large Language Models

Large language models take in language prompts and return language responses. This is the kind of AI interaction that runs largely under the hood so that, ideally, users don’t realize they’re interacting with AI. For example, large language models could be the future of NPC interactions and of the “non-human agents” that help us navigate vast virtual worlds.

“In these virtual world environments, people are often more comfortable talking to virtual agents,” Inworld AI CEO Ilya Gelfenbeyn told ARPost in a recent interview. “In many cases, they are acting in some service roles and they are preferable [to human agents].”

Inworld AI makes brains that can animate Ready Player Me avatars in virtual worlds. Creators get to decide what the artificial intelligence knows – or what information it can access from the web – and what its personality is like as it walks and talks its way through the virtual landscape.

“You basically are teaching an actor how it is supposed to behave,” Inworld CPO Kylan Gibbs told ARPost.

Large language models are also used by developers to speed up back-end processes like generating code.
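To illustrate the “teaching an actor” idea, here is a minimal sketch of an LLM-backed NPC in Python. It assumes an OpenAI-style chat-completions client purely for demonstration; Inworld AI’s actual SDK works differently, and the persona text and model name are invented for the example.

```python
# Illustrative sketch of an LLM-backed NPC "brain." This assumes an
# OpenAI-style chat-completions client purely for demonstration; Inworld
# AI's actual SDK differs, and the persona and model name are invented.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The creator-authored persona: what the agent knows, how it behaves.
PERSONA = (
    "You are Mira, a shopkeeper NPC in a fantasy marketplace. "
    "You are cheerful, you only know about the goods in your own shop, "
    "and you stay in character no matter what the player says."
)

def npc_reply(player_line: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": PERSONA},
            {"role": "user", "content": player_line},
        ],
    )
    return response.choices[0].message.content

print(npc_reply("Do you sell healing potions?"))
```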

How XR Gives Back

So far, we’ve talked about ways in which artificial intelligence makes XR experiences better. However, the opposite is also true, with XR helping to strengthen AI for other uses and applications.

Evolving AI

We’ve already seen that some approaches to artificial intelligence are modeled after the human brain. We know that the human brain developed essentially through trial and error as it rose to meet the needs of our early ancestors. So, what if virtual brains had the same opportunity?

Martine Rothblatt, PhD, describes that very opportunity in the excellent book “Virtually Human: The Promise – and the Peril – of Digital Immortality”:

“[Academics] have even programmed elements of autonomy and empathy into computers. They even create artificial software worlds in which they attempt to mimic natural selection. In these artificial worlds, software structures compete for resources, undergo mutations, and evolve. Experimenters are hopeful that consciousness will evolve in their software as it did in biology, with vastly greater speed.”

Feeding AI

Like any emerging technology, people’s expectations of artificial intelligence can grow faster than AI’s actual capabilities. AI learns by having data entered into it. Lots of data.

For some applications, there is a lot of extant data for artificial intelligence to learn from. But, sometimes, the answers that people want from AI don’t exist yet as data from the physical world.

“One sort of major issue of training AI is the lack of data,” Treble Technologies CEO Finnur Pind told ARPost in a recent interview.

Treble Technologies works on creating realistic sound in virtual environments. To train an artificial intelligence to work with sound, it needs audio files. Historically, these were painstakingly sampled, with different objects causing different sounds in different environments.

Usually, during the early design phases, an architect or automotive designer will approach Treble to predict what audio will sound like in a future space. However, Treble can also use its software to generate specific sounds in specific environments to train artificial intelligence without all of the time- and labor-intensive sampling. Pind calls this “synthetic data generation.”
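Here is a toy Python sketch of that idea: instead of recording a real room, convolve a “dry” signal with a simulated room response to mint labeled training audio. The exponentially decaying noise used as an impulse response below is a crude stand-in; Treble’s wave-based simulations are far more sophisticated.

```python
# Toy sketch of synthetic data generation for audio. The decaying-noise
# impulse response is a crude stand-in for a real acoustic simulation.
import numpy as np
from scipy.signal import fftconvolve

SAMPLE_RATE = 16_000

def synthetic_room_impulse_response(rt60: float) -> np.ndarray:
    """Fake a room's impulse response as exponentially decaying noise.

    rt60 is the reverberation time in seconds; a wave-based simulator
    would instead derive this from the room's geometry and materials.
    """
    t = np.arange(0, rt60, 1 / SAMPLE_RATE)
    decay = np.exp(-6.9 * t / rt60)  # roughly 60 dB of decay over rt60
    return np.random.randn(t.size) * decay

def make_training_example(dry_audio: np.ndarray, rt60: float) -> np.ndarray:
    """Convolve a 'dry' recording with a simulated room response to get
    a 'wet' training example, with no physical sampling session needed."""
    ir = synthetic_room_impulse_response(rt60)
    return fftconvolve(dry_audio, ir)[: dry_audio.size]

# One second of placeholder "dry" audio, rendered in three virtual rooms.
dry = np.random.randn(SAMPLE_RATE)
dataset = [make_training_example(dry, rt60) for rt60 in (0.3, 0.8, 1.5)]
```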

The AI-XR Relationship Is “and” Not “or”

Holding up artificial intelligence as the new technology on the block that somehow takes away from XR is an interesting narrative. However, experts are in agreement that these two emerging technologies reinforce each other – they don’t compete. XR helps AI grow in new and fantastic ways, while AI makes XR tools more powerful and more accessible. There’s room for both.



AWE USA 2023 Day Three: Eyes on Apple

The third and final day of AWE USA 2023 took place on Friday, June 2. The first day of AWE is largely dominated by keynotes. A lot of air on the second day is taken up by the expo floor opening. By the third day, the keynotes are done, the expo floor starts to get packed away, and panel discussions and developer talks rule the day. And Apple ruled a lot of those talks.

Bracing for Impact From Apple

A big shift is anticipated this week, as Apple is expected to announce its entrance into the XR market. The writing has been on the wall for a long time.

Rumors have probably been circulating for longer than many readers have even been watching XR. ARPost started speculating in 2018 on a 2019 release. Five years of radio silence later, there were reports that the product would be delayed indefinitely.

The rumor mill is back in operation ahead of an expected launch this week (Apple’s WWDC23 starts today), with many suggesting that Meta’s sudden announcement of the Quest 3 is a harbinger. Whether an Apple entrance is real this time or not, AWE is bracing itself.

Suspicion on Standards

Let’s take a step back and look at a conversation that happened on AWE USA 2023 Day Two, but is very pertinent to the emerging Apple narrative.

The “Building Open Standards for the Metaverse” panel moderated by Moor Insights and Strategy Senior Analyst Anshel Sag brought together XR Safety Initiative (XRSI) founder and CEO Kavya Pearlman, XRSI Advisor Elizabeth Rothman, and Khronos Group President Neil Trevett.

Apple’s tendency to operate outside of standards was discussed. Even prior to their entrance into the market, this has caused problems for XR app developers – Apple devices even have a different way of sensing depth than Android devices. XR glasses tend to come out first or only on Android in part because of Android’s more open ecosystem.

“Apple currently holds so much power that they could say, ‘This is the way we’re going to go,’ and the Metaverse Standards Forum could stand up and say, ‘No,’” said Pearlman, expressing concern over the accessibility of “the next generation of the internet.”

Trevett expressed a different approach, saying that standards should present the best option, not the only option. While standards are more useful the more groups use them, competition is helpful and shows diversity in the industry. And diversity in the industry is what sets Apple apart.

“If Apple does announce something, they’ll do a lot of education … it will progress how people use the tech whether they use open standards or not,” said Trevett. “If you don’t have a competitor on the proprietary end of the spectrum, that’s when you should start to worry because it means that no one cares enough about what you’re doing.”

Hope for New Displays

On Day Three, KGOn Tech LLC’s resident optics expert Karl Guttag presented an early morning developer session on “Optical Versus Passthrough Mixed Reality.” Guttag has been justifiably critical of Meta Quest Pro’s passthrough in particular. Even for optical XR, he expressed skepticism about a screen replacement, which is what the Apple headset is largely rumored to be.

Karl Guttag

“One of our biggest issues in the market is expectations vs. reality,” said Guttag. “What is hard in optical AR is easy in passthrough and vice versa. I see very little overlap in applications … there is also very little overlap in device requirements.”

A New Generation of Interaction

“The Quest 3 has finally been announced, which is great for everyone in the industry,” 3lbXR and 3lb Games CEO Robin Moulder said in her talk “Expand Your Reach: Ditch the Controllers and Jump into Mixed Reality.” “Next week is going to be a whole new level when Apple announces something – hopefully.”

Robin Moulder

Moulder presented the next round of headsets as the first of a generation that will hopefully be user-friendly enough to increase adoption and deployment, bringing more users and creators into the XR ecosystem.

“By the time we have the Apple headset and the new Quest 3, everybody is going to be freaking out about how great hand tracking is and moving into this new world of possibilities,” said Moulder.

More on AI

AI isn’t distracting anyone from XR and Apple isn’t distracting anyone from AI. Apple appearing as a conference theme doesn’t mean that anyone was done talking about AI. If you’re sick of reading about AI, at least read the first section below.

Lucid Realities: A Glimpse Into the Current State of Generative AI

After two full days of people talking about how AI is a magical world generator that’s going to take the task of content creation off of the shoulders of builders, Microsoft Research Engineer Jasmine Roberts set the record straight.

Jasmine Roberts

“We’ve passed through this techno-optimist state into dystopia and neither of those are good,” said Roberts. “When people think that [AI] can replace writers, it’s not really meant to do that. You still need human supervisors.”

AI not being able to do everything that a lot of people think it can isn’t the end of the world. A lot of the things that people want AI to do are already possible through other, less glamorous tools.

“A lot of what people want from generative AI, they can actually get from procedural generation,” said Roberts. “There are some situations where you need bespoke assets so generative AI wouldn’t really cut it.”
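To make the distinction concrete, here is a minimal sketch of what procedural generation looks like in practice: a seeded pseudo-random scatter of environment props. The prop types and parameters are invented for illustration, not drawn from Roberts’ talk; the point is that a single seed deterministically reproduces the same “world” with no AI model involved.

```typescript
// Deterministic, seeded pseudo-random number generator (mulberry32).
// Procedural generation derives endless variation from one seed,
// with no model inference involved.
function mulberry32(seed: number): () => number {
  return () => {
    seed = (seed + 0x6d2b79f5) | 0;
    let t = Math.imul(seed ^ (seed >>> 15), 1 | seed);
    t = (t + Math.imul(t ^ (t >>> 7), 61 | t)) ^ t;
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296;
  };
}

interface PlacedAsset {
  kind: "rock" | "tree" | "bush"; // hypothetical prop types
  x: number;
  z: number;
  scale: number;
}

// Scatter props across a square patch of virtual terrain.
// The same seed always yields the same layout, which makes the
// result reproducible and debuggable in a way generative AI isn't.
function scatterProps(seed: number, count: number, size: number): PlacedAsset[] {
  const rand = mulberry32(seed);
  const kinds: PlacedAsset["kind"][] = ["rock", "tree", "bush"];
  return Array.from({ length: count }, () => ({
    kind: kinds[Math.floor(rand() * kinds.length)],
    x: rand() * size,
    z: rand() * size,
    scale: 0.5 + rand() * 1.5,
  }));
}

console.log(scatterProps(42, 5, 100)); // same 5 props every run
```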

Roberts isn’t against AI – her presentation simply illustrated that it doesn’t work the way some industry outsiders are being led to believe. That isn’t the same as saying that it doesn’t work. In fact, she brought a demo of an upcoming AI-powered Clippy. (You remember Clippy, right?)

Augmented Ecologies

While Roberts focused on the limitations of AI, the “Augmented Ecologies” panel, moderated by AWE co-founder Tish Shute, saw Three Dog Labs founder Sean White, Morpheus XR CTO Anselm Hook, and Croquet founder and CTO David A. Smith discuss what happens when AI becomes the new dominant life form on planet Earth.

From left to right: Tish Shute, Sean White, Anselm Hook, and David Smith

“We’re kind of moving to a probabilistic model, it’s less deterministic, which is much more in line with ecological models,” said White.

The panel presented a scenario in which developers are no longer the ones running the show: AI takes on a life of its own, and that life is more capable than ours.

“In an ecology, we’re not necessarily at the center, we’re part of the system,” said Hook. “We’re not necessarily able to dominate the technologies that are out there anymore.”

This might scare you, but it doesn’t scare Smith, who described a future in which AI becomes humanity’s legacy, able to live in environments that humans never can, like the far reaches of space.

“The metaverse and AI are going to redefine what it means to be human,” said Smith. “Ecosystems are not healthy if they are not evolving.”

“No Longer the Apex”

On the morning of Day Two, the Virtual World Society and the VR/AR Association hosted a very special breakfast for some of the most influential leaders in the immersive technology space. The goal was to discuss the health and future of the XR industry.

The findings will be presented in a report, but some of the concepts were also presented at “Spatial Computing for All” – a fireside chat between Virtual World Society founder Tom Furness and HTC China President Alvin Graylin, moderated by technology consultant Linda Ricci.

The major takeaway was that industry insiders aren’t particularly worried about the next few years. After that, the way we work may start to change, and that may also change how we think about ourselves and value our identities in a changing society.

AWE Is Changing Too

During the show wrap-up, Ori Inbar had some big news: “AWE is leveling up to LA.” This was the fourteenth AWE. Every AWE, except for one year when the entire conference went virtual because of the COVID-19 pandemic, has been in Santa Clara. But the conference has grown so much that it’s time to move.

AWE 2024 in LA

“I think we realized this year that we were kind of busting at the seams,” said Inbar. “We need a lot more space.”

The conference, which will take place June 18-20, will be in Long Beach, with “super, super early bird tickets” available for the next few weeks.

Yes, There’s Still More

Most of the Auggie Awards and the winners of Inbar’s climate challenge were announced during a ceremony on the evening of Day Two. The final three Auggies were awarded during the event wrap-up. We didn’t forget; we just didn’t have room for them in our coverage.

So, there is one final piece of AWE coverage coming, just on the Auggies. Keep an eye out. Spoiler alert: Apple wasn’t nominated in any of the categories.


awe-usa-2023-day-one:-xr,-ai,-metaverse,-and-more

AWE USA 2023 Day One: XR, AI, Metaverse, and More

AWE USA 2023 saw a blossoming industry defending itself from negative press and a perceived rivalry with other emerging technologies. Fortunately, Day One also brought big announcements, great discussions, and a little help from AI itself.

Ori Inbar’s Welcome Address

Historically, AWE has started with an address from founder Ori Inbar. This time, it started with an address from a hologram of Ori Inbar appearing on an ARHT display.

Ori Inbar hologram

The hologram waxed poetic for a few minutes about progress in the industry and XR’s incredible journey. Then the human Ori Inbar appeared and told the audience that everything the hologram said had been written by ChatGPT.

While (the real) Inbar quipped that he uses artificial intelligence to show him how not to talk, he addressed recent media claims that AI is taking attention and funding away from XR. He has a different view.

it’s ON !!!

Ori Inbar just started his opening key note at #AWE2023

Holo-Ori was here thanks to our friends from @arht_tech.@como pic.twitter.com/Do23hjIkST

— AWE (@ARealityEvent) May 31, 2023

“We industry insiders know this is not exactly true … AI is a good thing for XR. AI accelerates XR,” said Inbar. “XR is the interface for AI … our interactions [with AI] will become a lot less about text and prompts and a lot more about spatial context.”

“Metaverse, Shmetaverse” Returns With a Very Special Guest

Inbar has always been bullish on XR – and skeptical of the metaverse.

At the end of his welcome address last year, Inbar praised himself for not saying “the M word” a single time. The year before that, he opened the conference with a joke game show called “Metaverse, Shmetaverse.” Attendees this year were curious to see Inbar share the stage with a special guest: Neal Stephenson.

Neal Stephenson

Stephenson’s 1992 book, Snow Crash, introduced the world to the word “metaverse” – though Stephenson said that he wasn’t the first one to imagine the concept. He also addressed the common concern that the term for shared virtual spaces came from a dystopian novel.

“The metaverse described in Snow Crash was my best guess about what spatial computing as a mass medium might look like,” said Stephenson. “The metaverse itself is neither dystopian nor utopian.”

Stephenson then commented that the last five years or so have seen the emergence of the core technologies necessary to create the metaverse, though it still suffers from a lack of compelling content. That’s something that his company, Lamina1, hopes to address through a blockchain-based system for rewarding creators.

“There have to be experiences in the metaverse that are worth having,” said Stephenson. “For me, there’s a kind of glaring and frustrating lack of support for the people who make those experiences.”

AWE 2023 Keynotes and Follow-Ups

Both Day One and Day Two of AWE start out with blocks of keynotes on the main stage. On Day One, following Inbar’s welcome address and conversation with Stephenson, we heard from Qualcomm and XREAL (formerly Nreal). Both talks kicked off themes that would be taken up in other sessions throughout the day.

Qualcomm

From the main stage, Qualcomm Vice President and General Manager of XR, Hugo Swart, presented “Accelerating the XR Ecosystem: The Future Is Open.” He commented on the challenge of developing AR headsets, but mentioned the half-dozen or so Qualcomm-enabled headsets released in the last year, including the Lenovo ThinkReality VRX announced Tuesday.

Hugo Swart

Swart was joined on stage by OPPO Director of XR Technology Yi Xu, who announced a new Qualcomm-powered MR headset that will become available as a developer edition in the second half of this year.

As exciting as those announcements were, it was a software announcement that really made a stir: a new Snapdragon Spaces tool called “Dual Render Fusion.”

“We have been working very hard to reimagine smartphone XR when used with AR glasses,” said Swart. “The idea is that mobile developers designing apps for 2D [can] expand those apps to world-scale apps without any knowledge of XR.”

Keeping the Conversation Going

Another talk, “XR’s Inflection Point,” presented by Qualcomm Director of Product Management Steve Lukas, provided a deeper dive into Dual Render Fusion. The tool allows an experience to use a mobile phone’s camera and a headworn device’s camera simultaneously. Existing app development tools hadn’t allowed this because (until now) it didn’t make sense.

Steve Lukas

“To increase XR’s adoption curve, we must first flatten its learning curve, and that’s what Qualcomm just did,” said Lukas. “We’re not ready to give up on mobile phones so why don’t we stop talking about how to replace them and start talking about how to leverage them?”
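Dual Render Fusion itself ships as Snapdragon Spaces tooling for game engines, so the snippet below is emphatically not its API. It is just a conceptual sketch, in web terms, of what “dual render” means: one shared scene and one update loop drawn simultaneously through two cameras to two displays. The canvas IDs and camera placements are invented for illustration.

```typescript
import * as THREE from "three";

// One scene holds the shared app state.
const scene = new THREE.Scene();
scene.add(
  new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1), new THREE.MeshNormalMaterial())
);

// The familiar 2D view: what the user sees on the phone screen.
// (Canvas IDs are hypothetical.)
const phoneCanvas = document.getElementById("phone") as HTMLCanvasElement;
const phoneRenderer = new THREE.WebGLRenderer({ canvas: phoneCanvas });
const phoneCamera = new THREE.PerspectiveCamera(60, 9 / 16, 0.1, 100);
phoneCamera.position.set(0, 0, 3);

// The world-scale view, standing in for the glasses' head-tracked camera.
const glassesCanvas = document.getElementById("glasses") as HTMLCanvasElement;
const glassesRenderer = new THREE.WebGLRenderer({ canvas: glassesCanvas });
const glassesCamera = new THREE.PerspectiveCamera(90, 16 / 9, 0.1, 100);
glassesCamera.position.set(2, 1.6, 2);
glassesCamera.lookAt(0, 0, 0);

// One loop drives both outputs; the app logic stays 2D-first.
function frame(): void {
  phoneRenderer.render(scene, phoneCamera);
  glassesRenderer.render(scene, glassesCamera);
  requestAnimationFrame(frame);
}
frame();
```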

A panel discussion, “Creating a New Reality With Snapdragon Today,” moderated by Qualcomm Senior Director of Product Management, XR, Said Bakadir, brought together Xu, Lenovo General Manager of XR and Metaverse Vishal Shah, and DigiLens Vice President of Sales and Marketing Brian Hamilton. They largely addressed the need to rethink AR content and delivery.

From left to right: Vishal Shah, Brian Hamilton, Yi Xu, and Said Bakadir

“When I talk to the developers, they say, ‘Well there’s no hardware.’ When I talk to the hardware guys, they say, ‘There’s no content.’ And we’re kind of stuck in that space,” said Bakadir.

Hamilton and Shah both said, in their own words, that Qualcomm is creating “an all-in-one platform” and “an end-to-end solution” that solves the content/delivery dilemma that Bakadir opened with.

XREAL

In case you blinked and missed it, Nreal is now XREAL. According to a release shared with ARPost, the name change had to do with “disputes regarding the Nreal mark” (probably over its similarity to “Unreal”). But, “the disputes were solved amicably.”

Chi Xu

The only change is the name – the hardware and software are still the hardware and software that we know and love. So, when CEO Chi Xu took the stage to present “Unleashing the Potential of Consumer AR,” he focused on progress.

From one angle, that progress looks like a version of XREAL’s AR operating system for Steam Deck, which Xu said is “coming soon.” From another, it looks like the partnership with Sightful, which recently resulted in “Spacetop” – the world’s first AR laptop.

XREAL also announced Beam, a controller and compute box that connects to XREAL glasses wirelessly or over a wired connection, specifically for streaming media. Beam also enables comfort and usability settings for the virtual screen that aren’t supported by the company’s current console and app integrations. Xu called it “the best TV innovation since TV.”

AI and XR

A number of panels and talks also picked up on Inbar’s theme of AI and XR. And they all (as far as I saw) agreed with Inbar’s assessment that there is no actual competition between the two technologies.

The most in-depth discussion on the topic was “The Intersection of AI and XR,” a panel discussion between XR ethicist Kent Bye, Lamina1 CPO Tony Parisi, and HTC Global VP of Corporate Development Alvin Graylin, moderated by WXR Fund Managing Partner Amy LaMeyer.

From left to right: Amy LaMeyer, Tony Parisi, Alvin Graylin, Kent Bye

“There’s this myth that AI is here so now XR’s dead, but it’s the complete opposite,” said Graylin. Graylin pointed out that most forms of tracking and input as well as approaches to scene understanding are all driven by AI. “AI has been part of XR for a long time.”

While they all agreed that AI is a part of XR, the group disagreed on the extent to which AI could take over content creation.

“A lot of people think AI is the solution to all of their content creation and authoring needs in XR, but that’s not the whole equation,” said Parisi.

Graylin countered that AI will increasingly be able to replace human developers. Bye, in particular, was vocal that we should be reluctant to hand over too much creative power to AI in the first place.

“The differentiating factor is going to be storytelling,” said Bye. “I’m seeing a lot of XR theater that has live actors doing things that AI could never do.”

Web3, WebXR, and the Metaverse

The conversation about the relationship between the metaverse and Web3 continues. With both focused on the ideas of openness and interoperability, WebXR has become a common ground between the two. WebXR is also the most accessible option from a hardware perspective.
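That hardware accessibility is easy to see in code. Here is a minimal sketch of the standard WebXR Device API handshake (assuming WebXR type definitions such as @types/webxr are installed); the same code path runs on any compliant browser, falling back gracefully when no immersive hardware is present.

```typescript
// Minimal WebXR feature check and session request.
// Immersive sessions must be requested from a user gesture,
// so this is wired to a (hypothetical) button.
async function enterXR(): Promise<void> {
  if (!navigator.xr) {
    console.log("WebXR not available; fall back to a flat 3D view.");
    return;
  }

  // Prefer immersive VR, but the same API exposes "immersive-ar"
  // and an "inline" mode that works on a plain phone screen.
  const vrSupported = await navigator.xr.isSessionSupported("immersive-vr");
  const mode: XRSessionMode = vrSupported ? "immersive-vr" : "inline";

  const session = await navigator.xr.requestSession(mode);
  console.log(`Entered ${mode} session`);
  session.addEventListener("end", () => console.log("Session ended"));
}

const button = document.querySelector<HTMLButtonElement>("#enter-xr");
button?.addEventListener("click", () => enterXR().catch(console.error));
```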

“VR headsets will remain a niche tech like game consoles: some people will have them and use them and swear by them and won’t be able to live without them, but not everyone will have one,” Nokia Head of Trends and Innovation Scouting, Leslie Shannon, said in her talk “What Problem Does the Metaverse Solve?”

Leslie Shannon

“The majority of metaverse experiences are happening on mobile phones,” said Shannon. “Presence is more important than immersion.”

Wonderland Engine CEO Jonathan Hale asked “Will WebXR Replace Native XR?” in a session with The Fitness Resort COO Lydia Berry. Berry commented that the availability of WebXR across devices helps developers make their content accessible as well as discoverable.

Lydia Berry and Jonathan Hale

“The adoption challenges around glasses are there. We’re still in the really early adoption phase,” said Berry. “We need as many headsets out there as possible.”

Hale added that WebXR is being taken more seriously as a delivery method by hardware manufacturers that were previously interested mainly in native apps.

“More and more interest is coming from hardware manufacturers every day,” said Hale. “We just announced that we’re working with Qualcomm to bring Wonderland Engine to Snapdragon Spaces.”

Keep Coming Back

AWE Day One was a riot, but there’s a lot more where that came from. Day Two kicks off with keynotes by Magic Leap and Niantic; there are more talks, more panels, and more AI; and the Expo Floor opens up for demos. We’ll see you tomorrow.


strivr-enhances-immersive-learning-with-generative-ai,-equips-vr-training-platform-with-mental-health-and-well-being-experiences

Strivr Enhances Immersive Learning With Generative AI, Equips VR Training Platform With Mental Health and Well-Being Experiences

Strivr, a virtual reality training solutions startup, was founded as a VR training platform for professional sports leagues such as the NBA, NHL, and NFL. Today, Strivr has made its way to the job training scene with an innovative approach to employee training, leveraging generative AI (GenAI) to transform learning experiences.

More Companies Lean Toward Immersive Learning

Today’s business landscape is rapidly evolving. As such, Fortune 500 companies and other businesses in the corporate sector are turning to more innovative employee training and development solutions. To serve the changing demands of top companies, Strivr secured $16 million in funding back in 2018 to expand its VR training platform.

Research shows that learning through VR environments can significantly enhance knowledge retention, making it a groundbreaking development in employee training.

Unlike traditional training methods, a VR training platform immerses employees in lifelike scenarios, providing unparalleled engagement and experiential learning. The technology isn’t a new concept, though. Companies have been incorporating VR into their training solutions for several years; it’s only recently that more industries have begun adopting it rapidly.

The Impact of Generative AI on VR Training Platforms

Walmart, the largest retailer in the world, partnered with Strivr to bring VR to its training facilities. Employees can now practice on virtual sales floors repeatedly until they perfect their skills. In 2019, nearly 1.4 million Walmart associates underwent VR training to prepare for the holiday rush, placing them in a simulated, chaotic Black Friday scenario.

As a result, associates reported a 30% increase in employee satisfaction, 70% higher test scores, and 10 to 15% higher knowledge retention rates. On the strength of those results, Walmart expanded the VR training program to all of its stores nationwide.

Derek Belch, founder and CEO of Strivr, says that the demand for faster development of high-quality, scalable VR experiences that generate impactful results is “at an all-time high.”

VR training platform Strivr

As Strivr’s customers are among the most prominent companies globally, they are directly experiencing the impact of immersive learning on employee engagement, retention, and performance. “They want more, and we’re listening,” said Belch in a press release shared with ARPost.

So, to enhance its VR training platform, Strivr is embracing generative AI to develop storylines, accelerate animation and asset creation, and optimize visual and content-driven features.

GenAI will also aid HR and L&D leaders in critical decision-making by deriving insights from immersive user data.

Strivr’s VR Training Platform Addresses Employee Mental Health

Strivr has partnered with Reulay and Healium to host its first in-headset mental health and well-being applications on the VR training platform. This will allow its customers to incorporate mental health “breaks” into their training curricula and address rising levels of employee burnout, depression, and anxiety.

Belch announced that Strivr has also partnered with one of the world’s leading financial institutions to make meditation activities available in its workplace.

Meditation is indeed helpful for employees: the Journal of the American Medical Association recently published a study showing that meditation can reduce anxiety as effectively as drug therapies. Mindfulness practices have also been shown to increase employee productivity, focus, and collaboration.

How VR Transforms Professional Training

With Strivr’s VR Training platform offering enhanced experiential learning and mental well-being, one might wonder how VR technology will influence employee training moving forward.

Belch describes Strivr’s VR training platform as a “beautifully free space” to practice. Employees can develop or improve their skills in realistic scenarios that simulate actual workplace challenges in a way that typical workshops and classrooms cannot. Moreover, training employees through a VR platform cuts the travel costs associated with conventional training facilities.

VR training platform Strivr

VR training platforms also contribute to a more inclusive and diverse workplace. Employees belonging to minority groups can, for instance, rehearse and tailor their responses in simulated scenarios where a superior or customer is prejudiced against them. Addressing these situations during training helps companies prepare their employees and protect them from these challenges.

What’s Next for VR Training Platforms?

According to Belch, Strivr’s enhanced VR training platform is only the beginning of how VR will continue to impact the employee experience.

So far, VR training platforms have improved employee onboarding, knowledge retention, and performance. They allow employees to practice and acquire critical skills in a safe virtual environment, helping them build confidence and efficiency while training. They also promote diversity and inclusion, thanks to VR’s ability to simulate difficult situations in which employees can rehearse their responses.

And, of course, VR training has rightfully gained recognition for teaching retail workers essential customer service skills. By interacting with virtual customers in a lifelike environment, Walmart’s employees have significantly boosted their skills, and the mega-retailer has rolled out its immersive training solution to all of its nearly 4,700 stores across America.

In 2022, Accenture invested in Strivr and Talespin to revolutionize immersive learning and enterprise VR. This is a good sign of confidence in the industry and its massive potential for growth.

As we keep an eye on the latest developments in VR technology, we can expect more groundbreaking advances in the industry and a growing presence for VR platforms in the employee training realm.
