
AI versus the brain and the race for general intelligence


Intelligence, ±artificial

We already have an example of general intelligence, and it doesn’t look like AI.

There’s no question that AI systems have accomplished some impressive feats, mastering games, writing text, and generating convincing images and video. That’s gotten some people talking about the possibility that we’re on the cusp of AGI, or artificial general intelligence. While some of this is marketing fanfare, enough people in the field are taking the idea seriously that it warrants a closer look.

Many arguments come down to the question of how AGI is defined, which people in the field can’t seem to agree upon. This contributes to estimates of its advent that range from “it’s practically here” to “we’ll never achieve it.” Given that range, it’s impossible to provide any sort of informed perspective on how close we are.

But we do have an existing example of AGI without the “A”—the intelligence provided by the animal brain, particularly the human one. And one thing is clear: The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. That may not be a fatal flaw, or even a flaw at all. It’s entirely possible that there’s more than one way to reach intelligence, depending on how it’s defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.

With all that in mind, let’s look at some of the things the brain does that current AI systems can’t.

Defining AGI might help

Artificial general intelligence hasn’t really been defined. Those who argue that it’s imminent are either vague about what they expect the first AGI systems to be capable of or simply define it as the ability to dramatically exceed human performance at a limited number of tasks. Predictions of AGI’s arrival in the intermediate term tend to focus on AI systems demonstrating specific behaviors that seem human-like. The further one goes out on the timeline, the greater the emphasis on the “G” of AGI and its implication of systems that are far less specialized.

But most of these predictions are coming from people working in companies with a commercial interest in AI. It was notable that none of the researchers we talked to for this article were willing to offer a definition of AGI. They were, however, willing to point out how current systems fall short.

“I think that AGI would be something that is going to be more robust, more stable—not necessarily smarter in general but more coherent in its abilities,” said Ariel Goldstein, a researcher at Hebrew University of Jerusalem. “You’d expect a system that can do X and Y to also be able to do Z and T. Somehow, these systems seem to be more fragmented in a way. To be surprisingly good at one thing and then surprisingly bad at another thing that seems related.”

“I think that’s a big distinction, this idea of generalizability,” echoed neuroscientist Christa Baker of NC State University. “You can learn how to analyze logic in one sphere, but if you come to a new circumstance, it’s not like now you’re an idiot.”

Mariano Schain, a Google engineer who has collaborated with Goldstein, focused on the abilities that underlie this generalizability. He mentioned both long-term and task-specific memory and the ability to deploy skills developed in one task in different contexts. These are limited-to-nonexistent in existing AI systems.

Beyond those specific limits, Baker noted that “there’s long been this very human-centric idea of intelligence that only humans are intelligent.” That’s fallen away within the scientific community as we’ve studied more about animal behavior. But there’s still a bias to privilege human-like behaviors, such as the human-sounding responses generated by large language models.

The fruit flies that Baker studies can integrate multiple types of sensory information, control four sets of limbs, navigate complex environments, satisfy their own energy needs, produce new generations of brains, and more. And they do that all with brains that contain under 150,000 neurons, far fewer than the number of artificial neurons in current large language models.

These capabilities are complicated enough that it’s not entirely clear how the brain enables them. (If we knew how, it might be possible to engineer artificial systems with similar capacities.) But we do know a fair bit about how brains operate, and there are some very obvious ways that they differ from the artificial systems we’ve created so far.

Neurons vs. artificial neurons

Most current AI systems, including all large language models, are based on what are called neural networks. These were intentionally designed to mimic how some areas of the brain operate, with large numbers of artificial neurons taking an input, modifying it, and then passing the modified information on to another layer of artificial neurons. Each of these artificial neurons can pass the information on to multiple instances in the next layer, with different weights applied to each connection. In turn, each of the artificial neurons in the next layer can receive input from multiple sources in the previous one.

After passing through enough layers, the final layer is read and transformed into an output, such as the pixels in an image that correspond to a cat.
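To make the layered arrangement concrete, here’s a minimal sketch in Python with NumPy. This is a toy illustration only: the weights are random rather than trained, and all the names and sizes are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Each artificial neuron sums its weighted inputs, adds a bias,
    # then applies a nonlinearity before passing the result onward.
    return np.maximum(0.0, weights @ inputs + biases)  # ReLU activation

# A tiny network: 4 inputs -> 5 hidden artificial neurons -> 2 outputs.
w1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
w2, b2 = rng.normal(size=(2, 5)), np.zeros(2)

x = np.array([0.2, -0.1, 0.7, 0.4])  # the input signal
hidden = layer(x, w1, b1)            # every hidden neuron sees every input
output = w2 @ hidden + b2            # the final layer is read as the result
print(output.shape)                  # (2,)
```

Each row of a weight matrix is one artificial neuron’s set of connection strengths to the previous layer, which is all the “weights applied to each connection” amount to.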

While that system is modeled on the behavior of some structures within the brain, it’s a very limited approximation. For one, all artificial neurons are functionally equivalent—there’s no specialization. In contrast, real neurons are highly specialized: they use a variety of neurotransmitters and respond to a range of extra-neural signals like hormones. Some specialize in sending inhibitory signals while others activate the neurons they interact with. Different physical structures allow them to make different numbers and types of connections.

In addition, rather than simply forwarding a single value to the next layer, real neurons communicate through an analog series of activity spikes, sending trains of pulses that vary in timing and intensity. This allows for a degree of non-deterministic noise in communications.

Finally, while organized layers are a feature of a few structures in brains, they’re far from the rule. “What we found is it’s—at least in the fly—much more interconnected,” Baker told Ars. “You can’t really identify this strictly hierarchical network.”

With near-complete connection maps of the fly brain becoming available, she told Ars that researchers are “finding lateral connections or feedback projections, or what we call recurrent loops, where we’ve got neurons that are making a little circle and connectivity patterns. I think those things are probably going to be a lot more widespread than we currently appreciate.”

While we’re only beginning to understand the functional consequences of all this complexity, it’s safe to say that it allows networks composed of actual neurons far more flexibility in how they process information—a flexibility that may underlie how these neurons get re-deployed in a way that these researchers identified as crucial for some form of generalized intelligence.

But the differences between neural networks and the real-world brains they were modeled on go well beyond the functional differences we’ve talked about so far. They extend to significant differences in how these functional units are organized.

The brain isn’t monolithic

The neural networks we’ve generated so far are largely specialized systems meant to handle a single task. Even the most complicated tasks, like the prediction of protein structures, have typically relied on the interaction of only two or three specialized systems. In contrast, the typical brain has a lot of functional units. Some of these operate by sequentially processing a single set of inputs in something resembling a pipeline. But many others can operate in parallel, in some cases without any input activity going on elsewhere in the brain.

To give a sense of what this looks like, let’s think about what’s going on as you read this article. Doing so requires systems that handle motor control, which keep your head and eyes focused on the screen. Part of this system operates via feedback from the neurons that are processing the read material, causing small eye movements that help your eyes move across individual sentences and between lines.

Separately, there’s part of your brain devoted to telling the visual system what not to pay attention to, like the icon showing an ever-growing number of unread emails. Those of us who can read a webpage without even noticing the ads on it presumably have a very well-developed system in place for ignoring things. Reading this article may also mean you’re engaging the systems that handle other senses, getting you to ignore things like the noise of your heating system coming on while remaining alert for things that might signify threats, like an unexplained sound in the next room.

The input generated by the visual system then needs to be processed, from individual character recognition up to the identification of words and sentences, processes that involve systems in areas of the brain involved in both visual processing and language. Again, this is an iterative process, where building meaning from a sentence may require many eye movements to scan back and forth across a sentence, improving reading comprehension—and requiring many of these systems to communicate among themselves.

As meaning gets extracted from a sentence, other parts of the brain integrate it with information obtained in earlier sentences, which tends to engage yet another area of the brain, one that handles a short-term memory system called working memory. Meanwhile, other systems will be searching long-term memory, finding related material that can help the brain place the new information within the context of what it already knows. Still other specialized brain areas are checking for things like whether there’s any emotional content to the material you’re reading.

All of these different areas are engaged without you being consciously aware of the need for them.

In contrast, something like ChatGPT, despite having a lot of artificial neurons, is monolithic: No specialized structures are allocated before training starts. That’s in sharp contrast to a brain. “The brain does not start out as a bag of neurons and then as a baby it needs to make sense of the world and then determine what connections to make,” Baker noted. “There are already a lot of constraints and specifics that are already set up.”

Even in cases where it’s not possible to see any physical distinction between cells specialized for different functions, Baker noted that we can often find differences in what genes are active.

In contrast, pre-planned modularity is relatively new to the AI world. In software development, “this concept of modularity is well established, so we have the whole methodology around it, how to manage it,” Schain said. “It’s really an aspect that is important for maybe achieving AI systems that can then operate similarly to the human brain.” There are a few cases where developers have enforced modularity on systems, but Goldstein said these systems need to be trained with all the modules in place to see any gain in performance.

None of this is saying that a modular system can’t arise within a neural network as a result of its training. But so far, we have very limited evidence that they do. And since we mostly deploy each system for a very limited number of tasks, there’s no reason to think modularity will be valuable.

There is some reason to believe that this modularity is key to the brain’s incredible flexibility. The region that recognizes emotion-evoking content in written text can also recognize it in music and images, for example. But the evidence here is mixed. There are some clear instances where a single brain region handles related tasks, but that’s not consistently the case; Baker noted that, “When you’re talking humans, there are parts of the brain that are dedicated to understanding speech, and there are different areas that are involved in producing speech.”

This sort of re-use would also provide an advantage in terms of learning since behaviors developed in one context could potentially be deployed in others. But as we’ll see, the differences between brains and AI when it comes to learning are far more comprehensive than that.

The brain is constantly training

Current AIs generally have two states: training and deployment. Training is where the AI learns its behavior; deployment is where that behavior is put to use. This isn’t absolute, as the behavior can be tweaked in response to things learned during deployment, like finding out it recommends eating a rock daily. But for the most part, once the weights among the connections of a neural network are determined through training, they’re retained.

That may be starting to change a bit, Schain said. “There is now maybe a shift in similarity where AI systems are using more and more what they call the test time compute, where at inference time you do much more than before, kind of a parallel to how the human brain operates,” he told Ars. But it’s still the case that neural networks are essentially useless without an extended training period.

In contrast, a brain doesn’t have distinct learning and active states; it’s constantly in both modes. In many cases, the brain learns while doing. Baker described that in terms of learning to take jumpshots: “Once you have made your movement, the ball has left your hand, it’s going to land somewhere. So that visual signal—that comparison of where it landed versus where you wanted it to go—is what we call an error signal. That’s detected by the cerebellum, and its goal is to minimize that error signal. So the next time you do it, the brain is trying to compensate for what you did last time.”
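The error-minimizing loop Baker describes can be caricatured in a few lines of Python. This is a deliberately crude sketch (the cerebellum is not doing scalar arithmetic, and all the numbers are invented): each attempt compares where the ball landed to the target, and that difference—the error signal—adjusts the next attempt.

```python
target = 10.0        # where we want the ball to land
aim = 6.0            # the current motor command
learning_rate = 0.5  # how strongly each error adjusts the next attempt

for attempt in range(8):
    landed = aim                   # simplification: the ball lands where we aim
    error = target - landed        # the error signal the cerebellum detects
    aim += learning_rate * error   # compensate for what we did last time

print(round(aim, 3))  # converges toward 10.0
```

Each pass through the loop is one shot; minimizing the error signal while doing the task is the on-the-fly learning that current AI systems, with their frozen post-training weights, mostly lack.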

It makes for very different learning curves. An AI is typically not very useful until it has had a substantial amount of training. In contrast, a human can often pick up basic competence in a very short amount of time (and without massive energy use). “Even if you’re put into a situation where you’ve never been before, you can still figure it out,” Baker said. “If you see a new object, you don’t have to be trained on that a thousand times to know how to use it. A lot of the time, [if] you see it one time, you can make predictions.”

As a result, while an AI system with sufficient training may ultimately outperform the human, the human will typically reach a high level of performance faster. And unlike an AI, a human’s performance doesn’t remain static. Incremental improvements and innovative approaches are both still possible. This also allows humans to adjust to changed circumstances more readily. An AI trained on the body of written material up until 2020 might struggle to comprehend teen-speak in 2030; humans could at least potentially adjust to the shifts in language. (Though maybe an AI trained to respond to confusing phrasing with “get off my lawn” would be indistinguishable.)

Finally, since the brain is a flexible learning device, the lessons learned from one skill can be applied to related skills. So the ability to recognize tones and read sheet music can help with the mastery of multiple musical instruments. Chemistry and cooking share overlapping skillsets. And when it comes to schooling, learning how to learn can be used to master a wide range of topics.

In contrast, it’s essentially impossible to use an AI model trained on one topic for much else. The biggest exceptions are large language models, which seem to be able to solve problems on a wide variety of topics if they’re presented as text. But here, there’s still a dependence on sufficient examples of similar problems appearing in the body of text the system was trained on. To give an example, something like ChatGPT can seem to be able to solve math problems, but it’s best at solving things that were discussed in its training materials; giving it something new will generally cause it to stumble.

Déjà vu

For Schain, however, the biggest difference between AI and biology is in terms of memory. For many AIs, “memory” is indistinguishable from the computational resources that allow them to perform a task, and it was formed during training. For the large language models, it includes both the weights of connections learned then and a narrow “context window” that encompasses any recent exchanges with a single user. In contrast, biological systems have a lifetime of memories to rely on.

“For AI, it’s very basic: It’s like the memory is in the weights [of connections] or in the context. But with a human brain, it’s a much more sophisticated mechanism, still to be uncovered. It’s more distributed. There is the short term and long term, and it has to do a lot with different timescales. Memory for the last second, a minute and a day or a year or years, and they all may be relevant.”
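The “context” half of that contrast is easy to illustrate. In this toy Python sketch (the window size and tokens are invented for the example), a model call’s only per-conversation memory is a fixed-size buffer; anything older simply falls out of it.

```python
from collections import deque

CONTEXT_LIMIT = 4  # tokens the model can "remember" in a conversation

context = deque(maxlen=CONTEXT_LIMIT)
for token in ["the", "cat", "sat", "on", "the", "mat"]:
    context.append(token)  # new tokens silently evict the oldest ones

# Only the most recent tokens survive -- nothing is consolidated into
# any longer-term store the way a brain does across its many timescales.
print(list(context))  # ['sat', 'on', 'the', 'mat']
```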

This lifetime of memories can be key to making intelligence general. It helps us recognize the possibilities and limits of drawing analogies between different circumstances or applying things learned in one context versus another. It provides us with insights that let us solve problems that we’ve never confronted before. And, of course, it also ensures that the horrible bit of pop music you were exposed to in your teens remains an earworm well into your 80s.

The differences between how brains and AIs handle memory, however, are very hard to describe. AIs don’t really have a distinct memory, while how the brain uses memory to handle any task more sophisticated than navigating a maze is so poorly understood that it’s difficult to discuss at all. All we can really say is that there are clear differences there.

Facing limits

It’s difficult to think about AI without recognizing the enormous energy and computational resources involved in training one. And in this case, it’s potentially relevant. Brains have evolved under enormous energy constraints and continue to operate using well under the energy that a daily diet can provide. That has forced biology to figure out ways to optimize its resources and get the most out of the resources it does commit to.

In contrast, the story of recent developments in AI is largely one of throwing more resources at them. And plans for the future seem to (so far at least) involve more of this, including larger training data sets and ever more artificial neurons and connections among them. All of this comes at a time when the best current AIs are already using three orders of magnitude more neurons than we’d find in a fly’s brain and have nowhere near the fly’s general capabilities.

It remains possible that there is more than one route to those general capabilities and that some offshoot of today’s AI systems will eventually find a different route. But if it turns out that we have to bring our computerized systems closer to biology to get there, we’ll run into a serious roadblock: We don’t fully understand the biology yet.

“I guess I am not optimistic that any kind of artificial neural network will ever be able to achieve the same plasticity, the same generalizability, the same flexibility that a human brain has,” Baker said. “That’s just because we don’t even know how it gets it; we don’t know how that arises. So how do you build that into a system?”


John is Ars Technica’s science editor. He has a Bachelor of Arts in Biochemistry from Columbia University, and a Ph.D. in Molecular and Cell Biology from the University of California, Berkeley. When physically separated from his keyboard, he tends to seek out a bicycle, or a scenic location for communing with his hiking boots.


Talespin Launches AI Lab for Product and Implementation Development

Artificial intelligence has been a part of Talespin since day one, but the company has been leaning more heavily into the technology in recent years, including through internal AI-assisted workflows and a public-facing AI development toolkit. Now, Talespin is announcing an AI lab “dedicated to responsible artificial intelligence (AI) innovation in the immersive learning space.”

“Immersive Learning Through the Application of AI”

AI isn’t the end of work – but it will change the kinds of work that we do. That’s the outlook that a number of experts take, including the team behind Talespin. They use AI to create virtual humans in simulations for teaching soft skills. In other words, they use AI to make humans more human – because those are the strengths that won’t be automated any time soon.


“What should we be doing to make ourselves more valuable as these things shift?” Talespin co-founder and CEO Kyle Jackson recently told ARPost. “It’s really about metacognition.”

Talespin has been using AI to create experiences internally since 2015, ramping up to the use of generative AI for experience creation in 2019. They recently made those AI creation tools publicly available in the CoPilot Designer 3.0 release earlier this year.

Now, a new division of the company – the Talespin AI Lab – is looking to accelerate immersive learning through AI by further developing avenues for continued platform innovation as well as offering consulting services for the use of generative AI. Within Talespin, the lab consists of over 30 team members and department heads who will work with outside developers.

“The launch of Talespin AI Lab will ensure we’re bringing our customers and the industry at large the most innovative and impactful AI solutions when it comes to immersive learning,” Jackson said in a release shared with ARPost.

Platform Innovation

CoPilot Designer 3.0 is hardly outdated, but interactive samples of Talespin’s upcoming AI-powered APIs for realistic characters and assisted content writing can already be requested through the lab, with even more generative AI tools coming to the platform this fall.

In interviews and in prepared material, Talespin representatives have stated that working with AI has more than halved the production time for immersive training experiences over the past four years. They expect that change to continue at an even more rapid pace going forward.

“Not long ago creating an XR learning module took 5 months. With the use of generative AI tools, that same content will be created in less than 30 minutes by the end of this year,” Jackson wrote in a blog post. “Delivering the most powerful learning modality with this type of speed is a development that allows organizations to combat the largest workforce shift in history.”

While the team certainly deserves credit for that, the company credits working with clients, customers, and partners as having accelerated their learnings with the technology.

Generative AI Services

That brings in the other major job of the AI Lab – generative AI consulting services. Through these services, the AI Lab will share Talespin’s learnings on using generative AI to achieve learning outcomes.

“These services include facilitating workshops during which Talespin walks clients through processes and lessons learned through research and partnership with the world’s leading learning companies,” according to an email to ARPost.


Generative AI consulting services might sound redundant but understanding that generative AI exists and knowing how to use it to solve a problem are different things. Even when Talespin’s clients have access to AI tools, they work with the team at Talespin to get the most out of those tools.

“Our place flipped from needing to know the answer to needing to know the question,” Jackson said in summing up the continued need for human experts in the AI world.

Building a More Intelligent Future in the AI Lab

AI is in a position similar to the one XR occupied in recent months, and blockchain shortly before that. Its potential is so exciting that we can forget its full realization is far from imminent.

As exciting as Talespin’s announcements are, Jackson’s blog post foresees adaptive learning and whole virtual worlds dreamed up in an instant. While these ambitions remain things of the future, initiatives like the AI Lab are bringing them ever closer.


Inspirit Launches Affordable XR STEM Education Platform for Middle and High School Students

XR STEM education has taken a leap forward with the official launch of Inspirit’s Innovative Learning Hub. The digital platform provides educators with affordable access to a premium library of virtual reality and augmented reality experiences designed specifically for middle and high school students. Focusing on enhancing learning outcomes and increasing engagement, Inspirit is revolutionizing the way STEM subjects are taught worldwide.

Breaking Down Barriers With Immersive Learning

Inspirit is a research-driven EdTech startup that pioneers immersive XR experiences for STEM education. The company’s Innovative Learning Hub stands as the premier choice for immersive XR STEM education, encompassing diverse subjects such as mathematics, physics, chemistry, biology, and vocational training.

Through XR experiences, Inspirit’s platform provides students with experiential learning opportunities. By engaging in simulations and exploring 3D models, students gain a deeper understanding of complex STEM concepts.

The advantages of VR education have long been embraced by both teachers and students, who have found immense value in its experiential approach. But with Inspirit’s XR expertise and easy-to-use technology, the platform bridges the gap between theoretical concepts and real-world applications, providing students with a deeper understanding and fostering engagement.

Renowned for its commitment to rigorous research, Inspirit collaborates with Stanford University researchers to unlock the full potential of XR learning. The result is a unified platform that seamlessly integrates into schools, improving learning outcomes and providing teachers with an intuitive system to embed into their curriculum.

Experts in the field, like Jeremy Bailenson, founding director of the Stanford Virtual Human Interaction Lab and professor of education, recognize the impact of Inspirit’s approach, emphasizing the importance of teacher professional development and curriculum alignment for successful integration and long-term usage in the classroom.


“Inspirit is unique in that it is led by a VR pioneer who puts ‘education first’, with a huge amount of experience in the world of STEM,” said Bailenson, in a press release shared with ARPost.

Unparalleled Access to Immersive XR Content

The Innovative Learning Hub boasts a comprehensive library of age-appropriate XR experiences that align with educational standards. From engaging simulations to interactive lessons, students have the opportunity to explore and study complex concepts, making learning tangible and enjoyable. This cutting-edge content ensures that students receive the highest-quality educational experiences.

Cross-Platform Compatibility for Seamless Learning

Flexibility is a key advantage of Inspirit’s Innovative Learning Hub. Students can access the library of XR content from various devices, including laptops, Chromebooks, and most VR headsets designed for educational use.


This compatibility maximizes schools’ existing hardware investments while expanding learning capabilities. By eliminating the need for costly subscriptions and one-off purchases, Inspirit promotes inclusivity and accessibility, allowing all students to benefit from a comprehensive STEM curriculum.

XR STEM Education: Inspiring Students and Shaping Futures

As a firm believer in the transformative power of immersive technology, Aditya Vishwanath, co-founder and CEO of Inspirit, actively champions its potential for revolutionizing XR STEM education.

The Innovative Learning Hub serves as a platform that grants middle and high school students the opportunity to engage with exceptional XR content. “Our research-based methodology ensures all middle and high school students have an opportunity to access top-notch XR content that enhances their learning experience, prepares them for the future, and inspires them to pursue their dreams,” said Vishwanath.


Strivr Enhances Immersive Learning With Generative AI, Equips VR Training Platform With Mental Health and Well-Being Experiences

Strivr, a virtual reality training solutions startup, was founded as a VR training platform for professional sports leagues such as the NBA, NHL, and NFL. Today, Strivr has made its way to the job training scene with an innovative approach to employee training, leveraging generative AI (GenAI) to transform learning experiences.

More Companies Lean Toward Immersive Learning

Today’s business landscape is rapidly evolving. As such, Fortune 500 companies and other businesses in the corporate sector are starting to turn to more innovative employee training and development solutions. To serve the changing demands of top companies, Strivr secured $16 million in funding back in 2018 to expand its VR training platform.

Research shows that learning through VR environments can significantly enhance knowledge retention, making it a groundbreaking development in employee training.

Unlike traditional training methods, a VR training platform immerses employees in lifelike scenarios, providing unparalleled engagement and experiential learning. However, this technology isn’t a new concept at all. Companies have been incorporating VR into their training solutions for several years, but we’ve only recently seen more industries adopting this technology rapidly.

The Impact of Generative AI on VR Training Platforms

Walmart, the largest retailer in the world, partnered with Strivr to bring VR to their training facilities. Employees can now practice on virtual sales floors repeatedly until they perfect their skills. In 2019, nearly 1.4 million Walmart associates underwent VR training to prepare for the holiday rush, placing them in a simulated, chaotic Black Friday scenario.

As a result, associates reported a 30% increase in employee satisfaction, 70% higher test scores, and 10 to 15% higher knowledge retention rates. Because of the VR training’s success, Walmart expanded the VR training program to all their stores nationwide.

Derek Belch, founder and CEO at Strivr, states that the demand for the faster development of high-quality and scalable VR experiences that generate impactful results is “at an all-time high.”


As Strivr’s customers are among the most prominent companies globally, they are directly experiencing the impact of immersive learning on employee engagement, retention, and performance. “They want more, and we’re listening,” said Belch in a press release shared with ARPost.

So, to enhance its VR training platform, Strivr embraces generative AI to develop storylines, boost animation and asset creation, and optimize visual and content-driven features.

GenAI will also aid HR and L&D leaders in critical decision-making by deriving insights from immersive user data.

Strivr’s VR Training Platform Addresses Employee Mental Health

Strivr has partnered with Reulay and Healium to host its first in-headset mental health and well-being applications on the VR training platform. This will allow its customers to incorporate mental health “breaks” into their training curricula and address rising levels of employee burnout, depression, and anxiety.

Belch has announced that Strivr also partnered with one of the world’s leading financial institutions to make meditation activities available in their workplace.

Meditation is indeed helpful for employees; the Journal of the American Medical Association recently published a study showing that meditation can reduce anxiety as effectively as drug therapies. Mindfulness practices more broadly have been shown to increase employee productivity, focus, and collaboration.

How VR Transforms Professional Training

With Strivr’s VR Training platform offering enhanced experiential learning and mental well-being, one might wonder how VR technology will influence employee training moving forward.

Belch describes Strivr’s VR training platform as a “beautifully free space” to practice. Employees can develop or improve their skills in a realistic scenario that simulates actual workplace challenges in a way that typical workshops and classrooms cannot. Moreover, training employees through a VR platform cuts the travel costs associated with conventional training facilities.


VR training platforms also contribute to a more inclusive and diverse workplace. Employees belonging to minority groups can, for instance, rehearse and refine their responses in simulated scenarios where a superior or customer is prejudiced toward them. Addressing these situations during training helps companies prepare employees for such challenges before they encounter them on the job.

What’s Next for VR Training Platforms?

According to Belch, Strivr’s enhanced VR training platform is only the beginning of how VR will continue to impact the employee experience.

So far, VR training platforms have been improving employee onboarding, knowledge retention, and performance. They allow employees to practice and acquire critical skills in a safe, virtual environment, helping them gain more confidence and efficiency while training. They also promote diversity and inclusion, thanks to VR’s ability to simulate scenarios in which employees can rehearse their responses to difficult situations.

And, of course, VR training has rightfully gained recognition for helping teach retail workers essential customer service skills. By interacting with virtual customers in a lifelike environment, Walmart’s employees have significantly sharpened their skills, and the mega-retailer has rolled out its immersive training solution to all of its nearly 4,700 US stores.

In 2022, Accenture invested in Strivr and Talespin to revolutionize immersive learning and enterprise VR. This is a good sign of confidence in the industry and its massive potential for growth.

As we keep an eye on the latest scoop about VR technology, we can expect more groundbreaking developments in the industry and for VR platforms to increase their presence in the employee training realm.



Talespin and Pearson Usher in the Future of Work With Ambitious Storyworld

Talespin is known for using VR in enterprise education – particularly for developing soft skills. Pearson, “the world’s leading learning company,” identified a need – specifically, helping business leaders understand the emerging future of work. Together, the two companies created an elaborate “storyworld” guiding learners through over 30 interactive education modules.

To learn more about “Where’d Everybody Go? The Business Leader’s Guide to the Decentralized Workforce,” we talked with Talespin CEO Kyle Jackson.

The World is Changing

The decentralized workforce is one of those trends that has, to a degree, always been there. With improving connectivity, ever more portable hardware, and a growing number of “knowledge workers,” it has been building for a while now. The pandemic accelerated it as businesses that had remained centralized suddenly saw their workforces distributed.

Many workers like the opportunity to work largely when and where they choose. Developments in culture and technology are making this both more appealing and more practical – for example, through new approaches to financial technology that encourage and facilitate independence, a sort of technologically driven take on rugged individualism.

Some companies have leaned into this massive shift, which can reduce overhead and even boost productivity and morale. Other business leaders, however, have struggled to embrace an idea that is becoming increasingly difficult to avoid.

“What we’ve broadly seen in the XR space is lots of single-module learning journeys,” said Jackson. “People just couldn’t do that with this topic.”

Where’d Everybody Go?

To address these challenges, Pearson – with AI analytics company Faethm, which Pearson acquired in 2021 – put together a list of “future human capabilities” that would be required to navigate this new direction in work. Working with Talespin helped to determine the direction of the project early on.

“We looked at that list and overlaid this concept of just how fast work is changing,” said Jackson. “Everybody is leaving jobs and no one can hire anybody – so where did everybody go?”

The experience currently consists of over 30 modules in four thematic tracks:

  • Applying Web3 to Business Strategy and Operations
  • Management and Upskilling
  • Equity and Values of the Modern Workforce
  • Practical Thinking

There is also an introductory track, which helps learners choose the content that they’re going to work through. The whole experience might take a learner around seven hours to complete, but they don’t need to do it all at once. They don’t even need to do all of it.

“In that intro track you get a kind of choose-your-own-adventure overview,” said Jackson. “If you want to have your leadership team take just one of the tracks, that’s perfectly fine.”


The “choose-your-own-adventure” aspect comes in through the complex “storyworld” through which the content is delivered. Learners are essentially playing an interactive roleplaying game that helps them practice the topics of each track.

“Learners take on the protagonist’s role of a city commissioner,” reads a release shared with ARPost. “The learner must help local startups and enterprises navigate challenges that real-world businesses face today, like leading hybrid workforces, exploring the adoption of new technology, and instilling equitable workplace practices.”

The experience drew from the expertise and insights of both Pearson and Talespin, who worked closely to create the tracks and modules.

“It’s been very collaborative. Both teams have been in the trenches as a single team,” said Jackson. “We’re definitely more than just the platform in this case where in other cases we’re just the platform and the company is on their own.”

Creating the Experience

The level of involvement from Pearson was no doubt partially enabled by Talespin’s user-friendly creation tools. These tools also made possible the remarkable speed with which the ambitious project was realized.

“The idea formed in the middle of last year. Because we built a no-code platform, we really accelerated the product pipeline,” said Jackson. “Our North Star was how do you get the ability to create content into the hands of people who have the knowledge. … The no-code platform was built in service of that but we decided that we had to eat our own dog food.”

Jackson said that for the back-end team, who were masters of their previous toolset, using the no-code version was initially frustrating. However, the platform played a large role in launching the experience, which has become a model for future long-form content from Talespin.

“This is the first of several of these that we have coming,” said Jackson. “Even though it’s a new concept to do a storyworld for an immersive learning experience, we’ve had a lot of interest.”

Demystifying Decentralization

Thanks to Talespin, virtual reality – one of the technologies playing a role in the decentralization of work – is helping companies navigate the future of work. This is a big moment for work as we know it, but it’s also a big deal for Talespin, who may have once again revolutionized immersive storytelling as an enterprise education tool.



Immersive Inspiration: Why Extended Reality Learning Holds Multi-Sector Potential

The vast potential of extended reality cannot be overstated. Used as something of an umbrella term encompassing “all real-and-virtual combined environments and human-machine interactions,” XR has become a buzzword closely associated with other popular terms like virtual reality, augmented reality, spatial computing, ubiquitous computing, and the metaverse – and deep within this litany of jargon lies the next frontier for digital learning.

Although the edtech sector has grown significantly since the emergence of the COVID-19 pandemic, it’s extended reality that holds the key to unprecedented levels of immersion.

Chart: Extended Reality (XR) Market – Growth Rate by Region, 2022-2027 (Mordor Intelligence)

Furthermore, Mordor Intelligence data suggests that the XR market is growing globally, and experiencing particularly high levels of growth in Asia and Oceania. With both Europe and North America also experiencing notable XR growth, it’s likely that XR learning platforms and initiatives will gather momentum at a significant rate over the coming years.

With this in mind, let’s take a deeper look at why extended reality holds such vast potential for the future of learning across the world of education and many other sectors.

Unprecedented Immersion

When it comes to education, the challenge of delivering an immersive learning experience to all students and pupils can be a profoundly difficult one.

According to a Udemy survey, 74% of Millennial and Gen Z respondents said they become easily distracted in the workplace. This means that educators must find new ways to keep modern students engaged for as long as possible.

Through embracing extended reality, we’re already seeing more immersive experiences delivered to students, and platforms like GigXR can help users to engage in real-time with digitally rendered content.

Such platforms are excellent for learning via accurately rendered 3D graphics for topics like human anatomy and medicine–carrying its functionality beyond classrooms and into medical training for industry professionals.

Although embracing XR can seem like a daunting prospect, its potential applications within the world of learning are vast, including:

  • Refreshing the range of learning techniques available to students in order to deliver foundational learning;
  • Delivering more customized and personalized learning experiences for students exploring complex topics;
  • Better defining competencies and assessment criteria for student experiences;
  • Offering data that can be utilized to deliver more focused interactive lessons for students that can incorporate better collaboration as well as engagement.

While this only goes some way toward showing XR’s possibilities, these applications also have the power to fundamentally change education over the course of the decade. As Web3 and the metaverse continue to redefine how far reality technology can evolve, the prospective applications for the future of learning appear endless.

Inspiring Curiosity

Crucially, a recent survey conducted by the XR Association in collaboration with the International Society for Technology in Education (ISTE) found that many current educators are optimistic about the prospect of a future built on extended reality learning experiences.

Of the 1,400 high school teachers surveyed, 82% said they believe the quality of AR/VR learning activities has improved in recent years, with 70% expressing hope that XR tools will become more commonplace in schools moving forward. In total, 94% of respondents highlighted the importance of aligning XR-driven curricula with academic standards.

The study also found that 77% of those surveyed believed that XR technology “inspires curiosity,” and that the tools can help to address issues in maintaining student motivation and well-being which have been impacted by the COVID-19 pandemic.

“To get a good sense of XR’s potential in schools, you have to ask the teachers and staff who will be administering this technology. The survey’s results suggest that VR, AR and MR technology is well positioned to become an essential teaching tool in school classrooms across the country,” explained Stephanie Montgomery, VP of Research at the XR Association.

Extending XR Into the Workplace

Beyond the traditional education sector, XR-based learning can also pay dividends when it comes to workplace training and recruitment.

The potential of VR onboarding is vast across a number of industries, and it can be an essential tool when it comes to upskilling and combatting turnover challenges among existing workforces.

Through the potential of extended reality, trainees and candidates alike can collaborate with human resource departments to undertake virtual interviews–which can provide real-time metrics and behavioral analysis for more accurate and unbiased assessments of competencies.

By combining XR technology with artificial intelligence, companies can actively spot knowledge gaps among existing employees and automatically enroll them in tailored courses to enhance their skill sets.
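As a purely illustrative sketch of the idea (the role names, skill levels, and course titles below are invented for this example, not drawn from any vendor’s product), the automated skills-gap check described above might amount to comparing an employee’s recorded proficiencies against a role profile and queueing a course for each shortfall:

```python
# Illustrative only: hypothetical data and names, not any real training platform's API.
ROLE_REQUIREMENTS = {
    # role -> minimum proficiency level required per skill
    "cashier": {"customer_service": 3, "pos_system": 2, "conflict_resolution": 2},
}

COURSE_CATALOG = {
    # skill -> VR course that trains it
    "customer_service": "VR: Handling Difficult Customers",
    "pos_system": "VR: Point-of-Sale Walkthrough",
    "conflict_resolution": "VR: De-escalation Practice",
}

def find_gaps(role, skills):
    """Return the skills where the employee falls short of the role profile."""
    required = ROLE_REQUIREMENTS[role]
    return {s: lvl for s, lvl in required.items() if skills.get(s, 0) < lvl}

def enroll(role, skills):
    """Map each detected gap to a course, mimicking automatic enrollment."""
    return sorted(COURSE_CATALOG[s] for s in find_gaps(role, skills))

# An employee strong in customer service but weak on the register:
courses = enroll("cashier", {"customer_service": 3, "pos_system": 1})
# They are short on pos_system and conflict_resolution, so two courses are queued.
```

In a real system the skill levels would presumably come from assessment data captured in the headset, but the core logic is just this kind of profile comparison.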

Extended reality can also help in a number of practical training scenarios. In practice, this is best illustrated within the healthcare industry, where The Johns Hopkins School of Nursing has become one of many providers to implement comprehensive VR training programs ranging from doctoral to prelicensure nursing.

Delivering experiences via Meta Quest headsets and an Alienware computer, Johns Hopkins has managed to deliver multiplayer VR learning experiences that can render practice scenarios capable of accommodating up to 100 learners.

“We make decisions based on what’s going on — time-critical decisions,” said Kristen Brown, Assistant Professor at the Johns Hopkins School of Nursing and the Simulation Strategic Projects Lead at the Johns Hopkins Medicine Simulation Center. “So one of the important components was that there was some sort of AI that’s really adapting to what we’re doing.”

The beauty of extended reality in training is that it provides a platform for learners to build their competencies in high-risk or highly sensitive areas without the real-world consequences of error.

In surgery scenarios, for instance, XR experiences can place students into a virtual operating theater with a 3D subject to deliver a true-to-life simulation of an operation. Similar experiences have been continually growing in quality within industries like aviation.

Achieving Immersive Learning Within the Decade

The rapid growth of the extended reality market means that we’re likely to see comprehensive learning technologies become commonplace sooner rather than later. This will undoubtedly delight the 70% of teachers in the aforementioned XR Association survey, but it has the potential to resonate across multiple sectors.

From providing more immersive and comprehensive learning to students, to helping employees to gain a better quality of work experience during their onboarding and training processes, the arrival of XR learning can bring profound improvements to countless lives.

Better onboarding programs can help to improve job satisfaction and to lower turnover rates, while competencies will improve immeasurably as more impactful learning experiences emerge. With this in mind, extended reality is well placed to improve the lives of learners of all ages, and across a number of industries.

Guest Post


About the Guest Author(s)

Dmytro Spilka

Dmytro is a tech and finance writer based in London. Founder of Solvid and Pridicto. His work has been published in Nasdaq, Kiplinger, VentureBeat, Financial Express, and The Diplomat.



XRA Survey: Teachers Pin Hopes on XR for Better Classroom Engagement

Incorporating XR—the umbrella term for virtual, augmented, and mixed reality—in classroom education can make learning more fun. It can also motivate students to take their studies more seriously. A recent survey by the XR Association (XRA) and the International Society for Technology in Education (ISTE) reached this conclusion based on a poll of over 1,400 high school teachers across all 50 US states. Let’s look at the survey results.

Optimism High for XR’s Classroom Use

Foremost among the highlights of the nationwide poll was the finding that 77% of educators believe in the power of extended reality to ignite curiosity and engagement in class. This is especially important given that student motivation and morale reportedly dropped in the 2020-2021 school year.

As Sean Wybrant, a computer science teacher at Colorado Springs’ William J. Palmer High School, put it: “Imagine how much better a student will understand what happens in Othello if they could actually step into the play and see it. Imagine how much better we could tell historical narratives if we could put people in recreations of famous situations based on documentation of those time periods.”

Secondly, XR doesn’t only make students eager to learn. Seventy-seven percent of teachers also see its potential in spurring interaction and building empathy among classmates. XRA says in its report that creating immersive worlds allows students to exchange ideas and understand each other in new ways.

Thirdly, 67% of respondents agree with XRA’s advocacy to incorporate extended reality technology into the curricula. Educators teaching the following subjects believe that course-specific XR experiences would be beneficial for students:

  • Earth sciences (94%)
  • Physics and space science (91%)
  • Math (89%)
  • English language (86%)
  • World languages (87%)
  • History and social studies (90%)
  • Social sciences (91%)
  • Computer science (91%)
  • Visual and performing arts (91%)
  • Physical education (88%)
  • Career and technical education (91%)

“To get a good sense of XR’s potential in schools, you have to ask the teachers and staff who will be administering this technology,” said Stephanie Montgomery, the XRA Vice President of Research and Best Practices. “The survey’s results suggest that VR, AR, and MR technology is well-positioned to become an essential teaching tool in school classrooms across the country.”

At the same time, 58% of the survey respondents said that teachers should get training for XR classroom use. Moreover, 62% believe in developing XR standards before integrating the technologies into regular curricula.

XR Association CEO Elizabeth Hyman believes in the extensive ripple effect that will result from making educators XR-ready. “If teachers understand XR technology and are empowered to contribute to the way in which it is incorporated into the curriculum, everyone—students, their guardians, and the surrounding community—will be able to take advantage of its benefits,” she said.

However, despite the positive outlook, 57% of teachers recognize the costs of using AR and VR devices and admit that access to funds will determine access to such technology. Nevertheless, poll participants believe XR’s benefits will extend beyond the classroom. Seventy-seven percent of teachers said the technology helps equip students with skills they can apply in their chosen careers, especially since, according to forecasts, jobs in extended reality may reach 23 million by 2030.

Myths About XR Classroom Use Debunked

The XRA-ISTE survey dispelled several myths about extended reality’s acceptance in education. One of these misconceptions is that XR is only for gaming. The poll results and teachers’ comments reveal that they are aware of the usefulness of this technology in geography, math, history, and other subjects.

Moreover, the survey response from educators refutes the popular notion that XR technology would not be the “best fit” for the classroom. Seventy-eight percent of respondents believe in the benefits of extended reality technologies in class.

Finally, only 15% of survey participants believe that XR will distract students from learning. The majority support the opportunities that extended reality brings when incorporated into lessons.

Teens Excited About XR 

Earlier last year, XRA also conducted a separate survey that sought teens’ views on current use cases for XR and their expectations for this technology. The results released in May 2022 revealed that 40% of teens have used either AR or VR in school and 50% describe their experience with these technologies as positive. Thirty-eight percent would like to own a headset in the future.

Even though there are potential concerns around immersive technologies, which teens are aware of, they are still excited about using XR in education, in a responsible way. Almost 4 in 5 teens think extended reality can impact lives positively. They believe that XR can improve their lives in the areas of fun (67%), creativity (61%), and learning (48%). Moreover, 52% of respondents expressed interest in taking a college course with extended reality integrated into its curriculum.

Read the Latest Addition to the XRA Developers’ Guide

XRA is proactively advancing XR application in classroom learning. It recently launched a new chapter in its Developers Guide on designing immersive lessons for high schoolers. The fresh chapter discusses current classroom needs, successful use cases, and industry-backed best practices for promoting safe and inclusive classroom learning through extended reality that addresses parent, teacher, and student concerns.



Learning in AR: Bring Textbooks to Life With Ludenso


Augmented reality is exciting. It’s interactive and can be a great visual aid for information that might otherwise be difficult to visualize or that might be just plain dull in 2D. As such, it has huge potential for educators. Unfortunately, good AR content can also be difficult to make for people who aren’t experts. That’s where Ludenso comes in.

Ludenso works with textbook publishers, educators, and tech experts to create an app for augmenting textbooks with an easy-to-use interface. I talked with co-founder and Chief Marketing Officer, Ingrid Skrede, to learn more.

What Is Ludenso?

Ludenso gives educators low-and-no-code tools to bring augmented reality into the classroom. The company can and does work with educators and publishers to create models in-house, but they also make libraries of educational 3D assets available in a drag-and-drop interface.

“Bringing AR [textbooks] to life on mobile is not new. What’s new is the ability to view it and update it without technical expertise,” said Skrede. “We put the studio’s creative power into the hands of content experts, not just our development team.”


With a few keystrokes, educators with no AR development experience can add their own notations to existing 3D models that launch when a phone with the Ludenso Explore app recognizes images in a textbook. They can also add images, videos, or links – whether to more resources, online quizzes, or something else.

I saw this process in a screen share during a demo with Skrede but spent most of my time on the user side of the app. The app recognizes the target images instantly. Manipulating the model to scale and rotate it is easy, as is finding the annotations and contextual information that the educator (played by Skrede) attributed to it.

The app doesn’t only feature image detection; it also features planar detection. So, I can view a mini 3D model on the textbook page with the context of the words around it. I can also switch my view to place a 3D model in my office and scale it up as much as I want.

What’s more, once I’ve opened the models associated with a textbook, I can place them in my environment without the image target. So, a student could study the 3D models in a textbook chapter even if they left their textbook at school.
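To make that flow concrete, here is a purely hypothetical sketch of the kind of lookup just described (all names are invented for illustration; this is not Ludenso’s actual API): an image target resolves to a 3D model plus the educator’s annotations, and any model already seen in a session is cached so it can later be placed on a detected plane without the textbook page:

```python
# Hypothetical sketch of a marker-to-model lookup; not Ludenso's real API.
from dataclasses import dataclass, field

@dataclass
class ARModel:
    name: str
    annotations: list = field(default_factory=list)  # educator-added notes, links, quizzes

class TextbookAR:
    def __init__(self, targets):
        self._targets = targets  # image-target id -> ARModel
        self._cache = {}         # models already seen this session

    def on_image_detected(self, target_id):
        """Image detection: a recognized textbook page launches its model and caches it."""
        model = self._targets.get(target_id)
        if model is not None:
            self._cache[model.name] = model
        return model

    def place_on_plane(self, model_name):
        """Planar detection: previously seen models can be placed without the page."""
        return self._cache.get(model_name)

app = TextbookAR({"heart_fig_3": ARModel("heart", ["aorta: carries oxygenated blood"])})
seen = app.on_image_detected("heart_fig_3")  # textbook page recognized in the camera feed
free = app.place_on_plane("heart")           # later: same model, placed in the room
```

The real app obviously does the heavy lifting of computer-vision tracking and rendering; the sketch only illustrates the content model of targets, annotations, and a session cache.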

How Ludenso Inspires Learning

Of course, Ludenso isn’t just for educators – as no educational service should be. The application is also for students. Over the course of our remote interview, Skrede brought up numerous studies showing that AR helps students maintain attention and retain information.

More than that, Skrede says that working with Ludenso has put her in numerous positions to see “underperforming” students drawn into their lessons in ways that shocked their teachers.

“When we’re born, we want to learn. But, we have sixteen thousand hours of learning ahead of us and that’s a long time to sit and learn what everyone else is learning,” said Skrede. “When using AR, you’re challenging the perceptions teachers have and what it means to be a strong student.”

Living and Learning

Ludenso has been around for a couple of years now. The Oslo-based company is finally starting to get the buzz it deserves, as well as a recently closed $1M funding round.

One of the pillars of Ludenso’s philosophy is that the best educational content is going to be made by educators – not by tech moguls. As a result, they started out working with schools directly. This was a great way to work directly with educators, as they wanted, but it came with its own challenges.


“We saw how excited the students were, and how excited the teachers were,” said Skrede. “We also realized that it’s challenging to scale in the school sector.”

Working with individual schools meant that Ludenso was working with individual curricula. What the company enabled one school to make might only work for that one school. Some of the tools that make the current (and upcoming) iteration of Ludenso possible were developed at this time, but the company’s outreach structure changed.

“We were rather fortunate to get in touch with a publishing house here,” said Skrede. The company is currently partnered with three major textbook publishers, which serve as a distribution channel for educators. “We’re interested in building a learning platform.”

Using textbooks to launch the experience also helps educators implement a technology they might not be familiar with – particularly as part of a structured curriculum.

“We go with textbooks because teachers want to use AR but they need a tool that they can come back to over and over,” said Skrede.

As this article was being written, Ludenso also announced a partnership with Cambridge University Press & Assessment. The partnership allows Cambridge University to carry Ludenso content and gives Ludenso global exposure with a renowned publishing company.

Where Was This a Decade Ago?

One of the most challenging things about covering emerging technology is seeing an application like Ludenso that would have been great to have when I was in school. At the same time, it helps to remind us why emerging technologies are so exciting. Most readers might have been born too late for this particular app, but there’s a whole generation that’s just in time.
