AI

Talespin Releases AI-powered, Web-Accessible No-Code Creator Platform

To prepare professionals for tomorrow’s workplace, you need to be able to leverage tomorrow’s technology. Talespin was already doing this with their immersive AI-powered VR simulation and training modules.

Now, the company is taking things a step further by rolling out a web-based no-code creator tool. To learn more, we reconnected with Talespin CEO Kyle Jackson to talk about the future of his company and the future of work.

The Road So Far

Talespin has existed as an idea for about ten years, including a few years before the company started turning out experiences in 2015. In 2019, it began leveraging AI technology for more nuanced storytelling and more believable virtual characters.

CoPilot Designer 3.0 Talespin

CoPilot Designer, the company’s content creation platform, was released in 2021. Since then, it has gone through updates big and small.

That brings us to the release of CoPilot Designer 3.0 – probably the biggest single change that’s come to the platform so far. This third major version of the tool is accessible on the web rather than as a downloadable app. We’ve already seen what the designer can do, as Talespin has been using it internally, including in its recent intricate story world built in partnership with Pearson.

“Our North Star was how do you get the ability to create content into the hands of people who have the knowledge,” Jackson told ARPost this March. “The no-code platform was built in service of that but we decided we had to eat our own dogfood.”

In addition to being completely no-code, CoPilot Designer 3.0 has more AI tools than ever. It also features direct publishing to Quest 2, PC VR headsets, and Mac devices via streaming with support for Lenovo ThinkReality headsets and the Quest Pro coming soon.

Understanding AI in the Designer

The AI that powers CoPilot Designer 3.0 comes in two flavors – the tools that help the creator build the experience, and the tools that help the learner become immersed in the experience.

More generative 3D tools (tools that help the creator build environments and characters) are coming soon. The tools seeing the most development in this iteration of CoPilot Designer are large language models (LLMs) and neural voices.

Talespin CoPilot Designer 3.0

Jackson described LLMs as the context of the content and neural voices as the expression of the content. After all, the average Talespin module could exist as a text-only interaction. But an experience meant to teach soft skills is a lot more impactful when the situations and characters feel real. That means the content can’t just be good; it has to be delivered in a moving way.
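Talespin hasn’t published the internals of this pipeline, but the division Jackson describes – an LLM supplying the content and a neural voice delivering it – can be sketched in a few lines. The snippet below is a hypothetical illustration, not Talespin’s code; it assumes the `openai` Python package (0.x-era API) and a stand-in `synthesize_speech()` function for the neural-voice layer.

```python
# Hypothetical LLM + neural-voice pipeline; NOT Talespin's actual code.
# Assumes the `openai` package (0.x-era API) and a stand-in TTS call.
import openai

def generate_character_line(scenario: str, learner_input: str) -> str:
    """The 'context' layer: ask an LLM for the character's next line."""
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": scenario},
            {"role": "user", "content": learner_input},
        ],
    )
    return response.choices[0].message.content

def synthesize_speech(text: str) -> bytes:
    """The 'expression' layer: placeholder for any neural TTS voice."""
    raise NotImplementedError("swap in a real text-to-speech provider")

line = generate_character_line(
    scenario="You are an employee receiving difficult feedback about a missed deadline.",
    learner_input="I'd like to talk about what happened last week.",
)
audio = synthesize_speech(line)  # played back through the virtual character
```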

The Future of Work – and Talespin

While AI develops, Jackson said that the thing that he’s waiting for the most isn’t a new capability of AI. It’s trust.

“Right now, I would say that there’s not much trust in enterprise for this stuff, so we’re working very diligently,” Jackson told ARPost. “Learning and marketing have been two areas that are more flexible … I think that’s going to be where we really see this stuff break out first.”

Right now, that diligence includes maintaining the human component and limiting AI involvement where necessary. Where AI might help creators apply learning material, that learning material is still originally authored by human experts. One day AI might help to write the content too, but that isn’t happening yet.

“If our goal is achieved where we’re actually developing learning on the fly,” said Jackson, “we need to be sure that what it’s producing is good.”

Much of the inspiration behind Talespin in the first place was that as more manual jobs get automated, necessary workplace skills will pivot to soft skills. In short, humans won’t be replaced by machines, but the work that humans do will change.

As his own company relies more on AI for content generation, Jackson has already seen this prediction coming true for his team. Having dramatically decreased the time it takes to create content, the team is better able to work with customers and partners, as opposed to largely serving as a platform to create and host content that companies made themselves.

Talespin CoPilot Designer 3.0 – XR Content Creation Time Graph

Solving the Content Problem

To some degree, Talespin’s position as a pioneer in the AI space is a natural evolution of its history as an XR pioneer. Some aspects of XR’s frontier struggles are already a thing of the past, but others have a lot to gain from leaning on other emerging technologies.

“At least on the enterprise side, there’s really no one doubting the validity of this technology anymore … Now it’s just a question of how we get that content more distributed,” said Jackson. “It feels like there’s a confluence of major events that are driving us along.”

This ‘Skyrim VR’ Mod Shows How AI Can Take VR Immersion to the Next Level

ChatGPT isn’t perfect, but the popular AI chatbot’s access to large language models (LLMs) means it can do a lot of things you might not expect, like giving all of Tamriel’s NPC inhabitants the ability to hold natural conversations and answer questions about the iconic fantasy world. Uncanny, yes. But it’s a prescient look at how games might one day use AI to reach new heights in immersion.

YouTuber ‘Art from the Machine’ released a video showing off how they modded the much beloved VR version of The Elder Scrolls V: Skyrim.

The mod, which isn’t available yet, ostensibly lets you hold conversations with NPCs via ChatGPT and xVASynth, an AI tool for generating voice acting lines using voices from video games.

Check out the results in the most recent update below:

The latest version of the project introduces Skyrim scripting for the first time, which the developer says allows for lip syncing of voices and NPC awareness of in-game events. While still a little rigid, it feels like a pretty big step towards climbing out of the uncanny valley.

Here’s how ‘Art from the Machine’ describes the project in a recent Reddit post showcasing their work:

A few weeks ago I posted a video demonstrating a Python script I am working on which lets you talk to NPCs in Skyrim via ChatGPT and xVASynth. Since then I have been working to integrate this Python script with Skyrim’s own modding tools and I have reached a few exciting milestones:

NPCs are now aware of their current location and time of day. This opens up lots of possibilities for ChatGPT to react to the game world dynamically instead of waiting to be given context by the player. As an example, I no longer have issues with shopkeepers trying to barter with me in the Bannered Mare after work hours. NPCs are also aware of the items picked up by the player during conversation. This means that if you loot a chest, harvest an animal pelt, or pick a flower, NPCs will be able to comment on these actions.

NPCs are now lip synced with xVASynth. This is obviously much more natural than the floaty proof-of-concept voices I had before. I have also made some quality of life improvements such as getting response times down to ~15 seconds and adding a spell to start conversations.

When everything is in place, it is an incredibly surreal experience to be able to sit down and talk to these characters in VR. Nothing takes me out of the experience more than hearing the same repeated voice lines, and with this no two responses are ever the same. There is still a lot of work to go, but even in its current state I couldn’t go back to playing without this.
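The mod itself isn’t available, so there’s no public source to cite, but the context injection described above follows a recognizable pattern: fold the NPC’s identity and the live game state into the system prompt before each exchange. The sketch below is a speculative illustration of that idea; every name in it is hypothetical, and the real mod may work differently.

```python
# Speculative sketch of the context injection described above. The mod
# is unreleased, so every name here is a hypothetical illustration,
# not the mod's real code. Assumes the `openai` package (0.x-era API).
import openai

def build_system_prompt(npc_name, location, time_of_day, recent_items):
    """Fold identity and live game state into the NPC's system prompt."""
    prompt = (
        f"You are {npc_name}, an NPC in Skyrim. "
        f"You are currently in {location}, and it is {time_of_day}."
    )
    if recent_items:
        prompt += (
            " The player just picked up: " + ", ".join(recent_items) +
            ". You may comment on this."
        )
    return prompt

def npc_reply(npc_name, location, time_of_day, recent_items, player_speech):
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": build_system_prompt(
                npc_name, location, time_of_day, recent_items)},
            {"role": "user", "content": player_speech},
        ],
    )
    # The returned text would then be voiced through xVASynth.
    return response.choices[0].message.content

print(npc_reply("Hulda", "the Bannered Mare", "late evening",
                ["wolf pelt"], "Are you still serving mead?"))
```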

You might notice the voice prompting the NPCs sounds fairly robotic too, although ‘Art from the Machine’ says they’re using speech-to-text to talk to the ChatGPT 3.5-driven system. The voice heard in the video is generated with xVASynth and plugged in during video editing to replace what the developer calls their “radio-unfriendly voice.”

And when can you download and play for yourself? Well, the developer says publishing their project is still a bit of a sticky issue.

“I haven’t really thought about how to publish this, so I think I’ll have to dig into other ChatGPT projects to see how others have tackled the API key issue. I am hoping that it’s possible to alternatively connect to a locally-run LLM model for anyone who isn’t keen on paying the API fees.”
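One common way around the API-key problem – and the locally-run route the developer mentions – is to point an OpenAI-style client at a local inference server that exposes a compatible endpoint. A minimal sketch, assuming such a server at a made-up local URL (none of this is the mod’s code):

```python
# Sketch of swapping the hosted API for a locally-run LLM. Many local
# inference servers expose an OpenAI-compatible endpoint; the URL and
# model name below are assumptions for illustration only.
import openai

USE_LOCAL_LLM = True

if USE_LOCAL_LLM:
    # Point the client at a local, OpenAI-compatible server instead.
    openai.api_base = "http://localhost:8000/v1"
    openai.api_key = "unused-for-local-servers"
    MODEL = "local-model"
else:
    openai.api_key = "sk-..."  # each player would supply their own key
    MODEL = "gpt-3.5-turbo"

reply = openai.ChatCompletion.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Greet the Dragonborn."}],
)
print(reply.choices[0].message.content)
```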

Serving up more natural NPC responses is also an area that needs to be addressed, the developer says.

For now I have it set up so that NPCs say “let me think” to indicate that I have been heard and the response is in the process of being generated, but you’re right this can be expanded to choose from a few different filler lines instead of repeating the same one every time.
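That expansion is simple enough to sketch: keep a pool of filler lines and pick one at random while the real response is still generating. A minimal, purely illustrative snippet:

```python
# Minimal sketch: vary the "let me think" filler played while the
# real LLM response is still being generated.
import random

FILLER_LINES = [
    "Let me think...",
    "Hmm, give me a moment.",
    "Now that's a question.",
    "Well, let's see...",
]

def filler_line() -> str:
    return random.choice(FILLER_LINES)
```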

And while the video is noticeably sped up after prompts, this mostly comes down to the voice generation software xVASynth, which admittedly slows the response pipeline down since it’s being run locally. ChatGPT itself doesn’t affect performance, the developer says.

This isn’t the first project we’ve seen using chatbots to enrich user interactions. Lee Vermeulen, a long-time VR pioneer and the developer behind Modbox, released a video in 2021 showing off one of his first tests using OpenAI’s GPT-3 and the voice acting software Replica. In Vermeulen’s video, he talks about how he set parameters for each NPC, giving them the body of knowledge they should have, all of which guides the sort of responses they’ll give.
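Those per-NPC parameters amount to persona definitions that bound what each character is allowed to know. A hypothetical sketch of the idea (the data here is illustrative, not taken from Modbox):

```python
# Hypothetical sketch of per-NPC parameters: a fixed body of knowledge
# used as each character's system prompt. Illustrative data only.
NPC_PERSONAS = {
    "blacksmith": (
        "You are a gruff blacksmith. You know about weapons, armor, and "
        "town gossip. You know nothing of magic and admit it if asked."
    ),
    "innkeeper": (
        "You are a cheerful innkeeper. You know about rooms, meals, and "
        "travelers passing through. Politely deflect other topics."
    ),
}

def system_prompt(npc_id: str) -> str:
    """Return the knowledge boundary used as the model's system message."""
    return NPC_PERSONAS[npc_id]
```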

Check out Vermeulen’s video below, the very same that inspired ‘Art from the Machine’ to start working on the Skyrim VR mod:

As you’d imagine, this is really only the tip of the iceberg for AI-driven NPC interactions. Being able to naturally talk to NPCs, even if a little stuttery and not exactly at human level, may be preferable to wading through a ton of 2D text menus or sitting through slow and ungainly tutorials. It also offers the chance to bond more with your trusty AI companion, like Skyrim’s Lydia or Fallout 4’s Nick Valentine, who instead of offering up canned dialogue might actually, you know, help you out every once in a while.

And that’s really only the surface-level stuff that a mod like ‘Art from the Machine’s’ might deliver to existing games that aren’t built with AI-driven NPCs. Imagining a game that is actually predicated on your ability to ask the right questions and do your own detective work – well, that’s a role-playing game we’ve never experienced before, either in VR or otherwise.

MeetKai Launches New Building Tools

MeetKai has been around since 2018, but its first content that the public could enjoy hit the streets only a few months ago. Now, the company is releasing a suite of software solutions and developer tools to help the rest of us build the metaverse.

From Innovation to Product

ARPost met MeetKai in July 2022, when the company was launching a limited engagement in Times Square. Since then, the company has been working with the Los Angeles Chargers.

“The purpose of the Times Square activation and campaign was really to test things out in the browser,” CEO and co-founder James Kaplan said in a video call. “With 3D spaces, there’s a question of whether the user views it as a game, or as something else.”

MeetKai Metaverse Editor – Los Angeles Chargers

Those insights have informed their subsequent outward-facing work with the Chargers, but the company has also been working on some more behind-the-scenes products that were just released at CES.

“We’re moving from an innovation technology company to a product company,” co-founder and Executive Chairwoman Weili Dai said in the call. “Technology innovation is great, but show me the value for the end user. That’s where MeetKai is.”

Build the Metaverse With MeetKai

At CES, MeetKai announced three new product offerings: MeetKai Cloud AI, MeetKai Reality, and MeetKai Metaverse Editor. The first of those offerings is more in line with the company’s history as a conversational AI service provider. The latter two are tools for creating digital twins and for building and editing virtual spaces, respectively.

“The biggest request that we get from people is that they want to build their own stuff, they don’t just want to see the stuff that we made,” said Kaplan. “So, we’ve been trying to say ‘how do we let people build things?’ even when they’re not engineers or artists.”

Users can employ the new tools individually for internal or outward-facing projects. For example, a user could create an exact digital twin of a physical environment with MeetKai Reality, or an entirely new virtual space with MeetKai Metaverse Editor.

However, some of the most interesting projects come when the tools are used together. For example, an agricultural organization with early access to the products created a digital twin of real areas of its premises with MeetKai Reality, then used the Editor to build simulation and training experiences on top of it.

“AI as an Enabling Tool”

The formula for creating usable but robust tools was to combine conventional building tools like scanning and game engines with some help from artificial intelligence. Seen that way, these products look less like a deviation from the company’s history and more like a continuation of what it has been doing all along.

MeetKai Cloud AI – Avatar sample

“We see AI as an enabling tool. That was our premise from the beginning,” said Kaplan. “If you start a project and then add AI, it’s always going to be worse than if you say, ‘What kinds of AI do we have or what kinds of AI can we build?’ and see what kind of products can follow that.”

So the first hurdle is building the tools, and the second hurdle is making the tools usable. Most companies in the space either build tools that remain forever overly complex, or make tools that work but have limited potential because they were designed for one specific use or one specific environment.

“The core technology is AI and the capability needs to be presented in the most friendly way, and that’s what we do,” said Weili. “The AI capability, the technology, the innovation has to be leading.”

The company’s approach to software isn’t the only way they stand out. They also have a somewhat conservative approach when it comes to the hardware that they build for.

“I think 2025 is going to be the year that a lot of this hardware is going to start to level up. … Once the hardware is available, you have to let people build from day one,” said Kaplan. “Right now a lot of what’s coming out, even from these big companies, looks really silly because they’re assuming that the hardware isn’t going to improve.”

A More Mature Vision of the Metaverse

This duo has a lot to say about the competition. But, fortunately for the rest of us, it isn’t all bad. As they’ve made their way around CES, they’ve made one more observation that might be a nice closing note for this article. It has to do with how companies are approaching “the M-word.”

“Last CES, we saw a lot of things about the metaverse and I think that this year we’re really excited because a lot of the really bad ideas about the metaverse have collapsed,” said Kaplan. “Now, the focus is what brings value to the user as opposed to what brings value to some opaque idea of a conceptual user.”

Kaplan sees our augmented reality future as being like a mountain – but the mountain doesn’t just go straight up. We reach apparent summits only to encounter steep valleys between us and the next summit. Where most companies climb one peak at a time, Kaplan and Weili are trying to plan a road across the whole mountain chain, which means designing “in parallel.”

“The moment hardware is ready, we’re going to leapfrog … we prepare MeetKai for the long run,” said Weili. “We have partners working with us. This isn’t just a technology demonstration.”

How MeetKai Climbs the Mountain

This team’s journey along that mountain road might be more apparent than we realize. After all, when we last talked to them and “metaverse” was the word on everyone’s lips, they appeared with a ready-made solution. Now as AI developer tools are the hot thing, here they come with a ready-made solution. Wherever we go next, it’s likely MeetKai will have been there first.

VR Robots: Enhancing Robot Functions With VR Technology


VR robots are slowly moving into the mainstream with applications that go beyond the usual manufacturing processes. Robots have been in use for years in industrial settings where they perform automated repetitive tasks. But their practical use has been quite limited. Today, however, we see some of them in the consumer sector delivering robotic solutions that require customization.

Augmented by other technologies such as AR, VR, and AI, robots show improved efficiency and safety in accomplishing more complex processes. With VR, humans can supervise the robots remotely to enhance their performance. VR technology provides human operators with a more immersive environment. This enables them to interact with robots better and view the actual surroundings of the robots in real time. Consequently, this opens vast opportunities for practical uses that enhance our lives.
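That supervise-remotely pattern generally reduces to a loop: stream the robot’s camera view to the operator’s headset, and map the operator’s movements back to robot commands. The sketch below is purely illustrative; every API in it is hypothetical.

```python
# Purely illustrative teleoperation loop; every API here is hypothetical.
# The pattern: show the operator what the robot sees, and map the
# operator's hand pose to robot motion in real time.
def teleop_loop(headset, robot):
    while headset.session_active():
        # Immersion: forward the robot's camera frames to the headset.
        frame = robot.camera.read_frame()
        headset.display(frame)

        # Supervision: map the controller pose to an end-effector
        # target, clamped to a safe speed.
        pose = headset.controller_pose()  # position + orientation
        robot.move_end_effector(pose, max_speed=0.2)
```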

Real-Life Use Cases of VR Robots

1. TX SCARA: Automated Restocking of Refrigerated Shelves

Developed by Telexistence, TX SCARA is powered by three main technologies—robotics, artificial intelligence, and virtual reality. This robot specializes in restocking refrigerated shelves in stores. It relies on GORDON, its AI system, to know when and where to place products. When issues arise due to external factors or system miscalculation, Telexistence employees use VR headsets to control the robot remotely and address the problem.

TX SCARA is present in 300 FamilyMart stores in Japan, and plans to expand to convenience stores in the United States are already underway. Capable of working 24/7 at a pace of up to 1,000 bottles or cans per day, TX SCARA can replace up to three hours of human work per day in a single store.

2. Reachy: A Robot That Shows Emotions

Reachy gives VR robots a human side. An expressive humanoid platform, Reachy mimics human expressions and body language. It conveys human emotions through its antennas and motions.

VR robots – Reachy

Users operate Reachy remotely using VR equipment that shows the environment surrounding the robot. They can move Reachy’s head, arms, and hands to manipulate objects and interact with people around the robot. They can also control Reachy’s mobile base to move around and explore its environment.

Since it can be programmed with Python and ROS to perform almost any task, its use cases are virtually limitless. It has applications across various sectors, such as research (to explore new frontiers in robotics), healthcare (to replace mechanical tasks), retail (to enhance customer experiences), education (to make learning more immersive), and many others. Reachy is also fully customizable, with many different configurations, modules, and hardware options available.
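As a taste of that programmability, here is a minimal sketch based on Pollen Robotics’ publicly documented Python SDK (reachy-sdk). The joint names, host address, and exact call signatures are assumptions to verify against the current docs before running on hardware.

```python
# Minimal sketch using Pollen Robotics' reachy-sdk. Joint names, the
# host address, and call details are assumptions based on public docs;
# verify against the SDK documentation before running on hardware.
from reachy_sdk import ReachySDK
from reachy_sdk.trajectory import goto

reachy = ReachySDK(host='192.168.0.42')  # robot's IP on the local network

reachy.turn_on('r_arm')                  # enable torque on the right arm

# Raise the right arm slightly over one second.
goto(
    goal_positions={
        reachy.r_arm.r_shoulder_pitch: -20,  # degrees
        reachy.r_arm.r_elbow_pitch: -60,
    },
    duration=1.0,
)

reachy.turn_off_smoothly('r_arm')        # relax the arm gently
```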

3. Robotic VR: Haptic Technology for Medical Care

A team of researchers co-led by the City University of Hong Kong has developed an advanced robotic VR system that has great potential for use in healthcare. Robotic VR, an innovative human-machine interface (HMI), can be used to perform medical procedures. This includes conducting swab tests and caring for patients with infectious diseases.

Doctors, nurses, and other health practitioners control the VR robot using a VR headset and flexible electronic skin that enables them to experience tactile sensations while interacting remotely with patients. This allows them to control and adjust the robot’s motion and strength as they collect bio-samples or provide nursing care. Robotic VR can help minimize the risk of infection and prevent contagion.

4. Skippy: Your Neighborhood Delivery Robot

Skippy elevates deliveries to a whole new level. Human operators, called Skipsters, control these VR robots remotely. They use VR headsets to supervise the robots as they move about the neighborhood. When you order food or groceries from a partner establishment, Skippy picks it up and delivers it to your doorstep. Powered by AI and controlled by Skipsters, the cute robot rolls through pedestrian paths while avoiding foot traffic and obstacles.

VR robots – Skippy

You can now have Skippy deliver your food orders from a handful of restaurants in Minneapolis and Jacksonville. With its maker, Carbon Origins, planning to expand the fleet this year, it won’t be long until you spot a Skippy around your city.

Watch Out for More VR-Enabled Robots

Virtual reality is an enabling technology in robotics. By merging these two technologies, we’re bound to see more practical uses of VR-enabled robots in the consumer market. As the technologies become more advanced and the hardware required becomes more affordable, we can expect to see more VR robots that we can interact with as we go through our daily lives.

Developments in VR interface and robotics technology will eventually pave the way for advancements in the usability of VR robots in real-world applications.
