Prisms VR, an immersive platform for teaching math, announced it’s raised $12.5 million in a Series A round, which the company says will be used to expand its VR math literacy platform to more schools across the US.
Led by Andreessen Horowitz, the latest funding round brings Prisms VR’s lifetime funding to $19.1 million, according to Crunchbase.
Launched in 2021, Prisms focuses on teaching math in VR through problem-driven, tactile, and visual learning. Essentially, it immerses students by confronting them with real-world problems—a far cry from the drab word problems that typically involve far too many watermelons for comfort.
Prisms was founded by Anurupa Ganguly, who has taught math and physics in both Boston and New York City. The app’s development, Ganguly explains, was a response to the US education system and the way math instruction fails to connect with real-life situations.
“Technology has failed our students, especially where math is concerned. With new developments in immersive tech, we have the opportunity to make learning experiential and connected to students’ lives,” said Ganguly, founder and CEO at Prisms. “Prisms is the first learning solution that empowers students to experience real-life problems with their bodies versus reading about them divorced from personal experience. They are then able to build up to shorthand abstractions from intuitive visual and tactile experiences that lead to enduring retention and deeper understanding.”
The company says it’s using the funds to grow its team, accelerate adoption of its product, and expand programs to more schools across the US. It’s also developing products aimed at higher education and other subjects.
The startup’s Meta Quest app is available to parents, tutors and teachers with a seven-day free trial, costing $24 for an annual subscription to its growing library of immersive lessons. For now, it includes around two dozen modules, teaching from middle school fractions all the way to advanced algebra.
Prisms has already been adopted by 100+ school districts across 26 states, the company says, bringing its app to more than 80,000 students.
An EV that cleans the air while driving might seem like a pipe dream, but a student team based at the Eindhoven University of Technology has made it a reality. TU/ecomotive — as the team is called — has been creating inspiring, environmentally conscious concept cars for over a decade now.
Among the concept vehicles presented by the students, last year’s Zem — which stands for “zero emission mobility” — is the most outstanding. It’s a passenger EV that not only paves the way towards vehicle carbon neutrality, but also cleans the air while driving, something that, in turn, reduces CO2 emissions.
Credit: TU/ecomotive
Zem was unveiled in July 2022 at the Louwman Museum in The Hague. Its message is clear: if a team of 32 students can create a car like this in under 12 months, then what’s stopping the automotive industry from doing more?
“We were inspired by the EU’s Green Deal,” Louise de Laat, Industrial Design student and team manager of the Zem project, told TNW. “Reducing our CO2 emissions is something very important for us, and we would really like to make a carbon neutral car. And that’s the reason for the recent project’s focus on zero emission mobility,” she explained.
CO2-neutral mobility requires a vehicle to have zero carbon emissions across its entire lifecycle, and Zem is an apt example of how close an EV can get to this goal.
In this piece, we’ll look at how Zem achieves this through its use, production, and afterlife — as well as what the car industry can learn from these sorts of schemes.
The air-cleaning technology
As we mentioned at the start, instead of emitting CO2, Zem captures it. Effectively, it cleans the air while driving. That’s thanks to an innovative technology called direct air capture (DAC), which “traps” carbon dioxide in a filter. Companies such as Climeworks and Carbyon have been applying this air-cleaning method via large installations. But the Zem team decided to implement it on the car.
It works like this: while driving, air moves through the car into a self-designed filter, which captures and stores CO2, allowing clean air to flow out of the vehicle. This compensates for the emissions produced across the car’s other life phases.
Credit: TU/ecomotive
But what happens when the filter is saturated?
“We have designed a special charging pole for this,” Louise explained. “While Zem is charging you can remove the filter and place it in a special tank inside the pole. Cleaning the filter takes about the same time as charging. At the same time, the CO2 absorbed and saved in the tank can be repurposed and used by industries that need it, to make carbon fibers, for instance,” she added.
And to increase the vehicle’s sustainability even when not in use, TU/ecomotive has equipped it with bi-directional charging technology to provide electricity to homes, as well as solar panels to store energy.
Maximising sustainable production and afterlife
To achieve a high level of sustainability, the TU/ecomotive team opted for a novel production method: additive manufacturing — or simply, 3D printing. The team collaborated with partners — such as CEAD and Royal3D — to develop the car’s fundamental structure: specifically, the monocoque and the body panels.
As Louise explained, they also 3D-printed parts of the interior, including the car seat shell, the dashboards, the middle console, the steering wheel, and the roof beams.
According to the team, this manufacturing process results in nearly zero waste materials, as the various car parts were printed in the exact shape needed. At the same time, they did the printing using circular plastics. These are granulates that had already been used and can be shredded and reused afresh in other projects.
“You can use that same material again to make the same event over three times before it loses its specifications,” Louise noted.
The vision of circularity has been applied throughout Zem’s design as well.
For example, the seat upholstery is made from the residue released during pineapple production. The roof upholstery and the floor mats consist of ocean plastics. And, through a collaboration with Black Bear Carbon, recycled carbon black from worn tires has been used for the EV’s coating and tires.
As a result, the concept car boasts “as little CO2 emissions as possible” during the production phase. At the same time, the types of materials, their ease of separation, and their circularity, all contribute to keeping CO2 emissions during the end-of-use phase at a lower level — especially when compared to conventional cars.
Credit: TU/ecomotive
But, according to Louise, it proved extremely challenging to give a specific number to Zem’s overall emissions via the Life Cycle Assessment (LCA) method, revealing a gap in the industry.
“We need a lot of data from the partners where we get the parts from, and some of them don’t know the exact LCA of their product,” she said. On the upside, she considers it beneficial that the project pushed their partners to acknowledge the vehicle’s environmental footprint. She also remains hopeful that legislation from national governments and the EU will standardise the use of LCA.
According to Louise, Zem has succeeded in its goal of lowering CO2 emissions to the maximum extent possible. Yet the EV does come with disadvantages that would require further work before it could scale up into a marketable product.
“If you build a car in less than one year, there will be some flaws that you still need to work on,” she noted. “Zem drove smoothly on the DRC track during the US tour, but the closer you get to the vehicle, the easier it is to see its flaws.” And that’s to be expected when you work with new materials and new technologies within a short period of time, Louise added.
A win-win for students and commercial partners
Now that the Zem project has been concluded, a renewed team has started working on the next concept vehicle. Stijn Plekkenpol — a sustainable innovation student — will lead the next project.
“What we really want to do now is build a climate positive car by 2030. This means, a vehicle which is marketable, which could be produced, and actually have a positive impact on the environment instead of any negative ones,” Stijn told TNW.
In the meantime, Louise aims to keep working on the filter technology and would be excited to see Zem turn into a mass-produced car. After all, it’s not uncommon for a student concept to grow into a startup and a real-life product. Think of Lightyear, the now-famous Dutch solar EV startup, which was also started by students of the Eindhoven University of Technology.
Credit: Bart van Overbeeke/ TU/ecomotive
While both Louise and Stijn attribute Zem’s success to the student team’s “long working hours and [their] dedication”, they explained the vital role commercial partners played as well.
“The majority of our partners are from Eindhoven’s Brainport region, which is known for its high density of R&D, and is called the Silicon Valley of the Netherlands,” Louise said.
These partners supported the project by providing parts, materials, knowledge, and financial support. And as for what they gained in return, Louise summarised three main advantages: employee recruitment, exposure, and the enjoyment and inspiration stemming from the collaboration with young people bringing bold ideas to the table.
Both Louise and Stijn have optimistic views on the future of mobility. They believe that cars will remain an integral part of transportation, but that they have the potential to become climate-positive instead of adding to carbon emissions.
And, as Zem showcases, we should trust in the innovative ideas of younger generations and keep seeking collaboration between daring university projects and commercial partners.
The new concept vehicle will be revealed on July 27 — and I, for one, can’t wait to see what the students have in store for us.
Meta today revealed its latest quarterly earnings results, showing that Reality Labs, the company’s XR and metaverse arm, had a smaller holiday season than the last, while operating costs have reached their highest levels yet.
During the company’s Q4 earnings call, Meta shared the latest revenue and operating cost figures for Reality Labs, providing one of the clearest indicators of how the company is faring in this space.
The fourth quarter has consistently been the best performer for Reality Labs, no doubt thanks to the holiday season driving sales of the company’s offerings.
In the fourth quarter of 2022, the company saw $727 million in revenue, down 17% from the fourth quarter of 2021, when it pulled in $877 million.
The fourth quarter of 2021 was a strong one for Reality Labs revenue thanks to the success of Quest 2, which had launched the year prior.
In the fourth quarter of 2022, the company’s latest headset was Quest Pro, its high-end MR headset. Unsurprisingly, the more expensive device—which has yet to find a strong value proposition at $1,500—doesn’t seem to have performed as well as Quest 2. Just days ago, Meta temporarily discounted the headset to $1,100, appearing to test the waters at that lower price. Granted, XR headsets aren’t the only product Reality Labs offers, which means the division’s other product lines—video calling speakers and smart glasses—may have had a role to play.
In addition to a smaller holiday season than last year, the latest earnings for Reality Labs show the division’s expenses were greater than in any previous quarter, surpassing $4 billion for the first time.
In the face of operating costs far outpacing revenue, Meta CEO Mark Zuckerberg told investors that his management theme for 2023 was “efficiency,” saying he wants to focus the company on streamlining its structure to move faster while being more aggressive about shutting down projects that aren’t performing.
Horizon Worlds, Meta’s social VR platform for Quest users, is expanding with alpha tests of new members-only spaces, allowing creators to manage up to 150 card-carrying members in their private worlds. Meta says it’s also gearing up to release Horizon Worlds on non-Quest devices for the first time.
Meta is now rolling out alpha access to its new members-only worlds, a feature that aims to let creators build and cultivate their own spaces in Horizon Worlds. Each members-only world can have up to 150 members, although only 25 concurrent visitors can gather at any given time.
“Every community develops its own norms, etiquette, and social rules over time as it fosters a unique culture,” the company says in a blog post. “To enable that, we’ll provide the tools that allow the creators of members-only worlds to set the rules for their communities and maintain those rules for their closed spaces.”
Meta says moderation responsibilities can be shared among trusted members, so creators can better control who gets in and who’s kicked out. However, the company says its Code of Conduct for Virtual Experiences is still in effect in privately owned spaces.
What’s more, the Quest-only social platform is also going to be available on the Web and mobile devices “soon”, the company says, adding that rules will be made and enforced “similarly to how mobile operating systems manage experiences on their platforms.”
As it is today, Horizon Worlds plays host to a growing amount of user-generated content in addition to first-party worlds. The release of Horizon Worlds outside of Quest would represent a massive potential influx of users and user-generated content, putting it in direct competition with cross-platform social gaming titans such as Roblox and Rec Room.
As a similar free-to-play app, Horizon Worlds offers an Avatar Store featuring premium digital outfits—very likely only a first step in the company’s monetization strategy. For now, the company says creators can earn revenue from purchases people make in their worlds, minus hardware platform fees and a Horizon Worlds fee, which Meta says is 25 percent.
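To make the stacked fees concrete, here’s a rough worked example. Note that the 30% hardware platform fee, and the assumption that the 25% Horizon Worlds fee applies to what remains after it, come from Meta’s earlier public statements rather than from this article—treat the exact split as illustrative:

```python
# Worked example of the stacked fees on a $1.00 in-world purchase.
# The 30% hardware platform fee, and the assumption that the 25%
# Horizon Worlds fee applies to the remainder, are based on Meta's
# earlier public statements—the exact split is illustrative.
price = 1.00
after_hardware = price * (1 - 0.30)          # hardware platform fee
creator_share = after_hardware * (1 - 0.25)  # 25% Horizon Worlds fee
print(f"creator receives ~${creator_share:.3f} per $1.00 spent")  # ~$0.525
```

Under those assumptions, a creator would keep a little over half of each dollar spent in their world.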
In late October, Meta showed off a tempting preview of its next-gen avatars, although it’s clear there’s still a ton of work to be done to satisfy its existing userbase. Floating torsos are still very much a thing in Horizon Worlds, and that’s despite Meta CEO Mark Zuckerberg’s insistence that full body tracking was in the works. It was too good to be true.
For now, Horizon Worlds is only available on Quest 2 headsets in the US, Canada, UK, France, Iceland, Ireland and Spain—something we hope they change well before it ushers in flatscreen users.
This story is syndicated from the premium edition of PreSeed Now, a newsletter that digs into the product, market, and founder story of UK-founded startups so you can understand how they fit into what’s happening in the wider world and startup ecosystem.
The reignited excitement around the potential of AI as we hurtle into 2023 brings with it concerns about how best to process all the data needed to make it work. This is far from a new challenge though, and next-generation AI chips are being developed in labs around the world to address the challenge in different ways.
One of the first startups we ever covered at PreSeed Now takes a ‘neuromorphic’ approach, influenced by the human brain. Coming from a different direction is a brand new spinout from Newcastle University called Mignon (so new, in fact, that there’s no website yet).
Mignon has developed an artificial intelligence chipset that, according to CEO Xavier Parkhouse-Parker, has “in the order of 10,000x performance improvements against alternative neural-network based chips for classification tasks”.
Classification is, essentially, the process of figuring out what the AI is looking at, hearing, reading, etc — the first step in understanding the world around it, whatever use case it’s put to. Mignon’s chipset is designed to be used in edge computing as a “classification coprocessor” on devices, rather than in the cloud.
What’s more, Parkhouse-Parker says Mignon’s chipset can also train AI models on the edge, meaning the models can be optimised for the specific, individual environments in which they’re used.
A prototype design of Mignon’s gen-1 chipset
A propositional proposition
What Mignon says gives its tech an advantage over the competition is a less resource-intensive approach based on propositional logic.
“Neural networks, the dominant algorithm in AI and machine learning today, typically require running many layers of increasingly resource-intensive calculations. They can take a very long time and a huge amount of energy to train and deploy, and they also exist as a black box; you cannot explain why the algorithms have come to a particular conclusion,” Parkhouse-Parker says.
“Mignon is based on an algorithm that can be done in a single layer, using propositional logic, maintaining accuracy but enabling calculations to be run much more quickly, using far less energy.”
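To make the single-layer idea concrete, here’s a minimal, illustrative sketch of inference in the Tsetlin-machine family of propositional classifiers that Mignon’s work builds on (more on that below). This is a toy in Python, not Mignon’s hardware design:

```python
# Illustrative toy of Tsetlin-machine-style inference in Python.
# This is NOT Mignon's design—just a sketch of single-layer
# propositional classification. Each clause is a conjunction of
# boolean literals; positive clauses vote for the class, negative
# clauses vote against it.

def clause_fires(clause, x):
    """A clause fires only if every literal it requires is satisfied.
    `clause` maps a feature index to the value it requires (True for
    the feature itself, False for its negation)."""
    return all(x[i] == required for i, required in clause.items())

def classify(pos_clauses, neg_clauses, x):
    """Single-layer decision: the sign of the clause vote sum."""
    votes = (sum(clause_fires(c, x) for c in pos_clauses)
             - sum(clause_fires(c, x) for c in neg_clauses))
    return votes >= 0

# Toy rule (hypothetical): output True when features 0 and 1 agree.
pos = [{0: True, 1: True}, {0: False, 1: False}]
neg = [{0: True, 1: False}, {0: False, 1: True}]
print(classify(pos, neg, [True, True, False]))   # True
print(classify(pos, neg, [True, False, False]))  # False
```

Because the decision is just a sum of visible clause votes, you can point to exactly which clauses fired for a given input — the root of the explainability advantage discussed below.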
And when it comes to launching into the market, Mignon could have a strong advantage, too.
“The investment and commercial scale required for success in the semiconductor industry is significant. Some of the biggest challenges for many other competitors in this sector is that they rely on non-standard, or ‘exotic’, features which are not easily scalable within the current semiconductor manufacturing ecosystem,” says Parkhouse-Parker.
Instead, Mignon’s chipset uses a standard CMOS fabrication approach, meaning mass-production is much more straightforward.
How can it be used?
Edge AI has already made a notable difference to consumers’ lives. Just look at how the likes of Apple and Google have put AI chips into their smartphones to run tasks like face and object recognition in photos or audio transcription locally, increasing privacy and speed, and reducing data transfer costs.
Parkhouse-Parker says Mignon could eventually make a difference here, along with in the next generation of ‘6G’ telecoms networks, where signal processing could be optimised by AI.
But the first market they’re looking at is industrial settings where connectivity and energy resources are low, but there’s a need for high-performance AI classification.
And while the tech isn’t ready for it yet, Parkhouse-Parker says Mignon is working towards another selling point that its offering enables — “explainable AI.” That is, transparency around how and why AI made a particular decision.
To give a timely example, if you ask OpenAI’s ChatGPT to explain a concept to you, you can’t see why it comes up with the specific answer it gives. You just get an answer based on the pathway it took through its sea of data in response to your prompt.
In an industrial setting, where AI might be making business-critical decisions, or decisions with safety implications, it would be very useful to be able to look back and see how the AI came to the conclusion that it did.
“With neural networks, all of the inferences are done within a black box, and you cannot see how or why this node connects to this node, or how things have been calculated. With Mignon, because it’s based on propositional logic, it allows for a researcher to be able to look in and see exactly where a decision had been made, and why, and what led it to that point,” explains Parkhouse-Parker.
Mignon wants to make it possible for this kind of accountability to be available via software, which could be appealing in fields such as medicine, defence, and the automotive industry.
The brains behind the Mignon product. L-R: Professor Alex Yakovlev and Dr Rishad Shafik
The research of Newcastle’s Professor Alex Yakovlev and Dr Rishad Shafik into taking the Tsetlin machine and putting it into computational hardware caught the attention of deep tech venture builder Cambridge Future Tech, which — among others — also works with GitLife Biotech and Mimicrete, both of which have previously featured in this newsletter.
Since spring last year, Parkhouse-Parker (Cambridge Future Tech’s COO) has been working on developing a commercial proposition for Yakovlev and Shafik’s research. He has taken the CEO role at Mignon as it spins out of the university.
Getting to market
First on the to-do list for the new startup is further refining its technology with the development of a ‘generation-2’ chipset before they bring it to market.
“Even though we’ve got fantastic performance improvements, and it’s actually quite remarkable, this has all been done on the 65-nanometer node, which is an old technology and should mean worse performance improvements, because effectively the transistors are bigger, and that’s what makes us really remarkable,” says Parkhouse-Parker.
“We think that when we move to a 28-nanometer node, all of the numbers we have — the benchmarks — are going to be significantly greater at this scale.”
Commercial validation is obviously another important step after that. The eventual goal is to partner with fabless chip companies to build the Mignon technology into a commercially available system-on-a-chip. Mignon has a number of hires planned for the near future to help it get there.
Mignon CEO, Xavier Parkhouse-Parker
Investment plans and future potential
Parkhouse-Parker expects the spin-out process to be complete in March this year, after which they will formally open a £2.55 million funding round.
This will be used to expand the team, develop, test, and fabricate the next generation of chipset, and to get commercial validation in a number of verticals. Software to allow AI development on the chipset is also a key part of the roadmap.
Eventually, Parkhouse-Parker wants Mignon’s combination of low-power performance and widespread compatibility to usher in whole new opportunities for AI.
“What Mignon does is open up a possibility for what is genuinely a completely new world of devices that people haven’t even thought about yet. Think about the opportunities that would be there with product people like a Steve Jobs or a Jony Ive that could use this and run wild with the potential. I think there really is a completely new world of possibilities.”
The big “hump”
There’s no clear road from where Mignon is now to that future. Aside from the additional development work to refine the chipset, there’s a shift in mindset required from the people who build AI applications.
“The big ‘hump’, as one of our advisors calls it, is that it’s a new way of doing artificial intelligence,” says Parkhouse-Parker. “The transition between neural networks and Tsetlin is not incredibly significant, but it will require a little bit of a mindset difference. It may require new ways of thinking around how artificial intelligence problems can be designed and how these things can be brought into market.
“There’s a great community already being built around this, but that’s one of the biggest challenges — building a Tsetlin ecosystem and transitioning things that are neural networks into Tsetlin.”
But despite these challenges, Parkhouse-Parker believes Mignon’s vision is very much achievable.
“Several orders of magnitude improvement warrant a look at something that’s new, novel, and exciting.”
The article you just read is from the premium edition of PreSeed Now. This is a newsletter that digs into the product, market, and story of startups that were founded in the UK. The goal is to help you understand how these businesses fit into what’s happening in the wider world and startup ecosystem.
NotGames, the indie studio behind ingenious propaganda simulator Not For Broadcast (2022), announced it’s releasing a separate VR version in March, coming to SteamVR and Meta Quest 2.
Releasing on Steam and the Quest Store on March 23rd, Not For Broadcast VR is putting the power of mass media into your hands, as you control what people see and how they see it in your very own TV studio control booth, set in an alternate ’80s timeline in Britain.
Promising all of the original game’s dystopian tale of power, greed and resistance, the VR adaptation seems like a natural fit for the seated, button-heavy game—looking a bit like Please, Don’t Touch Anything.
The game is chock full of egotistical celebrities, dishonest politicians, and strange sponsors—and the show must go on uninterrupted. Pop in your lineup of VHS tapes, frame and edit shots, bleep out expletives, and keep everything moving smoothly—even as disaster strikes outside your window. Whatever you do, your mission is to keep those ratings up.
You can wishlist the game now on Steam. We’re still waiting for the Quest Store link; we’ll update this article when we see it. In addition to its VR launch, the game is also coming to PlayStation and Xbox on March 23rd.
At the time of this writing, the flatscreen version of Not For Broadcast has garnered an ‘Overwhelmingly Positive’ user review score on Steam, coming from over 7,000 players.
A number of the attractions of watching live sports carry over into esports. However, unless you’re watching an esports tournament in person, a lot of those attractions go away. Interactions with other fans are limited. The game view is limited. The game is flattened and there’s little environmental ambiance. Virtex wants to fix that.
A History of Virtex
Virtex co-founders Tim Mcguinness and Christoph Ortlepp met at an esports event in 2019. Mcguinness presented the idea of “taking that whole experience that we were doing there in the physical world and bringing it into the virtual world,” Ortlepp said in a video call with ARPost. The two officially launched the company in 2020.
“The first thing we had to do was get something that we could show to Meta,” said Ortlepp. “For us, Echo was a good community to start with.”
Virtex got the green light from Meta. It also got Jim Purbrick, who had previously been a technical director at Linden Lab and an engineering manager for Oculus.
“Moderation is an area where he had a big impact on us,” said Ortlepp. “We need live moderators to keep people safe… If now we have two or three hundred people in the platform, what if we have ten thousand people? Can we keep users safe and prevent a toxic environment?”
Meta’s support also meant that Virtex could finally launch its beta application. The beta is still technically closed – meaning that it isn’t on any app store, and you have to go through the Virtex website to access it. However, the closed beta isn’t limited. Testers have the opportunity to participate in “test sessions” – live streamed games every Thursday.
The platform held its first major tournament in December, with another about to kick off as this article was being written. Games are scheduled every week into the spring.
A Tour of the Stadium
Right now, the Virtex virtual world consists of a stadium entrance, a lounge area, and a commentator booth in addition to the stadium itself.
“The purpose [of the entrance and lounge] is really to set the stage for the user, to welcome them,” said Ortlepp.
In the lounge, users can socialize, modify their avatars (through a Ready Player Me integration), and even watch a miniaturized version of the live match. The lounge itself is still being developed with plans for mini-games and walls of fame. Connected areas including a virtual store and bar area are also in the works.
In the stadium itself, users can see and interact with other spectators. They can watch a 3D reproduction of the live game in real time, or watch a Twitch stream of the game on a jumbo screen above the stadium floor.
“We feature the video because we didn’t want to take away from esports viewers what they’re currently used to,” said Ortlepp. Virtex wants to give spectators options to explore viewing in new ways, without leaving them in an entirely unfamiliar setting.
A teleport system allows faster movement to different areas of the stadium, including the stadium floor to watch from within the game or even follow players through the action. This is possible thanks to the unique solution that Virtex has developed for recreating the game within the virtual stadium.
The studio also adds special recording and hosting tools like camera bots for streaming games within the stadium to Twitch and YouTube. Aspects of the stadium’s appearance can even be changed to match whatever game is being played.
“We are the platform. Ideally, we don’t ever want to be the content creators,” said Ortlepp. “So we have certain user modes for the ones that are actually operating the tournaments.”
When Can We Expect an App?
Virtex Stadium is up and running. But, the team plans to spend at least the next few months in their “closed” beta phase. For one thing, they really want to have their moderation plan in place before making the app more discoverable. They’re also still collecting feedback on their production tools – and thinking of new ones.
Further, while the platform currently has a decent schedule, the team wants to work with more games and more gaming communities. That includes other VR titles as well as more traditional esports. Ideally, one day, something will be happening in Virtex no matter when a user signs in.
“Where do we take it from here? There are no standards – no one has done this before,” said Ortlepp. “The virtual home of esports is basically the vision. It’s something we don’t claim yet – we have to earn it.”
It’s Not Too Early to Check It Out
Everything about Virtex is exciting, from their plans for the virtual venue itself to their passion and concern for their community. Ortlepp said that the company is “careful about making dated timeline promises.” In a way, that’s a little frustrating, but only because the company would rather hold off on something amazing than push something that falls short of its vision.
By 2027, Europe has the potential to fully rely on domestic production of battery cells, meeting its EV and energy storage demands without any Chinese imports. That’s according to the latest forecast by Transport & Environment (T&E), a campaign group, which analyzed a range of manufacturer reports and press releases.
The European NGO further estimates that, in 2030, the companies with the largest battery cell production on the continent will be CATL, Northvolt, ACC, Freyr, and the Volkswagen Group.
About two-thirds of Europe’s needs for cathodes — an integral battery part — could also be produced in-house, the report finds. So far, 12 companies plan to become active in this part of the battery supply chain, with 17 plants announced in the region. Existing and scheduled projects include Umicore in Poland, Northvolt in Sweden, and BASF in Germany.
Northvolt’s first battery cell produced at the company’s Ett gigafactory in Sweden. Credit: Northvolt
Projections about the refining and processing of lithium are optimistic as well. While 100% of the refined lithium required for European batteries is currently imported from China and other countries, the bloc is expected to meet 50% of its demand by 2030. T&E has identified 24 projects so far, including Vulcan Energy Resources in Germany and Eramet in France.
The NGO warns, however, that these scenarios will not be realized unless backed by sufficient and timely funding, highlighting that the US’ Inflation Reduction Act (IRA) could attract European talent and factories to America.
“Europe needs the financial firepower to support its green industries in the global race with America and China,” Julia Poliscanova, senior director for vehicles and e-mobility at T&E, said. “A European Sovereignty Fund would support a truly European industrial strategy and not just countries with deep pockets. But spending rules need to be streamlined so that building a battery plant does not take the same amount of time as a coal plant.”
From steam power and electricity to computers and the internet, technological advancements have always disrupted labor markets, pushing out some careers while creating others. Artificial intelligence remains something of a misnomer — the smartest computer systems still don’t actually know anything — but the technology has reached an inflection point where it’s poised to affect new classes of jobs: artists and knowledge workers.
Specifically, the emergence of large language models – AI systems that are trained on vast amounts of text – means computers can now produce human-sounding written language and convert descriptive phrases into realistic images. The Conversation asked five artificial intelligence researchers to discuss how large language models are likely to affect artists and knowledge workers. And, as our experts noted, the technology is far from perfect, which raises a host of issues — from misinformation to plagiarism — that affect human workers.
Lynne Parker, Associate Vice Chancellor, University of Tennessee
Large language models are making creativity and knowledge work accessible to all. Everyone with an internet connection can now use tools like ChatGPT or DALL-E 2 to express themselves and make sense of huge stores of information by, for example, producing text summaries.
These new AI tools can’t read minds, of course. A new, yet simpler, kind of human creativity is needed in the form of text prompts to get the results the human user is seeking. Through iterative prompting — an example of human-AI collaboration — the AI system generates successive rounds of outputs until the human writing the prompts is satisfied with the results. For example, the (human) winner of the recent Colorado State Fair competition in the digital artist category, who used an AI-powered tool, demonstrated creativity, but not of the sort that requires brushes and an eye for color and texture.
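The loop Parker describes is simple enough to sketch in a few lines of Python. This is an illustrative stand-in, not any particular product’s API — the `generate` callable represents whatever text or image model is in use:

```python
# Minimal sketch of the iterative prompting loop described above.
# `generate` is a stand-in for whatever model API is in use.
def iterate_prompts(generate, initial_prompt):
    prompt = initial_prompt
    while True:
        result = generate(prompt)
        print(result)
        refinement = input("Refine the prompt (press Enter to accept): ")
        if not refinement:
            return result  # the human is satisfied with this output
        prompt = refinement

# Toy stand-in generator; a real system would call a model here.
iterate_prompts(lambda p: f"[model output for: {p!r}]", "a cat in VR")
```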
While there are significant benefits to opening the world of creativity and knowledge work to everyone, these new AI tools also have downsides. For one, they could accelerate the loss of human skills that will remain important in the coming years, especially writing skills. Educational institutions need to craft and enforce policies on allowable uses of large language models to ensure fair play and desirable learning outcomes.
As society navigates the implications of these new AI tools, the public seems ready to embrace them. The chatbot ChatGPT went viral quickly, as did image generator DALL-E mini and others. This suggests a huge untapped potential for creativity, and the importance of making creative and knowledge work accessible to all.
Potential inaccuracies, biases and plagiarism
Daniel Acuña, Associate Professor of Computer Science, University of Colorado Boulder
I am a regular user of GitHub Copilot, a tool for helping people write computer code, and I’ve spent countless hours playing with ChatGPT and similar tools for AI-generated text. In my experience, these tools are good at exploring ideas that I haven’t thought about before.
I’ve been impressed by the models’ capacity to translate my instructions into coherent text or code. They’re useful for discovering new ways to improve the flow of my ideas, or creating solutions with software packages that I didn’t know existed. Once I see what these tools generate, I can evaluate their quality and edit heavily. Overall, I think they raise the bar on what is considered creative.
But I have several reservations.
One set of problems is their inaccuracies — small and big. With Copilot and ChatGPT, I’m constantly looking for whether ideas are too shallow — for example, text without much substance or inefficient code, or output that is just plain wrong, such as wrong analogies or conclusions, or code that doesn’t run. If users aren’t critical of what these tools produce, the tools are potentially harmful.
Another problem is biases. Language models can learn from the data’s biases and replicate them. These biases are hard to see in text generation but very clear in image generation models. Researchers at OpenAI, creators of ChatGPT, have been relatively careful about what the model will respond to, but users routinely find ways around these guardrails.
Plagiarism is easier to see in images than in text. Is ChatGPT paraphrasing as well? Somepalli, G., et al., CC BY
These tools are in their infancy, given their potential. For now, I believe there are solutions to their current limitations. For example, tools could fact-check generated text against knowledge bases, use updated methods to detect and remove biases from large language models, and run results through more sophisticated plagiarism detection tools.
With humans surpassed, niche and ‘handmade’ jobs will remain
Kentaro Toyama, Professor of Community Information, University of Michigan
We human beings love to believe in our specialness, but science and technology have repeatedly proven this conviction wrong. People once thought that humans were the only animals to use tools, form teams or propagate culture, but science has shown that other animals do each of these things.
Meanwhile, technology has quashed, one by one, claims that cognitive tasks require a human brain. The first adding machine was invented in 1623. This past year, a computer-generated work won an art contest. I believe that the singularity — the moment when computers meet and exceed human intelligence — is on the horizon.
How will human intelligence and creativity be valued when machines become smarter and more creative than the brightest people? There will likely be a continuum. In some domains, people still value humans doing things, even if a computer can do it better. It’s been a quarter of a century since IBM’s Deep Blue beat world champion Garry Kasparov, but human chess — with all its drama — hasn’t gone away.
In other domains, human skill will seem costly and extraneous. Take illustration, for example. For the most part, readers don’t care whether the graphic accompanying a magazine article was drawn by a person or a computer — they just want it to be relevant, new and perhaps entertaining. If a computer can draw well, do readers care whether the credit line says Mary Chen or System X? Illustrators would, but readers might not even notice.
And, of course, this question isn’t black or white. Many fields will be a hybrid, where some Homo sapiens find a lucky niche, but most of the work is done by computers. Think manufacturing — much of it today is accomplished by robots, but some people oversee the machines, and there remains a market for handmade products.
If history is any guide, it’s almost certain that advances in AI will cause more jobs to vanish, that creative-class people with human-only skills will become richer but fewer in number, and that those who own creative technology will become the new mega-rich. If there’s a silver lining, it might be that when even more people are without a decent livelihood, people might muster the political will to contain runaway inequality.
Old jobs will go, new jobs will emerge
Mark Finlayson, Associate Professor of Computer Science, Florida International University
Large language models are sophisticated sequence completion machines: Give one a sequence of words (“I would like to eat an …”) and it will return likely completions (“… apple.”). Large language models like ChatGPT that have been trained on record-breaking numbers of words (trillions) have surprised many, including many AI researchers, with how realistic, extensive, flexible, and context-sensitive their completions are.
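That sequence-completion behavior is easy to demonstrate with an open-source model. Here’s a minimal sketch using the Hugging Face transformers library — GPT-2 serves as a small stand-in here, far smaller than the models Finlayson describes:

```python
# Minimal sequence-completion demo using the open-source Hugging Face
# `transformers` library. GPT-2 here is a small stand-in; the models
# the author describes are trained on vastly more text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("I would like to eat an", max_new_tokens=5)
print(out[0]["generated_text"])  # e.g. "I would like to eat an apple..."
```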
Like any powerful new technology that automates a skill — in this case, the generation of coherent, albeit somewhat generic, text — it will affect those who offer that skill in the marketplace. To conceive of what might happen, it is useful to recall the impact of the introduction of word processing programs in the early 1980s. Certain jobs like typist almost completely disappeared. But, on the upside, anyone with a personal computer was able to generate well-typeset documents with ease, broadly increasing productivity.
Further, new jobs and skills appeared that were previously unimagined, like the oft-included resume item MS Office. And the market for high-end document production remained, becoming much more capable, sophisticated and specialized.
I think this same pattern will almost certainly hold for large language models: There will no longer be a need for you to ask other people to draft coherent, generic text. On the other hand, large language models will enable new ways of working, and also lead to new and as yet unimagined jobs.
To see this, consider a few aspects where large language models fall short. First, it can take quite a bit of (human) cleverness to craft a prompt that gets the desired output. Minor changes in the prompt can result in a major change in the output.
Second, large language models can generate inappropriate or nonsensical output without warning.
These failings are opportunities for creative and knowledge workers. For much content creation, even for general audiences, people will still need the judgment of human creative and knowledge workers to prompt, guide, collate, curate, edit, and especially augment machines’ output. Many types of specialized and highly technical language will remain out of reach of machines for the foreseeable future. And there will be new types of work — for example, those who will make a business out of fine-tuning in-house large language models to generate certain specialized types of text to serve particular markets.
In sum, although large language models certainly portend disruption for creative and knowledge workers, there are still many valuable opportunities in the offing for those willing to adapt to and integrate these powerful new tools.
Leaps in technology lead to new skills
Casey Greene, Professor of Biomedical Informatics, University of Colorado Anschutz Medical Campus
Technology changes the nature of work, and knowledge work is no different. The past two decades have seen biology and medicine transformed by rapidly advancing molecular characterization — such as fast, inexpensive DNA sequencing — and the digitization of medicine in the form of apps, telemedicine and data analysis.
Some steps in technology feel larger than others. Yahoo deployed human curators to index emerging content during the dawn of the World Wide Web. The advent of algorithms that used information embedded in the linking patterns of the web to prioritize results radically altered the landscape of search, transforming how people gather information today.
Just as the skills for finding information on the internet changed with the advent of Google, the skills necessary to draw the best output from language models will center on creating prompts and prompt templates that produce desired outputs.
Consider a cover letter, for example; multiple prompts are possible. “Write a cover letter for a job” would produce a more generic output than “Write a cover letter for a position as a data entry specialist.” The user could craft even more specific prompts by pasting in portions of the job description and resume, along with specific instructions — for example, “highlight attention to detail.”
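In practice, that kind of specificity is often captured in a reusable prompt template. Here’s a minimal sketch — the field names and template wording are illustrative, not any standard:

```python
# A minimal prompt-template sketch for the cover letter example.
# Field names and the template wording are illustrative, not a standard.
TEMPLATE = (
    "Write a cover letter for a position as a {role}.\n"
    "Job description: {job_description}\n"
    "Relevant experience: {experience}\n"
    "Specific instructions: {instructions}"
)

prompt = TEMPLATE.format(
    role="data entry specialist",
    job_description="Maintain and verify records in a customer database.",
    experience="Three years of database administration.",
    instructions="Highlight attention to detail.",
)
print(prompt)
```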
As with many technological advances, how people interact with the world will change in the era of widely accessible AI models. The question is whether society will use this moment to advance equity or exacerbate disparities.
Psytec Games announced its grappling hook-flavored platforming adventure Windlands 2 (2018) is finally swinging its way onto Quest 2, coming to Meta’s standalone for the first time on February 2nd.
Like the original Windlands (2016), which is currently available through App Lab on Quest, Windlands 2 is all about swinging from tree to tree in a large, open world filled with a ton of nooks and crannies to parkour around.
But the sequel changes things up a fair bit by fleshing out the fallen world of Windlands with the addition of quests and some pretty epic boss fights, which you can take down with your trusty bow—either solo or with friends in four-player co-op. Besides following the main story, there’s also a ton of races and collection challenges with leaderboards.
Image courtesy Psytec Games
Originally released on SteamVR and Rift in November 2018, and later on PSVR in November 2021, Windlands 2 is heading to the official store for Quest 2 come February 2nd where it will be priced at $24.99. You can wishlist the game on the Quest Store here.
Notably, Windlands 2 is coming to Quest 2 (read: not Quest 1) without cross-buy or cross-platform support. That means if you own the Quest 2 version, you can only play with other Quest 2 owners; the same goes for the PC and PSVR versions, respectively.
When we reviewed Windlands 2 for PC VR in 2018, we called it “the true starting point” for the series, as it sets up a much larger world and story that feels like the beginning of a more expansive adventure than its zen-like forebear. Check out our full review of Windlands 2, where we gave it an [8/10] for deftly translating Windlands’ unique grappling hook locomotion into a vibrant combat platformer in its own right.
Make no mistake, the original is still very much worth playing for puzzling and parkour purists, although number two really seems to expand the world by filling it with quest-giving NPCs, villains, and boss battles galore, taking around six hours to complete.
At CES 2023 Pimax was showing off its latest high-resolution headset, the Pimax Crystal, which uses new lenses and new displays for what the company says is its clearest looking image yet. And while it’s definitely an improvement in many areas over the company’s previous headsets, there’s a key flaw that I hope Pimax will be able to address.
Pimax Crystal employs new lenses and promises to be free of the glare and god rays that were apparent in prior Pimax headsets (and many others) which used Fresnel lenses. Those lenses come alongside high-resolution displays, purported HDR capability, swappable lenses (to trade field-of-view for pixel density), and up to a 160Hz refresh rate. For a full breakdown of the headset’s specs, see our announcement article.
At CES 2023 I got to see the headset myself for the first time. Although the headset is technically capable of running in standalone mode, I saw it running as a PC VR headset with SteamVR Tracking.
Pimax Crystal (pictured without the SteamVR Tracking faceplate) | Photo by Road to VR
Naturally, the demo I was shown was running Half-Life: Alyx—arguably VR’s best looking game—to show off the detail the headset can reproduce with its 2,880 × 2,880 (8.3MP) per-eye displays. From the quick hands-on I got with Pimax Crystal, I could see this was a big step up in clarity over the company’s prior headsets, especially with regards to edge-to-edge clarity. The visual basics were solid too in terms of pupil swim, geometric distortion, and chromatic aberration. There was a little mura visible on this headset but nothing egregious as far as I could tell.
But there was one thing that immediately stood out to my eyes and foils an otherwise good-looking image: blur during head movement. While the static image seen through the headset looks quite sharp, as soon as you start moving your head to look around the world you’ll see a lot of blur—a problem for VR considering that your head is very frequently in motion.
Photo by Road to VR
My best guess is this is being caused by persistence blur, a display artifact that’s mostly solved on other headsets and is thus rarely seen anymore. Persistence blur is caused by the display staying lit for too long: as you turn your head, the pixels remain lit even while their positions become inaccurate (they are ‘frozen’ in place each frame, until the next frame comes along and updates their positions to account for your head movement). Most headsets employ a form of ‘low persistence’ that counteracts this issue by illuminating the display for only a fraction of the time between frames; as you move your head, the pixels aren’t ‘frozen’ in place but are actually unlit, leaving your brain to fill in the gaps without seeing the pixels smear between frames.
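A rough back-of-the-envelope estimate shows why persistence matters so much. All the numbers below are illustrative assumptions (Pimax hasn’t published persistence figures), but the relationship — blur scales with head speed, lit time, and pixel density — holds:

```python
# Back-of-the-envelope persistence blur estimate. All numbers are
# illustrative assumptions; Pimax hasn't published persistence figures.
head_speed_deg_s = 100.0   # a moderate head turn, in degrees/second
pixels_per_degree = 35.0   # assumed Crystal-class pixel density
frame_time_s = 1.0 / 90.0  # one frame at a 90Hz refresh rate

for persistence in (1.0, 0.2):  # full persistence vs. low persistence
    lit_time_s = frame_time_s * persistence
    blur_px = head_speed_deg_s * lit_time_s * pixels_per_degree
    print(f"persistence {persistence:.0%}: ~{blur_px:.1f} px of smear")
# full persistence: ~38.9 px of smear; low persistence: ~7.8 px
```

Under these assumed numbers, a fully-lit frame smears a point across dozens of pixels during an ordinary head turn, while low persistence cuts that smear proportionally to the lit-time fraction.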
The amount of blur I saw through Pimax Crystal, I would say, notably compromises what is otherwise an impressively clean image, though there’s a chance that Pimax could fix this issue, depending upon exactly what’s causing it.
For one, it’s possible that the headsets being shown at CES 2023 weren’t fully tuned and that low persistence hasn’t been properly configured (or maybe isn’t even enabled yet). In that case it might be a matter of final tweaks before they get the correct display behavior, which could reduce persistence blur.
Another factor could be the headset’s ‘HDR’ capability. While I don’t believe Pimax has shared any information on peak brightness, it’s possible that the display can’t do both low-persistence and HDR brightness at the same time (indeed this is a challenge because HDR needs high brightness while low-persistence needs pixels to be illuminated only for a minimal amount of time).
And still there are other possibilities—this might not be persistence blur at all, but simply slow pixel switching time causing some form of ghosting, which could be an inherent limitation of the display or maybe something that could be tweaked.
– – — – –
Ultimately I’m pretty impressed with the clarity and wide field-of-view of the Pimax Crystal, but the blur I’ve seen during head movement compromises the image in my book. My gut says this is probably a persistence blurring issue, though it could be something else. We’ll have to wait to see what Pimax says about this and if they’re able to make improvements by the time Crystal ships.
Photo by Road to VR
Speaking of Crystal shipping: the headset was originally planned for release in Q3 of 2022, but that date has slipped. Although the company hosted a ‘Pimax Crystal Launch Event’ back in November, at CES 2023 Pimax said the first headsets will start being delivered at the end of this month, though the company also indicates that it won’t reach full production capacity until the middle of the year. Even when the first units do start shipping, key accessories and features, like the headset’s standalone mode—which makes up about half of its value proposition—aren’t expected to be available until unspecified points in the future.
Gorilla Tag is undoubtedly a hit. Its primate-centric locomotion style and infectious game of tag has vaulted it into the top spot as the most-rated game on the Quest Store, surpassing even the Meta-owned rhythm game Beat Saber. Now, the indie team behind Quest’s most popular game revealed they’ve generated over $26 million with Gorilla Tag.
Speaking to VentureBeat, developer Another Axiom reported that its gorilla-themed game has not only brought in $26 million from in-app purchases, but has also attracted more players than previously reported.
Having initially launched on App Lab in March 2021 and later released on the official Quest Store this past December, the devs behind the free-to-play game say it has reached a peak monthly active user count of 2.3 million. On Christmas, when Meta typically sees a big influx of users, over 760,000 users played Gorilla Tag.
It is free-to-play on Quest—its biggest platform—although a paid Steam Early Access version is also available for PC VR headsets, costing $20 and including an equal value of the game’s in-game currency, shiny rocks.
Therein lies Gorilla Tag’s monetization strategy, as in-app purchases include a range of cosmetic items such as hats, glasses, and seasonal items like Santa beards and candy canes.
Developer Kerestell Smith told Road to VR last month that the game’s main driver in getting players in the door (and spending cash) was some well-timed virality on TikTok, with the hashtag #gorillatag seeing 4.4 billion views to date.
Today, the game sits at over 52,000 reviews, ranking above Beat Saber’s 46,000 reviews, making it the most-rated game on the platform. At the time of this writing, Gorilla Tag is the fourth best-rated free game on Quest, sitting behind GYM CLASS – BASKETBALL VR, Innerworld, and First Steps for Quest 2.
Check out the full rankings from this month, which we break down into best and most rated games for both paid and free titles on Quest.