Artificial Intelligence

Italy to launch €150M fund for AI startups

Story by Linnea Ahlgren

Linnea is the senior editor at TNW, having joined in April 2023. She has a background in international relations and covers clean and climate tech, AI and quantum computing. But first, coffee.

Italy is the latest country looking to quickly shore up domestic development of an AI ecosystem. As part of its Strategic Program for Artificial Intelligence, the government will “soon” launch a €150 million fund to support startups in the field, backed by development bank Cassa Depositi e Prestiti (CDP). 

As reported by Corriere Comunicazioni, Alessio Butti, Italy’s cabinet undersecretary in charge of technological innovation, relayed the news of the state-backed fund yesterday. While he didn’t provide specific details on the amount to be made available, government sources subsequently told Reuters the figure being discussed in Rome was in the vicinity of €150 million.

“Our goal is to increase the independence of Italian industry and cultivate our national capacity to develop skills and research in the sector,” Butti said. “This is why we are working with CDP on the creation of an investment fund for the most innovative startups, so that study, research, and programming on AI can be promoted in Italy.”

Navigating regulation and support

Indeed, the AI boom is here in earnest. Yesterday, Nvidia became the first chipmaker to hit a $1 trillion valuation. The boost to its stock followed a prediction of sales reaching $11 billion in Q2 off the back of the company’s chips powering OpenAI’s ChatGPT (which, coincidentally, got off on a bit of a bad foot with Italy).

Those who do not yet have their hands in the (generative) AI pie are now racing to be part of the algorithm-driven gold rush of the 21st century.

While intent on regulatory oversight, governments are also, for various reasons, keen on supporting domestic developers in the field of artificial intelligence. Last month, the UK made £100 million in funding available for a task force to help build and adopt the “next generation of safe AI.” 

Italy is also looking to set up its own “ad hoc” task force. Butti stated, “In Italy we must update the strategy of the sector, and therefore the Department for Digital Transformation is working on the establishment of an authoritative group of Italian experts and scholars.”

Part of national AI strategy

Italy adopted the Strategic Program for Artificial Intelligence 2022-2024 in 2021 but, of course, the industry is evolving at breakneck speed. The strategy is a joint project between the ministries for university and research, economic development, and technological innovation and digital transition. Additionally, it is guided by a working group on the national strategy for AI. 

The program outlines 24 policies the government intends to implement over the three-year period. Beyond measures to support the domestic development of AI, these include promoting STEM subjects and increasing the number of doctorates to attract international researchers. Furthermore, they target the creation of data infrastructure for public administration and specific support for startups working in GovTech and looking to solve critical problems in the public sector.


Google launches €10M social innovation AI fund for European entrepreneurs

Story by Linnea Ahlgren

In conjunction with CEO Sundar Pichai’s visit to Stockholm yesterday, Google announced the launch of the second Google.org Social Innovation Fund on AI to “help social enterprises solve some of Europe’s most pressing challenges.”

Through the fund, Google is making €10 million available, along with mentoring and support, for entrepreneurs from underserved backgrounds. The aim is to help them develop transformative AI solutions that specifically target problems they face on a daily basis.

The fund will channel capital via a grant to INCO for the expansion of Social Tides, an accelerator program funded by Google.org, which will provide cash support of up to $250,000 (€232,000).

In 2021, Google put up €20 million for European AI social innovation startups through the same mechanism. Among the beneficiaries at that time was The Newsroom in Portugal, which uses an AI-powered app to encourage a more contextualised reading experience, taking people out of their bubble and reducing polarisation.

Mini-European tour ahead of AI Act

Of the money offered by the tech giant this time around, €1 million will be earmarked for nonprofits that are helping to strengthen and grow social entrepreneurship in Sweden.

During his brief stay, Pichai met with the country’s prime minister and visited the KTH Royal Institute of Technology to meet with students and professors.

Google’s CEO Sundar Pichai visited KTH and talked about artificial intelligence. He notes that it is OK to be afraid, as long as the fear is put to good use. https://t.co/imbtxxbSVn pic.twitter.com/oWal43dc2a

— KTH Royal Institute of Technology (@KTHuniversity) May 24, 2023

Sweden currently holds the six-month-long rotating Presidency of the European Union. Pichai’s visit to Stockholm preceded a trip to meet with European Commission deputy chief Vera Jourova and EU industry chief Thierry Breton on Wednesday. 

Breton is one of the drivers behind the EU’s much-anticipated AI Act, a world-first attempt at far-reaching AI regulation. One of the biggest sources of contention — and surely subject to much lobbying from the industry — is whether so-called general purpose AI, such as the technology behind ChatGPT or Google’s Bard, should be considered “high-risk.”

Speaking to Swedish news outlet SVT on the day of his visit, Pichai stated that he believes that AI is indeed too important not to regulate, and to regulate well. “It is definitely going to involve governments, companies, academic universities, nonprofits, and other stakeholders,” Google’s top executive said. 

However, he may be doing some convincing of his own in Brussels, further adding, “These AI systems are going to be used for everything, from recommending a nearby coffee shop to potentially recommending a health treatment for you. As you can imagine, these are very different applications. So where we could get it wrong is to apply a high-risk assessment to all these use cases.” 

Will Pichai succeed in convincing the Commission? If so, then, just maybe, Bard will launch in Europe too.


Tech’s role in the quest for climate justice: What not to miss at TNW Conference

Story by Linnea Ahlgren

Award-winning innovators Caroline Lair and Lucia Gallardo will be speaking at TNW Conference, which takes place on June 15 & 16 in Amsterdam. If you want to experience the event (and say hi to our editorial team!), we’ve got something special for our loyal readers. Use the promo code READ-TNW-25 and get a 25% discount on your business pass for TNW Conference. See you in Amsterdam!

Social inequality and climate risk have become central to understanding what will drive innovation – and investment – for the future. On day two of TNW Conference, Caroline Lair, founder of the startup and scaleup communities The Good AI and Women in AI, and Lucia Gallardo, founder and CEO of Emerge, an impact innovation “socio-technological experimentation lab,” will be on the Growth Quarters stage for a session titled “Technology-Driven Climate Justice.”

The climate crisis is itself the result of a deeply embedded and systemic exploitation of nature and people in the name of profit. Its impact is already being felt disproportionately across the world, with severe heat waves, droughts, and entire nations disappearing below sea level. What’s more, the people worst affected are those who have contributed least to the greenhouse gas emissions driving global warming.

Climate justice is the idea that climate change is not just an environmental issue but also a social justice issue; it aims to ensure that the transition to a low-carbon economy is equitable and benefits everyone. Lair and Gallardo will specifically speak about how technologies such as AI, blockchain, and Web3 can play a crucial role in addressing these issues.

AI for good

Artificial intelligence can be applied in the quest for climate justice in several ways, provided it is implemented in a way that ensures transparency, accountability, and fairness. These include data analysis and prediction, discovering patterns and informing policies, as well as evaluating their effectiveness.

It can also enhance climate modelling capabilities, crucial for developing adaptation strategies. Furthermore, AI-powered technologies can monitor, for instance, weather systems with real-time data and also optimise resource allocation and energy distribution.

Reimagining value

Emerge’s objective is to “reimagine impact innovation with regenerative monetisation models.” Regenerative finance goes beyond traditional models that focus on profit, taking into account broader social, environmental, and economic impacts. 

Blockchain technology can, for instance, offer transparency for transactions, ensuring that funds are indeed directed to regenerative investments. It can also tokenise regenerative assets such as renewable energy installations, sustainable agriculture initiatives, or ecosystem restoration projects, representing them as digital tokens and making them more accessible to a broader range of investors. 

Meanwhile, in the words of Gallardo, “Integrating crypto into existing ecological initiatives doesn’t automatically mean it is applied regenerative finance. We must be intentional about how we’re reimagining value.”

Reclaiming an equitable future

Why am I looking forward to this session? The theme of this year’s TNW Conference is “Reclaim The Future”. In all honesty, I belong to a generation that, while hopefully having several decades more of on-earth experience ahead, will most likely not have to deal with full-on dystopian scenarios, battling to survive climate catastrophe.

I am also privileged in terms of geographical location and socioeconomic status not to have to worry about immediate drought and famine. (Flooding may be another matter, but as someone said when convincing me to move to Amsterdam – “wouldn’t you prefer to live in a place that is already used to keeping water out?”) 

However, this does not mean that we who enjoy such privileges get to simply shrug our shoulders and carry on indulging in business as usual. TNW has always been about the good technology can do in the world. And what is better than employing it in service of one of the greatest challenges of our time?

This ‘Skyrim VR’ Mod Shows How AI Can Take VR Immersion to the Next Level

The latest version of the project introduces Skyrim scripting for the first time, which the developer says allows for lip syncing of voices and NPC awareness of in-game events. While still a little rigid, it feels like a pretty big step towards climbing out of the uncanny valley.

Here’s how ‘Art from the Machine’ describes the project in a recent Reddit post showcasing their work:

A few weeks ago I posted a video demonstrating a Python script I am working on which lets you talk to NPCs in Skyrim via ChatGPT and xVASynth. Since then I have been working to integrate this Python script with Skyrim’s own modding tools and I have reached a few exciting milestones:

NPCs are now aware of their current location and time of day. This opens up lots of possibilities for ChatGPT to react to the game world dynamically instead of waiting to be given context by the player. As an example, I no longer have issues with shopkeepers trying to barter with me in the Bannered Mare after work hours. NPCs are also aware of the items picked up by the player during conversation. This means that if you loot a chest, harvest an animal pelt, or pick a flower, NPCs will be able to comment on these actions.

NPCs are now lip synced with xVASynth. This is obviously much more natural than the floaty proof-of-concept voices I had before. I have also made some quality of life improvements such as getting response times down to ~15 seconds and adding a spell to start conversations.

When everything is in place, it is an incredibly surreal experience to be able to sit down and talk to these characters in VR. Nothing takes me out of the experience more than hearing the same repeated voice lines, and with this no two responses are ever the same. There is still a lot of work to go, but even in its current state I couldn’t go back to playing without this.

You might notice the actual voice prompting the NPCs is fairly robotic too, although ‘Art from the Machine’ says they’re using speech-to-text to talk to the ChatGPT 3.5-driven system. The voice heard in the video is generated by xVASynth and then plugged in during video editing to replace what they call their “radio-unfriendly voice.”
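To make the moving parts a bit more concrete, here is a minimal, hypothetical Python sketch of the loop described above: transcribe the player’s speech, inject live game context into a ChatGPT prompt, and hand the reply off for voice synthesis. This is not the mod’s actual code; the helper functions and game-state values are invented placeholders, and only the pre-1.0 `openai.ChatCompletion.create` call reflects a real API.

```python
import openai  # assumes the pre-1.0 openai package and an OPENAI_API_KEY in the environment


def transcribe_player_speech(audio_path: str) -> str:
    """Stand-in for the mod's speech-to-text step; a real version would call
    an actual recogniser instead of returning a canned line."""
    return "Do you have any healing potions for sale?"


def synthesise_npc_voice(npc_name: str, text: str) -> None:
    """Stand-in for handing the reply to xVASynth for lip-synced audio."""
    print(f"[{npc_name} says] {text}")


def build_npc_messages(npc_name: str, game_context: dict, player_line: str) -> list:
    """Inject live game state into the prompt so the NPC can react to its
    location, the time of day, and items looted during the conversation."""
    items = ", ".join(game_context["recent_items"]) or "none"
    system_prompt = (
        f"You are {npc_name}, a character in Skyrim. "
        f"Current location: {game_context['location']}. "
        f"Time of day: {game_context['time_of_day']}. "
        f"Items the player just picked up: {items}. "
        "Stay in character and keep replies short."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": player_line},
    ]


def talk_to_npc(npc_name: str, game_context: dict, audio_path: str) -> str:
    player_line = transcribe_player_speech(audio_path)
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=build_npc_messages(npc_name, game_context, player_line),
    )
    reply = response["choices"][0]["message"]["content"]
    synthesise_npc_voice(npc_name, reply)
    return reply


# Fabricated game state, purely for illustration:
context = {
    "location": "The Bannered Mare, Whiterun",
    "time_of_day": "late evening",
    "recent_items": ["wolf pelt"],
}
# talk_to_npc("Hulda", context, "player_line.wav")
```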

And when can you download and play for yourself? Well, the developer says publishing their project is still a bit of a sticky issue.

“I haven’t really thought about how to publish this, so I think I’ll have to dig into other ChatGPT projects to see how others have tackled the API key issue. I am hoping that it’s possible to alternatively connect to a locally-run LLM model for anyone who isn’t keen on paying the API fees.”
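As a rough illustration of what that alternative could look like, the sketch below swaps the hosted ChatGPT call for a model running on the player’s own machine via the open-source llama-cpp-python bindings. The package and its `create_chat_completion` method exist, but the model filename and the idea of dropping it into the mod this way are assumptions, not anything the developer has confirmed.

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Load a locally downloaded model file (placeholder path, not a real release name).
llm = Llama(model_path="models/local-npc-model.bin", n_ctx=2048)


def local_npc_reply(messages: list) -> str:
    """Drop-in replacement for the hosted chat call: same message format as the
    OpenAI sketch above, but inference runs locally, so there are no API fees."""
    result = llm.create_chat_completion(messages=messages, max_tokens=150)
    return result["choices"][0]["message"]["content"]
```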

Serving up more natural NPC responses is also an area that needs to be addressed, the developer says.

For now I have it set up so that NPCs say “let me think” to indicate that I have been heard and the response is in the process of being generated, but you’re right this can be expanded to choose from a few different filler lines instead of repeating the same one every time.

And while the video is noticeably sped up after prompts, this mostly comes down to the voice generation software xVASynth, which admittedly slows the response pipeline down since it’s being run locally. ChatGPT itself doesn’t affect performance, the developer says.

This isn’t the first project we’ve seen using chatbots to enrich user interactions. Lee Vermeulen, a long-time VR pioneer and the developer behind Modbox, released a video in 2021 showing off one of his first tests using OpenAI’s GPT-3 and voice acting software Replica. In Vermeulen’s video, he talks about how he set parameters for each NPC, giving them the body of knowledge they should have, all of which guides the sort of responses they’ll give.

Check out Vermeulen’s video below, the very same that inspired ‘Art from the Machine’ to start working on the Skyrim VR mod:

As you’d imagine, this is really only the tip of the iceberg for AI-driven NPC interactions. Being able to naturally talk to NPCs, even if a little stuttery and not exactly at human level, may be preferable to having to wade through a ton of 2D text menus, or go through slow and ungainly tutorials. It also offers up the chance to bond more with your trusty AI companion, like Skyrim’s Lydia or Fallout 4’s Nick Valentine, who instead of offering up canned dialogue might actually, you know, help you out every once in a while.

And that’s really only the surface-level stuff that a mod like ‘Art from the Machine’ might deliver to existing games that aren’t built with AI-driven NPCs. Imagining a game that is actually predicated on your ability to ask the right questions and do your own detective work—well, that’s a role-playing game we’ve never experienced before, either in VR or otherwise.


German creatives want EU to address ChatGPT copyright concerns

Story by Linnea Ahlgren

ChatGPT has had anything but a triumphant welcome tour around Europe. Following grumbling regulators in Italy and the European Parliament, the turn has come for German trade unions to express their concerns over potential copyright infringement. 

No fewer than 42 trade organisations representing over 140,000 of the country’s authors and performers have signed a letter urging the EU to impose strict rules on the AI’s use of copyrighted material.

As first reported by Reuters, the letter, which underlined increasing concerns about copyright and privacy issues stemming from the material used to train the large language model (LLM), stated:

“The unauthorised usage of protected training material, its non-transparent processing, and the foreseeable substitution of the sources by the output of generative AI raise fundamental questions of accountability, liability and remuneration, which need to be addressed before irreversible harm occurs.”

Signatories include major German trade unions Verdi and DGB, as well as other associations for photographers, designers, journalists and illustrators. The letter’s authors further added that, 

“Generative AI needs to be at the centre of any meaningful AI market regulation.”

ChatGPT is not the only target of copyright contention. In January, visual media company Getty Images filed a copyright claim against Stability AI. According to the lawsuit, the image-generation tool’s developer allegedly copied over 12 million photos, captions, and metadata without permission.

LLM training offers diminishing returns

The arrival of OpenAI’s ChatGPT has sparked a flurry of concerns. Thus far, these have covered everything from aggressive development driven by a commercially motivated AI “arms race” to matters of privacy, data protection, and copyright.

Meanwhile, one of the originators of the controversy, the company’s CEO Sam Altman, stated last week that the strategy of ever-larger models behind ChatGPT has run its course. Indeed, OpenAI forecasts diminishing returns on scaling up model size. The company trained its latest model, GPT-4, using over a trillion words, at a cost of about $100 million.

At the same time, the EU’s Artificial Intelligence Act is nearing its home stretch. While it may well set a global regulatory standard, the question is how well it will be able to adapt as developers find other new and innovative ways of making algorithms more efficient.



Meta Shows New Progress on Key Tech for Making AR Genuinely Useful

Meta has introduced the Segment Anything Model, which aims to set a new bar for computer-vision-based ‘object segmentation’—the ability for computers to understand the difference between individual objects in an image or video. Segmentation will be key for making AR genuinely useful by enabling a comprehensive understanding of the world around the user.

Object segmentation is the process of identifying and separating objects in an image or video. With the help of AI, this process can be automated, making it possible to identify and isolate objects in real-time. This technology will be critical for creating a more useful AR experience by giving the system an awareness of various objects in the world around the user.

The Challenge

Imagine, for instance, that you’re wearing a pair of AR glasses and you’d like to have two floating virtual monitors on the left and right of your real monitor. Unless you’re going to manually tell the system where your real monitor is, it must be able to understand what a monitor looks like so that when it sees your monitor it can place the virtual monitors accordingly.

But monitors come in all shapes, sizes, and colors. Sometimes reflections or occluded objects make it even harder for a computer-vision system to recognize them.

Having a fast and reliable segmentation system that can identify each object in the room around you (like your monitor) will be key to unlocking tons of AR use-cases so the tech can be genuinely useful.

Computer-vision-based object segmentation has been an ongoing area of research for many years now, but one of the key issues is that, in order to help computers understand what they’re looking at, you need to train an AI model by giving it lots of images to learn from.

Such models can be quite effective at identifying the objects they were trained on, but they will struggle with objects they haven’t seen before. That means one of the biggest challenges for object segmentation is simply having a large enough set of images for the systems to learn from, and collecting those images and annotating them in a way that makes them useful for training is no small task.

SAM I Am

Meta recently published work on a new project called the Segment Anything Model (SAM). It’s both a segmentation model and a massive set of training images the company is releasing for others to build upon.

The project aims to reduce the need for task-specific modeling expertise. SAM is a general segmentation model that can identify any object in any image or video, even for objects and image types that it didn’t see during training.

SAM supports both automatic and interactive segmentation, allowing it to identify individual objects in a scene with simple inputs from the user. SAM can be ‘prompted’ with clicks, boxes, and other cues, giving users control over what the system is attempting to identify at any given moment.

It’s easy to see how this point-based prompting could work great if coupled with eye-tracking on an AR headset. In fact, that’s exactly one of the use-cases that Meta has demonstrated with the system:

Here’s another example of SAM being used on first-person video captured by Meta’s Project Aria glasses:

You can try SAM for yourself in your browser right now.
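For readers who would rather experiment from code than in the browser demo, here is a minimal sketch using the open-source `segment-anything` Python package Meta released alongside the model. The checkpoint filename, image path, and click coordinates below are placeholders; the project’s repository documents the actual checkpoint downloads and options.

```python
import cv2
import numpy as np
from segment_anything import SamPredictor, sam_model_registry

# Load a pretrained SAM checkpoint (placeholder filename -- download from Meta's repo).
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

# Read an image and hand it to the predictor, which computes its embedding once.
image = cv2.cvtColor(cv2.imread("room.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# Prompt the model with a single foreground click -- e.g. where an eye tracker says
# the user is looking -- and get back candidate masks for the object at that point.
point = np.array([[640, 360]])  # (x, y) pixel coordinates, illustrative only
label = np.array([1])           # 1 marks a foreground point, 0 a background point
masks, scores, _ = predictor.predict(
    point_coords=point,
    point_labels=label,
    multimask_output=True,  # return several plausible segmentations
)
best_mask = masks[np.argmax(scores)]  # boolean array marking the selected object
```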

How SAM Knows So Much

Part of SAM’s impressive abilities comes from its training data, which contains a massive 11 million images and 1.1 billion identified object shapes. It’s far more comprehensive than contemporary datasets, according to Meta, giving SAM much more experience in the learning process and enabling it to segment a broad range of objects.

Image courtesy Meta

Meta calls the SAM dataset SA-1B, and the company is releasing the entire set for other researchers to build upon.

Meta hopes this work on promptable segmentation, and the release of this massive training dataset, will accelerate research into image and video understanding. The company expects the SAM model can be used as a component in larger systems, enabling versatile applications in areas like AR, content creation, scientific domains, and general AI systems.



These are the new jobs generative AI could create in the future

Search interest in ChatGPT has jumped 2,633% since last December, shortly after its launch. For the artificial intelligence and machine learning industry, and for those working in tech as a whole, OpenAI’s chatbot represents a true crossing of the Rubicon.

A generative form of AI, it uses prompts to produce content and conversations, whereas traditional AI looks at things such as pattern detection, decision making, or classifying data. We already benefit from artificial intelligence, whether we realise it or not—from Siri in our Apple phones to the choices Netflix or Amazon Prime make for us to the personalisations and cyber protection that lie behind our commercial interactions.

ChatGPT is just one of an increasing number of generative AI tools, including Bing Chat and Google Bard. DeepMind’s AlphaCode writes computer programs at a competitive level; Jasper is an AI copywriter; and DALL-E, Midjourney, and Stable Diffusion can all create realistic images and art from a description you give them.

As a result, generative AI is now firmly embedded in the mainstream consciousness, with much credit going to ChatGPT’s easy-to-use interface and its ability to produce results that can be as sublime as they are ridiculous. Want it to produce some Python code? Sure thing—and it can generate a funny limerick for you too, if you’d like.

How generative AI will impact the job market

According to Salesforce, 57% of senior IT leaders believe generative AI is a game changer, and because it is intuitive and helpful, end users like it as well.

While your job may be safe from AI (for the moment), ChatGPT-generated content has made it into the top 20% of all candidates shortlisted for a communications consultant role at marketing company Schwa, and it has also passed Google’s level 3 engineering coding interview.

Roles that are likely to resist the advent of generative AI include graphic designers, programmers (though they are likely to adopt AI tools that speed up their process) and blockchain developers, but many other jobs are likely to be performed by AI in the (near) future.

These include customer service jobs—chatbots can do this efficiently. Bookkeeping or accounts roles are also likely to be replaced as software can do many of these tasks. Manufacturing will see millions of jobs replaced with smart machinery that does the same job, but faster.

But, while AI may replace some jobs, it will also generate a slew of new ones.

The World Economic Forum predicts that the technology will create 97 million new jobs by 2025. Jobs specifically related to the development and maintenance of AI and automation will see growing adoption as AI integrates across multiple industries.

These could include data detectives or scientists, prompt engineers, robotics engineers, machine managers, and programmers, particularly those who can code in Python, which is key for AI development. AI trainers and those with capabilities related to modelling, computational intelligence, machine learning, mathematics, psychology, linguistics, and neuroscience will also be in demand.

Healthcare looks set to benefit too, with PwC estimating that AI-assisted healthcare technician jobs will see an upward surge. A sector that is already creating new jobs is automated transportation with Tesla, Uber, and Google investing billions into AI-driven self-driving cars and trucks.

If you want to work in AI now, there are plenty of jobs on offer. Discover three below, or check out the House of Talent Job Board for many more opportunities.

Staff Data Engineer, Data & ML Products, Adevinta Group, Amsterdam

Adevinta is on the lookout for a top-notch Staff Data Engineer to join the team and make a global impact in an exciting and dynamic environment. You will build and run production-grade data and machine learning pipelines and products at scale in an agile setup. You will work closely with data scientists, engineers, architects, and product managers to create the technology that generates and transforms data into applications, insights, and experiences for users. You should be familiar with privacy regulation, be an ambassador of privacy by design, and actively participate in department-wide, cross-functional tech initiatives. Discover more here.

AIML – Annotation Analyst, German Market, Apple, Barcelona

Apple’s AIML team is passionate about technology with a focus on enriching the customer experience. It is looking for a motivated Annotation Analyst who can demonstrate active listening, integrity, acute attention to detail, and is passionate about impacting customers’ experience. You’ll need fluency in the German language with excellent comprehension, grammar, and proofreading skills, as well as excellent English reading comprehension and writing skills. You should also have excellent active listening skills, with the ability to understand verbal nuances. Find out more about the job here.

Artificial Intelligence Product Owner – M/F, BNP Paribas, Paris

As Artificial Intelligence Product Owner, you’ll report to the head of the CoE IA, ensuring improvements to data science tools (Stellar, Domino, D3) to integrate the needs of data scientists and data analysts in particular. You will also participate in all the rituals of Agile methodology and will organise sprint planning, sprint review, retrospective, and more for team members. You will also be the Jira and Confluence expert. If this sounds like a position for you, you can find more information here.



What to expect from AI in 2023

Here we go again! For the sixth year running, we present Neural’s annual AI predictions. 2022 was an incredible year for the fields of machine learning and artificial intelligence. From the AI developer who tried to convince the world that one of Google’s chatbots had become sentient to the recent launch of OpenAI’s ChatGPT, it’s been 12 months of non-stop drama and action. And we have every reason to believe that next year will be both bigger and weirder.

That’s why we reached out to three thought leaders whose companies are highly invested in artificial intelligence and the future. Without further ado, here are the predictions for AI in 2023:

First up, Alexander Hagerup, co-founder and CEO at Vic.ai, told us that we’d continue to see the “progression from humans using AI and ML software to augment their work, to humans relying on software to autonomously do the work for them.” According to him, this will have a lot to do with generative AI for creatives — we’re pretty sure he’s talking about the ChatGPTs and DALL-Es of the AI world — as well as “reliance on truly autonomous systems for finance and other back-office functions.”


He believes a looming recession could increase this progress as much as two-fold, as businesses may be forced to find ways to cut back on labor costs.

Next, we heard from Jonathan Taylor, Chief Technology Officer at Zoovu. He’s predicting global disruption for the consumer buyer experience in 2023 thanks to “innovative zero-party solutions, leveraging advanced machine learning techniques and designed to interact directly and transparently with consumers.” I know that sounds like corporate jargon, but the fact of the matter is sometimes marketing-speak hits the nail on the head.

Consumers are sick and tired of the traditional business interaction experience. We’ve been on hold since we were old enough to pay bills. It’s a bold new world and the companies that know how to use machine learning to make us happy will be the cream that rises to the top in 2023 and beyond.

Jonathan Taylor, Chief Technology Officer at Zoovu

Taylor also predicts that Europe’s world-leading consumer protection and data privacy legislation will force companies large and small to “adopt these new approaches before the legacy approaches either become regulated out of existence by government or mandated out of existence by consumers.”

The writing’s on the wall. As he puts it, “the only way to make these zero-party solutions truly scalable and as effective as the older privacy-invading alternatives, will be to use advanced machine learning and transfer learning techniques.”

Finally, we got in touch with Gabriel Mecklenburg, co-founder at Hinge Health. He told us that the future of AI in 2023 is diversity. In order for the field to progress, especially when it comes to medicine, machine learning needs to work for everyone.

In his words, “AI is clearly the future of motion tracking for health and fitness, but it’s still extremely hard to do well. Many apps will work if you’re a white person with an average body and a late-model iPhone with a big screen. However, equitable access means that AI-powered care experiences must work on low-end phones, for people of all shapes and colors, and in real environments.”

Gabriel Mecklenburg, co-founder and executive chairman of Hinge Health

Mecklenburg explained that more than one in five people suffer from musculoskeletal conditions such as neck, back, and joint pain. According to him, “it is a global crisis with a severe human and economic toll.”

He believes that, with AI, medical professionals have what they need to help those people. “For example,” says Mecklenburg, “AI technology can now help identify and track many unique joints and reference points on the body using just the phone camera.”

But, as mentioned above, this only matters if these tools work for everyone. Per Mecklenburg, “we must ensure AI is used to bridge the care gap, not widen it.”

From the editor of Neural:

It’s been a privilege curating and publishing these predictions all these years. When we started, over half a decade ago, we made the conscious decision to highlight voices from smaller companies. And, as long-time readers might recall, I even ventured a few predictions myself back in 2019.

But, considering we spent all of 2020 in COVID lockdown, I’m reticent to tempt fate yet again. I won’t venture any predictions for AI in 2023 save one: the human spirit will endure.

When we started predicting the future of AI here at Neural, a certain portion of the population found it clever to tell creatives to “learn to code.” At the time, it seemed like journalists and artists were on the verge of being replaced by machines.

Yet, six years later, we still have journalists and artists. That’s the problem with humans: we’re never satisfied. Build an AI that understands us today, and it’ll be out of date tomorrow.

The future is all about finding ways to make AI work for us, not the other way around.



Ukraine has become the world’s testing ground for military robots

The war in Ukraine has become the largest testing ground for artificial intelligence-powered autonomous and uncrewed vehicles in history. While the use of military robots is nothing new — World War II saw the birth of remote-controlled war machines and the US has deployed fully-autonomous assault drones as recently as 2020 — what we’re seeing in Ukraine is the proliferation of a new class of combat vehicle. 

This article discusses the “killer robot” technology being used by both sides in Russia’s war in Ukraine. Our main takeaway is that the “killer” part of “killer robots” doesn’t apply here. Read on to find out why. 

Uncrewed versus autonomous

This war represents the first usage of the modern class of uncrewed vehicles and automated weapons platforms in a protracted invasion involving forces with relatively similar tech. While Russia’s military appears, on paper, to be superior to Ukraine’s, the two sides have fielded forces with similar capabilities. Compared to the forces Russia faced during its involvement in the Syrian civil war or, for example, those faced by the US during the Iraq and Afghanistan engagements, what’s happening on the ground in Ukraine right now is a far more evenly matched engagement theater.


It’s important, however, to mention that this is not a war being fought by machines. It’s unlikely that autonomous or uncrewed weapons and vehicles will have much impact in the war, simply because they’re untested and, currently, unreliable. 

Uncrewed vehicles and autonomous vehicles aren’t necessarily the same thing. While almost all autonomous vehicles — those which can operate without human intervention — are uncrewed, many uncrewed vehicles can only be operated remotely by humans. Perhaps most importantly, many of these vehicles have never been tested in combat. This means that they’re more likely to be used in “support” roles than as autonomous combat vehicles, even if that’s what they were designed to do. 

But, before we get into the hows and whys behind the usage of military robots in modern warfare, we need to explain what kind of vehicles are currently in use. There are no “killer robots” in warfare; that’s a catch-all term used to describe military vehicles that are either autonomous or uncrewed.

These include uncrewed aerial vehicles (UAVs), uncrewed ground vehicles (UGVs), and uncrewed surface vehicles (USVs, another term for uncrewed maritime or water-based vehicles).

So, the first question we have to answer is: why not just turn the robots into killers and let them fight the war for us? You might be surprised to learn that the answer has very little to do with regulations or rules regarding the use of “killer robots.” 

To put it simply: militaries have better things to do with their robots than just sending fire downrange. That doesn’t mean they won’t be tested that way; there’s already evidence that’s happened.

A British “Harrier” USV, credit: Wikicommons

However, we’ve seen all that before. The use of “killer robots” in warfare is old hat now. The US deployed drones in Iraq and Afghanistan and, as we reported here at TNW, it even sent a Predator drone to autonomously assassinate an Iranian general.

What’s different in this war is the proliferation of UAVs and UGVs in combat support roles. We’ve seen drones and autonomous land vehicles in war before, but never at this scale. Both forces are using uncrewed vehicles to perform tasks that, traditionally, either couldn’t be done or would require extra manpower. It also bears mentioning that they’re using gear that’s relatively untested, which explains why we’re not seeing either country deploy these units en masse.

A developmental crucible

Developing wartime technology is a tricky gambit. Despite the best assurances of the manufacturers, there’s simply no way to know what could possibly go wrong until a given tech sees actual field use.

In the Vietnam War, we saw a prime example of this paradigm in the debut of the M-16 rifle. It was supposed to replace the trusty old M-14. But, as the first soldiers to use the new weapon tragically found out, it wasn’t suitable for use in the jungle environment without modifications to its design and special training for the soldiers who’d use it. A lot of soldiers died as a result.

A US Marine cleaning their M16 during the US-Vietnam War, credit: Wikicommons

That’s one of the many reasons why a number of nations who’ve so far refused any direct involvement in the war are eager to send cutting-edge robots and weapons to the Ukrainian government in hopes of testing out their tech’s capabilities without risking their own soldiers’ skin. 

TNW spoke with Alex Stronell, a Land Platforms Analyst and UGV lead at Janes, the defense intelligence provider. They explained that one of the more interesting things to note about the use of UGVs, in particular, in the war in Ukraine, is the absence of certain designs we might have otherwise expected.

“For example, an awful lot of attention has been paid inside and outside of Russia to the Uran-9 … It certainly looks like a menacing vehicle, and it has been touted as the world’s most advanced combat UGV,” Stronell told us, before adding “however, I have not seen any evidence that the Russians have used the Uran-9 in Ukraine, and this could be because it still requires further development.”

Uran-9 armed combat robot UGV Unmanned Ground Vehicle Rosboronexport Russia Russian Defense Industry – YouTube

On the other side, Stronell previously wrote that Ukrainian forces will soon wield the world’s largest complement of THeMIS UGVs (see the video below). That’s exceptional when you consider that the nation’s arsenal is mostly lend-leased from other countries. 

Milrem, the company that makes the THeMIS UGV, recently announced that the German Ministry of Defence ordered 14 of its vehicles to be sent to the Ukrainian forces for immediate use. According to Stronell, these vehicles will not be armed. They’re equipped for casualty evacuation, and for finding and removing landmines and similar devices. 

Milrem Robotics’ THeMIS UGVs used in a live-fire manned-unmanned teaming exercise – YouTube

But it’s also safe to say that the troops on the ground will find other uses for them. As anyone who’s ever deployed to a combat zone can tell you, space is at a premium and there’s no point in bringing more than you can carry.

The THeMIS, however, is outfitted with Milrem’s “Intelligence Function Kit,” which includes the “follow me” ability. This means that it would make for an excellent battle mule to haul ammo and other gear. And there’s certainly nothing stopping anyone from rekitting the THeMIS with combat modules or simply strapping a homemade autonomous weapon system to the top of it.

D.I.Y. Scrap Metal Auto-Turret (RaspberryPi Auto-Tracking Airsoft Sentry?!) – YouTube

On-the-job training

As much as the world fears the dawning of the age of killer robots in warfare, the current technology simply isn’t there yet. Stronell waved off the idea that a dozen or so UGVs could, for example, be outfitted as killer guard robots and deployed in defense of strategic points. Instead, he described a hybrid human/machine paradigm referred to as “manned-unmanned teaming,” or M-UMT, wherein, as described above, dismounted infantry work the battlefield with machine support.

In the time since the M-16 was mass-adopted during an ongoing conflict, the world’s militaries have refined the methodology they use to deploy new technologies. Currently, the war in Ukraine is teaching us that autonomous vehicles are useful in support roles.

The simple fact of the matter is that we’re already exceptionally good at killing each other when it comes to war. And it’s still cheaper to train a human to do everything a soldier needs to do than it is to build massive weapons platforms for every bullet we want to send downrange. The actual military need for “killer robots” is likely much lower than the average civilian might expect. 

That said, AI’s knack for finding needles in haystacks makes it a perfect recon tool, but soldiers have to do a lot more than just identify the enemy and pull a trigger.

That’s something that will surely change as AI technology matures, which is why, Stronell told us, other European countries are either currently in the process of adopting autonomous weaponry or already have.

In the Netherlands, for example, the Royal Army has engaged in training ops in Lithuania to test their own complement of THeMIS units in what they’re referring to as a “pseudo-operational” theater. Due to the closeness of the war in Ukraine and its ongoing nature, nearby nations are able to run analogous military training operations based on up-to-the-minute intel of the ongoing conflict. In essence, the rest of Europe’s watching what Ukraine and Russia do with their robots and simulating the war at home. 

Soldiers in the Netherlands Royal Army in front of a Netherlands Royal Air Force AH-64 Apache helicopter, credit: Wikicommons

This represents an intel bonanza for the related technologies and there’s no telling how much this period of warfare will advance things. We could see innumerable breakthroughs in both military and civilian artificial intelligence technology as the lessons learned from this war begin to filter out. 

To illustrate this point, it bears mention that Russia’s put out a one million ruble bounty (about €15,000) to anyone who captures a Milrem THeMIS unit from the battlefield in Ukraine. These types of bounties aren’t exactly unusual during war times, but the fact that this particular one was so publicized is a testament to how desperate Russia is to get its hands on the technology. 

An eye toward the future

It’s clear not only that the war in Ukraine is not a place where we’ll see “killer robots” deployed en masse to overwhelm their fragile human counterparts, but also that such a scenario is highly unlikely in any form of modern warfare.

However, when it comes to augmenting our current forces with UGVs or replacing crewed aerial and surface recon vehicles with robots, military leaders are excited about AI’s potential usefulness. And what we’re seeing right now in the war in Ukraine is the most likely path forward for the technology. 

That’s not to say that the world shouldn’t be worried about killer robots or their development and proliferation through wartime usage. We absolutely should be worried, because Russia’s war in Ukraine has almost certainly lowered the world’s inhibitions surrounding the development of autonomous weapons. 
