Linnea is the senior editor at TNW, having joined in April 2023. She has a background in international relations and covers clean and climate tech, AI and quantum computing. But first, coffee.
Italy is the latest country looking to quickly shore up domestic development of an AI ecosystem. As part of its Strategic Program for Artificial Intelligence, the government will “soon” launch a €150 million fund to support startups in the field, backed by development bank Cassa Depositi e Prestiti (CDP).
As reported by Corriere Comunicazioni, Alessio Butti, Italy’s cabinet undersecretary in charge of technological innovation, relayed the news of the state-backed fund yesterday. While he didn’t provide specific details on the amount to be made available, government sources subsequently told Reuters the figure being discussed in Rome was in the vicinity of €150 million.
“Our goal is to increase the independence of Italian industry and cultivate our national capacity to develop skills and research in the sector,” Butti said. “This is why we are working with CDP on the creation of an investment fund for the most innovative startups, so that study, research, and programming on AI can be promoted in Italy.”
Navigating regulation and support
Indeed, the AI boom is here in earnest. Yesterday, Nvidia became the first chipmaker to hit a $1 trillion valuation. The boost to its stock followed a prediction of sales reaching $11 billion in Q2 off the back of the company’s chips powering OpenAI’s ChatGPT (which, coincidentally, got off on a bit of a bad foot with Italy).
Those who do not yet have their hands in the (generative) AI pie are now racing to be part of the algorithm-driven gold rush of the 21st century.
While intent on regulatory oversight, governments are also, for various reasons, keen on supporting domestic developers in the field of artificial intelligence. Last month, the UK made £100 million in funding available for a task force to help build and adopt the “next generation of safe AI.”
Italy is also looking to set up its own “ad hoc” task force. Butti stated, “In Italy we must update the strategy of the sector, and therefore the Department for Digital Transformation is working on the establishment of an authoritative group of Italian experts and scholars.”
Part of national AI strategy
Italy adopted the Strategic Program for Artificial Intelligence 2022-2024 in 2021 but, of course, the industry is evolving at breakneck speed. The strategy is a joint project between the ministries for university and research, economic development, and technological innovation and digital transition. Additionally, it is guided by a working group on the national strategy for AI.
The program outlines 24 policies the government will implement over the course of the three years. Beyond measures to support the domestic development of AI, these include promotion of STEM subjects, and increasing the number of doctorates to attract international researchers. Furthermore, they target the creation of data infrastructure for public administration and specific support for startups working in GovTech and looking to solve critical problems in the public sector.
Get the TNW newsletter
Get the most important tech news in your inbox each week.
In conjunction with CEO Sundar Pichai’s visit to Stockholm yesterday, Google announced the launch of the second Google.org Social Innovation Fund on AI to “help social enterprises solve some of Europe’s most pressing challenges.”
Through the fund, Google is making €10 million available, along with mentoring and support, for entrepreneurs from underserved backgrounds. The aim is to help them develop transformative AI solutions that specifically target problems they face on a daily basis.
The fund will provide capital via a grant to INCO for the expansion of Social Tides, an accelerator program funded by Google.org, which will provide cash support of up to $250,000 (€232,000).
In 2021, Google put up €20 million for European AI social innovation startups through the same mechanism. Among the beneficiaries at that time was The Newsroom in Portugal, which uses an AI-powered app to encourage a more contextualised reading experience to take people out of their bubble and reduce polarisation.
Mini-European tour ahead of AI Act
Of the money offered by the tech giant this time around, €1 million will be earmarked for nonprofits that are helping to strengthen and grow social entrepreneurship in Sweden.
During his brief stay, Pichai met with the country’s prime minister and visited the KTH Royal Institute of Technology to meet with students and professors.
Google CEO Sundar Pichai visited KTH and talked about artificial intelligence. He notes that it’s okay to be afraid, as long as the fear is used for something sensible. https://t.co/imbtxxbSVn pic.twitter.com/oWal43dc2a
— KTH Royal Institute of Technology (@KTHuniversity) May 24, 2023
Sweden currently holds the six-month-long rotating Presidency of the European Union. Pichai’s visit to Stockholm preceded a trip to meet with European Commission deputy chief Vera Jourova and EU industry chief Thierry Breton on Wednesday.
Breton is one of the drivers behind the EU’s much-anticipated AI Act, a world-first attempt at far-reaching AI regulation. One of the biggest sources of contention — and surely subject to much lobbying from the industry — is whether so-called general purpose AI, such as the technology behind ChatGPT or Google’s Bard, should be considered “high-risk.”
Speaking to Swedish news outlet SVT on the day of his visit, Pichai stated that he believes that AI is indeed too important not to regulate, and to regulate well. “It is definitely going to involve governments, companies, academic universities, nonprofits, and other stakeholders,” Google’s top executive said.
However, he may be doing some convincing of his own in Brussels, further adding, “These AI systems are going to be used for everything, from recommending a nearby coffee shop to potentially recommending a health treatment for you. As you can imagine, these are very different applications. So where we could get it wrong is to apply a high-risk assessment to all these use cases.”
Award-winning innovators Caroline Lair and Lucia Gallardo will be speaking at TNW Conference, which takes place on June 15 & 16 in Amsterdam. If you want to experience the event (and say hi to our editorial team!), we’ve got something special for our loyal readers. Use the promo code READ-TNW-25 and get a 25% discount on your business pass for TNW Conference. See you in Amsterdam!
Social inequality and climate risk have become central to understanding what will drive innovation – and investment – for the future. On day two of TNW Conference, Caroline Lair, founder of startup and scaleup communities The Good AI and Women in AI, and Lucia Gallardo, founder and CEO of “socio-technological experimentation lab” Emerge, will be on the Growth Quarters stage for a session titled “Technology-Driven Climate Justice.”
The climate crisis is itself the result of a deeply embedded and systemic exploitation of nature and people in the name of profit. Its impact is already being felt disproportionately over the world, with severe heat waves, droughts, and entire nations disappearing below sea level. What’s more, the people worst affected are those who have contributed little to the greenhouse gas emissions driving global warming.
Climate justice is the idea that climate change is not just an environmental but also a social justice issue, and aims to ensure that the transition to a low-carbon economy is equitable and benefits everyone. Lair and Gallardo will specifically speak about how technologies such as AI, blockchain, and Web3 can play a crucial role in addressing these issues.
AI for good
Artificial intelligence can be applied in the quest for climate justice in several ways, provided it is implemented in a way that ensures transparency, accountability, and fairness. These include data analysis and prediction, discovering patterns and informing policies, as well as evaluating their effectiveness.
It can also enhance climate modelling capabilities, crucial for developing adaptation strategies. Furthermore, AI-powered technologies can monitor, for instance, weather systems with real-time data and also optimise resource allocation and energy distribution.
Reimagining value
Emerge’s objective is to “reimagine impact innovation with regenerative monetisation models.” Regenerative finance goes beyond traditional models that focus on profit, taking into account broader social, environmental, and economic impacts.
Blockchain technology can, for instance, offer transparency for transactions, ensuring that funds are indeed directed to regenerative investments. It can also tokenise regenerative assets such as renewable energy installations, sustainable agriculture initiatives, or ecosystem restoration projects, representing them as digital tokens and making them more accessible to a broader range of investors.
Meanwhile, in the words of Gallardo, “Integrating crypto into existing ecological initiatives doesn’t automatically mean it is applied regenerative finance. We must be intentional about how we’re reimagining value.”
Reclaiming an equitable future
Why am I looking forward to this session? The theme of this year’s TNW Conference is “Reclaim The Future”. In all honesty, I belong to a generation that, while hopefully having several decades of on-earth experience still ahead of it, will most likely not have to deal with full-on dystopian scenarios, battling to survive climate catastrophe.
I am also privileged in terms of geographical location and socioeconomic status not to have to worry about immediate drought and famine. (Flooding may be another matter, but as someone said when convincing me to move to Amsterdam – “wouldn’t you prefer to live in a place that is already used to keeping water out?”)
However, this does not mean that we who enjoy such privileges get to simply shrug our shoulders and carry on indulging in business as usual. TNW has always been about the good technology can do in the world. And what is better than employing it in service of one of the greatest challenges of our time?
The female lead
Caroline Lair is the founder of The Good AI, a community of AI talent, startups, and scaleups committed to helping companies transition toward a more responsible and sustainable business. She is also the co-founder of nonprofit Women in AI, a platform where women in artificial intelligence can come together to share, learn, and support each other. Furthermore, she worked at Snips, which built a private-by-design AI voice assistant and was acquired by Sonos in 2019, and was an investor and partner at venture capital firm HCVC.
Lucia Gallardo is the founder and CEO of Emerge, which calls itself an experimental technologies lab at the convergence of sustainable development and social impact. She also sits on advisory boards and standard-setting committees and councils such as at the InterAmerican Development Bank and World Economic Forum. Among many other accolades, she has been named MIT Innovator under 35, and worked with clients including the US State Department, Hard Rock International, and the United Nations Development Program.
Caroline Lair and Lucia Gallardo are only two of the amazing speakers we have lined up at TNW Conference this year. You can find more on the event agenda – and remember: for a 25% discount on business passes, use the promo code READ-TNW-25.
During its I/O 2023 event yesterday, Google announced it had officially removed the waitlist for its AI-powered chatbot Bard and made the service available in 180 countries and territories.
Sadly for most Europeans keen on testing the tech giant’s contribution to the generative AI race, the countries of the European Union are not included in the list.
The company has not made any comments on why the EU has been left out. However, it would not be too far-fetched to assume it has something to do with how members of the bloc have reacted to the introduction of OpenAI’s ChatGPT. In all likelihood, Google is also waiting for the finalisation of the EU’s much-anticipated AI Act, before unleashing Bard across the continent. The leading European Parliament committees gave their approval for the act earlier today, with a tentative plenary adoption date scheduled for 14 June.
While not offering any specific plans for increased geographical access, Google says it will “gradually expand to more countries and territories in a way that is consistent with local regulations and our AI principles.”
Trained on Google’s new model
Along with the release of Bard to much of the world (and sharp VPN wielders), Google also introduced a range of new features to the chatbot. First of all, it is now powered by Google’s newest large language model: PaLM 2, an upgraded version of PaLM, released in April. Even so, Bard is still introduced as a “conversational AI experiment.”
According to Sissie Hsiao, Google VP and General Manager for Google Assistant and Bard, the chatbot has now been trained in 20 programming languages. This means that users can ask it to produce, debug and improve code in, for instance, C++, Python, and JavaScript.
In addition, users can now switch to the apparently much-requested dark mode. But what’s more, they can also create images through Bard, using Adobe’s AI art generator Firefly via an extension feature that allows it to integrate with third-party apps and platforms.
Soon you can ask Bard/Firefly to generate unicorns and cakes for you. Credit: Google
Thus far, Bard is available in English, Japanese, and Korean, but Google says it is on track to support 40 languages.
Will it be up to snuff?
In a move generally considered to have been premature, Bard was released two months ago for select users in the US and the UK. Consensus has been that, in an effort to keep up with competitors, Google rushed the introduction of the chatbot before it was ready.
As a result, the company faced the ridicule of not only tech-savvy commentators, but also its own employees. As reported by Bloomberg, phrases such as “pathological liar” and “cringe-worthy” were thrown about on internal messaging boards. But what is one of the big five to do when its very core business is under threat?
To say that Google is enamoured by artificial intelligence at the moment would be something of an understatement. For I/O 2023, it came armed with a ton of new AI announcements, beyond Bard. In fact, Sundar Pichai opened the event by once more stating that Google has “reimagined” all its core products.
And speaking of core businesses, Google Search is getting something the company calls “AI-powered snapshots.” When users opt in for the brand new Search Generative Experience, the search engine will produce AI-powered answers at the top of the results.
Other products that are getting an AI makeover are Gmail and Docs, where you can prompt AI to “help me write” things such as potentially tricky emails or job applications. Sheets now has a function called “help me create” to help you set up tables with anything you may need when it comes to, say, running a business (dog walking was the example offered by Google during the presentation, probably because, well, dogs).
Maps is getting something called Immersive View, which will allow you to visually walk, cycle, or drive a specific route complete with predicted weather conditions, before you actually get out the door. It will be rolled out across 15 cities, including Amsterdam, Berlin, Dublin, Florence, London, Paris, and Venice by the end of the year.
Whether or not much of Europe will get to test the mettle of the ‘new and improved’ Bard by then is another matter.
According to a study by the International Institute for Democracy and Electoral Assistance (IDEA) released late last week, digital technologies will become an increasing factor in European democracy in the coming decade. This is perhaps not entirely surprising; after all, the pandemic shifted much of our lives into the digital realm, so why shouldn’t our political participation follow?
The report, based on interviews with more than 50 government and industry representatives, finds that the market for online participation and deliberation in Europe is expected to grow to €300mn in the next five years, whereas the market for e-voting will grow to €500mn. The respondents also state that there is a “window of opportunity” for European providers of democracy technology to expand beyond Europe.
Authors of the report further believe that digital democracy technology can support outreach to demographics that may otherwise be difficult to reach, such as youth and immigrant communities. This also includes broader populations under difficult circumstances, such as those brought on by the pandemic and Russia’s war of aggression against Ukraine.
“In case of war, electronic democracy tools have to be even stronger. Because we understand we have to live for the society and give citizens tools,” said Oleg Polovynko, Director of IT at Kyiv Digital, City Council of Kyiv, and one of the speakers at the TNW Conference 2023.
Not without controversy
Digital democracy refers to the use of digital technologies and platforms to enhance democratic processes and increase citizens’ participation in government decision-making. This is also referred to as civic tech (not to be confused with govtech, which focuses on technologies that help governments perform their functions more efficiently).
Examples of tools include online petitions, open data portals, and participatory budgeting systems, where citizens come together to discuss community needs and priorities and then allocate public funds accordingly.
In a best-case scenario, it has the potential to reinvigorate democracy by allowing citizens to participate from anywhere at any time. In a worst-case scenario, it could be used for disinformation or just plain good old online toxic behaviour.
Furthermore, the discussion of a potential ‘digital divide’ – who will benefit and who will be excluded due to access, or lack thereof, to technology – is not one that is easily settled.
Inviting AI into collective decision making
IDEA states that there are more than 100 vendors in Europe in the online participation, deliberation and voting sector, most of which are active on a national level. The majority of those operating internationally are startups with between 10 and 60 employees, but they are expanding quickly.
Many of these democracy technology platforms have already begun taking advantage of the recent step-change developments in artificial intelligence to introduce new features or enhance existing ones.
“We foresee a future where citizens and AI collaboratively engage with governments to address intricate social issues by merging collective intelligence with artificial intelligence,” Robert Bjarnason, co-founder and President of Citizens.is tells TNW.
“We advocate for a model in which citizens work alongside powerful AI systems to help shape policy, rather than allowing centralised government AI models to exert excessive influence.”
Following the collapse of Icelandic banks in 2008, distrust of politicians was at an all-time high in the Nordic island nation. Together with a fellow programmer, Gunnar Grímsson, Bjarnason created a software platform called Your Priorities that allows citizens to suggest laws and policies that can then be up- or down-voted by other users.
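The core mechanic is simple enough to sketch. Below is a minimal, hypothetical Python illustration of up/down-voted proposals and their ranking; the names (`PriorityBoard`, `propose`, `vote`) are invented for this example and bear no relation to the actual open-source Your Priorities codebase.

```python
from collections import Counter

class PriorityBoard:
    """Toy sketch of citizen proposals ranked by up/down votes.

    Illustrative only -- not the real Your Priorities implementation.
    """
    def __init__(self):
        self.votes = Counter()

    def propose(self, idea: str):
        # Register a new proposal starting at zero net votes.
        self.votes[idea] = 0

    def vote(self, idea: str, up: bool = True):
        # An upvote adds one; a downvote subtracts one.
        self.votes[idea] += 1 if up else -1

    def ranking(self):
        # Proposals sorted by net vote count, highest first.
        return [idea for idea, _ in self.votes.most_common()]

board = PriorityBoard()
board.propose("More bike lanes")
board.propose("Extend library hours")
board.vote("More bike lanes")
board.vote("More bike lanes")
board.vote("Extend library hours", up=False)
print(board.ranking())  # ['More bike lanes', 'Extend library hours']
```

Real platforms layer moderation, identity verification, and deliberation features on top of this basic tally, but the prioritisation principle is the same.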
Just before local elections in 2010, the open-source software was used to set up the Better Reykjavik portal. Five years later, a poll on the site managed to name a street in the Icelandic capital after Darth Vader (well, his Icelandic moniker of Svarthöfði, or Black-cape, which already fitted well with the names of the streets in the area).
Of course, there have been much ‘weightier’ decisions influenced by the platform, such as crowdsourcing ideas on how to prioritise the City’s educational objectives.
Thus far, over 70,000 of the capital’s inhabitants have engaged with Better Reykjavik. Pretty impressive for a population of 120,000. Furthermore, Your Priorities has been trialled in Malta, Norway, Scotland, and Estonia.
Estonia, the tech-forward Baltic nation, has adopted several laws suggested through the platform, which features a unique debating system, crowdsourcing of content and prioritisation, a ‘toxicity sensor’ to alert admins about potentially abusive content – and extensive use of AI. In fact, Citizens.is recently entered into a collaboration with OpenAI, and has deployed GPT-4 for its AI assistant – in Icelandic.
Don’t worry if the language barrier felt a little steep. Citizens.is has been kind enough to provide TNW with a screenshot of the company’s AI assistant in action from a project in Oakland, California.
Credit: Citizens.is
Other examples of civic tech focused companies in Europe include Belgium-founded scaleup CitizenLab, which now works with more than 300 local governments and organisations across 18 countries, and Berlin-based non-profit Liquid Democracy. Liquid’s open-source deliberation and collaborative decision-making platform, Adhocracy+, also helps facilitate face-to-face meetings throughout the timeline of participation projects.
Gaining the trust of the citizen
The main product trends identified in the IDEA study are: artificial intelligence, voting, and administration and reporting. Meanwhile, it also found that it is important to address issues around inclusiveness, data usage, accountability and transparency, and to develop security standards for end-to-end verified voting.
One solution proposed is the introduction of a Europe-wide quality trust mark for democracy technologies.
“If a citizen can trust the banking application to make transactions, then equivalently our service can be trusted to make the citizen’s voice heard,” stated Nicholas Tsounis, CEO of online voting platform Electobox. “We want people to trust this application because we know that it is there for them to protect the right to speak and vote.”
Much of the world may currently be fretting about how to limit the impact (lack of privacy, copyright issues, loss of jobs, world domination, etc.) of artificial intelligence. However, that does not mean that there isn’t enormous potential for AI to improve quality of life on earth.
One such application is healthcare. With the ability to process big data sets, the deployment of AI could lead to significant advances in predictive diagnostics, including early detection of cancer. While more research is needed, one of the latest studies in the field shows promising results for AI-assisted diagnosis of lung cancer.
Doctors and researchers at the Royal Marsden NHS foundation trust, the Institute of Cancer Research, and Imperial College London have built an AI algorithm they say can diagnose cancerous growths more efficiently than current methods.
In the study, named OCTAPUS-AI, researchers used imaging and clinical data from over 900 patients in the UK and the Netherlands following curative radiotherapy to develop and test ML algorithms to see how accurately the models could predict recurrence.
Specifically, the study looked at whether AI could help identify the risk of cancer returning in non-small cell lung cancer (NSCLC) patients. Researchers used CT scans to develop an AI algorithm using radiomics, a quantitative approach that extracts novel data and predictive biomarkers from medical imaging.
Research algorithm superior to current technology
NSCLC patients make up 85% of lung cancer cases. While the disease is often treatable when caught early, in over a third of patients, the cancer returns. The study found that using the algorithm, clinicians may eventually be able to identify recurrence earlier in high-risk patients.
The scientists used a measure called area under the curve (AUC) to see how efficient the model was at detecting cancer. A perfect accuracy score would be 1, whereas a model that was purely guessing 50-50 would score 0.5. In the study, the AI algorithm built by the researchers scored 0.87, compared to the 0.67 score of the technology currently in use.
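For readers unfamiliar with the metric, AUC can be computed directly from its probabilistic interpretation: the chance that a randomly chosen positive case is scored higher than a randomly chosen negative one. The short Python sketch below is a generic illustration of the metric, not code from the study:

```python
def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney rank statistic.

    Equivalent to the probability that a randomly chosen positive
    case receives a higher score than a randomly chosen negative one
    (ties count as half).
    """
    pos = [s for lbl, s in zip(labels, scores) if lbl == 1]
    neg = [s for lbl, s in zip(labels, scores) if lbl == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A model that separates the classes perfectly scores 1.0 ...
print(auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1]))  # 1.0
# ... while one that can't tell them apart hovers around 0.5.
print(auc([1, 0, 1, 0], [0.6, 0.6, 0.4, 0.4]))  # 0.5
```

On this scale, the jump from 0.67 to 0.87 reported in the study is a substantial improvement in how reliably recurrences are ranked above non-recurrences.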
“Next, we want to explore more advanced machine learning techniques, such as deep learning, to see if we can get even better results,” Dr Sumeet Hindocha, Clinical Oncology Specialist Registrar at The Royal Marsden NHS Foundation Trust, and Clinical Research Fellow at Imperial College London, said. “We then want to test this model on newly diagnosed NSCLC patients and follow them to see if the model can accurately predict their risk of recurrence.”
Support for practitioners – and patients
Rather than expecting it to replace doctors, most now view AI in healthtech as a tool that will assist practitioners in providing the best possible care – including improved bedside manners. Despite investors growing gradually more risk-averse over the past year, the healthcare AI sector is still expected to grow from close to $14 billion in 2023 to $103 billion by 2028.
The UK is teeming with AI healthtech startups. Many are focused on drug development, genomic analysis or more consumer-centric telehealth symptom checking and wearables. However, some are intent on improving disease detection and diagnosis. These include the likes of Mendelian, who just received close to £1.5 million to roll out its AI-based solution for rare disease diagnosis as part of the government’s investment into AI technology within the NHS.
The rest of Europe also has its fair share of diagnostic AI startups. Among them is Liège-based Radiomics. The company focuses on the detection and phenotypic quantification of solid tumours based on standard-of-care imaging. In Norway, DoMore Diagnostics is using AI and deep learning to increase the prognostic and predictive value of cancer tissue biopsies. The company’s founders also say it could help guide the selection of therapy to avoid over- and undertreatment.
Meanwhile, a few percentage points of more accurate diagnosis, vital though they may be for the affected individual, may not be the only positive impact AI could have on our care systems.
According to Eric Topol, the author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, “the greatest opportunity offered by AI is not reducing errors or workloads, or even curing cancer: it is the opportunity to restore the precious and time-honoured connection and trust—the human touch—between patients and doctors.”
ChatGPT isn’t perfect, but the popular AI chatbot’s access to large language models (LLMs) means it can do a lot of things you might not expect, like giving all of Tamriel’s NPC inhabitants the ability to hold natural conversations and answer questions about the iconic fantasy world. Uncanny, yes. But it’s a prescient look at how games might one day use AI to reach new heights in immersion.
YouTuber ‘Art from the Machine’ released a video showing off how they modded the much beloved VR version of The Elder Scrolls V: Skyrim.
The mod, which isn’t available yet, ostensibly lets you hold conversations with NPCs via ChatGPT and xVASynth, an AI tool for generating voice acting lines using voices from video games.
Check out the results in the most recent update below:
The latest version of the project introduces Skyrim scripting for the first time, which the developer says allows for lip syncing of voices and NPC awareness of in-game events. While still a little rigid, it feels like a pretty big step towards climbing out of the uncanny valley.
Here’s how ‘Art from the Machine’ describes the project in a recent Reddit post showcasing their work:
A few weeks ago I posted a video demonstrating a Python script I am working on which lets you talk to NPCs in Skyrim via ChatGPT and xVASynth. Since then I have been working to integrate this Python script with Skyrim’s own modding tools and I have reached a few exciting milestones:
NPCs are now aware of their current location and time of day. This opens up lots of possibilities for ChatGPT to react to the game world dynamically instead of waiting to be given context by the player. As an example, I no longer have issues with shopkeepers trying to barter with me in the Bannered Mare after work hours. NPCs are also aware of the items picked up by the player during conversation. This means that if you loot a chest, harvest an animal pelt, or pick a flower, NPCs will be able to comment on these actions.
NPCs are now lip synced with xVASynth. This is obviously much more natural than the floaty proof-of-concept voices I had before. I have also made some quality of life improvements such as getting response times down to ~15 seconds and adding a spell to start conversations.
When everything is in place, it is an incredibly surreal experience to be able to sit down and talk to these characters in VR. Nothing takes me out of the experience more than hearing the same repeated voice lines, and with this no two responses are ever the same. There is still a lot of work to go, but even in its current state I couldn’t go back to playing without this.
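The mod’s source isn’t public yet, but the core idea described above — folding live game state into the chatbot’s prompt so NPCs can react to the world unprompted — can be sketched in a few lines of Python. Everything here (`GameState`, `build_npc_prompt`, the sample NPC details) is hypothetical and for illustration only; the actual mod wires this context through Skyrim’s own scripting tools.

```python
from dataclasses import dataclass, field

@dataclass
class GameState:
    """A snapshot of the context an NPC should know about."""
    location: str
    time_of_day: str
    items_picked_up: list = field(default_factory=list)

def build_npc_prompt(npc_name: str, persona: str, state: GameState) -> str:
    """Assemble a system prompt that bakes game state into the NPC's
    instructions, so the model reacts to location, time, and player
    actions without the player having to explain them."""
    lines = [
        f"You are {npc_name}, {persona}, in the world of Skyrim.",
        f"You are currently in {state.location}; it is {state.time_of_day}.",
        "Stay in character and keep replies brief and conversational.",
    ]
    if state.items_picked_up:
        lines.append(
            "During this conversation the player picked up: "
            + ", ".join(state.items_picked_up)
            + ". You may comment on this."
        )
    return "\n".join(lines)

state = GameState("the Bannered Mare", "late evening", ["a wolf pelt"])
print(build_npc_prompt("Hulda", "the innkeeper", state))
```

The resulting string would then be sent as the system message of a chat-completion request, regenerated whenever the game state changes — which is what lets a shopkeeper refuse to barter after closing time.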
You might notice the actual voice prompting the NPCs is fairly robotic too, although ‘Art from the Machine’ says they’re using speech-to-text to talk to the ChatGPT 3.5-driven system. The voice heard in the video is generated by xVASynth and plugged in during video editing to replace what they call their “radio-unfriendly voice.”
And when can you download and play for yourself? Well, the developer says publishing their project is still a bit of a sticky issue.
“I haven’t really thought about how to publish this, so I think I’ll have to dig into other ChatGPT projects to see how others have tackled the API key issue. I am hoping that it’s possible to alternatively connect to a locally-run LLM model for anyone who isn’t keen on paying the API fees.”
Serving up more natural NPC responses is also an area that needs to be addressed, the developer says.
ChatGPT has had anything but a triumphant welcome tour around Europe. Following grumbling regulators in Italy and the European Parliament, the turn has come for German trade unions to express their concerns over potential copyright infringement.
No less than 42 trade organisations representing over 140,000 of the country’s authors and performers have signed a letter urging the EU to impose strict rules for the AI’s use of copyrighted material.
As reported first by Reuters, the letter, which underlined increasing concerns about copyright and privacy issues stemming from the material used to train the large language model (LLM), stated,
“The unauthorised usage of protected training material, its non-transparent processing, and the foreseeable substitution of the sources by the output of generative AI raise fundamental questions of accountability, liability and remuneration, which need to be addressed before irreversible harm occurs.”
Signatories include major German trade unions Verdi and DGB, as well as other associations for photographers, designers, journalists and illustrators. The letter’s authors further added:
“Generative AI needs to be at the centre of any meaningful AI market regulation.”
ChatGPT is not the only target of copyright contention. In January, visual media company Getty Images filed a copyright claim against Stability AI. According to the lawsuit, the image-generation tool’s developer allegedly copied over 12 million photos, captions, and metadata without permission.
LLM training offers diminishing returns
The arrival of OpenAI’s ChatGPT has sparked a flurry of concerns, covering everything from aggressive development driven by a commercially motivated AI “arms race” to matters of privacy, data protection, and copyright.
Meanwhile, Sam Altman, the company’s CEO and one of the instigators of the controversy, said last week that the scale-it-up machine learning strategy behind ChatGPT has run its course. OpenAI forecasts diminishing returns on further increases in model size; the company trained its latest model, GPT-4, on over a trillion words at a cost of about $100 million.
At the same time, the EU’s Artificial Intelligence Act is nearing its home stretch. While it may well set a global regulatory standard, the question is how well it will be able to adapt as developers find other new and innovative ways of making algorithms more efficient.
Meta has introduced the Segment Anything Model, which aims to set a new bar for computer-vision-based ‘object segmentation’—the ability for computers to understand the difference between individual objects in an image or video. Segmentation will be key for making AR genuinely useful by enabling a comprehensive understanding of the world around the user.
Object segmentation is the process of identifying and separating individual objects in an image or video. With the help of AI, this process can be automated, making it possible to identify and isolate objects in real time, giving an AR system the awareness of the user’s surroundings that it needs to be genuinely useful.
The Challenge
Imagine, for instance, that you’re wearing a pair of AR glasses and you’d like to have two floating virtual monitors on the left and right of your real monitor. Unless you’re going to manually tell the system where your real monitor is, it must be able to understand what a monitor looks like so that when it sees your monitor it can place the virtual monitors accordingly.
But monitors come in all shapes, sizes, and colors, and reflections or partially occluded views can make them even harder for a computer-vision system to recognize.
Having a fast and reliable segmentation system that can identify each object in the room around you (like your monitor) will be key to unlocking tons of AR use-cases so the tech can be genuinely useful.
Computer-vision-based object segmentation has been an active area of research for many years, but a key issue is that, to help computers understand what they’re looking at, you need to train an AI model on a large set of images.
Such models can be quite effective at identifying the objects they were trained on, but they will struggle with objects they haven’t seen before. That makes one of the biggest challenges of object segmentation simply assembling a large enough set of images for the systems to learn from, and collecting those images and annotating them in a way that makes them useful for training is no small task.
SAM I Am
Meta recently published work on a new project called the Segment Anything Model (SAM). It’s both a segmentation model and a massive set of training images the company is releasing for others to build upon.
The project aims to reduce the need for task-specific modeling expertise. SAM is a general segmentation model that can identify any object in any image or video, even for objects and image types that it didn’t see during training.
SAM supports both automatic and interactive segmentation, allowing it to identify individual objects in a scene from simple user inputs. It can be ‘prompted’ with clicks, boxes, and other cues, giving users control over what the system is attempting to identify at any given moment.
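SAM itself pairs a learned image encoder with a prompt-conditioned mask decoder; the pure-Python flood fill below is only a toy sketch of the point-prompt idea (a “click” selects the connected region of similar pixels around it), not Meta’s actual method.

```python
from collections import deque

def point_prompt_segment(image, seed):
    """Toy 'point prompt' segmentation: flood-fill the 4-connected
    region of identical pixel values around a clicked seed point.
    `image` is a 2D list of ints; `seed` is a (row, col) click."""
    rows, cols = len(image), len(image[0])
    target = image[seed[0]][seed[1]]
    mask = [[False] * cols for _ in range(rows)]
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        if not (0 <= r < rows and 0 <= c < cols):
            continue  # off the image
        if mask[r][c] or image[r][c] != target:
            continue  # already visited, or not part of the clicked region
        mask[r][c] = True
        queue.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return mask
```

Clicking on a pixel returns a boolean mask for the blob it belongs to; the real model does the same kind of click-to-mask mapping, but learns object boundaries rather than matching raw pixel values.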
It’s easy to see how this point-based prompting could work great if coupled with eye-tracking on an AR headset. In fact, that’s exactly one of the use-cases Meta has demonstrated with the system:
Part of SAM’s impressive ability comes from its training data, which contains roughly 11 million images and over 1 billion identified object shapes. That’s far more comprehensive than contemporary datasets, according to Meta, giving SAM much more experience during training and enabling it to segment a broad range of objects.
Meta calls the SAM dataset SA-1B, and the company is releasing the entire set for other researchers to build upon.
Meta hopes this work on promptable segmentation, and the release of this massive training dataset, will accelerate research into image and video understanding. The company expects the SAM model can be used as a component in larger systems, enabling versatile applications in areas like AR, content creation, scientific domains, and general AI systems.
Search interest in ChatGPT has surged 2,633% since last December, shortly after its launch. For the artificial intelligence and machine learning industry, and for those working in tech as a whole, OpenAI’s chatbot represents a true crossing of the Rubicon.
A generative form of AI, it uses prompts to produce content and conversations, whereas traditional AI looks at things such as pattern detection, decision making, or classifying data. We already benefit from artificial intelligence, whether we realise it or not—from Siri in our Apple phones to the choices Netflix or Amazon Prime make for us to the personalisations and cyber protection that lie behind our commercial interactions.
ChatGPT is just one of an increasing number of generative AI tools, including Bing Chat and Google Bard. DeepMind’s AlphaCode writes computer programs at a competitive level; Jasper is an AI copywriter; and DALL-E, Midjourney, and Stable Diffusion can all create realistic images and art from a description you give them.
As a result, generative AI is now firmly embedded in the mainstream consciousness, with much credit going to ChatGPT’s easy-to-use interface and its ability to produce results that can be as sublime as they are ridiculous. Want it to produce some Python code? Sure thing, and it can generate a funny limerick for you too, if you’d like.
How generative AI will impact the job market
According to Salesforce, 57% of senior IT leaders believe generative AI is a game changer, and because it is intuitive and helpful, end users like it as well.
While your job may be safe from AI (for the moment), ChatGPT-generated content has placed in the top 20% of candidates shortlisted for a communications consultant role at marketing company Schwa, and the chatbot has also passed Google’s level 3 engineering coding interview.
Roles that are likely to resist the advent of generative AI include graphic designers, programmers (though they are likely to adopt AI tools that speed up their process) and blockchain developers, but many other jobs are likely to be performed by AI in the (near) future.
These include customer service jobs—chatbots can do this efficiently. Bookkeeping or accounts roles are also likely to be replaced as software can do many of these tasks. Manufacturing will see millions of jobs replaced with smart machinery that does the same job, but faster.
But, while AI may replace some jobs, it will also generate a slew of new ones.
The World Economic Forum predicts that the technology will create 97 million new jobs by 2025. Jobs specifically related to the development and maintenance of AI and automation will see growing adoption as AI integrates across multiple industries.
These could include data detectives or scientists, prompt engineers, robotics engineers, machine managers, and programmers, particularly those who can code in Python which is key for AI development. AI trainers and those with capabilities related to modelling, computational intelligence, machine learning, mathematics, psychology, linguistics, and neuroscience will also be in demand.
Healthcare looks set to benefit too, with PwC estimating that AI-assisted healthcare technician jobs will see an upward surge. A sector that is already creating new jobs is automated transportation with Tesla, Uber, and Google investing billions into AI-driven self-driving cars and trucks.
If you want to work in AI now, there are plenty of jobs on offer. Discover three below, or check out the House of Talent Job Board for many more opportunities.
Staff Data Engineer, Data & ML Products, Adevinta Group, Amsterdam
Adevinta is on the lookout for a top-notch Staff Data Engineer to join the team and make a global impact in an exciting and dynamic environment. You will build and run production-grade data and machine learning pipelines and products at scale in an agile setup. You will work closely with data scientists, engineers, architects, and product managers to create the technology that generates and transforms data into applications, insights, and experiences for users. You should be familiar with privacy regulation, be an ambassador of privacy by design, and actively participate in department-wide, cross-functional tech initiatives. Discover more here.
AIML – Annotation Analyst, German Market, Apple, Barcelona
Apple’s AIML team is passionate about technology with a focus on enriching the customer experience. It is looking for a motivated Annotation Analyst who can demonstrate active listening, integrity, acute attention to detail, and is passionate about impacting customers’ experience. You’ll need fluency in the German language with excellent comprehension, grammar, and proofreading skills, as well as excellent English reading comprehension and writing skills. You should also have excellent active listening skills, with the ability to understand verbal nuances. Find out more about the job here.
Artificial Intelligence Product Owner – M/F, BNP Paribas, Paris
As Artificial Intelligence Product Owner, you’ll report to the head of the CoE IA, ensuring improvements to data science tools (Stellar, Domino, D3) to integrate the needs of data scientists and data analysts in particular. You will also participate in all the rituals of Agile methodology and will organise sprint planning, sprint review, retrospective, and more for team members. You will also be the Jira and Confluence expert. If this sounds like a position for you, you can find more information here.
Here we go again! For the sixth year running, we present Neural’s annual AI predictions. 2022 was an incredible year for the fields of machine learning and artificial intelligence. From the AI developer who tried to convince the world that one of Google’s chatbots had become sentient to the recent launch of OpenAI’s ChatGPT, it’s been 12 months of non-stop drama and action. And we have every reason to believe that next year will be both bigger and weirder.
That’s why we reached out to three thought leaders whose companies are highly invested in artificial intelligence and the future. Without further ado, here are the predictions for AI in 2023:
First up, Alexander Hagerup, co-founder and CEO at Vic.ai, told us that we’d continue to see the “progression from humans using AI and ML software to augment their work, to humans relying on software to autonomously do the work for them.” According to him, this will have a lot to do with generative AI for creatives — we’re pretty sure he’s talking about the ChatGPTs and DALL-Es of the AI world — as well as “reliance on truly autonomous systems for finance and other back-office functions.”
He believes a looming recession could increase this progress as much as two-fold, as businesses may be forced to find ways to cut back on labor costs.
Next, we heard from Jonathan Taylor, Chief Technology Officer at Zoovu. He’s predicting global disruption for the consumer buyer experience in 2023 thanks to “innovative zero-party solutions, leveraging advanced machine learning techniques and designed to interact directly and transparently with consumers.” I know that sounds like corporate jargon, but the fact of the matter is sometimes marketing-speak hits the nail on the head.
Consumers are sick and tired of the traditional business interaction experience. We’ve been on hold since we were old enough to pay bills. It’s a bold new world and the companies that know how to use machine learning to make us happy will be the cream that rises to the top in 2023 and beyond.
Jonathan Taylor, Chief Technology Officer at Zoovu
Taylor also predicts that Europe’s world-leading consumer protection and data privacy legislation will force companies large and small to “adopt these new approaches before the legacy approaches either become regulated out of existence by government or mandated out of existence by consumers.”
The writing’s on the wall. As he puts it, “the only way to make these zero-party solutions truly scalable and as effective as the older privacy-invading alternatives, will be to use advanced machine learning and transfer learning techniques.”
Finally, we got in touch with Gabriel Mecklenburg, co-founder at Hinge Health. He told us that the future of AI in 2023 is diversity. In order for the field to progress, especially when it comes to medicine, machine learning needs to work for everyone.
In his words, “AI is clearly the future of motion tracking for health and fitness, but it’s still extremely hard to do well. Many apps will work if you’re a white person with an average body and a late-model iPhone with a big screen. However, equitable access means that AI-powered care experiences must work on low-end phones, for people of all shapes and colors, and in real environments.”
Gabriel Mecklenburg, co-founder of Hinge Health
Mecklenburg explained that more than one in five people suffer from musculoskeletal conditions such as neck, back, and joint pain. According to him, “it is a global crisis with a severe human and economic toll.”
He believes that, with AI, medical professionals have what they need to help those people. “For example,” says Mecklenburg, “AI technology can now help identify and track many unique joints and reference points on the body using just the phone camera.”
But, as mentioned above, this only matters if these tools work for everyone. Per Mecklenburg, “we must ensure AI is used to bridge the care gap, not widen it.”
From the editor of Neural:
It’s been a privilege curating and publishing these predictions all these years. When we started, over half a decade ago, we made the conscious decision to highlight voices from smaller companies. And, as long-time readers might recall, I even ventured a few predictions myself back in 2019.
But, considering we spent all of 2020 in COVID lockdown, I’m reluctant to tempt fate yet again. I won’t venture any predictions for AI in 2023 save one: the human spirit will endure.
When we started predicting the future of AI here at Neural, a certain portion of the population found it clever to tell creatives to “learn to code.” At the time, it seemed like journalists and artists were on the verge of being replaced by machines.
Yet, six years later, we still have journalists and artists. That’s the problem with humans: we’re never satisfied. Build an AI that understands us today, and it’ll be out of date tomorrow.
The future is all about finding ways to make AI work for us, not the other way around.
The war in Ukraine has become the largest testing ground for artificial intelligence-powered autonomous and uncrewed vehicles in history. While the use of military robots is nothing new — World War II saw the birth of remote-controlled war machines and the US has deployed fully-autonomous assault drones as recently as 2020 — what we’re seeing in Ukraine is the proliferation of a new class of combat vehicle.
This article discusses the “killer robot” technology being used by both sides in Russia’s war in Ukraine. Our main takeaway is that the “killer” part of “killer robots” doesn’t apply here. Read on to find out why.
Uncrewed versus autonomous
This war represents the first use of the modern class of uncrewed vehicles and automated weapons platforms in a protracted invasion involving forces with relatively similar tech. While Russia’s military appears, on paper, to be superior to Ukraine’s, the two sides have fielded forces with similar capabilities. Compared to the forces Russia faced during its involvement in the Syrian civil war or, for example, those the US faced during the Iraq and Afghanistan engagements, what’s happening on the ground in Ukraine right now is a far more evenly matched engagement theater.
It’s important, however, to mention that this is not a war being fought by machines. It’s unlikely that autonomous or uncrewed weapons and vehicles will have much impact in the war, simply because they’re untested and, currently, unreliable.
Uncrewed vehicles and autonomous vehicles aren’t necessarily the same thing. While almost all autonomous vehicles — those which can operate without human intervention — are uncrewed, many uncrewed vehicles can only be operated remotely by humans. Perhaps most importantly, many of these vehicles have never been tested in combat. This means that they’re more likely to be used in “support” roles than as autonomous combat vehicles, even if that’s what they were designed to do.
But before we get into the hows and whys of military robots in modern warfare, we need to explain what kinds of vehicles are currently in use. There are no “killer robots” in this war; that’s a catch-all term used to describe military vehicles, both autonomous and uncrewed.
These include uncrewed aerial vehicles (UAVs), uncrewed ground vehicles (UGVs), and uncrewed surface vehicles (USVs, another term for uncrewed maritime or water-based vehicles).
So, the first question we have to answer is: why not just turn the robots into killers and let them fight the war for us? You might be surprised to learn that the answer has very little to do with regulations or rules regarding the use of “killer robots.”
To put it simply: militaries have better things to do with their robots than just sending fire downrange. That doesn’t mean they won’t be tested that way; there’s already evidence that has happened.
A British “Harrier” USV, credit: Wikicommons
However, we’ve seen all that before. The use of “killer robots” in warfare is old hat now. The US deployed drones in Iraq and Afghanistan and, as we reported here at TNW, it even sent a Predator drone to autonomously assassinate an Iranian general.
What’s different in this war is the proliferation of UAVs and UGVs in combat support roles. We’ve seen drones and autonomous land vehicles in war before, but never at this scale. Both forces are using uncrewed vehicles to perform tasks that traditionally either couldn’t be done or required extra humanpower. It also bears mentioning that they’re using gear that’s relatively untested, which explains why we’re not seeing either country deploy these units en masse.
A developmental crucible
Developing wartime technology is a tricky gambit. Despite the best assurances of the manufacturers, there’s simply no way to know what could possibly go wrong until a given tech sees actual field use.
In the Vietnam war, we saw a prime example of this paradigm in the debut of the M-16 rifle. It was supposed to replace the trusty old M-14. But, as the first soldiers to use the new weapon tragically found out, it wasn’t suitable for use in the jungle environment without modifications to its design and special training for the soldiers who’d use it. A lot of soldiers died as a result.
A US Marine cleaning their M16 during the US-Vietnam War, credit: Wikicommons
That’s one of the many reasons why a number of nations who’ve so far refused any direct involvement in the war are eager to send cutting-edge robots and weapons to the Ukrainian government in hopes of testing out their tech’s capabilities without risking their own soldiers’ skin.
TNW spoke with Alex Stronell, a Land Platforms Analyst and UGV lead at Janes, the defense intelligence provider. They explained that one of the more interesting things to note about the use of UGVs, in particular, in the war in Ukraine, is the absence of certain designs we might have otherwise expected.
“For example, an awful lot of attention has been paid inside and outside of Russia to the Uran-9 … It certainly looks like a menacing vehicle, and it has been touted as the world’s most advanced combat UGV,” Stronell told us, before adding “however, I have not seen any evidence that the Russians have used the Uran-9 in Ukraine, and this could be because it still requires further development.”
On the other side, Stronell previously wrote that Ukrainian forces will soon wield the world’s largest complement of THeMIS UGVs (see the video below). That’s exceptional when you consider that the nation’s arsenal is mostly lend-leased from other countries.
Milrem, the company that makes the THeMIS UGV, recently announced that the German Ministry of Defence ordered 14 of its vehicles to be sent to the Ukrainian forces for immediate use. According to Stronell, these vehicles will not be armed. They’re equipped for casualty evacuation, and for finding and removing landmines and similar devices.
But it’s also safe to say that the troops on the ground will find other uses for them. As anyone who’s ever deployed to a combat zone can tell you, space is at a premium and there’s no point in bringing more than you can carry.
The THeMIS, however, is outfitted with Milrem’s “Intelligence Function Kit,” which includes the “follow me” ability. This means that it would make for an excellent battle mule to haul ammo and other gear. And there’s certainly nothing stopping anyone from rekitting the THeMIS with combat modules or simply strapping a homemade autonomous weapon system to the top of it.
As much as the world fears the dawning of the age of killer robots in warfare, the current technology simply isn’t there yet. Stronell waved off the idea that a dozen or so UGVs could, for example, be outfitted as killer guard robots deployed in the defense of strategic points. Instead, he described a hybrid human/machine paradigm referred to as “manned-unmanned teaming,” or M-UMT, wherein, as described above, dismounted infantry work the battlefield with machine support.
In the time since the M-16 was mass-adopted during an ongoing conflict, the world’s militaries have refined the methodology they use to deploy new technologies. Currently, the war in Ukraine is teaching us that autonomous vehicles are useful in support roles.
The simple fact of the matter is that we’re already exceptionally good at killing each other when it comes to war. And it’s still cheaper to train a human to do everything a soldier needs to do than it is to build massive weapons platforms for every bullet we want to send downrange. The actual military need for “killer robots” is likely much lower than the average civilian might expect.
AI’s gift for finding needles in haystacks makes it the perfect recon unit, but soldiers have to do a lot more than just identify the enemy and pull a trigger. That will surely change as AI technology matures, which is why, Stronell told us, other European countries are either currently adopting autonomous weaponry or already have.
In the Netherlands, for example, the Royal Army has engaged in training ops in Lithuania to test their own complement of THeMIS units in what they’re referring to as a “pseudo-operational” theater. Due to the closeness of the war in Ukraine and its ongoing nature, nearby nations are able to run analogous military training operations based on up-to-the-minute intel of the ongoing conflict. In essence, the rest of Europe’s watching what Ukraine and Russia do with their robots and simulating the war at home.
Soldiers in the Netherlands Royal Army in front of a Netherlands Royal Air Force AH-64 Apache helicopter, credit: Wikicommons
This represents an intel bonanza for the related technologies and there’s no telling how much this period of warfare will advance things. We could see innumerable breakthroughs in both military and civilian artificial intelligence technology as the lessons learned from this war begin to filter out.
To illustrate this point, it bears mention that Russia’s put out a one million ruble bounty (about €15,000) to anyone who captures a Milrem THeMIS unit from the battlefield in Ukraine. These types of bounties aren’t exactly unusual during war times, but the fact that this particular one was so publicized is a testament to how desperate Russia is to get its hands on the technology.
An eye toward the future
It’s clear not only that the war in Ukraine won’t see “killer robots” deployed en masse to overwhelm their fragile human counterparts, but also that such a scenario is highly unlikely in any form of modern warfare.
However, when it comes to augmenting our current forces with UGVs or replacing crewed aerial and surface recon vehicles with robots, military leaders are excited about AI’s potential usefulness. And what we’re seeing right now in the war in Ukraine is the most likely path forward for the technology.
That’s not to say that the world shouldn’t be worried about killer robots or their development and proliferation through wartime usage. We absolutely should be worried, because Russia’s war in Ukraine has almost certainly lowered the world’s inhibitions surrounding the development of autonomous weapons.