Poland’s data protection watchdog is investigating OpenAI’s ChatGPT after an unnamed complainant accused the company of GDPR breaches.
“The case concerns the violation of many provisions on the protection of personal data, which is why we will ask OpenAI to answer a number of questions in order to thoroughly conduct the administrative proceedings,” said Jan Nowak, president of the country’s Personal Data Protection Office (UODO).
He added that “these aren’t the first doubts” about the AI tool’s compliance with European principles of data privacy and security.
According to the UODO, the accusations are structured around “unlawful and unreliable” data processing and lack of transparency.
The <3 of EU tech
The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!
Specifically, the complainant alleged that ChatGPT generated false information about them and did not proceed to the required correction following a formal request. They also claimed that they were unable to find which of their personal data was processed by the company, and received “evasive, misleading, and internally contradictory” answers to questions.
In response, the Polish watchdog will examine the case and clarify any doubts surrounding OpenAI’s systemic approach to European personal data protection rules. Nevertheless, the proceedings “will be difficult” as the company is located outside of the Union, said the regulator.
“The development of new technologies must respect the rights of individuals arising from, among others, the GDPR,” noted the agency’s deputy president Jakub Groszkowski. “The task of the European personal data protection authorities is to protect European Union citizens against the negative effects of information processing technologies.”
This isn’t the first time OpenAI has faced scrutiny in the bloc.
Last March, Italy became the first Western country to impose a temporary ban on ChatGPT, after its data protection agency (again) accused the company of “unlawful” collection of personal data and the absence of an age verification system for minors. In the same month, the European Consumer Organisation (BEUC) called for EU and national authorities to investigate OpenAI’s system.
On Wednesday, the UK’s Home Secretary Suella Braverman unveiled a new campaign against Meta, urging the tech giant to rethink its plan to roll out end-to-end encryption (E2EE) on Facebook Messenger and Instagram.
The company aims to finalise the encryption rollout later this year, but the British government is worried that the move will hinder the detection of child sexual abuse.
According to the Home Office, 800 predators are currently arrested per month and up to 1,200 children are protected from sexual abuse following the information provided by social media companies. If Meta’s encryption moves forward, the National Crime Agency (NCA) estimated that 92% of Messenger and 85% Instagram direct referrals could be lost.
Based on these risks, Braverman is asking Meta to implement “robust safety measures” that ensure minor protection, or halt the encryption rollout altogether.
“The use of strong encryption for online users remains a vital part of our digital world and I support it, so does the government, but it cannot come at a cost to our children’s safety,” Braveman said in a statement.
The Home Secretary first outlined her concerns in a letter to Meta thispast July. But the company “has failed to provide assurances” ensuring protection from “sickening abusers,” she now noted, adding that “appropriate safeguards” are an essential requirement for its end-to-end encryptionplans.
In response (or even anticipation) of the government’s attack, Meta published yesterday an updated report on its safety policy for the messaging platforms.
“We are committed to our continued engagement with law enforcement and online safety, digital security, and human rights experts to keep people safe,” the company writes in the report — which includes measures such as restricting adults from messaging teens they’re not connected to on Messenger.
Nevertheless, the tech giant stresses its commitment to delivering end-to-end encryption as standard for Messenger and Instagram.
“We strongly believe that E2EE is critical to protecting people’s security. Breaking the promise of E2EE — whether through backdoors or scanning of messages without the user’s consent and control — directly impacts user safety,” argues the report.
But the UK government may be holding the upper hand in the dispute, now armed with the Online Safety Bill, passed by the parliament on Tuesday.
The legislation sets sweeping content rules for social media, even empowering Britain’s comms regulator, Ofcom, to force tech companies to monitor messaging services for child sexual abuse content — a provision that has, reasonably, sparked controversy.
The computer systems of the International Criminal Court (ICC), the world’s most high-profile war tribunal, were hacked last week, according to a statement released by the court yesterday.
The tribunal said that its services detected “anomalous activity” affecting its information systems and that “immediate measures” were adopted to respond to this cybersecurity incident and to mitigate its impact.
The ICC, headquartered in The Hague, Netherlands, is the only permanent war crimes tribunal and handles extremely sensitive data about some of the world’s most horrific atrocities.
At this time, the nature of last week’s cybersecurity incident remains unclear and it’s not yet known whether any data held on the ICC’s systems was accessed or exfiltrated.
However a source told Dutch broadcaster NOS that a large number of sensitive documents have been captured, but the ICC has not confirmed this. A spokesperson for the court did not immediately respond to our request for comment.
The court did say however that it is currently investigating the incident in collaboration with Dutch authorities, and that it “continues to analyse and mitigate the impact of this incident.” It also added that it will build on its existing work to strengthen its cybersecurity framework, including increased adoption of cloud technology.
Established in 2002, the court is currently investigating crimes against humanity in 17 states including Ukraine, Uganda, Venezuela, and Afghanistan. For instance, in March 2023, the ICC issued an arrest warrant for Russian President Vladimir Putin concerning crimes linked to Russia’s invasion of Ukraine.
The Dutch intelligence agency said in its 2022 annual report that the ICC was “of interest to Russia because it is investigating possible Russian war crimes in Georgia and Ukraine.”
In August 2023, ICC Prosecutor Karim Khan warned that cyber attacks could be part of future war crimes investigations and that the ICC itself could be vulnerable and should strengthen its defences.
“In all probability this is a nation state attack [by Russia] happening just a week after the ICC established a field office in Kyiv to track Russian war crimes,” Jamie Moles, senior technical marketing manager at US-based cybersecurity firm ExtraHop, told TNW.
He continued: “It seems the ICC may have lost significant volumes of data in an attack, the details of which it refuses to disclose at this time. Too often we see institutions fail to properly secure their networks and data leading to breaches and stolen data. No one is exempt from bad actors, which is why every organisation should prepare to be attacked.”
Britain’s controversial Online Safety Bill will soon become law after passing through parliament on Tuesday.
The sweeping legislation places strict news content moderation rules on social media companies. Platforms will become legally responsible for the material they host.
Under the new rules, platforms will have to quickly remove any illegal content — or stop it from appearing in the first place. They also must prevent children from accessing harmful and age-inappropriate content, and enforce age limits and age-checking measures.
Those that fail to take rapid action face fines up to £18mn (€20.8mn) or 10% of their global annual revenue — whichever is biggest. In some cases, executives of platforms could even be imprisoned.
Michelle Donelan, the UK’s technology minister, said the rules would “make the UK the safest place in the world to be online.”
The <3 of EU tech
The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!
“The Online Safety Bill is a game-changing piece of legislation,” she said.
Children’s charities have welcomed the legislation, but digital rights activities and tech companies have raised alarm about certain implications.
Messaging platforms have been particularly opposed to the potential scanning of encrypted messages, while privacy advocates fear that free speech will be restricted. Wikipedia, meanwhile, has warned that it won’t comply with the requirement for age checks. The online encyclopedia has even threatened to withdraw from the UK over the rules.
In the six years since the bill was first proposed, some of the concerns have been addressed by amendments. Notably, lawmakers last year replaced the focus on “legal but harmful” content with an emphasis on child protection and illegal content. The government has also promised to protect end-to-end encryption, but critics have dismissed the pledges as “delusional.”
Donelan has sought to allay their fears.
“Our common-sense approach will deliver a better future for British people, by making sure that what is illegal offline is illegal online,” she said. “It puts protecting children first, enabling us to catch keyboard criminals and crack down on the heinous crimes they seek to commit.”
The fight against “fake news” appears to have overwhelming support in the EU.
According to a new study, 85% of the bloc’s citizens want policymakers to take more action against disinformation, while 89% want increased efforts from platform operators. Just 7% do not feel that stronger responses are required.
The findings emerged from surveys by Bertelsmann Stiftung, a pro-business German think tank with close ties to the EU.
Across the EU, 54% of respondents said they were “often” or “very often” unsure whether information they found online was true. However, only 44% of them had recently fact-checked the content they’d seen.
Younger and more educated people were more likely to take active response to false information. Those in favour of combating disinformation tended to be further to the political left.
Responses to false information (figures in percentages). Credit: (EU-wide and by country). Bertelsmann Stiftung
In response to the findings, Bertelsmann Stiftung made the following recommendations:
Establish an effective system for monitoring disinformation both in Germany and across Europe.
Raise public awareness about the issue of disinformation.
Promote media literacy among people of all age groups.
Ensure consistent and transparent content creation on digital platforms.
Such interventions, however, have proven divisive. Around the world, politicians have been accused of exploiting concerns around disinformation to suppress dissent and control narratives.
In the UK, campaigners found that government anti-fake news units have surveilled citizens, public figures, and media outlets for merely criticising state policies. The units also reportedly facilitated censorship of legal content on social media.
The EU, meanwhile, recently adopted the Digital Services Act (DSA), which requires platforms to mitigate the risks of disinformation. Opponents of the law fear that it will lead to state censorship.
Critics have also raised alarm about tech firms acting as arbiters of truth. But Bertelsmann Stiftung’s researchers argue that more intervention is essential.
“People in Europe are very uncertain about which digital content they can trust and which has been intentionally manipulated,” Kai Unzicker, the study author, said in a statement.
“Anyone who wants to protect and strengthen democracy cannot leave people to deal with disinformation on their own.”
Stronger responses could also bolster the growing flock of anti-disinformation startups.
The emerging sector is dominated by the US, but a hub is also emerging in Ukraine, where technologists are turning lessons from fighting Russian propaganda into new businesses.
Europeans have become “pioneers in online rights” and now want to lead a “global framework for AI,” the EU’s top official said today.
Ursula von der Leyen, the European Commission’s president, revealed the bloc’s digital plans during her State of the Union address in Strasbourg. She used the speech to flaunt the achievements of her three-year reign.
A particularly large spotlight was shone on her tech policies.
“We have set the path for the digital transition and become global pioneers in online rights,” von der Leyen said.
The former German defence minister praised the bloc’s work on semiconductor self-sufficiency, which centres on the Chips Act. Backed with €43bn of funding, the legislation aims to double the EU’s market share in semiconductors to at least 20% by 2030.
The <3 of EU tech
The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!
Von der Leyen also touted the union’s clean tech industry, as well as the digital projects in NextGenerationEU, a COVID-19 recovery plan. Her biggest brag, however, involved digital safety.
“Europe has led on managing the risks of the digital world,” she said.
To the chagrin of Silicon Valley, the EU has become the world’s most formidable tech regulator. Tough laws on privacy, tax avoidance, antitrust, and online content have led to eye-popping fines for some of the biggest companies in the US. Von der Leyen warned them that more rules are coming.
To justify the intervention, she argued that disinformation, data exploitation, and “harmful content” have reduced the public’s trust and breached their rights.
“In response, Europe has become the global pioneer of citizen’s rights in the digital world,” she said.
As evidence for this claim, von der Leyen pointed to two recent regulations: the Digital Services Act (DSA), which imposes rules on content moderation, and the Digital Market Act (DMA), which aims to reign in big tech’s dominance.
Her next big target is artificial intelligence.
“We need an open dialogue with those that develop and deploy AI.
In recent months, concerns have grown about AI causing job losses, discrimination, surveillance, and even extinction. To mitigate the threats, the EU will soon adopt the AI Act, the first-ever comprehensive legislation for the tech.
Von der Leyen described the rules as “a blueprint for the whole world.” She also laid out the next steps of the EU’s plan.
“I believe Europe, together with partners, should lead the way on a new global framework for AI, built on three pillars: guardrails, governance, and guiding innovation,” she said.
The main guardrails will be provided by the AI Act. For governance, von der Leven called for the creation of a global panel of scientists, tech companies and independent experts. Together, they would inform policymakers about developments in the field.
On innovation, she announced a project that will enable AI startups to train their models on the EU’s high-performance computers. The private sector, however, will likely want further support. In an open letter published in June, some of Europe’s biggest companies warn that the AI Act will inhibit innovation and jeopardise the continent’s businesses.
Von der Leyen, meanwhile, called for closer collaboration with the private sector.
“We need an open dialogue with those that develop and deploy AI,” she said. “It happens in the United States, where seven major tech companies have already agreed to voluntary rules around safety, security and trust.
“It happens here, where we will work with AI companies, so that they voluntarily commit to the principles of the AI Act before it comes into force. Now we should bring all of this work together towards minimum global standards for safe and ethical use of AI.”
A Swiss startup has unveiled a solution to the global shortage of security guards: an autonomous patrol robot.
Named the Ascento Guard, the two-wheeled sentinel is equipped with thermal and infrared cameras, speakers, a microphone, and GPS tracking. The bidepal design promises all-terrain mobility, fall recovery from any position, and top speeds of 5km/h.
Using these features, the Ascento Guard can spot trespassers, monitor parking lots, and record property lights. It can also identify floods and fires, as well as check that doors and windows are closed.
When an incident is detected, an alarm is sent to an operator. Only then is a human security guard sent onsite to take action.
The <3 of EU tech
The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!
The bot is the brainchild of Ascento, a Zurich-based developer of bipedal security robots. Alessandro Morra, the startup’s CEO, told TNW that Ascento Guard is designed for large outdoor premises.
“Instead of installing many fixed cameras or sending human guards in harsh weather conditions and at night for patrol, Ascento Guard can secure the assets,” he said.
Ascento co-founders (left to right) Dominik-Mannhart, Alessandro-Morra, Ciro Salzmann, and Miguel de la Iglesia Valls. Credit: Ascento
Ascento’s founding team combines experience as security guards with robotics expertise honed at ETH Zurich, a renowned research university.
Their inventions have been deployed at large outdoor warehouses, industrial manufacturing sites, and pharma campuses. Since the start of this year, the robots have secured over 3,000 km of outdoor premises.
The Ascento Guard is the latest addition to the portfolio. According to its creators, the bot can be installed and deployed within a few hours.
Just like a human security officer, the Ascento Guard can be hired by the hour. Autonomous charging will then keep the device running at speeds of up to 5km/h.
Operators can monitor the surveillance in a web interface. Credit: Ascento
A companion app extends the robot’s capabilities. The app integrates with existing video management systems, offers end-to-end encrypted two-way communication, and generates security reports.
Morra is particularly excited about the system’s AI analytics. He envisions them identifying suspicious patterns, such as specific locations and times of incidents, or cars that consistently park in distinctive places.
“This robot design is just the beginning,” Morra said. “We are seeing multiple opportunities for how we can complement our robot to offer an indoor, aerial application integration of our technology.”
The web interface also provides a live view of footage from the cameras. Credit: Ascento
Alongside the new robot launch, Ascento today announced that it’s received another $4.3mn in funding. The pre-seed round was led by VC firms Wingman Ventures and Playfair Capital.
You can review their investment for yourself by watching the video below:
TikTok has launched its first European data centre, as part of its efforts to address Western fears over surveillance-related privacy risks.
The Chinese-owned company says it has started migrating European user information to its new data centre in Dublin. Two more centres under construction, one in Norway and another one in Ireland.
TikTok first announced its plan to localise European data storing in March, under the name “Project Clover.” The full migration of the app’s 150 millions users in the region is expected by the end of 2024. Currently, TikTok stores its global user data in Singapore, Malaysia, and the US.
The move came in response to rising concerns by governments and regulators in the West, alleging that TikTok user data is being accessed by the Chinese government — which the company has denied.
Nevertheless, a series of institutions including the EU Commision, the UK Parliament, and the French government have banned use of the app on work-related devices.
The <3 of EU tech
The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!
To further ease fears, the social media app has assigned Project Clover’s oversight to a third-party European cybersecurity firm, UK-based NCC Group.
NCC Group will audit TikTok’s data controls and protections, monitor data flows, provide independent verification, and report any incidents.
“All of these controls and operations are designed to ensure that the data of our European users is safeguarded in a specially-designed protective environment, and can only be accessed by approved employees subject to strict independent oversight and verification,” said Theo Bertram, TikTok’s VP of Public Policy in Europe.
Get the TNW newsletter
Get the most important tech news in your inbox each week.
When Russian troops flooded into Ukraine last year, an army of propagandists followed them. Within hours, Kremlin-backed media were reporting that President Zelenskyy had fled the country. Weeks later, a fake video of Zelenskyy purportedly surrendering went viral. But almost as soon as they emerged, the lies were disproven.
Government campaigns had prepared Ukrainians for digital disinformation. When the crude deepfake appeared, the clip was quickly debunked, removed from social media platforms, and disproven by Zelenskyy in a genuine video.
The incident became a symbol of the wider information war. Analysts had expected Russia’s propaganda weapons to wreak havoc, but Ukraine was learning to disarm them. Those lessons are now fostering a new sector for startups: counter-disinformation.
Experts feared the Zelenskyy deepfake was merely the tip of the iceberg, but the iceberg is yet to emerge.
Like much of Ukrainian society, the country’s tech workers has adopted aspects of military ethos. Some have enlisted in the IT Army of volunteer hackers or applied their skills to defence technologies. Others have joined the information war.
In the latter group are the women who founded Dattalion. A portmanteau of data and battalion, the project provides the world’s largest free and independent open-source database of photo and video footage from the war. All media is classified as official, trusted, or not verified. By preserving and authenticating the material, the platform aims to disprove false narratives and propaganda.
Dattalion’s data collection team leader, Olha Lykova, was an early member of the team. She joined as the fighting reached the outskirts of her hometown of Kyiv.
“We started to collect data from open sources in Ukraine, because there were no international reporters and international press at the time,” Lykova, 25, told TNW in a video call. “In the news, it was not possible to see the reality of what was happening in Ukraine.”
In addition to her role at Dattalion, Lykova works in project management for Luxoft Ukraine.
Since the project was established on February 27, 2022 — just three days after the full-scale invasion began — Dattalion has been cited in more than 250 international media outlets, from NBC News to Time. With the mooted addition of a paid subscription service, it could also be monetised — a thorny challenge for the sector.
An emerging sector
Counter-disinformation is not an obvious magnet for consumer cash. Nonetheless, the sector is attracting unusual investor interest.
Governments are particularly enthusiastic backers.In the US, more than $1bn of annual public funding is allocated to fighting disinformation, the Department of State said in 2018.Across the Atlantic, European nations are investing in targeted initiatives. The UK, for instance, created a ‘fake news fund’ for Eastern Europe, while the EU has financed AI-powered anti-disinformation projects.
Big tech is also writing big cheques. Since 2016, Meta alone has ploughed over $100mn into programs supporting its fact-checking efforts. In addition, the social media giant has splashed cash on startups in the space. In 2018, the companyspent up to $30mn to buy London-based Bloomsbury AI, with the aim of deploying the acquisition against fake news.
Still, not every tech giant is enthusiastic about corroborating content. Under Elon Musk’s leadership, X (formerly Twitter) has dismantled moderation teams, policies, and features. The approach has been praised by fans of Musk — a self-proclaimed “free speech absolutist” — but triggered spikes in falsehoods on the app.
Alarmed by the controversies, brands have fled the platform in their droves. In July, Musk said X had lost almost half its ad revenue since he bought the company for $44bn last October.
According to a new EU study “the dismantling of Twitter’s safety standards” boosted “the reach and influence of Kremlin-backed accounts.” Credit: Daniel Oberhaus
As X grapples with the concerns of advertisers, a wave of tech firms are offering solutions. In the last couple of years, over $300mn has been ploughed into startups that tackle false information, according to Crunchbase data. Two of them have raised over $100mn each: San Francisco-based Primer and Tel Aviv’s ActiveFace. Both companies develop AI tools that can identify disinformation campaigns.
Ukrainian startups are also starting to raise funding— and there are signs that the investments could soon surge.
“Ukraine has been waging an informational struggle for more than 10 years.
In the EU, tech companies now have to comply with the Digital Services Act (DSA), which requires platforms to tackle disinformation. If they don’t, they face fines of up to 6% of their annual global revenue.
X’s DSA obligations have received particular attention. In June, the company received the first “stress test” of the regulatory requirements. After the mock exercise, Musk and Twitter CEO Linda Yaccarino met with EU Commissioner Thierry Breton, who oversees digital policy in the bloc. Breton emphasised a threat that Ukraine recognises all too well.
“I told Elon Musk and Linda Yaccarino that Twitter should be very diligent in preparing to tackle illegal content in the European Union,” he said. “Fighting disinformation, including pro-Russian propaganda, will also be a focus area in particular as we are entering a period of elections in Europe.”
A brief history of the disinformation war
Since the early Soviet Union, Russia has been a pioneer in influence operations. Historians have traced the very word “disinformation” to the Russian neologism “dezinformatsiya.” Some contend that it emerged in the 1920s, as the name for a bureau tasked with deceiving enemies and influencing public opinion.
Defector Ion Mihai Pacepa claimed the term was coined by none other than Joseph Stalin. The Soviet ruler reputedly chose a French-sounding name to insinuate a Western origin. Yet all of these origin stories are disputed. In a world of deception, even etymology is fraught with mistruths.
What isn’t disputed is Russia’s expertise in the field. In the Soviet era, intelligence services merged forgeries, fake news, and front groups into a playbook for political warfare. After the USSR collapsed, old strategies were embedded in new tools. Today’s tricks encompass troll farms spreading support for Kremlin views, bot armies manipulating social media algorithms, and proxy news sites amplifying falsehoods.
Ukraine is all too familiar with the tactics. The country has become a testbed for Russia’s information warfare, which has laid firm foundations for a nascent startup sector.
“It’s an enduring act — Ukraine has been waging an informational struggle against the Russian aggressor for more than 10 years now,” Denis Gursky, a former data advisor to Ukraine’s Prime Minister and the co-founder of tech NGO SocialBoost, told TNW.
“Over this time, Ukraine formed the mechanism of joint work of various sectors, which all together help to repel enemy attacks and protect the information space.”
At SocialBoost, Gursky develops civic tech and open government data. Credit: Denis Gursky
Gursky is a driving force behind Ukraine’s emerging counter-disinformation industry. In January, he co-organised the1991 Hackathon: Media, which sought digital solutions to information security challenges. One of the judging criteria was commercial potential.
The responses ranged from war crime trackers and content blockers to news monitors and verification tools. To monetise their concepts, the teams pitched an array of business plans.
Mediawise, a browser extension that adds content and author checks to online news, plans to take payment for premium features, such as alerts and extended article summaries.
OffZMI, an app that protects reliable information from a controversial Ukrainian media law, is eyeing revenues from ads, subscriptions, and NGO partnerships. MindMap, which provides Q&A translations of English-language news reports, envisions a tiered membership model.
Then there is Osavul, which won the hackathon. The company has built a platform that targets an evolving concept in the field: coordinated inauthentic behaviour (CIB).
“The problem is big enough to solve.
A term popularised by Facebook, CIB involves multiple fake accounts collaborating to manipulate people for political or financial ends. To spot this behaviour, Osavul’s AI models detect indicators including account affiliations, posting time patterns, involvement of state media, and content synchronisation.
A key component of the system is a cross-platform approach. This enables Osavu to track CIB across various social networks, online media, and messenger apps. A single campaign can, therefore, be followed from Telegram through X and then into news reports.
One such campaign claimed that NATO had donated infected blood to Ukraine. At the centre of the conspiracy theory was a fake document that purportedly proved the claim.
According to Osavul, the CIB was detected before the campaign gained momentum. Ukrainian government agencies then used the findings to refute the canard.
Dmytro Bilash (left) and Dmytro Pleshakov founded Osavul in February.
Ukrainian institutions will get free access to Osavul throughout the war, but the company has also developed a SaaS product. The software targets businesses that are vulnerable to disinformation campaigns, such as pharmaceutical companies. Osavul’s founders, Dmytro Bilash and Dmitry Pleshakov, compare it to conventional cyber security products.
“In the same way organisations protect themselves from malware or phishing, they should protect themselves from disinformation,” Bilash and Pleshakov told TNW via email. “The problem is big enough to solve, and there is a need for suppliers of software products like Osavul.”
With multilingual capabilities and the infrastructure to integrate new data sources, the platform is built to scale. “Budgets for information security are growing, so we see a huge business opportunity in this niche,” Bilash and Pleshakov said.
An early investment suggests their plan has promise. In May, Osavul raised $1mn in a funding round led by SMRK, a Ukrainian VC firm. The cash will finance a move into the international market.
That market could be ripe for expansion. A 2019 study by cybersecurity firm CHEQ and the University of Baltimore estimated that fake news costs the global economy around $78bn (€72bn) each year.
Fake news can cause dramatic stock market fluctuations. Credit: CHEQ
According to the researchers, around half of that figure comes from stock market losses. They cite an eye-popping example from 2017. That December, ABC News erroneously reported that Donald Trump had directed Michael Flynn, his former national security advisor, to contact Russian officials during the 2016 presidential campaign. Following the story, the S&P 500 Index briefly dropped by 38 points — losing investors around $341bn.
ABC didn’t retract the claim until after markets closed. At that point, the losses were down to “only” $51bn (€47bn) for the day.
Beyond the stock market, the study estimated that financial misinformation in the US costs companies $17bn (€15.9bn) each year. Health misinformation, meanwhile, causes annual losses of around $9bn (€8.4bn). The researchers said all their estimates were conservative.
A divisive business
Despite the risks to corporations, the anti-disinformation sector still depends on government backing. That foundation creates both support and frailty.
“The state can have a long-term strategy in the fight against hybrid threats because commercial and public organisations do not have the institutional stability that state bodies have,” Gurksy, the hackathon organiser, told TNW. “But the fight against disinformation is possible only in cooperation with other private and third sectors, which, in fact, have the most experience and tools in this direction.”
Government links are also a prevailing concern about anti-disinformation. Outside of Ukraine, politicians have been accused of exploiting the issue to suppress dissent and control narratives.
In the UK, campaigners found that the government’s anti-fake news units have surveilled citizens, public figures, and media outlets for merely criticising state policies. In addition, the units reportedly facilitated censorship of legal content on social media.
Caroline Lucas, the Green Party’s first MP, was included in a disinformation report for criticising the government’s response to the pandemic. Credit: Patrick Duce
Critics have also been unsettled by tech firms acting as arbiters of truth. But there are now paradoxical concerns about Silicon Valley retreating from these roles.
X, Meta, and YouTube have all been recently accused of reducing efforts to tackle disinformation. In tough economic times, these investments appear to have slipped down the list of priorities. That raises another barrier for Ukraine’s nascent startups: access to capital.
Nonetheless, there are grounds for optimism. Ukraine has a deep pool of tech talent, demonstrably resilient startups, unique experience in fighting propaganda, and strong support from international allies. Sector insiders believe this combination is a powerful launchpad for startups.
Nina Kulchevych, a disinformation researcher and founder of the Ukraine PR Army, expects her country to reap the rewards. She envisions the cottage industry evolving into a global powerhouse.
“Ukraine can be an IT hub for Europe in the creation of technologies for debunking propaganda and spreading disinformation,” she said.
In an economy devastated by war, the commercial potential of counter-disinformation is a powerful attraction. But it’s a peripheral motive for many Ukrainians in the sector. Olha Lykova, the data collection lead at Dattalion, has a separate focus: exposing the truth about Russia’s war.
“Of course, we hope that Ukraine will win,” she said. “But in any case, it will be harder to rewrite history — because we have the proof.”
This morning, one of Sweden’s largest newspapers, Svenska Dagbladet (SvD), published a thorough investigation into how criminal networks have used Spotify to launder money for years. Specifically, they have been paying for false streams of music published by artists with ties to the groups, and then capitalised on the engineered popularity.
Analysts at the National Operative Unit of the Swedish Police Force have been looking at (or listening to) rappers publishing their music on the streaming giant since autumn 2021. First, as a means of gathering clues and information about crimes from the lyrics.
Then, as streaming patterns began to crystallise, the analysts began to suspect that criminals were using the service for a different purpose entirely.
Now, from a series of interviews conducted with people belonging to either these networks or having insight into the work against streaming bots, SvD’s reporters have established that criminals have indeed been using Spotify to launder money since 2019.
The <3 of EU tech
The latest rumblings from the EU tech scene, a story from our wise ol’ founder Boris, and some questionable AI art. It’s free, every week, in your inbox. Sign up now!
The first step, one of the interviewees belonging to a criminal network said, was to buy Bitcoin through “cash in hand” trades initiated via a Facebook group. The gangs then used the cryptocurrency to pay for fake streams — that is, plays of music by bots, hijacked accounts, or other inauthentic methods. Apparently, this contact happened through Telegram, earning the sellers the nickname “Telegrambots.”
Symbiotic relationship between gangs and artists
So how does this scheme actually launder money? With more streams come higher ratings on lists. With higher ratings come more actual streams, and artists connected to the gangs have set up their own record labels to be able to cash in on the engineered popularity. Actual streams mean real payouts from Spotify, et voilà, the cash is back, cleansed of the illegal activity from whence it came.
In return, many of the rappers gain “legitimacy” from their affiliation with the gangs. Reportedly, one has tens of millions of streams on the platform.
Some money will inevitably be lost along the way, due to Spotify’s payout structure (free vs premium accounts, which country the listener is in, etc.). There is also a risk of discovery. Spotify has been clamping down on bot streaming, and can stop payments to accounts that are proven to be involved in the illicit activity.
According to the newspaper’s sources, this means that this way of laundering money is only worth it when dealing with sums over a few million Swedish krona (1mn SEK = approx €84,000).
Earlier this year, Spotify, the crown jewel of the Swedish startup ecosystem, said that it had paid out $40bn (€37.2bn) to the music industry since the start of operations.
Update (12: 30 UTC, September 5, 2023): Spotify shared the following statement with TNW:
“Manipulated streams are an industry-wide challenge and Spotify has been working hard to address this issue. That said, Spotify is not aware of any contact by law enforcement concerning the suggestions in the SVD article, nor have our internal teams found anything or been provided with any data or hard evidence that indicates that the platform is being used at scale in the fashion described. ”
Spotify is not aware of any contact by law enforcement concerning the suggestions in the SVD article
The company adds that only 1% of streams on the service are deemed to be artificial, with systems detecting anomalies before they reach a “significant” threshold. Furthermore, the music streaming giant highlights its work with the Music Against Fraud Alliance, and educational resources provided to artists on the harms of stream manipulation.
Vienna-based advocacy group Noyb has filed complaints against Fitbit in Austria, the Netherlands, and Italy, alleging that the Google-owned fitness tracking company is in violation of EU data privacy regulations.
Fitbit — which sells watches that track activity, heart rate, and sleep — “forces” new users of its app to consent to data transfers outside the EU, said Noyb.
Currently, the only way Fitbit users can withdraw their consent is by deleting their accounts entirely, which would mean losing all their previously tracked workouts and health data.
“This means there is no realistic way to regain control over your data without making your product useless,” said the digital rights group in a statement. This, it argued, puts Fitbit in breach of the GDPR.
“Given that the company collects the most sensitive health data, it’s astonishing that it doesn’t even try to explain its use of such data, as required by law,” said Bernardo Armentano, data protection lawyer at Noyb.
Acquired by Google in 2021 at a $2.1bn valuation, Fitbit is one of the world’s most popular smart watchmakers. Its wearable fitness trackers monitor various aspects of your activity, such as steps taken, heart rate, and sleep patterns, and syncs this data to a smartphone app for analysis and tracking. In 2021, Fitbit counted over 100 million registered users.
According to the company’s privacy policy, the data it shares not only includes things like a user’s email address, date of birth, and gender. It can also share information “like logs for food, weight, sleep, water, or female health tracking; an alarm; and messages on discussion boards or to your friends on the Services”.
Even if Fitbit did offer an opt-out function on its app, the company’s routine transfer of data to third parties outside the EU is still in breach of the GDPR, said the campaigners.
“Fitbit may be a nice app to track your fitness, but once you want to learn more about how your data is being handled, you are bound for a marathon,” said Romain Robert, one of the three complainants represented by Noyb.
Noyb, founded by privacy activist Max Schrems, has already filed hundreds of complaints against big tech companies like Google and Meta over privacy violations, some leading tobig penalties.
In this case, Noyb is requesting that the Austrian, Dutch, and Italian regulators order Fitbit to share all mandatory information about the transfers with its users and allow them to use its app without having to consent to the data transfers.
The privacy watchdogs could also issue a fine for violating GDPR rules that can reach up to 4% of a firm’s global annual revenue, which for Google’s parent company Alphabet would equal €11bn.
The UK’s National Cyber Security Centre (NCSC) is warning organisations to be wary of the imminent cyber risks associated with the integration of Large Language Models (LLMs) — such as ChatGPT — into their business, products, or services.
In a set ofblog posts, the NCSC emphasised that the global tech community doesn’t yet fully grasp LLMs’ capabilities, weaknesses, and (most importantly) vulnerabilities. “You could say our understanding of LLMs is still ‘in beta’,’’ the authority said.
One of the most extensively reported security weaknesses of existing LLMs is their susceptibility to malicious “prompt injection” attacks. These occur when a user creates an input aimed at causing the AI model to behave in an unintended way — such as generating offensive content or disclosing confidential information.
In addition, the data LLMs are trained on poses a twofold risk. Firstly a vast amount of this data is collected from the open internet, meaning it can include content that’s inaccurate, controversial, or biased.
Secondly, cyber criminals can not only distort the data available for malicious practices (also known as “data poisoning”), but also use it to conceal prompt injection attacks. This way, for example, a bank’s AI-assistant for account holders can be tricked into transferring money to the attackers.
“The emergence of LLMs is undoubtedly a very exciting time in technology – and a lot of people and organisations (including the NCSC) want to explore and benefit from it,” said the authority.
“However, organisations building services that use LLMs need to be careful, in the same way they would be if they were using a product or code library that was in beta,” the NCSC added. That is, with caution.
The UK authority is urging organisations to establish cybersecurity principles and ensure that even the “worst case scenario” of whatever their LLM-powered applications are permitted to do is something they can deal with.