Data and security

Myanmar genocide refugees take Meta to Irish court over disinformation claims

Rohingya refugees are taking Meta to Ireland’s High Court for the tech giant’s alleged role in the 2017 Myanmar genocide, one of the worst war crimes of this century.

Friday (25 August) marked six years since the start of the ethnic cleansing, which saw more than 25,000 people of Muslim Rohingya descent killed by Myanmar’s military, with a further 700,000 displaced. Most fled to nearby Bangladesh, where many still live, often in refugee camps like Kutupalong, the world’s largest.  

In 2018, UN human rights investigators said the use of Meta’s social media platform Facebook had played a key role in spreading hate speech that fuelled the violence.  

“Facebook’s algorithms and Meta’s ruthless pursuit of profit created an echo chamber that helped foment hatred of the Rohingya people and contributed to the conditions which forced the ethnic group to flee Myanmar en masse,” read a statement by Amnesty International.

In 2018, Facebook admitted it had been “too slow” in addressing hate speech in Myanmar and was acting to remedy the problem by hiring more Burmese speakers and investing in technology to identify problematic content. To date, the company has not paid any compensation.

“I blame Facebook, its parent company Meta, and the man behind it all, Mark Zuckerberg, for helping create the conditions that allowed the Myanmar military to unleash hell upon us,” wrote Maung Sawyeddollah, a Rohingya refugee who lives at the Kutupalong refugee camp in Bangladesh, in an op-ed for Al Jazeera last week. “The social media company allowed anti-Rohingya sentiments to fester on its pages. Its algorithms promoted disinformation that eventually translated into real-life violence.”   

The cases, filed last week, were brought by two Dublin-based law firms that represent 17 Rohingya refugees. Ireland, the cases will likely argue, is the most appropriate country to bring the litigation, since Meta’s European headquarters are in Dublin and the content moderation was done there.

This is not the first time that the Rohingya have instigated legal proceedings against Meta. In 2021, a $150bn lawsuit was filed in California but later dismissed for a number of reasons, including a provision in the US’ Communications Decency Act that gives immunity to platforms like Facebook which publish third-party content. It remains to be seen whether the Irish courts will come to a similar judgement.

Either way, the case raises a number of serious concerns about the spread of disinformation on social media platforms. 

Ironically, the sixth anniversary of the Rohingya genocide fell on the same day that the EU’s Digital Services Act came into effect. The law is designed to empower and protect users online against harmful or illegal content, disinformation, and the violation of privacy and free speech.

Let’s hope Big Tech falls in line with these new regulations and the EU enforces them properly, so that we have a better chance of preventing tragedies like the Myanmar genocide.

Google expands transparency for ads, content, policy as EU’s new rules kick in

Google will provide more information on targeted advertising, content decisions, and product policies as it strives to comply with the EU’s new content moderation rules, the tech giant said on Thursday.

Known as the Digital Services Act (DSA), the bloc’s landmark legislation kicks in today for 19 big tech companies. It sets multiple far-reaching measures designed to empower and protect users online against disinformation, harmful or illegal content, and the violation of privacy and free speech.

“We will be expanding the Ads Transparency Center, a global searchable repository of advertisers across all our platforms, to meet specific DSA provisions and providing additional information on targeting for ads served in the European Union,” Google’s Laurie Richardson, VP Trust and Safety, and Jennifer Flannery O’Connor, VP Product Management YouTube, wrote. The center, which was launched in March, allows users to learn more about the ads they see.

Google is also broadening the scope of its transparency reports to include information about content moderation decisions across more of its services, such as Google Play, Search, and Maps.

In addition, the company is rolling out another Transparency Center, where users can access information about product policies, find reporting and appeal tools, and get hold of its transparency reports.

[Image: Google’s new Transparency Center]

Meanwhile, the tech giant is increasing data access for researchers seeking to “understand more about how Google Search, YouTube, Google Maps, Google Play, and Shopping work in practice” and conducting research “related to understanding systemic content risks in the EU.”

Google is among the numerous big tech companies announcing changes to adhere to the DSA’s rules. Facebook and Instagram have launched non-personalised (aka chronological) feeds, while Amazon has opened a new channel for flagging illegal or counterfeit products.

Here’s how the EU’s Digital Services Act changes the content rules for big tech

The EU’s latest crackdown on big tech begins before the end of the week. Starting on Friday, a total of 19 major companies must adhere to the sweeping rules of the Digital Services Act (DSA).

Essentially, the DSA is a landmark content moderation rulebook, designed to empower and protect users online against harmful or illegal content, disinformation, and the violation of privacy and free speech.

The tech firms listed are not only the first required to comply, but also the ones facing the act’s strictest and most far-reaching measures. That’s because they reach at least 45 million European active users per month, which, according to the EU, translates to their “significant societal and economic impact.”

The legislation, which is expected to come fully into force in February 2024, will eventually apply to all businesses providing digital services within the bloc. Violations could result in fines of up to 6% of a company’s global revenue, or even a temporary ban from the union.

“The whole logic of our rules is to ensure that technology serves people and the societies that we live in — not the other way around,” said Margrethe Vestager, Executive VP of the Commission.

“The Digital Services Act will bring about meaningful transparency and accountability of platforms and search engines and give consumers more control over their online life.”

Who’s on the naughty list?

Ranging from social media platforms to online marketplaces and search engines, the list so far includes: Facebook, TikTok, X (formerly Twitter), YouTube, Instagram, LinkedIn, Pinterest, Snapchat, Amazon, Booking, AliExpress, Zalando, Google Shopping, Wikipedia, Google Maps, Google and Apple’s mobile app stores, Google’s Search, and Microsoft’s Bing.

5 key DSA obligations big tech has to follow

1. Remove illegal content

The designated companies are required to identify and remove from their platforms any content that is illegal under EU or national law.

In the case of online marketplaces, this also means tracing sellers and conducting random checks on existing product databases to ensure protection against counterfeit and dangerous goods or services.

2. Ban some types of targeted ads

The big tech giants can no longer use targeted advertising that’s based on profiling of minors or sensitive personal data, such as ethnicity, sexual orientation, or political views.

3. Increase user empowerment

Users will gain a set of new rights, such as flagging illegal content, contesting decisions made by online platforms if their own content is removed, and even seeking compensation for rule breaches. They’ll also be able to see information about advertising practices, including if and why an ad targets them specifically, with the option to opt out.

4. Constrain harmful content and disinformation

The selected companies will further have to perform an annual risk assessment and take corresponding measures to mitigate disinformation, election manipulation, hoaxes, cyber violence, and harm to vulnerable groups — while balancing freedom of expression. These measures are also subject to independent audits.

5. Be transparent

In an unprecedented move, the platforms will need to disclose long-guarded information on their data, systems, and algorithms to authorities and vetted researchers. They’ll also have to provide public access to their risk assessment and auditing reports alongside a repository with information about the ads they run.

“Complying with the DSA is not a punishment – it is an opportunity for these online platforms to reinforce their brand value and reputation as a trustworthy site,” Commissioner Thierry Breton said in a statement.

Who has complied so far?

Among the social media platforms, TikTok is introducing an “additional reporting option” that allows European consumers to flag illegal content, including advertising. It will also provide them with information about its content moderation decisions and let them turn off personalisation. Targeted advertising for minors aged 13-17 will stop.

Snapchat has made similar changes. For instance, personalised advertising for minors is no longer allowed, and adult users have more transparency and control over the ads they see. Meanwhile, Meta has launched non-personalised content feeds on Facebook and Instagram.

Among the online marketplaces, Zalando has introduced content flagging systems on its website, while Amazon has opened a channel for flagging illegal products and is now providing more information about third-party merchants.

Nevertheless, both companies have taken legal action against the EU, claiming they have been “unfairly” added to the list.

The DSA’s potential impact

Historically, the rules for data sharing and online content moderation have been determined by big tech. The DSA aims to change that by setting an unprecedented touchstone, much like the EU’s regulatory efforts with the GDPR and the upcoming AI Act.

“The European Digital Services Act is trying to respond to online corporate practices that are considered inappropriate by the European Union,” David Frautschy Heredia, Senior Director of European Government and Regulatory Affairs at the Internet Society (ISOC), told TNW.

“The impact of the act is being closely watched. By nature, corporate organisations operate across jurisdictions, and so their potentially damaging behaviour is not limited to a single region. Moreover, the EU has come to be widely regarded as the benchmark authority for digital regulation and as the example to follow.”

But as parts of the act and its implementation are still to be defined, experts are also pointing to potential risks.

“It is of crucial importance to ensure that these new obligations do not have unintended consequences, or they may be inadvertently mirrored across the globe,” Frautschy Heredia noted, adding that misaligned policy could lead to the “fragmentation” of the internet.

Meanwhile, Mozilla, alongside 66 civil society organisations across the globe, is urging the Commission to ensure that the DSA will not lead to censorship and the violation of fundamental rights.

This watermarking tool detects piracy, IP theft, and deepfakes from a single photo

A new watermarking tool detects pirated content from a single smartphone photo or screenshot.

The system was developed by castLabs, a video software provider based in Berlin. The company says the tech can protect videos, images, documents, and designs from piracy and intellectual property theft. It can also spot media that’s been manipulated for disinformation.

To safeguard the content, an algorithm first embeds a hidden watermark within a digital asset. Detailed user data, including IDs, IP addresses, and session information, can be stored in the watermark. 

When a user takes a picture of the content, a cloud-based extractor scans the image to identify the watermarked data. CastLabs told TNW that the results arrive in seconds — even if there’s image distortion or obstruction. The system can also withstand multiple distortions and attacks, including camcording, screenshots, and screencasting.
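
CastLabs hasn’t disclosed how its watermarking actually works, so the Python sketch below only illustrates the general principle of hiding a recoverable payload in pixel data, using a naive least-significant-bit scheme with invented function names and payload format. A real forensic watermark has to survive being photographed, so it would embed the data far more robustly than this.

```python
import numpy as np

def embed_payload(pixels: np.ndarray, payload: bytes) -> np.ndarray:
    """Hide the payload in the least significant bit of each pixel value."""
    bits = np.unpackbits(np.frombuffer(payload, dtype=np.uint8))
    flat = pixels.ravel().copy()
    if bits.size > flat.size:
        raise ValueError("payload too large for this carrier image")
    flat[:bits.size] = (flat[:bits.size] & 0xFE) | bits  # overwrite LSBs
    return flat.reshape(pixels.shape)

def extract_payload(pixels: np.ndarray, n_bytes: int) -> bytes:
    """Read the payload back out of the pixel LSBs."""
    bits = pixels.ravel()[:n_bytes * 8] & 1
    return np.packbits(bits).tobytes()

# Session metadata of the kind castLabs says it stores, serialised naively.
carrier = np.random.randint(0, 256, size=(256, 256), dtype=np.uint8)
payload = b"user:4711|ip:203.0.113.9|session:ab12"
marked = embed_payload(carrier, payload)
assert extract_payload(marked, len(payload)) == payload
```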

Once the watermark is retrieved, content owners can identify unauthorized users and sources of leaks in the supply chain. They can then pursue immediate remedial actions — such as stream takedowns — or use the data as evidence in legal actions. 

“We’re empowering them to safeguard their intellectual property, protect their monetisation models, and enforce their rights in the digital landscape,” said Michael Stattman, co-founder of castLabs.

That landscape is constantly evolving. The combination of a cost of living crisis, a proliferation of streaming services, and crackdowns on password sharing has sparked a surge in online piracy.

Consumers are also increasingly supportive of the practice.  A recent survey found that 23% of US internet households view piracy as acceptable — up from 14% in 2019.

Content owners, however, are getting hit hard in the pocket. According to the Global Innovation Policy Center, overall content piracy costs as much as $71bn (€65.3bn) annually in lost revenues.

Those losses have created a big opening for castLabs, which says its tech is unique in the market. While competing solutions typically require tens of seconds of footage to extract concealed data, the castLabs system only needs a single video frame.

The company’s ambitions extend beyond the entertainment industry. CastLabs also envisions media outlets using the watermarks to detect deepfakes and fake news, while governments can apply them to secret information.

How online safety tech is failing women

A team of researchers at King’s College London has demonstrated that, despite being the more vulnerable group when it comes to cyber abuse, women engage much less than men with security and privacy tech.

Led by Dr Kovila Coopamootoo, lecturer in Computer Science within the Cyber Security Group at King’s College, the research revealed a significant gender gap in the utilisation of tools designed to keep users safe online. 

From a survey of 600 people, split in near-equal proportion between men and women, the team concluded that habits for protecting oneself from cyber harassment and crime differ greatly between the two groups.

Just over 75% of the women surveyed based their online safety practices on advice from family members and friends (intimate and social connections, or ISCs), compared with under 24% of men. The vast majority of men (70%) instead sought advice from online sources such as forums, reviews, and specialist pages, something only about 35% of women did.

Now, your cousin Luke might be the world’s greatest cyber security buff, in which case asking for his advice is probably a sensible thing to do. However, the researchers argue that these ISCs may not be particularly qualified to provide the most accurate or helpful information. Furthermore, the often better-informed cyber security expertise available on the internet is evidently not reaching the female population.

Gender norms at play

The study also found that women were much less likely to use a broader spectrum of online safety tools, such as a VPN, multi-factor authentication, firewall, anti-spyware, anti-malware, and anti-tracking. Instead, they were more inclined to rely on simpler and more readily available security measures, such as software updates and strong passwords. 

“Women make up over 50% of the population yet they’re not able to effectively engage with digital safety advice, and security/privacy technologies,” Dr Coopamootoo stated. “The stark gender gap in access and participation, evidenced in our research, highlights the gender norms at play in online safety and the role that gender identity plays in staying safe online.”

How to increase gender equality and fairness in online safety

The research was presented at this year’s edition of the Usenix Security Symposium in Anaheim, California, an event sponsored by the likes of Meta, Google, TikTok, and IBM. The paper’s authors added a number of recommendations for developers and policymakers on how to make digital safety more inclusive. 

This includes providing trustworthy support in coping with often complex harm situations in accessible language, tailoring advice to threatening situations more often experienced by women, and ensuring women and girls are equipped with the digital skills needed to comprehend online safety protocols. However, researchers also stressed the importance of designing advice and technology that anyone can use to gain optimal protection, irrespective of skill level. 

“With online safety considered a social good and its equity advocated by international human rights organisations, we need action to bring about greater gender equity in online safety opportunities, access, participation, and outcomes,” Dr Coopamootoo added. “This requires re-envisaging the current models that don’t best serve women, so that we can make the online experience safer and fairer for everyone.”

Gender-based cyber violence is a constantly growing area of concern with implications at both individual and societal levels. As if the obvious toll on mental health and quality of life were not enough, a separate study commissioned by the European Parliament in 2021 estimated the overall costs of cyber harassment and cyber stalking of women in Europe at between €49 billion and €89.3 billion.

UK’s promise to protect encrypted messaging is ‘delusional,’ say critics

The British government’s promise to protect encryption has been pilloried by security experts and libertarians.

The dispute stems from a section of the Online Safety Bill. Under the legislation, messaging apps would be forced to provide access to private communications when requested by the regulator Ofcom.

Proponents say the measures will combat child abuse, but critics are aghast at the threat to privacy. They fear the plans will facilitate mass surveillance and damage the UK’s tech sector. Signal, WhatsApp, and five other messaging apps have all threatened to leave the country if the law is passed.

The British government has sought to allay their concerns. On Thursday, technology minister Michelle Donelan said the government is “not anti-encryption” and will protect user privacy.

“Technology is in development to enable you to have encryption as well as to be able to access this particular information, and the safety mechanism that we have is very explicit that this can only be used for child exploitation and abuse,” Donelan told the BBC.

Her remarks were quickly lambasted by critics. Matthew Hodgson, CEO of secure messaging app Element — which is used by the government’s own Ministry of Defence — described Donelan’s claims as “factually incorrect.”

“No technology exists which allows encryption AND access to ‘this particular information.’ Detecting illegal content means ALL content must be scanned in the first place,” he said.

In response to these concerns, the government’s cybersecurity chiefs argue they can protect both children and privacy. To do this, they propose using client-side scanning, which involves software on the user’s device checking content for suspicious material before it is encrypted and sent. Many experts, however, argue that this tech is impossible to build.
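
As a rough illustration of what client-side scanning means in practice, here is a minimal Python sketch under assumed names: content is checked against a blocklist on the device, before encryption. Real proposals typically rely on perceptual rather than exact hashes so that re-encoded copies still match; nothing here reflects an actual government design.

```python
import hashlib
from pathlib import Path

# Hypothetical blocklist of digests of known illegal files. Real proposals
# use perceptual hashes so near-duplicates still match; an exact SHA-256
# match is used here purely to keep the sketch simple.
BLOCKLIST = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}

def clear_to_send(attachment: Path) -> bool:
    """Return True if the attachment may be encrypted and sent."""
    digest = hashlib.sha256(attachment.read_bytes()).hexdigest()
    return digest not in BLOCKLIST

# The crux of the dispute: this check runs on the device *before*
# encryption, so every attachment gets inspected, matched or not.
```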

“You cannot turn scanning on and off,” Hodgson said. “The government still does not understand how technology or encryption works, despite numerous experts explaining this to them.

“Its own ‘Safety Tech Challenge Fund’ failed to deliver an impossible solution to scan messages without breaking encryption. What more will it take for the government to finally accept how encryption works?”

Tech firms are not alone in opposing the plans.  Civil rights groups and libertarians have also denounced Donelan’s comments.

[Image: Michelle Donelan, appointed to lead the new department for science, innovation, and technology in February]

Matthew Lesh, director of public policy and communications at the IEA, a free-market think-tank, described the government’s claims as “delusional.”

“There is no magic technological solution in existence or development that can protect user privacy while scanning their messages,” he said. “It’s a contradiction in terms.”

These arguments, however, have struggled to convince the general public.

According to a recent YouGov survey, there is strong support for the government’s plans. Almost three-quarters (73%) of respondents backed the requirement for tech that can identify child abuse in encrypted messages.

The NSPCC — which commissioned the research — said the critics are “out of step” with the public on the issue.

Defenders of encryption are running out of time to win more hearts and minds. The Online Safety Bill is expected to become law later this autumn.

Deep learning model can steal data by listening to keystrokes with 95% accuracy

A team of UK researchers has trained a deep learning model to interpret keystrokes remotely based solely on audio.

By recording keystrokes to train the model, they were able to predict what was typed on the keyboard with up to 95% accuracy. This accuracy dropped to 93% when using Zoom to train the system.

According to the new research, this means that sensitive information like passwords and messages could be interpreted by anyone within earshot of someone typing away on their laptop, either by recording them in person or virtually through a video call.

These so-called acoustic side-channel attacks have become much simpler in recent years due to the abundance of microphone-bearing devices like smartphones that can capture high-quality audio. 

Combined with the rapid advancements in machine learning, this makes these kinds of attacks feasible and a lot more dangerous than previously thought. Basically, you could hack sensitive information armed with nothing more than a microphone and a machine learning algorithm.

“The ubiquity of keyboard acoustic emanations makes them not only a readily available attack vector, but also prompts victims to underestimate (and therefore not try to hide) their output,” the researchers said. “For example, when typing a password, people will regularly hide their screen but will do little to obfuscate their keyboard’s sound.”

The team conducted the test using a MacBook Pro, pressing 36 individual keys 25 times apiece. These recordings gave the machine learning model the basis to learn which sound corresponds to which keystroke.

The keystrokes were recorded both via a phone placed close to the laptop and via Zoom. The waveforms differed subtly enough between keys for the model to recognise each one with a startling degree of accuracy.
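
The paper’s own model is a deep network trained on spectrograms of those recordings; the Python sketch below substitutes a simple support vector machine on log-spectrogram features purely to show the shape of the pipeline. The directory layout and feature-vector length are assumptions for illustration, not details from the study.

```python
import numpy as np
from pathlib import Path
from scipy.io import wavfile
from scipy.signal import spectrogram
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def features(clip: Path) -> np.ndarray:
    """Log-spectrogram of one isolated keystroke, flattened to a fixed length."""
    rate, audio = wavfile.read(clip)
    if audio.ndim > 1:
        audio = audio.mean(axis=1)  # mix stereo recordings down to mono
    _, _, sxx = spectrogram(audio, fs=rate, nperseg=256)
    return np.resize(np.log(sxx + 1e-10).ravel(), 2048)  # crop/tile to 2048 values

# Assumed layout: keystrokes/<key>/<sample>.wav, e.g. 25 clips per key.
X, y = [], []
for key_dir in Path("keystrokes").iterdir():
    if key_dir.is_dir():
        for clip in key_dir.glob("*.wav"):
            X.append(features(clip))
            y.append(key_dir.name)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), y, test_size=0.2)
clf = SVC().fit(X_tr, y_tr)  # the paper used a deep model; an SVM keeps the sketch small
print(f"held-out accuracy: {clf.score(X_te, y_te):.0%}")
```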

To prevent someone hacking your keystrokes, the researchers recommend changing your typing style, using randomised passwords rather than passwords containing full words, adding randomly generated fake keystrokes for voice call-based attacks, and using biometric tools like fingerprint or face scanning.
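
The randomised-password advice, at least, is easy to act on with Python’s standard library; this generic snippet is an illustration, not tooling from the study:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_password(length: int = 20) -> str:
    """A password with no word structure for an audio-trained model to exploit."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
```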

Taiwan taps European satellite firms to protect wartime communications

As fears of war with China escalate, Taiwan is seeking European support for its communications systems.

The island nation recently tapped services from two satellite firms on the continent: the UK’s OneWeb and Luxembourg’s SES.

To avoid disruption during a potential conflict, SES will implement a medium-Earth orbit (MEO) satellite network, Taipei’s digital ministry announced last week.

SES confirmed the project to TNW on Monday. The company said it aimed to give Taiwan “an emergency backup network in case of damages to its current terrestrial networks.”

The move is part of broader plans to strengthen Taiwan’s digital resilience. The country hopes non-geostationary satellites can protect services such as online calls, video conferencing, and live broadcasting.

In addition to the SES deployment, Taiwan has sought assistance from OneWeb. Audrey Tang, the country’s digital minister, visited the British company in June to discuss deploying a low-Earth orbit (LEO) system. Tang said OneWeb — which is backed by the UK government — will provide satellite coverage for all of Taiwan by the end of the year.

The network would expand the communications options on the island. By the end of 2024, the nation aims to install over 700 satellite receivers, which will provide a backup network during any disaster.

The plans show the growing role that satellite firms are playing in global conflicts. They’ve been particularly prominent in Ukraine, where SpaceX’s Starlink has provided internet services since Russia’s full-scale invasion. The network has kept the country connected during disruption to terrestrial systems.

The war in Ukraine has also strengthened the case for the EU’s satellite constellation. Known as IRIS2, the network is designed to maintain internet access during crisis situations. The $6.2bn (€5.7bn) project is scheduled to launch by 2027.

“For the first time, the European Union will have its own telecommunications constellation, in particular in low orbits, the new frontier for telecommunication satellites,” MEP Christophe Grudler, rapporteur on the EU secure connectivity programme, said last year.

Norway fines Meta 1 MILLION crowns per day over data harvesting for behavioural ads

Meta’s litany of European privacy sanctions in 2023 just got a little longer. After a €390mn fine for illegal personalised ads, another €5.5mn hit for similar violations in WhatsApp, and a GDPR record €1.2bn for unsafe data transfers, this week yet another punishment arrived — and the sentence did not disappoint.

Norwegian regulators have demanded a gloriously round figure that would make Dr Evil proud: 1 MILLION crowns (€89,000) per day. The penalties are due to begin on August 14, but Meta wants a temporary injunction against the order, Reuters reports.

The ruling follows news last month that Norway will temporarily ban behavioural ads on Facebook and Instagram over privacy breaches. At the time, the country’s data protection authority, Datatilsynet, warned that Meta would also be fined if it didn’t address the violations.

The regulator cited various risks of using online behaviour for ad targeting, from fuelling discrimination to undermining democracy.

“Invasive commercial surveillance for marketing purposes is one of the biggest risks to data protection on the internet today. Users must have sufficient control over their own data, and any tracking must be limited,” Tobias Judin, head of Datatilsynet’s International Sector, said in a statement.

As the backlash grew, Meta pledged last week to seek consent from EU users for showing personalised ads. But this measure failed to impress Datatilsynet. 

“According to Meta, this will take several months, at the very earliest, for them to implement… And we don’t know what the consent mechanism will look like,” Judin told Reuters.

As a result, Datatilsynet will issue the daily fines until at least November 3 — but the regulator has also threatened to make them permanent.

For Meta, such sanctions have become painfully familiar. In May, the social media behemoth was found to have amassed over half of the €4bn in total fines for GDPR breaches.

Meta succumbs to EU pressure, will seek user consent for targeted ads

Meta operates a highly targeted advertising model based on the swathes of personal data you share on its platforms, and it makes tens of billions of dollars off it each year. 

While these tactics are unlikely to end altogether in the near future, the company could soon offer users in the EU the chance to “opt-in” to the ads, the Wall Street Journal reports. 

Since April, Meta has offered users in Europe the chance to opt out from personalised ads but only if they complete a lengthy form on its help pages. That process has likely limited the number of people who have opted out. 

An opt-in option, however, would give users protection by default. That doesn’t mean you won’t be targeted by generalised ads, based on broader demographic data, such as your age, but it would prevent highly personalised ads based on, for instance, the videos you watch or the posts you share. Under EU law, a user has to be able to access Meta’s platforms even if they opt out.   
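
In code terms, the difference between the two regimes comes down to the default applied when a user has never answered a consent prompt. This minimal Python sketch, with invented names rather than anything from Meta’s systems, shows why opt-in amounts to protection by default:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AdPreferences:
    # None means the user has never answered a consent prompt.
    behavioural_consent: Optional[bool] = None

def may_personalise(prefs: AdPreferences, opt_in_regime: bool) -> bool:
    """Under opt-in, silence means no personalised ads; under opt-out, it means yes."""
    if prefs.behavioural_consent is not None:
        return prefs.behavioural_consent
    return not opt_in_regime  # the default flips with the regime

silent_user = AdPreferences()
assert may_personalise(silent_user, opt_in_regime=False) is True   # current opt-out model
assert may_personalise(silent_user, opt_in_regime=True) is False   # proposed opt-in model
```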

Meta said the change stems from an order in January by Ireland’s Data Protection Commissioner (DPC) to reassess the legal basis of how it targets ads. The proposal comes amid mounting pressure from EU privacy regulators.

In May, the DPC slapped Meta with a record €1.2bn fine over its transfer of user data to the US, which the watchdog said could violate citizens’ privacy under the GDPR. In fact, Meta has amassed half of the €5bn in fines issued under the GDPR since the law came into force five years ago. A new set of privacy rules, the Digital Markets Act (DMA), also forced the tech giant to delay the release of its new social app Threads in the EU.

As the EU tightens its grip and the fines pile up, it seems that the social media giant might finally be buckling. The change to its ads consent policy could come into effect in just three months’ time if the EU agrees to the proposal. 

Regulations limiting the company’s use of personal data for advertising could be a significant hit to its main source of income. The company said the Europe region generated 23% of its $31.5 billion in advertising revenue in the second quarter of this year.

However, Max Schrems, the privacy campaigner who filed the original forced consent GDPR complaint against Meta back in May 2018, says he will be closely monitoring exactly how Meta applies the new policy. 

“We will see if Meta is actually applying the consent requirement to all use of personal data for ads,” he said via his privacy rights not-for-profit noyb. “So far they talk about ‘highly personalized’ or ‘behavioural’ ads and it’s unclear what this means.

“The GDPR covered all types of personalization, also on things like your age, which is not a ‘behaviour’. We will obviously continue litigation if Meta will not apply the law fully.”

Critical infrastructure radio tech ‘easily hacked’ through deliberate backdoor

Dutch researchers have found vulnerabilities in TETRA — a radio technology used across the world to control critical infrastructure such as power grids, gas pipelines, and trains. 

The researchers, Jos Wetzels, Carlo Meijer, and Wouter Bokslag of cybersecurity firm Midnight Blue, found a deliberate backdoor in the encryption algorithm of these radios — made by Motorola, Damm, Hytera, and others — that was “easy” to hack.

“The results of this research are serious,” said Bart Jacobs, a professor of computer security at Radboud University Nijmegen. “It is serious for the government, but also for business. It concerns vital infrastructure whose functioning can be affected by serious attacks.”

According to researchers, attackers could hack the network to send malicious commands that would disrupt critical infrastructure. They could also listen in on emergency services. “These are all realistic scenarios,” said Wetzels. 

Worryingly, critical infrastructure from all over the world is controlled using TETRA.  

In the Netherlands, the port of Rotterdam, several public transport companies, and most airports use the system. C2000, the communication system of the police, fire brigade, ambulance services, and parts of the Ministry of Defence, is also based on TETRA. 

Many critical infrastructure authorities in Germany, France, Spain, and other European countries rely on the network, and so do several equivalent entities in the USA, according to a WIRED investigation. TETRA is estimated to be in use in 120 countries.

And you don’t even have to be an expert hacker to tap the network. According to Midnight Blue, you could crack the system in a minute using simple hardware such as a radio and dongle. Once cracked, hackers could send malicious commands to critical infrastructure undetected.  
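
Some quick arithmetic shows why a key-reduction backdoor is so severe. The researchers’ public disclosure described one TETRA cipher’s 80-bit key being reduced to roughly 32 bits of effective strength; plugging that figure into an assumed trial rate for cheap hardware (not a measurement from the research) lands close to the quoted one-minute crack:

```python
# Back-of-the-envelope brute-force times for the two key sizes.
FULL_KEYSPACE = 2 ** 80         # nominal TETRA key length
REDUCED_KEYSPACE = 2 ** 32      # reported effective strength after reduction
TRIALS_PER_SECOND = 50_000_000  # assumed decryption attempts/sec on cheap hardware

full_years = FULL_KEYSPACE / TRIALS_PER_SECOND / (3600 * 24 * 365)
reduced_seconds = REDUCED_KEYSPACE / TRIALS_PER_SECOND

print(f"80-bit key: ~{full_years:.1e} years to exhaust")
print(f"32-bit key: ~{reduced_seconds:.0f} seconds to exhaust")
```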

The researchers first uncovered the vulnerabilities in 2021 and immediately reported them to the Dutch National Cyber Security Centre. Over the last two years, the NCSC has been hard at work informing the governments of various countries about the dangerous loopholes.

The Midnight Blue team also took it upon themselves to notify as many manufacturers and users of the technology as possible. Presumably, the researchers and the authorities only now deemed it safe enough to make the information public.

Going forward, Midnight Blue warns that anyone using radio technologies should check with their manufacturer to determine if their devices are using TETRA and what fixes or mitigations are available. 

Aside from their day jobs, Wetzels, Meijer, and Bokslag are so-called ethical hackers. Meijer previously cracked the technology behind the OV-chipcard, the Dutch transport card, and Bokslag hacked the wireless car keys of Peugeot, Opel, and Fiat. Both did so to make the technology more secure.

Despite their best efforts to raise awareness of the TETRA backdoor vulnerabilities, the researchers say that many critical infrastructure companies are nonresponsive, and for all we know, could still be at risk. 

New UK law could spark ‘default surveillance of everyone’s devices’

New laws proposed in the UK could normalise surveillance of personal devices, experts have warned.

The concerns stem from a planned update to the Investigatory Powers Act (IPA). When the original rules passed in 2016, critics described them as the “most extreme spying powers ever seen.” They’re now set to become even more intrusive.

Under the new proposals, messaging services would have to clear security features with the government before releasing them. The Home Office could also demand that features are disabled — without telling the public. Apple has threatened to remove FaceTime and iMessage from the UK if the plans are enforced.

Another prominent critic is Harry Halpin, the CEO of Nym Technologies, a privacy startup based in Switzerland. According to Halpin, the rules could lead to “surveillance as the default on everyone’s devices.”

“Secretly toying with security features designed to keep users safe is short-sighted and could be exploited by adversaries, whether they are criminal or political,” he told TNW.

One of Halpin’s key concerns involves the impact on the impending Online Safety Bill.

Ostensibly an attempt to remove harmful content from the internet, the bill has sparked fears that backdoors to end-to-end encryption will be mandated. Apple, Signal, and WhatsApp have all refused to comply with the requirement.

Combined with the IPA, the legislation could make enforcement “politically motivated,” said Halpin.

“The thing about backdoors when it comes to communications technologies is that when you open them, you open them to anyone shrewd enough to exploit them,” he warned. 
