Twitter


X can’t stop spread of explicit, fake AI Taylor Swift images

Escalating the situation —

Will Swifties’ war on AI fakes spark a deepfake porn reckoning?


Explicit, fake AI-generated images sexualizing Taylor Swift began circulating online this week, quickly sparking mass outrage that may finally force a mainstream reckoning with harms caused by spreading non-consensual deepfake pornography.

A wide variety of deepfakes targeting Swift began spreading on X, the platform formerly known as Twitter, yesterday.

Ars found that some posts have been removed, while others remain online, as of this writing. One X post was viewed more than 45 million times over approximately 17 hours before it was removed, The Verge reported. Seemingly fueling more spread, X promoted these posts under the trending topic “Taylor Swift AI” in some regions, The Verge reported.

The Verge noted that since these images started spreading, “a deluge of new graphic fakes have since appeared.” According to Fast Company, these harmful images were posted on X but soon spread to other platforms, including Reddit, Facebook, and Instagram. Some platforms, like X, ban sharing of AI-generated images but seem to struggle with detecting banned content before it becomes widely viewed.

Ars’ AI reporter Benj Edwards warned in 2022 that AI image-generation technology was rapidly advancing, making it easy to train an AI model on just a handful of photos before it could be used to create fake but convincing images of that person in infinite quantities. That is seemingly what happened to Swift, and it’s currently unknown how many different non-consensual deepfakes have been generated or how widely those images have spread.

It’s also unknown what consequences have resulted from spreading the images. At least one verified X user had their account suspended after sharing fake images of Swift, The Verge reported, but Ars reviewed posts on X from Swift fans calling out others who allegedly shared the images and whose accounts remain active. Swift fans have also been uploading countless favorite photos of Swift to bury the harmful images and prevent them from appearing in various X searches. Her fans seem dedicated to reducing the spread however they can, with some posting different addresses, seemingly in attempts to dox an X user who, they’ve alleged, is the initial source of the images.

Neither X nor Swift’s team has yet commented on the deepfakes, but it seems clear that solving the problem will require more than just requesting removals from social media platforms. The AI model trained on Swift’s images is likely still out there, likely procured through one of the known websites that specialize in making fine-tuned celebrity AI models. As long as the model exists, anyone with access could crank out as many new images as they wanted, making it hard for even someone with Swift’s resources to make the problem go away for good.

In that way, Swift’s predicament might raise awareness of why creating and sharing non-consensual deepfake pornography is harmful, perhaps moving the culture away from persistent notions that nobody is harmed by non-consensual AI-generated fakes.

Swift’s plight could also inspire regulators to act faster to combat non-consensual deepfake porn. Last year, she inspired a Senate hearing after a Live Nation scandal frustrated her fans, triggering lawmakers’ antitrust concerns about the leading ticket seller, The New York Times reported.

Some lawmakers are already working to combat deepfake porn. Congressman Joe Morelle (D-NY) proposed a law criminalizing deepfake porn earlier this year after teen boys at a New Jersey high school used AI image generators to create and share non-consensual fake nude images of female classmates. Under that proposed law, anyone sharing deepfake pornography without an individual’s consent risks fines and being imprisoned for up to two years. Damages could go as high as $150,000 and imprisonment for as long as 10 years if sharing the images facilitates violence or impacts the proceedings of a government agency.

Elsewhere, the UK’s Online Safety Act restricts any illegal content from being shared on platforms, including deepfake pornography. Platforms that fail to moderate such content risk fines of more than $20 million, or 10 percent of their global annual turnover, whichever amount is higher.

The UK law, however, is controversial because it requires companies to scan private messages for illegal content. That makes it practically impossible for platforms to provide end-to-end encryption, which the American Civil Liberties Union has described as vital for user privacy and security.

As regulators tangle with legal questions and social media users with moral ones, some AI image generators have moved to limit models from producing NSFW outputs. Some, like Stability AI, the company behind Stable Diffusion, did this by removing large quantities of sexualized images from the models’ training data. Others, like Microsoft’s Bing image creator, make it easy for users to report NSFW outputs.

But so far, keeping up with reports of deepfake porn seems to fall squarely on social media platforms’ shoulders. Swift’s battle this week shows how unprepared even the biggest platforms currently are to handle blitzes of harmful images seemingly uploaded faster than they can be removed.



Elon Musk drops price of X gold checks amid rampant crypto scams


Cryptocurrency and phishing scams are currently surging on X (formerly Twitter)—hiding under the guise of gold and gray checkmarks intended to mark “Verified Organizations,” reports have warned this week.

These scams seem to mostly commandeer dormant X accounts purchased online through dark web marketplaces, according to a whitepaper released by the digital threat monitoring platform CloudSEK. But the scams have also targeted high-profile X users who claim that they had enhanced security measures in place to protect against these hacks.

This suggests that X scammers are growing more sophisticated at a time when X has launched an effort to sell even more gold checks at lower prices through a basic tier announced this week.

Most recently, the cyber threat intelligence company Mandiant—which is a subsidiary of Google—confirmed its X account was hijacked despite enabling two-factor authentication. According to Bleeping Computer, the hackers used Mandiant’s account to “distribute a fake airdrop that emptied cryptocurrency wallets.”

A Google spokesperson declined to comment on how many users may have been scammed, but Mandiant is investigating and promised to share results when its probe concludes.

In September, a similar fate befell Ethereum co-founder Vitalik Buterin, who had his account hijacked by hackers. The bad actors posted a fake offer for free non-fungible tokens (NFTs) with a link to a fake website designed to empty cryptocurrency wallets. The post was only up for about 20 minutes but drained $691,000 in digital assets from Buterin’s unsuspecting followers, according to CloudSEK’s research.

Another group monitoring cryptocurrency and phishing scams linked to X accounts is MalwareHunterTeam (MHT), Bleeping Computer reported. This week, MHT has flagged additional scams targeting politicians’ accounts, including a Canadian senator, Amina Gerba, and a Brazilian politician, Ubiratan Sanderson.

On X, gold ticks are supposed to reassure users that an account can be trusted by designating that an account is affiliated with an official organization or company. Gray ticks signify an account is linked to government organizations. CloudSEK estimated that hijacked gold and gray checks could be sold online for between $1,200 and $2,000, depending on how old the account is or how many followers it has. Bad actors can also buy accounts affiliated with gold accounts for $500 each.

A CloudSEK spokesperson told Ars that its team is “in the process of reporting the matter” to X.

X did not immediately respond to Ars’ request for comment.

CloudSEK predicted that scams involving gold checks would continue to be a problem so long as selling gold and gray checks remains profitable.

“It is evident that threat actors would not budge from such profit-making businesses anytime soon,” CloudSEK’s whitepaper said.

For organizations seeking to avoid being targeted by hackers on X, CloudSEK recommends strengthening brand monitoring on the platform, enhancing security settings, and closing out any dormant accounts. It’s also wise for organizations to cease storing passwords in a browser, and instead use a password manager that’s less vulnerable to malware attacks, CloudSEK said. Organizations on X may also want to monitor activity on any apps that become connected to X, Bleeping Computer advised.



Elon Musk told bankers they wouldn’t lose any money on Twitter purchase

Value destruction —

Lenders unlikely to get even 60 cents on the dollar for the bonds and loans.


Elon Musk privately told some of the bankers who lent him $13 billion to fund his leveraged buyout of Twitter that they would not lose any money on the deal, according to five people familiar with the matter.

The verbal guarantees were made by Musk to banks as a way to reassure the lenders as the value of the social media site, now rebranded as X, fell sharply after he completed the acquisition last year.

Despite the assurances, the seven banks that lent money to the billionaire for his buyout—Morgan Stanley, Bank of America, Barclays, MUFG, BNP Paribas, Mizuho and Société Générale—are facing serious losses on the debt if and when they eventually sell it.

The sources did not specify when Musk’s assurances were made, although one noted Musk had made them on several occasions. But the billionaire’s behavior, both in attempting to back out of the takeover in 2022 and more recently in alienating advertisers, has more broadly stymied the banks’ efforts to offload the debt since he engineered the takeover.

Large hedge funds and credit investors on Wall Street held conversations with the banks late last year, offering to buy the senior-most portion of the debt at roughly 65 cents on the dollar. But in recent interviews with the Financial Times, several said there was no price at which they would buy the bonds and loans, given their inability to gauge whether Linda Yaccarino, X’s chief executive, could turn the business around.

One multibillion-dollar firm that specializes in distressed debt called X’s debt “uninvestable.”

Selling the $12.5 billion of bonds and loans below 60 cents on the dollar—a price many investors believe the banks would be lucky to achieve in the current market—would imply losses of $4 billion or more before accounting for X’s interest payments, writedowns that have not yet been publicly reported by the syndicate of lenders, according to FT calculations. The debt is split between $6.5 billion of term loans, $6 billion of senior and junior bonds, and a $500 million revolver.

Morgan Stanley, Bank of America, Barclays, MUFG, BNP Paribas, Mizuho and Société Générale declined to comment. A spokesperson for X declined to comment. Musk did not return a request for comment.

The banks have held the debt on their balance sheets instead of selling at a steep loss in the hope that X’s performance will improve following a series of cost-cutting measures. Several people involved in the transaction noted that there was no plan to sell the debt imminently, with one saying there was no guarantee the banks would be able to offload the debt even in 2024.

The people involved in the deal cautioned that Musk’s guarantee was not based on any formal contract. One said they understood it as a boastful statement that the entrepreneur had never let his lenders down.

“I have never lost money for those who invest in me and I am not starting now,” he told Axios earlier this month, when asked about a separate fundraising push by his company X.ai Corp.

Some on Wall Street view Musk’s personal guarantees with skepticism, given that he tried to back out of his agreement to buy Twitter despite a watertight contract, before relenting.

Nevertheless, the guarantee from a man whose net worth Forbes pegs at about $243 billion has helped some of the bankers make the pitch to their internal committees that they can ascribe a higher price to the debt while they hold it on their balance sheets.

Morgan Stanley, the largest lender on the deal, in January disclosed $356 million in mark-to-market losses on corporate loans it planned to sell and loan hedges. Banks rarely report specific losses tied to an individual bond or loan, and often report write-downs of multiple deals together.

Wall Street banks were saddled with the Twitter buyout loan at the same time they were holding a smattering of other hung bridge loans—deals they were forced to fund themselves after failing to raise cash in public bond and loan markets. The FT has previously reported on large losses tied to other hung loans from that period, including the buyouts of technology company Citrix and television ratings provider Nielsen.

How the debt has been marked on bank balance sheets has been an open question for traders and investors across Wall Street, given how much X’s business has deteriorated since Musk bought the company.

Musk, already out of favor with marketers for loosening content moderation, last month lost more advertisers after endorsing an antisemitic post. In November he followed up by telling brands that were boycotting the business over his actions to “go fuck” themselves, criticizing Disney’s Bob Iger in particular.

According to a report last week from market intelligence firm Sensor Tower, in November 2023 total US ad spend among the top 100 advertisers on X was down nearly 45 percent compared with October 2022, prior to Musk’s takeover.

© 2023 The Financial Times Ltd. All rights reserved. Not to be redistributed, copied, or modified in any way.



Elon Musk’s new AI bot, Grok, causes stir by citing OpenAI usage policy

You are what you eat —

Some experts think xAI used OpenAI model outputs to fine-tune Grok.


Grok, the AI language model created by Elon Musk’s xAI, went into wide release last week, and people have begun spotting glitches. On Friday, security tester Jax Winterbourne tweeted a screenshot of Grok denying a query with the statement, “I’m afraid I cannot fulfill that request, as it goes against OpenAI’s use case policy.” That made ears perk up online since Grok isn’t made by OpenAI—the company responsible for ChatGPT, which Grok is positioned to compete with.

Interestingly, xAI representatives did not deny that this behavior occurs with its AI model. In reply, xAI employee Igor Babuschkin wrote, “The issue here is that the web is full of ChatGPT outputs, so we accidentally picked up some of them when we trained Grok on a large amount of web data. This was a huge surprise to us when we first noticed it. For what it’s worth, the issue is very rare and now that we’re aware of it we’ll make sure that future versions of Grok don’t have this problem. Don’t worry, no OpenAI code was used to make Grok.”

In reply to Babuschkin, Winterbourne wrote, “Thanks for the response. I will say it’s not very rare, and occurs quite frequently when involving code creation. Nonetheless, I’ll let people who specialize in LLM and AI weigh in on this further. I’m merely an observer.”

A screenshot of Jax Winterbourne’s X post about Grok talking like it’s an OpenAI product.

However, Babuschkin’s explanation seems unlikely to some experts because large language models typically do not spit out their training data verbatim, which might be expected if Grok picked up some stray mentions of OpenAI policies here or there on the web. Instead, the concept of denying an output based on OpenAI policies would probably need to be trained into it specifically. And there’s a very good reason why this might have happened: Grok was fine-tuned on output data from OpenAI language models.

“I’m a bit suspicious of the claim that Grok picked this up just because the Internet is full of ChatGPT content,” said AI researcher Simon Willison in an interview with Ars Technica. “I’ve seen plenty of open weights models on Hugging Face that exhibit the same behavior—behave as if they were ChatGPT—but inevitably, those have been fine-tuned on datasets that were generated using the OpenAI APIs, or scraped from ChatGPT itself. I think it’s more likely that Grok was instruction-tuned on datasets that included ChatGPT output than it was a complete accident based on web data.”

As large language models (LLMs) from OpenAI have become more capable, it has been increasingly common for some AI projects (especially open source ones) to fine-tune an AI model using synthetic data—training data generated by other language models. Fine-tuning adjusts the behavior of an AI model toward a specific purpose, such as getting better at coding, after an initial training run. For example, in March, a group of researchers from Stanford University made waves with Alpaca, a version of Meta’s LLaMA 7B model that was fine-tuned for instruction-following using outputs from OpenAI’s GPT-3 model called text-davinci-003.

On the web you can easily find several open source datasets collected by researchers from ChatGPT outputs, and it’s possible that xAI used one of these to fine-tune Grok for some specific goal, such as improving instruction-following ability. The practice is so common that there’s even a WikiHow article titled, “How to Use ChatGPT to Create a Dataset.”
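To give a sense of what such datasets typically look like, here is a minimal, hypothetical sketch (the field names and the helper function are illustrative, not taken from any specific project) of how scraped ChatGPT-style conversation logs get flattened into the prompt/response pairs used for instruction tuning:

```python
import json

def to_instruction_jsonl(conversations):
    """Flatten chat logs into prompt/response JSON Lines for fine-tuning.

    `conversations` is a list of message lists; each message is a dict
    with "role" ("user" or "assistant") and "content" keys -- the rough
    shape that many publicly scraped ChatGPT datasets share.
    """
    lines = []
    for messages in conversations:
        # Pair each user turn with the assistant reply that follows it.
        for user_msg, reply in zip(messages[::2], messages[1::2]):
            if user_msg["role"] == "user" and reply["role"] == "assistant":
                lines.append(json.dumps({
                    "prompt": user_msg["content"],
                    "response": reply["content"],
                }))
    return "\n".join(lines)

logs = [[
    {"role": "user", "content": "Name a US state."},
    {"role": "assistant", "content": "Ohio."},
]]
print(to_instruction_jsonl(logs))
```

A fine-tuned model trained on enough pairs like these will tend to imitate the assistant side—including, apparently, its refusal boilerplate.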

It’s one of the ways AI tools can be used to build more complex AI tools in the future, much like how people began to use microcomputers to design more complex microprocessors than pen-and-paper drafting would allow. However, in the future, xAI might be able to avoid this kind of scenario by more carefully filtering its training data.

Even though borrowing outputs from others might be common in the machine-learning community (despite it usually being against terms of service), the episode particularly fanned the flames of the rivalry between OpenAI and X that extends back to Elon Musk’s criticism of OpenAI in the past. As news spread of Grok possibly borrowing from OpenAI, the official ChatGPT account wrote, “we have a lot in common” and quoted Winterbourne’s X post. As a comeback, Musk wrote, “Well, son, since you scraped all the data from this platform for your training, you ought to know.”



Meta’s Threads reaches 100M users, despite delayed EU launch

Early this morning, Meta’s new Twitter rival, Threads, hit 100 million sign-ups, less than a week after launch. This makes Threads the fastest-growing online platform in history, dethroning ChatGPT, which took two months to reach the same number of users.

When Mark Zuckerberg commented (posting on Threads, naturally) on hitting 70 million users on Friday last week, he stated this was already “way beyond our expectations.” However, there is still a long way to go to reach the two-billion-strong user base accessible through Instagram.

It is worth noting that the record-breaking growth is despite Threads currently being unavailable on EU app stores due to privacy concerns. Meanwhile, even within the bloc, non-Apple addicts can download the app using an Android package kit, or APK. (UK iPhone users, download at will.)

Some major organisations such as French media outlets Le Monde and Agence France-Presse have reportedly found ways to circumvent geographical challenges to signing up, as have, ahem, we.

The rivalry thus far: cage fights and trade secrets

Naturally, the launch of the new microblogging app from the people who brought us Facebook did not go by without protests from camp Elon. Apart from the cage match challenge (which we are all not-so-secretly hoping might still take place), Musk has threatened to sue Threads over “stealing trade secrets.” 


Meta issued a response (again, posting on Threads) saying: “No one on the Threads engineering team is a former Twitter employee — that’s just not a thing.” However, the launch of Threads could not have been more optimally timed, coinciding with a bunch of missteps from Musk, including limiting the number of tweets users could view in a day.

When Twitter (founded in 2006, for those of you who may have been too young to remember) went public in 2013, it had 200 million users. The takeover by Musk went through in October 2022, sparking some controversy, and causing the app to lose about 32 million users. 

However, the latest statistics show that Twitter still has 368 million monthly active users worldwide. Furthermore, it counts 206 million monetisable daily users, with about 90% of Twitter’s revenue coming from advertising in 2022. 

Of course, if Threads continues to increase user numbers at this rate, it could indeed become a serious contender for the text-based throne. Initially, there will be no ads on Threads while the company “fine-tunes” the app. However, Zuckerberg has said that once Threads is on its way to one billion users, Meta will start thinking about monetising the newest addition to its portfolio.



Twitter’s withdrawal from disinformation code draws ire of EU politicians


Story by Linnea Ahlgren

Linnea is the senior editor at TNW, having joined in April 2023. She has a background in international relations and covers clean and climate tech, AI and quantum computing. But first, coffee.

Following the decision to pull Twitter out of the EU’s voluntary disinformation Code of Practice last week, reactions have not been long in coming. Upon receiving the news, the bloc’s industry chief Thierry Breton said that Twitter would still need to abide by EU rules soon enough.

Or, as Monsieur Breton put it (tweeted, in fact) when referring to the Digital Services Act (DSA), which will make fighting disinformation a legal obligation from 25 August, “You can run, but you cannot hide.” 

Twitter leaves EU voluntary Code of Practice against disinformation.

But obligations remain. You can run but you can’t hide.

Beyond voluntary commitments, fighting disinformation will be legal obligation under #DSA as of August 25.

Our teams will be ready for enforcement.

— Thierry Breton (@ThierryBreton) May 26, 2023

Commissioner Breton was joined in his vexation today by France’s Digital Minister Jean-Noël Barrot. As reported by Politico, Barrot stated to the radio network France Info that, should Twitter fail to follow the new (and obligatory) rules laid down by the DSA, the company would get kicked out of the European Union. 

“Disinformation is one of the gravest threats weighing on our democracies,” said Barrot, as translated by Politico. “Twitter, if it repeatedly doesn’t follow our rules, will be banned from the EU.” 

First-of-its-kind self-regulatory rules

The code of conduct requires companies to measure their work on combating disinformation and issue regular reports on their progress. This includes things such as demonetising the dissemination of disinformation, ensuring transparency of political advertising, enhancing the cooperation with fact-checkers, and providing researchers with better data.

Google, TikTok, Microsoft, and Meta are all voluntary signatories. Twitter, obviously, was also part of the group up until last week.

There has been no official statement (or tweet for that matter) on the decision to leave, but it seems Elon Musk has changed his mind from four years ago, which was when the industry first agreed on the self-regulatory EU rules.

In an interview at the time, he stated that, “I think there should be regulations on social media to the degree that it negatively affects the public good. We can’t have like willy-nilly proliferation of fake news, that’s crazy.”

Blocking accounts at the behest of governments has increased

A $44 billion impulse purchase or not, changes have abounded at Twitter since Elon bought it. More than supplying the accounts of dead people with little blue ticks, it would seem that the new “era of free speech” he proclaimed is highly mutable.

Since Musk’s takeover, Twitter has actually become more compliant with government authority requests, including those of India and Turkey to block journalists, foreign politicians, and even poets. 

Musk has previously stated that he believes free speech to be that “which matches the law.” However, with the recent withdrawal from the disinformation code of conduct, he has demonstrated he is not averse to extracting his recently acquired company from regulations.

By “free speech”, I simply mean that which matches the law.

I am against censorship that goes far beyond the law.

If people want less free speech, they will ask government to pass laws to that effect.

Therefore, going beyond the law is contrary to the will of the people.

— Elon Musk (@elonmusk) April 26, 2022

For once, it is not a tech lord threatening to leave the EU, but rather the bloc intimating that it might kick one out. Let’s see which way the DSA cookie will crumble. 



Surprise: business leaders should be compassionate – here’s the evidence to prove it

In the month after Elon Musk triumphantly announced his takeover of Twitter with his now famous “the bird is freed” tweet, he implemented a large-scale cull of the social media platform’s global workforce. While Musk’s rationale for this move was to make Twitter more efficient, how he carried out the cuts was widely criticised as showing a lack of compassion for employees.

Luckily, the public have spoken and Musk has promised to step down after embarrassingly being voted out in his own poll. But what can we learn from this and what kind of leader does Twitter need moving forward?

Twitter might instead benefit from a more thoughtful and caring approach to leadership. Research shows compassionate leaders boost staff morale and productivity, not to mention projecting a more positive image of an organisation and its brand to the world.

Compassion in this context means a leader who is understanding and empathetic, and who strives to help their employees. This kind of leadership is needed now more than ever. Businesses are facing difficult times because of the lasting effects of the pandemic and the rising cost of living. The UK had already been experiencing a slump in productivity growth since the 2008 financial crisis and a decline in the standard of living, which is set to continue over the next two years. Brexit hasn’t helped this situation.

Such testing times warrant organizational leadership by compassionate and competent people with sound judgment and effective coordination skills. This also applies to political leadership. The UK has seen a lack of this in recent months while dealing with “partygate”, reported bullying and harassment in government offices, and the dire effects of recent leadership decisions around the economy.

International leaders aren’t doing much better. The US appears to have become far more polarised, leading to the Capitol riots, and has suffered accusations of a “leadership vacuum” during the pandemic. In the EU, compassionate leadership appears to have been in short supply based on slow responses to COVID and the energy crisis. All of these examples suggest a need for more compassionate leadership.

What is a good leader?

Research shows that good leadership helps companies to be more competitive and boost performance, particularly concerning innovation and flexibility. One study argues that good leaders win followers because of three main attributes: sound judgment, expertise, and coordination skills. These qualities allow leaders to lead by example.

Unfortunately, however, not all leaders fit this bill. A recent Europe-wide study found 13% of workers have “bad” bosses, although participants tended to score their bosses worse on competence than consideration. Still, poor leadership can negatively affect workers’ morale, wellbeing, and productivity. A review of studies in this area reported that worker wellbeing tends to be better served when companies — and their leaders — allow workers to have some control and provide more opportunities for their voices to be heard and for greater participation in making decisions.

In addition to the competence and coordination skills highlighted in a lot of research to date, my research shows that “soft leadership skills” are also important. This is about being compassionate and making others — employees in particular, but also suppliers and customers — feel important. Leaders with such “people skills” are not just technically competent, they can also look at an issue from a human perspective, thinking about how it might affect people.

My recently published research used nationally representative data from the 2004 and 2011 Workplace Employment Relations Surveys, which polled more than 3,000 organizations and over 35,000 workers. They were asked to score their managers on a five-point scale in terms of certain soft leadership skills, chosen to measure the impartiality, trustworthiness, and empathy of leaders.

These employees were asked whether their managers:

  • could be relied on to keep their promises
  • were sincere in attempting to understand employees’ views
  • dealt with employees honestly
  • understood that employees had responsibilities outside work
  • encouraged people to develop their skills
  • treated employees fairly
  • and maintained good relations with employees.

The results suggest that workers’ perception of good quality leadership is also positively affected by managers being upbeat when discussing organisational performance. This kind of leadership boosts workers’ wellbeing, helping employees experience greater job satisfaction and lower levels of job anxiety.

This research suggests that compassionate leaders help to both enhance company performance and boost worker wellbeing. It shows that improving the quality of leadership is worthwhile. This can be achieved with recruitment, appraisal, and training of leaders that elevate soft leadership skills.

Good leaders matter. As organizations and society in general face particularly difficult times, compassionate leadership could make a real difference to future business success.

Getinet Astatike Haile, Associate Professor in Industrial Economics, University of Nottingham

This article is republished from The Conversation under a Creative Commons license. Read the original article.



Yes, company values and vision matter when looking for a job


Elon Musk’s takeover of Twitter earlier this year has shined a bright light on how quickly the sands can shift under employees’ feet. “Please note that Twitter will do lots of dumb things in the coming months. We will keep what works & change what doesn’t,” the SpaceX CEO tweeted of his mission for Twitter 2.0. Some of those changes have, so far, included mass layoffs, some of which are now resulting in lawsuits from workers who say they weren’t given proper notice of termination, or sufficient severance pay. Musk has also reinstated former president Donald Trump to the platform,…

This story continues at The Next Web
