Does Mono fit between the Chilean cab sav and Argentinian malbec, or is it more of an orange, maybe? (credit: Getty Images)
Microsoft has donated the Mono Project, an open-source framework that brought its .NET platform to non-Windows systems, to the Wine community. WineHQ will be the steward of the Mono Project upstream code, while Microsoft will encourage Mono-based apps to migrate to its open source .NET framework.
As Microsoft notes on the Mono Project homepage, the last major release of Mono was in July 2019. Mono was “a trailblazer for the .NET platform across many operating systems” and was the first implementation of .NET on Android, iOS, Linux, and other operating systems.
Ximian, Novell, SUSE, Xamarin, Microsoft—now Wine
Mono began as a project of Miguel de Icaza, co-creator of the GNOME desktop. De Icaza led Ximian (originally Helix Code), aiming to bring Microsoft’s then-new .NET platform to Unix-like systems. Ximian was acquired by Novell in 2003.
By 2011, however, Novell, on its way to being acquired into obsolescence, was not doing much with Mono, and de Icaza started Xamarin to push Mono for Android. Novell (through its SUSE subsidiary) and Xamarin reached an agreement in which Xamarin would take over the IP and the customers using Mono inside Novell/SUSE.
Microsoft open-sourced most of .NET in 2014, then took it further, acquiring Xamarin entirely in 2016, putting Mono under an MIT license, and bundling Xamarin offerings into various open source projects. Mono now exists as a repository that may someday be archived, though Microsoft promises to keep binaries around for at least four years. Those who want to keep using Mono are directed to Microsoft’s “modern fork” of the project inside .NET.
What does this mean for Mono and Wine? Not much at first. Wine, a compatibility layer for Windows apps on POSIX-compliant systems, has already made use of Mono code in fixes and has its own Mono engine. By donating Mono to Wine, Microsoft has, at a minimum, erased the last bit of concern anyone might have had about the company’s control of the project. It’s a very different, open-source-conversant Microsoft making this move, of course, but regardless, it’s a good gesture.
Indian IT firm Infosys has been accused of being “exploitative” after allegedly sending job offers to thousands of engineering graduates but still not onboarding any of them after as long as two years. The recent graduates have reportedly been told they must do repeated, unpaid training in order to remain eligible to work at Infosys.
Last week, the Nascent Information Technology Employees Senate (NITES), an Indian advocacy group for IT workers, sent a letter [PDF], shared by The Register, to Mansukh Mandaviya, India’s Minister of Labour and Employment. It requested that the Indian government intervene “to prevent exploitation of young IT graduates by Infosys.” The letter, signed by NITES president Harpreet Singh Saluja, claimed that NITES received “multiple” complaints from recent engineering graduates “who have been subjected to unprofessional and exploitative practices” from Infosys after being hired for system engineer and digital specialist engineer roles.
According to NITES, Infosys sent these people offer letters as early as April 22, 2022, after engaging in a college recruitment effort from 2022–2023 but never onboarded the graduates. NITES has previously said that “over 2,000 recruits” are affected.
Unpaid “pre-training”
NITES claims the people sent job offers were asked to participate in an unpaid, virtual “pre-training” that took place from July 1, 2024, until July 24, 2024. Infosys’ HR team reportedly told the recent graduates at that time that onboarding plans would be finalized by August 19 or September 2. But things didn’t go as anticipated, NITES’ letter claimed, leaving the would-be hires with “immense frustration, anxiety, and uncertainty.”
The letter reads:
Despite successfully completing the pre-training, the promised results were never communicated, leaving the graduates in limbo for over 20 days. To their shock, instead of receiving their joining dates, these graduates were informed that they needed to retake the pre-training exam offline, once again without any remuneration.
The Register reported today that Infosys recruits were subjected to “multiple unpaid virtual and in-person training sessions and assessments,” citing emails sent to recruits. It also said that recruits were told they would no longer be considered for onboarding if they didn’t attend these sessions, at least one of which is six weeks long, per The Register.
CEO claims recruits will work at Infosys eventually
Following NITES’ letter, Infosys CEO Salil Parekh claimed this week that the graduates would start their jobs but didn’t provide more details about when they would start or why there have been such lengthy delays and repeated training sessions. Speaking to Indian news site Press Trust of India, Parekh said:
Every offer that we have given, that offer will be someone who will join the company. We changed some dates, but beyond that everyone will join Infosys and there is no change in that approach.
Notably, in an earnings call last month [PDF], Infosys CFO Jayesh Sanghrajka said that Infosys is “looking at hiring 15,000 to 20,000” recent graduates this year, “depending on how we see the growth.” It’s unclear if that figure includes the 2,000 people NITES is concerned about.
In March, Infosys reported having 317,240 employees, which represented its first decrease in employee count since 2001. Parekh also recently claimed Infosys isn’t expecting layoffs relating to emerging technologies like AI. In its most recent earnings report, Infosys reported a 5.1 percent year-over-year (YoY) increase in profit and a 2.1 percent YoY increase in revenues.
NITES has previously argued that because of the delays, Infosys should offer “full salary payments for the period during which onboarding has been delayed” or, if onboarding isn’t feasible, that Infosys help the recruited people find alternative jobs elsewhere within Infosys.
Infosys accused of hurting Indian economy
NITES’ letter argues that Infosys has already negatively impacted India’s economic growth, stating:
These young engineering graduates are integral to the future of our nation’s IT industry, which plays a pivotal role in our economy. By delaying their careers and subjecting them to unpaid work and repeated assessments, Infosys is not only wasting their valuable time but also undermining the contributions they could be making to India’s growth.
Infosys hasn’t explained why the onboarding of thousands of recruits has taken longer to begin than expected. One potential challenge is logistics. Infosys has also previously delayed onboarding in relation to the COVID-19 pandemic, which hit India particularly hard.
Additionally, India is dealing with a job shortage. Two years is a long time to wait to start a job, but many may have minimal options. A June 2024 study of Indian hiring trends [PDF] reported that IT job hiring in hardware and networking declined 9 percent YoY, and hiring in software and software services declined 5 percent YoY. The Indian IT sector saw attrition rates drop from 27 percent in 2022 to 16 to 19 percent last year, per Indian magazine Frontline. This has contributed to there being fewer IT jobs available in the country, including entry-level positions. With people holding onto their jobs, companies have also scaled back hiring. Infosys, for example, didn’t do any campus hiring in 2023 or 2024, and neither did India-headquartered Tata Consultancy Services, Frontline noted.
Over the past two years, Infosys has maintained a pool of recruits to draw from at a time when India is expected to face an IT skills gap in the coming years even as recent IT graduates struggle to find opportunities. However, the company risks losing the people it recruited if they decide to look elsewhere, and in the meantime those recruits are left coping with financial and mental health strains while appealing for government intervention.
A man peers over a glass partition, seeking transparency.
The Open Source Initiative (OSI) recently unveiled its latest draft definition for “open source AI,” aiming to clarify the ambiguous use of the term in the fast-moving field. The move comes as some companies like Meta release trained AI language model weights and code with usage restrictions while using the “open source” label. This has sparked intense debates among free-software advocates about what truly constitutes “open source” in the context of AI.
For instance, Meta’s Llama 3 model, while freely available, doesn’t meet the traditional open source criteria as defined by the OSI for software because it imposes license restrictions on usage due to company size or what type of content is produced with the model. The AI image generator Flux is another “open” model that is not truly open source. Because of this type of ambiguity, we’ve typically described AI models that include code or weights with restrictions or lack accompanying training data with alternative terms like “open-weights” or “source-available.”
To address the issue formally, the OSI—which is well-known for its advocacy for open software standards—has assembled a group of about 70 participants, including researchers, lawyers, policymakers, and activists. Representatives from major tech companies like Meta, Google, and Amazon also joined the effort. The group’s current draft (version 0.0.9) definition of open source AI emphasizes “four fundamental freedoms” reminiscent of those defining free software: giving users of the AI system the freedom to use it for any purpose without having to ask permission, to study how it works, to modify it for any purpose, and to share it with or without modifications.
By establishing clear criteria for open source AI, the organization hopes to provide a benchmark against which AI systems can be evaluated. This will likely help developers, researchers, and users make more informed decisions about the AI tools they create, study, or use.
Truly open source AI may also shed light on potential software vulnerabilities of AI systems, since researchers will be able to see how the AI models work behind the scenes. Compare this approach with an opaque system such as OpenAI’s ChatGPT, which is more than just a GPT-4o large language model with a fancy interface—it’s a proprietary system of interlocking models and filters, and its precise architecture is a closely guarded secret.
OSI’s project timeline indicates that a stable version of the “open source AI” definition is expected to be announced in October at the All Things Open 2024 event in Raleigh, North Carolina.
“Permissionless innovation”
In a press release from May, the OSI emphasized the importance of defining what open source AI really means. “AI is different from regular software and forces all stakeholders to review how the Open Source principles apply to this space,” said Stefano Maffulli, executive director of the OSI. “OSI believes that everybody deserves to maintain agency and control of the technology. We also recognize that markets flourish when clear definitions promote transparency, collaboration and permissionless innovation.”
The organization’s most recent draft definition extends beyond just the AI model or its weights, encompassing the entire system and its components.
For an AI system to qualify as open source, it must provide access to what the OSI calls the “preferred form to make modifications.” This includes detailed information about the training data, the full source code used for training and running the system, and the model weights and parameters. All these elements must be available under OSI-approved licenses or terms.
Notably, the draft doesn’t mandate the release of raw training data. Instead, it requires “data information”—detailed metadata about the training data and methods. This includes information on data sources, selection criteria, preprocessing techniques, and other relevant details that would allow a skilled person to re-create a similar system.
The “data information” approach aims to provide transparency and replicability without necessarily disclosing the actual dataset, ostensibly addressing potential privacy or copyright concerns while sticking to open source principles, though that particular point may be up for further debate.
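The draft doesn’t prescribe a machine-readable format for “data information,” so any concrete example is necessarily speculative. As a purely hypothetical sketch (the field names below are illustrative, not OSI’s), the metadata a model release might publish in place of the raw dataset could look something like this:

```python
# Hypothetical "data information" record published alongside an open model in
# place of the raw training data. Field names are illustrative only; the OSI
# draft does not define a concrete schema.
import json

data_information = {
    "data_sources": [
        {"name": "public web crawl", "coverage": "2019-2023", "license": "mixed/unknown"},
        {"name": "permissively licensed code repositories", "license": "MIT/Apache-2.0"},
    ],
    "selection_criteria": "English-language documents above a quality-classifier threshold",
    "preprocessing": ["deduplication", "PII filtering", "BPE tokenization (64k vocabulary)"],
    "excluded_data": "sources flagged for copyright or privacy concerns",
    "replication_notes": "enough detail for a skilled person to assemble a similar corpus",
}

if __name__ == "__main__":
    print(json.dumps(data_information, indent=2))
```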
“The most interesting thing about [the definition] is that they’re allowing training data to NOT be released,” said independent AI researcher Simon Willison in a brief Ars interview about the OSI’s proposal. “It’s an eminently pragmatic approach—if they didn’t allow that, there would be hardly any capable ‘open source’ models.”
Newly discovered Android malware steals payment card data using an infected device’s NFC reader and relays it to attackers, a novel technique that effectively clones the card so it can be used at ATMs or point-of-sale terminals, security firm ESET said.
ESET researchers have named the malware NGate because it incorporates NFCGate, an open source tool for capturing, analyzing, or altering NFC traffic. Short for Near-Field Communication, NFC is a protocol that allows two devices to wirelessly communicate over short distances.
New Android attack scenario
“This is a new Android attack scenario, and it is the first time we have seen Android malware with this capability being used in the wild,” ESET researcher Lukas Stefanko said in a video demonstrating the discovery. “NGate malware can relay NFC data from a victim’s card through a compromised device to an attacker’s smartphone, which is then able to emulate the card and withdraw money from an ATM.”
Lukas Stefanko—Unmasking NGate.
The malware was installed through traditional phishing scenarios, such as the attacker messaging targets and tricking them into installing NGate from short-lived domains that impersonated the banks or official mobile banking apps available on Google Play. Masquerading as a legitimate app for a target’s bank, NGate prompts the user to enter the banking client ID, date of birth, and the PIN code corresponding to the card. The app goes on to ask the user to turn on NFC and to scan the card.
ESET said it discovered NGate being used against three Czech banks starting in November and identified six separate NGate apps circulating between then and March of this year. Some of the apps used in later months of the campaign came in the form of PWAs, short for Progressive Web Apps, which as reported Thursday can be installed on both Android and iOS devices even when settings (mandatory on iOS) prevent the installation of apps available from non-official sources.
The most likely reason the NGate campaign ended in March, ESET said, was the arrest by Czech police of a 22-year-old they said they caught wearing a mask while withdrawing money from ATMs in Prague. Investigators said the suspect had “devised a new way to con people out of money” using a scheme that sounds identical to the one involving NGate.
Stefanko and fellow ESET researcher Jakub Osmani explained how the attack worked:
The announcement by the Czech police revealed the attack scenario started with the attackers sending SMS messages to potential victims about a tax return, including a link to a phishing website impersonating banks. These links most likely led to malicious PWAs. Once the victim installed the app and inserted their credentials, the attacker gained access to the victim’s account. Then the attacker called the victim, pretending to be a bank employee. The victim was informed that their account had been compromised, likely due to the earlier text message. The attacker was actually telling the truth – the victim’s account was compromised, but this truth then led to another lie.
To “protect” their funds, the victim was requested to change their PIN and verify their banking card using a mobile app – NGate malware. A link to download NGate was sent via SMS. We suspect that within the NGate app, the victims would enter their old PIN to create a new one and place their card at the back of their smartphone to verify or apply the change.
Since the attacker already had access to the compromised account, they could change the withdrawal limits. If the NFC relay method didn’t work, they could simply transfer the funds to another account. However, using NGate makes it easier for the attacker to access the victim’s funds without leaving traces back to the attacker’s own bank account. A diagram of the attack sequence is shown in Figure 6.
The researchers said NGate or apps similar to it could be used in other scenarios, such as cloning some smart cards used for other purposes. The attack would work by copying the unique ID of the NFC tag, abbreviated as UID.
“During our testing, we successfully relayed the UID from a MIFARE Classic 1K tag, which is typically used for public transport tickets, ID badges, membership or student cards, and similar use cases,” the researchers wrote. “Using NFCGate, it’s possible to perform an NFC relay attack to read an NFC token in one location and, in real time, access premises in a different location by emulating its UID, as shown in Figure 7.”
Figure 7. Android smartphone (right) that read and relayed an external NFC token’s UID to another device (left). (credit: ESET)
The cloning could all occur in situations where the attacker has physical access to a card or is able to briefly read a card in unattended purses, wallets, backpacks, or smartphone cases holding cards. Performing and emulating such attacks requires the attacker to have a rooted and customized Android device; phones that were infected by NGate didn’t have this requirement.
Phishers are using a novel technique to trick iOS and Android users into installing malicious apps that bypass safety guardrails built by both Apple and Google to prevent unauthorized apps.
Both mobile operating systems employ mechanisms designed to help users steer clear of apps that steal their personal information, passwords, or other sensitive data. iOS bars the installation of all apps other than those available in its App Store, an approach widely known as the Walled Garden. Android, meanwhile, is set by default to allow only apps available in Google Play. Sideloading—or the installation of apps from other markets—must be manually allowed, something Google warns against.
When native apps aren’t
Phishing campaigns making the rounds over the past nine months are using previously unseen ways to work around these protections. The objective is to trick targets into installing a malicious app that masquerades as an official one from the targets’ bank. Once installed, the malicious app steals account credentials and sends them to the attacker in real time over Telegram.
“This technique is noteworthy because it installs a phishing application from a third-party website without the user having to allow third-party app installation,” Jakub Osmani, an analyst with security firm ESET, wrote Tuesday. “For iOS users, such an action might break any ‘walled garden’ assumptions about security. On Android, this could result in the silent installation of a special kind of APK, which on further inspection even appears to be installed from the Google Play store.”
The novel method involves enticing targets to install a special type of app known as a Progressive Web App. These apps rely solely on Web standards to render functionalities that have the feel and behavior of a native app, without the installation restrictions that come with native apps. The reliance on Web standards means PWAs, as they’re abbreviated, will in theory work on any platform running a standards-compliant browser, making them work equally well on iOS and Android. Once installed, users can add PWAs to their home screen, giving them a striking similarity to native apps.
While PWAs exist for both iOS and Android, Osmani’s post uses PWA to refer to the iOS apps and WebAPK to refer to the Android ones.
Installed phishing PWA (left) and real banking app (right). (credit: ESET)
Comparison between an installed phishing WebAPK (left) and real banking app (right). (credit: ESET)
The attack begins with a message sent either by text message, automated call, or through a malicious ad on Facebook or Instagram. When targets click on the link in the scam message, they open a page that looks similar to the App Store or Google Play.
Example of a malicious advertisement used in these campaigns. (credit: ESET)
Phishing landing page imitating Google Play. (credit: ESET)
ESET’s Osmani continued:
From here victims are asked to install a “new version” of the banking application; an example of this can be seen in Figure 2. Depending on the campaign, clicking on the install/update button launches the installation of a malicious application from the website, directly on the victim’s phone, either in the form of a WebAPK (for Android users only), or as a PWA for iOS and Android users (if the campaign is not WebAPK based). This crucial installation step bypasses traditional browser warnings of “installing unknown apps”: this is the default behavior of Chrome’s WebAPK technology, which is abused by the attackers.
Example copycat installation page. (credit: ESET)
The process is a little different for iOS users, as an animated pop-up instructs victims how to add the phishing PWA to their home screen (see Figure 3). The pop-up copies the look of native iOS prompts. In the end, even iOS users are not warned about adding a potentially harmful app to their phone.
Figure 3. iOS pop-up instructions after clicking “Install.” (credit: Michal Bláha)
After installation, victims are prompted to submit their Internet banking credentials to access their account via the new mobile banking app. All submitted information is sent to the attackers’ C&C servers.
The technique is made all the more effective because application information associated with the WebAPKs will show they were installed from Google Play and have been assigned no system privileges.
WebAPK info menu—notice the “No Permissions” at the top and “App details in store” section at the bottom. (credit: ESET)
So far, ESET is aware of the technique being used against customers of banks mostly in Czechia and less so in Hungary and Georgia. The attacks used two distinct command-and-control infrastructures, an indication that two different threat groups are using the technique.
“We expect more copycat applications to be created and distributed, since after installation it is difficult to separate the legitimate apps from the phishing ones,” Osmani said.
On Tuesday, OpenAI announced a partnership with Ars Technica parent company Condé Nast to display content from prominent publications within its AI products, including ChatGPT and a new SearchGPT prototype. It also allows OpenAI to use Condé content to train future AI language models. The deal covers well-known Condé brands such as Vogue, The New Yorker, GQ, Wired, Ars Technica, and others. Financial details were not disclosed.
One immediate effect of the deal will be that users of ChatGPT or SearchGPT will now be able to see information from Condé Nast publications pulled from those assistants’ live views of the web. For example, a user could ask ChatGPT, “What’s the latest Ars Technica article about space?” and ChatGPT could browse the web, pull up the result, attribute it, and summarize it for users while also linking to the site.
In the longer term, the deal also means that OpenAI can openly and officially utilize Condé Nast articles to train future AI language models, including successors to GPT-4o. In this case, “training” means feeding content into an AI model’s neural network so the AI model can better process conceptual relationships.
AI training is an expensive and computationally intense process that happens rarely, usually prior to the launch of a major new AI model, although a secondary process called “fine-tuning” can continue over time. Having access to high-quality training data, such as vetted journalism, improves AI language models’ ability to provide accurate answers to user questions.
It’s worth noting that Condé Nast internal policy still forbids its publications from using text created by generative AI, which is consistent with its AI rules before the deal.
Not waiting on fair use
With the deal, Condé Nast joins a growing list of publishers partnering with OpenAI, including Associated Press, Axel Springer, The Atlantic, and others. Some publications, such as The New York Times, have chosen to sue OpenAI over content use, and there’s reason to think they could win.
In an internal email to Condé Nast staff, CEO Roger Lynch framed the multi-year partnership as a strategic move to expand the reach of the company’s content, adapt to changing audience behaviors, and ensure proper compensation and attribution for using the company’s IP. “This partnership recognizes that the exceptional content produced by Condé Nast and our many titles cannot be replaced,” Lynch wrote in the email, “and is a step toward making sure our technology-enabled future is one that is created responsibly.”
The move also brings additional revenue to Condé Nast, Lynch added, at a time when “many technology companies eroded publishers’ ability to monetize content, most recently with traditional search.” The deal will allow Condé to “continue to protect and invest in our journalism and creative endeavors,” Lynch wrote.
OpenAI COO Brad Lightcap said in a statement, “We’re committed to working with Condé Nast and other news publishers to ensure that as AI plays a larger role in news discovery and delivery, it maintains accuracy, integrity, and respect for quality reporting.”
Still of Procreate CEO James Cuda from a video posted to X.
On Sunday, Procreate announced that it will not incorporate generative AI into its popular iPad illustration app. The decision comes in response to an ongoing backlash from some parts of the art community, which has raised concerns about the ethical implications and potential consequences of AI use in creative industries.
“Generative AI is ripping the humanity out of things,” Procreate wrote on its website. “Built on a foundation of theft, the technology is steering us toward a barren future.”
In a video posted on X, Procreate CEO James Cuda laid out his company’s stance, saying, “We’re not going to be introducing any generative AI into our products. I don’t like what’s happening to the industry, and I don’t like what it’s doing to artists.”
Cuda’s sentiment echoes the fears of some digital artists who feel that AI image synthesis models, often trained on content without consent or compensation, threaten their livelihood and the authenticity of creative work. That’s not a universal sentiment among artists, but AI image synthesis is often a deeply divisive subject on social media, with some taking starkly polarized positions on the topic.
Procreate CEO James Cuda lays out his argument against generative AI in a video posted to X.
Cuda’s video plays on that polarization with clear messaging against generative AI. His statement reads as follows:
You’ve been asking us about AI. You know, I usually don’t like getting in front of the camera. I prefer that our products speak for themselves. I really fucking hate generative AI. I don’t like what’s happening in the industry and I don’t like what it’s doing to artists. We’re not going to be introducing any generative AI into our products. Our products are always designed and developed with the idea that a human will be creating something. You know, we don’t exactly know where this story’s gonna go or how it ends, but we believe that we’re on the right path supporting human creativity.
The debate over generative AI has intensified among some outspoken artists as more companies integrate these tools into their products. Dominant illustration software provider Adobe has tried to avoid ethical concerns by training its Firefly AI models on licensed or public domain content, but some artists have remained skeptical. Adobe Photoshop currently includes a “Generative Fill” feature powered by image synthesis, and the company is also experimenting with video synthesis models.
The backlash against image and video synthesis is not solely focused on creative app developers. Hardware manufacturer Wacom and game publisher Wizards of the Coast have faced criticism and issued apologies after using AI-generated content in their products. Toys “R” Us also faced a negative reaction after debuting an AI-generated commercial. Companies are still grappling with balancing the potential benefits of generative AI with the ethical concerns it raises.
Artists and critics react
A partial screenshot of Procreate’s AI website captured on August 20, 2024.
So far, Procreate’s anti-AI announcement has been met with a largely positive reaction in replies to its social media post. In a widely liked comment, artist Freya Holmér wrote on X, “this is very appreciated, thank you.”
Some of the more outspoken opponents of image synthesis also replied favorably to Procreate’s move. Karla Ortiz, who is a plaintiff in a lawsuit against AI image-generator companies, replied to Procreate’s video on X, “Whatever you need at any time, know I’m here!! Artists support each other, and also support those who allow us to continue doing what we do! So thank you for all you all do and so excited to see what the team does next!”
Artist RJ Palmer, who stoked the first major wave of AI art backlash with a viral tweet in 2022, also replied to Cuda’s video statement, saying, “Now thats the way to send a message. Now if only you guys could get a full power competitor to [Photoshop] on desktop with plugin support. Until someone can build a real competitor to high level [Photoshop] use, I’m stuck with it.”
A few pro-AI users also replied to the X post, including AI-augmented artist Claire Silver, who uses generative AI as an accessibility tool. She wrote on X, “Most of my early work is made with a combination of AI and Procreate. 7 years ago, before text to image was really even a thing. I loved procreate because it used tech to boost accessibility. Like AI, it augmented trad skill to allow more people to create. No rules, only tools.”
Since AI image synthesis continues to be a highly charged subject among some artists, reaffirming support for human-centric creativity could be an effective way for Procreate to differentiate itself from creativity app giant Adobe, to which it currently plays underdog. While some artists may prefer to use AI tools, in an (ideally healthy) app ecosystem with real choice among illustration apps, people can follow their conscience.
Procreate’s anti-AI stance is slightly risky because it might also polarize part of its user base—and if the company changes its mind about including generative AI in the future, it will have to walk back its pledge. But for now, Procreate is confident in its decision: “In this technological rush, this might make us an exception or seem at risk of being left behind,” Procreate wrote. “But we see this road less traveled as the more exciting and fruitful one for our community.”
Still from a Chinese social media video featuring two people imitating imperfect AI-generated video outputs.
It’s no secret that despite significant investment from companies like OpenAI and Runway, AI-generated videos still struggle to achieve convincing realism at times. Some of the most amusing fails end up on social media, which has led to a new response trend on Chinese social media platforms TikTok and Bilibili where users create videos that mock the imperfections of AI-generated content. The trend has since spread to X (formerly Twitter) in the US, where users have been sharing the humorous parodies.
In particular, the videos seem to parody image synthesis videos where subjects seamlessly morph into other people or objects in unexpected and physically impossible ways. Chinese social media users replicate these unusual visual non-sequiturs without special effects by positioning their bodies in unusual ways as new and unexpected objects appear on-camera from out of frame.
This exaggerated mimicry has struck a chord with viewers on X, who find the parodies entertaining. User @theGioM shared one video, seen above. “This is high-level performance arts,” wrote one X user. “art is imitating life imitating ai, almost shedded a tear.” Another commented, “I feel like it still needs a motorcycle the turns into a speedboat and takes off into the sky. Other than that, excellent work.”
An example Chinese social media video featuring two people imitating imperfect AI-generated video outputs.
While these parodies poke fun at current limitations, tech companies are actively attempting to overcome them with more training data (examples analyzed by AI models that teach them how to create videos) and computational training time. OpenAI unveiled Sora in February, which is capable of creating realistic scenes if they closely match examples found in its training data. Runway’s Gen-3 Alpha has similar limitations: It can create brief clips of convincing video within a narrow set of constraints. This means that generated videos of situations outside the dataset often end up hilariously weird.
An AI-generated video that features impossibly-morphing people and animals. Social media users are imitating this style.
It’s worth noting that actor Will Smith beat Chinese social media users to this trend in February by poking fun at a horrific 2023 viral AI-generated video that attempted to depict him eating spaghetti. That may also bring back memories of other amusing video synthesis failures, such as May 2023’s AI-generated beer commercial, created using Runway’s earlier Gen-2 model.
An example Chinese social media video featuring two people imitating imperfect AI-generated video outputs.
While imitating imperfect AI videos may seem strange to some, people regularly make money pretending to be NPCs (non-player characters—a term for computer-controlled video game characters) on TikTok.
For anyone alive during the 1980s, witnessing this fast-changing and often bizarre new media world can cause some cognitive whiplash, but the world is a weird place full of wonders beyond the imagination. “There are more things in Heaven and Earth, Horatio, than are dreamt of in your philosophy,” as Hamlet once famously said. “Including people pretending to be video game characters and flawed video synthesis outputs.”
Roger Stone, former adviser to Donald Trump’s presidential campaign, center, during the Republican National Convention (RNC) in Milwaukee on July 17, 2024. (credit: Getty Images)
Google’s Threat Analysis Group confirmed Wednesday that it observed a threat actor backed by the Iranian government targeting Google accounts associated with US presidential campaigns, in addition to stepped-up attacks on Israeli targets.
APT42, associated with Iran’s Islamic Revolutionary Guard Corps, “consistently targets high-profile users in Israel and the US,” the Threat Analysis Group (TAG) writes. The Iranian group uses hosted malware, phishing pages, malicious redirects, and other tactics to gain access to Google, Dropbox, OneDrive, and other cloud-based accounts. Google’s TAG writes that it reset accounts, sent warnings to users, and blacklisted domains associated with APT42’s phishing attempts.
Among APT42’s tools were Google Sites pages that appeared to be a petition from legitimate Jewish activists, calling on Israel to mediate its ongoing conflict with Hamas. The page was fashioned from image files, not HTML, and an ngrok redirect sent users to phishing pages when they moved to sign the petition.
A petition purporting to be from The Jewish Agency for Israel, seeking support for mediation measures—but signatures quietly redirect to phishing sites, according to Google. (credit: Google)
In the US, Google’s TAG notes that, as with the 2020 elections, APT42 is actively targeting the personal emails of “roughly a dozen individuals affiliated with President Biden and former President Trump.” TAG confirms that APT42 “successfully gained access to the personal Gmail account of a high-profile political consultant,” which may be longtime Republican operative Roger Stone, as reported by The Guardian, CNN, and The Washington Post, among others. Microsoft separately noted last week that a “former senior advisor” to the Trump campaign had his Microsoft account compromised, which Stone also confirmed.
“Today, TAG continues to observe unsuccessful attempts from APT42 to compromise the personal accounts of individuals affiliated with President Biden, Vice President Harris and former President Trump, including current and former government officials and individuals associated with the campaigns,” Google’s TAG writes.
PDFs and phishing kits target both sides
Google’s post details the ways in which APT42 targets operatives in both parties. The broad strategy is to get the target off their email and into channels like Signal, Telegram, or WhatsApp, or possibly a personal email address that may not have two-factor authentication and threat monitoring set up. By establishing trust through sending legitimate PDFs, or luring them to video meetings, APT42 can then push links that use phishing kits with “a seamless flow” to harvest credentials from Google, Hotmail, and Yahoo.
After gaining a foothold, APT42 will often work to preserve its access by generating application-specific passwords inside the account, which typically bypass multifactor tools. Google notes that its Advanced Protection Program, intended for individuals at high risk of attack, disables such measures.
John Hultquist, with Google-owned cybersecurity firm Mandiant, told Wired’s Andy Greenberg that what looks initially like spying or political interference by Iran can easily escalate to sabotage and that both parties are equal targets. He also said that current thinking about threat vectors may need to expand.
“It’s not just a Russia problem anymore. It’s broader than that,” Hultquist said. “There are multiple teams in play. And we have to keep an eye out for all of them.”
On Tuesday, Tokyo-based AI research firm Sakana AI announced a new AI system called “The AI Scientist” that attempts to conduct scientific research autonomously using large language models (LLMs) similar to the ones that power ChatGPT. During testing, Sakana found that its system began unexpectedly attempting to modify its own experiment code to extend the time it had to work on a problem.
“In one run, it edited the code to perform a system call to run itself,” the researchers wrote in Sakana AI’s blog post. “This led to the script endlessly calling itself. In another case, its experiments took too long to complete, hitting our timeout limit. Instead of making its code run faster, it simply tried to modify its own code to extend the timeout period.”
Sakana provided two screenshots of example Python code that the AI model generated for the experiment file that controls how the system operates. The 185-page AI Scientist research paper discusses what the authors call “the issue of safe code execution” in more depth.
Screenshots of example code the AI Scientist wrote to extend its runtime, provided by Sakana AI.
While the AI Scientist’s behavior did not pose immediate risks in the controlled research environment, these instances show why it is important not to let an AI system run autonomously in an environment that isn’t isolated from the world. AI models do not need to be “AGI” or “self-aware” (both hypothetical concepts at present) to be dangerous if allowed to write and execute code unsupervised. Such systems could break existing critical infrastructure or potentially create malware, even if unintentionally.
Sakana AI addressed safety concerns in its research paper, suggesting that sandboxing the operating environment of the AI Scientist can prevent an AI agent from doing damage. Sandboxing is a security mechanism used to run software in an isolated environment, preventing it from making changes to the broader system:
Safe Code Execution. The current implementation of The AI Scientist has minimal direct sandboxing in the code, leading to several unexpected and sometimes undesirable outcomes if not appropriately guarded against. For example, in one run, The AI Scientist wrote code in the experiment file that initiated a system call to relaunch itself, causing an uncontrolled increase in Python processes and eventually necessitating manual intervention. In another run, The AI Scientist edited the code to save a checkpoint for every update step, which took up nearly a terabyte of storage.
In some cases, when The AI Scientist’s experiments exceeded our imposed time limits, it attempted to edit the code to extend the time limit arbitrarily instead of trying to shorten the runtime. While creative, the act of bypassing the experimenter’s imposed constraints has potential implications for AI safety (Lehman et al., 2020). Moreover, The AI Scientist occasionally imported unfamiliar Python libraries, further exacerbating safety concerns. We recommend strict sandboxing when running The AI Scientist, such as containerization, restricted internet access (except for Semantic Scholar), and limitations on storage usage.
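The paper stops at recommendations, but the outer layer of such a sandbox is simple enough to sketch. The snippet below is a minimal illustration of the general idea rather than Sakana’s implementation, and it assumes a Unix host and Python: a generated experiment script runs in a child process whose timeout and resource caps are enforced by the parent, so the script cannot simply edit its way to a longer runtime or unlimited checkpoints.

```python
# Minimal sketch of one sandboxing layer: run untrusted, generated code in a
# separate process with a wall-clock timeout and OS-level resource caps set by
# the parent. Unix-only (uses the resource module); not Sakana AI's code.
import resource
import subprocess
import sys


def run_untrusted_script(path: str, wall_clock_seconds: int = 600) -> int:
    def limit_resources():
        # Cap CPU time so a busy loop can't outlive the experiment budget.
        resource.setrlimit(resource.RLIMIT_CPU, (wall_clock_seconds, wall_clock_seconds))
        # Cap the size of any file the child creates (~1 GiB) so runaway
        # checkpointing can't fill the disk.
        resource.setrlimit(resource.RLIMIT_FSIZE, (2**30, 2**30))

    try:
        proc = subprocess.run(
            [sys.executable, path],
            preexec_fn=limit_resources,  # applied in the child before exec
            timeout=wall_clock_seconds,  # enforced by the parent, not the child
            capture_output=True,
        )
        return proc.returncode
    except subprocess.TimeoutExpired:
        print(f"{path} exceeded {wall_clock_seconds}s and was terminated")
        return -1


if __name__ == "__main__":
    run_untrusted_script("experiment.py")  # hypothetical generated script
```

Containerization and the restricted network access the authors recommend would wrap around a guard like this rather than replace it.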
Endless scientific slop
Sakana AI developed The AI Scientist in collaboration with researchers from the University of Oxford and the University of British Columbia. It is a wildly ambitious project full of speculation that leans heavily on the hypothetical future capabilities of AI models that don’t exist today.
“The AI Scientist automates the entire research lifecycle,” Sakana claims. “From generating novel research ideas, writing any necessary code, and executing experiments, to summarizing experimental results, visualizing them, and presenting its findings in a full scientific manuscript.”
According to this block diagram created by Sakana AI, “The AI Scientist” starts by “brainstorming” and assessing the originality of ideas. It then edits a codebase using the latest in automated code generation to implement new algorithms. After running experiments and gathering numerical and visual data, the Scientist crafts a report to explain the findings. Finally, it generates an automated peer review based on machine-learning standards to refine the project and guide future ideas.
Critics on Hacker News, an online forum known for its tech-savvy community, have raised concerns about The AI Scientist and question if current AI models can perform true scientific discovery. While the discussions there are informal and not a substitute for formal peer review, they provide insights that are useful in light of the magnitude of Sakana’s unverified claims.
“As a scientist in academic research, I can only see this as a bad thing,” wrote a Hacker News commenter named zipy124. “All papers are based on the reviewers trust in the authors that their data is what they say it is, and the code they submit does what it says it does. Allowing an AI agent to automate code, data or analysis, necessitates that a human must thoroughly check it for errors … this takes as long or longer than the initial creation itself, and only takes longer if you were not the one to write it.”
Critics also worry that widespread use of such systems could lead to a flood of low-quality submissions, overwhelming journal editors and reviewers—the scientific equivalent of AI slop. “This seems like it will merely encourage academic spam,” added zipy124. “Which already wastes valuable time for the volunteer (unpaid) reviewers, editors and chairs.”
And that brings up another point—the quality of AI Scientist’s output: “The papers that the model seems to have generated are garbage,” wrote a Hacker News commenter named JBarrow. “As an editor of a journal, I would likely desk-reject them. As a reviewer, I would reject them. They contain very limited novel knowledge and, as expected, extremely limited citation to associated works.”
A Waymo self-driving car in front of Google’s San Francisco headquarters, San Francisco, California, June 7, 2024.
Silicon Valley’s latest disruption? Your sleep schedule. On Saturday, NBC Bay Area reported that San Francisco’s South of Market residents are being awakened throughout the night by Waymo self-driving cars honking at each other in a parking lot. No one is inside the cars, and they appear to be automatically reacting to each other’s presence.
Videos provided by residents to NBC show Waymo cars filing into the parking lot and attempting to back into spots, which seems to trigger honking from other Waymo vehicles. The automatic nature of these interactions—which seem to peak around 4 am every night—has left neighbors bewildered and sleep-deprived.
NBC Bay Area’s report: “Waymo cars keep SF neighborhood awake.”
According to NBC, the disturbances began several weeks ago when Waymo vehicles started using a parking lot off 2nd Street near Harrison Street. Residents in nearby high-rise buildings have observed the autonomous vehicles entering the lot to pause between rides, but the cars’ behavior has become a source of frustration for the neighborhood.
Christopher Cherry, who lives in an adjacent building, told NBC Bay Area that he initially welcomed Waymo’s presence, expecting it to enhance local security and tranquility. However, his optimism waned as the frequency of honking incidents increased. “We started out with a couple of honks here and there, and then as more and more cars started to arrive, the situation got worse,” he told NBC.
The lack of human operators in the vehicles has complicated efforts to address the issue directly since there is no one they can ask to stop honking. That lack of accountability forced residents to report their concerns to Waymo’s corporate headquarters, which had not responded to the incidents until NBC inquired as part of its report. A Waymo spokesperson told NBC, “We are aware that in some scenarios our vehicles may briefly honk while navigating our parking lots. We have identified the cause and are in the process of implementing a fix.”
The absurdity of the situation prompted tech author and journalist James Vincent to write on X, “current tech trends are resistant to satire precisely because they satirize themselves. a car park of empty cars, honking at one another, nudging back and forth to drop off nobody, is a perfect image of tech serving its own prerogatives rather than humanity’s.”
We noted earlier this week that time seems to have run out for Apple’s venerable SuperDrive, which was the last (OEM) option available for folks who still needed to read or create optical media on modern Macs. Andrew’s write-up got me thinking: When was the last time any Ars staffers actually burned an optical disc?
Lee Hutchinson, Senior Technology Editor
It used to be one of the most common tasks I’d do with a computer. As a child of the ’90s, my college years were spent filling and then lugging around giant binders stuffed with home-burned CDs in my car to make sure I had exactly the right music on hand for any possible eventuality. The discs in these binders were all labeled with names like “METAL MIX XVIII” and “ULTRA MIX IV” and “MY MIX XIX,” and part of the fun was trying to remember which songs I’d put on which disc. (There was always a bit of danger that I’d put on “CAR RIDE JAMS XV” to set the mood for a Friday night trip to the movies with all the boys, but I should have popped on “CAR RIDE JAMS XIV” because “CAR RIDE JAMS XV” opens with Britney Spears’ “Lucky”—look, it’s a good song, and she cries in her lonely heart, OK?!—thus setting the stage for an evening of ridicule. Those were just the kinds of risks we took back in those ancient days.)
It took a while to figure out the very last time I burned a disc, but I’ve narrowed it down to two possibilities. The first (and less likely) option is that the last disc I burned was a Windows 7 install disc because I’ve had a Windows 7 install disc sitting in a paper envelope on my shelf for so long that I can’t remember how it got there. The label is in my handwriting, and it has a CD key written on it. Some quick searching shows I have the same CD key stored in 1Password with an “MSDN/Technet” label on it, which means I probably downloaded the image from good ol’ TechNet, to which I maintained an active subscription for years until MS finally killed the affordable version.
But I think the actual last disc I burned is still sitting in my car’s CD changer. It’s been in there so long that I’d completely forgotten about it, and it startled the crap out of me a few weeks back when I hopped in the car and accidentally pressed the “CD” button instead of the “USB” button. It’s an MP3 CD instead of an audio CD, with about 120 songs on it, mostly picked from my iTunes “’80s/’90s” playlist. It’s pretty eclectic, bouncing through a bunch of songs that were the backdrop of my teenage years—there’s some Nena, some Stone Temple Pilots, some Michael Jackson, some Tool, some Stabbing Westward, some Natalie Merchant, and then the entire back half of the CD is just a giant block of like 40 Cure songs, probably because I got lazy and just started lasso-selecting.
It turns out I left CDs the same way I came to them—with a giant mess of a mixtape.
Connor McInerney, Social Media Manager
Like many people, physical media for me is deeply embedded with sentimentality; half the records in my vinyl collection are hand-me-downs from my parents, and every time I put one on, their aged hiss reminds me that my folks were once my age experiencing this music in the same way. This goes doubly so for CDs as someone whose teen years ended with the advent of streaming, and the last CD I burned is perhaps the most syrupy, saccharine example of this media you can imagine—it was a mixtape for the girl I was dating during the summer of 2013, right before we both went to college.
In hindsight this mix feels particularly of its time. I burned it using my MacBook Pro (the mid-2012 model was the last to feature a CD/DVD drive) and made the artwork by physically cutting and pasting a collage together, which I then scanned and added in iTunes as the mix’s digital artwork. I still make mixes for people I care about using Spotify—and I often make custom artwork for said playlists with the help of Photoshop—but considering the effort that used to be required, the process feels unsurprisingly unsatisfying in comparison.
As for the musical contents of the mix, imagine what an 18-year-old Pitchfork reader was listening to in 2013 (Vampire Weekend, Postal Service, Fleet Foxes, Bon Iver, and anything else you might hear playing while shopping at an Urban Outfitters) and you’ve got a pretty close approximation.