Author name: Beth Washington


No cloud needed: Nvidia creates gaming-centric AI chatbot that runs on your GPU

Nvidia has seen its fortunes soar in recent years as its AI-accelerating GPUs have become worth their weight in gold. Most people use their Nvidia GPUs for games, but why not use them for both? Nvidia has just released its experimental G-Assist AI, which runs locally on your GPU alongside your games to help you optimize your PC and get the most out of them. It can do some neat things, but Nvidia isn’t kidding when it says this tool is experimental.

G-Assist is available in the Nvidia desktop app, and it consists of a floating overlay window. After invoking the overlay, you can either type or speak to G-Assist to check system stats or make tweaks to your settings. You can ask basic questions like, “How does DLSS Frame Generation work?” but it also has control over some system-level settings.

By calling up G-Assist, you can get a rundown of how your system is running, including custom data charts created on the fly by G-Assist. You can also ask the AI to tweak your machine, for example, optimizing the settings for a particular game or toggling on or off a setting. G-Assist can even overclock your GPU if you so choose, complete with a graph of expected performance gains.

Nvidia on G-Assist.

Nvidia demoed G-Assist last year with some impressive features tied to the active game. That version of G-Assist could see what you were doing and offer suggestions about how to reach your next objective. The game integration is sadly quite limited in the public version, supporting just a few games, like Ark: Survival Evolved.

There is, however, support for a number of third-party plug-ins that give G-Assist control over Logitech G, Corsair, MSI, and Nanoleaf peripherals. So, for instance, G-Assist could talk to your MSI motherboard to control your thermal profile or ping Logitech G to change your LED settings.


Napster to become a music-marketing metaverse firm after being sold for $207M

Infinite Reality, a media, ecommerce, and marketing company focused on 3D and AI-powered experiences, has entered an agreement to acquire Napster. That means the brand, originally launched in 1999 as a peer-to-peer (P2P) music file-sharing service, is set to be reborn once more. This time, new owners are reshaping the brand into one focused on marketing musicians in the metaverse.

Infinite announced today a definitive agreement to buy Napster for $207 million. The Norwalk, Connecticut-based company plans to turn Napster into a “social music platform that prioritizes active fan engagement over passive listening, allowing artists to connect with, own, and monetize the relationship with their fans.” Jon Vlassopulos, who became Napster CEO in 2022, will continue with his role at the brand.

Since 2016, Napster has been operating as a (legal) streaming service. It claims to have over 110 million high-fidelity tracks, with some supporting lossless audio. Napster subscribers can also listen offline and watch music videos. The service currently starts at $11 per month.

Since 2022, Napster has been owned by Web3 and blockchain firms Hivemind and Algorand. Infinite also develops Web3 tech, and CEO John Acunto told CNBC that Algorand’s blockchain background was appealing, as were Napster’s licenses for streaming millions of songs.

To market musicians, Infinite has numerous ideas for getting Napster users to interact with the platform more than they do with the current streaming service. The company shared goals of using Napster to offer “branded 3D virtual spaces where fans can enjoy virtual concerts, social listening parties, and other immersive and community-based experiences” and more “gamification.” Infinite also wants musicians to use Napster as a platform where fans can purchase tickets for performances, physical and virtual merchandise, and “exclusive digital content.” The 6-year-old firm also plans to give artists the ability to use “AI-powered customer service, sales, and community management agents” and “enhanced analytics dashboards to better understand fan behavior” with Napster.


We’ve outsourced our confirmation biases to search engines

So, the researchers decided to see if they could upend it.

Keeping it general

The simplest way to change these dynamics was simply to change the results returned by the search. So, the researchers ran a number of experiments in which they gave all of the participants the same results, regardless of the search terms they had used. When everybody gets the same results, their opinions after reading them tend to move in the same direction, suggesting that search results can help change people’s opinions.

The researchers also tried giving everyone the results of a broad, neutral search, regardless of the terms they’d entered. This reduced the likelihood that existing beliefs would survive the process of formulating and executing a search. In other words, avoiding the sorts of focused, biased search terms allowed some participants to see information that could change their minds.

Despite all the swapping, participants continued to rate the search results as relevant. So, providing more general search results even when people were looking for more focused information doesn’t seem to harm people’s perception of the service. In fact, Leung and Urminsky found that the AI version of Bing search would reformulate narrow questions into more general ones.

That said, making this sort of change wouldn’t be without risks. There are a lot of subject areas where a search shouldn’t return a broad range of information—where grabbing a range of ideas would expose people to fringe and false information.

Nevertheless, it can’t hurt to be aware of how we can use search services to reinforce our biases. So, in the words of Leung and Urminsky, “When search engines provide directionally narrow search results in response to users’ directionally narrow search terms, the results will reflect the users’ existing beliefs, instead of promoting belief updating by providing a broad spectrum of related information.”

PNAS, 2025. DOI: 10.1073/pnas.2408175122  (About DOIs).


As preps continue, it’s looking more likely NASA will fly the Artemis II mission

NASA’s existing architecture still has a limited shelf life, and the agency will probably have multiple options for transporting astronauts to and from the Moon in the 2030s. A decision on the long-term future of SLS and Orion isn’t expected until the Trump administration’s nominee for NASA administrator, Jared Isaacman, takes office after confirmation by the Senate.

So, what is the plan for SLS?

There are different possible degrees of cancellation. The most draconian would be an immediate order to stop work on Artemis II preparations. This is looking less likely than it did a few months ago and would come with its own costs. It would cost untold millions of dollars to disassemble and dispose of parts of Artemis II’s SLS rocket and Orion spacecraft. Canceling multibillion-dollar contracts with Boeing, Northrop Grumman, and Lockheed Martin would put NASA on the hook for significant termination costs.

Of course, these liabilities would be less than the $4.1 billion NASA’s inspector general estimates each of the first four Artemis missions will cost. Most of that money has already been spent for Artemis II, but if NASA spends several billion dollars on each Artemis mission, there won’t be much money left over to do other cool things.

Another option for NASA might be to set a transition point at which the Artemis program would move off of the Space Launch System rocket, and perhaps even the Orion spacecraft, and switch to new vehicles.

Looking down on the Space Launch System for Artemis II. Credit: NASA/Frank Michaux

Another possibility, which seems to be low-hanging fruit for Artemis decision-makers, could be to cancel the development of a larger Exploration Upper Stage for the SLS rocket. If there are a finite number of SLS flights on NASA’s schedule, it’s difficult to justify the projected $5.7 billion cost of developing the upgraded Block 1B version of the Space Launch System. There are commercial options available to replace the rocket’s Boeing-built Exploration Upper Stage, as my colleague Eric Berger aptly described in a feature story last year.

For now, it looks like NASA’s orange behemoth has a little life left in it. All the hardware for the Artemis II mission has arrived at the launch site in Florida.

The Trump administration will release its fiscal-year 2026 budget request in the coming weeks. Perhaps by then NASA will also have a permanent administrator, and the veil will lift on the White House’s plans for Artemis.


You can now download the source code that sparked the AI boom

On Thursday, Google and the Computer History Museum (CHM) jointly released the source code for AlexNet, the convolutional neural network (CNN) that many credit with transforming the AI field in 2012 by proving that “deep learning” could achieve things conventional AI techniques could not.

Deep learning, which uses multi-layered neural networks that can learn from data without explicit programming, represented a significant departure from traditional AI approaches that relied on hand-crafted rules and features.

The Python code, now available on CHM’s GitHub page as open source software, offers AI enthusiasts and researchers a glimpse into a key moment of computing history. AlexNet served as a watershed moment in AI because it could identify objects in photographs with unprecedented accuracy—correctly classifying images into one of 1,000 categories like “strawberry,” “school bus,” or “golden retriever” with significantly fewer errors than previous systems.
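To make concrete what a “convolutional” network actually computes, here is a toy sketch of the sliding-filter operation that networks like AlexNet stack many layers deep. This is not the released AlexNet code, just an illustrative example assuming NumPy is available:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide a small filter over an image, producing a feature map.
    Each output cell is the sum of an image patch weighted by the kernel."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A hand-made vertical-edge detector applied to a tiny 4x4 "image"
# whose left half is dark (0) and right half is bright (1).
image = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 1, 1]], dtype=float)
kernel = np.array([[1, -1],
                   [1, -1]], dtype=float)

feature_map = conv2d(image, kernel)
print(feature_map.shape)  # (3, 3): a 2x2 filter over a 4x4 image
# The middle column of the feature map responds strongly (value -2.0),
# flagging the vertical edge between the dark and bright halves.
```

The real network, of course, does not use hand-made filters: it learns thousands of them from data during training, which is why GPUs mattered so much for AlexNet.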

Like viewing original ENIAC circuitry or plans for Babbage’s Difference Engine, examining the AlexNet code may provide future historians insight into how a relatively simple implementation sparked a technology that has reshaped our world. While deep learning has enabled advances in health care, scientific research, and accessibility tools, it has also facilitated concerning developments like deepfakes, automated surveillance, and the potential for widespread job displacement.

But in 2012, those negative consequences still felt like far-off sci-fi dreams to many. Instead, experts were simply amazed that a computer could finally recognize images with near-human accuracy.

Teaching computers to see

As the CHM explains in its detailed blog post, AlexNet originated from the work of University of Toronto graduate students Alex Krizhevsky and Ilya Sutskever, along with their advisor Geoffrey Hinton. The project proved that deep learning could outperform traditional computer vision methods.

The neural network won the 2012 ImageNet competition by recognizing objects in photos far better than any previous method. Computer vision veteran Yann LeCun, who attended the presentation in Florence, Italy, immediately recognized its importance for the field, reportedly standing up after the presentation and calling AlexNet “an unequivocal turning point in the history of computer vision.” As Ars detailed in November, AlexNet marked the convergence of three critical technologies that would define modern AI.


More on Various AI Action Plans

Last week I covered Anthropic’s relatively strong submission, and OpenAI’s toxic submission. This week I cover several other submissions, and do some follow-up on OpenAI’s entry.

The most prominent remaining lab is Google. Google focuses on AI’s upside. The vibes aren’t great, but they’re not toxic. The key asks for their ‘pro-innovation’ approach are:

  1. Coordinated policy at all levels for transmission, energy and permitting. Yes.

  2. ‘Balanced’ export controls, meaning scale back the restrictions a bit on cloud compute in particular and actually execute properly, but full details TBD, they plan to offer their final asks here by May 15. I’m willing to listen.

  3. ‘Continued’ funding for AI R&D, public-private partnerships. Release government data sets, give startups cash, and bankroll our CBRN-risk research. Ok I guess?

  4. ‘Pro-innovation federal policy frameworks’ that preempt the states, in particular ‘state-level laws that affect frontier models.’ Again, a request for a total free pass.

  5. ‘Balanced’ copyright law meaning full access to anything they want, ‘without impacting rights holders.’ The rights holders don’t see it that way. Google’s wording here opens the possibility of compensation, and doesn’t threaten that we would lose to China if they don’t get their way, so there’s that.

  6. ‘Balanced privacy laws that recognize exemptions for publicly available information will avoid inadvertent conflicts with AI or copyright standards, or other impediments to the development of AI systems.’ They do still want to protect ‘personally identifying data’ and protect it from ‘malicious actors’ (are they here in the room with us right now?) but mostly they want a pass here too.

  7. Expedited review of the validity of AI-related patents upon request. Bad vibes around the way they are selling it, but the core idea seems good, this seems like a case where someone is actually trying to solve real problems. I approve.

  8. ‘Emphasize focused, sector-specific, and risk-based AI governance and standards.’ Et tu, Google? You are going to go with this use-based regulatory nightmare? I would have thought Google would be better than trying to invoke the nightmare of distinct rules for every different application, which does not deal with the real dangers but does cause giant pains in the ass.

  9. A call for ‘workforce development’ programs, which as I noted for OpenAI are usually well-intentioned and almost always massive boondoggles. Incorporating AI into K-12 education is of course vital but don’t make a Federal case out of it.

  10. Federal government adaptation of AI, including in security and cybersecurity. This is necessary and a lot of the details here seem quite good.

  11. ‘Championing market-driven and widely adopted technical standards and security protocols for frontier models, building on the Commerce Department’s leading role with the International Organization for Standardization’ and ‘Working with industry and aligned countries to develop tailored protocols and standards to identify and address potential national security risks of frontier AI systems.’ They are treating a few catastrophic risks (CBRN in particular) as real, although the document neglects to mention anything beyond that. They want clear indications of who is responsible for what and clear standards to meet, which seems fair. They also want full immunity for ‘misuse’ by customers or end users, which seems far less fair when presented in this kind of absolute way. I’m fine with letting users shoot themselves in the foot but this goes well beyond that.

  12. Ensuring American AI has access to foreign markets via trade agreements. Essentially, make sure no one else tries to regulate anything or stop us from dying, either.

This is mostly Ordinary Decent Corporate Lobbying. Some of it is good and benefits from their expertise, some is not so good, some is attempting regulatory capture, same as it ever was.

The problem is that AI poses existential risks and is going to transform our entire way of life even if things go well, and Google is suggesting strategies that don’t take any of that into account at all. So I would say that overall, I am modestly disappointed, but not making any major updates.

It is a tragedy that Google makes very good AI models, then cripples them by being overly restrictive in places where there is no harm, in ways that only hurt Google’s reputation, while being mostly unhelpful around the actually important existential risks. It doesn’t have to be this way, but I see no signs that Demis can steer the ship on these fronts and make things change.

John Pressman has a follow-up thread explaining why he thought OpenAI’s submission exceeded his expectations. I can understand why one could have expected something worse than what we got, and he asks good questions about the relationship between various parts of OpenAI – a classic mistake is not realizing that companies are made of individuals and those individuals are often at cross-purposes. I do think this is the best steelman I’ve seen, so I’ll quote it at length.

John Pressman: It’s more like “well the entire Trump administration seems to be based on vice signaling so”.

Do I like the framing? No. But concretely it basically seems to say “if we want to beat China we should beef up our export controls *on China*, stop signaling to our allies that we plan to subjugate them, and build more datacenters” which is broad strokes Correct?

“We should be working to convince our allies to use AI to advance Western democratic values instead of an authoritarian vision from the CCP” isn’t the worst thing you could say to a group of vice signaling jingoists who basically demand similar from petitioners.

… [hold this thought]

More important than what the OpenAI comment says is what it doesn’t say: How exactly we should be handling “recipe for ruin” type scenarios, let alone rogue superintelligent reinforcement learners. Lehane seems happy to let these leave the narrative.

I mostly agree with *what is there*, I’m not sure I mostly agree with what’s not there so to speak. Even the China stuff is like…yeah fearmongering about DeepSeek is lame, on the other hand it is genuinely the case that the CCP is a scary institution that likes coercing people.

The more interesting thing is that it’s not clear to me what Lehane is saying is even in agreement with the other stated positions/staff consensus of OpenAI. I’d really like to know what’s going on here org chart wise.

Thinking about it further it’s less that I would give OpenAI’s comment a 4/5 (let alone a 5/5), and more like I was expecting a 1/5 or 0/5 and instead read something more like 3/5: Thoroughly mediocre but technically satisfies the prompt. Not exactly a ringing endorsement.

We agree about what is missing. There are two disagreements about what is there.

The potential concrete disagreement is over OpenAI’s concrete asks, which I think are self-interested overreaches in several places. It’s not clear to what extent he sees them as overreaches versus being justified underneath the rhetoric.

The other disagreement is over the vice signaling. He is saying (as I understand it) that the assignment was to vice signal, of course you have to vice signal, so you can’t dock them for vice signaling. And my response is a combination of ‘no, it still counts as vice signaling, you still pay the price and you still don’t do it’ and also ‘maybe you had to do some amount of vice signaling but MY LORD NOT LIKE THAT.’ OpenAI sent a strong, costly and credible vice signal and that is important evidence to notice and also the act of sending it changes them.

By contrast: Google’s submission is what you’d expect from someone who ‘understood the assignment’ and wasn’t trying to be especially virtuous, but was not Obviously Evil. Anthropic’s reaction is someone trying to do better than that while strategically biting their tongue, and of course MIRI’s would be someone politely not doing that.

I think this is related to the statement I skipped over, which was directed at me, and I’ll include my response from the thread, and I want to be clear I think John is doing his best and saying what he actually believes here and I don’t mean to single him out but this is a persistent pattern that I think causes a lot of damage:

John Pressman: Anyway given you think that we’re all going to die basically, it’s not like you get to say “that person over there is very biased but I am a neutral observer”, any adherence to the truth on your part in this situation would be like telling the axe murderer where the victim is.

Zvi Mowshowitz: I don’t know how to engage with your repeated claims that people who believe [X] would obviously then do [Y], no matter the track record of [~Y] and advocacy of [~Y] and explanation of [~Y] and why [Y] would not help with the consequences of [X].

This particular [Y] is lying, but there have been other values of [Y] as well. And, well, seriously, WTF am I supposed to do with that, I don’t know how to send or explain costlier signals than are already being sent.

I don’t really have an ask, I just want to flag how insanely frustrating this is and that it de facto makes it impossible to engage and that’s sad because it’s clear you have unique insights into some things, whereas if I was as you assume I am I wouldn’t have quoted you at all.

I think this actually is related to one of our two disagreements about the OP from OpenAI – you think that vice signaling to those who demand vice signaling is good because it works, and I’m saying no, you still don’t do it, and if you do then that’s still who you are.

The other values of [Y] he has asserted, in other places, have included a wide range of both [thing that would never work and is also pretty horrible] and [preference that John thinks follows from [X] but where we strongly think the opposite and have repeatedly told him and others this and explained why].

And again, I’m laying this out because he’s not alone. I believe he’s doing it in unusually good faith and is mistaken, whereas mostly this class of statement is rolled out as a very disingenuous rhetorical attack.

The short version of why the various non-virtuous [Y] strategies wouldn’t work is:

  1. The FDT or virtue ethics answer. The problems are complicated on all levels. The type of person who would [Y] in pursuit of [~X] can’t even figure out to expect [X] to happen by default, let alone think well enough to figure out what [Z] to pursue (via [Y] or [~Y]), in order to accomplish [~X]. The whole rationality movement was created exactly because if you can’t think well in general and have very high epistemic standards, you can’t think well about AI, either, and you need to do that.

  2. The CDT or utilitarian answer. Even if you knew the [Z] to aim for, this is an iterated, complicated social game, where we need to make what to many key decision makers look like extraordinary claims, and ask for actions to be taken based on chains of logic, without waiting for things to blow up in everyone’s face first and muddling through afterwards, like humanity normally does it. Employing various [Y] to those ends, even a little, let alone on the level of say politicians, will inevitably and predictably backfire. And indeed, in those few cases where someone notably broke this rule, it did massively backfire.

Is it possible that at some point in the future, we will have a one-shot situation actually akin to Kant’s ax murderer, where we know exactly the one thing that matters most and a deceptive path to it, and then have a more interesting question? Indeed do many things come to pass. But that is at least quite a ways off, and my hope is to be the type of person who would still try very hard not to pull that trigger.

The even shorter version is:

  1. The type of person who can think well enough to realize to do it, won’t do it.

  2. Even if you did it anyway, it wouldn’t work, and we realize this.

Here is the other notable defense of OpenAI, which is to notice what John was pointing to, that OpenAI contains multitudes.

Shakeel: I really, really struggle to see how OpenAI’s suggestions to the White House on AI policy are at all compatible with the company recently saying that “our models are on the cusp of being able to meaningfully help novices create known biological threats”.

Just an utterly shameful document. Lots of OpenAI employees still follow me; I’d love to know how you feel about your colleagues telling the government that this is all that needs to be done! (My DMs are open.)

Roon: the document mentions CBRN risk. openai has to do the hard work of actually dealing with the White House and figuring out whatever the hell they’re going to be receptive to

Shakeel: I think you are being way too charitable here — it’s notable that Google and Anthropic both made much more significant suggestions. Based on everything I’ve heard/seen, I think your policy team (Lehane in particular) just have very different views and aims to you!

“maybe the biggest risk is missing out”? Cmon.

Lehane (OpenAI, in charge of the document): Maybe the biggest risk here is actually missing out on the opportunity. There was a pretty significant vibe shift when people became more aware and educated on this technology and what it means.

Roon: yeah that’s possible.

Richard Ngo: honestly I think “different views” is actually a bit too charitable. the default for people who self-select into PR-type work is to optimize for influence without even trying to have consistent object-level beliefs (especially about big “sci-fi” topics like AGI)

You can imagine how the creatives reacted to proposals to invalidate copyright without any sign of compensation.

Chris Morris (Fast Company): A who’s who of musicians, actors, directors, and more have teamed up to sound the alarm as AI leaders including OpenAI and Google argue that they shouldn’t have to pay copyright holders for AI training material.

Included among the prominent signatures on the letter were Paul McCartney, Cynthia Erivo, Cate Blanchett, Phoebe Waller-Bridge, Bette Midler, Paul Simon, Ben Stiller, Aubrey Plaza, Ron Howard, Taika Waititi, Ayo Edebiri, Joseph Gordon-Levitt, Janelle Monáe, Rian Johnson, Paul Giamatti, Maggie Gyllenhaal, Alfonso Cuarón, Olivia Wilde, Judd Apatow, Chris Rock, and Mark Ruffalo.

“It is clear that Google . . . and OpenAI . . . are arguing for a special government exemption so they can freely exploit America’s creative and knowledge industries, despite their substantial revenues and available funds.”

No surprises there. If anything, that was unexpectedly polite.

I would perhaps be slightly concerned about pissing off the people most responsible for the world’s creative content (and especially Aubrey Plaza), but hey. That’s just me.

I’ve definitely been curious where these folks would land. Could have gone either way.

I am once again disappointed to see the framing as Americans versus authoritarians, although in a calm and sane fashion. They do call for investment in ‘reliability and security’ but only because they recognize, and on the basis of, the fact that reliability and security are (necessary for) capability. Which is fine to the extent it gets the job done, I suppose. But the complete failure to consider existential or catastrophic risks, other than authoritarianism, is deeply disappointing.

They offer six areas of focus.

  1. Making it easier to build AI data centers and associated energy infrastructure. Essentially everyone agrees on this, it’s a question of execution, they offer details.

  2. Supporting American open-source AI leadership. They open this section with ‘some models… will need to be kept secure from adversaries.’ So there’s that, in theory we could all be on the same page on this, if more of the advocates of open models could also stop being anarchists and face physical reality. The IFP argument for why it must be America that ‘dominates open source AI’ is the danger of backdoors, but yes it is rather impossible to get an enduring ‘lead’ in open models because all your open models are, well, open. They admit this is rather tricky.

    1. The first basic policy suggestion here is to help American open models git gud via reliability, but how is that something the government can help with?

    2. They throw out the idea of prizes for American open models, but again I notice I am puzzled by how exactly this would supposedly work out.

    3. They want to host American open models on NAIRR, so essentially offering subsidized compute to the ‘little guy’? I pretty much roll my eyes, but shrug.

  3. Launch R&D moonshots to solve AI reliability and security. I strongly agree that it would be good if we could indeed do this in even a modestly reasonable way, as in a fraction of the money turns into useful marginal spending. Ambitious investments in hardware security, a moonshot for AI-driven formally verified software and a ‘grand challenge’ for interpretability, would be highly welcome, as would a pilot for a highly secure data center. Of course, the AI labs are massively underinvesting in this even purely from a selfish perspective.

  4. Build state capacity to evaluate the national security capabilities and implications of US and adversary models. This is important. I think their recommendation on AISI is making a tactical error. It is emphasizing the dangers of AISI following things like the ‘risk management framework’ and thus playing into the hands of those who would dismantle AISI, which I know is not what they want. AISI is already focused on what IFP is referring to as ‘security risks’ combined with potential existential dangers, and emphasizing that is what is most important. AISI is under threat mostly because MAGA people, and Cruz in particular, are under the impression that it is something that it is not.

  5. Attracting and retaining superstar AI talent. Absolutely. They mention EB-1A, EB-2 and O-3, which I hadn’t considered. Such asks are tricky because obviously we should be allowing as much high skill immigration as we can across the board, especially from our rivals, except you’re pitching the Trump Administration.

  6. Improving export control policies and enforcement capacity. They suggest making export exceptions for chips with proper security features that guard against smuggling and misuse. Sounds great to me if implemented well. And they also want to control high-performance inference chips and properly fund BIS, again I don’t have any problem with that.

Going item by item, I don’t agree with everything and think there are some tactical mistakes, but that’s a pretty good list. I see what IFP is presumably trying to do, to sneak useful-for-existential-risk proposals in because they would be good ideas anyway, without mentioning the additional benefits. I totally get that, and my own write-up did a bunch in this direction too, so I get it even if I think they took it too far.

This was a frustrating exercise for everyone writing suggestions. Everyone had to balance between saying what needs to be said, versus saying it in a way that would cause the administration to listen.

How everyone responded to that challenge tells you a lot about who they are.


CEO of AI ad-tech firm pledging “world free of fraud” sentenced for fraud

In May 2024, the website of ad-tech firm Kubient touted that the company was “a perfect blend” of ad veterans and developers, “committed to solving the growing problem of fraud” in digital ads. Like many corporate sites, it also linked old blog posts from its home page, including a May 2022 post on “How to create a world free of fraud: Kubient’s secret sauce.”

These days, Kubient’s website cannot be reached, the team is no more, and CEO Paul Roberts is due to serve one year and one day in prison, having pled guilty Thursday to creating his own small world of fraud. Roberts, according to federal prosecutors, schemed to create $1.3 million in fraudulent revenue statements to bolster Kubient’s initial public offering (IPO) and significantly oversold “KAI,” Kubient’s artificial intelligence tool.

The core of the case is an I-pay-you, you-pay-me gambit that Roberts initiated with an unnamed “Company-1,” according to prosecutors. Kubient and this firm would each bill the other for nearly identical amounts, with Kubient purportedly deploying KAI to find instances of ad fraud in the other company’s ad spend.

Roberts, prosecutors said, “directed Kubient employees to generate fake KAI reports based on made-up metrics and no underlying data at all.” These fake reports helped sell the story to independent auditors and book the synthetic revenue in financial statements, according to Roberts’ indictment.

CEO of AI ad-tech firm pledging “world free of fraud” sentenced for fraud Read More »

how-the-language-of-job-postings-can-attract-rule-bending-narcissists

How the language of job postings can attract rule-bending narcissists

Why it matters

Companies write job postings carefully in hopes of attracting the ideal candidate. However, they may unknowingly attract and select narcissistic candidates whose goals and ethics might not align with a company’s values or long-term success. Research shows that narcissistic employees are more likely to behave unethically, potentially leading to legal consequences.

While narcissistic traits can lead to negative outcomes, we aren’t saying that companies should avoid attracting narcissistic applicants altogether. Consider a company hiring a salesperson. A firm can benefit from a salesperson who is persuasive, who “thinks outside the box,” and who is “results-oriented.” In contrast, a company hiring an accountant or compliance officer would likely benefit from someone who “thinks methodically” and “communicates in a straightforward and accurate manner.”

Bending the rules is of particular concern in accounting. A significant amount of research examines how accounting managers sometimes bend rules or massage the numbers to achieve earnings targets. This “earnings management” can misrepresent the company’s true financial position.

In fact, my co-author Nick Seybert is currently working on a paper whose data suggests that rule-bender language in accounting job postings predicts rule-bending in financial reporting.

Our current findings shed light on the importance of carefully crafting job posting language. Recruiting professionals may instinctively use rule-bender language to try to attract someone who seems like a good fit. If companies are concerned about hiring narcissists, they may want to clearly communicate their ethical values and needs while crafting a job posting, or avoid rule-bender language entirely.

What still isn’t known

While we find that professional recruiters are using language that attracts narcissists, it is unclear whether this is intentional.

Additionally, we are unsure what really drives rule-bending in a company. Rule-bending could happen due to attracting and hiring more narcissistic candidates, or it could be because of a company’s culture—or a combination of both.

The Research Brief is a short take on interesting academic work.

Jonathan Gay is Assistant Professor of Accountancy at the University of Mississippi.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

How the language of job postings can attract rule-bending narcissists Read More »

cloudflare-turns-ai-against-itself-with-endless-maze-of-irrelevant-facts

Cloudflare turns AI against itself with endless maze of irrelevant facts

On Wednesday, web infrastructure provider Cloudflare announced a new feature called “AI Labyrinth” that aims to combat unauthorized AI data scraping by serving fake AI-generated content to bots. The tool will attempt to thwart AI companies that crawl websites without permission to collect training data for large language models that power AI assistants like ChatGPT.

Cloudflare, founded in 2009, is probably best known as a company that provides infrastructure and security services for websites, particularly protection against distributed denial-of-service (DDoS) attacks and other malicious traffic.

Instead of simply blocking bots, Cloudflare’s new system lures them into a “maze” of realistic-looking but irrelevant pages, wasting the crawler’s computing resources. The approach is a notable shift from the standard block-and-defend strategy used by most website protection services. Cloudflare says blocking bots sometimes backfires because it alerts the crawler’s operators that they’ve been detected.

“When we detect unauthorized crawling, rather than blocking the request, we will link to a series of AI-generated pages that are convincing enough to entice a crawler to traverse them,” writes Cloudflare. “But while real looking, this content is not actually the content of the site we are protecting, so the crawler wastes time and resources.”

The company says the content served to bots is deliberately irrelevant to the website being crawled, but it is carefully sourced or generated using real scientific facts—such as neutral information about biology, physics, or mathematics—to avoid spreading misinformation (whether this approach effectively prevents misinformation, however, remains unproven). Cloudflare creates this content using its Workers AI service, a commercial platform that runs AI tasks.

Cloudflare designed the trap pages and links to remain invisible and inaccessible to regular visitors, so people browsing the web don’t run into them by accident.

A smarter honeypot

AI Labyrinth functions as what Cloudflare calls a “next-generation honeypot.” Traditional honeypots are invisible links that human visitors can’t see but bots parsing HTML code might follow. But Cloudflare says modern bots have become adept at spotting these simple traps, necessitating more sophisticated deception. The false links contain appropriate meta directives to prevent search engine indexing while remaining attractive to data-scraping bots.
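As a rough illustration of the traditional honeypot pattern and the meta directives described above (this is a hypothetical sketch, not Cloudflare’s actual implementation; all names are made up), the trap consists of a robots directive that tells compliant search engines to skip the page, plus a link hidden from human visitors but still present in the raw HTML that scrapers parse:

```python
# Hypothetical sketch of a traditional honeypot link -- not Cloudflare's code.
# The meta tag keeps compliant search engines from indexing the trap page;
# the link itself is invisible to humans (CSS + ARIA) but sits in the HTML
# that a data-scraping bot will parse and follow.

HONEYPOT_PAGE_META = '<meta name="robots" content="noindex, nofollow">'

def honeypot_link(href: str) -> str:
    """Return an anchor tag hidden from people but visible in raw HTML."""
    return (
        f'<a href="{href}" rel="nofollow" '
        'style="display:none" aria-hidden="true" tabindex="-1">'
        "related research</a>"
    )

link = honeypot_link("/labyrinth/entry-1")
```

A crawler that ignores robots directives and follows every anchor in the markup will walk into the maze; a compliant search engine, seeing `noindex, nofollow`, will not.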

Cloudflare turns AI against itself with endless maze of irrelevant facts Read More »

california-bill-would-force-isps-to-offer-100mbps-plans-for-$15-a-month

California bill would force ISPs to offer 100Mbps plans for $15 a month

Several states consider price requirements

While the California proposal will face opposition from ISPs and is not guaranteed to become law, the amended bill has higher speed requirements for the $15 plan than the existing New York law that inspired it. The New York law lets ISPs comply either by offering $15 broadband plans with download speeds of at least 25Mbps, or $20-per-month service with 200Mbps speeds. The New York law doesn’t specify minimum upload speeds.

AT&T stopped offering its 5G home Internet service in New York entirely instead of complying with the law. But AT&T wouldn’t be able to pull home Internet service out of California so easily because it offers DSL and fiber Internet in the state, and it is still classified as a carrier of last resort for landline phone service.

The California bill says ISPs must file annual reports starting January 1, 2027, to describe their affordable plans and specify the number of households that purchased the service and the number of households that were rejected based on eligibility verification. The bill seems to assume that ISPs will offer the plans before 2027 but doesn’t specify an earlier date. Boerner’s office told us the rule would take effect on January 1, 2026. Boerner’s office is also working on an exemption for small ISPs, but hasn’t settled on final details.

Meanwhile, a Massachusetts bill proposes requiring that ISPs provide at least 100Mbps speeds for $15 a month or 200Mbps for $20 a month. A Vermont bill would require 25Mbps speeds for $15 a month or 200Mbps for $20 a month.
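To make the proposals above easier to compare, here is a back-of-the-envelope calculation of dollars per Mbps of download speed for each minimum plan, using the figures as described in the bills (upload speeds are excluded, since the New York law sets no upload minimum):

```python
# Back-of-the-envelope comparison using the plan figures reported above:
# (monthly price in dollars, minimum download speed in Mbps)
plans = {
    "NY ($15, 25Mbps)":  (15, 25),
    "NY ($20, 200Mbps)": (20, 200),
    "CA ($15, 100Mbps)": (15, 100),
    "MA ($15, 100Mbps)": (15, 100),
    "VT ($15, 25Mbps)":  (15, 25),
}

# Dollars per Mbps of download speed for each minimum plan.
per_mbps = {name: price / mbps for name, (price, mbps) in plans.items()}
```

By this measure, New York’s $20/200Mbps tier is the cheapest per megabit, while the California and Massachusetts $15/100Mbps requirements are notably stricter than New York’s $15/25Mbps floor.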

Telco groups told the Supreme Court last year that the New York law “will likely lead to more rate regulation absent the Court’s intervention” as other states will copy New York. They subsequently claimed that AT&T’s New York exit proves the law is having a negative effect. But the Supreme Court twice declined to hear the industry challenge, allowing New York to enforce the law.

California bill would force ISPs to offer 100Mbps plans for $15 a month Read More »

fcc-chairman-brendan-carr-starts-granting-telecom-lobby’s-wish-list

FCC Chairman Brendan Carr starts granting telecom lobby’s wish list

In July 2024, AT&T became the first carrier to apply for a technology transition discontinuance “under the Adequate Replacement Test relying on the applicant’s own replacement service,” the order said. “AT&T indicated in this application that it was relying on a totality of the circumstances showing to establish the adequacy of its replacement service, but also committed to the performance testing methodology and parameters established in the 2016 Technology Transitions Order Technical Appendix.” This “delay[ed] the filing of its discontinuance application for several months,” the FCC said.

Harold Feld, senior VP of consumer advocacy group Public Knowledge, said the FCC clarification that carriers don’t need to perform testing, “combined with elimination of most of the remaining notice requirements, means that you don’t have to worry about actually proving anything. Just say ‘totality of the circumstances’ and by the time anyone who cares finds out, the application will be granted.”

“The one positive thing is that some states (such as California) still have carrier of last resort rules to protect consumers,” Feld told Ars. “In some states, at least, consumers will not suddenly find themselves cut off from 911 or other important services.”

Telco lobby loves FCC moves

The bureau separately approved a petition for a waiver filed last month by USTelecom, a lobby group that represents telcos such as AT&T, Verizon, and CenturyLink (aka Lumen). The group sought a waiver of a requirement that replacement voice services be offered on a stand-alone basis instead of only in a bundle with broadband.

While bundles cost more than single services for consumers who only want phone access, USTelecom said that “inefficiencies of offering stand-alone voice can raise costs for consumers and reduce capital available for investment and innovation.”

The FCC said granting the waiver will allow providers “to retire copper networks, not only in cases where replacement voice services are available on a stand-alone basis, but in cases where those services are available on a bundled basis.” The waiver is approved for two years and can be extended.

USTelecom President and CEO Jonathan Spalter praised the FCC actions in a statement. “Broadband providers appreciate Chairman Carr’s laser focus on cutting through red tape and outdated mindsets to accelerate the work of connecting all Americans,” Spalter said.

Just like Carr’s statement, Spalter did not use the word “fiber” when discussing replacements for copper service. He said vaguely that “today’s decision marks a significant step forward in transitioning outdated copper telephone lines to next-generation networks that better meet the needs of American consumers,” and “will help turbocharge investment in advanced broadband infrastructure, sustain and grow a skilled broadband workforce, bring countless new choices and services to more families and communities, and fuel our innovation economy.”

FCC Chairman Brendan Carr starts granting telecom lobby’s wish list Read More »

apple-loses-$1b-a-year-on-prestigious,-minimally-viewed-apple-tv+:-report

Apple loses $1B a year on prestigious, minimally viewed Apple TV+: report

The Apple TV+ streaming service “is losing more than $1 billion annually,” according to The Information today.

The report also claimed that Apple TV+’s subscriber count reached “around 45 million” in 2024, citing two anonymous sources.

Ars reached out to Apple for comment on the accuracy of The Information’s report and will update this article if we hear back.

According to one of the sources, Apple TV+ has typically spent over $5 billion annually on content since the service debuted in 2019. Last year, though, Apple CEO Tim Cook reportedly cut the budget by about $500 million. The reported numbers are similar to a July report from Bloomberg that claimed that Apple had spent over $20 billion on Apple TV+’s library. For comparison, Netflix has 301.63 million subscribers and expects to spend $18 billion on content in 2025.
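Taking the rounded figures above at face value, a back-of-the-envelope calculation puts the two services’ content spend per subscriber in perspective (these are rough estimates from the reported numbers, not audited figures):

```python
# Rough per-subscriber content spend, using the reported figures above.
apple_spend = 4.5e9        # ~$5B annual budget minus the reported ~$500M cut
apple_subs = 45e6          # "around 45 million" subscribers in 2024
netflix_spend = 18e9       # Netflix's expected 2025 content spend
netflix_subs = 301.63e6    # Netflix's reported subscriber count

apple_per_sub = apple_spend / apple_subs        # about $100 per subscriber
netflix_per_sub = netflix_spend / netflix_subs  # about $60 per subscriber
```

By this rough measure, Apple spends substantially more per subscriber than Netflix while reaching a fraction of the audience, which is consistent with the reported $1 billion annual loss.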

In the year preceding Apple TV+’s debut, Apple services chief Eddy Cue reportedly pushed back on executive requests to be stingier with content spending, “a person with direct knowledge of the matter” told The Information.

But Cook started paying closer attention to Apple TV+’s spending after the 2022 Oscars, where the Apple TV+ original CODA won Best Picture. The award signaled the significance of Apple TV+ as a business.

Per The Information, spending related to Apple TV+ previously included lavish perks for actors and producers. Apple paid “hundreds of thousands of dollars per flight” to transport Apple TV+ actors and producers to promotional events, The Information said, noting that such spending “is common in Hollywood” but “more unusual at Apple.” Apple’s finance department reportedly pushed Apple TV+ executives to find better flight deals sometime around 2023.

In 2024, Cook questioned big-budget Apple TV+ films, like the $200 million Argylle, which he said failed to generate impressive subscriber boosts or viewership, an anonymous “former Apple TV+ employee” shared. Cook reportedly cut about $500 million from the Apple TV+ content budget in 2024.

Apple loses $1B a year on prestigious, minimally viewed Apple TV+: report Read More »