Author name: Shannon Garcia


Google launches “Gemini Business” AI, adds $20 to the $6 Workspace bill

$6 for apps like Gmail and Docs, and $20 for an AI bot?

Google’s AI add-on more than triples the usual Workspace bill.


Google went ahead with plans to launch Gemini for Workspace today. The big news is the pricing: the Workspace pricing page has been redone, with every plan now offering a “Gemini add-on.” Google’s old AI-for-business plan, “Duet AI for Google Workspace,” is dead, though it never really launched anyway.

Google has a blog post explaining the changes. Google Workspace starts at $6 per user per month for the “Starter” package, and the AI “Add-on,” as Google is calling it, is an extra $20 per user per month (all of these prices require an annual commitment). That is a massive increase over the normal Workspace bill, but AI processing is expensive. Google says this business package will get you “Help me write in Docs and Gmail, Enhanced Smart Fill in Sheets and image generation in Slides.” It also includes the “1.0 Ultra” model for the Gemini chatbot—there’s a full feature list here. The $20 plan caps usage of Gemini AI features at “1,000 times per month.”
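As a quick back-of-envelope check of those numbers (the per-user prices come from Google's announcement; the script is just illustrative arithmetic):

```python
# Per-user monthly prices described above (annual commitment required).
STARTER = 6.00        # Workspace "Starter" package
GEMINI_ADDON = 20.00  # Gemini "Add-on"

total = STARTER + GEMINI_ADDON
multiple = total / STARTER

print(f"${total:.2f}/user/month, {multiple:.1f}x the base Starter bill")
print(f"${total * 12:.2f}/user/year under the annual commitment")
```

The add-on alone is more than 3x the Starter price, and the combined bill is over 4x what a Starter customer was paying before.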

The new Workspace pricing page, with a “Gemini Add-On” for every plan. (Image: Google)

Gemini for Google Workspace represents a total rebrand of the AI business product and some amount of consistency across Google’s hard-to-follow, constantly changing AI branding. Duet AI never really launched to the general public. The product, announced in August, only ever had a “Try” link that led to a survey, and after filling it out, Google would presumably contact some businesses and allow them to pay for Duet AI. Gemini Business now has a checkout page, and any Workspace business customer can buy the product today with just a few clicks.

Google’s second plan is “Gemini Enterprise,” which doesn’t come with any usage limits, but it’s also only available through a “contact us” link and not a normal checkout procedure. Enterprise is $30 per user per month, and it “includes additional capabilities for AI-powered meetings, where Gemini can translate closed captions in more than 100 language pairs, and soon even take meeting notes.”



Google goes “open AI” with Gemma, a free, open-weights chatbot family

Free hallucinations for all

Gemma chatbots can run locally, and they reportedly outperform Meta’s Llama 2.

The Google Gemma logo

On Wednesday, Google announced a new family of AI language models called Gemma, which are free, open-weights models built on technology similar to the more powerful but closed Gemini models. Unlike Gemini, Gemma models can run locally on a desktop or laptop computer. It’s Google’s first significant open large language model (LLM) release since OpenAI’s ChatGPT started a frenzy for AI chatbots in 2022.

Gemma models come in two sizes: Gemma 2B (2 billion parameters) and Gemma 7B (7 billion parameters), each available in pre-trained and instruction-tuned variants. In AI, parameters are values in a neural network that determine AI model behavior, and weights are a subset of these parameters stored in a file.
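To make the parameters-versus-weights distinction concrete, here is a toy sketch (this is not Gemma itself, just a single made-up dense layer): the parameters are the numeric values in the layer, and an "open-weights" release simply publishes those arrays as files.

```python
import numpy as np

# A tiny dense layer mapping 7 inputs to 3 outputs.
rng = np.random.default_rng(0)
W = rng.standard_normal((7, 3))  # weight matrix
b = np.zeros(3)                  # bias vector

# The "2B" and "7B" in Gemma's names are this same count,
# at a vastly larger scale.
n_params = W.size + b.size
print(n_params)  # 24
```

Releasing "open weights" means publishing arrays like `W` and `b` (serialized to disk) so anyone can run the model locally, without necessarily open-sourcing the training code or data.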

Developed by Google DeepMind and other Google AI teams, Gemma pulls from techniques learned during the development of Gemini, which is the family name for Google’s most capable (public-facing) commercial LLMs, including the ones that power its Gemini AI assistant. Google says the name comes from the Latin gemma, which means “precious stone.”

While Gemma is Google’s first major open LLM since the launch of ChatGPT (it has released smaller research models such as FLAN-T5 in the past), it’s not Google’s first contribution to open AI research. The company cites the development of the Transformer architecture, as well as releases like TensorFlow, BERT, T5, and JAX as key contributions, and it would not be controversial to say that those have been important to the field.

A chart of Gemma performance provided by Google. Google says that Gemma outperforms Meta’s Llama 2 on several benchmarks.

Owing to lesser capability and high confabulation rates, smaller open-weights LLMs have been more like tech demos until recently, as some larger ones have begun to match GPT-3.5 performance levels. Still, experts see source-available and open-weights AI models as essential steps in ensuring transparency and privacy in chatbots. Google Gemma is not “open source,” however, since that term usually refers to a specific type of software license with few restrictions attached.

In reality, Gemma feels like a conspicuous play to match Meta, which has made a big deal out of releasing open-weights models (such as LLaMA and Llama 2) since February of last year. That approach stands in contrast to AI models like OpenAI’s GPT-4 Turbo, which is only available through the ChatGPT application and a cloud API and cannot be run locally. A Reuters report on Gemma focuses on the Meta angle and surmises that Google hopes to attract more developers to its Vertex AI cloud platform.

We have not used Gemma yet; however, Google claims the 7B model outperforms Meta’s Llama 2 7B and 13B models on several benchmarks for math, Python code generation, general knowledge, and commonsense reasoning tasks. It’s available today through Kaggle, a machine-learning community platform, and Hugging Face.

In other news, Google paired the Gemma release with a “Responsible Generative AI Toolkit,” which Google hopes will offer guidance and tools for developing what the company calls “safe and responsible” AI applications.



Microsoft confirms which Xbox games are going to Switch, PlayStation

Four fewer reasons to buy an Xbox?

Hi-Fi Rush, Grounded, Pentiment, and Sea of Thieves are going multiplatform.

Four Xbox console exclusives will soon be exclusive no more. (Image: Microsoft)

During a “business update” video podcast last week, Microsoft addressed widespread rumors of Xbox software going multiplatform by saying that four of its legacy titles would be going “to the other consoles” in the future. But the company waited until today to confirm the names of the four soon-to-be-multiplatform titles.

The Xbox games coming to other consoles in the coming months are (multiplatform launch date in parentheses):

  • Pentiment (February 22, Switch, PS4/5): Obsidian’s historical murder mystery has a sprawling narrative that reacts strongly to player choices.
  • Hi-Fi Rush (March 9, PS5): A rhythm-action game from Tango Gameworks (published by Bethesda) where you have to match your attacks and movements to the beat to maximize your impact.
  • Grounded (April 16, Switch, PS4/5): Obsidian’s co-op survival adventure will be fully cross-play compatible across all platforms.
  • Sea of Thieves (April 30, PS5): Despite what we considered a poor first impression, Rare’s pirate-themed multiplayer simulation has attracted 35 million players, according to Microsoft. This title will also be cross-play compatible across platforms.

Microsoft’s announcement comes just after Grounded and Pentiment were announced for Switch as part of the morning’s Nintendo Direct: Partner Showcase video stream, the timing of which likely prevented Microsoft from announcing its plans for those titles last week. There wasn’t a lot of drama to today’s announcement, though; The Verge and independent journalist Stephen Totilo cited anonymous sources in accurately naming all four games just after Microsoft’s presentation last week.

Before that presentation, rumors flying around the Xbox community suggested that major Xbox exclusives like Starfield or Bethesda’s upcoming Indiana Jones and the Great Circle would be coming to other consoles or that Microsoft had plans to leave the console space entirely. And while Microsoft has effectively shot down those rumors, the company has suggested that exclusive games will be a less important part of its console strategy going into the future.

“[I have] a fundamental belief that over the next five or ten years… games that are exclusive to one piece of hardware are going to be a smaller and smaller part of the game industry,” Xbox CEO Phil Spencer said.



Twitter security staff kept firm in compliance by disobeying Musk, FTC says

Close call

Lina Khan: Musk demanded “actions that would have violated the FTC’s Order.”

Elon Musk at the New York Times DealBook Summit on November 29, 2023, in New York City. (Photo: Getty Images | Michael Santiago)

Twitter employees prevented Elon Musk from violating the company’s privacy settlement with the US government, according to Federal Trade Commission Chair Lina Khan.

After Musk bought Twitter in late 2022, he gave Bari Weiss and other journalists access to company documents in the so-called “Twitter Files” incident. The access given to outside individuals raised concerns that Twitter (which is currently named X) violated a 2022 settlement with the FTC, which has requirements designed to prevent repeats of previous security failures.

Some of Twitter’s top privacy and security executives also resigned shortly after Musk’s purchase, citing concerns that Musk’s rapid changes could cause violations of the settlement.

FTC staff deposed former Twitter employees and “learned that the access provided to the third-party individuals turned out to be more limited than the individuals’ tweets and other public reporting had indicated,” Khan wrote in a letter sent today to US Rep. Jim Jordan (R-Ohio). Khan’s letter said the access was limited because employees refused to comply with Musk’s demands:

The deposition testimony revealed that in early December 2022, Elon Musk had reportedly directed staff to grant an outside third-party individual “full access to everything at Twitter… No limits at all.” Consistent with Musk’s direction, the individual was initially assigned a company laptop and internal account, with the intent that the third-party individual be given “elevated privileges” beyond what an average company employee might have.

However, based on a concern that such an arrangement would risk exposing nonpublic user information in potential violation of the FTC’s Order, longtime information security employees at Twitter intervened and implemented safeguards to mitigate the risks. Ultimately the third-party individuals did not receive direct access to Twitter’s systems, but instead worked with other company employees who accessed the systems on the individuals’ behalf.

Khan: FTC “was right to be concerned”

Jordan is chair of the House Judiciary Committee and has criticized the investigation, claiming that “the FTC harassed Twitter in the wake of Mr. Musk’s acquisition.” Khan’s letter to Jordan today argues that the FTC investigation was justified.

“The FTC’s investigation confirmed that staff was right to be concerned, given that Twitter’s new CEO had directed employees to take actions that would have violated the FTC’s Order,” Khan wrote. “Once staff learned that the FTC’s Order had worked to ensure that Twitter employees took appropriate measures to protect consumers’ private information, compliance staff made no further inquiries to Twitter or anyone else concerning this issue.”

Khan also wrote that deep staff cuts following the Musk acquisition, and resignations of Twitter’s top privacy and compliance officials, meant that “there was no one left at the company responsible for interpreting and modifying data policies and practices to ensure Twitter was complying with the FTC’s Order to safeguard Americans’ personal data.” The letter continued:

During staff’s evaluation of the workforce reductions, one of the company’s recently departed lead privacy and security experts testified that Twitter Blue was being implemented too quickly so that the proper “security and privacy review was not conducted in accordance with the company’s process for software development.” Another expert testified that he had concerns about Mr. Musk’s “commitment to overall security and privacy of the organization.” Twitter, meanwhile, filed a motion seeking to eliminate the FTC Order that protected the privacy and security of Americans’ data. Fortunately for Twitter’s millions of users, that effort failed in court.

FTC still trying to depose Musk

While no violation was found in this case, the FTC isn’t done investigating. When contacted by Ars, an FTC spokesperson said the agency cannot rule out bringing lawsuits against Musk’s social network for violations of the settlement or US law.

“When we heard credible public reports of potential violations of protections for Twitter users’ data, we moved swiftly to investigate,” the FTC said in a statement today. “The order remains in place and the FTC continues to deploy the order’s tools to protect Twitter users’ data and ensure the company remains in compliance.”

The FTC also said it is continuing attempts to depose Musk. In July 2023, Musk’s X Corp. asked a federal court for an order that would terminate the settlement and prevent the FTC from deposing Musk. The court denied both requests in November. In a filing, US government lawyers said the FTC investigation had “revealed a chaotic environment at the company that raised serious questions about whether and how Musk and other leaders were ensuring X Corp.’s compliance with the 2022 Administrative Order.”

We contacted X today, but an auto-reply informed us that the company was busy and asked that we check back later.



The top 7 bestselling phone models of 2023 are all iPhones

OK, but spots 8–1,000 are Android phones

Every currently sold iPhone makes the top seven, except the iPhone SE.

The iPhone 14. (Image: Apple)

Counterpoint has a new report on the top-selling phone models of 2023, and for the first time, the seven best-selling models of the year are all iPhones. The report tracks worldwide sales of individual smartphone models, and while hundreds of new phones are released yearly, Counterpoint says this top-10 list represents a whopping 20 percent of the worldwide market.

The top three spots are all iPhone 14 models, with the cheaper base model taking the top spot. The iPhone 15 was released in 2023, but not until September; the iPhone 15 models still rocketed to spots 5, 6, and 7 with only about three months of sales. Sandwiched between the 14 and 15 models at No. 4 is the iPhone 13, the cheapest modern-looking iPhone Apple sells.

Counterpoint’s 2023 smartphone chart. (Image: Counterpoint)

The actual cheapest iPhone, the iPhone SE, didn’t make the list this year. The dated design and (maybe?) small size aren’t resonating with consumers, and right now, the rumor mill suggests Apple won’t be making another SE. The 2022 version of this report included the SE, so eight of the top 10 devices were Apple phones, but a Samsung phone crept in at spot No. 4.

Speaking of Samsung, the bottom three phones on the list are all Samsung phones, though probably none that most people have heard of. Samsung has plenty of expensive flagships, like the $1,800 Galaxy Z Fold, but the phones it ships at volume are all budget devices. Spot No. 8 is the $200 Galaxy A14 5G. No. 9 is the very bottom of Samsung’s phone lineup, the $100 Galaxy A04e, and at No. 10 is the Galaxy A14 4G (not 5G), which runs around $160. We’re going by MSRP for these prices, but these phones tend not to sell at MSRP; the cheaper devices are frequently on sale or available as burner phones at a big discount on two-year prepaid plans.

It’s hard for any single Samsung phone to stand out in the market because Samsung releases so many devices. Looking at GSMArena’s database for phones released from 2021–2023, Apple released 13 phones, while Samsung released 89 different models.



A Tale of Two Restaurant Types

While I sort through whatever is happening with GPT-4, today’s scheduled post is two recent short stories about restaurant selection.

Tyler Cowen says that restaurants saying ‘since year 19xx’ are on net a bad sign, because they are frozen in time, focusing on being reliable.

For the best meals, he says look elsewhere, to places that shine brightly and then move on.

I was highly suspicious. So I ran a test.

I checked the oldest places in Manhattan. The list had 15 restaurants. A bunch are taverns, which are not relevant to my interests. The rest include the legendary Katz’s Delicatessen, which is still on the short list of very best available experiences (yes, of course you order the Pastrami), and the famous Keen’s Steakhouse. I don’t care for mutton, but their regular steaks are quite good. There’s also Peter Luger’s and PJ Clarke’s. There were also two less impressive steakhouses. Old Homestead is actively bad, and Delmonico’s was a great experience because we went to The Continental and then to John Wick 3 but is objectively overpriced without being special.

Those are all ‘since 18xx,’ so extreme cases. What about typical cases?

Unfortunately, getting opening date data is tricky. Other lists I found did not actually correspond to when places opened all that well. I wasn’t able to easily test more systematically. Looking at ‘places you like the most’ has obvious bias issues. The one I love most opened in 1978. Others were newer but mostly not that recent. However, I’ve had a lifetime to find them, so a question is, how fast and completely do I evaluate new offerings?

My guess is that:

  1. Most new restaurants are below average, and also rather uninteresting. The average new (non-chain) restaurant is higher quality now than in the past, but it is also less interesting.

  2. Average (mean or median) quality increases with age, at least initially, due to positive selection via survivorship. If a place folds quickly, you usually did not miss very much.

  3. Older places are selected for because they reward repeat business and being a regular. Thus, if you are trying places in your area, you should be sure to try such places, because there could be high value in being that regular. But in your area you should be checking out essentially all plausible options over time. The primary question to ask is, what is the upside of trying this? The best upside is not the best one time experience, it is a place you can add to your rotation that brings something new to the table.

  4. A place that survives will on average become a slightly worse choice over time, as the alternatives improve and it attempts to largely stay the same once it gets the kinks out in its early period.

  5. The places you love, in particular, will get worse for you, in particular, over time, because any change to them or to you will tend to be bad for the match, and also alternatives will improve.

  6. The very best food experiences require novelty of some kind from your perspective, it is true. So there is a certain kind of experience for which you want to try the new, but you have to be in strong exploration mode.

  7. But also very old restaurants often do something unique or uniquely well and have survived because of it, so they can offer a unique experience as well. The most unique things won’t be so old, but on average older things will be more unique.

  8. You can get a better advance read on older places than you can on new places.

  9. In expectation, all else being equal, selection effects dominate, and older has higher EV. This is true even if your priority is ‘your experience today, right now.’

  10. However all else is not equal, and the more additional filtering work you do the more you should end up going to relatively new places.

The good news here is that I strongly think Emmett Shear is centrally wrong.

Nick: I hate how well DoorDash ratings correlate with the restaurants I spent 10 yrs searching out. All the hidden gems I had are 4.9, and the only false positive is Sweet Maple.

Emmett Shear: Yelp has destroyed the joy of exploration and discovery in exchange for efficiency and quality, and I’m not sure it’s a good trade in the end. Yes, I know I could just not look. But knowing it’s there and I could just look makes trying and it turning out meh just feel bad.

Zvi: This is so bizarre to see. Yelp ratings seem awful to me; I use Google Maps instead. But beyond that, it is the ratings that enable exploration to be worthwhile. You learn what is worth exploring! It’s great. Another tactic that you can use that I enjoy sometimes: explore, then once you are physically looking at the place and it looks promising, check online before actually going in, and get ideas on what to order. Best of both worlds.

Sophia: people claim often that the overall quality improvements over decades from yelp making it hard to run a bad restaurant are huge, which seems really good to me.

Emmett Shear later clarified that his theory is that Yelp is good when hipsters dominate the rankings inputs, but poor when tourists do so.

Exploration and discovery is vastly better, easier and more rewarding in the review era than it was in the pre-review era. The joy is higher, not lower. It is your choice how much exploration you still want to do versus exploitation, and how many risky ‘hidden gems’ you want to seek out and test. As I note, one good tactic is to literally walk the streets anyway, see what is available, only then use online to verify.

Also, one can test Nick’s theory that the ratings are actually indicative.

As a baseline, let’s use this market as a source of places that I think were valuable to find, plus anything I pick up along the way that seems like an oversight on that list. Any exploration procedure should place a high priority on finding them. Google Maps ratings will often fail entirely to differentiate these places from other similar places that I like less. The ratings are highly valuable, but they do not let you skip the work, especially for ethnic restaurants, where ‘do they handle delivery and customer service well’ is a huge portion of the rating.

We also want to check false positives. Those are rarer; if you have an exceptional rating you are probably good, but a high number alone does not reliably make you great.

So let’s check. Will DoorDash or Yelp do better? I am doing this hungry.

The average DoorDash rating of the places that were there was 4.64. That is a good rating, but it is not an exceptional one. The default filter is to only show you places at 4.5 or higher. The signal here seems to be the filtering out of places with big issues; it does not seem good at identifying exceptional things. The places the app was suggesting were not differentiable via rating.

What about Yelp? I had it search my area. Of the first 10 hits, there was one legit hit, and multiple places I know are mediocre, but also they are clearly not sorting by rating there. Sorting by highest rating got a bunch of places with a small number of ratings that I did not recognize.

When I looked at the Yelp ratings of my top places, the ratings of the top half of that list (5.0s) averaged 4.0, and the ratings of the bottom half of that list (not 5.0s) also averaged 4.0. There did not seem to be a pattern based on whether tourists would dominate. My model continues to say that Yelp has its finger on the scale, and that is why the ratings are not so useful, but to be clear I do not have proof.
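The halves comparison can be sketched as follows (the ratings here are made-up placeholders, not my actual list):

```python
# Split a personal top list in half by my own ranking, then average
# each half's external (Yelp-style) ratings. These numbers are invented
# for illustration.
top_half = [4.0, 4.5, 3.5, 4.0]     # my 5.0-rated favorites
bottom_half = [4.5, 3.5, 4.0, 4.0]  # the rest of the list

def mean(xs):
    return sum(xs) / len(xs)

# If both halves average the same, the external rating carries no
# signal about which of my favorites are actually best.
print(mean(top_half), mean(bottom_half))  # 4.0 4.0
```

When both halves come out identical, as they did for my real list, the rating tells you nothing about quality within the set of places already worth going to.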

Looking at specific places made it very clear, once again, that Yelp ratings are worthless. They do not even have vague agreement among different outposts of the same chain (Naya) where I have always had entirely undifferentiated experiences.

Certainly none of this constitutes sufficiently good evidence that one can afford to cease exploring. Or, on the flip side, be tempted into forgoing the joys of exploration. You still have to use your wits, learn to read the signs, adjust for your preferences, and then eat around and find out.



Broadcom-owned VMware kills the free version of ESXi virtualization software

freesphere

Software’s free version was a good fit for tinkerers and hobbyists.


Since Broadcom’s $61 billion acquisition of VMware closed in November 2023, Broadcom has been charging ahead with major changes to the company’s personnel and products. In December, Broadcom began laying off thousands of employees and stopped selling perpetually licensed versions of VMware products, pushing its customers toward more stable and lucrative software subscriptions instead. In January, it ended its partner programs, potentially disrupting sales and service for many users of its products.

This week, Broadcom is making a change that is smaller in scale but possibly more relevant for home users of its products: The free version of VMware’s vSphere Hypervisor, also known as ESXi, is being discontinued.

ESXi is what is known as a “bare-metal hypervisor,” lightweight software that runs directly on hardware without requiring a separate operating system layer in between. ESXi allows you to split a PC’s physical resources (CPUs and CPU cores, RAM, storage, networking components, and so on) among multiple virtual machines. ESXi also supports passthrough for PCI, SATA, and USB accessories, allowing guest operating systems direct access to components like graphics cards and hard drives.

The free version of ESXi had limits compared to the full, paid enterprise versions—it could only support up to two physical CPUs, didn’t come with any software support, and lacked automated load-balancing and management features. But it was still useful for enthusiasts and home users who wanted to run multipurpose home servers or to split a system’s time between Windows and one or more Linux distributions without the headaches of dual booting. It was also a useful tool for people who used the enterprise versions of the vSphere Hypervisor but wanted to test the software or learn its ins and outs without dealing with paid licensing.

For the latter group, a 60-day trial of the VMware vSphere 8 software is still available. Tinkerers will be better off trying to migrate to an alternative product instead, like Proxmox, XCP-ng, or even the Hyper-V capabilities built into the Pro versions of Windows 10 and 11.



Scientists found a Stone Age megastructure submerged in the Baltic Sea

They built a wall

“Blinkerwall” may have been a “desert kite,” used to channel and hunt reindeer.

Graphical reconstruction of a Stone Age wall as it may have been used: as a hunting structure in a glacial landscape. (Image: Michał Grabowski)

In 2021, Jacob Geersen, a geophysicist with the Leibniz Institute for Baltic Sea Research in the German port town of Warnemünde, took his students on a training exercise along the Baltic coast. They used a multibeam sonar system to map the seafloor about 6.2 miles (10 kilometers) offshore.  Analyzing the resulting images back in the lab, Geersen noticed a strange structure that did not seem like it would have occurred naturally.

Further investigation led to the conclusion that this was a manmade megastructure built some 11,000 years ago to channel reindeer herds as a hunting strategy. Dubbed the “Blinkerwall,” it’s quite possibly the oldest such megastructure yet discovered, according to a new paper published in the Proceedings of the National Academy of Sciences—although precisely dating these kinds of archaeological structures is notoriously challenging.

As previously reported, during the 1920s, aerial photographs revealed the presence of large kite-shaped stone wall mega-structures in deserts in Asia and the Middle East that most archaeologists believe were used to herd and trap wild animals. More than 6,000 of these “desert kites” have been identified as of 2018, although very few have been excavated. Last year, archaeologists found two stone engravings—one in Jordan, the other in Saudi Arabia—that they believe represent the oldest architectural plans for these desert kites.

However, these kinds of megastructures are almost unknown in Europe, according to Geersen et al., because they simply didn’t survive the ensuing millennia. But the Baltic Sea basins, which incorporate the Bay of Mecklenburg where Geersen made his momentous discovery, are known to harbor a dense population of submerged archaeological sites that are remarkably well-preserved—like the Blinkerwall.

Morphology of the southwest–northeast trending ridge that hosts the Blinkerwall and the adjacent mound. (Image: J. Geersen et al., 2024)

After they first spotted the underwater wall, Geersen enlisted several colleagues to lower a camera down to the structure. The images revealed a neat row of stones forming a wall under 1 meter (3.3 feet) in height. There are 10 large stones weighing several tons, spaced at intervals and connected by more than 1,600 smaller stones (each less than 100 kilograms, or 220 pounds). “Overall, the ten heaviest stones are all located within regions where the stonewall changes its strike direction,” the authors wrote. The length of the wall is 971 meters (a little over half a mile).
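As rough arithmetic on those reported figures (assuming, as a simplification, that the stones are spread evenly along the wall):

```python
# Figures reported above: a 971 m wall made of 10 large stones
# connected by more than 1,600 smaller ones.
wall_length_m = 971
n_stones = 10 + 1600

avg_spacing_m = wall_length_m / n_stones
print(f"~{avg_spacing_m:.2f} m between stones on average")
```

That works out to a stone roughly every 60 centimeters, consistent with the "neat row" the camera footage revealed.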

They concluded that the wall didn’t form through natural processes like a moving glacier or a tsunami, especially given the careful placement of the larger stones wherever the wall zigs or zags. It is more likely that the structure is manmade and was built over 10,000 years ago, although the lack of other archaeological evidence like stone tools makes dating the site difficult. They reasoned that before then, the region would have been covered in a sheet of ice. The immediate vicinity would have had plenty of stones lying about to build the Blinkerwall. Rising sea levels then submerged the structure until it was rediscovered in the 21st century. This would make the Blinkerwall among the oldest and largest Stone Age megastructures in Europe.

As for why the wall was built, Geersen et al. suggest that it was used as a desert kite similar to those found in Asia and the Middle East. There are usually two walls in a desert kite, forming a V shape, but the Blinkerwall happens to run along what was once a lake. Herding reindeer into the lake would have slowed the animals, making them easier to hunt. It’s also possible that there is a second wall hidden underneath the sediment on the seafloor. “When you chase the animals, they follow these structures, they don’t attempt to jump over them,” Geersen told The Guardian. “The idea would be to create an artificial bottleneck with a second wall or with the lake shore.”

3D model of a section of the Blinkerwall adjacent to the large boulder at the western end of the wall.


Philipp Hoy, Rostock University

A similar submerged stone-walled drive lane, known as “Drop 45,” is located in Lake Huron in the US; divers found various lithic artifacts around the drive lane, usually in circular spots that could have served as hunting blinds. The authors suggest that the larger blocks of the Blinkerwall could also have been hunting blinds, although further archaeological surveys will be needed to test this hypothesis.

“I think the case is well made for the wall as an artificial structure built to channel movements of migratory reindeer,” archaeologist Geoff Bailey of the University of York, who is not a co-author on the paper, told New Scientist. Vincent Gaffney of the University of Bradford concurred. “Such a find suggests that extensive prehistoric hunting landscapes may survive in a manner previously only seen in the Great Lakes,” he said. “This has very great implications for areas of the coastal shelves which were previously habitable.”

PNAS, 2024. DOI: 10.1073/pnas.2312008121 (About DOIs).



OpenAI experiments with giving ChatGPT a long-term conversation memory

“I remember…the Alamo” —

AI chatbot “memory” will recall facts from previous conversations when enabled.


Enlarge / When ChatGPT looks things up, a pair of green pixelated hands look through paper records, much like this. Just kidding.

Benj Edwards / Getty Images

On Tuesday, OpenAI announced that it is experimenting with adding a form of long-term memory to ChatGPT that will allow it to remember details between conversations. You can ask ChatGPT to remember something, see what it remembers, and ask it to forget. Currently, it’s only available to a small number of ChatGPT users for testing.

So far, large language models have typically used two types of memory: one baked into the AI model during the training process (before deployment) and an in-context memory (the conversation history) that persists for the duration of your session. Usually, ChatGPT forgets what you have told it during a conversation once you start a new session.

Various projects have experimented with giving LLMs a memory that persists beyond a context window. (The context window is the hard limit on the number of tokens the LLM can process at once.) The techniques include dynamically managing context history, compressing previous history through summarization, links to vector databases that store information externally, or simply periodically injecting information into a system prompt (the instructions ChatGPT receives at the beginning of every chat).
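One of the simpler techniques mentioned above, periodically injecting remembered facts into a system prompt, can be sketched in a few lines of Python. This is a hypothetical illustration, not OpenAI's actual implementation; the `MemoryStore` class and `build_system_prompt` function are invented names for the purposes of the example.

```python
class MemoryStore:
    """Stores short facts the user has asked the assistant to remember."""

    def __init__(self):
        self._memories = []

    def remember(self, fact: str) -> None:
        # Avoid storing exact duplicates.
        if fact not in self._memories:
            self._memories.append(fact)

    def forget(self, fact: str) -> None:
        if fact in self._memories:
            self._memories.remove(fact)

    def all(self) -> list[str]:
        return list(self._memories)


def build_system_prompt(base_instructions: str, store: MemoryStore) -> str:
    """Prepend remembered facts to the base instructions of a new session."""
    if not store.all():
        return base_instructions
    memory_block = "\n".join(f"- {m}" for m in store.all())
    return f"{base_instructions}\n\nKnown facts about the user:\n{memory_block}"


# Each new chat session starts from the base instructions plus any saved memories.
store = MemoryStore()
store.remember("Prefers meeting notes as bullet points")
store.remember("Runs a coffee shop")
prompt = build_system_prompt("You are a helpful assistant.", store)
```

Because the memories live outside any single conversation, deleting a chat would not delete them, which matches the behavior OpenAI describes for its feature.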

A screenshot of ChatGPT memory controls provided by OpenAI.


OpenAI

OpenAI hasn’t explained which technique it uses here, but the implementation reminds us of Custom Instructions, a feature OpenAI introduced in July 2023 that lets users add custom additions to the ChatGPT system prompt to change its behavior.

Possible applications OpenAI suggests for the memory feature include explaining how you prefer your meeting notes to be formatted, telling it you run a coffee shop and having ChatGPT assume that’s what you’re talking about, keeping information about your toddler who loves jellyfish so it can generate relevant graphics, and remembering preferences for kindergarten lesson plan designs.

Also, OpenAI says that memories may help ChatGPT Enterprise and Team subscribers work together better since shared team memories could remember specific document formatting preferences or which programming frameworks your team uses. And OpenAI plans to bring memories to GPTs soon, with each GPT having its own siloed memory capabilities.

Memory control

Obviously, any tendency to remember information brings privacy implications. You should already know that sending information to OpenAI for processing on remote servers introduces the possibility of privacy leaks and that OpenAI trains AI models on user-provided information by default unless conversation history is disabled or you’re using an Enterprise or Team account.

Along those lines, OpenAI says that your saved memories are also subject to OpenAI training use unless you meet the criteria listed above. Still, the memory feature can be turned off completely. Additionally, the company says, “We’re taking steps to assess and mitigate biases, and steer ChatGPT away from proactively remembering sensitive information, like your health details—unless you explicitly ask it to.”

Users will also be able to control what ChatGPT remembers using a “Manage Memory” interface that lists memory items. “ChatGPT’s memories evolve with your interactions and aren’t linked to specific conversations,” OpenAI says. “Deleting a chat doesn’t erase its memories; you must delete the memory itself.”

ChatGPT’s memory features are not currently available to every ChatGPT account, so we have not experimented with them yet. Access during this testing period appears to be random among ChatGPT (free and paid) accounts for now. “We are rolling out to a small portion of ChatGPT free and Plus users this week to learn how useful it is,” OpenAI writes. “We will share plans for broader roll out soon.”



CDC to update its COVID isolation guidance, ditching 5-day rule: Report

update —

The agency is reportedly moving from the fixed time to a symptom-based isolation period.

CDC to update its COVID isolation guidance, ditching 5-day rule: Report

The Centers for Disease Control and Prevention is preparing to update its COVID-19 isolation guidance, moving from a minimum five-day isolation period to one that is solely determined by symptoms, according to a report from The Washington Post.

Currently, CDC isolation guidance states that people who test positive for COVID-19 should stay home for at least five days, at which point people can end their isolation as long as their symptoms are improving and they have been fever-free for 24 hours.

According to three unnamed officials who spoke with the Post, the CDC will update its guidance to remove the five-day minimum, recommending more simply that people can end their isolation any time after being fever-free for 24 hours without the aid of medication, as long as any other remaining symptoms are mild and improving. The change, which is expected to be released in April, would be the first to loosen the guidance since the end of 2021.

In an email to Ars, a CDC spokesperson did not confirm or deny the report, saying only that, “There are no updates to COVID guidelines to announce at this time. We will continue to make decisions based on the best evidence and science to keep communities healthy and safe.”

The Post notes that the proposed update to the guidance matches updated guidance from California and Oregon, as well as other countries.

The officials who spoke with the outlet noted that the loosened guidelines reflect that most people in the US have developed some level of immunity to the pandemic coronavirus from prior infections and vaccinations.

A report earlier this month found that the 2023–2024 COVID-19 vaccine was about 54 percent effective at preventing symptomatic COVID-19 when compared against people who had not received the latest vaccine. However, the CDC estimates that only about 22 percent of adults have received the updated shot.

Currently, the CDC recommends that people wear a mask for 10 days after testing positive unless they have two negative tests 48 hours apart. The Post reported that it’s unclear if the CDC will update its mask recommendation.



Judge rejects most ChatGPT copyright claims from book authors

Insufficient evidence —

OpenAI plans to defeat authors’ remaining claim at a “later stage” of the case.

Judge rejects most ChatGPT copyright claims from book authors

A US district judge in California has largely sided with OpenAI, dismissing the majority of claims raised by authors alleging that large language models powering ChatGPT were illegally trained on pirated copies of their books without their permission.

By allegedly repackaging original works as ChatGPT outputs, authors alleged, OpenAI’s most popular chatbot was just a high-tech “grift” that seemingly violated copyright laws, as well as state laws preventing unfair business practices and unjust enrichment.

According to Judge Araceli Martínez-Olguín, authors behind three separate lawsuits—including Sarah Silverman, Michael Chabon, and Paul Tremblay—have failed to provide evidence supporting any of their claims except for direct copyright infringement.

OpenAI had argued as much in the motion to dismiss these cases that it promptly filed last August. At that time, OpenAI said that it expected to beat the direct infringement claim at a “later stage” of the proceedings.

Among copyright claims tossed by Martínez-Olguín were accusations of vicarious copyright infringement. Perhaps most significantly, Martínez-Olguín agreed with OpenAI that the authors’ allegation that “every” ChatGPT output “is an infringing derivative work” is “insufficient” to allege vicarious infringement, which requires evidence that ChatGPT outputs are “substantially similar” or “similar at all” to authors’ books.

“Plaintiffs here have not alleged that the ChatGPT outputs contain direct copies of the copyrighted books,” Martínez-Olguín wrote. “Because they fail to allege direct copying, they must show a substantial similarity between the outputs and the copyrighted materials.”

Authors also failed to convince Martínez-Olguín that OpenAI violated the Digital Millennium Copyright Act (DMCA) by allegedly removing copyright management information (CMI)—such as author names, titles of works, and terms and conditions for use of the work—from training data.

This claim failed because authors cited “no facts” that OpenAI intentionally removed the CMI or built the training process to omit CMI, Martínez-Olguín wrote. Further, the authors cited examples of ChatGPT referencing their names, which would seem to suggest that some CMI remains in the training data.

Some of the remaining claims were dependent on copyright claims to survive, Martínez-Olguín wrote.

As for the claim that OpenAI caused economic injury by unfairly repurposing authors’ works, the judge said that even if authors could show evidence of a DMCA violation, they could only speculate about what injury was caused.

Similarly, allegations of “fraudulent” unfair conduct—accusing OpenAI of “deceptively” designing ChatGPT to produce outputs that omit CMI—”rest on a violation of the DMCA,” Martínez-Olguín wrote.

The only claim under California’s unfair competition law that was allowed to proceed alleged that OpenAI used copyrighted works to train ChatGPT without authors’ permission. Because the state law broadly defines what’s considered “unfair,” Martínez-Olguín said that it’s possible that OpenAI’s use of the training data “may constitute an unfair practice.”

Remaining claims of negligence and unjust enrichment failed, Martínez-Olguín wrote, because authors only alleged intentional acts and did not explain how OpenAI “received and unjustly retained a benefit” from training ChatGPT on their works.

Authors have been ordered to consolidate their complaints and have until March 13 to amend arguments and continue pursuing any of the dismissed claims.

To shore up the tossed copyright claims, authors would likely need to provide examples of ChatGPT outputs that are similar to their works, as well as evidence of OpenAI intentionally removing CMI to “induce, enable, facilitate, or conceal infringement,” Martínez-Olguín wrote.

Ars could not immediately reach the authors’ lawyers or OpenAI for comment.

As authors likely prepare to continue fighting OpenAI, the US Copyright Office has been fielding public input before releasing guidance that could one day help rights holders pursue legal claims and may eventually require works to be licensed from copyright owners for use as training materials. Among the thorniest questions is whether AI tools like ChatGPT should be considered authors when spouting outputs included in creative works.

While the Copyright Office prepares to release three reports this year “revealing its position on copyright law in relation to AI,” according to The New York Times, OpenAI recently made it clear that it does not plan to stop referencing copyrighted works in its training data. Last month, OpenAI said it would be “impossible” to train AI models without copyrighted materials, because “copyright today covers virtually every sort of human expression—including blogposts, photographs, forum posts, scraps of software code, and government documents.”

According to OpenAI, it doesn’t just need old copyrighted materials; it needs current copyrighted materials to ensure that chatbot and other AI tools’ outputs “meet the needs of today’s citizens.”

Rights holders will likely be bracing throughout this confusing time, waiting for the Copyright Office’s reports. But once there is clarity, those reports could “be hugely consequential, weighing heavily in courts, as well as with lawmakers and regulators,” The Times reported.



Cryptocurrency maker sues former Ars reporter for writing about fraud lawsuit


Enlarge / Image from Bitcoin Latinum’s website

Bitcoin Latinum

The cryptocurrency firm Bitcoin Latinum has sued journalists at Forbes and Poker.org, claiming that the writers made false and defamatory statements in articles that described securities fraud lawsuits filed against the crypto firm.

Bitcoin Latinum and its founder, Donald Basile, filed a libel lawsuit against Forbes reporter Cyrus Farivar and another libel lawsuit against Poker.org and its reporter Haley Hintze. (Farivar was a long-time Ars Technica reporter.)

The lawsuits are surprising because the Forbes article and the Poker.org article, both published in 2022, are very much like thousands of other news stories that describe allegations in a lawsuit. In both articles, it is clear that the allegations come from the filer of the lawsuit and not from the author of the article.

But both of Bitcoin Latinum’s lawsuits, which were filed last week in Delaware’s Court of Chancery, demand that the articles be retracted. They contain the following claim in exactly the same words:

The Article contains statements which insinuate and lead the reader to believe that Assofi’s allegations against Plaintiff Latinum and Plaintiff Basile are factual and correct, and which statements are not couched as the opinion of the author, but rather, are presented as fact, and therefore do not fall under any applicable privilege.

“Assofi’s allegations” are those made in a lawsuit filed against Bitcoin Latinum and Basile in November 2022. That lawsuit from Arshad Assofi, who said he lost over $15 million investing in worthless tokens, alleged that Bitcoin Latinum “is a scam” and accused the defendants of securities fraud and other violations. Bitcoin Latinum calls itself “the future of Bitcoin.”

Lawsuit cites wrong article

It’s especially surprising that Bitcoin Latinum’s lawsuit against Hintze contains the statement about “Assofi’s allegations” because the Hintze article cited in the lawsuit never mentions Assofi. The Hintze article on Poker.org is about a different lawsuit from different plaintiffs who also alleged securities fraud.

In fact, the Hintze article was published in February 2022, 10 months before the Assofi lawsuit was filed. TechDirt’s Mike Masnick pointed out this error in an article yesterday:

It appears that Latinum’s lawyer actually meant to sue over a different Poker.org article, that was published in November about the Assofi lawsuit, but repeatedly claims that the article was published on February 5, 2022, rather than the actual publication date of the article she meant, which was November 21, 2022. Also, Latinum’s lawyer included the February 5th article as the exhibit, rather than the November 21st article. Such attention to detail to talk about the wrong article and include the wrong article as an exhibit. Top notch lawyering.

Masnick also points out that the statute of limitations is two years, and the lawsuit against Hintze was filed more than two years after her February 2022 article.

In libel cases, journalists may defend themselves with the “fair report privilege.” This applies to accurate reporting on official government matters, including court proceedings.

The lawyer for Bitcoin Latinum in the Farivar and Hintze cases is Holly Whitney, who specializes in estate planning and probate cases. We contacted Whitney and Bitcoin Latinum about the lawsuits today and will update this article if we get a response.
