For many of us, memories of our childhood have become a bit hazy, if not vanished entirely. But nobody really remembers much before the age of 4, because nearly all humans experience what’s termed “infantile amnesia,” in which memories that might have formed before that age seemingly vanish as we move through adolescence. And it’s not just us; the phenomenon appears to occur in a number of our fellow mammals.
The simplest explanation for this would be that the systems that form long-term memories are simply immature and don’t start working effectively until children hit the age of 4. But a recent animal experiment suggests that the situation in mice is more complex: the memories are there, they’re just not normally accessible, although they can be re-activated. Now, a study that put human infants in an MRI tube indicates that memory activity starts by the age of 1, suggesting that the results in mice may apply to us.
Less than total recall
Mice are one of the species that we know experience infantile amnesia. And, thanks to over a century of research on mice, we have some sophisticated genetic tools that allow us to explore what’s actually involved in the apparent absence of the animals’ earliest memories.
A paper that came out last year describes a series of experiments that start by having very young mice learn to associate seeing a light come on with receiving a mild shock. If nothing else is done with those mice, that association will apparently be forgotten later in life due to infantile amnesia.
But in this case, the researchers could do something. Neural activity normally results in the activation of a set of genes. In these mice, the researchers engineered things so that one of the genes that gets activated encodes a protein that can modify DNA. When this protein is made, it results in permanent changes to a second gene that was inserted in the animal’s DNA. Once activated through this process, the gene leads to the production of a light-activated ion channel.
We already have an example of general intelligence, and it doesn’t look like AI.
There’s no question that AI systems have accomplished some impressive feats, mastering games, writing text, and generating convincing images and video. That’s gotten some people talking about the possibility that we’re on the cusp of AGI, or artificial general intelligence. While some of this is marketing fanfare, enough people in the field are taking the idea seriously that it warrants a closer look.
Many arguments come down to the question of how AGI is defined, which people in the field can’t seem to agree upon. This contributes to estimates of its advent that range from “it’s practically here” to “we’ll never achieve it.” Given that range, it’s impossible to provide any sort of informed perspective on how close we are.
But we do have an existing example of AGI without the “A”—the intelligence provided by the animal brain, particularly the human one. And one thing is clear: The systems being touted as evidence that AGI is just around the corner do not work at all like the brain does. That may not be a fatal flaw, or even a flaw at all. It’s entirely possible that there’s more than one way to reach intelligence, depending on how it’s defined. But at least some of the differences are likely to be functionally significant, and the fact that AI is taking a very different route from the one working example we have is likely to be meaningful.
With all that in mind, let’s look at some of the things the brain does that current AI systems can’t.
Defining AGI might help
Artificial general intelligence hasn’t really been defined. Those who argue that it’s imminent are either vague about what they expect the first AGI systems to be capable of or simply define it as the ability to dramatically exceed human performance at a limited number of tasks. Predictions of AGI’s arrival in the intermediate term tend to focus on AI systems demonstrating specific behaviors that seem human-like. The further one goes out on the timeline, the greater the emphasis on the “G” of AGI and its implication of systems that are far less specialized.
But most of these predictions are coming from people working in companies with a commercial interest in AI. It was notable that none of the researchers we talked to for this article were willing to offer a definition of AGI. They were, however, willing to point out how current systems fall short.
“I think that AGI would be something that is going to be more robust, more stable—not necessarily smarter in general but more coherent in its abilities,” said Ariel Goldstein, a researcher at Hebrew University of Jerusalem. “You’d expect a system that can do X and Y to also be able to do Z and T. Somehow, these systems seem to be more fragmented in a way. To be surprisingly good at one thing and then surprisingly bad at another thing that seems related.”
“I think that’s a big distinction, this idea of generalizability,” echoed neuroscientist Christa Baker of NC State University. “You can learn how to analyze logic in one sphere, but if you come to a new circumstance, it’s not like now you’re an idiot.”
Mariano Schain, a Google engineer who has collaborated with Goldstein, focused on the abilities that underlie this generalizability. He mentioned both long-term and task-specific memory and the ability to deploy skills developed in one task in different contexts. These abilities range from limited to nonexistent in existing AI systems.
Beyond those specific limits, Baker noted that “there’s long been this very human-centric idea of intelligence that only humans are intelligent.” That’s fallen away within the scientific community as we’ve studied more about animal behavior. But there’s still a bias to privilege human-like behaviors, such as the human-sounding responses generated by large language models.
The fruit flies that Baker studies can integrate multiple types of sensory information, control four sets of limbs, navigate complex environments, satisfy their own energy needs, produce new generations of brains, and more. And they do that all with brains that contain under 150,000 neurons, far fewer than the artificial neurons in current large language models.
These capabilities are complicated enough that it’s not entirely clear how the brain enables them. (If we knew how, it might be possible to engineer artificial systems with similar capacities.) But we do know a fair bit about how brains operate, and there are some very obvious ways that they differ from the artificial systems we’ve created so far.
Neurons vs. artificial neurons
Most current AI systems, including all large language models, are based on what are called neural networks. These were intentionally designed to mimic how some areas of the brain operate, with large numbers of artificial neurons taking an input, modifying it, and then passing the modified information on to another layer of artificial neurons. Each of these artificial neurons can pass the information on to multiple instances in the next layer, with different weights applied to each connection. In turn, each of the artificial neurons in the next layer can receive input from multiple sources in the previous one.
After passing through enough layers, the final layer is read and transformed into an output, such as the pixels in an image that correspond to a cat.
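To make that layered flow concrete, here is a minimal sketch in Python (using NumPy); the layer sizes, random weights, and names are purely illustrative assumptions, not taken from any system discussed in this article.

```python
# A minimal sketch of the layered pass described above: each artificial neuron
# sums its weighted inputs, applies a simple nonlinearity, and hands the result
# to every neuron in the next layer. All sizes and weights are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, weights, biases):
    # Weighted sum plus bias, passed through a ReLU nonlinearity.
    return np.maximum(0.0, weights @ inputs + biases)

# Illustrative shapes: 4 inputs -> 8 hidden neurons -> 2 outputs.
w1, b1 = rng.normal(size=(8, 4)), np.zeros(8)
w2, b2 = rng.normal(size=(2, 8)), np.zeros(2)

x = rng.normal(size=4)          # an arbitrary input vector
hidden = layer(x, w1, b1)       # every hidden neuron sees every input, with its own weights
output = layer(hidden, w2, b2)  # the final layer is read out as the result
print(output)
```

Training such a network amounts to adjusting those weight matrices; everything the model “knows” ends up encoded in them.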
While that system is modeled on the behavior of some structures within the brain, it’s a very limited approximation. For one, all artificial neurons are functionally equivalent—there’s no specialization. In contrast, real neurons are highly specialized; they use a variety of neurotransmitters and take input from a range of extra-neural inputs like hormones. Some specialize in sending inhibitory signals while others activate the neurons they interact with. Different physical structures allow them to make different numbers and types of connections.
In addition, rather than simply forwarding a single value to the next layer, real neurons communicate through an analog series of activity spikes, sending trains of pulses that vary in timing and intensity. This allows for a degree of non-deterministic noise in communications.
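The contrast with a single forwarded value is easier to see in a toy simulation. Below is a minimal leaky integrate-and-fire sketch in Python; it is a textbook simplification with arbitrary parameters, not a claim about how any real neuron is tuned, but it shows output taking the form of a noisy train of spikes rather than one number.

```python
# A minimal leaky integrate-and-fire sketch: the "neuron" integrates a noisy
# input, leaks toward rest, and emits a spike whenever it crosses threshold.
# All parameters are arbitrary and chosen only for illustration.
import numpy as np

rng = np.random.default_rng(1)

dt, steps = 1e-3, 1000            # 1 ms steps, 1 simulated second
tau, v_rest, v_thresh = 0.02, 0.0, 1.0
v = v_rest
spike_times = []

for t in range(steps):
    drive = 1.2 + 0.5 * rng.normal()          # noisy input current (arbitrary units)
    v += dt / tau * (-(v - v_rest) + drive)   # leaky integration of the input
    if v >= v_thresh:                         # crossing threshold emits a spike...
        spike_times.append(t * dt)
        v = v_rest                            # ...and the voltage resets

print(f"{len(spike_times)} spikes; first few at {spike_times[:5]}")
```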
Finally, while organized layers are a feature of a few structures in brains, they’re far from the rule. “What we found is it’s—at least in the fly—much more interconnected,” Baker told Ars. “You can’t really identify this strictly hierarchical network.”
With near-complete connection maps of the fly brain becoming available, she told Ars that researchers are “finding lateral connections or feedback projections, or what we call recurrent loops, where we’ve got neurons that are making a little circle and connectivity patterns. I think those things are probably going to be a lot more widespread than we currently appreciate.”
While we’re only beginning to understand the functional consequences of all this complexity, it’s safe to say that it allows networks composed of actual neurons far more flexibility in how they process information—a flexibility that may underlie how these neurons get re-deployed in a way that these researchers identified as crucial for some form of generalized intelligence.
But the differences between neural networks and the real-world brains they were modeled on go well beyond the functional differences we’ve talked about so far. They extend to significant differences in how these functional units are organized.
The brain isn’t monolithic
The neural networks we’ve generated so far are largely specialized systems meant to handle a single task. Even the most complicated tasks, like the prediction of protein structures, have typically relied on the interaction of only two or three specialized systems. In contrast, the typical brain has a lot of functional units. Some of these operate by sequentially processing a single set of inputs in something resembling a pipeline. But many others can operate in parallel, in some cases without any input activity going on elsewhere in the brain.
To give a sense of what this looks like, let’s think about what’s going on as you read this article. Doing so requires systems that handle motor control, which keep your head and eyes focused on the screen. Part of this system operates via feedback from the neurons that are processing the read material, causing small eye movements that help your eyes move across individual sentences and between lines.
Separately, there’s part of your brain devoted to telling the visual system what not to pay attention to, like the icon showing an ever-growing number of unread emails. Those of us who can read a webpage without even noticing the ads on it presumably have a very well-developed system in place for ignoring things. Reading this article may also mean you’re engaging the systems that handle other senses, getting you to ignore things like the noise of your heating system coming on while remaining alert for things that might signify threats, like an unexplained sound in the next room.
The input generated by the visual system then needs to be processed, from individual character recognition up to the identification of words and sentences, processes that involve systems in areas of the brain involved in both visual processing and language. Again, this is an iterative process, where building meaning from a sentence may require many eye movements to scan back and forth across a sentence, improving reading comprehension—and requiring many of these systems to communicate among themselves.
As meaning gets extracted from a sentence, other parts of the brain integrate it with information obtained in earlier sentences, which tends to engage yet another area of the brain, one that handles a short-term memory system called working memory. Meanwhile, other systems will be searching long-term memory, finding related material that can help the brain place the new information within the context of what it already knows. Still other specialized brain areas are checking for things like whether there’s any emotional content to the material you’re reading.
All of these different areas are engaged without you being consciously aware of the need for them.
Something like ChatGPT, in contrast, is monolithic despite having a lot of artificial neurons: No specialized structures are allocated before training starts, which is a sharp departure from how a brain develops. “The brain does not start out as a bag of neurons and then as a baby it needs to make sense of the world and then determine what connections to make,” Baker noted. “There are already a lot of constraints and specifics that are already set up.”
Even in cases where it’s not possible to see any physical distinction between cells specialized for different functions, Baker noted that we can often find differences in what genes are active.
In contrast, pre-planned modularity is relatively new to the AI world. In software development, “this concept of modularity is well established, so we have the whole methodology around it, how to manage it,” Schain said. “It’s really an aspect that is important for maybe achieving AI systems that can then operate similarly to the human brain.” There are a few cases where developers have enforced modularity on systems, but Goldstein said these systems need to be trained with all the modules in place to see any gain in performance.
None of this is saying that a modular system can’t arise within a neural network as a result of its training. But so far, we have very limited evidence that they do. And since we mostly deploy each system for a very limited number of tasks, there’s no reason to think modularity will be valuable.
There is some reason to believe that this modularity is key to the brain’s incredible flexibility. The region that recognizes emotion-evoking content in written text can also recognize it in music and images, for example. But the evidence here is mixed. There are some clear instances where a single brain region handles related tasks, but that’s not consistently the case; Baker noted that, “When you’re talking humans, there are parts of the brain that are dedicated to understanding speech, and there are different areas that are involved in producing speech.”
This sort of re-use would also provide an advantage in terms of learning, since behaviors developed in one context could potentially be deployed in others. But as we’ll see, the differences between brains and AI when it comes to learning are far more comprehensive than that.
The brain is constantly training
Current AIs generally have two states: training and deployment. Training is where the AI learns its behavior; deployment is where that behavior is put to use. This isn’t absolute, as the behavior can be tweaked in response to things learned during deployment, like finding out it recommends eating a rock daily. But for the most part, once the weights among the connections of a neural network are determined through training, they’re retained.
That may be starting to change a bit, Schain said. “There is now maybe a shift in similarity where AI systems are using more and more what they call the test time compute, where at inference time you do much more than before, kind of a parallel to how the human brain operates,” he told Ars. But it’s still the case that neural networks are essentially useless without an extended training period.
In contrast, a brain doesn’t have distinct learning and active states; it’s constantly in both modes. In many cases, the brain learns while doing. Baker described that in terms of learning to take jumpshots: “Once you have made your movement, the ball has left your hand, it’s going to land somewhere. So that visual signal—that comparison of where it landed versus where you wanted it to go—is what we call an error signal. That’s detected by the cerebellum, and its goal is to minimize that error signal. So the next time you do it, the brain is trying to compensate for what you did last time.”
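As a rough analogy for that trial-by-trial correction, here is a toy Python sketch in which each noisy attempt produces an error signal that adjusts the next attempt. It illustrates error-driven updating in general, not a model of the cerebellum, and every parameter in it is made up for the example.

```python
# A toy illustration of on-line, error-driven correction: each attempt produces
# an error signal (where it landed vs. where it was aimed), and the next attempt
# is adjusted to shrink that error. All values are invented for the example.
import random

random.seed(42)
target = 10.0          # where we want the "shot" to land (arbitrary units)
aim = 4.0              # an initially badly calibrated aim
learning_rate = 0.3

for attempt in range(1, 11):
    landed = aim + random.gauss(0, 0.5)   # execution is noisy
    error = target - landed               # the "error signal"
    aim += learning_rate * error          # compensate on the very next attempt
    print(f"attempt {attempt:2d}: landed at {landed:5.2f}, error {error:+.2f}")
```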
It makes for very different learning curves. An AI is typically not very useful until it has had a substantial amount of training. In contrast, a human can often pick up basic competence in a very short amount of time (and without massive energy use). “Even if you’re put into a situation where you’ve never been before, you can still figure it out,” Baker said. “If you see a new object, you don’t have to be trained on that a thousand times to know how to use it. A lot of the time, [if] you see it one time, you can make predictions.”
As a result, while an AI system with sufficient training may ultimately outperform the human, the human will typically reach a high level of performance faster. And unlike an AI, a human’s performance doesn’t remain static. Incremental improvements and innovative approaches are both still possible. This also allows humans to adjust to changed circumstances more readily. An AI trained on the body of written material up until 2020 might struggle to comprehend teen-speak in 2030; humans could at least potentially adjust to the shifts in language. (Though maybe an AI trained to respond to confusing phrasing with “get off my lawn” would be indistinguishable.)
Finally, since the brain is a flexible learning device, the lessons learned from one skill can be applied to related skills. So the ability to recognize tones and read sheet music can help with the mastery of multiple musical instruments. Chemistry and cooking share overlapping skillsets. And when it comes to schooling, learning how to learn can be used to master a wide range of topics.
In contrast, it’s essentially impossible to use an AI model trained on one topic for much else. The biggest exceptions are large language models, which seem to be able to solve problems on a wide variety of topics if they’re presented as text. But here, there’s still a dependence on sufficient examples of similar problems appearing in the body of text the system was trained on. To give an example, something like ChatGPT can seem to be able to solve math problems, but it’s best at solving things that were discussed in its training materials; giving it something new will generally cause it to stumble.
Déjà vu
For Schain, however, the biggest difference between AI and biology is in terms of memory. For many AIs, “memory” is indistinguishable from the computational resources that allow them to perform a task and that were formed during training. For the large language models, it includes both the weights of connections learned then and a narrow “context window” that encompasses any recent exchanges with a single user. In contrast, biological systems have a lifetime of memories to rely on.
“For AI, it’s very basic: It’s like the memory is in the weights [of connections] or in the context. But with a human brain, it’s a much more sophisticated mechanism, still to be uncovered. It’s more distributed. There is the short term and long term, and it has to do a lot with different timescales. Memory for the last second, a minute and a day or a year or years, and they all may be relevant.”
This lifetime of memories can be key to making intelligence general. It helps us recognize the possibilities and limits of drawing analogies between different circumstances or applying things learned in one context versus another. It provides us with insights that let us solve problems that we’ve never confronted before. And, of course, it also ensures that the horrible bit of pop music you were exposed to in your teens remains an earworm well into your 80s.
The differences between how brains and AIs handle memory, however, are very hard to describe. AIs don’t really have a distinct memory system, while the brain’s use of memory during any task more sophisticated than navigating a maze is so poorly understood that it’s difficult to discuss at all. All we can really say is that there are clear differences there.
Facing limits
It’s difficult to think about AI without recognizing the enormous energy and computational resources involved in training one. And in this case, it’s potentially relevant. Brains have evolved under enormous energy constraints and continue to operate using well under the energy that a daily diet can provide. That has forced biology to figure out ways to optimize its resources and get the most out of the resources it does commit to.
In contrast, the story of recent developments in AI is largely one of throwing more resources at them. And plans for the future seem to (so far at least) involve more of this, including larger training data sets and ever more artificial neurons and connections among them. All of this comes at a time when the best current AIs are already using three orders of magnitude more neurons than we’d find in a fly’s brain and have nowhere near the fly’s general capabilities.
It remains possible that there is more than one route to those general capabilities and that some offshoot of today’s AI systems will eventually find a different route. But if it turns out that we have to bring our computerized systems closer to biology to get there, we’ll run into a serious roadblock: We don’t fully understand the biology yet.
“I guess I am not optimistic that any kind of artificial neural network will ever be able to achieve the same plasticity, the same generalizability, the same flexibility that a human brain has,” Baker said. “That’s just because we don’t even know how it gets it; we don’t know how that arises. So how do you build that into a system?”
Neurons and a second cell type, the astrocyte, collaborate to hold memories.
Astrocytes (labelled in black) sit within a field of neurons. Credit: Ed Reschke
“If we go back to the early 1900s, this is when the idea was first proposed that memories are physically stored in some location within the brain,” says Michael R. Williamson, a researcher at the Baylor College of Medicine in Houston. For a long time, neuroscientists thought that the storage of memory in the brain was the job of engrams, ensembles of neurons that activate during a learning event. But it turned out this wasn’t the whole picture.
Williamson’s research investigated the role astrocytes, non-neuronal brain cells, play in the read-and-write operations that go on in our heads. “Over the last 20 years the role of astrocytes has been understood better. We’ve learned that they can activate neurons. The addition we have made to that is showing that there are subsets of astrocytes that are active and involved in storing specific memories,” Williamson says in describing a new study his lab has published.
One consequence of this finding: Astrocytes could be artificially manipulated to suppress or enhance a specific memory, leaving all other memories intact.
Marking star cells
Astrocytes, otherwise known as star cells due to their shape, play various roles in the brain, and many are focused on the health and activity of their neighboring neurons. Williamson’s team started by developing techniques that enabled them to mark chosen ensembles of astrocytes to see when they activate genes (including one named c-Fos) that help neurons reconfigure their connections and are deemed crucial for memory formation. This was based on the idea that the same pathway would be active in neurons and astrocytes.
“In simple terms, we use genetic tools that allow us to inject mice with a drug that artificially makes astrocytes express some other gene or protein of interest when they become active,” says Wookbong Kwon, a biotechnologist at Baylor College and co-author of the study.
Those proteins of interest were mainly fluorescent proteins that make cells fluoresce bright red. This way, the team could spot the astrocytes in mouse brains that became active during learning scenarios. Once the tagging system was in place, Williamson and his colleagues gave their mice a little scare.
“It’s called fear conditioning, and it’s a really simple idea. You take a mouse, put it into a new box, one it’s never seen before. While the mouse explores this new box, we just apply a series of electrical shocks through the floor,” Williamson explains. A mouse treated this way remembers this as an unpleasant experience and associates it with contextual cues like the box’s appearance, the smells and sounds present, and so on.
The tagging system lit up all astrocytes that expressed the c-Fos gene in response to fear conditioning. Williamson’s team inferred that this is where the memory is stored in the mouse’s brain. Knowing that, they could move on to the next question, which was if and how astrocytes and engram neurons interacted during this process.
Modulating engram neurons
“Astrocytes are really bushy,” Williamson says. They have a complex morphology with lots and lots of micro or nanoscale processes that infiltrate the area surrounding them. A single astrocyte can contact roughly 100,000 synapses, and not all of them will be involved in learning events. So the team looked for correlations between astrocytes activated during memory formation and the neurons that were tagged at the same time.
“When we did that, we saw that engram neurons tended to be contacting the astrocytes that are active during the formation of the same memory,” Williamson says. To see how astrocytes’ activity affects neurons, the team artificially stimulated the astrocytes by microinjecting them with a virus engineered to induce the expression of the c-Fos gene. “It directly increased the activity of engram neurons but did not increase the activity of non-engram neurons in contact with the same astrocyte,” Williamson explains.
This way his team established that at least some astrocytes could preferentially communicate with engram neurons. The researchers also noticed that astrocytes involved in memorizing the fear conditioning event had elevated levels of a protein called NFIA, which is known to regulate memory circuits in the hippocampus.
But probably the most striking discovery came when the researchers tested whether the astrocytes involved in memorizing an event also played a role in recalling it later.
Selectively forgetting
The first test to see if astrocytes were involved in recall was to artificially activate them when the mice were in a box that they were not conditioned to fear. It turned out that artificially activating the astrocytes that had been active during the formation of a fear memory in one box caused the mice to freeze even when they were in a different one.
So, the next question was, if you just killed or otherwise disabled an astrocyte ensemble active during a specific memory formation, would it just delete this memory from the brain? To get that done, the team used their genetic tools to selectively delete the NFIA protein in astrocytes that were active when the mice received their electric shocks. “We found that mice froze a lot less when we put them in the boxes they were conditioned to fear. They could not remember. But other memories were intact,” Kwon claims.
The memory was not completely deleted, though. The mice still froze in the boxes they were supposed to freeze in, but they did it for a much shorter time on average. “It looked like their memory was maybe a bit foggy. They were not sure if they were in the right place,” Williamson says.
After figuring out how to suppress a memory, the team also figured out where the “undo” button was and how to bring the memory back.
“When we deleted the NFIA protein in astrocytes, the memory was impaired, but the engram neurons were intact. So, the memory was still somewhere there. The mice just couldn’t access it,” Williamson claims. The team brought the memory back by artificially stimulating the engram neurons using the same technique they employed for activating chosen astrocytes. “That caused the neurons involved in this memory trace to be activated for a few hours. This artificial activity allowed the mice to remember it again,” Williamson says.
The team’s vision is that in the distant future this technique can be used in treatments targeting neurons that are overactive in disorders such as PTSD. “We now have a new cellular target that we can evaluate and potentially develop treatments that target the astrocyte component associated with memory,” Williamson claims. But there’s a lot more to learn before anything like that becomes possible. “We don’t yet know what signal is released by an astrocyte that acts on the neuron. Another thing is our study was focused on one brain region, which was the hippocampus, but we know that engrams exist throughout the brain in lots of different regions. The next step is to see if astrocytes play the same role in other brain regions that are also critical for memory,” Williamson says.
A Voyager space probe in a clean room at the Jet Propulsion Laboratory in 1977.
Engineers have determined why NASA’s Voyager 1 probe has been transmitting gibberish for nearly five months, raising hopes of recovering humanity’s most distant spacecraft.
Voyager 1, traveling outbound some 15 billion miles (24 billion km) from Earth, started beaming unreadable data down to ground controllers on November 14. For nearly four months, NASA knew Voyager 1 was still alive—it continued to broadcast a steady signal—but could not decipher anything it was saying.
Engineers at NASA’s Jet Propulsion Laboratory (JPL) in California have now confirmed their hypothesis that a small portion of corrupted memory caused the problem. The faulty memory bank is located in Voyager 1’s Flight Data System (FDS), one of three computers on the spacecraft. The FDS operates alongside a command-and-control central computer and another device overseeing attitude control and pointing.
The FDS’s duties include packaging Voyager 1’s science and engineering data for relay to Earth through the craft’s Telemetry Modulation Unit and radio transmitter. According to NASA, about 3 percent of the FDS memory has been corrupted, preventing the computer from carrying out normal operations.
Optimism growing
Suzanne Dodd, NASA’s project manager for the twin Voyager probes, told Ars in February that this was one of the most serious problems the mission has ever faced. That is saying something because Voyager 1 and 2 are NASA’s longest-lived spacecraft. They launched 16 days apart in 1977, and after flying by Jupiter and Saturn, Voyager 1 is flying farther from Earth than any spacecraft in history. Voyager 2 is trailing Voyager 1 by about 2.5 billion miles, although the probes are heading out of the Solar System in different directions.
Normally, engineers would try to diagnose a spacecraft malfunction by analyzing data it sent back to Earth. They couldn’t do that in this case because Voyager 1 has been transmitting data packages manifesting a repeating pattern of ones and zeros. Still, Voyager 1’s ground team identified the FDS as the likely source of the problem.
The Flight Data System was an innovation in computing when it was developed five decades ago. It was the first computer on a spacecraft to use volatile memory. Most of NASA’s missions operate with redundancy, so each Voyager spacecraft launched with two FDS computers. But the backup FDS on Voyager 1 failed in 1982.
Due to the Voyagers’ age, engineers had to reference paper documents, memos, and blueprints to help understand the spacecraft’s design details. After months of brainstorming and planning, teams at JPL uplinked a command in early March to prompt the spacecraft to send back a readout of the FDS memory.
The command worked, and Voyager 1 responded with a signal different from the code the spacecraft had been transmitting since November. After several weeks of meticulous examination of the new code, engineers pinpointed the locations of the bad memory.
“The team suspects that a single chip responsible for storing part of the affected portion of the FDS memory isn’t working,” NASA said in an update posted Thursday. “Engineers can’t determine with certainty what caused the issue. Two possibilities are that the chip could have been hit by an energetic particle from space or that it simply may have worn out after 46 years.”
Voyager 1’s distance from Earth complicates the troubleshooting effort. The one-way travel time for a radio signal to reach Voyager 1 from Earth is about 22.5 hours, meaning it takes roughly 45 hours for engineers on the ground to learn how the spacecraft responded to their commands.
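Those figures follow directly from the spacecraft’s distance and the speed of light; here is the back-of-the-envelope arithmetic in Python, using the approximate values quoted above.

```python
# A quick check of the figures above: the one-way light travel time for a
# signal crossing roughly 15 billion miles, using approximate values.
SPEED_OF_LIGHT_MPS = 186_282          # miles per second (approximate)
distance_miles = 15_000_000_000       # Voyager 1's approximate distance

one_way_hours = distance_miles / SPEED_OF_LIGHT_MPS / 3600
print(f"one-way: {one_way_hours:.1f} hours, round trip: {2 * one_way_hours:.1f} hours")
# Prints roughly 22.4 hours one way and about 45 hours round trip,
# matching the figures NASA cites.
```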
NASA also must use its largest communications antennas to contact Voyager 1. These 230-foot-diameter (70-meter) antennas are in high demand by many other NASA spacecraft, so the Voyager team has to compete with other missions to secure time for troubleshooting. This means it will take time to get Voyager 1 back to normal operations.
“Although it may take weeks or months, engineers are optimistic they can find a way for the FDS to operate normally without the unusable memory hardware, which would enable Voyager 1 to begin returning science and engineering data again,” NASA said.
Eternal Sunshine of the Spotless Mind stars Jim Carrey in one of his most powerful dramatic roles. Credit: Focus Features
Last week, the 2004 cult classic Eternal Sunshine of the Spotless Mind marked its 20th anniversary, prompting many people to revisit the surreal sci-fi psychological drama about two ex-lovers who erase their memories of each other—only to find themselves falling in love all over again. Eternal Sunshine was a box office success and earned almost universal praise upon its release. It’s still a critical favorite today and remains one of star Jim Carrey’s most powerful and emotionally resonant dramatic roles. What better time for a rewatch and in-depth discussion of the film’s themes of memory, personal identity, love, and loss?
(Spoilers for the 2004 film below.)
Director Michel Gondry and co-writer Pierre Bismuth first came up with the concept for the film in 1998, based on a conversation Bismuth had with a female friend who, when he asked, said she would absolutely erase her boyfriend from her memory if she could. They brought on Charlie Kaufman to write the script, and the three men went on to win an Oscar for Best Original Screenplay for their efforts. The title alludes to a 1717 poem by Alexander Pope, “Eloisa to Abelard,” based on the tragic love between medieval philosopher Peter Abelard and Héloïse d’Argenteuil and their differing perspectives on what happened between them when they exchanged letters later in life. These are the most relevant lines:
Of all affliction taught a lover yet,
‘Tis sure the hardest science to forget!
…
How happy is the blameless vestal’s lot!
The world forgetting, by the world forgot.
Eternal sunshine of the spotless mind!
Carrey plays Joel, a shy introvert who falls in love with the extroverted free spirit Clementine (Kate Winslet). The film opens with the couple estranged and Joel discovering that Clementine has erased all her memories of him, thanks to the proprietary technology of a company called Lacuna. Joel decides to do the same, and much of the film unfolds backward in time in a nonlinear narrative as Joel (while dreaming) relives his memories of their relationship in reverse. Those memories dissolve as he recalls each one, even though at one point, he changes his mind and tries unsuccessfully to stop the process.
The twist: Joel ends up meeting Clementine all over again on that beach in Montauk, and they are just as drawn to each other as before. When they learn—thanks to the machinations of a vengeful Lacuna employee—what happened between them the first time around, they almost separate again. But Joel convinces Clementine to take another chance, believing their relationship to be worth any future pain.
Joel (Jim Carrey) and Clementine (Kate Winslet) meet-cute on the LIRR to Montauk.
Much has been written over the last two decades about the scientific basis for the film, particularly the technology used to erase Joel’s and Clementine’s respective memories. The underlying neuroscience involves what’s known as memory reconsolidation. The brain is constantly processing memories, including associated emotions, both within the hippocampus and across the rest of the brain (system consolidation). Research into memory reconsolidation emerged in the 2000s; in this work, past memories (usually traumatic ones) are deliberately recalled with the intent of altering them, since memories are unstable during the recall process. For example, in the case of severe PTSD, administering beta blockers can decouple intense feelings of fear from traumatic memories while leaving those memories intact.
Like all good science fiction, Eternal Sunshine takes that grain of actual science and extends it in thought-provoking ways. In the film, so-called “problem memories” can be recalled individually while the patient is in a dream state and erased completely—uncomfortable feelings and all—as if they were computer files. Any neuroscientist will tell you this is not how memory works. What remains most interesting about Eternal Sunshine‘s premise is its thematic exploration of the persistence and vital importance of human memory.
So we thought it would be intriguing to mark the film’s 20th anniversary by exploring those ideas through the lens of philosophy with the guidance of Johns Hopkins University philosopher Jenann Ismael. Ismael specializes in probing questions of physics, metaphysics, cognition, and theory of mind. Her many publications include The Situated Self (2009), How Physics Makes Us Free (2016), and, most recently, Time: A Very Short Introduction (2021).
Samsung shared this rendering of a CAMM ahead of the publishing of the CAMM2 standard in September.
Of all the PC-related things to come out of CES this year, my favorite wasn’t Nvidia’s graphics cards or AMD’s newest Ryzens or Intel’s iterative processor refreshes or any one of the oddball PC concept designs or anything to do with the mad dash to cram generative AI into everything.
No, of all things, the thing that I liked the most was this Crucial-branded memory module spotted by Tom’s Hardware. If it looks a little strange to you, it’s because it uses the Compression Attached Memory Module (CAMM) standard—rather than being a standard stick of RAM that you insert into a slot on your motherboard, it lies flat against the board where metal contacts on the board and the CAMM module can make contact with one another.
CAMM memory has been on my radar for a while, since it first cropped up in a handful of Dell laptops. Mistakenly identified at the time as a proprietary type of RAM that would give Dell an excuse to charge more for it, CAMM has instead moved toward becoming an open standard: Dell has been pushing for the standardization of CAMM modules for a couple of years now, and JEDEC (the organization that handles all current computer memory standards) formally finalized the spec last month.
Something about seeing an actual in-the-wild CAMM module with a Crucial sticker on it, the same kind of sticker you’d see on any old memory module from Amazon or Newegg, made me more excited about the standard’s future. I had a similar feeling when I started digging into USB-C or when I began seeing M.2 modules show up in actual computers (though CAMM would probably be a bit less transformative than either). Here’s a thing that solves some real problems with the current technology, and it has the industry backing to actually become a viable replacement.
From upgradable to soldered (and back again?)
SO-DIMM memory slots in the Framework Laptop 13. RAM slots used to be the norm in laptop motherboards, though now you need to do a bit of work to seek out laptops that feature them. Credit: Andrew Cunningham
It used to be easy to save some money on a new PC by buying a version without much RAM and performing an upgrade yourself, using third-party RAM sticks that cost a fraction of what manufacturers would charge. But most laptops no longer afford you the luxury.
Most PC makers and laptop PC buyers made an unspoken bargain in the early- to mid-2010s, around when the MacBook Air and the Ultrabook stopped being special thin-and-light outliers and became the standard template for the mainstream laptop: We would jettison nearly any port or internal component in the interest of making a laptop that was thinner, sleeker, and lighter.
The CD/DVD drive was one of the most immediate casualties, though its demise had already been foreshadowed thanks to cheap USB drives, cloud storage, and streaming music and video services. But as laptops got thinner, it also gradually became harder to find Ethernet and most other non-USB ports (and, eventually, even traditional USB-A ports), space for hard drives (not entirely a bad thing, now that M.2 SSDs are cheap and plentiful), socketed laptop CPUs, and room for other easily replaceable or upgradable components. Early Microsoft Surface tablets were some of the worst examples of this era of computer design—thin sandwiches of glass, metal, and glue that were difficult or impossible to open without totally destroying them.
Another casualty of this shift was memory modules, specifically Dual In-line Memory Modules (DIMMs) that could be plugged into a socket on the motherboard and easily swapped out. Most laptops had a pair of SO-DIMM slots, either stacked on top of each other (adding thickness) or placed side by side (taking up valuable horizontal space that could have been used for more battery).
Eventually, these began to go away in favor of soldered-down memory, saving space and making it easier for manufacturers to build the kinds of MacBook Air-alikes that people wanted to buy, but also adding a point of failure to the motherboard and possibly shortening its useful life by setting its maximum memory capacity at the outset.
Move over, SO-DIMM. A new type of memory module has been made official, and backers like Dell are hoping that it eventually replaces SO-DIMM (small outline dual in-line memory module) entirely.
This month, JEDEC, a semiconductor engineering trade organization, announced that it had published the JESD318: Compression Attached Memory Module (CAMM2) standard, as spotted by Tom’s Hardware.
CAMM2 was originally introduced as CAMM by Dell, which has been pushing for standardization since it announced the technology at CES 2022. Dell released the only laptops with CAMM in 2022, the Dell Precision 7670 and 7770 workstations.
The standard includes DDR5 and LPDDR5/5X designs. The former targets “performance notebooks and mainstream desktops,” and the latter is for “a broader range of notebooks and certain server market segment,” JEDEC’s announcement said.
They each have the same connector but differing pinouts, so a DDR5 CAMM2 can’t be mistakenly mounted onto an LPDDR5/5X connector. CAMM2 means that it will be possible to have non-soldered LPDDR5X memory. Currently, you can only get LPDDR5X as soldered chips.
Another reason supporters are pushing CAMM2 is speed, as SO-DIMM tops out at 6,400 MT/s, with max supported speeds even lower in four-DIMM designs. Many mainstream designs aren’t yet at this threshold, but Dell originally proposed CAMM as a way to get ahead of the limitation (largely through closer contact between the module and motherboard). The published CAMM2 standard says LPDDR5 DRAM CAMM2 “is expected to start at 6,400 MT/s and increment upward in cadence with the DRAM speed capabilities.”
Samsung in September announced plans to offer LPDDR CAMM at 7.5Gbps, noting that it expects commercialization in 2024. Micron also plans to offer CAMM at up to 9,600Mbps and 192GB-plus per module in late 2026, as per a company road map shared by AnandTech last month. Both announcements were made before the CAMM2 standard was published, and we wouldn’t be surprised to see timelines extended.
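To put those transfer rates in rough perspective, the arithmetic below converts them into theoretical peak bandwidth for a single module; the 64-bit bus width is an assumption made for the example, and real-world throughput is always lower.

```python
# Illustrative arithmetic only: converting a memory transfer rate into a
# theoretical peak bandwidth for a single 64-bit module. The bus width is an
# assumption for the example; actual CAMM2 products may differ.
def peak_bandwidth_gbs(transfer_rate_mts, bus_width_bits=64):
    bytes_per_transfer = bus_width_bits / 8
    return transfer_rate_mts * 1_000_000 * bytes_per_transfer / 1e9

# Transfer rates mentioned in the article, in MT/s
# (the per-pin Gbps/Mbps figures line up numerically with MT/s).
for rate in (6_400, 7_500, 9_600):
    print(f"{rate} MT/s: ~{peak_bandwidth_gbs(rate):.1f} GB/s theoretical peak per 64-bit module")
```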
CAMM2 supports capacities of 8GB to 128GB on a single module. This opens the potential for thinner computer designs that don’t sacrifice memory or require RAM modules on both sides of the motherboard. Dell says its original CAMM design is 57 percent thinner than SO-DIMM; the Precision laptops that used it shipped with up to 128GB of DDR5-3600 on a single module, in chassis as thin as 0.98 inches with a 16-inch display.
A Dell rendering depicting the size differences between SO-DIMM and CAMM. Credit: Dell
Nominal module dimensions listed in the standard point to “various” form factors for the modules, with the X-axis measuring 78 mm (3.07 inches) and the Y-axis 29.6–68 mm (1.17–2.68 inches).
Computers can also achieve dual-channel memory for more bandwidth with one CAMM, compared to SO-DIMM’s single-channel design. The space savings could also leave more room for things like heat management.
JEDEC’s announcement said:
By splitting the dual-channel CAMM2 connector lengthwise into two single-channel CAMM2 connectors, each connector half can elevate the CAMM2 to a different level. The first connector half supports one DDR5 memory channel at 2.85mm height while the second half supports a different DDR5 memory channel at 7.5mm height. Or, the entire CAMM2 connector can be used with a dual-channel CAMM2. This scalability from single-channel and dual-channel configurations to future multi-channel setups promises a significant boost in memory capacity.
Unlike their taller SO-DIMM counterparts, CAMM2 modules press against an interposer, which has pins on both sides to communicate with the motherboard. It’s also worth noting that, compared to SO-DIMM modules, CAMM2 modules are screwed in. Upgrades may also be more complex, since going from 8GB to 16GB, for example, would require buying a whole new CAMM and discarding the old one rather than only buying a second 8GB module.
JEDEC’s standardization should eventually make it cheaper for these parts to be created and sourced for different computers. It could also help adoption grow, but it will take years before we can expect CAMM2 to overtake 26-year-old SO-DIMM, as Dell hopes. Still, with a few big names behind the standard and interest in thinner, more powerful computers, we should see a greater push for these modules in the coming years.