Based on the video analysis, scientists identified three facial gestures they wanted to focus on: the lipsmack macaques use to signal receptivity or submission; the threat face they make when they want to challenge or chase off an adversary; and chewing, a non-social, volitional movement. Then, using the fMRI scans, the team located key brain areas involved in triggering these gestures. And when this was done, Ianni and her colleagues went deeper—quite literally.
Under the hood
“We targeted these brain areas with sub-millimeter precision for implantation of micro-electrode arrays,” Ianni explains. This allowed her team, for the first time, to simultaneously record activity from many neurons spread across the areas where the brain generates facial gestures. The electrodes went into the primary motor cortex, the ventral premotor cortex, the primary somatosensory cortex, and the cingulate motor cortex. Once the arrays were in place, the team exposed the macaques to the same set of social stimuli again, looking for neural signatures of the three selected facial gestures. And that’s when things took a surprising turn.
The researchers expected to see a clear division of responsibilities, one where the cingulate cortex governs social signals, while the motor cortex is specialized in chewing. Instead, they found that every single region was involved in every type of gesture. Whether the macaques were threatening a rival or simply enjoying a snack, all four brain areas were firing in a coordinated symphony.
This led Ianni’s team to ask how the brain distinguishes between social gestures and chewing, since the distinction apparently wasn’t about where the information was processed. The answer lay in different neural codes: different ways that neurons represent and transmit information in the brain over time.
The hierarchy of timing
By analyzing neural population dynamics, the team identified a temporal hierarchy across the macaque cortex. The cingulate cortex used a static neural code. “The static means the firing pattern of neurons is persistent across both multiple repetitions of the same facial gesture and across time,” Ianni explains; the neurons maintained their firing pattern for up to 0.8 seconds. “A single decoder which learns this pattern could be used at any timepoint or during any trial to read out the facial expression,” Ianni says.
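To make the idea of a static code concrete, here is a minimal sketch in Python (not the study's actual analysis). It assumes a toy model in which each gesture corresponds to one fixed population firing pattern plus noise, and shows why a single decoder trained at one timepoint can read out the gesture at any other timepoint. All names, sizes, and noise levels are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials, n_timepoints = 50, 40, 10
gestures = ["lipsmack", "threat", "chew"]

# Assumed static code: one fixed population pattern per gesture,
# persisting across all timepoints (plus trial-to-trial noise).
patterns = {g: rng.normal(size=n_neurons) for g in gestures}

def simulate(gesture):
    # Returns an array of shape (trials, timepoints, neurons).
    return patterns[gesture] + rng.normal(
        scale=0.5, size=(n_trials, n_timepoints, n_neurons)
    )

data = {g: simulate(g) for g in gestures}

# Train a nearest-centroid decoder using only the FIRST timepoint...
centroids = {g: data[g][:, 0, :].mean(axis=0) for g in gestures}

def decode(pop_vector):
    # Pick the gesture whose centroid is closest to this population vector.
    return min(centroids, key=lambda g: np.linalg.norm(pop_vector - centroids[g]))

# ...and test it at the LAST timepoint. Because the code is static,
# the same decoder still works far from its training timepoint.
correct = sum(
    decode(data[g][trial, -1, :]) == g
    for g in gestures
    for trial in range(n_trials)
)
accuracy = correct / (len(gestures) * n_trials)
print(f"cross-time decoding accuracy: {accuracy:.2f}")
```

Under a dynamic code, by contrast, the patterns would change over time and a decoder trained at one timepoint would fail at another; that generalization across time is what distinguishes the static code Ianni describes.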