Plant cells are surrounded by an intricately structured protective coat called the cell wall. It’s built of cellulose microfibrils intertwined with polysaccharides such as hemicellulose and pectin. We know what plant cells look like without their walls, and we know what they look like once the walls are fully assembled, but until now no one had seen the wall-building process in action. “We knew the starting point and the finishing point, but had no idea what happens in between,” says Eric Lam, a plant biologist at Rutgers University. He’s a co-author of the study that caught wall-building plant cells in action for the first time. And now that we can watch it happen, wall-building looks nothing like the diagrams in biology textbooks.
Camera-shy builders
Plant cells without walls, known as protoplasts, are very fragile, and it has been difficult to keep them alive under a microscope for the several hours needed for them to build walls. Plant cells are also very light-sensitive, and most microscopy techniques require pointing a strong light source at them to get good imagery.
Then there was the issue of tracking their progress. “Cellulose is not fluorescent, so you can’t see it with traditional microscopy,” says Shishir Chundawat, a biologist at Rutgers. “That was one of the biggest issues in the past.” The only way to see it is to attach a fluorescent marker to it. Unfortunately, the markers typically used to label cellulose either bound to other compounds or were toxic to the plant cells. Already fragile and light-sensitive, the cells simply couldn’t survive for long once toxic markers were added.
Elon Musk tweeted back in October 2021 that “humans drive with eyes and biological neural nets, so cameras and silicon neural nets are only way to achieve generalized solution to self-driving.” The problem with his logic is that human eyes are far better than RGB cameras at detecting fast-moving objects and estimating distances. Our brains also surpass artificial neural nets by a wide margin at general processing of visual inputs.
To bridge this gap, a team of scientists at the University of Zurich developed a new automotive object-detection system that brings digital camera performance much closer to that of human eyes. “Unofficial sources say Tesla uses multiple Sony IMX490 cameras with 5.4-megapixel resolution that [capture] up to 45 frames per second, which translates to perceptual latency of 22 milliseconds. Comparing [these] cameras alone to our solution, we already see a 100-fold reduction in perceptual latency,” says Daniel Gehrig, a researcher at the University of Zurich and lead author of the study.
Replicating human vision
When a pedestrian suddenly jumps in front of your car, multiple things have to happen before a driver-assistance system initiates emergency braking. First, the pedestrian must be captured in images taken by a camera. The time this takes is called perceptual latency: the delay between the existence of a visual stimulus and its appearance in the readout from a sensor. Then the readout needs to reach a processing unit, which adds a network latency of around 4 milliseconds.
The processing needed to classify the image of a pedestrian takes further precious milliseconds. Once that is done, the detection goes to a decision-making algorithm, which takes some time to decide to hit the brakes; all of this processing is known as computational latency. Overall, the reaction time is anywhere from 0.1 to 0.5 seconds. A pedestrian running at 12 km/h would travel between 0.3 and 1.7 meters in that time. Your car, if you’re driving 50 km/h, would cover 1.4 to 6.9 meters. In a close-range encounter, this means you’d most likely hit them.
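For readers who want to check those distances, here is a minimal Python sketch of the arithmetic. The speeds and the 0.1 to 0.5 second reaction window come from the figures above; the function name is just illustrative, and constant straight-line speed is assumed.

```python
# Back-of-the-envelope check of the distances quoted above, assuming
# constant speed in a straight line during the reaction window.

def distance_covered(speed_kmh: float, reaction_time_s: float) -> float:
    """Distance in meters covered at a constant speed during the reaction time."""
    speed_ms = speed_kmh / 3.6  # convert km/h to m/s
    return speed_ms * reaction_time_s

for label, speed_kmh in [("pedestrian at 12 km/h", 12), ("car at 50 km/h", 50)]:
    low = distance_covered(speed_kmh, 0.1)
    high = distance_covered(speed_kmh, 0.5)
    print(f"{label}: {low:.1f} to {high:.1f} m")

# pedestrian at 12 km/h: 0.3 to 1.7 m
# car at 50 km/h: 1.4 to 6.9 m
```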
Gehrig and Davide Scaramuzza, a professor at the University of Zurich and a co-author on the study, aimed to shorten those reaction times by bringing the perceptual and computational latencies down.
The most straightforward way to lower the former would be to use standard high-speed cameras that simply register more frames per second. But even with a 30-45 fps camera, a self-driving car would generate nearly 40 terabytes of data per hour. Fitting something that would significantly cut the perceptual latency, like a 5,000 fps camera, would overwhelm a car’s onboard computer in an instant, and the computational latency would go through the roof.
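To get a feel for where a figure like that comes from, here is a rough, assumption-laden estimate. The camera count, bit depth, and frame rate below are illustrative guesses, not numbers from the study.

```python
# Rough raw-data estimate for a frame-based camera rig; all values are assumptions.
CAMERAS = 8            # assumed number of cameras on the vehicle
PIXELS = 5.4e6         # 5.4-megapixel sensors, as in the IMX490 mentioned above
BYTES_PER_PIXEL = 3    # assumed raw readout depth
FPS = 45               # upper end of the 30-45 fps range

bytes_per_second = CAMERAS * PIXELS * BYTES_PER_PIXEL * FPS
terabytes_per_hour = bytes_per_second * 3600 / 1e12
print(f"~{terabytes_per_hour:.0f} TB of raw image data per hour")  # ~21 TB
```

Even with these conservative guesses, the output lands in the tens of terabytes per hour, the same order of magnitude as the figure above, and a 5,000 fps sensor would multiply it roughly a hundredfold.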
So, the Swiss team used something called an “event camera,” which mimics the way biological eyes work. “Compared to a frame-based video camera, which records dense images at a fixed frequency—frames per second—event cameras contain independent smart pixels that only measure brightness changes,” explains Gehrig. Each of these pixels starts with a set brightness level. When the change in brightness exceeds a certain threshold, the pixel registers an event and sets a new baseline brightness level. All the pixels in the event camera are doing that continuously, with each registered event manifesting as a point in an image.
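The per-pixel logic Gehrig describes can be sketched in a few lines of Python. This is a minimal simulation based on the standard event-camera pixel model (a brightness change measured against a stored per-pixel baseline and compared to a contrast threshold), not the Zurich team’s code; the threshold value and function name are assumptions, and frames are used here only to feed the simulation, whereas a real event camera does this asynchronously in hardware with no frames at all.

```python
import numpy as np

C = 0.2  # contrast threshold (illustrative value)

def events_from_frame(log_frame: np.ndarray, baseline: np.ndarray, t: float):
    """Compare a new log-brightness frame against each pixel's stored baseline.

    Returns a list of (x, y, t, polarity) events and the updated baselines.
    """
    diff = log_frame - baseline
    fired = np.abs(diff) >= C                    # pixels whose change crossed the threshold
    ys, xs = np.nonzero(fired)
    polarity = np.sign(diff[fired]).astype(int)  # +1 got brighter, -1 got darker
    events = [(x, y, t, p) for x, y, p in zip(xs, ys, polarity)]
    new_baseline = baseline.copy()
    new_baseline[fired] = log_frame[fired]       # fired pixels reset their baseline
    return events, new_baseline
```

Note that a completely static scene produces no events at all, which is exactly the weakness described next.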
This makes event cameras particularly good at detecting high-speed movement and allows them to do so using far less data. The problem with putting them in cars is that they have trouble detecting things that move slowly or don’t move at all relative to the camera. To solve that, Gehrig and Scaramuzza went for a hybrid system that combines an event camera with a traditional one.