
Man got $2,500 whole-body MRI that found no problems—then had massive stroke

A New York man is suing Prenuvo, a celebrity-endorsed whole-body magnetic resonance imaging (MRI) provider, claiming that the company missed clear signs of trouble in his $2,500 whole-body scan—and if it hadn’t, he could have acted to avert the catastrophic stroke he suffered months later.

Sean Clifford and his legal team claim that his scan on July 15, 2023, showed a 60 percent narrowing and irregularity in a major artery in his brain—the proximal right middle cerebral artery, a branch of the most common artery involved in acute strokes. But Prenuvo’s reviews of the scan did not flag the finding and otherwise reported that everything in his brain looked normal; there was “no adverse finding.” (You can read Prenuvo’s report and see Clifford’s subsequent imaging here.)

Clifford suffered a massive stroke on March 7, 2024. Subsequent imaging found that the narrowing in the proximal right middle cerebral artery had progressed to a complete blockage, causing the stroke. Clifford suffered paralysis of his left hand and leg, general weakness on his left side, vision loss and permanent double vision, anxiety, depression, mood swings, cognitive deficits, speech problems, and permanent difficulties with all daily activities.

He filed his lawsuit against Prenuvo in September 2024 in the New York State Supreme Court. In the lawsuit, he argues that if he had known of the problem, he could have undergone stenting or other minimally invasive measures to prevent the stroke.

Ongoing litigation

In the legal proceedings since, Prenuvo, a California-based company, has tried to limit the damages that Clifford could seek, first by trying to force arbitration and then by trying to apply California laws to the New York case, as California law caps malpractice damages. The company failed on both counts. In a December ruling, a judge also denied Prenuvo’s attempts to shield the radiologist who reviewed Clifford’s scan, William A. Weiner, DO, of East Rockaway, New York.

Notably, Weiner has had his medical license suspended in connection with an auto insurance scheme, in which Weiner was accused of falsifying findings on MRI scans.


We have the first video of a plant cell wall being built

Plant cells are surrounded by an intricately structured protective coat called the cell wall. It’s built of cellulose microfibrils intertwined with polysaccharides like hemicellulose or pectin. We know what plant cells look like without their walls, and we know what they look like when the walls are fully assembled, but we’ve never seen the wall-building process in action. “We knew the starting point and the finishing point, but had no idea what happens in between,” says Eric Lam, a plant biologist at Rutgers University. He’s a co-author of the study that caught wall-building plant cells in action for the first time. And once they saw how the cell wall building worked, it looked nothing like the diagrams in biology textbooks.

Camera-shy builders

Plant cells without walls, known as protoplasts, are very fragile, and it has been difficult to keep them alive under a microscope for the several hours needed for them to build walls. Plant cells are also very light-sensitive, and most microscopy techniques require pointing a strong light source at them to get good imagery.

Then there was the issue of tracking their progress. “Cellulose is not fluorescent, so you can’t see it with traditional microscopy,” says Shishir Chundawat, a biologist at Rutgers. “That was one of the biggest issues in the past.” The only way to see it is to attach a fluorescent marker to it. Unfortunately, the markers typically used to label cellulose were either bound to other compounds or were toxic to the plant cells. Given the cells’ fragility and light sensitivity, they simply couldn’t survive very long when toxic markers were added on top of that.


New camera design can ID threats faster, using less memory

Image out the windshield of a car, with other vehicles highlighted by computer-generated brackets.

Elon Musk, back in October 2021, tweeted that “humans drive with eyes and biological neural nets, so cameras and silicon neural nets are only way to achieve generalized solution to self-driving.” The problem with his logic has been that human eyes are way better than RGB cameras at detecting fast-moving objects and estimating distances. Our brains have also surpassed all artificial neural nets by a wide margin at general processing of visual inputs.

To bridge this gap, a team of scientists at the University of Zurich developed a new automotive object-detection system that brings digital camera performance much closer to that of human eyes. “Unofficial sources say Tesla uses multiple Sony IMX490 cameras with 5.4-megapixel resolution that [capture] up to 45 frames per second, which translates to perceptual latency of 22 milliseconds. Comparing [these] cameras alone to our solution, we already see a 100-fold reduction in perceptual latency,” says Daniel Gehrig, a researcher at the University of Zurich and lead author of the study.

Replicating human vision

When a pedestrian suddenly jumps in front of your car, multiple things have to happen before a driver-assistance system initiates emergency braking. First, the pedestrian must be captured in images taken by a camera. The time this takes is called perceptual latency—the delay between the existence of a visual stimulus and its appearance in the readout from a sensor. Then, the readout needs to get to a processing unit, which adds a network latency of around 4 milliseconds.

The processing to classify the image of a pedestrian takes further precious milliseconds. Once that is done, the detection goes to a decision-making algorithm, which takes some time to decide to hit the brakes—all this processing is known as computational latency. Overall, the reaction time is anywhere from 0.1 to 0.5 seconds. If the pedestrian is running at 12 km/h, they would travel between 0.3 and 1.7 meters in this time. Your car, if you’re driving 50 km/h, would cover 1.4 to 6.9 meters. In a close-range encounter, this means you’d most likely hit them.
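The distances above follow directly from the quoted speeds and reaction times. A quick sanity check of the arithmetic (values taken straight from the article; the helper function name is just illustrative):

```python
def distance_m(speed_kmh: float, time_s: float) -> float:
    """Distance covered at a constant speed during a reaction window."""
    return speed_kmh / 3.6 * time_s  # km/h -> m/s, then multiply by time

# Reaction times of 0.1 s and 0.5 s, pedestrian at 12 km/h, car at 50 km/h
for t in (0.1, 0.5):
    ped = distance_m(12, t)
    car = distance_m(50, t)
    print(f"reaction {t:.1f} s: pedestrian {ped:.1f} m, car {car:.1f} m")
# reaction 0.1 s: pedestrian 0.3 m, car 1.4 m
# reaction 0.5 s: pedestrian 1.7 m, car 6.9 m
```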

Gehrig and Davide Scaramuzza, a professor at the University of Zurich and a co-author on the study, aimed to shorten those reaction times by bringing the perceptual and computational latencies down.

The most straightforward way to lower the former would be to use standard high-speed cameras that simply register more frames per second. But even with a 30–45 fps camera, a self-driving car would generate nearly 40 terabytes of data per hour. Fitting something that would significantly cut the perceptual latency, like a 5,000 fps camera, would overwhelm a car’s onboard computer in an instant—the computational latency would go through the roof.

So, the Swiss team used something called an “event camera,” which mimics the way biological eyes work. “Compared to a frame-based video camera, which records dense images at a fixed frequency—frames per second—event cameras contain independent smart pixels that only measure brightness changes,” explains Gehrig. Each of these pixels starts with a set brightness level. When the change in brightness exceeds a certain threshold, the pixel registers an event and sets a new baseline brightness level. All the pixels in the event camera are doing that continuously, with each registered event manifesting as a point in an image.
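The per-pixel behavior Gehrig describes can be sketched in a few lines. This is a minimal illustrative model, not the actual camera firmware; the class name, threshold value, and +1/−1 event encoding are assumptions for the sake of the example:

```python
class EventPixel:
    """Toy model of one 'smart pixel' in an event camera."""

    def __init__(self, baseline: float, threshold: float = 0.2):
        self.baseline = baseline      # current reference brightness
        self.threshold = threshold    # change needed to trigger an event

    def observe(self, brightness: float):
        """Emit +1/-1 when the brightness change crosses the threshold
        (and reset the baseline); emit nothing (None) otherwise."""
        delta = brightness - self.baseline
        if abs(delta) >= self.threshold:
            self.baseline = brightness          # set new reference level
            return 1 if delta > 0 else -1
        return None                             # static scene: no data sent

pixel = EventPixel(baseline=0.5)
print([pixel.observe(b) for b in (0.55, 0.8, 0.82, 0.5)])
# [None, 1, None, -1]
```

Note that small fluctuations and unchanging brightness produce no output at all, which is why an event camera transmits far less data than a frame-based one.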

This makes event cameras particularly good at detecting high-speed movement and allows them to do so using far less data. The problem with putting them in cars has been that they had trouble detecting things that moved slowly or didn’t move at all relative to the camera. To solve that, Gehrig and Scaramuzza went for a hybrid system, where an event camera was combined with a traditional one.
