You’re bombarded with sensory information every day — sights, sounds, smells, touches and tastes. A constant barrage that your brain has to manage, deciding which information to trust or which sense to use as a backup when another fails. Understanding how the brain evaluates and juggles all this input could be the key to designing better therapies for patients recovering from stroke, nerve injuries, or other conditions. It could also help engineers build more realistic virtual experiences for everyone from gamers to fighter pilots to medical patients.
Now, some researchers are using virtual reality (VR) and even robots to learn how the brain pulls off this juggling act.
Do You Believe Your Eyes?
At the University of Reading in the U.K., psychologist Peter Scarfe and his team are currently exploring how the brain combines information from touch, vision, and proprioception – our sense of where our body is positioned – to form a clear idea of where objects are in space.
Generally, the brain goes with whichever sense is more reliable at the time. For instance, in a dark room, touch and proprioception trump vision. But when there’s plenty of light, you’re more likely to believe your eyes. Part of what Scarfe’s crew hopes to eventually unravel is how the brain combines information from both senses and whether that combination is more accurate than touch or sight alone. Does the brain trust input from one sense and ignore the other, does it split the difference between the two, or does it do something more complex?
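One standard answer from the cue-combination literature is that the brain acts like a reliability-weighted averager, giving each sense a say in proportion to how trustworthy it is at that moment. Whether that is what actually happens here is the open question, but a minimal sketch of the idea, with made-up numbers rather than anything from the study, looks like this:

```python
import numpy as np

def combine_cues(x_vision, sigma_vision, x_haptic, sigma_haptic):
    """Reliability-weighted average of two noisy position estimates.

    Each cue is weighted by its reliability (1 / variance), so the less
    noisy sense dominates the combined estimate. This is the textbook
    maximum-likelihood model, not necessarily the one Scarfe's team favors.
    """
    w_vision = 1.0 / sigma_vision**2
    w_haptic = 1.0 / sigma_haptic**2
    x_combined = (w_vision * x_vision + w_haptic * x_haptic) / (w_vision + w_haptic)
    # The combined estimate is predicted to be less noisy than either cue alone.
    sigma_combined = np.sqrt(1.0 / (w_vision + w_haptic))
    return x_combined, sigma_combined

# Hypothetical: vision says the ball is 2 mm above the plane but is noisy;
# touch says 0 mm and is twice as precise.
print(combine_cues(x_vision=2.0, sigma_vision=4.0, x_haptic=0.0, sigma_haptic=2.0))
# -> (0.4, ~1.79): the answer leans toward touch and is more precise than either sense alone.
```

In this toy case the combined guess sides mostly with the more precise haptic cue, and the math predicts it should be less noisy than either sense on its own – exactly the kind of advantage the experiments are designed to detect.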
To find out, the team is using a VR headset and a robot called Haptic Master.
While volunteers wear the VR headset, they see four virtual balls – three in a triangle formation and one in the center. They can also reach out and touch four real spheres that appear in the same place as the ones they see in VR: the three in the triangle formation are just plastic and never move, but the fourth is actually a ball bearing at the end of Haptic Master’s robot arm. Researchers use the robot to move this fourth ball between repetitions of the test. Think of the three-ball triangle as a flat plane in space. The participant has to decide whether the fourth ball is higher or lower than the level of that triangle.
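In geometric terms, the judgment comes down to which side of that plane the fourth ball sits on. Here is a rough sketch of the calculation, with invented coordinates rather than anything taken from the experiment:

```python
import numpy as np

def height_relative_to_plane(triangle, probe):
    """Signed height of the probe ball relative to the plane of three reference balls.

    Positive means above the plane, negative means below. Coordinates here are
    invented; in the experiment they would come from the VR scene and the robot.
    """
    a, b, c = (np.asarray(p, dtype=float) for p in triangle)
    normal = np.cross(b - a, c - a)       # vector perpendicular to the triangle's plane
    if normal[2] < 0:                     # orient the normal to point upward
        normal = -normal
    normal /= np.linalg.norm(normal)      # unit length, so the result is a distance
    return float(np.dot(np.asarray(probe, dtype=float) - a, normal))

triangle = [(0, 0, 0), (10, 0, 0), (5, 8, 0)]             # reference balls all at height 0
print(height_relative_to_plane(triangle, (5, 3, 1.5)))    #  1.5 -> above the plane
print(height_relative_to_plane(triangle, (5, 3, -1.5)))   # -1.5 -> below the plane
```

The participant never does this arithmetic, of course; the point is that there is a single correct answer for the senses to converge on.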
It’s a task that requires the brain to weigh and combine information from multiple senses to decide where the fourth ball is in relation to the other three. Participants get visual cues about the ball’s location through the VR headset, but they also use their haptic sense – the combination of touch and proprioception – to feel where the ball is in space.
The VR setup makes it easier to control the visual input and make sure volunteers aren’t using other cues, like the location of the robot arm or other objects in the room, to make their decisions.
Collectively, volunteers have performed this task hundreds of times. Scarfe and his colleagues are looking at how accurately participants judged the ball’s position when they used only their eyes, only their haptic sense, or both senses at once. The team is then comparing those results to several computer models, each of which predicts how a person would estimate the ball’s position if the brain combined the sensory information in a particular way.
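To make the comparison concrete – with invented numbers rather than the team’s data – suppose each condition yields a discrimination threshold, a measure of how finely someone can tell “above” from “below.” Different models then make different predictions for the two-sense condition:

```python
import numpy as np

# Hypothetical discrimination thresholds (in mm) from each condition;
# the real values would come from the participants' judgments.
sigma_vision_only = 4.0
sigma_haptic_only = 3.0
sigma_both_observed = 2.6

# Model 1: ignore one sense and rely on the single most reliable cue.
pred_best_single = min(sigma_vision_only, sigma_haptic_only)

# Model 2: reliability-weighted combination predicts better precision
# than either cue on its own.
pred_optimal = np.sqrt(1.0 / (1.0 / sigma_vision_only**2 + 1.0 / sigma_haptic_only**2))

print(f"best single cue:       {pred_best_single:.2f} mm")   # 3.00
print(f"optimal combination:   {pred_optimal:.2f} mm")       # 2.40
print(f"observed, both senses: {sigma_both_observed:.2f} mm")
```

If performance with both senses beats the better single cue and approaches the reliability-weighted prediction, that favors genuine combination; if it merely matches the better cue, the brain may simply be ignoring the other sense.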
The team still needs more data before it can say which model best describes how the brain combines sensory cues. But the researchers say that their results, and those of others working in the field, could one day inform the design of more accurate haptic feedback, which would make interacting with objects in virtual reality feel more realistic.
On Shaky Footing
Anat Zubetzky, a physical therapy researcher at New York University, is also turning to VR. She uses the burgeoning technology to study how our brains weigh different sensory inputs to keep us steady when things get shaky – specifically, whether people rely on their sense of proprioception or on their vision to keep their balance.
Conventional wisdom in sports medicine says that standing on an uneven surface is a good proprioception workout for patients in rehabilitation after an injury. That’s because it forces your somatosensory system, the nerves involved in proprioception, to work harder. So if your balance is suffering because of nerve damage, trying to stabilize yourself while standing on an uneven surface, like a bosu ball, should help.
But Zubetzky’s results tell a different story.
In the lab, Zubetzky’s subjects strap on VR headsets and stand on either a solid floor or an unsteady surface, like a wobble board. She displays subtly moving dots in the headset and uses a pressure pad on the floor to measure how participants’ bodies sway.
It turns out, when people stand on an unstable surface, they’re more likely to sway in time with the moving dots. But on a stable surface, they seem to pay less attention to the dots.
So rather than working their somatosensory systems harder, people seem to use their vision to find a fixed reference point that helps keep them balanced. In other words, the brain shifts its trust from a less reliable sense to a more reliable one, a process called sensory reweighting.
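One way to put a number on that reliance on vision – purely as an illustration, not Zubetzky’s actual analysis – is to ask how closely a person’s sway tracks the motion of the dots. A minimal sketch, assuming the pressure-pad sway and the dot motion have been reduced to two aligned time series:

```python
import numpy as np

def visual_coupling(dot_motion, body_sway):
    """Correlation between the dots' motion and a person's sway.

    A high value suggests heavy reliance on vision for balance; a value near
    zero suggests vision is being largely ignored. This is only a cartoon of
    the idea, not the actual analysis pipeline.
    """
    return float(np.corrcoef(np.asarray(dot_motion, dtype=float),
                             np.asarray(body_sway, dtype=float))[0, 1])

# Toy data: sway that loosely tracks the dots vs. sway that ignores them.
rng = np.random.default_rng(0)
dots = np.sin(np.linspace(0, 10, 500))                  # slowly oscillating dot motion
tracking_sway = dots + 0.5 * rng.standard_normal(500)   # person swaying with the dots
ignoring_sway = rng.standard_normal(500)                # person anchored to something else

print(visual_coupling(dots, tracking_sway))   # high: sways with the dots
print(visual_coupling(dots, ignoring_sway))   # near zero: ignores the dots
```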
Ultimately, Zubetzky hopes her VR setup could help measure how heavily a patient with somatosensory damage relies on vision. That knowledge, in turn, could help gauge the severity of the problem so doctors can design a better treatment plan.
As VR gets more realistic and more immersive – partly thanks to experiments like these – it could offer researchers an even more refined tool for picking apart what’s going on in the brain.
Says Zubetzky, “It’s been a pretty amazing revolution.”