A new method to manipulate muscle activity in a patient

A brain-machine interface (BMI) that could turn an ordinary smartphone into a virtual version of a brain is now being developed by scientists at UC Berkeley. 

The Berkeley team, which says it is the first to use neural networks to manipulate the way neurons work in the brain, believes the system could be used to treat a wide range of neurological disorders, including schizophrenia and epilepsy. 

The system uses a combination of hardware and software to generate electrical signals and map them into 3D space, where the brain interprets them as signals that can be modulated by brain-controlled software. 

“It’s kind of like an optical illusion,” said Michael Mardis, one of the authors of a paper in Nature Neuroscience describing the new device.

“It’s like seeing a light show through a lens.

It’s a lot like looking at a light on a cloudy day: you can’t tell the difference between the two.” 

To achieve this, the Berkeley team turned to a technique called phase-selective enhancement: a neural network of the kind used in machine learning learns the pattern of an electrical signal at a specific location in the network, and that learned pattern is then applied back to the same signal to achieve the desired effect. 
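
The paper's actual implementation isn't described in detail, but the basic idea can be pictured with a toy sketch like the one below: a simple model learns the pattern of a recorded signal at one location, and the learned pattern is added back, in phase, to enhance it. The signal, the sampling rate, the gain, and the use of a plain least-squares fit standing in for the team's neural network are all assumptions made for illustration.

```python
import numpy as np

# Toy "recording": a 10 Hz oscillation buried in noise at one location.
rng = np.random.default_rng(0)
fs = 1000                      # sampling rate (Hz), assumed
t = np.arange(0, 2.0, 1 / fs)  # 2 seconds of data
signal = 0.5 * np.sin(2 * np.pi * 10 * t)               # underlying rhythm
recording = signal + 0.5 * rng.standard_normal(t.size)  # what the sensor sees

# Learn the phase-locked pattern with a simple least-squares fit to a
# sine/cosine basis (a stand-in for the neural network in the article).
basis = np.column_stack([np.sin(2 * np.pi * 10 * t),
                         np.cos(2 * np.pi * 10 * t)])
coeffs, *_ = np.linalg.lstsq(basis, recording, rcond=None)
learned_pattern = basis @ coeffs   # estimate of the signal at this location

# "Phase-selective enhancement": add the learned pattern back, in phase,
# so the target rhythm is amplified relative to the noise.
gain = 1.0                         # assumed scaling factor
enhanced = recording + gain * learned_pattern

print("correlation with true rhythm, before:",
      round(float(np.corrcoef(recording, signal)[0, 1]), 3))
print("correlation with true rhythm, after: ",
      round(float(np.corrcoef(enhanced, signal)[0, 1]), 3))
```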

This technique has already been applied in many areas of neuroscience, including the visual cortex, which helps us recognize shapes and objects in our environment, and the cerebellar cortex, which controls movement. 

One of the most important benefits of the technique is that it can be used in situations where a neural network cannot directly process the electrical signals it is learning from, such as when it is trying to identify a target object. 

But to make the technique more practical, the team used functional magnetic resonance imaging (fMRI), which tracks changes in blood flow and oxygenation as a proxy for neural activity, and used those recordings to train the system. 

Using fMRI, the researchers could map out the activity of individual neurons, which could then be fed into a computer to generate a 3D map of the network. 
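
The study's own pipeline isn't public, but the mapping step can be sketched in a few lines: given a hypothetical 4D recording (three spatial dimensions plus time) and a stimulus time course, correlating each voxel's time series with the stimulus yields a 3D activity map. Every array shape and value below is an assumption, not the team's data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical recording: a small 8x8x8 volume sampled over 120 time points.
volume = rng.standard_normal((8, 8, 8, 120))

# Hypothetical stimulus time course (on/off blocks) to compare the activity to.
stimulus = np.tile(np.repeat([0.0, 1.0], 10), 6)          # 120 samples

# Inject a response into one region so the map has something to find.
volume[2:4, 2:4, 2:4, :] += 0.8 * stimulus

# Correlate every voxel's time series with the stimulus to get a 3D map.
v = volume - volume.mean(axis=-1, keepdims=True)
s = stimulus - stimulus.mean()
activity_map = (v @ s) / (np.linalg.norm(v, axis=-1) * np.linalg.norm(s) + 1e-12)

print("map shape:", activity_map.shape)                    # (8, 8, 8)
print("most active voxel:", np.unravel_index(activity_map.argmax(),
                                             activity_map.shape))
```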

In their experiments, the network learned to generate the shape of a hand, which it then used to build an image of a finger; that model in turn drove a robotic arm of the kind previously used to help surgeons remove a tumor. 

With the new method, the scientists were able to map out how different parts of the neural networks worked together, and how those interactions produce a neural map of a given shape. This allowed them to identify which parts of each neuron fired in response to different signals, and which did not. 
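
A rough sketch of this kind of analysis, under assumed data: record firing rates for a set of units under several input signals, average each unit's response per signal, and flag the units whose preferred signal drives them well above their overall average. The unit counts, signal labels, and selectivity threshold are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

n_units, n_trials = 20, 60
signal_labels = rng.integers(0, 3, size=n_trials)   # three hypothetical input signals

# Hypothetical firing rates: most units respond alike, a few prefer one signal.
rates = rng.poisson(5.0, size=(n_trials, n_units)).astype(float)
preferred = {0: 0, 5: 1, 12: 2}                      # unit -> signal it prefers
for unit, sig in preferred.items():
    rates[signal_labels == sig, unit] += 6.0

# Mean response of every unit to every signal.
tuning = np.stack([rates[signal_labels == s].mean(axis=0) for s in range(3)])

# A unit is "selective" if its best signal drives it well above its average.
selectivity = tuning.max(axis=0) - tuning.mean(axis=0)
selective_units = np.where(selectivity > 2.0)[0]

print("tuning matrix shape:", tuning.shape)          # (signals, units)
print("selective units:", selective_units)           # expected: [0, 5, 12]
```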

After learning this mapping, the neural nets could be used as part of a supervised machine-learning algorithm to create an image representing the robot’s hand. 
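
A minimal sketch of such a supervised step, assuming per-trial activity vectors and a small pixel grid standing in for the hand image: fit a ridge regression (a simple stand-in for whatever supervised algorithm the team used) from activity to pixels, then reconstruct an image from a new activity pattern.

```python
import numpy as np

rng = np.random.default_rng(3)

n_trials, n_features = 200, 50        # hypothetical activity vectors per trial
img_h, img_w = 8, 8                   # tiny stand-in for the "hand image"

# Hypothetical ground truth: each activity feature contributes to some pixels.
mixing = rng.standard_normal((n_features, img_h * img_w)) * 0.3
activity = rng.standard_normal((n_trials, n_features))
images = activity @ mixing + 0.1 * rng.standard_normal((n_trials, img_h * img_w))

# Supervised step: ridge regression from activity to pixels (closed form).
lam = 1.0
A = activity
weights = np.linalg.solve(A.T @ A + lam * np.eye(n_features), A.T @ images)

# Reconstruct an image from a new activity pattern.
new_activity = rng.standard_normal(n_features)
reconstruction = (new_activity @ weights).reshape(img_h, img_w)

print("reconstruction shape:", reconstruction.shape)   # (8, 8)
```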

When the robot was placed in front of the patient, the system could show the patient how to pick up an object by modulating the electrical output of each electrode in the patient’s hand, and by determining which part of the robot worked in combination with each electrode to set the right direction of movement. 
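
One way to picture that last step is a toy decoder that maps a set of electrode readings to a movement direction. The nearest-centroid rule, the electrode count, and the synthetic readings below are all assumptions made for illustration, not the team's method.

```python
import numpy as np

rng = np.random.default_rng(4)

directions = ["left", "right", "up", "down"]
n_electrodes, trials_per_dir = 16, 40

# Hypothetical electrode outputs: each direction has its own mean pattern.
patterns = rng.standard_normal((len(directions), n_electrodes))
X, y = [], []
for d, pattern in enumerate(patterns):
    X.append(pattern + 0.5 * rng.standard_normal((trials_per_dir, n_electrodes)))
    y.extend([d] * trials_per_dir)
X, y = np.vstack(X), np.array(y)

# Nearest-centroid decoder: average pattern per direction, pick the closest.
centroids = np.stack([X[y == d].mean(axis=0) for d in range(len(directions))])

def decode(reading):
    """Return the direction whose average electrode pattern is closest."""
    return directions[int(np.argmin(np.linalg.norm(centroids - reading, axis=1)))]

# Decode a fresh reading generated from the "up" pattern.
test_reading = patterns[2] + 0.5 * rng.standard_normal(n_electrodes)
print("decoded direction:", decode(test_reading))       # expected: "up"
```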

To show that this was a realistic, accurate way to teach the system, it was then used in a series of experiments to manipulate different aspects of the patient’s movements, including pulling, bending, and gripping the object.

“It really took us back to the days before MRI,” said Mardis.

“The idea was that you can actually see these connections between neurons in the same way you can see the connections between cells in the retina.” 

Using the new technique, the students were able to successfully modify the shape and position of a small object with a simple tool. 

However, they were also able to modify its behaviour using other tools, such as adding a second electrode to the robot, adding another wire to the sensor to generate additional signals, or adding a third electrode to make it harder to reach the object using only one hand.

The team hopes to use this technique to help develop new kinds of brain-based tools, and to create virtual environments in which patients could learn about their condition and interact with their robot, for example. 