
Our mushy brains may look nothing like the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952, “We are not interested in the fact that the brain has the consistency of cold porridge.” In other words, the medium doesn’t matter; only the computational ability does.
Today, the most powerful artificial intelligence systems employ a type of machine learning called deep learning. Their algorithms learn by processing huge amounts of data through hidden layers of interconnected nodes, known as deep neural networks. As the name implies, deep neural networks were inspired by the real neural networks in the brain, with nodes modeled on real neurons. Or at least on neuroscientists’ understanding of neurons in the 1950s, when an influential neuron model called the perceptron was born. Since then, our understanding of the computational complexity of single neurons has expanded dramatically, and biological neurons are known to be more complex than artificial ones. But how much more?
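The perceptron mentioned above is simple enough to state in a few lines of code. The sketch below is a generic illustration (the weights, inputs, and threshold are all made up): the unit multiplies each input by a weight, sums the results, and “fires” only if the sum crosses a threshold.

```python
def perceptron(inputs, weights, threshold):
    """Return 1 ("fire") if the weighted sum of inputs crosses the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Illustrative example: two excitatory inputs and one inhibitory input.
fires = perceptron(inputs=[1, 1, 1], weights=[0.6, 0.6, -0.4], threshold=1.0)
print(fires)  # 0.6 + 0.6 - 0.4 = 0.8 < 1.0, so the unit stays silent: prints 0
```

This single weighted-sum-and-threshold step is the entire computation of one artificial neuron; the question the study asks is how many of these it takes to match one biological neuron.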
To find out, David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of a single biological neuron.
Even the authors did not anticipate such complexity. “I thought it would be simpler and smaller,” said Beniaguev. He expected that three or four layers would be enough to capture the computations performed within the cell.
Timothy Lillicrap, who designs decision-making algorithms at the Google-owned AI company DeepMind, said the new result suggests that it might be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning. “This paper really helps us think about that more carefully and grapple with the question of how far we can take those analogies,” he said.
The most basic analogy between artificial and real neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send a signal of their own to other neurons. While artificial neurons rely on a simple calculation to make this decision, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs received by a biological neuron’s long treelike branches, called dendrites, and the neuron’s decision to send out a signal.
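One way to see why a biological input-output function resists a single weighted sum is a toy model (not the authors’ simulation) in which each dendritic branch first applies its own nonlinearity to its local inputs, and the cell body then sums the branch outputs before deciding whether to spike. Every number and function choice below is an illustrative assumption.

```python
import math

def branch_nonlinearity(x):
    # Each branch integrates its own inputs nonlinearly
    # (a sigmoid here, purely for illustration).
    return 1.0 / (1.0 + math.exp(-x))

def dendritic_neuron(branch_inputs, soma_threshold=1.5):
    """Toy input-output function: branches process their inputs locally,
    then the soma sums the branch outputs and decides whether to spike."""
    branch_outputs = [branch_nonlinearity(sum(b)) for b in branch_inputs]
    return 1 if sum(branch_outputs) >= soma_threshold else 0

# The same total input, distributed differently across two branches:
print(dendritic_neuron([[4], [0]]))  # all input on one branch: prints 0
print(dendritic_neuron([[2], [2]]))  # spread across branches: prints 1
```

The same total input produces different decisions depending on how it is distributed across branches, something no single weighted sum over the raw inputs can reproduce.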
It is this input-output function that the authors taught an artificial deep neural network to imitate in order to determine its complexity. They started by creating a massive simulation of the input-output function of a type of neuron from the rat cortex known as a pyramidal neuron, which has distinct trees of dendritic branches at its top and bottom. They then fed the simulation into a deep neural network with up to 256 artificial neurons in each layer, and kept increasing the number of layers until the network predicted the simulated neuron’s output from its input with 99% accuracy at the millisecond level. The deep neural network needed at least five, but no more than eight, artificial layers to successfully reproduce the behavior of the neuron’s input-output function. In most of the networks, that worked out to about 1,000 artificial neurons for a single biological neuron.
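The shape of such a stack can be sketched as below. This is a hedged stand-in, not the study’s model: the real network was trained on a large dataset of simulated input-output pairs and processed synaptic inputs over time, while this toy is fully connected, untrained, and static, and the input size of 128 is an arbitrary assumption.

```python
import math
import random

random.seed(0)

def make_layer(n_in, n_out):
    # Random, untrained weights, for illustration only.
    scale = (2.0 / n_in) ** 0.5
    return [[random.gauss(0.0, scale) for _ in range(n_in)] for _ in range(n_out)]

def relu(v):
    return [x if x > 0.0 else 0.0 for x in v]

def sigmoid(z):
    # Numerically stable logistic function.
    return 1.0 / (1.0 + math.exp(-z)) if z >= 0 else math.exp(z) / (1.0 + math.exp(z))

def forward(x, layers):
    for W in layers[:-1]:
        x = relu([sum(w * xi for w, xi in zip(row, x)) for row in W])
    # Final layer: one unit whose output is squashed into a spike probability.
    return sigmoid(sum(w * xi for w, xi in zip(layers[-1][0], x)))

N_INPUTS = 128  # stand-in for the many synaptic inputs a real neuron receives
WIDTH = 256     # up to 256 artificial neurons per layer, as in the study
DEPTH = 7       # within the five-to-eight-layer range the authors report

layers = [make_layer(N_INPUTS, WIDTH)]
layers += [make_layer(WIDTH, WIDTH) for _ in range(DEPTH - 2)]
layers += [make_layer(WIDTH, 1)]

# One millisecond's worth of made-up synaptic activity: 1.0 = an input spike.
synaptic_input = [random.choice([0.0, 1.0]) for _ in range(N_INPUTS)]
p_spike = forward(synaptic_input, layers)
print(0.0 <= p_spike <= 1.0)  # prints True: the output is a spike probability
```

A seven-layer stack of 256-unit layers like this one contains on the order of a thousand artificial neurons, which is the rough exchange rate the study reports for one biological neuron.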
Neuroscientists know that the computational complexity of a single neuron, like the pyramidal neuron shown on the left, relies on its treelike dendritic branches, which are bombarded with incoming signals. These cause local voltage changes, represented by the neuron’s changing colors (red means high voltage, blue means low voltage), before the neuron decides whether to send out its own signal, called a “spike.” This neuron spikes three times, as shown by the traces of individual branches on the right, where the color indicates the dendrites’ position from top (red) to bottom (blue).
Video: David Beniaguev
“[The result] …” said Andreas Tolias, a computational neuroscientist at Baylor College of Medicine.
However, the study’s authors caution that this is not yet a straightforward correspondence. “The relationship between how many layers you have in a neural network and the complexity of the network is not obvious,” said London. So it is not really clear, for example, how much complexity is gained by going from four layers to five. Nor does the need for 1,000 artificial neurons mean that a biological neuron is exactly 1,000 times as complex. Ultimately, it is even possible that using exponentially many artificial neurons within a single layer would yield a deep neural network with only one layer, but it would likely require far more data and time for the algorithm to learn.
How complex is a single neuron computationally? - Texasnewstoday.com