
The brain inspires a new type of AI algorithm

Machine learning, developed 70 years ago, was inspired by learning dynamics in the human brain. Aided by fast, large-scale computers and giant data sets, deep learning algorithms have been able to match human specialists in various areas. However, they produce results that diverge from the current understanding of learning in neuroscience.

Using advanced experiments on neuronal cultures and large-scale simulations, a team of scientists at Bar-Ilan University in Israel has demonstrated a new kind of high-speed artificial intelligence algorithm. Although based on the slow dynamics of the brain, it exceeds the learning rates attained to date by state-of-the-art learning algorithms. The paper has been published in Scientific Reports.

The study's lead author, Prof. Ido Kanter of Bar-Ilan University's Department of Physics and Gonda (Goldschmied) Multidisciplinary Brain Research Center, said that neurobiology and machine learning have until now been treated as separate disciplines that progressed independently, and that the absence of an expected reciprocal influence is puzzling.

He added that the brain processes data more slowly than the first computer, invented over 70 years ago, and that the number of neurons in a brain is smaller than the number of bits on a typical modern computer disk. Prof. Kanter, whose research team includes Herut Uzan, Shira Sardi, Amir Goldental, and Roni Vardi, noted that the brain's learning rules are very complex and far removed from the principles underlying today's artificial intelligence algorithms. Because a biological system must deal with asynchronous inputs, brain dynamics do not follow a well-defined clock that synchronizes the nerve cells.

A key difference between artificial intelligence algorithms and the human brain is the nature of the inputs they handle. The brain deals with asynchronous inputs, where the relative positions of objects and their temporal ordering matter, for example when identifying cars, pedestrians, and road signs while driving. Typical AI algorithms, by contrast, deal with synchronous inputs in which relative timing is ignored.
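The distinction can be sketched in code. A synchronous learner sees a fixed batch in which timing carries no information, while an asynchronous consumer receives timestamped events whose order and spacing matter. This is only an illustrative sketch; the event labels and values are invented for the example.

```python
# Synchronous input: a fixed batch, relative timing discarded.
batch = [0.2, 0.9, 0.4]          # all values presented "at once"
batch_mean = sum(batch) / len(batch)

# Asynchronous input: timestamped events; ordering and gaps carry meaning.
events = [(0.00, "pedestrian"), (0.15, "car"), (0.90, "road_sign")]

# A timing-aware consumer can react to the intervals between events,
# information that simply does not exist in the synchronous batch above.
gaps = [t2 - t1 for (t1, _), (t2, _) in zip(events, events[1:])]
print(batch_mean, gaps)
```

A synchronous model could shuffle `batch` without changing anything it learns; shuffling `events` would destroy the temporal structure the brain-like system relies on.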

Recent studies have found that such ultrafast learning rates are, unexpectedly, nearly identical for small and large networks, so the apparent disadvantage of the brain's complicated learning system is in fact an advantage. Another important finding is that learning can occur without discrete learning steps, through self-adaptation to asynchronous inputs. As was recently observed experimentally, this kind of learning-without-learning takes place in the dendrites, the branched terminals of each neuron.
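The idea of self-adaptation without explicit training steps can be caricatured as a weight that simply drifts toward whatever input it has recently seen. This is not the Bar-Ilan team's algorithm, just a minimal toy, with an invented adaptation rate, showing continuous adaptation driven by a stream of asynchronous inputs rather than by a separate training phase.

```python
import random

# Toy self-adaptation: a single weight drifts toward recent inputs,
# with no explicit training step and no global clock. NOT the paper's
# method -- only a sketch of "learning without learning steps".
random.seed(0)

weight = 0.0
rate = 0.1                             # invented adaptation rate
for _ in range(200):
    event = random.uniform(0.8, 1.2)   # asynchronous input around 1.0
    weight += rate * (event - weight)  # adapt continuously toward the input

print(round(weight, 2))
```

After enough events, the weight settles near the typical input value, even though no loss function was ever defined and no "training step" was ever run.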

The concept of efficient deep learning algorithms based on the brain's very slow dynamics opens the possibility of a new class of advanced artificial intelligence built on fast computation, bridging the gap between neurobiology and AI. The researchers conclude that the principles of the brain must once again be placed at the center of artificial intelligence.

Journal Reference: Scientific Reports


The Godfathers of AI receive the prestigious Turing Award

The 2018 Turing Award, known as the “Nobel Prize of computing,” has been awarded to a trio of researchers who laid the foundations for the current boom in artificial intelligence.

The term artificial intelligence refers broadly to intelligence demonstrated by computers. The field is renowned for its cycles of boom and bust, and the problem of hype is as old as the field itself. When research fails to meet inflated expectations, funding and interest freeze in what is known as an “AI winter”. It was at the tail end of one such winter, in the late 1980s, that Bengio, Hinton, and LeCun began exchanging ideas and working on interconnected problems, including neural networks: computer programs built from connected digital neurons that have become a key building block of contemporary AI.
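A “digital neuron” of the kind the trio worked on is simple to write down: it takes a weighted sum of its inputs and passes the result through an activation function, and connecting a few of them yields a minimal network. The weights and values below are invented purely for illustration.

```python
import math

def sigmoid(x):
    """Squash any real number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    """One digital neuron: weighted sum plus bias, then activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# Two hidden neurons feeding one output neuron form a tiny network.
# All weights here are arbitrary; in deep learning they are learned
# from data rather than chosen by hand.
hidden = [neuron([1.0, 0.0], [0.5, -0.5], 0.1),
          neuron([1.0, 0.0], [-0.3, 0.8], 0.0)]
output = neuron(hidden, [1.2, -0.7], 0.05)
print(output)
```

Deep learning stacks many layers of such neurons and adjusts the weights automatically, which is what made breakthroughs in vision and speech possible.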

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun, often referred to as the “godfathers of AI”, have been recognized with the $1 million annual prize for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnosis.

All three have since taken up prominent places in the AI research ecosystem, straddling academia and the tech industry. Hinton splits his time between Google and the University of Toronto; Bengio is a professor at the University of Montreal and has started an AI company called Element AI; and LeCun is Facebook’s chief AI scientist and a professor at NYU.

Google’s head of AI, Jeff Dean, also praised the trio’s achievements. “Deep neural networks are responsible for some of the greatest advances in modern computer science,” Dean said in a statement.

Let us hope that people like this trio keep improving AI, and that all of us use it in the right way. Tell us your view on AI and what you think about its future in a quick comment.