
Kismet robot at MIT Museum

Researchers use magnetic properties to improve artificial intelligence systems

A group of researchers from Purdue University has developed a method to integrate magnets with brain-like networks, with a view to programming and teaching devices such as robots and self-driving cars.

Kaushik Roy, professor of electrical and computer engineering at Purdue University, said that the stochastic neural networks his team developed attempt to mimic some of the activities of the human brain, computing through a network of neurons and synapses. This allows them to distinguish between several objects and make inferences about them, in addition to storing information.

The work was announced at the German Physical Sciences Conference, and the report has been published in Frontiers in Neuroscience.


The switching dynamics of nano-magnets closely resemble the electrical dynamics of neurons, and magnetic junction devices exhibit this switching behavior as well. Stochastic neural networks build random variations into the network through stochastic weights.

The researchers proposed a new stochastic training algorithm for the synapses, named Stochastic-STDP, which builds on spike-timing-dependent plasticity, a learning rule originally observed in the rat hippocampus. The magnet's intrinsic stochastic behavior is used to switch its magnetization states probabilistically, in accordance with the algorithm, so that the network learns representations of different objects.
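As a rough illustration of the idea, the Python sketch below implements a stochastic-STDP-style update for a single binary synapse, where the 0/1 weight stands in for the two magnetization states of a nano-magnet. The peak switching probability, the exponential timing dependence, and the parameter values are assumptions chosen for illustration, not the exact rule from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: the switching probability decays with the
# time gap between pre- and post-synaptic spikes (classic STDP shape).
P_MAX = 0.5   # peak switching probability (assumed)
TAU = 20.0    # timing constant in ms (assumed)

def stochastic_stdp_update(weight, t_pre, t_post):
    """Probabilistically flip a binary synaptic weight (0 or 1).

    The weight models the magnetization state of a nano-magnet: it only
    switches with some probability, which here depends on the relative
    timing of the pre- and post-synaptic spikes.
    """
    dt = t_post - t_pre
    p_switch = P_MAX * np.exp(-abs(dt) / TAU)
    if rng.random() < p_switch:
        # Pre-before-post tends to potentiate (set to 1),
        # post-before-pre tends to depress (set to 0).
        weight = 1 if dt > 0 else 0
    return weight

# Example: a synapse whose pre-spike arrives 5 ms before the post-spike.
w = 0
for _ in range(10):
    w = stochastic_stdp_update(w, t_pre=0.0, t_post=5.0)
print("final binary weight:", w)
```

Because each update only switches the synapse with some probability, repeated presentations of correlated spike pairs gradually drive the binary weight toward the potentiated state, which is the behavior the stochastic switching of a magnet is meant to provide in hardware.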

The trained synaptic weights, encoded in the magnetization states of the magnets, are then used for inference. High-energy-barrier magnets are a great advantage here: they allow compact stochastic primitives and also make the device a stable memory element that retains data. Roy, who also leads Purdue University's Center for Brain-Inspired Computing Enabling Autonomous Intelligence, said that the magnet technology they developed is highly energy efficient. Much like the computations done by the brain, the network of neurons and synapses makes optimal use of the available memory and energy.
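To make the inference step concrete, here is a minimal sketch of feedforward inference with binary weights standing in for stored magnetization states. The layer sizes, the firing threshold, and the randomly generated weights are illustrative assumptions, not details taken from the published work.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed shapes: 16 input neurons, 4 output neurons.
# Each weight is 0 or 1, standing in for the two stable
# magnetization states of a high-energy-barrier magnet.
binary_weights = rng.integers(0, 2, size=(16, 4))

def infer(input_spikes, weights, threshold=4):
    """Feedforward inference with binary synapses.

    input_spikes : 0/1 vector of input activity.
    Returns the index of the output neuron with the largest summed
    input, provided that input crosses the firing threshold.
    """
    currents = input_spikes @ weights   # integer synaptic currents
    winner = int(np.argmax(currents))
    return winner if currents[winner] >= threshold else None

sample = rng.integers(0, 2, size=16)
print("predicted class:", infer(sample, binary_weights))
```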

These brain-like networks can also be used to solve many other problems, such as graph coloring, the travelling salesman problem, and other combinatorial optimization problems. The travelling salesman problem is a classic example of algorithmic optimization: it asks for a route that visits a set of locations at the least cost. It was first formulated by the Irish mathematician W. R. Hamilton and the British mathematician Thomas Kirkman. It is an example of an NP-hard problem, a class of problems at least as hard as the hardest problems in NP.
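For readers unfamiliar with it, the following Python sketch solves a toy travelling salesman instance by brute force. It also shows why the problem becomes intractable as the number of cities grows: the number of candidate tours grows factorially, which is why heuristic and hardware-accelerated approaches are of interest. The city coordinates are arbitrary example data.

```python
from itertools import permutations
import math

# Toy instance: coordinates of a handful of cities.
cities = [(0, 0), (1, 5), (4, 3), (6, 1), (3, 7)]

def tour_length(order):
    """Total length of a closed tour visiting cities in the given order."""
    return sum(
        math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

# Brute force: try every ordering of the remaining cities (fix city 0
# as the start to avoid counting rotations of the same tour twice).
best = min(
    ((0,) + rest for rest in permutations(range(1, len(cities)))),
    key=tour_length,
)
print("best tour:", best, "length:", round(tour_length(best), 2))
```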

The work is aligned with Purdue University's Giant Leaps celebration, which acknowledges the university's advancements in artificial intelligence.

About the author: Kalpit Veerwal
Kalpit Veerwal is a second-year Computer Science undergraduate at IIT Bombay. He is well known for being the only person to score 360/360 in JEE (Main), for which he is registered in the Limca Book of Records. A blogger in his free time, he has also secured top ranks in various exams held in India and abroad.
