
Researchers link aging to changes in the brain networks that support cognitive abilities

A recent paper by the researchers reports that the brain’s functional networks become less distinct and less interconnected with increasing age, particularly in networks related to cognition and attention. The study was published in the Journal of Neuroscience.

Juan Helen Zhou, a neuroscientist and Associate Professor in the Neuroscience and Behavioural Disorders programme at Duke-NUS, said that, in contrast to cross-sectional studies, following the same people over time is crucial for understanding the brain changes that occur in both healthy and pathological aging, with the aim of slowing cognitive decline.

The human brain is organized into segregated neuronal networks with dense internal connections and sparser connections between networks. Aging is thought to be associated with decreased functional specialization and segregation of these networks.

Professor Michael Chee, Director of Duke-NUS’ Centre for Cognitive Neuroscience, and Professor Zhou led the team of neuroscientists behind the research. Neuropsychological assessments and functional MRI scans were carried out on 57 young adults and 72 healthy elderly Singaporeans. The data were collected over a span of four years, during which participants were assessed on tasks measuring information-processing speed, sustained attention, verbal and visuospatial memory, and the planning and execution of tasks.

Collecting the fMRI images was only one part of the work. Dr. Joanna Chong, first author of the paper and a Ph.D. graduate supervised by Associate Professor Zhou, converted the images into graph representations, allowing the team to analyse intra- and inter-network connections in the brains of both the young and the elderly participants.
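To make the idea of intra- and inter-network connectivity concrete, here is a minimal Python sketch of how such measures are commonly computed from a region-by-time fMRI matrix. The time series, the 100-region parcellation and the seven-network assignment below are placeholders for illustration, not the study’s actual data or analysis pipeline.

```python
import numpy as np

# Hypothetical inputs (not the study's data): BOLD time series for 100 brain regions,
# each region assigned to one of 7 functional networks.
rng = np.random.default_rng(0)
timeseries = rng.standard_normal((100, 240))   # regions x time points
network_of = rng.integers(0, 7, size=100)      # network label for each region

# Functional connectivity: pairwise correlation between regional time series.
fc = np.corrcoef(timeseries)
np.fill_diagonal(fc, 0.0)

# Intra-network connectivity: mean correlation between regions in the same network.
# Inter-network connectivity: mean correlation between regions in different networks.
same_net = network_of[:, None] == network_of[None, :]
off_diag = ~np.eye(len(network_of), dtype=bool)
intra = fc[same_net & off_diag].mean()
inter = fc[~same_net].mean()

# A shrinking gap between the two with age would indicate less segregated networks.
print(f"intra = {intra:.3f}, inter = {inter:.3f}, gap = {intra - inter:.3f}")
```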

The analysis showed that brain functions such as goal-oriented thinking and deciding where to focus attention are affected with age, as information transfer across the networks becomes less efficient and less distinctive.

The study has promising applications: aging is a major contributor to neurodegenerative and cerebrovascular diseases, which are a growing concern for governments and healthcare systems. Future work building on these results could help explain why such changes occur and point to ways of preserving, or even restoring, cognitive function. The researchers next plan to examine how factors such as genetic and cardiovascular risks influence age-related changes in brain networks.

Neural network data matching

Machine learning unlocks mysteries of quantum physics

Understanding electrons’ intricate behaviour has led to discoveries that transformed society, such as the revolution in computing made possible by the invention of the transistor.

Today, through advances in technology, electron behaviour can be studied much more deeply than in the past, potentially enabling scientific breakthroughs as world-changing as the personal computer. However, the data these tools generate are too complex for humans to interpret.

A Cornell-led team has developed a way to use machine learning to analyze the data generated by scanning tunnelling microscopy (STM) – a technique that produces subatomic scale images of electronic motions in material surfaces at varying energies, providing information unattainable by any other method.

“Some of those images were taken on materials that have been deemed important and mysterious for two decades,” said Eun-Ah Kim, professor of physics. “You wonder what kinds of secrets are buried in those images. We would like to unlock those secrets.”

Kim is senior author of “Machine Learning in Electronic Quantum Matter Imaging Experiments,” which published in Nature June 19. First authors are Yi Zhang, formerly a postdoctoral researcher in Kim’s lab and now at Peking University in China, and Andrej Mesaros, a former postdoctoral researcher in Kim’s lab now at the Université Paris-Sud in France.

Co-authors include J.C. Séamus Davis, Cornell’s James Gilbert White Distinguished Professor in the Physical Sciences, an innovator in STM-driven studies.

The research yielded new insights into how electrons interact – and showed how machine learning can be used to drive further discovery in experimental quantum physics.

At the subatomic scale, a given sample will include trillions of trillions of electrons interacting with each other and the surrounding infrastructure. Electrons’ behaviour is determined partly by the tension between their two competing tendencies: to move around, associated with kinetic energy; and to stay far away from each other, associated with repulsive interaction energy.

In this study, Kim and collaborators set out to discover which of these tendencies is more important in a high-temperature superconductive material.

In STM, electrons tunnel through a vacuum between the conducting tip of the microscope and the surface of the sample being examined, providing detailed information about the electrons’ behaviour.

“The problem is, when you take data like that and record it, you get image-like data, but it’s not a natural image, like an apple or a pear,” Kim said. The data generated by the instrument is more like a pattern, she said, and about 10,000 times more complicated than a traditional measurement curve. “We don’t have a good tool to study those kinds of data sets.”

To interpret this data, the researchers simulated an ideal environment and added factors that would cause changes in electron behaviour. They then trained an artificial neural network – a kind of artificial intelligence that can learn a specific task using methods inspired by how the brain works – to recognize the circumstances associated with different theories. When the researchers input the experimental data into the neural network, it determined which of the theories the actual data most resembled.
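The workflow described here, training a network on simulated images labelled by theory and then asking it which theory the measured data most resembles, can be sketched in a few lines of Python. Everything below (the toy "simulated maps", the three hypothetical theories, the small scikit-learn classifier) is illustrative and is not the architecture or the data used in the Nature paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)

# Toy stand-in for simulated STM maps: each hypothetical "theory" produces patterns with a
# different dominant spatial period (real simulations are far richer than this).
def simulate_map(theory, n=16):
    x = np.arange(n)
    period = {0: 4, 1: 6, 2: 8}[theory]
    pattern = np.cos(2 * np.pi * np.add.outer(x, x) / period)
    return (pattern + 0.5 * rng.standard_normal((n, n))).ravel()

labels = rng.integers(0, 3, size=3000)
X = np.array([simulate_map(int(t)) for t in labels])

# Train a small neural network to recognize which theory generated each map.
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=300, random_state=0)
clf.fit(X, labels)

# A measured map would be fed in here; the network reports which theory it most resembles.
experimental = simulate_map(1)
probabilities = clf.predict_proba([experimental])[0]
print({f"theory_{i}": round(float(p), 3) for i, p in enumerate(probabilities)})
```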

This method, Kim said, confirmed the hypothesis that the repulsive interaction energy was more influential in the electrons’ behaviour.

A better understanding of how many electrons interact on different materials and under different conditions will likely lead to more discoveries, she said, including the development of new materials.

“The materials that led to the initial revolution of transistors were actually pretty simple materials. Now we have the ability to design much more complex materials,” Kim said. “If these powerful tools can reveal important aspects leading to the desired property, we would like to be able to make a material with that property.”

Also contributing were researchers at Brookhaven National Laboratory, Stanford University, Harvard University, San Jose State University, the National Institute of Advanced Industrial Science and Technology in Japan, the University of Tokyo and Oxford University.

Materials provided by Cornell University

MIT Photonic Accelerator

Chip design drastically reduces energy needed to compute with light

MIT researchers have developed a novel “photonic” chip that uses light instead of electricity — and consumes relatively little power in the process. The chip could be used to process massive neural networks millions of times more efficiently than today’s classical computers do.

Neural networks are machine-learning models that are widely used for such tasks as robotic object identification, natural language processing, drug development, medical imaging, and powering driverless cars. Novel optical neural networks, which use optical phenomena to accelerate computation, can run much faster and more efficiently than their electrical counterparts.

But as traditional and optical neural networks grow more complex, they eat up tons of power. To tackle that issue, researchers and major tech companies — including Google, IBM, and Tesla — have developed “AI accelerators,” specialized chips that improve the speed and efficiency of training and testing neural networks.

For electrical chips, including most AI accelerators, there is a theoretical minimum limit for energy consumption. Recently, MIT researchers have started developing photonic accelerators for optical neural networks. These chips perform orders of magnitude more efficiently, but they rely on some bulky optical components that limit their use to relatively small neural networks.

In a paper published in Physical Review X, MIT researchers describe a new photonic accelerator that uses more compact optical components and optical signal-processing techniques, to drastically reduce both power consumption and chip area. That allows the chip to scale to neural networks several orders of magnitude larger than its counterparts.

Simulated training of neural networks on the MNIST image-classification dataset suggests the accelerator can theoretically process neural networks more than 10 million times below the energy-consumption limit of traditional electrical-based accelerators and about 1,000 times below the limit of photonic accelerators. The researchers are now working on a prototype chip to experimentally prove the results.

“People are looking for technology that can compute beyond the fundamental limits of energy consumption,” says Ryan Hamerly, a postdoc in the Research Laboratory of Electronics. “Photonic accelerators are promising … but our motivation is to build a [photonic accelerator] that can scale up to large neural networks.”

Practical applications for such technologies include reducing energy consumption in data centers. “There’s a growing demand for data centers for running large neural networks, and it’s becoming increasingly computationally intractable as the demand grows,” says co-author Alexander Sludds, a graduate student in the Research Laboratory of Electronics. The aim is “to meet computational demand with neural network hardware … to address the bottleneck of energy consumption and latency.”

Joining Sludds and Hamerly on the paper are: co-author Liane Bernstein, an RLE graduate student; Marin Soljacic, an MIT professor of physics; and Dirk Englund, an MIT associate professor of electrical engineering and computer science, a researcher in RLE, and head of the Quantum Photonics Laboratory.

Compact design

Neural networks process data through many computational layers containing interconnected nodes, called “neurons,” to find patterns in the data. Neurons receive input from their upstream neighbors and compute an output signal that is sent to neurons further downstream. Each input is also assigned a “weight,” a value based on its relative importance to all other inputs. As the data propagate “deeper” through layers, the network learns progressively more complex information. In the end, an output layer generates a prediction based on the calculations throughout the layers.

All AI accelerators aim to reduce the energy needed to process and move around data during a specific linear algebra step in neural networks, called “matrix multiplication.” There, neurons and weights are encoded into separate tables of rows and columns and then combined to calculate the outputs.
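As a reminder of what that step looks like in software, here is a minimal numpy sketch of one fully connected layer. The layer sizes and the ReLU nonlinearity are illustrative choices, not details of the MIT design.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy fully connected layer: 4 input neurons feeding 3 output neurons (sizes are illustrative).
x = rng.standard_normal(4)        # activations coming from the previous layer
W = rng.standard_normal((3, 4))   # one weight per (output neuron, input neuron) pair
b = np.zeros(3)                   # per-neuron bias terms

# The step every AI accelerator tries to speed up: each output neuron is a weighted sum
# of all input neurons, i.e. a row of the weight matrix dotted with the input vector.
z = W @ x + b
a = np.maximum(z, 0.0)            # a common nonlinearity (ReLU) applied afterwards
print(a)
```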

In traditional photonic accelerators, pulsed lasers encoded with information about each neuron in a layer flow into waveguides and through beam splitters. The resulting optical signals are fed into a grid of square optical components, called “Mach-Zehnder interferometers,” which are programmed to perform matrix multiplication. The interferometers, which are encoded with information about each weight, use signal-interference techniques that process the optical signals and weight values to compute an output for each neuron. But there’s a scaling issue: For each neuron there must be one waveguide and, for each weight, there must be one interferometer. Because the number of weights grows with the square of the number of neurons, those interferometers take up a lot of real estate.
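The scaling problem can be seen with a quick back-of-the-envelope count; the layer sizes below are arbitrary examples, not figures from the paper.

```python
# Component counts for one fully connected layer with n inputs and n outputs in the
# interferometer-grid scheme (n values are arbitrary examples).
for n in (10, 100, 1000):
    waveguides = n            # one waveguide per input neuron
    interferometers = n * n   # one interferometer per weight
    print(f"n={n:>5}: waveguides={waveguides:>6}, interferometers={interferometers:>9,}")
```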

“You quickly realize the number of input neurons can never be larger than 100 or so, because you can’t fit that many components on the chip,” Hamerly says. “If your photonic accelerator can’t process more than 100 neurons per layer, then it makes it difficult to implement large neural networks into that architecture.”

The researchers’ chip relies on a more compact, energy-efficient “optoelectronic” scheme that encodes data with optical signals, but uses “balanced homodyne detection” for matrix multiplication. That’s a technique that produces a measurable electrical signal after calculating the product of the amplitudes (wave heights) of two optical signals.

Pulses of light encoded with information about the input and output neurons for each neural network layer — which are needed to train the network — flow through a single channel. Separate pulses encoded with information of entire rows of weights in the matrix multiplication table flow through separate channels. Optical signals carrying the neuron and weight data fan out to a grid of homodyne photodetectors. The photodetectors use the amplitude of the signals to compute an output value for each neuron. Each detector feeds an electrical output signal for each neuron into a modulator, which converts the signal back into a light pulse. That optical signal becomes the input for the next layer, and so on.
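The core trick, multiplying two numbers by interfering two light pulses and subtracting the two detector currents, can be mimicked numerically. The sketch below is an idealized, noise-free model assuming perfect 50/50 interference; the layer sizes and variable names are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4                      # toy layer sizes, not from the paper
x = rng.standard_normal(n_in)           # neuron values encoded as optical pulse amplitudes
W = rng.standard_normal((n_out, n_in))  # each row of weights sent down its own channel

# Idealized balanced homodyne detection: interfere two fields of amplitudes a and b on a
# 50/50 splitter and subtract the two photodetector powers; the result is proportional to a*b.
def homodyne_product(a, b):
    plus = 0.5 * (a + b) ** 2           # power at one output port
    minus = 0.5 * (a - b) ** 2          # power at the other port
    return 0.5 * (plus - minus)         # difference signal equals a * b

# Each detector accumulates the products for one output neuron, i.e. one row of W @ x.
outputs = np.array([sum(homodyne_product(W[i, j], x[j]) for j in range(n_in))
                    for i in range(n_out)])
print(np.allclose(outputs, W @ x))      # the optical sum reproduces the matrix product
```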

The design requires only one channel per input and output neuron, and only as many homodyne photodetectors as there are neurons, not weights. Because there are always far fewer neurons than weights, this saves significant space, so the chip is able to scale to neural networks with more than a million neurons per layer.

Finding the sweet spot

With photonic accelerators, there’s unavoidable noise in the signal. The more light that’s fed into the chip, the less noise and the greater the accuracy, but that gets to be pretty inefficient. Less input light increases efficiency but negatively impacts the neural network’s performance. But there’s a “sweet spot,” Bernstein says, that uses minimum optical power while maintaining accuracy.
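One rough way to picture that trade-off: if the detection is shot-noise limited, the relative error of each optical readout falls off as one over the square root of the optical power. The model and the numbers below are an illustrative assumption, not the paper’s noise analysis.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 1.0

# Shot-noise-style assumption: the relative error of each optical readout shrinks as
# 1/sqrt(optical power), so more light buys accuracy at the cost of energy.
for power in (1, 10, 100, 1000):                       # arbitrary power units
    readings = true_value + rng.standard_normal(10_000) / np.sqrt(power)
    print(f"power={power:>4}: rms error = {readings.std():.4f}")
```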

That sweet spot for AI accelerators is measured in how many joules it takes to perform a single operation of multiplying two numbers — such as during matrix multiplication. Right now, traditional accelerators are measured in picojoules, or one-trillionth of a joule. Photonic accelerators measure in attojoules, which is a million times more efficient.
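Those units translate into an enormous gap in total energy for a realistic layer. A purely arithmetic comparison, assuming round figures of 1 picojoule and 1 attojoule per multiply-accumulate operation (order-of-magnitude placeholders, not measured values):

```python
# Purely arithmetic comparison of energy per matrix multiplication (illustrative figures).
PJ, AJ = 1e-12, 1e-18                 # joules per picojoule / attojoule

macs = 1000 * 1000                    # multiply-accumulates in a 1000x1000 matrix-vector product
electrical_energy = macs * 1 * PJ     # ~1 pJ per operation for a traditional accelerator
photonic_energy = macs * 1 * AJ       # ~1 aJ per operation for a photonic accelerator

print(f"electrical: {electrical_energy:.1e} J, photonic: {photonic_energy:.1e} J, "
      f"ratio: {electrical_energy / photonic_energy:.0e}")
```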

In their simulations, the researchers found their photonic accelerator could operate with sub-attojoule efficiency. “There’s some minimum optical power you can send in, before losing accuracy. The fundamental limit of our chip is a lot lower than traditional accelerators … and lower than other photonic accelerators,” Bernstein says.

Materials provided by Massachusetts Institute of Technology

Samsung AI deepfake

Latest Samsung AI can produce an animated deepfake from a single image

Samsung’s AI tech is as impressive as it is creepy, and it is likely to make the deepfake problem worse. Engineers from the Samsung AI Center and the Skolkovo Institute of Science and Technology in Moscow have developed an AI technique that can transform a single image into an Animoji-like video of the person speaking, complete with facial expressions, even though no such video of the person ever existed. The AI morphs the person’s face and reproduces the expressions well enough that it becomes difficult to tell whether a clip is real or generated. Traditional deepfakes, by contrast, require huge datasets of images to create a realistic forgery.

The artificial intelligence system used by Samsung differs from other modern approaches in that it does not rely on 3D modelling. It can generate a fake clip from very few source images, and the realism of the result scales with the number of images provided.

The researchers working on this technology say it can be used in a host of applications, including video games, film and TV, and can even be applied to paintings. But for all its appeal, Samsung’s AI comes with its own banes: the same kind of technique can be used to morph pictures of celebrities and other people for anti-social purposes, and may contribute to a rise in such crimes.

The new approach improves substantially on past work by teaching the neural network how to manipulate existing facial landmarks into a realistic-looking moving video. That knowledge can then be deployed on a few pictures, or even a single picture, of someone the AI has never seen before. The system uses a convolutional neural network, a type of neural network inspired by biological processes in the animal visual cortex. It is particularly adept at processing stacks of images and recognising what is in them; the “convolution” essentially recognises and extracts parts of those images.
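For readers unfamiliar with the term, here is a minimal convolutional network in PyTorch, purely to illustrate the building block being described. The layer sizes and the landmark-style output head are invented for the example and are not Samsung’s actual model.

```python
import torch
import torch.nn as nn

# A tiny convolutional feature extractor: filters slide over the image and pick out local
# patterns, which later layers combine into higher-level features.
class TinyConvNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # convolve filters over the RGB frame
            nn.ReLU(),
            nn.MaxPool2d(2),                             # keep the strongest responses
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 10)                    # e.g. coordinates of 5 facial landmarks

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h)

# One 64x64 RGB frame in, a small landmark-style vector out.
frame = torch.randn(1, 3, 64, 64)
print(TinyConvNet()(frame).shape)   # torch.Size([1, 10])
```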

The technique addresses the central difficulties of artificially generated talking heads: the sheer complexity of modelling a head, and our ability to easily spot a fake one. Software for making deepfakes is now available free of cost and can create fully convincing, lifelike videos; we have to remember that seeing is no longer believing. The research has been published on the preprint server arXiv.org.

Though this technology looks pretty cool, I strongly think it should not be made available to the public, as it could lead to a flood of fake videos and fake news. What do you think about this technology? Tell us with a quick comment.

Kismet robot at MIT Museum

Researchers use magnetic properties for improving artificial intelligence systems

A team of researchers from Purdue University has developed a method for integrating magnets with brain-like networks to program and teach devices such as robots and self-driving cars.

Kaushik Roy, professor of electrical and computer engineering at Purdue University, said the stochastic neural networks his team developed attempt to mimic some of the activity of the human brain by computing with a network of neurons and synapses. This allows the system to distinguish between objects and make inferences about them, as well as store information.

The work was announced at the German Physical Sciences Conference, and the report has been published in Frontiers in Neuroscience.

The switching dynamics of nano-magnets closely resemble the electrical dynamics of neurons, and magnetic junction devices exhibit similar switching behaviour. Stochastic neural networks build random variation directly into the model through stochastic synaptic weights.

The researchers proposed a new stochastic training algorithm for the synapses based on spike-timing-dependent plasticity, termed Stochastic-STDP, a form of plasticity that has been experimentally observed in the rat hippocampus. The magnets’ intrinsic stochastic behaviour was used to switch their magnetization states probabilistically, following the algorithm, in order to learn representations of different objects.
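To give a flavour of what a stochastic learning rule of this kind can look like, here is a toy Python sketch in which the spike-timing difference sets a switching probability for binary, magnet-like synapses. The constants and the update loop are invented for illustration and are not the device parameters or the exact algorithm from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stochastic-STDP: the pre/post spike-timing difference sets a switching *probability*
# for a binary synapse, instead of a graded weight change. Constants are illustrative.
A_PLUS, A_MINUS, TAU = 0.8, 0.4, 20.0       # dimensionless amplitudes, time constant in ms

def switch_probability(dt):
    """dt = t_post - t_pre in ms; positive dt favours potentiation, negative favours depression."""
    if dt >= 0:
        return A_PLUS * np.exp(-dt / TAU)    # probability of switching to the high state
    return -A_MINUS * np.exp(dt / TAU)       # negative sign marks a depression event

weights = rng.integers(0, 2, size=8).astype(float)    # binary "magnetization" states
for dt in rng.uniform(-40, 40, size=100):             # random spike-timing differences
    p = switch_probability(dt)
    j = rng.integers(0, len(weights))                 # pick a synapse to update
    if p > 0 and rng.random() < p:
        weights[j] = 1.0      # stochastic potentiation: magnet switches to the "1" state
    elif p < 0 and rng.random() < -p:
        weights[j] = 0.0      # stochastic depression: magnet switches to the "0" state

print(weights)
```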

The trained synaptic weights, encoded in the magnetization states of the magnets, are then used for inference. Using high-energy-barrier magnets is a significant advantage: it allows compact stochastic primitives and also lets the device serve as a stable memory element with good data retention. Roy, who also leads Purdue’s Center for Brain-inspired Computing Enabling Autonomous Intelligence (C-BRIC), said the magnet technology they have developed is highly energy-efficient. The network of neurons and synapses makes optimal use of the available memory and energy, much as the brain’s own computations do.

These brain-like networks can also be used to solve many other problems, such as graph coloring, the travelling salesman problem and other combinatorial optimization problems. The travelling salesman problem is a classic optimization problem: it asks for a route that visits every location in a set while using the minimum of resources. It was first formulated by the Irish mathematician W. R. Hamilton and the British mathematician Thomas Kirkman, and it is a well-known example of an NP-hard problem.
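As a concrete reference point, a brute-force solution to a tiny travelling salesman instance takes only a few lines of Python; the city coordinates are made up, and the factorial growth of the search is exactly what motivates heuristic and hardware approaches like the one described above.

```python
from itertools import permutations
from math import dist

# A brute-force solution to a tiny travelling salesman instance (coordinates are made up).
cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 1)}

def tour_length(order):
    points = [cities[c] for c in order] + [cities[order[0]]]   # return to the start
    return sum(dist(p, q) for p, q in zip(points, points[1:]))

# Fix the first city and try every ordering of the rest; feasible only for a handful of
# cities, since the number of orderings grows factorially with the city count.
names = list(cities)
best = min(permutations(names[1:]), key=lambda rest: tour_length((names[0],) + rest))
print((names[0],) + best, round(tour_length((names[0],) + best), 2))
```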

The work aligns with Purdue University’s Giant Leaps celebration, which acknowledges the university’s advancements in artificial intelligence.