HIV infected T cell

Researchers eliminate HIV from infected mice with the help of CRISPR

An interdisciplinary group of researchers claims to have eliminated HIV from the genomes of infected mice using the gene-editing tool CRISPR-Cas9 together with a new drug. This is quite promising in the fight against HIV and AIDS, although a great deal of work remains before clinical trials can begin.

It may seem odd to use a gene-editing tool against an infectious disease, but HIV is a retrovirus that embeds itself in the host's DNA in order to replicate. Antiretroviral therapy (ART) can suppress HIV replication, but it cannot eliminate the disease completely because it is incapable of purging cells in which the virus lies dormant. In the new study, published in the journal Nature Communications, combining CRISPR-Cas9 with a newer form of ART eliminated the virus from the genome, a feat achieved for the first time.

Experiments were carried out on genetically modified mice engineered to resemble humans in key respects. The team, led by Kamel Khalili of the Lewis Katz School of Medicine at Temple University, succeeded in eliminating every trace of HIV in about 30 percent of the infected mice. Though far from perfect, the result is grounds for optimism. Khalili said the group can now proceed to trials in non-human primates, with clinical trials in humans possibly within a year.

Khalili is also the founder and lead scientific advisor of Excision BioTherapeutics, a company that uses CRISPR to treat viral diseases and holds an exclusive license for the commercial application of the therapy. Researchers must proceed with caution, however, as they still have to show that the treatment is free of long-term side effects such as cancer.

Antiretroviral therapy for the treatment of AIDS has benefited many people all over the world, but it is not technically a cure: patients must stay on a regular dosage of medicines to keep the HIV virus in check. We are in dire need of a therapy that eliminates HIV from the body completely.

For the new study, Khalili enlisted the help of Howard Gendelman, professor of infectious diseases at the University of Nebraska Medical Center, who has been working on a new form of ART known as LASER (long-acting slow-effective release). LASER ART targets the cells where HIV hides and suppresses its replication for long periods of time. To achieve this, the drug is packaged in nanocrystals, which spread to the tissues where HIV is thought to lie dormant and then slowly release the drug.

LASER ART caught Khalili’s attention. Combined with CRISPR-Cas9, it succeeded in eliminating HIV from roughly one-third of the infected mice. Even so, scientists have to proceed with caution: because CRISPR stays in the body for a long time, it could cut other sites in the genome in an uncontrolled way, which carries a risk of cancer.

embryoid model gastrulation

Human embryo developed in laboratory reveals a magical step in the development of humans

A few weeks after conception, the human embryo takes a major developmental step: a symmetric ball of cells begins to form a head and a tail, a process that starts with a stage called gastrulation. How the cells figure this out has long been a source of great mystery.

Researchers have now obtained a better picture of the biochemistry behind how this process occurs in the human body. They did so by arranging a collection of stem cells so that they behaved much like a real human embryo. The study has been published in Nature Cell Biology.

The group of researchers at Rockefeller University combined backgrounds in physics and biology to develop a model of a 10-day-old embryo as it changes from a sphere into a less symmetrical entity. Because studying human embryos older than 14 days is restricted, research into the earliest stages of human development faces both ethical and practical limits.

To demonstrate the exact sequence of early changes needed to transform tissues into a human body plan, scientists can take the same kind of nondescript stem cells that form an embryo and arrange them to reproduce a major step in development. These models are embryonic organoids, synthetic structures that mimic some of the most important characteristics of their biological counterparts. Such embryoids have been used to study embryonic development for several years, so they are not a new type of tool.

However, it is very difficult to make these synthetic approximations match the exact structure of a specific developmental stage. Mijo Simunovic, a biochemist and physicist on the team, said that techniques from physics, bioengineering and developmental biology were combined to create the model. The researchers now have a 3D system that resembles not only the genetic fingerprint of the embryo but also its shape and size.

Several studies of mouse embryoids have shown how a line is drawn across a ball of dividing cells, marking which part will develop into a head and which into a tail. This is known as symmetry breaking.

Symmetry breaking, the process associated with gastrulation, is a very significant stage: our heads do not resemble our legs in any way. The embryo divides into two parts, anterior and posterior, and the researchers found evidence of this in human embryonic stem cells. To probe the biochemistry behind gastrulation, the group tested a growth factor found in both mouse and human embryos, bone morphogenetic protein 4 (BMP4). After BMP4 was added, one part of the 3D culture developed into posterior tissue and the other into anterior tissue. This also helps explain when an embryo develops normally and when it fails.
The researchers are not worried about the ethical implications, as they are confident their embryoids could never grow into human beings even if allowed to develop.

A Bioengineer’s Guide to Design

A team of researchers at Caltech has developed a set of guidelines for designing biological circuits using tools from mechanical and electrical engineering. Like electric circuits—but made out of cells and living matter—biological circuits show promise in producing pharmaceuticals and biofuels.

For example, the antimalarial compound artemisinin is produced by an expensive tropical plant, so scientists at UC Berkeley have engineered the plant’s metabolism into yeast cells in order to synthesize artemisinin without using plants or soil. However, the ability to predict the behavior of these circuits—to design them on paper and then successfully implement the design—is still rudimentary.

The set of bioengineering design principles described in this new work could make building cellular systems more efficient and predictable.

“We like to put it this way: if you were designing an airplane, you wouldn’t start by making 1,000 different planes and launching them all into the sky to see which ones flew,” says lead author Noah Olsman (PhD ’19), a former Caltech graduate student who is now a postdoctoral scholar at Harvard. “Instead, you would begin by studying the math and physics that are important to flight. In the same way, the process of designing biological circuits can really benefit from some quantitative guiding principles.”

Biological systems are constantly measuring their environments and adapting to maintain homeostasis—a balanced, steady state. Our eyes adjust in response to light and dark, and our bodies maintain roughly the same internal temperature whether we are in searing heat or a freezing snowstorm. Even single cells, the fundamental units of life, are confronted with vastly variable conditions and make precise measurements and adjustments to survive. The process of measuring the outside world and making changes internally in response is called feedback.

Feedback is commonly studied in engineering. One example of a designed feedback system is a car’s cruise-control mode—the car measures its speed and changes acceleration or deceleration accordingly, makes another measurement of speed, makes any needed changes, and so on. Additionally, the thermostat in a house is designed to use feedback, measuring the external temperature and then heating or cooling as needed.
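
To make the feedback idea concrete, here is a minimal sketch of a proportional feedback loop in the spirit of the cruise-control example; the vehicle model, gain and drag values are illustrative assumptions, not anything taken from the papers discussed below.

```python
# Minimal proportional-feedback sketch: a "cruise control" that repeatedly
# measures speed and adjusts the throttle toward a target.
# All numbers here are illustrative.

target_speed = 25.0   # desired speed (m/s)
speed = 15.0          # current speed (m/s)
gain = 0.5            # proportional gain: how strongly errors are corrected
drag = 0.05           # crude friction/drag coefficient
dt = 0.1              # time step (s)

for _ in range(300):                      # simulate 30 seconds
    error = target_speed - speed          # measure: how far from the goal?
    throttle = gain * error               # act: push harder when the error is larger
    accel = throttle - drag * speed       # toy vehicle model
    speed += accel * dt                   # the system responds, then we measure again

print(f"speed after 30 s: {speed:.2f} m/s (target {target_speed} m/s)")
```

A purely proportional loop like this settles near, but not exactly at, the target; quantifying and closing such gaps is exactly the kind of trade-off that control theory makes explicit.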

Ideally, a system would reach a desired state quickly and be robust against large or small perturbations. But when designing systems, engineers often cannot have it all. For example, the features that make a motorcycle more efficient and maneuverable than a car also make it much easier to crash. A branch of engineering called control theory describes these performance trade-offs mathematically.

Now, in two new research papers, Olsman and his colleagues use control theory to lay out design principles for constructing biological systems.

“A major question in biology is: Can we understand biological systems the way we understand electrical circuits or mechanical devices? Can we understand how cells put together molecular components to make life, and can we engineer that ourselves?” says Olsman. “Like an understanding of digital circuits leads to engineering a laptop, an understanding of cellular networks would enable us to build biological systems ourselves.”

Olsman and his collaborators studied a simple bacterial model of feedback, developed by another Caltech team, using Escherichia coli. In colonies of these bacteria, each bacterium emits small molecules in order to send signals to one another. The bacteria were engineered to simultaneously produce a toxin when emitting these signaling molecules. The larger the bacterial population, the higher the toxin concentration. At a high enough concentration of toxin, some bacteria begin to die and thus reduce the toxin concentration. This system of feedback regulates bacterial population growth.
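
The population-toxin loop described above can be caricatured as two coupled differential equations; the equations and parameter values in this sketch are illustrative and are not the published Caltech model.

```python
# Toy population-toxin feedback: bacteria grow, secrete toxin in proportion to
# their numbers, and die faster as the toxin accumulates. Parameters are made up.

r, K = 1.0, 1000.0   # growth rate and carrying capacity
s = 0.01             # toxin production per cell
d = 0.5              # toxin decay rate
k = 0.02             # toxin-dependent death rate

N, T = 10.0, 0.0     # initial population and toxin concentration
dt = 0.01

for _ in range(int(50 / dt)):                  # simulate 50 time units
    dN = r * N * (1 - N / K) - k * T * N       # logistic growth minus toxin-induced death
    dT = s * N - d * T                         # toxin produced by cells, degraded over time
    N += dN * dt
    T += dT * dt

print(f"population settles near {N:.0f} cells with toxin level {T:.2f}")
```

The population stops growing where toxin-induced death balances growth, which is the self-regulating behavior the engineered E. coli colonies exhibit.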

The researchers characterized this system to develop mathematical descriptions of biological feedback.

“Cells are sophisticated machines, but so are airplanes and satellites. The right mathematical thinking can reveal simple principles that govern a complicated world,” says Olsman.

A paper describing the research appears in the journal Cell Systems and is titled “Hard Limits and Performance Tradeoffs in a Class of Antithetic Integral Feedback Networks.” In addition to Olsman, additional co-authors are former Caltech graduate student Ania-Ariadna Baetica (PhD ’18), now of UC San Francisco; graduate student Fangzhou Xiao; former graduate student Yoke Peng Leong (PhD ’18); Richard Murray (BS ’85), Thomas E. and Doris Everhart Professor of Control and Dynamical Systems and Bioengineering; and John Doyle, Jean-Lou Chameau Professor of Control and Dynamical Systems, Electrical Engineering, and Bioengineering.

A second paper describing the work appears in the journal iScience and is titled “Architectural Principles for Characterizing the Performance of Antithetic Integral Feedback Networks.” Olsman’s co-authors on that paper are Fangzhou Xiao and John Doyle.

Materials provided by California Institute of Technology

Unraveling the brain’s reward circuits

To some, a chocolate cake may spark a shot of pleasure typically associated with illicit drugs. A new study by Penn biologists offers some insights into that link, revealing new information about how the brain responds to rewards such as food and drugs.

In the work, which appears this week online in the journal Neuron, a team led by Assistant Professor J. Nicholas Betley, postdoctoral researcher Amber L. Alhadeff, and graduate student Nitsan Goldstein of the School of Arts and Sciences shows that, in mice, consuming food turns down the activity of neurons that signal hunger in the brain via a different pathway than alcohol and drugs, which can likewise act as appetite suppressants. Yet the research also reveals that the circuits that trigger the pleasurable release of dopamine are interconnected with the activity of hunger neurons, suggesting that drugs and alcohol can hijack not only the brain’s reward circuits but also those responsible for signaling hunger, serving to create a behavior that reinforces itself.

“Signals of reward, whether it’s food or drugs, access the brain through different pathways,” says Betley, senior author on the work. “But once they’re in the brain, they engage an interconnected network between hypothalamic hunger neurons and reward neurons. It could be that drugs are reinforced not only by increasing a dopamine spike, but also by decreasing the activity of hunger neurons that make you feel bad.”

With a greater understanding of these pathways, the researchers say their findings could inform the creation of more effective weight loss drugs or even addiction therapies.

Betley and colleagues’ previous work had shown that infusing any type of macronutrient (any calorie-containing food) into a mouse turned down the activity of AgRP neurons, which are responsible for the unpleasant feelings associated with hunger. The signal by which the stomach tells the brain it has consumed food travels along the vagus nerve.

Curious about whether alcohol, which is also caloric, could trigger the same effect, they found that it did so in mice, even when the vagal pathway was disrupted.

“If we cut that highway, highly caloric and rewarding foods like fat can no longer get that signal to the hunger neurons, but ethanol could,” says Alhadeff.

The team next tried to do the same thing with cocaine, nicotine, and amphetamine, drugs that have been shown to have appetite suppressing activity, and found the same thing. It’s the first time, the team says, that a non-nutrient has been shown to regulate AgRP neurons for a sustained period of time.

“What is exciting is that the results suggest there are pharmacological mechanisms out there that can be harnessed to reduce the activity of these neurons to alleviate hunger if someone was on a weight-loss diet,” Alhadeff says.

Knowing that alcohol and drugs also trigger the release of dopamine, a neurotransmitter associated with a sensation of “reward” that is also implicated in addiction, the team observed that dopamine neuron activity increased in parallel to the decrease in AgRP neuron activity.

They went after that lead. Using a technique by which they could activate AgRP neurons without depriving an animal of food, the researchers explored how these hunger neurons influence dopamine signaling. In the absence of a food reward, they found little response in the dopamine neurons to activation of AgRP neurons. But when an animal with active AgRP was fed, the surge of dopamine was even higher than it would have been normally, without activated AgRP neurons. In other words, AgRP neurons made food more “rewarding” when the animals were hungry.

“We were surprised to find these AgRP neurons seemed to be signaling the dopamine neurons, but we couldn’t detect that until the animal gets the reward,” Goldstein says. “This suggested that either an indirect or modulatory circuit mediates the interaction between hunger and reward neurons in the brain.”

The same thing happened when the animal received a drug, such as nicotine.

Moving ahead, the research team is investigating the differences between the reward signals that come from alcohol and drugs versus food and unpacking the connection they have revealed between the dopamine neurons and AgRP neurons. Using sophisticated new technology, they’ll also be studying individual neurons to see if the effects they’ve observed are due to the activity of small subpopulations of neurons in the brain.

If they’re successful at identifying a new, druggable pathway that could target these linked circuits, Betley says it would be welcome, as many currently available weight-loss drugs have unpleasant side effects such as nausea.

“It’s hard to have somebody adhere to these drugs when they’re feeling poorly,” he says. “Our findings suggest there are multiple ways into the brain, and maybe by combining these strategies we can overcome these problems.”

Materials provided by the University of Pennsylvania

Brain to brain communication system

Play video games with your friends using only your minds

Researchers from the University of Washington have created a method that allows three people to work together to solve a problem using only their minds, bringing telepathic communication one step closer to reality.

Three people play a Tetris-like game with the help of a brain-to-brain interface. The work demonstrates two firsts: a brain-to-brain network involving more than two people, and a single person able both to send and to receive information using only their brain. The results of the study were published in the journal Scientific Reports.

Rajesh Rao, a professor of computer science at UW’s Paul G. Allen School and a co-director of the Center for Neurotechnology, commented that humans are social beings who collaborate to solve problems that no individual could solve alone. The team therefore wanted to know whether people could collaborate using only their brains, which led to the idea of BrainNet, in which two people help a third person solve a task.

In the Tetris-like game, a block appears at the top of the screen and a line at the bottom needs to be completed. Two of the players, the Senders, decide whether the block should be rotated and send that information over the internet to the brain of the third player, the Receiver. The Receiver processes the information, decides whether to accept the signal, and then issues the command to rotate or not rotate the block directly from their brain.

Five groups of participants each played 16 rounds of the game. Within each group the participants were in separate rooms and could not see, hear or speak to one another. The Senders’ screens showed two options, Yes and No, each with an LED beneath it: the Yes LED flashed 17 times per second and the No LED 15 times per second. If a Sender decided the block should be rotated, they sent the corresponding signal by concentrating on the matching light.

The Senders wore electroencephalography (EEG) caps that picked up the electrical activity of their brains. Because each light flashes in a distinct pattern, concentrating on one of them produces a distinguishable signature in the brain activity. Based on each Sender’s choice, a cursor on the screen moved toward either Yes or No.
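
The decoding step can be illustrated with a toy example: synthesize one second of a noisy EEG-like trace containing a 17 Hz component, then compare spectral power at 15 Hz and 17 Hz to decide which light the Sender was attending to. The sampling rate, noise level and decision rule below are assumptions made for illustration, not the study's actual signal-processing pipeline.

```python
import numpy as np

# Toy SSVEP-style decoder: which flashing light (15 Hz "No" vs 17 Hz "Yes")
# dominates a one-second EEG-like trace? All signal parameters are illustrative.

fs = 256                                    # assumed sampling rate (Hz)
t = np.arange(fs) / fs                      # one second of samples

rng = np.random.default_rng(0)
trace = np.sin(2 * np.pi * 17 * t)          # pretend the Sender stared at the 17 Hz LED
trace += 0.5 * rng.standard_normal(fs)      # add measurement noise

spectrum = np.abs(np.fft.rfft(trace))       # magnitude spectrum
freqs = np.fft.rfftfreq(fs, 1 / fs)         # frequency axis (1 Hz resolution here)

power_no = spectrum[np.argmin(np.abs(freqs - 15))]
power_yes = spectrum[np.argmin(np.abs(freqs - 17))]

decision = "rotate (Yes)" if power_yes > power_no else "do not rotate (No)"
print(f"15 Hz power: {power_no:.1f}, 17 Hz power: {power_yes:.1f} -> {decision}")
```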

To deliver the message to the Receiver's brain, the researchers positioned a magnetic stimulation wand, connected by a cable, behind the Receiver's head. It stimulated the part of the brain that processes visual signals. If the answer was Yes, the Receiver saw a bright flash of light. The Receiver could then decide whether to accept the signal, responding with the same technique the Senders used. In some rounds the scientists deliberately flipped one Sender's responses, and Receivers learned to trust the more reliable signal. The work has also prompted discussions about the ethics and privacy of the brain information of the people involved.

Processed Meats supermarket

Studies show what a processed diet can do to our bodies

Everybody today promotes whole foods over processed foods. Terms like ‘whole grain’, ‘clean eating’, ‘all natural’, ‘functional’ and ‘local’ have taken over the lexicon. Until recently, however, there has been little scientific evidence to support the eat-whole-foods movement.

The British Medical Journal published two population studies that found a lower risk of heart disease and greater longevity among adults who ate less processed food. A further study by the National Institutes of Health (NIH) showed that eating ultra-processed foods leads to higher calorie intake and weight gain compared with eating minimally processed food. The NIH paper explains the benefits of whole foods while acknowledging that ultra-processed food makes important contributions to the nation’s diet.

The paper was authored by Kevin Hall, a mathematical modeler and obesity expert. The study enrolled 20 healthy young adults (10 men and 10 women) who agreed to live in a clinical setting for 28 days, eating only food provided by the clinic. The results showed that participants consumed about 500 more calories per day when given ultra-processed food, resulting in two pounds of weight gain in just 14 days. Conversely, people who switched from ultra-processed to minimally processed foods lost two pounds over 14 days. The data suggest that ultra-processed foods lead to higher energy intake and weight gain, whereas minimally processed foods lead to lower energy intake and weight loss.
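
As a rough sanity check, the extra intake and the observed weight change are consistent under the common rule of thumb of about 3,500 kcal per pound of body weight; that conversion factor is an outside assumption, not a figure from the NIH paper.

```python
# Rough consistency check: does ~500 extra kcal/day line up with ~2 lb in 14 days?
extra_kcal_per_day = 500
days = 14
kcal_per_pound = 3500                     # common rule-of-thumb conversion (an assumption)

excess_kcal = extra_kcal_per_day * days   # 7,000 kcal of surplus over two weeks
expected_gain_lb = excess_kcal / kcal_per_pound
print(f"{excess_kcal} excess kcal -> roughly {expected_gain_lb:.1f} lb")  # ~2.0 lb
```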

The weight gain occurred even though the investigators tried to match the two diets on percentages of carbohydrates, protein, fat, sugar, sodium and so on. This was achieved by adding soluble fiber to beverages in the ultra-processed meals and by making sure the minimally processed diet included plenty of fresh fruit, which is high in natural sugars. The ultra-processed meals included items such as canned soups and grains in a pouch, so the ultra-processed food used was not especially junky. Had the diets been allowed to differ in nutrient content, the gap in calorie intake would likely have been even larger. Ultra-processed foods are formulated with industrial ingredients and contain little or no intact food. They are also soft and easy to chew and swallow, which led to calories being consumed about 50 percent faster than with minimally processed foods.

When we eat quickly, our calorie consumption can race ahead of the gut-brain connection, because it takes about 20 minutes for the gut to release the hormones that tell the brain it is full. Insoluble fiber, abundant in minimally processed foods, moves through the stomach and GI tract without being broken down and absorbed, which reduces calorie absorption. Processed food is about 60 percent cheaper than unprocessed food and offers convenience in a hyper-speed world, but we should choose our foods and eat them whole or in minimally processed form.

Scientists discover how plants breathe and how humans shaped their ‘lungs’

Botanists have known since the 19th century that leaves have pores – called stomata – and contain an intricate internal network of air channels. But until now it wasn’t understood how those channels form in the right places in order to provide a steady flow of CO2 to every plant cell.

The new study, led by scientists at the University of Sheffield’s Institute for Sustainable Food and published in Nature Communications, used genetic manipulation techniques to reveal that the more stomata a leaf has, the more airspace it forms. The channels act like bronchioles – the tiny passages that carry air to the exchange surfaces of human and animal lungs.

In collaboration with colleagues at the University of Nottingham and Lancaster University, they showed that the movement of CO2 through the pores most likely determines the shape and scale of the air channel network.

The discovery marks a major step forward in our understanding of the internal structure of a leaf, and how the function of tissues can influence how they develop – which could have ramifications beyond plant biology, in fields such as evolutionary biology.

The study also shows that wheat plants have been bred by generations of people to have fewer pores on their leaves and fewer air channels, which makes their leaves more dense and allows them to be grown with less water.

This new insight highlights the potential for scientists to make staple crops like wheat even more water-efficient by altering the internal structure of their leaves. This approach is being pioneered by other scientists at the Institute for Sustainable Food, who have developed climate-ready rice and wheat which can survive extreme drought conditions.

Professor Andrew Fleming from the Institute for Sustainable Food at the University of Sheffield said: “Until now, the way plants form their intricate patterns of air channels has remained surprisingly mysterious to plant scientists.

“This major discovery shows that the movement of air through leaves shapes their internal workings – which has implications for the way we think about evolution in plants.

“The fact that humans have already inadvertently influenced the way plants breathe by breeding wheat that uses less water suggests we could target these air channel networks to develop crops that can survive the more extreme droughts we expect to see with climate breakdown.”

Dr Marjorie Lundgren, Leverhulme Early Career Fellow at Lancaster University, said: “Scientists have suspected for a long time that the development of stomata and the development of air spaces within a leaf are coordinated. However, we weren’t really sure which drove the other. So this started as a ‘what came first, the chicken or the egg?’ question.

“Using a clever set of experiments involving X-ray CT image analyses, our collaborative team answered these questions using species with very different leaf structures. While we show that the development of stomata initiates the expansion of air spaces, we took it one step further to show that the stomata actually need to be exchanging gases in order for the air spaces to expand. This paints a much more interesting story, linked to physiology.”

The X-ray imaging work was undertaken at the Hounsfield Facility at the University of Nottingham. The Director of the Facility, Professor Sacha Mooney, said: “Until recently the application of X-ray CT, or CAT scanning, in plant sciences has mainly been focused on visualising the hidden half of the plant – the roots – as they grow in soil.

“Working with our partners in Sheffield we have now developed the technique to visualize the cellular structure of a plant leaf in 3D – allowing us to see how the complex network of air spaces inside the leaf controls its behavior. It’s very exciting.”

Materials provided by the University of Sheffield

iceworm

Unlocking secrets of the ice worm

The ice worm is one of the largest organisms that spends its entire life in ice, and Washington State University scientist Scott Hotaling is one of the only people on the planet studying it.

He is the author of a new paper that shows ice worms in the interior of British Columbia have evolved into what may be a genetically distinct species from Alaskan ice worms.

Hotaling and colleagues also identified an ice worm on Vancouver Island that is closely related to a separate population of ice worms located 1,200 miles away in southern Alaska. The researchers believe the genetic intermingling is the result of birds eating the glacier-bound worms (or their eggs) at one location and then dropping them off at another as they migrate up and down the west coast.

“If you are a worm isolated on a mountaintop glacier, the expectation is you aren’t going anywhere,” said Hotaling, a postdoctoral biology researcher. “But lo and behold, we found this one ice worm on Vancouver Island that is super closely related to ice worms in southern Alaska. The only reasonable explanation we can think of to explain this is birds.”

Super cool organism

The ice worm resembles the common earthworm but is smaller and darker in color.  What sets the ice worm apart from other members of the Mesenchytraeus genus is its ability to live its entire life in glacial ice.

Millions, perhaps hundreds of millions, of ice worms can be seen wriggling to the top of glaciers from the Chugach Mountains in southeast Alaska to the Cascade Volcanoes of Washington and Oregon during the summer months. In the fall and winter, ice worms subsist deep beneath the surface of glaciers where temperatures stay around freezing.

Scott Hotaling

Hotaling’s interest in ice worms began back in 2009 while he was working as a mountaineering ranger on the high-elevation slopes of Mt. Rainier. He was climbing at three in the morning when he noticed a lot of small, black worms crawling around on the surface of a glacier.

“I wasn’t even a biology undergraduate yet but I remember being so fascinated by the fact that there is this worm that can live in a glacier,” he said. “It is not a place where we think of life being able to flourish and these things can be present at like 200 per sq. meter, so dense you can’t walk without stepping in them.”

Hotaling eventually went back to school and earned a PhD in biology at the University of Kentucky where he studied how climate change is affecting mountain biodiversity.

In the summer of 2017, he finally got the opportunity to circle back and do some research on the ice worm when he arrived in Pullman to start a postdoc position in the laboratory of Associate Professor Joanna Kelley, senior author of the study who specializes in evolutionary genomics and extremophile lifeforms.

“In the Kelley lab, we study organisms that have evolved to live in places that are inhospitable to pretty much everything else,” Hotaling said. “Determining the evolutionary mechanisms that enable something like an ice worm to live in a glacier or bacteria to live in a Yellowstone hot spring is a really exciting way to learn about what is possible at the bounds of evolution. That’s where we are working now, understanding the evolution of ice worms.”

In the study

Hotaling and colleagues extracted and sequenced DNA from 59 ice worms collected from nine glaciers across most of their geographical range. Their analysis revealed a genetic divergence between populations of ice worms that are north and west and south and east of the Coast Mountains of British Columbia.

The researchers predict that this deeper split into two genetically distinct ice worm groups occurred as a result of glacial ice sheets contracting around a few hundred thousand years ago, isolating worms in the Pacific Northwest from their counterparts in Alaska.

The most surprising finding of the study was the discovery of a single ice worm on Vancouver Island that was closely related to a population of ice worms 1,200 miles away in Alaska.

“At first we thought there had to be some kind of error in the analysis or prep methods, but upon further investigation we confirmed our initial results,” Hotaling said. “These are worms isolated on mountaintops, and there is no explanation for how they covered that gap other than on, or perhaps within, migrating birds.”

A Gray-Crowned Rosy Finch eating ice worms on a glacier. Photo by Scott Hotaling

The research illuminates an important relationship between two of the few large organisms that inhabit North America’s high elevation alpine ecosystems, the ice worm and the Gray-Crowned Rosy Finch, one of North America’s highest elevation nesting birds.

“We knew that ice worms were an important source of food for the birds but we didn’t know until now that the birds are also likely very important for the ice worms,” Hotaling said. “If you are super isolated like an ice worm, you could easily become inbred. But if birds are bringing little bits of new diversity to your mountaintop glacier that could be really good for you.”

Hotaling and Kelley’s study was published this month in Proceedings of the Royal Society B.

Materials provided by Washington State University

Translating proteins into music, and back – Listen to Protein Music

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

The new method translates a protein’s amino acid sequence into a sequence of percussive and rhythmic sounds. (Audio courtesy of Markus Buehler.)

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described today in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.
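
The idea of a 20-tone scale can be sketched as follows: give each of the 20 standard amino acids one step of a 20-tone equal-temperament octave and play a protein sequence as the resulting series of pitches. The base frequency and the alphabetical assignment here are illustrative assumptions; the actual system derives each tone from the amino acid's computed vibrational spectrum rather than from an arbitrary mapping like this one.

```python
# Toy protein-to-music mapping: one pitch per amino acid on a 20-tone scale.
# The assignment below is arbitrary; the published system derives tones from
# each amino acid's vibrational frequencies.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"        # the 20 standard one-letter codes
BASE_FREQ = 220.0                           # assumed reference pitch (Hz)

def residue_to_freq(residue: str) -> float:
    """Map a one-letter amino acid code to a pitch on a 20-tone scale."""
    step = AMINO_ACIDS.index(residue.upper())
    return BASE_FREQ * 2 ** (step / 20)     # 20 equal steps per octave

def protein_to_melody(sequence: str) -> list:
    """Convert an amino acid sequence into a list of frequencies in Hz."""
    return [residue_to_freq(r) for r in sequence if r.upper() in AMINO_ACIDS]

# Example: a short glycine-alanine-serine repeat reminiscent of silk motifs
print([round(f, 1) for f in protein_to_melody("GAGAGS")])
```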

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

The percussive, rhythmic, and musical sounds in the accompanying recording are generated entirely from amino acid sequences. (Audio courtesy of Markus Buehler.)

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

With such a system, he says, training the AI on a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

“Markus Buehler has been gifted with a most creative soul, and his explorations into the inner workings of biomolecules are advancing our understanding of the mechanical response of biological materials in a most significant manner,” says Marc Meyers, a professor of materials science at the University of California at San Diego, who was not involved in this work.

Meyers adds, “The focusing of this imagination to music is a novel and intriguing direction. This is experimental music at its best. The rhythms of life, including the pulsations of our heart, were the initial sources of repetitive sounds that engendered the marvelous world of music. Markus has descended into the nanospace to extract the rhythms of the amino acids, the building blocks of life.”

“Protein sequences are complex, as are comparisons between protein sequences,” says Anthony Weiss, a professor of biochemistry and molecular biotechnology at the University of Sydney, Australia, who also was not connected to this work. The MIT team “provides an impressive, entertaining and unusual approach to accessing and interpreting this complexity. … The approach benefits from our innate ability to hear complex musical patterns. Through harmony and discord, we now have an entertaining and useful tool to compare and contrast amino acid sequences.”

Materials provided by Massachusetts Institute of Technology

smartphone lock reveals age

How you lock your smartphone can reveal your age: UBC study

Older smartphone users tend to rely more on their phones’ auto lock feature compared to younger users, a new UBC study has found. They also prefer using PINs over fingerprints to unlock their phones.

Researchers also found that older users are more likely to unlock their phones when they’re stationary, such as when working at a desk or sitting at home.

Konstantin Beznosov

The study is the first to explore the link between age and smartphone use, says Konstantin Beznosov, an electrical and computer engineering professor at UBC who supervised the research.

“As researchers working to protect smartphones from unauthorized access, we need to first understand how users use their devices,” said Beznosov. “By tracking actual users during their daily interactions with their device, we now have real-world insights that can be used to inform future smartphone designs.”

The analysis also showed that older users used their phone less frequently than younger users. For every 10-year interval in age, there was a corresponding 25 percent decrease in the number of user sessions. In other words, a 25-year-old might use their phone 20 times a day, but a 35-year-old might use it only 15 times.

The study tracked 134 volunteers, ranging from 19 to 63 years of age, through a custom app installed on their Android phones. For two consecutive months, the app collected data on lock and unlock events, choice of auto or manual lock and whether the phone was locked or unlocked while in motion. The app also recorded the duration of user sessions.

The study also found gender differences in authentication choices. As they age, men are much more likely to rely on auto locks, as opposed to manually locking their devices, compared to women. In terms of overall use, women on average use their phone longer than men, with women in their 20s using their smartphones significantly longer than their male peers. However, the balance shifts with age, with men in their 50s logging longer usage sessions than women of the same age.

While the study didn’t look at the reasons for these behaviours, Beznosov says the findings can help smartphone companies design better products.

“Factors such as age should be considered when designing new smartphone authentication systems, and devices should allow users to pick the locking method that suits their needs and usage patterns,” he said, adding that future research should look into other demographic factors and groups of participants, and explore the factors involved in authentication decisions.

The study was presented at last month’s CHI Conference on Human Factors in Computing Systems in Glasgow, Scotland.

Materials provided by the University of British Columbia