

Nanowires replace Newton’s famous glass prism

The device, made from a single nanowire 1,000 times thinner than a human hair, is the smallest spectrometer ever designed. It could be used in applications such as assessing the freshness of foods, the quality of drugs, or even identifying counterfeit objects, all from a smartphone camera. Details are reported in the journal Science.

In the 17th century, Isaac Newton, through his observations on the splitting of light by a prism, sowed the seeds for a new field of science studying the interactions between light and matter – spectroscopy. Today, optical spectrometers are essential tools in industry and almost all fields of scientific research. Through analysing the characteristics of light, spectrometers can tell us about the processes within galactic nebulae, millions of light years away, down to the characteristics of protein molecules.

However, even now, most spectrometers are based on principles similar to those Newton demonstrated with his prism: the spatial separation of light into different spectral components. This approach places a fundamental limit on miniaturisation: spectrometers are usually bulky and complex, and challenging to shrink to sizes much smaller than a coin. More than three centuries after Newton, University of Cambridge researchers have overcome this challenge to produce a system up to a thousand times smaller than those previously reported.

The Cambridge team, working with colleagues from the UK, China and Finland, used a nanowire whose material composition is varied along its length, enabling it to be responsive to different colours of light across the visible spectrum. Using techniques similar to those used for the manufacture of computer chips, they then created a series of light-responsive sections on this nanowire.

“We engineered a nanowire that allows us to get rid of the dispersive elements, like a prism, producing a far simpler, ultra-miniaturised system than conventional spectrometers can allow,” said first author Zongyin Yang from the Cambridge Graphene Centre. “The individual responses we get from the nanowire sections can then be directly fed into a computer algorithm to reconstruct the incident light spectrum.”
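The reconstruction step can be illustrated with a toy model: each light-responsive section reports one number – its wavelength-dependent response integrated over the incoming light – and an algorithm inverts these measurements to recover the spectrum. The sketch below is a hypothetical, minimal version using synthetic Gaussian response functions and Tikhonov-regularised least squares; it is not the team's actual calibration data or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
wavelengths = np.linspace(400, 700, 100)   # nm, visible range
n_sections = 30

# Assume each nanowire section responds most strongly near a different
# wavelength, modelled here as Gaussians drifting along the wire.
centres = np.linspace(420, 680, n_sections)
R = np.exp(-((wavelengths - centres[:, None]) / 60.0) ** 2)  # (sections, wavelengths)

# A synthetic "incident" spectrum with two smooth peaks.
true_spectrum = (np.exp(-((wavelengths - 480) / 20) ** 2)
                 + 0.6 * np.exp(-((wavelengths - 620) / 25) ** 2))

# Each section measures one number: its response integrated over the spectrum.
signals = R @ true_spectrum + rng.normal(0, 1e-3, n_sections)

# Recover the spectrum by Tikhonov-regularised least squares:
# minimise ||R x - s||^2 + alpha ||x||^2.
alpha = 1e-3
x = np.linalg.solve(R.T @ R + alpha * np.eye(len(wavelengths)), R.T @ signals)

error = np.linalg.norm(x - true_spectrum) / np.linalg.norm(true_spectrum)
print(f"relative reconstruction error: {error:.3f}")
```

Because there are fewer sections than wavelength bins, the inversion is underdetermined, which is why some form of regularisation is needed before the spectrum can be recovered.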

“When you take a photograph, the information stored in pixels is generally limited to just three components – red, green, and blue,” said co-first author Tom Albrow-Owen. “With our device, every pixel contains data points from across the visible spectrum, so we can acquire detailed information far beyond the colours which our eyes can perceive. This can tell us, for instance, about chemical processes occurring in the frame of the image.”

“Our approach could allow unprecedented miniaturisation of spectroscopic devices, to an extent that could see them incorporated directly into smartphones, bringing powerful analytical technologies from the lab to the palm of our hands,” said Dr Tawfique Hasan, who led the study.

One of the most promising potential uses of the nanowire could be in biology. Since the device is so tiny, it can directly image single cells without the need for a microscope. And unlike other bioimaging techniques, the information obtained by the nanowire spectrometer contains a detailed analysis of the chemical fingerprint of each pixel.

The researchers hope that the platform they have created could lead to an entirely new generation of ultra-compact spectrometers working from the ultraviolet to the infrared range. Such technologies could be used for a wide range of consumer, research and industrial applications, including in lab-on-a-chip systems, biological implants, and smart wearable devices.

The Cambridge team has filed a patent on the technology, and hopes to see real-life applications within the next five years.

Reference:
Zongyin Yang et al. ‘Single nanowire spectrometers.’ Science (2019). DOI: 10.1126/science.aax8814

Materials provided by the University of Cambridge

Colour-changing artificial ‘chameleon skin’ powered by nanomachines


The material, developed by researchers from the University of Cambridge, is made of tiny particles of gold coated in a polymer shell, and then squeezed into microdroplets of water in oil. When exposed to heat or light, the particles stick together, changing the colour of the material. The results are reported in the journal Advanced Optical Materials.

In nature, animals such as chameleons and cuttlefish are able to change colour thanks to chromatophores: skin cells with contractile fibres that move pigments around. The pigments are spread out to show their colour, or squeezed together to make the cell clear.

The artificial chromatophores developed by the Cambridge researchers are built on the same principle, but instead of contractile fibres, their colour-changing abilities rely on light-powered nano-mechanisms, and the ‘cells’ are microscopic drops of water.

When the material is heated above 32°C, the nanoparticles store large amounts of elastic energy in a fraction of a second, as the polymer coatings expel all the water and collapse. This forces the nanoparticles to bind together into tight clusters. When the material is cooled, the polymers take on water and expand, and the gold nanoparticles are strongly and quickly pushed apart, like a spring.

“Loading the nanoparticles into the microdroplets allows us to control the shape and size of the clusters, giving us dramatic colour changes,” said Dr Andrew Salmon from Cambridge’s Cavendish Laboratory, the study’s co-first author.

The geometry of the nanoparticles when they bind into clusters determines their apparent colour: when the nanoparticles are spread apart they are red, and when they cluster together they are dark blue. However, the droplets of water also compress the particle clusters, causing them to shadow each other and making the clustered state nearly transparent.

At the moment, the material developed by the Cambridge researchers is in a single layer, so is only able to change to a single colour. However, different nanoparticle materials and shapes could be used in extra layers to make a fully dynamic material, like real chameleon skin.

The researchers also observed that the artificial cells can ‘swim’ in simple ways, similar to the algae Volvox. Shining a light on one edge of the droplets causes the surface to peel towards the light, pushing it forward. Under stronger illumination, high pressure bubbles briefly form to push the droplets along a surface.

“This work is a big advance in using nanoscale technology to do biomimicry,” said co-author Sean Cormier. “We’re now working to replicate this on roll-to-roll films so that we can make metres of colour changing sheets. Using structured light we also plan to use the light-triggered swimming to ‘herd’ droplets. It will be really exciting to see what collective behaviours are generated.”

The research was funded by the European Research Council (ERC) and the Engineering and Physical Sciences Research Council (EPSRC).

Reference:
Andrew R Salmon et al. ‘Motile Artificial Chromatophores: Light-Triggered Nanoparticles for Microdroplet Locomotion and Color Change.’ Advanced Optical Materials (2019). DOI: 10.1002/adom.201900951

Materials provided by the University of Cambridge

Cambridge scientists reverse ageing process in rat brain stem cells


The results, published today in Nature, have far-reaching implications for how we understand the ageing process, and how we might develop much-needed treatments for age-related brain diseases.

As our bodies age, our muscles and joints can become stiff, making everyday movements more difficult. This study shows the same is true in our brains, and that age-related brain stiffening has a significant impact on the function of brain stem cells.

A multi-disciplinary research team, based at the Wellcome-MRC Cambridge Stem Cell Institute at the University of Cambridge, studied young and old rat brains to understand the impact of age-related brain stiffening on the function of oligodendrocyte progenitor cells (OPCs). These cells are a type of brain stem cell important for maintaining normal brain function, and for the regeneration of myelin – the fatty sheath that surrounds our nerves, which is damaged in multiple sclerosis (MS). The effects of age on these cells contribute to MS, but their function also declines with age in healthy people.

To determine whether the loss of function in aged OPCs was reversible, the researchers transplanted older OPCs from aged rats into the soft, spongy brains of younger animals. Remarkably, the older brain cells were rejuvenated, and began to behave like the younger, more vigorous cells.

To study this further, the researchers developed new materials in the lab with varying degrees of stiffness, and used these to grow and study the rat brain stem cells in a controlled environment. The materials were engineered to have a similar softness to either young or old brains.

To fully understand how brain softness and stiffness influence cell behaviour, the researchers investigated Piezo1 – a protein found on the cell surface, which informs the cell whether the surrounding environment is soft or stiff.

Dr Kevin Chalut, who co-led the research, said: “We were fascinated to see that when we grew young, functioning rat brain stem cells on the stiff material, the cells became dysfunctional and lost their ability to regenerate, and in fact began to function like aged cells. What was especially interesting, however, was that when the old brain cells were grown on the soft material, they began to function like young cells – in other words, they were rejuvenated.”

“When we removed Piezo1 from the surface of aged brain stem cells, we were able to trick the cells into perceiving a soft surrounding environment, even when they were growing on the stiff material,” explained Professor Robin Franklin, who co-led the research with Dr Chalut. “What’s more, we were able to delete Piezo1 in the OPCs within the aged rat brains, which led to the cells becoming rejuvenated and once again able to assume their normal regenerative function.”

Dr Susan Kohlhaas, Director of Research at the MS Society, who part funded the research, said: “MS is relentless, painful, and disabling, and treatments that can slow and prevent the accumulation of disability over time are desperately needed. The Cambridge team’s discoveries on how brain stem cells age and how this process might be reversed have important implications for future treatment, because it gives us a new target to address issues associated with aging and MS, including how to potentially regain lost function in the brain.”

This research was supported by the European Research Council, MS Society, Biotechnology and Biological Sciences Research Council, The Adelson Medical Research Foundation, Medical Research Council and Wellcome.

Reference:
M. Segel et al. ‘Niche stiffness underlies the ageing of central nervous system progenitor cells.’ Nature (2019).

Materials provided by the University of Cambridge

Machine learning to help develop self-healing robots that ‘feel pain’


The goal of the €3 million Self-healing soft robot (SHERO) project, funded by the European Commission, is to create a next-generation robot made from self-healing materials (flexible plastics) that can detect damage, take the necessary steps to temporarily heal itself and then resume its work – all without the need for human interaction.

Led by the Vrije Universiteit Brussel (VUB), the research consortium includes the Department of Engineering (University of Cambridge), École Supérieure de Physique et de Chimie Industrielles de la Ville de Paris (ESPCI), the Swiss Federal Laboratories for Materials Science and Technology (Empa), and the Dutch polymer manufacturer SupraPolix.

As part of the SHERO project, the Cambridge team, led by Dr Fumiya Iida from the Department of Engineering, is looking at integrating self-healing materials into soft robotic arms.

Dr Thomas George Thuruthel, also from the Department of Engineering, said self-healing materials could have future applications in modular robotics, educational robotics and evolutionary robotics where a single robot can be ‘recycled’ to generate a fresh prototype.

“We will be using machine learning to work on the modelling and integration of these self-healing materials, to include self-healing actuators and sensors, damage detection, localisation and controlled healing,” he said. “The adaptation of models after the loss of sensory data and during the healing process is another area we are looking to address. The end goal is to integrate the self-healing sensors and actuators into demonstration platforms in order to perform specific tasks.”

Professor Bram Vanderborght, from VUB, who is leading the project with scientists from the robotics research centre Brubotics and the polymer research lab FYSC, said: “We are obviously very pleased to be working on the next generation of robots. Over the past few years, we have already taken the first steps in creating self-healing materials for robots. With this research we want to continue and, above all, ensure that robots that are used in our working environment are safer, but also more sustainable. Due to the self-repair mechanism of this new kind of robot, complex, costly repairs may be a thing of the past.”

Materials provided by the University of Cambridge

The curious tale of the cancer ‘parasite’ that sailed the seas


‘Canine transmissible venereal tumour’ is a cancer that spreads between dogs through the transfer of living cancer cells, primarily during mating. The disease usually manifests as genital tumours in both male and female domestic dogs. It first arose in an individual dog, but survived beyond the death of the original dog by spreading to new dogs. The cancer is now found in dog populations worldwide, and is the oldest and most prolific cancer lineage known in nature.

One of the most remarkable aspects of these tumours is that their cells are those of the original dog in which the cancer arose, and not the carrier dog. The only differences between cells in the modern dogs’ tumours and cells in the original tumour are those that have arisen over time either through spontaneous changes in the cells’ DNA or through changes caused by carcinogens.

An international team of researchers, led by scientists at the Transmissible Cancer Group at the University of Cambridge, has compared differences in tumours taken from 546 dogs worldwide to try to understand how the disease arose and how it managed to spread around the world. Their results are published today in Science.

“This tumour has spread to almost every continent, evolving as it spreads,” says Adrian Baez-Ortega, a PhD student in the Transmissible Cancer Group, part of Cambridge’s Department of Veterinary Medicine. “Changes to its DNA tell a story of where it has been and when, almost like a historical travel journal.”

Using the data, they created a phylogenetic tree – a type of family tree of the different mutations in the tumours. This allowed them to estimate that the cancer first arose between 4,000 and 8,500 years ago, most likely in Asia or Europe. All of the modern tumours can be traced back to a common ancestor around 1,900 years ago.
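Dating a lineage in this way rests on a ‘molecular clock’: counting the mutations accumulated along a branch and dividing by an assumed mutation rate. The figures below are hypothetical placeholders chosen purely to show how uncertainty in the clock rate widens the age estimate into a range; they are not values from the study.

```python
# Back-of-envelope molecular-clock dating: lineage age is estimated as the
# number of accumulated clock-like mutations divided by an assumed rate.
# All numbers here are hypothetical, for illustration only.

def lineage_age_years(mutations: int, rate_per_year: float) -> float:
    """Estimate elapsed time from a mutation count and a clock rate."""
    return mutations / rate_per_year

# Suppose a tumour lineage carries ~200,000 clock-like mutations, and the
# assumed rate is only known to within a factor of two:
low_rate, high_rate = 25.0, 50.0   # mutations per year (hypothetical)
oldest = lineage_age_years(200_000, low_rate)     # slower clock -> older
youngest = lineage_age_years(200_000, high_rate)  # faster clock -> younger
print(f"estimated age: {youngest:,.0f}-{oldest:,.0f} years")
```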

The researchers say that the cancer first spread from Europe to the Americas around 500 years ago, when European settlers first arrived on the continent by sea. Almost all the tumours found today in North, Central and South America descend from this single introduction event.

From the Americas, the disease spread further, to Africa and back into the Indian subcontinent – almost all places that were, at the time, European colonies. For example, the cancer is seen on Réunion, an island where European travellers would stop off on the way to India. All of this evidence suggests that the tumour was spread by sea-faring dogs, transported through maritime activities.

While the findings related to the historical spread of the disease are interesting, it is the tumour’s evolution that particularly excites the researchers.

Recent developments in cancer biology have enabled scientists to look at the mutations in tumour DNA and identify unique signatures left by carcinogens. This allows them to see, for example, the damage that ultraviolet (UV) light causes.

Using these techniques, the researchers identified signatures for five different biological processes that have damaged the canine tumour over its history. Four of these, including exposure to UV light, are known processes already linked to human cancers. However, one of them – termed ‘Signature A’ – has a very distinctive mutational signature, different to any seen previously: it caused mutations only in the tumour’s distant past, several thousand years ago, and has never been seen since.
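Mutational signatures of this kind are conventionally extracted with non-negative matrix factorisation (NMF), which decomposes a matrix of mutation counts into a small set of non-negative ‘signature’ profiles plus per-tumour exposures to each. The sketch below runs a from-scratch NMF on synthetic counts; real analyses use the 96 trinucleotide mutation contexts and far larger cohorts, and this is not the study's own pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: 2 hidden signatures over 16 mutation types, mixed into
# 20 tumours with random exposures, observed through Poisson counting noise.
true_sigs = rng.dirichlet(np.ones(16), size=2)   # (2 signatures, 16 types)
exposures = rng.gamma(2.0, 50.0, size=(20, 2))   # (20 tumours, 2 signatures)
V = rng.poisson(exposures @ true_sigs)           # observed mutation counts

def nmf(V, k, iters=500, eps=1e-9):
    """Multiplicative-update NMF minimising squared reconstruction error."""
    n, m = V.shape
    W = rng.random((n, k)) + eps   # exposures
    H = rng.random((k, m)) + eps   # signatures
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, k=2)
recon_err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
print(f"relative reconstruction error: {recon_err:.3f}")
```

The multiplicative updates keep every entry non-negative, which is what lets the recovered rows of H be read as mutation-type profiles.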

“This is really exciting – we’ve never seen anything like the pattern caused by this carcinogen before,” says Dr Elizabeth Murchison, who leads the Transmissible Cancer Group at the University of Cambridge.

“It looks like the tumour was exposed to something thousands of years ago that caused changes to its DNA for some length of time and then disappeared. It’s a mystery what the carcinogen could be. Perhaps it was something present in the environment where the cancer first arose.”

Another intriguing discovery related to how the tumours evolve. There are two main types of selection in evolutionary theory – positive and negative. Positive selection is where mutations that provide an organism with a particular advantage are more likely to be passed down generations; negative selection is where mutations that are likely to have a deleterious effect are less likely to be passed on. Such selection tends to occur by way of sexual reproduction.

When the researchers analysed the tumours, they found no evidence of either positive or negative selection. This implies that the tumour will accumulate more and more potentially damaging mutations over time, making it less and less suited to its environment.
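Selection in genomes is commonly quantified with the dN/dS ratio: the per-site rate of protein-changing (non-synonymous) substitutions relative to silent (synonymous) ones. A value near 1 indicates neutrality, above 1 positive selection, and below 1 negative selection. The counts in this sketch are hypothetical, chosen to show the neutral case.

```python
# dN/dS: non-synonymous and synonymous substitution counts, each normalised
# by the number of sites at which such a substitution could occur.
# All counts below are hypothetical, for illustration only.

def dn_ds(non_syn_subs, non_syn_sites, syn_subs, syn_sites):
    """Ratio of per-site non-synonymous to per-site synonymous rates."""
    dN = non_syn_subs / non_syn_sites
    dS = syn_subs / syn_sites
    return dN / dS

# A neutral-looking genome: substitutions in proportion to available sites.
omega = dn_ds(non_syn_subs=300, non_syn_sites=3000,
              syn_subs=100, syn_sites=1000)
print(f"dN/dS = {omega:.2f}")
```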

Baez-Ortega explains: “Normally, we see selection pressures acting on an organism’s evolution. These canine tumours are foreign bodies, so one would expect to see a battle between them and the dog’s immune system, leading to only the strongest tumours successfully being transmitted. This doesn’t seem to be happening here.

“This cancer ‘parasite’ has proved remarkably successful at surviving over thousands of years, yet is steadily deteriorating. It suggests that its days may be numbered – but it’s likely to be tens of thousands of years before it disappears.”

Materials provided by the University of Cambridge

Robot uses machine learning to harvest lettuce

The ‘Vegebot’, developed by a team at the University of Cambridge, was initially trained to recognise and harvest iceberg lettuce in a lab setting. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.


Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The results are published in the Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the UK, iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”

“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr Fumiya Iida.

The Vegebot first identifies the ‘target’ crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested, and finally cuts the lettuce from the rest of the plant without crushing it so that it is ‘supermarket ready’. “For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image, and then for each lettuce, classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.
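The two-stage structure – localise every lettuce in the frame, then decide whether each one should be harvested – can be sketched as a toy pipeline. Everything below (the synthetic ‘image’, the blob detector, and the size threshold standing in for the trained classifier) is invented for illustration and is not the Vegebot's actual code.

```python
import numpy as np

def find_blobs(mask):
    """Stage 1: label connected regions of a boolean mask (4-connectivity
    flood fill), standing in for lettuce detection in an overhead image."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j] and labels[i, j] == 0:
                current += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if (0 <= y < mask.shape[0] and 0 <= x < mask.shape[1]
                            and mask[y, x] and labels[y, x] == 0):
                        labels[y, x] = current
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, current

def classify(size_px, min_size=6):
    """Stage 2: reject candidates too small to be mature heads. A real
    system would use a trained model, not a single size threshold."""
    return "harvest" if size_px >= min_size else "reject"

# Synthetic overhead "image": True marks green pixels. Two lettuces.
field = np.zeros((10, 12), dtype=bool)
field[1:4, 1:4] = True     # mature head, 9 px
field[6:8, 8:10] = True    # immature head, 4 px

labels, n = find_blobs(field)
decisions = [classify(np.sum(labels == k)) for k in range(1, n + 1)]
print(n, decisions)
```

The point of separating the stages is that each can be retrained independently: the same detector can feed a different classifier for a different crop.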

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognise healthy lettuces in the lab, it was then trained in the field, in a variety of weather conditions, on thousands of real lettuces.

A second camera on the Vegebot is positioned near the cutting blade and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.

“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.

In future, robotic harvesters could help address problems with labour shortages in agriculture, and could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded. However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”

Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6m for the new CDT, which will support at least 50 PhD students.

Reference:
Simon Birrell et al. ‘A Field-Tested Robotic Harvesting System for Iceberg Lettuce.’ Journal of Field Robotics (2019). DOI: 10.1002/rob.21888

Materials provided by the University of Cambridge

Graphene goes to space


The Materials Science Experiment Rocket (MASER) 14 was launched from the European Space Centre in Esrange, Sweden, in collaboration with the European Space Agency (ESA) and the Swedish Space Corporation (SSC).


The experiment aims to test the possibilities of printing graphene inks in space. Graphene inks can be used in the production of batteries, supercapacitors, printed electronics, and more. If researchers can demonstrate how these inks work in space, astronauts could potentially print their own devices on the go, or repair electronics with graphene ink printers.

The experiments conducted this week were a collaboration led by the Université Libre de Bruxelles, together with Cambridge, Pisa, and ESA. The inks tested in the experiments were produced by the research group of Professor Andrea Ferrari, Director of the Cambridge Graphene Centre.

Studying the different self-assembly modes of graphene into functional patterns in zero gravity will enable the fabrication of graphene electronic devices during long-term space missions, as well as help understand fundamental properties of graphene printing on Earth.

Cambridge researchers pioneered the use of liquid phase exfoliation, one of the most common means of producing graphene, to prepare inks from graphene and related materials. Such inks are now used to print devices ranging from flexible electronic sensors and gauges to batteries and supercapacitors.

The experiments will allow researchers to better understand the fundamentals of the printing process on Earth by removing the effects of gravity and studying how graphene flakes self-assemble.

These experiments are a first step towards making graphene printing available for long term space exploration, since astronauts may need to print electronic devices on demand during long-term missions. Graphene-based composites may also be used to offer radiation protection, a compulsory requirement for human spaceflight, for example during Mars-bound missions.

During its short flight, the MASER rocket experiences microgravity for six minutes, during which time the researchers carry out the tests of graphene’s properties. When the rocket returns to Earth, the samples are retrieved and analyses are carried out. The rocket tests are an extension of a zero-gravity parabolic flight in May 2018, where experiments were conducted during just 24 seconds of microgravity.

“There is no better way to validate graphene’s potential than to send it to the environment it will be used in,” said Carlo Iorio, leader of the space activities carried out by the Graphene Flagship, and a researcher at Graphene Flagship partner Université Libre de Bruxelles. “Graphene has unique conductivity properties that scientists are continuing to take advantage of in new processes, devices and in this case, coatings. Experiments like these are fundamental to graphene’s success and integral for building the material’s reputation as the leading material for space applications.”

“The Graphene Flagship has pioneered the exploration of graphene for space applications since 2017,” said Ferrari, who is also Science and Technology Officer of the Graphene Flagship and Chair of its Management Panel. “With three microgravity campaigns in parabolic flights already concluded and a fourth one on the way, this rocket launch is the next step towards our major milestone: bringing graphene to the International Space Station. Space is the limit for graphene. Or, is it?”

Materials provided by the University of Cambridge


DNA from 31,000-year-old milk teeth leads to discovery of new group of ancient Siberians

The finding was part of a wider study which also discovered that 10,000-year-old human remains found at another site in Siberia are genetically related to Native Americans – the first time such close genetic links have been discovered outside of the US.


The international team of scientists, led by Professor Eske Willerslev, who holds positions at St John’s College, University of Cambridge, and is director of The Lundbeck Foundation Centre for GeoGenetics at the University of Copenhagen, has named the new people group the ‘Ancient North Siberians’ and described their existence as ‘a significant part of human history’.

The DNA was recovered from the only human remains discovered from the era – two tiny milk teeth – found at a large archaeological site in Russia near the Yana River. The site, known as the Yana Rhinoceros Horn Site (RHS), was discovered in 2001 and features more than 2,500 artefacts of animal bones and ivory, along with stone tools and evidence of human habitation.

The discovery is published as part of a wider study in Nature and shows the Ancient North Siberians endured extreme conditions in the region 31,000 years ago and survived by hunting woolly mammoths, woolly rhinoceroses, and bison.

Professor Willerslev said: “These people were a significant part of human history, they diversified almost at the same time as the ancestors of modern-day Asians and Europeans and it’s likely that at one point they occupied large regions of the northern hemisphere.”

Dr Martin Sikora, of The Lundbeck Foundation Centre for GeoGenetics and first author of the study, added: “They adapted to extreme environments very quickly, and were highly mobile. These findings have changed a lot of what we thought we knew about the population history of northeastern Siberia but also what we know about the history of human migration as a whole.”

Researchers estimate that the population at the site would have numbered around 40 people, with a wider population of around 500. Genetic analysis of the milk teeth revealed that the two individuals sequenced showed no evidence of the inbreeding that was occurring in the declining Neanderthal populations of the time.

The complex population dynamics during this period and genetic comparisons to other people groups, both ancient and recent, are documented as part of the wider study which analysed 34 samples of human genomes found in ancient archaeological sites across northern Siberia and central Russia.

Professor Laurent Excoffier from the University of Bern, Switzerland, said: “Remarkably, the Ancient North Siberian people are more closely related to Europeans than Asians and seem to have migrated all the way from Western Eurasia soon after the divergence between Europeans and Asians.”

Scientists found that the Ancient North Siberians generated the mosaic genetic make-up of contemporary people who inhabit a vast area across northern Eurasia and the Americas – providing the ‘missing link’ in understanding the genetics of Native American ancestry.

It is widely accepted that humans first made their way to the Americas from Siberia into Alaska via a land bridge spanning the Bering Strait, which was submerged at the end of the last Ice Age. The researchers were able to pinpoint some of these ancestors as Asian people groups who mixed with the Ancient North Siberians.

Professor David Meltzer, Southern Methodist University, Dallas, one of the paper’s authors, explained: “We gained important insight into population isolation and admixture that took place during the depths of the Last Glacial Maximum – the coldest and harshest time of the Ice Age – and ultimately the ancestry of the peoples who would emerge from that time as the ancestors of the indigenous people of the Americas.”

This discovery was based on DNA analysis of the 10,000-year-old remains of a male found at a site near the Kolyma River in Siberia. The individual derives his ancestry from a mixture of Ancient North Siberian and East Asian DNA, which is very similar to that found in Native Americans. It is the first time human remains this closely related to Native American populations have been discovered outside of the US.

Professor Willerslev added: “The remains are genetically very close to the ancestors of Paleo-Siberian speakers and close to the ancestors of Native Americans. It is an important piece in the puzzle of understanding the ancestry of Native Americans as you can see the Kolyma signature in the Native Americans and Paleo-Siberians. This individual is the missing link of Native American ancestry.”

Materials provided by University of Cambridge

Industries using non-profit organisations

Food and drinks industry uses non-profit organisation to campaign against public health policies, study finds

The study, published today in the journal Globalization and Health, analysed over 17,000 pages of emails obtained through Freedom of Information requests made between 2015 and 2018. The documents captured exchanges between academics at US universities and senior figures at a non-profit organisation called the International Life Science Institute, or ILSI.


Comprising 18 bodies, each of which covers a specific topic or part of the globe, ILSI has always maintained its independence and scientific rigour, despite being funded by multinational corporations such as Nestle, General Mills, Mars Inc, Monsanto, and Coca-Cola.

Founded by former Coca-Cola senior vice president Alex Malaspina in 1978, ILSI states on its website that none of its bodies “conduct lobbying activities or make policy recommendations”. As a non-profit organisation, ILSI is currently exempt from taxation under US Internal Revenue codes.

However, researchers from the University of Cambridge, London School of Hygiene and Tropical Medicine, University of Bocconi, and US Right to Know found emails explicitly discussing tactics for countering public health policies around sugar reduction, with one warning that “[T]his threat to our business is serious”.

These emails include exchanges with an epidemiology professor at the University of Washington, as well as with the then director of heart disease and stroke prevention at the US Centers for Disease Control and Prevention, all strategising how best to approach the World Health Organisation’s then Director-General, Dr Margaret Chan, to shift her position on sugar-sweetened products.

“It has been previously suggested that the International Life Sciences Institute is little more than a pseudo-scientific front group for some of the biggest multinational food and drink corporations globally,” said the study’s lead author Dr Sarah Steele, a researcher at Cambridge’s Department of Politics and International Studies.

“Our findings add to the evidence that this non-profit organisation has been used by its corporate backers for years to counter public health policies. We contend that the International Life Sciences Institute should be regarded as an industry group – a private body – and regulated as such, not as a body acting for the greater good.”

In one email, Malaspina, who also served as long-time president at ILSI, described new US guidelines bolstering child and adult education on limiting sugar intake as a “real disaster!”. He writes: “We have to consider how to become ready to mount a strong defence”. Suzanne Harris, then executive director of ILSI, was among the email’s recipients.

James Hill, then director of the Center for Human Nutrition at the University of Colorado, was involved in a separate exchange on the issue of defending industry from the health consequences of its products. Hill argues for greater funding for ILSI from industry as part of “dealing aggressively with this issue”. He writes that, if companies keep their heads down, “our opponents will win and we will all lose”.

The FOI emails also suggest ILSI constructs campaigns favourable to artificial sweeteners. Emails reveal Malaspina passing on praise from another former ILSI president to a former Coca-Cola employee and a professor, describing both as “the architects to plan and execute the studies showing saccharin is not a carcinogen”, resulting in the reversal of many government bans.

The FOI responses suggest that ILSI operates strategically with other industry-funded entities, including IFIC, the International Food Information Council, a science communication non-profit organisation. “IFIC is a kind of sister entity to ILSI,” writes Malaspina. “ILSI generates the scientific facts and IFIC communicates them to the media and public.”

“The emails suggest that both ILSI and IFIC act to counter unfavourable policies and positions, while promoting industry-favourable science under a disguised front, including to the media,” said Steele.

In fact, the emails suggest ILSI considers sanctioning its own regional subsidiaries when they fail to promote the agreed industry-favourable messaging. The correspondence reveals discussion of suspending ILSI’s Mexico branch from the parent organisation after soft drink taxation was debated at a conference it sponsored. Mexico has one of the highest adult obesity rates in the world.

Email conversations between Malaspina and the CDC’s Barbara Bowman are open about the need to get the WHO to “start working with ILSI again” and to take into account “lifestyle changes” as well as sugary foods when combatting obesity.

Further exchanges between Malaspina and Washington Professor Adam Drewnowski support ILSI’s role in this. Drewnowski writes of Dr Chan that “we ought to start with some issue where ILSI and WHO are in agreement” to help “get her to the table”.

In a further email, Malaspina points out that he had meetings with the two previous heads of the WHO, going back to the mid-90s, and that if they do not start a dialogue with Dr Chan “she will continue to blast us with significant negative consequences on a global basis”.

The tide has begun to turn against ILSI in recent years. The WHO quietly ended its “special relations” with ILSI in 2017, and ILSI’s links to the European Food Safety Authority were the subject of an enquiry at the European Parliament. The CDC’s Bowman retired in 2016, in the wake of revelations about her close ties with ILSI. Last year, long-time ILSI funder Mars Inc. stopped supporting the organisation. Much of the study’s correspondence precedes these events.

“It becomes clear from the emails and forwards that ILSI is seen as central to pushing pro-industry content to international organisations to support approaches that uncouple sugary foods and obesity,” added Steele.

“Our analysis of ILSI serves as a caution to those involved in global health governance to be wary of putatively independent research groups, and to practice due diligence before relying upon their funded studies.”

Materials provided by University of Cambridge

Alzheimer's detection by virtual reality

Virtual reality can spot navigation problems in early Alzheimer’s disease

Virtual reality (VR) can identify early Alzheimer’s disease more accurately than ‘gold standard’ cognitive tests currently in use, suggests new research from the University of Cambridge.


The study highlights the potential of new technologies to help diagnose and monitor conditions such as Alzheimer’s disease, which affects more than 525,000 people in the UK.

In 2014, Professor John O’Keefe of UCL was jointly awarded the Nobel Prize in Physiology or Medicine for ‘discoveries of cells that constitute a positioning system in the brain’. Essentially, this means that the brain contains a mental ‘satnav’ of where we are, where we have been, and how to find our way around.

A key component of this internal satnav is a region of the brain known as the entorhinal cortex. This is one of the first regions to be damaged in Alzheimer’s disease, which may explain why ‘getting lost’ is one of the first symptoms of the disease. However, the pen-and-paper cognitive tests used in clinic to diagnose the condition are unable to test for navigation difficulties.

In collaboration with Professor Neil Burgess at UCL, a team of scientists at the Department of Clinical Neurosciences at the University of Cambridge led by Dr Dennis Chan, previously Professor O’Keefe’s PhD student, developed and trialled a VR navigation test in patients at risk of developing dementia. The results of their study are published today in the journal Brain.

In the test, a patient dons a VR headset and undertakes a test of navigation while walking within a simulated environment. Successful completion of the task requires intact functioning of the entorhinal cortex, so Dr Chan’s team hypothesised that patients with early Alzheimer’s disease would be disproportionately affected on the test.

The team recruited 45 patients with mild cognitive impairment (MCI) from the Cambridge University Hospitals NHS Trust Mild Cognitive Impairment and Memory Clinics. Patients with MCI typically exhibit memory impairment, but while MCI can indicate early Alzheimer’s, it can also be caused by other conditions such as anxiety and even normal ageing. As such, establishing the cause of MCI is crucial for determining whether affected individuals are at risk of developing dementia in the future.

The researchers took samples of cerebrospinal fluid (CSF) to look for biomarkers of underlying Alzheimer’s disease in their MCI patients, with 12 testing positive. The researchers also recruited 41 age-matched healthy controls for comparison.

All of the patients with MCI performed worse on the navigation task than the healthy controls. However, the study yielded two crucial additional observations. First, MCI patients with positive CSF markers – indicating the presence of Alzheimer’s disease, thus placing them at risk of developing dementia – performed worse than those with negative CSF markers at low risk of future dementia.

Secondly, the VR navigation task was better at differentiating between these low- and high-risk MCI patients than a battery of currently used tests considered the gold standard for the diagnosis of early Alzheimer’s.

“These results suggest a VR test of navigation may be better at identifying early Alzheimer’s disease than tests we use at present in clinic and in research studies,” says Dr Chan.

VR could also help clinical trials of future drugs aimed at slowing down, or even halting, progression of Alzheimer’s disease. Currently, the first stage of drug trials involves testing in animals, typically mouse models of the disease. To determine whether treatments are effective, scientists study their effect on navigation using tests such as a water maze, where mice have to learn the location of hidden platforms beneath the surface of opaque pools of water. If new drugs are found to improve memory on this task, they proceed to trials in human subjects, but using word and picture memory tests. This lack of comparability of memory tests between animal models and human participants represents a major problem for current clinical trials.

“The brain cells underpinning navigation are similar in rodents and humans, so testing navigation may allow us to overcome this roadblock in Alzheimer’s drug trials and help translate basic science discoveries into clinical use,” says Dr Chan. “We’ve wanted to do this for years, but it’s only now that VR technology has evolved to the point that we can readily undertake this research in patients.”

In fact, Dr Chan believes technology could play a crucial role in diagnosing and monitoring Alzheimer’s disease. He is working with Professor Cecilia Mascolo at Cambridge’s Centre for Mobile, Wearable Systems and Augmented Intelligence to develop apps for detecting the disease and monitoring its progression. These apps would run on smartphones and smartwatches. As well as looking for changes in how we navigate, the apps will track changes in other everyday activities such as sleep and communication.

“We know that Alzheimer’s affects the brain long before symptoms become apparent,” says Dr Chan. “We’re getting to the point where everyday tech can be used to spot the warning signs of the disease well before we become aware of them.

“We live in a world where mobile devices are almost ubiquitous, and so app-based approaches have the potential to diagnose Alzheimer’s disease at minimal extra cost and at a scale way beyond that of brain scanning and other current diagnostic approaches.”

The VR research was funded by the Medical Research Council and the Cambridge NIHR Biomedical Research Centre. The app-based research is funded by Wellcome, the European Research Council and the Alan Turing Institute.

Reference
Howett, D, Castegnaro, A, et al. Differentiation of mild cognitive impairment using an entorhinal cortex-based test of VR navigation. Brain; 28 May 2019; DOI: 10.1093/brain/awz116

Materials provided by University of Cambridge