Guided by AI, robotic platform automates molecule manufacture

Guided by artificial intelligence and powered by a robotic platform, a system developed by MIT researchers moves a step closer to automating the production of small molecules that could be used in medicine, solar energy, and polymer chemistry.

The system, described in the August 8 issue of Science, could free up bench chemists from a variety of routine and time-consuming tasks, and may suggest possibilities for how to make new molecular compounds, according to study co-leaders Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering, and Timothy F. Jamison, the Robert R. Taylor Professor of Chemistry and associate provost at MIT.

The technology “has the promise to help people cut out all the tedious parts of molecule building,” including looking up potential reaction pathways and building the components of a molecular assembly line each time a new molecule is produced, says Jensen.

“And as a chemist, it may give you inspirations for new reactions that you hadn’t thought about before,” he adds.

Other MIT authors on the Science paper include Connor W. Coley, Dale A. Thomas III, Justin A. M. Lummiss, Jonathan N. Jaworski, Christopher P. Breen, Victor Schultz, Travis Hart, Joshua S. Fishman, Luke Rogers, Hanyu Gao, Robert W. Hicklin, Pieter P. Plehiers, Joshua Byington, John S. Piotti, William H. Green, and A. John Hart.

From inspiration to recipe to finished product

The new system combines three main steps. First, software guided by artificial intelligence suggests a route for synthesizing a molecule, then expert chemists review this route and refine it into a chemical “recipe,” and finally the recipe is sent to a robotic platform that automatically assembles the hardware and performs the reactions that build the molecule.

Coley and his colleagues have been working for more than three years to develop the open-source software suite that suggests and prioritizes possible synthesis routes. At the heart of the software are several neural network models, which the researchers trained on millions of previously published chemical reactions drawn from the Reaxys and U.S. Patent and Trademark Office databases. The software uses these data to identify the reaction transformations and conditions that it believes will be suitable for building a new compound.
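The general pattern of such model-guided route suggestion can be illustrated with a toy sketch. The code below is not the team's software; the function names, templates, and scores are invented. It shows, under those assumptions, how a learned scorer might rank candidate reaction templates for a target molecule so the top-ranked ones become proposed routes:

```python
from dataclasses import dataclass, field

@dataclass
class Route:
    target: str
    steps: list = field(default_factory=list)   # (template, precursors) pairs
    score: float = 1.0

def score_template(molecule: str, template: str) -> float:
    """Placeholder for a trained neural model estimating P(template | molecule).
    The real system learns this from millions of Reaxys/USPTO reactions."""
    return 0.5  # a real model would return a learned probability

def apply_template(molecule: str, template: str) -> list[str]:
    """Placeholder: map a product back to candidate precursor molecules."""
    return [f"{molecule}_precursor_via_{template}"]

def suggest_routes(target: str, templates: list[str], beam: int = 3) -> list[Route]:
    """One-step retrosynthesis: rank reaction templates by model score."""
    candidates = []
    for t in templates:
        s = score_template(target, t)
        candidates.append(Route(target, [(t, apply_template(target, t))], s))
    return sorted(candidates, key=lambda r: r.score, reverse=True)[:beam]

# Hypothetical usage: propose ranked disconnections for a target.
for route in suggest_routes("aspirin", ["esterification", "acylation", "nitration"]):
    print(route.score, route.steps)
```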

“It helps make high-level decisions about what kinds of intermediates and starting materials to use, and then slightly more detailed analyses about what conditions you might want to use and if those reactions are likely to be successful,” says Coley.

“One of the primary motivations behind the design of the software is that it doesn’t just give you suggestions for molecules we know about or reactions we know about,” he notes. “It can generalize to new molecules that have never been made.”

Chemists then review the suggested synthesis routes produced by the software to build a more complete recipe for the target molecule. The chemists sometimes need to perform lab experiments or tinker with reagent concentrations and reaction temperatures, among other changes.

“They take some of the inspiration from the AI and convert that into an executable recipe file, largely because the chemical literature at present does not have enough information to move directly from inspiration to execution on an automated system,” Jamison says.

The final recipe is then loaded onto a platform where a robotic arm assembles modular reactors, separators, and other processing units into a continuous flow path, connecting pumps and lines that bring in the molecular ingredients.

“You load the recipe — that’s what controls the robotic platform — you load the reagents on, and press go, and that allows you to generate the molecule of interest,” says Thomas. “And then when it’s completed, it flushes the system and you can load the next set of reagents and recipe, and allow it to run.”
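As a rough illustration of that workflow, the sketch below encodes a hypothetical "recipe file" and a runner that steps through it. The module names, step fields, and conditions are invented for illustration and are not the format used by the MIT platform:

```python
# Hypothetical recipe: the hardware modules the arm should assemble,
# plus the ordered processing steps to run through them.
recipe = {
    "target": "lidocaine",
    "modules": ["reactor_A", "separator_1"],
    "steps": [
        {"module": "reactor_A", "reagents": ["amine", "acyl chloride"],
         "temp_C": 90, "residence_min": 10},
        {"module": "separator_1", "operation": "liquid-liquid extraction"},
    ],
}

def run_recipe(recipe: dict) -> None:
    """Load recipe, run each step in order, then flush for the next run."""
    print(f"Assembling flow path for {recipe['target']}: {recipe['modules']}")
    for step in recipe["steps"]:
        print(f"  running {step['module']}: {step}")
    print("Flushing system before loading the next recipe.")

run_recipe(recipe)
```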

Unlike the continuous flow system the researchers presented last year, which had to be manually configured after each synthesis, the new system is entirely configured by the robotic platform.

“This gives us the ability to sequence one molecule after another, as well as generate a library of molecules on the system, autonomously,” says Jensen.

The design for the platform, which is about two cubic meters in size — slightly smaller than a standard chemical fume hood — resembles a telephone switchboard and operator system that moves connections between the modules on the platform.

“The robotic arm is what allowed us to manipulate the fluidic paths, which reduced the number of process modules and fluidic complexity of the system, and by reducing the fluidic complexity we can increase the molecular complexity,” says Thomas. “That allowed us to add additional reaction steps and expand the set of reactions that could be completed on the system within a relatively small footprint.”

Toward full automation

The researchers tested the full system by creating 15 different medicinal small molecules of varying synthetic complexity, with processes taking anywhere from two hours for the simplest creations to about 68 hours for runs that manufactured multiple compounds.

The team synthesized a variety of compounds: aspirin and the antibiotic secnidazole in back-to-back processes; the painkiller lidocaine and the antianxiety drug diazepam in back-to-back processes using a common feedstock of reagents; the blood thinner warfarin and the Parkinson’s disease drug safinamide, to show how the software could design compounds with similar molecular components but differing 3-D structures; and a family of five ACE inhibitor drugs and a family of four nonsteroidal anti-inflammatory drugs.

“I’m particularly proud of the diversity of the chemistry and the kinds of different chemical reactions,” says Jamison, noting that the system handled about 30 different reactions, compared to about 12 in the previous continuous flow system.

“We are really trying to close the gap between idea generation from these programs and what it takes to actually run a synthesis,” says Coley. “We hope that next-generation systems will further increase the fraction of their time and effort that scientists can focus on creativity and design.”

Materials provided by Massachusetts Institute of Technology

Automated system generates robotic parts for novel tasks

An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.

In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators — devices that mechanically control robotic systems in response to electrical signals — that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it’s activated, however, it portrays the famous Edvard Munch painting “The Scream.” The researchers also 3-D printed floating water lilies with petals equipped with arrays of actuators and hinges that fold up in response to magnetic fields run through conductive fluids.

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility and magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials. Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3-D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” says first author Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming. “You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram says.

Joining Sundaram on the paper are: Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the “combinatorial explosion”

Robotic actuators today are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.

Adding to that complexity, new 3-D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high. “What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram says.

In their work, the researchers first customized three polymer materials with specific properties they needed to build their actuators: color, magnetization, and rigidity. In the end, they produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed. Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.
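That accept-or-revert loop can be sketched in a few lines. The toy below reassigns voxel materials at random and keeps only changes that reduce the mismatch with target shades; the darkness values and the trivial `render` function are invented stand-ins for the real appearance simulation over millions of voxels:

```python
import random

MATERIALS = {"clear": 0.0, "flex": 0.4, "magnetic": 1.0}  # toy darkness values

def render(column: list[str]) -> float:
    """Toy appearance model: a column's shade is its mean voxel darkness."""
    return sum(MATERIALS[m] for m in column) / len(column)

def error(columns, target_shades) -> float:
    """Squared mismatch between rendered and target shades: the error signal."""
    return sum((render(c) - t) ** 2 for c, t in zip(columns, target_shades))

random.seed(0)
target = [0.1, 0.8, 0.5]                       # desired shade per column
columns = [[random.choice(list(MATERIALS)) for _ in range(8)] for _ in target]

best = error(columns, target)
for _ in range(10_000):
    i, j = random.randrange(len(columns)), random.randrange(8)
    old = columns[i][j]
    columns[i][j] = random.choice(list(MATERIALS))  # propose a reassignment
    new = error(columns, target)
    if new <= best:
        best = new                                   # keep the improvement
    else:
        columns[i][j] = old                          # revert the change
print(f"final error: {best:.4f}")
```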

Voxel by voxel

To compute the actuator’s appearance at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels. Actuators can be fabricated with more than 100 voxel layers, so a column can contain more than 100 voxels, and different sequences of materials in a column radiate a different shade of gray when the actuator is flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone. The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram says. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
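A toy version of that flat-versus-tilted shading calculation is below: a ray attenuates as it passes through the voxels in its path, and tilting the grid makes the ray drift across neighboring columns, changing the transmitted shade. The opacity values and grid are invented for illustration:

```python
OPACITY = {"clear": 0.02, "flex": 0.10, "magnetic": 0.35}  # assumed per-voxel opacity

def shade(grid: list[list[str]], tilt_shift: float = 0.0) -> list[float]:
    """Transmitted light per entry column (1.0 = fully bright).
    grid[layer][column]; tilt_shift = columns the ray drifts per layer of depth."""
    n_layers, n_cols = len(grid), len(grid[0])
    out = []
    for col in range(n_cols):
        transmitted = 1.0
        for layer in range(n_layers):
            c = int(col + tilt_shift * layer) % n_cols  # ray drifts when tilted
            transmitted *= 1.0 - OPACITY[grid[layer][c]]
        out.append(transmitted)
    return out

grid = [["magnetic", "clear", "flex"]] * 4 + [["clear", "clear", "magnetic"]] * 4
print("flat:  ", [f"{v:.2f}" for v in shade(grid)])
print("tilted:", [f"{v:.2f}" for v in shade(grid, tilt_shift=0.5)])
```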

To fabricate the actuators, the researchers built a custom 3-D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.
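Conceptually, translating the optimized voxel grid into print commands is a simple mapping, as the hypothetical sketch below suggests: each voxel in a layer becomes one droplet-firing command at the corresponding position. The 30-micron droplet size follows the article; the function and data are invented:

```python
def plan_layer(layer: list[list[str]], droplet_um: float = 30.0):
    """Yield (x_um, y_um, material) drop-on-demand commands for one voxel layer."""
    for y, row in enumerate(layer):
        for x, material in enumerate(row):
            yield (x * droplet_um, y * droplet_um, material)

layer = [["clear", "magnetic"], ["flex", "clear"]]
for cmd in plan_layer(layer):
    print(cmd)   # the printer would fire one droplet per command, layer by layer
```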

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight and lift, and other metrics. “We’re not yet able to print wings or anything on that scale, or with those materials. But I think this is a first step toward that goal,” Sundaram says.

Materials provided by Massachusetts Institute of Technology

Robot uses machine learning to harvest lettuce

The ‘Vegebot’, developed by a team at the University of Cambridge, was initially trained to recognize and harvest iceberg lettuce in a lab setting. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce which are particularly challenging to harvest mechanically. The results are published in The Journal of Field Robotics.

Crops such as potatoes and wheat have been harvested mechanically at scale for decades, but many other crops have to date resisted automation. Iceberg lettuce is one such crop. Although it is the most common type of lettuce grown in the UK, iceberg is easily damaged and grows relatively flat to the ground, presenting a challenge for robotic harvesters.

“Every field is different, every lettuce is different,” said co-author Simon Birrell from Cambridge’s Department of Engineering. “But if we can make a robotic harvester work with iceberg lettuce, we could also make it work with many other crops.”

“At the moment, harvesting is the only part of the lettuce life cycle that is done manually, and it’s very physically demanding,” said co-author Julia Cai, who worked on the computer vision components of the Vegebot while she was an undergraduate student in the lab of Dr. Fumiya Iida.

The Vegebot first identifies the ‘target’ crop within its field of vision, then determines whether a particular lettuce is healthy and ready to be harvested, and finally cuts the lettuce from the rest of the plant without crushing it so that it is ‘supermarket ready’. “For a human, the entire process takes a couple of seconds, but it’s a really challenging problem for a robot,” said co-author Josie Hughes.

The Vegebot has two main components: a computer vision system and a cutting system. The overhead camera on the Vegebot takes an image of the lettuce field and first identifies all the lettuces in the image, and then for each lettuce, classifies whether it should be harvested or not. A lettuce might be rejected because it’s not yet mature, or it might have a disease that could spread to other lettuces in the harvest.

The researchers developed and trained a machine learning algorithm on example images of lettuces. Once the Vegebot could recognise healthy lettuces in the lab, it was then trained in the field, in a variety of weather conditions, on thousands of real lettuces.
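The two-stage pipeline described above, detecting every lettuce in the frame and then classifying each one as harvest-ready or not, can be sketched as follows. This is not the Vegebot's code: `detect` and `features` are placeholders for its trained vision components, scikit-learn stands in for whatever model the team used, and the training data here is random stand-in data:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def detect(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Placeholder detector: return lettuce bounding boxes (x, y, w, h)."""
    return [(10, 20, 64, 64), (120, 40, 64, 64)]

def features(image: np.ndarray, box) -> np.ndarray:
    """Placeholder per-lettuce features (e.g. color/texture statistics)."""
    x, y, w, h = box
    patch = image[y:y + h, x:x + w]
    return np.array([patch.mean(), patch.std()])

# Train on stand-in data; the real system trained on thousands of
# example images in the lab and in the field.
rng = np.random.default_rng(0)
X_train = rng.random((200, 2))
y_train = (X_train[:, 0] > 0.5).astype(int)      # 1 = harvest-ready (toy label)
clf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

field_image = rng.random((256, 256))
for box in detect(field_image):
    ready = clf.predict([features(field_image, box)])[0]
    print(box, "harvest" if ready else "skip")   # reject immature/diseased heads
```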

A second camera on the Vegebot is positioned near the cutting blade and helps ensure a smooth cut. The researchers were also able to adjust the pressure in the robot’s gripping arm so that it held the lettuce firmly enough not to drop it, but not so firm as to crush it. The force of the grip can be adjusted for other crops.
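The grip-adjustment idea can be pictured as a small closed loop that ramps gripper pressure until a force reading falls inside a crop-specific band, firm enough to hold without crushing. The thresholds and the linear sensor model below are invented for illustration:

```python
def grip(read_force, target_n=(2.0, 4.0), step=0.1, max_cmd=10.0) -> float:
    """Increase the pressure command until the sensed force enters the safe band."""
    cmd = 0.0
    while cmd < max_cmd:
        cmd += step
        if target_n[0] <= read_force(cmd) <= target_n[1]:
            return cmd                    # holding force is in the safe band
    raise RuntimeError("could not reach target grip force")

# Pretend sensor: force grows linearly with the pressure command.
print(grip(lambda cmd: 0.8 * cmd))
```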

“We wanted to develop approaches that weren’t necessarily specific to iceberg lettuce so that they can be used for other types of above-ground crops,” said Iida, who leads the team behind the research.

In future, robotic harvesters could help address problems with labour shortages in agriculture, and could also help reduce food waste. At the moment, each field is typically harvested once, and any unripe vegetables or fruits are discarded. However, a robotic harvester could be trained to pick only ripe vegetables, and since it could harvest around the clock, it could perform multiple passes on the same field, returning at a later date to harvest the vegetables that were unripe during previous passes.

“We’re also collecting lots of data about lettuce, which could be used to improve efficiency, such as which fields have the highest yields,” said Hughes. “We’ve still got to speed our Vegebot up to the point where it could compete with a human, but we think robots have lots of potential in agri-tech.”

Iida’s group at Cambridge is also part of the world’s first Centre for Doctoral Training (CDT) in agri-food robotics. In collaboration with researchers at the University of Lincoln and the University of East Anglia, the Cambridge researchers will train the next generation of specialists in robotics and autonomous systems for application in the agri-tech sector. The Engineering and Physical Sciences Research Council (EPSRC) has awarded £6.6m for the new CDT, which will support at least 50 PhD students.

Reference:
Simon Birrell et al. ‘A Field-Tested Robotic Harvesting System for Iceberg Lettuce.’ Journal of Field Robotics (2019). DOI: 10.1002/rob.21888

Materials provided by the University of Cambridge

Tiny motor can “walk” to carry out tasks

Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?

Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.

Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can have numerous mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.

Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previous rigid and flexible types, along with electromagnetic parts, a coil, and a magnet. In the future, the team plans to make these out of still smaller basic part types.

Using this simple kit of tiny parts, Langford assembled them into a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, and a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials,” discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.
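One way to picture "digital materials" in software is as a graph of discrete, reversibly joined parts, as in the hypothetical sketch below. The five part types follow the article; the classes, IDs, and connector bookkeeping are invented:

```python
from dataclasses import dataclass

PART_TYPES = {"rigid", "flexible", "electromagnetic", "coil", "magnet"}

@dataclass(frozen=True)
class Part:
    pid: int
    kind: str

class Assembly:
    """An assembly is a graph: parts as nodes, reversible joints as edges."""
    def __init__(self):
        self.parts: dict[int, Part] = {}
        self.joints: set[frozenset[int]] = set()

    def add(self, part: Part) -> None:
        assert part.kind in PART_TYPES
        self.parts[part.pid] = part

    def join(self, a: int, b: int) -> None:
        self.joints.add(frozenset((a, b)))   # standard connector, reversible

    def disassemble(self) -> list[Part]:
        """Parts come apart intact, ready for reuse in the next device."""
        recovered = list(self.parts.values())
        self.parts.clear()
        self.joints.clear()
        return recovered

walker = Assembly()
for i, kind in enumerate(["rigid", "flexible", "coil", "magnet"]):
    walker.add(Part(i, kind))
for a, b in [(0, 1), (1, 2), (2, 3)]:
    walker.join(a, b)
print(len(walker.disassemble()), "parts recovered for reuse")
```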

The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.

Langford’s initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.

“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.

To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.

The simplicity and regularity of these structures makes it relatively easy for their assembly to be automated. To do that, Langford has developed a novel machine that’s like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project’s ultimate goal of “making an assembler that can assemble itself out of the parts that it’s assembling.”

“Standardization is an extremely important issue in microrobotics, to reduce the production costs and, as a result, to improve acceptance of this technology to the level of regular industrial robots,” says Sergej Fatikow, head of the Division of Microrobotics and Control Engineering, at the University of Oldenburg, Germany, who was not associated with this research. The new work “addresses assembling of sophisticated microrobotic systems from a small set of standard building blocks, which may revolutionize the field of microrobotics and open up numerous applications at small scales,” he says.

Materials provided by Massachusetts Institute of Technology

Robot circulatory system powers possibilities

Untethered robots suffer from a stamina problem. A possible solution: a circulating liquid – “robot blood” – to store energy and power its applications for sophisticated, long-duration tasks.

Humans and other complex organisms manage life through integrated systems. Humans store energy in fat reserves spread across the body, and an intricate circulatory system transports oxygen and nutrients to power trillions of cells.

But crack open the hood of an untethered robot and things are much more segmented: Over here is the solid battery and over there are the motors, with cooling systems and other components scattered throughout.

Cornell researchers have created a synthetic vascular system capable of pumping an energy-dense hydraulic liquid that stores energy, transmits force, operates appendages and provides structure, all in an integrated design.

“In nature, we see how long organisms can operate while doing sophisticated tasks. Robots can’t perform similar feats for very long,” said Rob Shepherd, associate professor of mechanical and aerospace engineering. “Our bio-inspired approach can dramatically increase the system’s energy density while allowing soft robots to remain mobile for far longer.”

Shepherd, director of the Organic Robotics Lab, is senior author of “Electrolytic Vascular Systems for Energy Dense Robots,” which was published on June 19 in Nature. Doctoral student Cameron Aubin is the lead author.

Engineers rely on lithium-ion batteries for their dense energy-storage potential. But solid batteries are bulky and present design constraints. Redox flow batteries (RFBs), by contrast, rely on a solid anode and a highly soluble catholyte to function. The dissolved components store energy until it is released in a chemical reduction and oxidation, or redox, reaction.

Soft robots are mostly fluid – up to around 90% fluid by volume – and often rely on a hydraulic liquid to operate. Using that fluid to store energy offers the possibility of increased energy density without added weight.

The researchers tested the concept by creating an aquatic soft robot inspired by a lionfish, designed by co-author James Pikul, a former postdoctoral researcher now an assistant professor at the University of Pennsylvania. Lionfish use undulating fanlike fins to glide through coral-reef environments. (In one sacrifice of verisimilitude, the researchers opted not to give the robot the venomous spines of its living counterpart.)

Silicone skin on the outside and flexible electrodes and an ion separator membrane within allow the robot to bend and flex. Interconnected zinc-iodide flow cell batteries power onboard pumps and electronics through electrochemical reactions. The researchers achieved energy density equal to about half that of a Tesla Model S lithium-ion battery.

The robot swims using power transmitted to the fins from the pumping of the flow cell battery. The initial design provided enough power to swim upstream for more than 36 hours.

Current RFB technology is typically used in large, stationary applications, such as storing energy from wind and solar sources. RFB design has historically suffered from low power density and operating voltage. The researchers overcame those issues by wiring the fin battery cells in series, and maximized power density by distributing electrodes throughout the fin areas.
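The benefit of series wiring is simple arithmetic, sketched below with assumed round numbers (the cell voltage, cell count, and current are illustrative, not figures from the paper): voltages add across cells in series, so power delivered at a given current rises proportionally.

```python
cell_voltage = 1.3   # assumed volts per zinc-iodide flow cell
n_series = 4         # hypothetical number of fin cells wired in series
current_a = 0.2      # hypothetical operating current

pack_voltage = n_series * cell_voltage   # voltages add in series
print(f"stack voltage: {pack_voltage:.1f} V vs {cell_voltage:.1f} V per cell")
print(f"power at {current_a} A: {pack_voltage * current_a:.2f} W "
      f"(single cell: {cell_voltage * current_a:.2f} W)")
```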

“We want to take as many components in a robot and turn them into the energy system. If you have hydraulic liquids in your robot already, then you can tap into large stores of energy and give robots increased freedom to operate autonomously,” Shepherd said.

Underwater soft robots offer tantalizing possibilities for research and exploration. Since aquatic soft robots are supported by buoyancy, they don’t require an exoskeleton or endoskeleton to maintain the structure. By designing power sources that give robots the ability to function for longer stretches of time, Shepherd thinks autonomous robots could soon be roaming Earth’s oceans on vital scientific missions and for delicate environmental tasks like sampling coral reefs. These devices could also be sent to extraterrestrial worlds for underwater reconnaissance missions.

Materials provided by Cornell University

A miniature robot that could check colons for early signs of disease

Engineers have shown it is technically possible to guide a tiny robotic capsule inside the colon to take micro-ultrasound images.

Known as a Sonopill, the device could one day replace the need for patients to undergo an endoscopic examination, where a semi-rigid scope is passed into the bowel – an invasive procedure that can be painful.

Micro-ultrasound images also have the advantage of being better able to identify some types of cell change associated with cancer.

The Sonopill is the culmination of a decade of research by an international consortium of engineers and scientists. The results of their feasibility study are published today in the journal Science Robotics.

The consortium has developed a technique called intelligent magnetic manipulation. Based on the principle that magnets can attract and repel one another, a series of magnets on a robotic arm that passes over the patient interacts with a magnet inside the capsule, gently manoeuvring it through the colon.

The magnetic forces used are harmless and can pass through human tissue, doing away with the need for a physical connection between the robotic arm and the capsule.

An artificial intelligence (AI) system ensures the smooth capsule can position itself correctly against the gut wall to get the best quality micro-ultrasound images. The feasibility study also showed that, should the capsule become dislodged, the AI system can navigate it back to the required location.
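That recovery behavior can be pictured as a small closed loop: localize the capsule, compare its position with the target site, and command the external magnet to pull it back. The one-dimensional toy below uses invented dynamics, gains, and distances; the real system localizes the capsule and plans robot-arm motions in three dimensions:

```python
def navigate(capsule_mm: float, target_mm: float, gain: float = 0.4,
             tol_mm: float = 0.5, max_steps: int = 100) -> float:
    """Proportional pull-back: each step, the magnet drags the capsule
    a fraction of the remaining error toward the target location."""
    for step in range(max_steps):
        err = target_mm - capsule_mm
        if abs(err) < tol_mm:
            print(f"on target after {step} steps")
            return capsule_mm
        capsule_mm += gain * err
    return capsule_mm

navigate(capsule_mm=120.0, target_mm=80.0)   # e.g. dislodged 40 mm away
```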

Professor Pietro Valdastri, who holds the Chair in Robotics and Autonomous Systems in Leeds’ School of Electronic and Electrical Engineering and was senior author of the paper, said: “The technology has the potential to change the way doctors conduct examinations of the gastrointestinal tract.

“Previous studies showed that micro-ultrasound was able to capture high-resolution images and visualise small lesions in the superficial layers of the gut, providing valuable information about the early signs of disease.

“With this study, we show that intelligent magnetic manipulation is an effective technique to guide a micro-ultrasound capsule to perform targeted imaging deep inside the human body.

“The platform is able to localise the position of the Sonopill at any time and adjust the external driving magnet to perform a diagnostic scan while maintaining a high-quality ultrasound signal. This discovery has the potential to enable painless diagnosis via a micro-ultrasound pill in the entire gastrointestinal tract.”

Sandy Cochran, Professor of Ultrasound Materials and Systems at the University of Glasgow and lead researcher, said: “We’re really excited by the results of this feasibility study. With an increasing demand for endoscopies, it is more important than ever to be able to deliver a precise, targeted, and cost-effective treatment that is comfortable for patients.

“Today, we are one step closer to delivering that.

“We hope that in the near future, the Sonopill will be available to all patients as part of regular medical check-ups, effectively catching serious diseases at an early stage and monitoring the health of everyone’s digestive system.”

The Sonopill is a small capsule – 21mm in diameter and 39mm long – which the engineers say can be scaled down. The capsule houses a micro-ultrasound transducer, an LED light, a camera, and a magnet.

A very small flexible cable tethers the capsule, passing into the body via the rectum and sending ultrasound images back to a computer in the examination room.

The feasibility tests were conducted on laboratory models and in animal studies involving pigs.

Diseases of the gastrointestinal tract account for approximately 8 million deaths a year across the world, including some bowel cancers which are linked with high mortality.

The research was funded by the Engineering and Physical Sciences Research Council, the Royal Society and the US National Institutes of Health.

Materials provided by the University of Leeds