New vaccine strategy boosts T-cell therapy

A promising new way to treat some types of cancer is to program the patient’s own T cells to destroy the cancerous cells. This approach, termed CAR-T cell therapy, is now used to combat some types of leukemia, but so far it has not worked well against solid tumors such as lung or breast tumors.

MIT researchers have now devised a way to super-charge this therapy so that it could be used as a weapon against nearly any type of cancer. The research team developed a vaccine that dramatically boosts the antitumor T cell population and allows the cells to vigorously invade solid tumors.

In a study of mice, the researchers found that they could completely eliminate solid tumors in 60 percent of the animals that were given T-cell therapy along with the booster vaccination. Engineered T cells on their own had almost no effect.

“By adding the vaccine, a CAR-T cell treatment which had no impact on survival can be amplified to give a complete response in more than half of the animals,” says Darrell Irvine, who is the Underwood-Prescott Professor with appointments in Biological Engineering and Materials Science and Engineering, an associate director of MIT’s Koch Institute for Integrative Cancer Research, a member of the Ragon Institute of MGH, MIT, and Harvard, and the senior author of the study.

Leyuan Ma, an MIT postdoc, is the lead author of the study, which appears in the July 11 online edition of Science.

Targeting tumors

So far, the FDA has approved two types of CAR-T cell therapy, both used to treat leukemia. In those cases, T cells removed from the patient’s blood are programmed to target a protein, or antigen, found on the surface of B cells. (The “CAR” in CAR-T cell therapy stands for “chimeric antigen receptor.”)

Scientists believe one reason this approach hasn’t worked well for solid tumors is that tumors usually generate an immunosuppressive environment that disarms the T cells before they can reach their target. The MIT team decided to try to overcome this by giving a vaccine that would go to the lymph nodes, which host huge populations of immune cells, and stimulate the CAR-T cells there.

“Our hypothesis was that if you boosted those T cells through their CAR receptor in the lymph node, they would receive the right set of priming cues to make them more functional so they’d be resistant to shutdown and would still function when they got into the tumor,” Irvine says.

To create such a vaccine, the MIT team used a trick they had discovered several years ago. They found that they could deliver vaccines more effectively to the lymph nodes by linking them to a fatty molecule called a lipid tail. This lipid tail binds to albumin, a protein found in the bloodstream, allowing the vaccine to hitch a ride directly to the lymph nodes.

In addition to the lipid tail, the vaccine contains an antigen that stimulates the CAR-T cells once they reach the lymph nodes. This antigen could be either the same tumor antigen targeted by the T cells, or an arbitrary molecule chosen by the researchers. For the latter case, the CAR-T cells have to be re-engineered so that they can be activated by both the tumor antigen and the arbitrary antigen.

In tests in mice, the researchers showed that either of these vaccines dramatically enhanced the T-cell response. When mice were given about 50,000 CAR-T cells but no vaccine, the CAR-T cells were nearly undetectable in the animals’ bloodstream. In contrast, when the booster vaccine was given the day after the T-cell infusion, and again a week later, the CAR-T cells expanded until, two weeks after treatment, they made up 65 percent of the animals’ total T cell population.

This huge boost in the CAR-T cell population translated to complete elimination of glioblastoma, breast, and melanoma tumors in many of the mice. CAR-T cells given without the vaccine had no effect on tumors, while CAR-T cells given with the vaccine eliminated tumors in 60 percent of the mice.

Long-term memory

This technique also holds promise for preventing tumor recurrence, Irvine says. About 75 days after the initial treatment, the researchers injected tumor cells identical to those that formed the original tumor, and these cells were cleared by the immune system. About 50 days after that, the researchers injected slightly different tumor cells, which did not express the antigen that the original CAR-T cells targeted; the mice could also eliminate those tumor cells.

This suggests that once the CAR-T cells begin destroying tumors, the immune system is able to detect additional tumor antigens and generate populations of “memory” T cells that also target those proteins.

“If we take the animals that appear to be cured and we rechallenge them with tumor cells, they will reject all of them,” Irvine says. “That is another exciting aspect of this strategy. You need to have T cells attacking many different antigens to succeed, because if you have a CAR-T cell that sees only one antigen, then the tumor only has to mutate that one antigen to escape immune attack. If the therapy induces new T-cell priming, this kind of escape mechanism becomes much more difficult.”

While most of the study was done in mice, the researchers showed that human cells coated with CAR antigens also stimulated human CAR-T cells, suggesting that the same approach could work in human patients. The technology has been licensed to a company called Elicio Therapeutics, which is seeking to test it with CAR-T cell therapies that are already in development.

“There’s really no barrier to doing this in patients pretty soon, because if we take a CAR-T cell and make an arbitrary peptide ligand for it, then we don’t have to change the CAR-T cells,” Irvine says. “I’m hopeful that one way or another this can get tested in patients in the next one to two years.”

Journal: https://science.sciencemag.org/content/365/6449/162

Materials provided by Massachusetts Institute of Technology

Cancer biologists identify new drug combo

When it comes to killing cancer cells, two drugs are often better than one. Some drug combinations offer a one-two punch that kills cells more effectively, requires lower doses of each drug, and can help to prevent drug resistance.

MIT biologists have now found that by combining two existing classes of drugs, both of which target cancer cells’ ability to divide, they can dramatically boost the drugs’ killing power. This drug combination also appears to largely spare normal cells, because cancer cells divide differently than healthy cells, the researchers say. They hope a clinical trial of this combination can be started within a year or two.

“This is a combination of one class of drugs that a lot of people are already using, with another type of drug that multiple companies have been developing,” says Michael Yaffe, a David H. Koch Professor of Science and the director of the MIT Center for Precision Cancer Medicine. “I think this opens up the possibility of rapid translation of these findings in patients.”

The discovery was enabled by a new software program the researchers developed, which revealed that one of the drugs had a previously unknown mechanism of action that strongly enhances the effect of the other drug.

Yaffe, who is also a member of the Koch Institute for Integrative Cancer Research, is the senior author of the study, which appears in the July 10 issue of Cell Systems. Koch Institute research scientists Jesse Patterson and Brian Joughin are the first authors of the paper.

Unexpected synergy

Yaffe’s lab has a longstanding interest in analyzing cellular pathways that are active in cancer cells, to find how these pathways work together in signaling networks to create disease-specific vulnerabilities that can be targeted with multiple drugs. When the researchers began this study, they were looking for a drug that would amplify the effects of a type of drug known as a PLK1 inhibitor. Several PLK1 inhibitors, which interfere with cell division, have been developed, and some are now in phase 2 clinical trials.

Based on their previous work, the researchers knew that PLK1 inhibitors also produce oxidative damage to DNA and proteins. They hypothesized that pairing PLK1 inhibitors with a drug that prevents cells from repairing oxidative damage could make them work even better.

To explore that possibility, the researchers tested a PLK1 inhibitor along with a drug called TH588, which blocks MTH1, an enzyme that helps cells counteract oxidative damage. This combination worked extremely well against many types of human cancer cells. In some cases, the researchers could use one-tenth of the original dose of each drug, given together, and achieve the same rates of cell death as either drug given on its own at full dose.

“It’s really striking,” Joughin says. “It’s more synergy than you generally see from a rationally designed combination.”

However, they soon realized that this synergy had nothing to do with oxidative damage. When the researchers treated cancer cells missing the gene for MTH1, which they thought was TH588’s target, they found that the drug combination still killed cancer cells at the same high rates.

“Then we were really stuck, because we had a good combination, but we didn’t know why it worked,” Yaffe says.

To solve the mystery, they developed a new software program that allowed them to identify the cellular networks most affected by the drugs. The researchers tested the drug combination in 29 different types of human cancer cells, then fed the data into the software, which compared the results to gene expression data for those cell lines. This allowed them to discover patterns of gene expression that were linked with higher or lower levels of synergy between the two drugs.
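The general shape of that analysis can be sketched in a few lines of code. The example below is only an illustration of the idea of correlating a per-cell-line synergy score with gene expression; it is not the team's software, and the data, gene panel size, and variable names are all hypothetical.

```python
# Illustrative sketch only (not the team's actual software): correlate a
# per-cell-line drug-synergy score with gene expression to surface expression
# patterns linked to higher or lower synergy. All data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_lines, n_genes = 29, 500                        # 29 cell lines, hypothetical gene panel
expression = rng.normal(size=(n_lines, n_genes))  # expression matrix: cell line x gene
synergy = rng.normal(size=n_lines)                # one measured synergy score per cell line

# Pearson correlation between each gene's expression and the synergy score.
correlations = np.array([
    np.corrcoef(expression[:, g], synergy)[0, 1] for g in range(n_genes)
])

# Genes whose expression tracks synergy most strongly, positive or negative.
top = np.argsort(-np.abs(correlations))[:10]
for g in top:
    print(f"gene {g}: r = {correlations[g]:+.2f}")
```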

This analysis suggested that both drugs were targeting the mitotic spindle, a structure that forms when chromosomes align in the center of a cell as it prepares to divide. Experiments in the lab confirmed that this was correct. The researchers had already known that PLK1 inhibitors target the mitotic spindle, but they were surprised to see that TH588 affected the same structure.

“This combination that we found was very nonobvious,” Yaffe says. “I would never have given two drugs that both targeted the same process and expected anything better than just additive effects.”

“This is an exciting paper for two reasons,” says David Pellman, associate director for basic science at Dana-Farber/Harvard Cancer Center, who was not involved in the study. “First, Yaffe and colleagues make an important advance for the rational design of drug therapy combinations. Second, if you like scientific mysteries, this is a riveting example of molecular sleuthing. A drug that was thought to act in one way is unmasked to work through an entirely different mechanism.”

Disrupting mitosis

The researchers found that while both of the drugs they tested disrupt mitosis, they appear to do so in different ways. TH588 binds to microtubules, which form the mitotic spindle, and slows their assembly. Many similar microtubule inhibitors are already used clinically to treat cancer. The researchers showed that some of those microtubule inhibitors also synergize with PLK1 inhibitors, and they believe those would likely be more readily available for rapid use in patients than TH588, the drug they originally tested.

While the PLK1 protein is involved in multiple aspects of cell division and spindle formation, it’s not known exactly how PLK1 inhibitors interfere with the mitotic spindle to produce this synergy. Yaffe says he suspects they may block a motor protein that is necessary for chromosomes to travel along the spindle.

One potential benefit of this drug combination is that the synergistic effects appear to specifically target cancer cell division and not normal cell division. The researchers believe this could be because cancer cells are forced to rely on alternative strategies for cell division because they often have too many or too few chromosomes, a state known as aneuploidy.

“Based on the work we have done, we propose that this drug combination targets something fundamentally different about the way cancer cells divide, such as altered cell division checkpoints, chromosome number and structure, or other structural differences in cancer cells,” Patterson says.

The researchers are now working on identifying biomarkers that could help them to predict which patients would respond best to this drug combination. They are also trying to determine the exact function of PLK1 that is responsible for this synergy, in hopes of finding additional drugs that would block that interaction.

Materials provided by Massachusetts Institute of Technology

Breaching a “carbon threshold” could lead to mass extinction

In the brain, neurons fire off electrical signals to their neighbors in an “all-or-none” response: a signal is sent only once conditions in the cell breach a certain threshold.

Now an MIT researcher has observed a similar phenomenon in a completely different system: Earth’s carbon cycle.

Daniel Rothman, professor of geophysics and co-director of the Lorenz Center in MIT’s Department of Earth, Atmospheric and Planetary Sciences, has found that when the rate at which carbon dioxide enters the oceans pushes past a certain threshold — whether as the result of a sudden burst or a slow, steady influx — the Earth may respond with a runaway cascade of chemical feedbacks, leading to extreme ocean acidification that dramatically amplifies the effects of the original trigger.

This global reflex causes huge changes in the amount of carbon contained in the Earth’s oceans, and geologists can see evidence of these changes in layers of sediments preserved over hundreds of millions of years.

Rothman looked through these geologic records and observed that over the last 540 million years, the ocean’s store of carbon changed abruptly, then recovered, dozens of times in a fashion similar to the abrupt nature of a neuron spike. This “excitation” of the carbon cycle occurred most dramatically near the time of four of the five great mass extinctions in Earth’s history.

Scientists have attributed various triggers to these events, and they have assumed that the changes in ocean carbon that followed were proportional to the initial trigger — for instance, the smaller the trigger, the smaller the environmental fallout.

But Rothman says that’s not the case. It didn’t matter what initially caused the events; for roughly half the disruptions in his database, once they were set in motion, the rate at which carbon increased was essentially the same. This characteristic rate is likely a property of the carbon cycle itself, not of the triggers, because different triggers would operate at different rates.

What does this all have to do with our modern-day climate? Today’s oceans are absorbing carbon about an order of magnitude faster than the worst case in the geologic record — the end-Permian extinction. But humans have only been pumping carbon dioxide into the atmosphere for hundreds of years, versus the tens of thousands of years or more that it took for volcanic eruptions or other disturbances to trigger the great environmental disruptions of the past. Might the modern increase of carbon be too brief to excite a major disruption?

According to Rothman, today we are “at the precipice of excitation,” and if it occurs, the resulting spike — as evidenced through ocean acidification, species die-offs, and more — is likely to be similar to past global catastrophes.

“Once we’re over the threshold, how we got there may not matter,” says Rothman, who is publishing his results this week in the Proceedings of the National Academy of Sciences. “Once you get over it, you’re dealing with how the Earth works, and it goes on its own ride.”

A carbon feedback

In 2017, Rothman made a dire prediction: By the end of this century, the planet is likely to reach a critical threshold, based on the rapid rate at which humans are adding carbon dioxide to the atmosphere. When we cross that threshold, we are likely to set in motion a freight train of consequences, potentially culminating in the Earth’s sixth mass extinction.

Rothman has since sought to better understand this prediction, and more generally, the way in which the carbon cycle responds once it’s pushed past a critical threshold. In the new paper, he has developed a simple mathematical model to represent the carbon cycle in the Earth’s upper ocean and how it might behave when this threshold is crossed.

Scientists know that when carbon dioxide from the atmosphere dissolves in seawater, it not only makes the oceans more acidic, but it also decreases the concentration of carbonate ions. When the carbonate ion concentration falls below a threshold, shells made of calcium carbonate dissolve. Organisms that make them fare poorly in such harsh conditions.

Shells, in addition to protecting marine life, provide a “ballast effect,” weighing organisms down and enabling them to sink to the ocean floor along with detrital organic carbon, effectively removing carbon dioxide from the upper ocean. But in a world of increasing carbon dioxide, fewer calcifying organisms should mean less carbon dioxide is removed.

“It’s a positive feedback,” Rothman says. “More carbon dioxide leads to more carbon dioxide. The question from a mathematical point of view is, is such a feedback enough to render the system unstable?”

An inexorable rise

Rothman captured this positive feedback in his new model, which comprises two differential equations that describe interactions between the various chemical constituents in the upper ocean. He then observed how the model responded as he pumped additional carbon dioxide into the system, at different rates and amounts.
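The published equations are not reproduced here, but the kind of threshold behavior Rothman describes can be illustrated with a generic excitable two-variable system. The sketch below uses a standard FitzHugh-Nagumo-style toy model rather than Rothman's carbon-cycle equations, with an assumed injection term and illustrative parameters; only the qualitative point matters: a small input decays back to the resting state, while a larger one triggers a large excursion.

```python
# Minimal sketch of an excitable two-variable system under external forcing.
# This is NOT Rothman's published model; it is a generic FitzHugh-Nagumo-style
# toy used only to illustrate threshold ("all-or-none") behavior.
import numpy as np

def simulate(injection_rate, t_end=200.0, dt=0.01):
    """Integrate the toy system while input is injected for the first 10 time units."""
    n = int(t_end / dt)
    v, w = -1.0, -0.5                        # start near the resting (stable) state
    trajectory = np.empty(n)
    for i in range(n):
        forcing = injection_rate if i * dt < 10.0 else 0.0
        dv = v - v**3 / 3.0 - w + forcing    # fast variable (the perturbed quantity)
        dw = 0.08 * (v + 0.7 - 0.8 * w)      # slow recovery variable
        v += dv * dt
        w += dw * dt
        trajectory[i] = v
    return trajectory

for rate in (0.1, 0.5):                      # below vs. above the excitation threshold
    peak = simulate(rate).max()
    print(f"injection rate {rate}: peak excursion {peak:.2f}")
```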

He found that as long as he added carbon dioxide at modest rates to an already stable system, the carbon cycle in the upper ocean remained stable. In response to these modest perturbations, the carbon cycle would go temporarily out of whack and experience a brief period of mild ocean acidification, but it would always return to its original state rather than shifting to a new equilibrium.

When he introduced carbon dioxide at greater rates, he found that once the levels crossed a critical threshold, the carbon cycle reacted with a cascade of positive feedbacks that magnified the original trigger, causing the entire system to spike, in the form of severe ocean acidification. The system did, eventually, return to equilibrium, after tens of thousands of years in today’s oceans — an indication that, despite a violent reaction, the carbon cycle will resume its steady state.

This pattern matches the geological record, Rothman found. The characteristic rate exhibited by half his database results from excitations above, but near, the threshold. Environmental disruptions associated with mass extinction are outliers — they represent excitations well beyond the threshold. At least three of those cases may be related to sustained massive volcanism.

“When you go past a threshold, you get a free kick from the system responding by itself,” Rothman explains. “The system is on an inexorable rise. This is what excitability is, and how a neuron works too.”

Although carbon is entering the oceans today at an unprecedented rate, it is doing so over a geologically brief time. Rothman’s model predicts that the two effects cancel: Faster rates bring us closer to the threshold, but shorter durations move us away. Insofar as the threshold is concerned, the modern world is in roughly the same place it was during longer periods of massive volcanism.

In other words, if today’s human-induced emissions cross the threshold and continue beyond it, as Rothman predicts they soon will, the consequences may be just as severe as what the Earth experienced during its previous mass extinctions.

“It’s difficult to know how things will end up given what’s happening today,” Rothman says. “But we’re probably close to a critical threshold. Any spike would reach its maximum after about 10,000 years. Hopefully, that would give us time to find a solution.”

“We already know that our CO2-emitting actions will have consequences for many millennia,” says Timothy Lenton, professor of climate change and earth systems science at the University of Exeter. “This study suggests those consequences could be much more dramatic than previously expected. If we push the Earth system too far, then it takes over and determines its own response — past that point there will be little we can do about it.”

Materials provided by Massachusetts Institute of Technology

A new way of making complex structures in thin films

Self-assembling materials called block copolymers, which are known to form a variety of predictable, regular patterns, can now be made into much more complex patterns that may open up new areas of materials design, a team of MIT researchers says.

The new findings appear in the journal Nature Communications, in a paper by postdoc Yi Ding, professors of materials science and engineering Alfredo Alexander-Katz and Caroline Ross, and three others.

“This is a discovery that was in some sense fortuitous,” says Alexander-Katz. “Everyone thought this was not possible,” he says, describing the team’s discovery of a phenomenon that allows the polymers to self-assemble in patterns that deviate from regular symmetrical arrays.

Self-assembling block copolymers are materials whose chain-like molecules, which are initially disordered, will spontaneously arrange themselves into periodic structures. Researchers had found that if there was a repeating pattern of lines or pillars created on a substrate, and then a thin film of the block copolymer was formed on that surface, the patterns from the substrate would be duplicated in the self-assembled material. But this method could only produce simple patterns such as grids of dots or lines.

In the new method, there are two different, mismatched patterns. One is from a set of posts or lines etched on a substrate material, and the other is an inherent pattern that is created by the self-assembling copolymer. For example, there may be a rectangular pattern on the substrate and a hexagonal grid that the copolymer forms by itself. One would expect the resulting block copolymer arrangement to be poorly ordered, but that’s not what the team found. Instead, “it was forming something much more unexpected and complicated,” Ross says.

There turned out to be a subtle but complex kind of order — interlocking areas that formed slightly different but regular patterns, of a type similar to quasicrystals, which don’t quite repeat the way normal crystals do. In this case, the patterns do repeat, but over longer distances than in ordinary crystals. “We’re taking advantage of molecular processes to create these patterns on the surface” with the block copolymer material, Ross says.

This potentially opens the door to new ways of making devices with tailored characteristics for optical systems or for “plasmonic devices” in which electromagnetic radiation resonates with electrons in precisely tuned ways, the researchers say. Such devices require very exact positioning and symmetry of patterns with nanoscale dimensions, something this new method can achieve.

Katherine Mizrahi Rodriguez, who worked on the project as an undergraduate, explains that the team prepared many of these block copolymer samples and studied them under a scanning electron microscope. Yi Ding, who worked on this for his doctoral thesis, “started looking over and over to see if any interesting patterns came up,” she says. “That’s when all of these new findings sort of evolved.”

The resulting odd patterns are “a result of the frustration between the pattern the polymer would like to form, and the template,” explains Alexander-Katz. That frustration leads to a breaking of the original symmetries and the creation of new subregions with different kinds of symmetries within them, he says. “That’s the solution nature comes up with. Trying to fit in the relationship between these two patterns, it comes up with a third thing that breaks the patterns of both of them.” They describe the new patterns as a “superlattice.”

Having created these novel structures, the team went on to develop models to explain the process. Co-author Karim Gadelrab PhD ’19 says, “The modeling work showed that the emergent patterns are in fact thermodynamically stable, and revealed the conditions under which the new patterns would form.”

Ding says, “We understand the system fully in terms of the thermodynamics,” and the self-assembling process “allows us to create fine patterns and to access some new symmetries that are otherwise hard to fabricate.”

He says this removes some existing limitations in the design of optical and plasmonic materials, and thus “creates a new path” for materials design.

So far, the work the team has done has been confined to two-dimensional surfaces, but in ongoing work they are hoping to extend the process into the third dimension, says Ross. “Three dimensional fabrication would be a game changer,” she says. Current fabrication techniques for microdevices build them up one layer at a time, she says, but “if you can build up entire objects in 3-D in one go,” that would potentially make the process much more efficient.

These findings “open new pathways to generate templates for nanofabrication with symmetries not achievable from the copolymer alone,” says Thomas P. Russell, the Silvio O. Conte Distinguished Professor of Polymer Science and Engineering at the University of Massachusetts, Amherst, who was not involved in this work. He adds that it “opens the possibility of exploring a large parameter space for uncovering other symmetries than those discussed in the manuscript.”

Russell says, “The work is of the highest quality,” and adds, “The pairing of theory and experiment is quite powerful and, as can be seen in the text, the agreement between the two is remarkably good.”

Materials provided by Massachusetts Institute of Technology

Experiments show dramatic increase in solar cell output

In any conventional silicon-based solar cell, there is an absolute limit on overall efficiency, based partly on the fact that each photon of light can only knock loose a single electron, even if that photon carried twice the energy needed to do so. But now, researchers have demonstrated a method for getting high-energy photons striking silicon to kick out two electrons instead of one, opening the door for a new kind of solar cell with greater efficiency than was thought possible.

While conventional silicon cells have an absolute theoretical maximum efficiency of about 29.1 percent conversion of solar energy, the new approach, developed over the last several years by researchers at MIT and elsewhere, could bust through that limit, potentially adding several percentage points to that maximum output. The results are described today in the journal Nature, in a paper by graduate student Markus Einzinger, professor of chemistry Moungi Bawendi, professor of electrical engineering and computer science Marc Baldo, and eight others at MIT and at Princeton University.

The basic concept behind this new technology has been known for decades, and the first demonstration that the principle could work was carried out by some members of this team six years ago. But actually translating the method into a full, operational silicon solar cell took years of hard work, Baldo says.

That initial demonstration “was a good test platform” to show that the idea could work, explains Daniel Congreve PhD ’15, an alumnus now at the Rowland Institute at Harvard, who was the lead author in that prior report and is a co-author of the new paper. Now, with the new results, “we’ve done what we set out to do” in that project, he says.

The original study demonstrated the production of two electrons from one photon, but it did so in an organic photovoltaic cell, which is less efficient than a silicon solar cell. It turned out that transferring the two electrons from a top collecting layer made of tetracene into the silicon cell “was not straightforward,” Baldo says. Troy Van Voorhis, a professor of chemistry at MIT who was part of that original team, points out that the concept was first proposed back in the 1970s, and says wryly that turning that idea into a practical device “only took 40 years.”

The key to splitting the energy of one photon into two electrons lies in a class of materials that possess “excited states” called excitons, Baldo says: In these excitonic materials, “these packets of energy propagate around like the electrons in a circuit,” but with quite different properties than electrons. “You can use them to change energy — you can cut them in half, you can combine them.” In this case, they were going through a process called singlet exciton fission, which is how the light’s energy gets split into two separate, independently moving packets of energy. The material first absorbs a photon, forming an exciton that rapidly undergoes fission into two excited states, each with half the energy of the original state.

But the tricky part was then coupling that energy over into the silicon, a material that is not excitonic. This coupling had never been accomplished before.

As an intermediate step, the team tried coupling the energy from the excitonic layer into a material called quantum dots. “They’re still excitonic, but they’re inorganic,” Baldo says. “That worked; it worked like a charm,” he says. By understanding the mechanism taking place in that material, he says, “we had no reason to think that silicon wouldn’t work.”

What that work showed, Van Voorhis says, is that the key to these energy transfers lies in the very surface of the material, not in its bulk. “So it was clear that the surface chemistry on silicon was going to be important. That was what was going to determine what kinds of surface states there were.” That focus on the surface chemistry may have been what allowed this team to succeed where others had not, he suggests.

The key was in a thin intermediate layer. “It turns out this tiny, tiny strip of material at the interface between these two systems [the silicon solar cell and the tetracene layer with its excitonic properties] ended up defining everything. It’s why other researchers couldn’t get this process to work, and why we finally did.” It was Einzinger “who finally cracked that nut,” he says, by using a layer of a material called hafnium oxynitride.

The layer is only a few atoms thick, just 8 angstroms (eight ten-billionths of a meter), but it acted as a “nice bridge” for the excited states, Baldo says. That finally made it possible for single high-energy photons to trigger the release of two electrons inside the silicon cell, doubling the amount of energy produced by a given amount of sunlight in the blue and green parts of the spectrum. Overall, that could raise the cell’s maximum theoretical efficiency from 29.1 percent to about 35 percent.

Actual silicon cells are not yet at their maximum, and neither is the new material, so more development needs to be done, but the crucial step of coupling the two materials efficiently has now been proven. “We still need to optimize the silicon cells for this process,” Baldo says. For one thing, with the new system those cells can be thinner than current versions. Work also needs to be done on stabilizing the materials for durability. Overall, commercial applications are probably still a few years off, the team says.

Other approaches to improving the efficiency of solar cells tend to involve adding another kind of cell, such as a perovskite layer, over the silicon. Baldo says “they’re building one cell on top of another. Fundamentally, we’re making one cell — we’re kind of turbocharging the silicon cell. We’re adding more current into the silicon, as opposed to making two cells.”

The researchers have measured one special property of hafnium oxynitride that helps it transfer the excitonic energy. “We know that hafnium oxynitride generates additional charge at the interface, which reduces losses by a process called electric field passivation. If we can establish better control over this phenomenon, efficiencies may climb even higher,” Einzinger says. So far, no other material they’ve tested can match its properties.

Materials provided by Massachusetts Institute of Technology

Tiny motor can “walk” to carry out tasks

Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?

Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.

Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can have numerous mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.

Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previously developed rigid and flexible types, along with electromagnetic parts, a coil, and a magnet. In the future, the team plans to make these out of still smaller basic part types.

Using this simple kit of tiny parts, Langford assembled them into a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, and a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials,” discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.

The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.

Langford’s initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.

“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.

To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.

The simplicity and regularity of these structures makes it relatively easy for their assembly to be automated. To do that, Langford has developed a novel machine that’s like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project’s ultimate goal of “making an assembler that can assemble itself out of the parts that it’s assembling.”

“Standardization is an extremely important issue in microrobotics, to reduce the production costs and, as a result, to improve acceptance of this technology to the level of regular industrial robots,” says Sergej Fatikow, head of the Division of Microrobotics and Control Engineering, at the University of Oldenburg, Germany, who was not associated with this research. The new work “addresses assembling of sophisticated microrobotic systems from a small set of standard building blocks, which may revolutionize the field of microrobotics and open up numerous applications at small scales,” he says.

Materials provided by Massachusetts Institute of Technology

Getting more heat out of sunlight

A newly developed material that is so perfectly transparent you can barely see it could unlock many new uses for solar heat. It generates much higher temperatures than conventional solar collectors do — enough to be used for home heating or for industrial processes that require heat of more than 200 degrees Celsius (392 degrees Fahrenheit).

The key to the process is a new kind of aerogel, a lightweight material that consists mostly of air, with a structure made of silica (which is also used to make glass). The material lets sunlight pass through easily but blocks solar heat from escaping. The findings are described in the journal ACS Nano, in a paper by Lin Zhao, an MIT graduate student; Evelyn Wang, professor and head of the Department of Mechanical Engineering; Gang Chen, the Carl Richard Soderberg Professor in Power Engineering; and five others.

The key to efficient collection of solar heat, Wang explains, is being able to keep something hot internally while remaining cold on the outside. One way of doing that is using a vacuum between a layer of glass and a dark, heat-absorbing material, which is the method used in many concentrating solar collectors but is relatively expensive to install and maintain. There has been great interest in finding a less expensive, passive system for collecting solar heat at the higher temperature levels needed for space heating, food processing, or many industrial processes.

Aerogels, a kind of foam-like material made of silica particles, have been developed for years as highly efficient and lightweight insulating materials, but they have generally had limited transparency to visible light, with around a 70 percent transmission level. Wang says developing a way of making aerogels that are transparent enough to work for solar heat collection was a long and difficult process involving several researchers for about four years. But the result is an aerogel that lets through over 95 percent of incoming sunlight while maintaining its highly insulating properties.

The key to making it work was in the precise ratios of the different materials used to create the aerogel, which is made by mixing a catalyst with grains of a silica-containing compound in a liquid solution, forming a kind of gel, and then drying it to remove all the liquid, leaving a matrix that is mostly air but retains the original mixture’s strength. A mix that dries out much faster than those used in conventional aerogels, they found, yields a gel with smaller pore spaces between its grains, which therefore scatters light much less.

In tests on a rooftop on the MIT campus, a passive device consisting of a heat-absorbing dark material covered with a layer of the new aerogel was able to reach and maintain a temperature of 220 degrees Celsius, in the middle of a Cambridge winter when the outside air was below 0 degrees Celsius.

Such high temperatures have previously been practical only with concentrating systems, which use mirrors to focus sunlight onto a central line or point, but this system requires no concentration, making it simpler and less costly. That could potentially make it useful for a wide variety of applications that require higher levels of heat.

For example, simple flat rooftop collectors are often used for domestic hot water, producing temperatures of around 80 C. But the higher temperatures enabled by the aerogel system could make such simple systems usable for home heating as well, and even for powering an air conditioning system. Large-scale versions could be used to provide heat for a wide variety of applications in chemical, food production, and manufacturing processes.

Zhao describes the basic function of the aerogel layer as “like a greenhouse effect. The material we use to increase the temperature acts like the Earth’s atmosphere does to provide insulation, but this is an extreme example of it.”

For most purposes, the passive heat collection system would be connected to pipes containing a liquid that could circulate to transfer the heat to wherever it’s needed. Alternatively, Wang suggests, for some uses the system could be connected to heat pipes, devices that can transfer heat over a distance without requiring pumps or any moving parts.

Because the principle is essentially the same, an aerogel-based solar heat collector could directly replace the vacuum-based collectors used in some existing applications, providing a lower-cost option. The materials used to make the aerogel are all abundant and inexpensive; the only costly part of the process is the drying, which requires a specialized device called a critical point dryer to allow for a very precise drying process that extracts the solvents from the gel while preserving its nanoscale structure.

Because that is a batch process rather than a continuous one that could be used in roll-to-roll manufacturing, it could limit the rate of production if the system is scaled up to industrial production levels. “The key to scaleup is how we can reduce the cost of that process,” Wang says. But even now, a preliminary economic analysis shows that the system can be economically viable for some uses, especially in comparison with vacuum-based systems.

The research team included research scientist Bikram Bhatia, postdoc Sungwoo Yang, graduate student Elise Strobach, instructor Lee Weinstein and postdoc Thomas Cooper. The work was primarily funded by the U.S. Department of Energy’s ARPA-E program.

Materials provided by Massachusetts Institute of Technology

Translating proteins into music, and back

Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.

In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.

Audio: The new method translates a protein’s amino acid sequence into a sequence of percussive and rhythmic sounds. Courtesy of Markus Buehler.

Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.

The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described today in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.
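As a rough illustration of the sequence-to-melody idea, one could assign each of the 20 amino acids a pitch on a 20-tone scale. The sketch below is not the published mapping, which derives each tone from quantum-chemistry calculations of each amino acid's vibrational spectrum; the equal-division scale, base pitch, and example sequence are assumptions made purely for illustration.

```python
# Illustrative sketch only: map the 20 standard amino acids onto a 20-tone scale.
# The published method derives each tone from the amino acid's vibrational
# spectrum; here we simply use 20 equal divisions of the octave above A4 (440 Hz)
# to show how a protein sequence becomes a sequence of notes.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"          # one-letter codes, alphabetical order

def residue_frequency(residue, base_hz=440.0):
    """Assign each residue a pitch on a 20-tone equal-tempered scale (assumed mapping)."""
    step = AMINO_ACIDS.index(residue)
    return base_hz * 2 ** (step / 20.0)        # 20 equal divisions of one octave

def protein_to_melody(sequence):
    return [round(residue_frequency(r), 1) for r in sequence if r in AMINO_ACIDS]

# Example: a short, hypothetical stretch of sequence rendered as frequencies in Hz.
print(protein_to_melody("MKAVLA"))
```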

While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”

Learning the language of proteins

The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”

By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.

The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.

Audio: Percussive, rhythmic, and musical sounds generated entirely from amino acid sequences. Courtesy of Markus Buehler.

Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”

“Composing” new proteins

Using such a system, he says, training the AI with a set of data for a particular class of proteins might take a few days, but it can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”

This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”

The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”

The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.

The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.

“Markus Buehler has been gifted with a most creative soul, and his explorations into the inner workings of biomolecules are advancing our understanding of the mechanical response of biological materials in a most significant manner,” says Marc Meyers, a professor of materials science at the University of California at San Diego, who was not involved in this work.

Meyers adds, “The focusing of this imagination to music is a novel and intriguing direction. This is experimental music at its best. The rhythms of life, including the pulsations of our heart, were the initial sources of repetitive sounds that engendered the marvelous world of music. Markus has descended into the nanospace to extract the rhythms of the amino acids, the building blocks of life.”

“Protein sequences are complex, as are comparisons between protein sequences,” says Anthony Weiss, a professor of biochemistry and molecular biotechnology at the University of Sydney, Australia, who also was not connected to this work. The MIT team “provides an impressive, entertaining and unusual approach to accessing and interpreting this complexity. … The approach benefits from our innate ability to hear complex musical patterns. Through harmony and discord, we now have an entertaining and useful tool to compare and contrast amino acid sequences.”

Materials provided by Massachusetts Institute of Technology

New AI programming language goes beyond deep learning

A team of MIT researchers is making it easier for novices to get their feet wet with artificial intelligence, while also helping experts advance the field.

In a paper presented at the Programming Language Design and Implementation conference this week, the researchers describe a novel probabilistic-programming system named “Gen.” Users write models and algorithms from multiple fields where AI techniques are applied — such as computer vision, robotics, and statistics — without having to deal with equations or manually write high-performance code. Gen also lets expert researchers write sophisticated models and inference algorithms — used for prediction tasks — that were previously infeasible.

In their paper, for instance, the researchers demonstrate that a short Gen program can infer 3-D body poses, a difficult computer-vision inference task that has applications in autonomous systems, human-machine interactions, and augmented reality. Behind the scenes, this program includes components that perform graphics rendering, deep learning, and types of probability simulations. The combination of these diverse techniques leads to better accuracy and speed on this task than earlier systems developed by some of the researchers.

Due to its simplicity — and, in some use cases, automation — the researchers say Gen can be used easily by anyone, from novices to experts. “One motivation of this work is to make automated AI more accessible to people with less expertise in computer science or math,” says first author Marco Cusumano-Towner, a PhD student in the Department of Electrical Engineering and Computer Science. “We also want to increase productivity, which means making it easier for experts to rapidly iterate and prototype their AI systems.”

The researchers also demonstrated Gen’s ability to simplify data analytics by using another Gen program that automatically generates sophisticated statistical models typically used by experts to analyze, interpret, and predict underlying patterns in data. That builds on the researchers’ previous work that let users write a few lines of code to uncover insights into financial trends, air travel, voting patterns, and the spread of disease, among other trends. This is different from earlier systems, which required a lot of hand coding for accurate predictions.

“Gen is the first system that’s flexible, automated, and efficient enough to cover those very different types of examples in computer vision and data science and give state-of-the-art performance,” says Vikash K. Mansinghka ’05, MEng ’09, PhD ’09, a researcher in the Department of Brain and Cognitive Sciences who runs the Probabilistic Computing Project.

Joining Cusumano-Towner and Mansinghka on the paper are Feras Saad and Alexander K. Lew, both CSAIL graduate students and members of the Probabilistic Computing Project.

Best of all worlds

In 2015, Google released TensorFlow, an open-source library of application programming interfaces (APIs) that helps beginners and experts automatically generate machine-learning systems without doing much math. Now widely used, the platform is helping democratize some aspects of AI. But, although it’s automated and efficient, it’s narrowly focused on deep-learning models, which are both costly and limited compared to the broader promise of AI in general.

But there are plenty of other AI techniques available today, such as statistical and probabilistic models, and simulation engines. Some other probabilistic programming systems are flexible enough to cover several kinds of AI techniques, but they run inefficiently.

The researchers sought to combine the best of all worlds — automation, flexibility, and speed — into one. “If we do that, maybe we can help democratize this much broader collection of modeling and inference algorithms, like TensorFlow did for deep learning,” Mansinghka says.

In probabilistic AI, inference algorithms perform operations on data and continuously readjust probabilities based on new data to make predictions. Doing so eventually produces a model that describes how to make predictions on new data.
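Gen itself is a Julia-based system and its API is not shown here, but the basic idea of readjusting probabilities as data arrive can be sketched with a tiny grid-based Bayesian update; the coin-flip data, grid, and variable names below are purely hypothetical.

```python
# Toy illustration of probabilistic inference (not Gen itself): update beliefs
# about a coin's bias as new flips arrive, using a simple grid approximation.
import numpy as np

grid = np.linspace(0.01, 0.99, 99)             # candidate values for the coin's bias
posterior = np.ones_like(grid) / len(grid)     # start with a uniform prior

observations = [1, 1, 0, 1, 1, 1, 0, 1]        # hypothetical data: 1 = heads, 0 = tails
for flip in observations:
    likelihood = grid if flip == 1 else (1.0 - grid)
    posterior = posterior * likelihood          # Bayes' rule, unnormalized
    posterior = posterior / posterior.sum()     # renormalize after each observation
    mean_bias = np.average(grid, weights=posterior)
    print(f"after flip {flip}: posterior mean bias = {mean_bias:.3f}")
```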

Building off concepts used in their earlier probabilistic-programming system, Church, the researchers incorporate several custom modeling languages into Julia, a general-purpose programming language that was also developed at MIT. Each modeling language is optimized for a different type of AI modeling approach, making it more all-purpose. Gen also provides high-level infrastructure for inference tasks, using diverse approaches such as optimization, variational inference, certain probabilistic methods, and deep learning. On top of that, the researchers added some tweaks to make the implementations run efficiently.

Beyond the lab

External users are already finding ways to leverage Gen for their AI research. For example, Intel is collaborating with MIT to use Gen for 3-D pose estimation from its depth-sense cameras used in robotics and augmented-reality systems. MIT Lincoln Laboratory is also collaborating on applications for Gen in aerial robotics for humanitarian relief and disaster response.

Gen is beginning to be used on ambitious AI projects under the MIT Quest for Intelligence. For example, Gen is central to an MIT-IBM Watson AI Lab project and to the ongoing Machine Common Sense project run by the U.S. Department of Defense’s Defense Advanced Research Projects Agency (DARPA), which aims to model human common sense at the level of an 18-month-old child. Mansinghka is one of the principal investigators on this project.

“With Gen, for the first time, it is easy for a researcher to integrate a bunch of different AI techniques. It’s going to be interesting to see what people discover is possible now,” Mansinghka says.

Zoubin Ghahramani, chief scientist and vice president of AI at Uber and a professor at Cambridge University, who was not involved in the research, says, “Probabilistic programming is one of the most promising areas at the frontier of AI since the advent of deep learning. Gen represents a significant advance in this field and will contribute to scalable and practical implementations of AI systems based on probabilistic reasoning.”

Peter Norvig, director of research at Google, who also was not involved in this research, praised the work as well. “[Gen] allows a problem-solver to use probabilistic programming, and thus have a more principled approach to the problem, but not be limited by the choices made by the designers of the probabilistic programming system,” he says. “General-purpose programming languages … have been successful because they … make the task easier for a programmer, but also make it possible for a programmer to create something brand new to efficiently solve a new problem. Gen does the same for probabilistic programming.”

Gen’s source code is publicly available and is being presented at upcoming open-source developer conferences, including Strange Loop and JuliaCon. The work is supported, in part, by DARPA.

Materials provided by Massachusetts Institute of Technology


“Nanoemulsion” gels offer new way to deliver drugs through the skin

MIT chemical engineers have devised a new way to create very tiny droplets of one liquid suspended within another liquid, known as a nanoemulsion. Such emulsions are similar to the mixture that forms when you shake an oil-and-vinegar salad dressing, but with much smaller droplets. Their tiny size allows them to remain stable for relatively long periods of time.

The researchers also found a way to easily convert the liquid nanoemulsions to gels when they reach body temperature (37 degrees Celsius), which could be useful for developing materials that can deliver medication when rubbed on the skin or injected into the body.

“The pharmaceutical industry is hugely interested in nanoemulsions as a way of delivering small molecule therapeutics. That could be topically, through ingestion, or by spraying into the nose, because once you start getting into the size range of hundreds of nanometers you can permeate much more effectively into the skin,” says Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering and the senior author of the study.

In their new study, which appears in the June 21 issue of Nature Communications, the researchers created nanoemulsions that were stable for more than a year. To demonstrate the emulsions’ potential usefulness for delivering drugs, the researchers showed that they could incorporate ibuprofen into the droplets.

Seyed Meysam Hashemnejad, a former MIT postdoc, is the first author of the study. Other authors include former postdoc Abu Zayed Badruddoza, L’Oréal senior scientist Brady Zarket, and former MIT summer research intern Carlos Ricardo Castaneda.

Energy reduction

One of the easiest ways to create an emulsion is to add energy — by shaking your salad dressing, for example, or using a homogenizer to break down fat globules in milk. The more energy that goes in, the smaller the droplets, and the more stable they are.

Nanoemulsions, which contain droplets with a diameter of 200 nanometers or smaller, are desirable not only because they are more stable, but also because they have a higher ratio of surface area to volume, which allows them to carry larger payloads of active ingredients such as drugs or sunscreens.
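The surface-area-to-volume claim follows from simple geometry: for a spherical droplet of radius r, the ratio is 3/r, so it grows as droplets shrink. The quick calculation below is only a back-of-the-envelope illustration; the droplet sizes are example values, not figures from the study.

```python
# Back-of-the-envelope check: surface area / volume of a sphere equals 3 / radius,
# so smaller droplets expose far more surface per unit of payload volume.
import math

def surface_to_volume(radius_nm):
    area = 4 * math.pi * radius_nm ** 2
    volume = (4 / 3) * math.pi * radius_nm ** 3
    return area / volume  # equals 3 / radius_nm

for radius_nm in (500, 100, 25):  # droplet diameters of 1000 nm, 200 nm, and 50 nm
    print(f"diameter {2 * radius_nm} nm: surface/volume = {surface_to_volume(radius_nm):.3f} per nm")
```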

Over the past few years, Doyle’s lab has been working on lower-energy strategies for making nanoemulsions, which could make the process easier to adapt for large-scale industrial manufacturing.

Detergent-like chemicals called surfactants can speed up the formation of emulsions, but many of the surfactants that have previously been used for creating nanoemulsions are not FDA-approved for use in humans. Doyle and his students chose two surfactants that are uncharged, which makes them less likely to irritate the skin, and are already FDA-approved as food or cosmetic additives. They also added a small amount of polyethylene glycol (PEG), a biocompatible polymer used for drug delivery that helps the solution to form even smaller droplets, down to about 50 nanometers in diameter.

“With this approach, you don’t have to put in much energy at all,” Doyle says. “In fact, a slow stirring bar almost spontaneously creates these super small emulsions.”

Active ingredients can be mixed into the oil phase before the emulsion is formed, so they end up loaded into the droplets of the emulsion.

Once they had developed a low-energy way to create nanoemulsions, using nontoxic ingredients, the researchers added a step that would allow the emulsions to be easily converted to gels when they reach body temperature. They achieved this by incorporating heat-sensitive polymers called poloxamers, or Pluronics, which are already FDA-approved and used in some drugs and cosmetics.

Pluronics contain three “blocks” of polymers: The outer two regions are hydrophilic, while the middle region is slightly hydrophobic. At room temperature, these molecules dissolve in water but do not interact much with the droplets that form the emulsion. However, when heated, the hydrophobic regions attach to the droplets, forcing them to pack together more tightly and creating a jelly-like solid. This process happens within seconds of heating the emulsion to the necessary temperature.

MIT chemical engineers have devised a way to convert liquid nanoemulsions into solid gels. These gels (red) form almost instantaneously when drops of the liquid emulsion enter warm water.


Tunable properties

The researchers found that they could tune the properties of the gels, including the temperature at which the material becomes a gel, by changing the size of the emulsion droplets and the concentration and structure of the Pluronics that they added to the emulsion. They can also alter traits such as elasticity and yield stress, which is a measure of how much force is needed to spread the gel.

Doyle is now exploring ways to incorporate a variety of active pharmaceutical ingredients into this type of gel. Such products could be useful for delivering topical medications to help heal burns or other types of injuries, or could be injected to form a “drug depot” that would solidify inside the body and release drugs over an extended period of time. These droplets could also be made small enough that they could be used in nasal sprays for delivering inhalable drugs, Doyle says.

For cosmetic applications, this approach could be used to create moisturizers or other products that are more shelf-stable and feel smoother on the skin.

Materials provided by Massachusetts Institute of Technology