Astrophysical shock phenomena reproduced in the laboratory

Vast interstellar events where clouds of charged matter hurtle into each other and spew out high-energy particles have now been reproduced in the lab with high fidelity. The work, by MIT researchers and an international team of colleagues, should help resolve longstanding disputes over exactly what takes place in these gigantic shocks.

Many of the largest-scale events, such as the expanding bubble of matter hurtling outward from a supernova, involve a phenomenon called collisionless shock. In these interactions, the clouds of gas or plasma are so rarefied that most of the particles involved actually miss each other, but they nevertheless interact electromagnetically or in other ways to produce visible shock waves and filaments. These high-energy events have so far been difficult to reproduce under laboratory conditions that mirror those in an astrophysical setting, leading to disagreements among physicists as to the mechanisms at work in these astrophysical phenomena.

Now, the researchers have succeeded in reproducing critical conditions of these collisionless shocks in the laboratory, allowing for detailed study of the processes taking place within these giant cosmic smashups. The new findings are described in the journal Physical Review Letters, in a paper by MIT Plasma Science and Fusion Center Senior Research Scientist Chikang Li, five others at MIT, and 14 others around the world.

Virtually all visible matter in the universe is in the form of plasma, a kind of soup of subatomic particles where negatively charged electrons swim freely along with positively charged ions instead of being connected to each other in the form of atoms. The sun, the stars, and most clouds of interstellar material are made of plasma.

Most of these interstellar clouds are extremely tenuous, with such low density that true collisions between their constituent particles are rare even when one cloud slams into another at extreme velocities that can exceed 1,000 kilometers per second. Nevertheless, the result can be a spectacularly bright shock wave, sometimes showing a great deal of structural detail including long trailing filaments.

Astronomers have found that many changes take place at these shock boundaries, where physical parameters “jump,” Li says. But deciphering the mechanisms taking place in collisionless shocks has been difficult, since the combination of extremely high velocities and low densities has been hard to match on Earth.

While collisionless shocks had been predicted earlier, the first one that was directly identified, in the 1960s, was the bow shock formed by the solar wind, a tenuous stream of particles emanating from the sun, when it hits Earth’s magnetic field. Soon, many such shocks were recognized by astronomers in interstellar space. But in the decades since, “there has been a lot of simulations and theoretical modeling, but a lack of experiments” to understand how the processes work, Li says.

Li and his colleagues found a way to mimic the phenomena in the laboratory by generating a jet of low-density plasma using a set of six powerful laser beams, at the OMEGA laser facility at the University of Rochester, and aiming it at a thin-walled polyimide plastic bag filled with low-density hydrogen gas. The results reproduced many of the detailed instabilities observed in deep space, thus confirming that the conditions match closely enough to allow for detailed, close-up study of these elusive phenomena. A quantity called the mean free path of the plasma particles was measured as being much greater than the widths of the shock waves, Li says, thus meeting the formal definition of a collisionless shock.
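The collisionless criterion Li describes can be illustrated with a short calculation: a plasma counts as collisionless when the particles' mean free path greatly exceeds the shock width. All numbers below are assumed, order-of-magnitude placeholders for illustration, not values from the experiment.

```python
def mean_free_path_cm(density_per_cm3, cross_section_cm2):
    # lambda = 1 / (n * sigma): average distance a particle travels
    # between collisions.
    return 1.0 / (density_per_cm3 * cross_section_cm2)

n = 1e18               # assumed plasma number density, particles per cm^3
sigma = 1e-21          # assumed effective collision cross-section, cm^2
shock_width_cm = 0.01  # assumed shock width (100 micrometers)

lam = mean_free_path_cm(n, sigma)
# "Collisionless" here means the mean free path dwarfs the shock width.
is_collisionless = lam > 10 * shock_width_cm
print(f"mean free path: {lam:.0f} cm; collisionless: {is_collisionless}")
```

With these placeholder values the mean free path comes out five orders of magnitude larger than the shock width, which is the qualitative regime the paper reports.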

At the boundary of the lab-generated collisionless shock, the density of the plasma spiked dramatically. The team was able to measure the detailed effects on both the upstream and downstream sides of the shock front, allowing them to begin to differentiate the mechanisms involved in the transfer of energy between the two clouds, something that physicists have spent years trying to figure out. The results are consistent with one set of predictions based on something called the Fermi mechanism, Li says, but further experiments will be needed to definitively rule out some other mechanisms that have been proposed.

“For the first time we were able to directly measure the structure” of important parts of the collisionless shock, Li says. “People have been pursuing this for several decades.”

The research also showed exactly how much energy is transferred to particles that pass through the shock boundary, which accelerates them to speeds that are a significant fraction of the speed of light, producing what are known as cosmic rays. A better understanding of this mechanism “was the goal of this experiment, and that’s what we measured,” Li says, noting that they captured a full spectrum of the energies of the electrons accelerated by the shock.

“This report is the latest installment in a transformative series of experiments, annually reported since 2015, to emulate an actual astrophysical shock wave for comparison with space observations,” says Mark Koepke, a professor of physics at West Virginia University and chair of the Omega Laser Facility User Group, who was not involved in the study. “Computer simulations, space observations, and these experiments reinforce the physics interpretations that are advancing our understanding of the particle acceleration mechanisms in play in high-energy-density cosmic events such as gamma-ray-burst-induced outflows of relativistic plasma.”

Materials provided by Massachusetts Institute of Technology

Model predicts cognitive decline due to Alzheimer’s, up to two years out

A new model developed at MIT can help predict if patients at risk for Alzheimer’s disease will experience clinically significant cognitive decline due to the disease, by predicting their cognition test scores up to two years in the future.

The model could be used to improve the selection of candidate drugs and participant cohorts for clinical trials, which have been notoriously unsuccessful thus far. It would also let patients know they may experience rapid cognitive decline in the coming months and years, so they and their loved ones can prepare.

Pharmaceutical firms over the past two decades have injected hundreds of billions of dollars into Alzheimer’s research. Yet the field has been plagued with failure: Between 1998 and 2017, there were 146 unsuccessful attempts to develop drugs to treat or prevent the disease, according to a 2018 report from the Pharmaceutical Research and Manufacturers of America. In that time, only four new medicines were approved, and only to treat symptoms. More than 90 drug candidates are currently in development.

Studies suggest greater success in bringing drugs to market could come down to recruiting candidates who are in the disease’s early stages, before symptoms are evident, which is when treatment is most effective. In a paper to be presented next week at the Machine Learning for Health Care conference, MIT Media Lab researchers describe a machine-learning model that can help clinicians zero in on that specific cohort of participants.

They first trained a “population” model on an entire dataset that included clinically significant cognitive test scores and other biometric data from Alzheimer’s patients, as well as healthy individuals, collected during semiannual doctor’s visits. From the data, the model learns patterns that can help predict how the patients will score on cognitive tests taken between visits. For new participants, a second model, personalized for each patient, continuously updates score predictions based on newly recorded data, such as information collected during the most recent visits.

Experiments indicate accurate predictions can be made looking ahead six, 12, 18, and 24 months. Clinicians could thus use the model to help select at-risk participants for clinical trials, who are likely to demonstrate rapid cognitive decline, possibly even before other clinical symptoms emerge. Treating such patients early on may help clinicians better track which antidementia medicines are and aren’t working.

“Accurate prediction of cognitive decline from six to 24 months is critical to designing clinical trials,” says Oggi Rudovic, a Media Lab researcher. “Being able to accurately predict future cognitive changes can reduce the number of visits the participant has to make, which can be expensive and time-consuming. Apart from helping develop a useful drug, the goal is to help reduce the costs of clinical trials to make them more affordable and done on larger scales.”

Joining Rudovic on the paper are: Yuria Utsumi, an undergraduate student, and Kelly Peterson, a graduate student, both in the Department of Electrical Engineering and Computer Science; Ricardo Guerrero and Daniel Rueckert, both of Imperial College London; and Rosalind Picard, a professor of media arts and sciences and director of affective computing research in the Media Lab.

Population to personalization

For their work, the researchers leveraged the world’s largest Alzheimer’s disease clinical trial dataset, the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The dataset contains data from around 1,700 participants, with and without Alzheimer’s, recorded during semiannual doctor’s visits over 10 years.

Data includes their AD Assessment Scale-cognition sub-scale (ADAS-Cog13) scores, the most widely used cognitive metric for clinical trials of Alzheimer’s disease drugs. The test assesses memory, language, and orientation on a scale of increasing severity up to 85 points. The dataset also includes MRI scans, demographic and genetic information, and cerebrospinal fluid measurements.

In all, the researchers trained and tested their model on a sub-cohort of 100 participants, each of whom made more than 10 visits, had less than 85 percent missing data, and contributed more than 600 computable features. Of those participants, 48 were diagnosed with Alzheimer’s disease. But the data are sparse, with different combinations of features missing for most of the participants.

To tackle that, the researchers used the data to train a population model powered by a “nonparametric” probability framework, called Gaussian Processes (GPs), which has flexible parameters to fit various probability distributions and to process uncertainties in data. This technique measures similarities between variables, such as patient data points, to predict a value for an unseen data point — such as a cognitive score. The output also contains an estimate for how certain it is about the prediction. The model works robustly even when analyzing datasets with missing values or lots of noise from different data-collecting formats.
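As a rough illustration of how a Gaussian process yields both a prediction and an uncertainty estimate, here is a minimal zero-mean GP regressor on synthetic visit data. The kernel choice, length scale, noise level, and numbers are all assumptions made for this sketch, not the paper's actual model or data.

```python
import numpy as np

def rbf(a, b, length=12.0):
    # Squared-exponential kernel; the 12-month length scale is an assumption.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-2):
    # Standard zero-mean GP posterior: predictive mean and per-point variance.
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    K_s = rbf(x_train, x_test)
    K_inv = np.linalg.inv(K)
    mean = K_s.T @ K_inv @ y_train
    var = np.diag(rbf(x_test, x_test) - K_s.T @ K_inv @ K_s)
    return mean, var

# Synthetic visit times (months) and cognitive-score-like values.
x = np.array([0.0, 6.0, 12.0, 18.0])
y = np.array([10.0, 12.0, 15.0, 19.0])

mean, var = gp_predict(x, y, np.array([24.0]))
std = np.sqrt(max(var[0], 0.0))
print(f"predicted score at 24 months: {mean[0]:.1f} (+/- {std:.2f})")
```

The key property this shows is the one the article highlights: the output is not just a number but a distribution, so the model can report how confident it is at each prediction horizon.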

But, in evaluating the model on new patients from a held-out portion of participants, the researchers found the model’s predictions weren’t as accurate as they could be. So, they personalized the population model for each new patient. The system would then progressively fill in data gaps with each new patient visit and update the ADAS-Cog13 score prediction accordingly, by continuously updating the previously unknown distributions of the GPs. After about four visits, the personalized models significantly reduced the error rate in predictions. It also outperformed various traditional machine-learning approaches used for clinical data.

Learning how to learn

But the researchers found the personalized models’ results were still suboptimal. To fix that, they developed a “metalearning” scheme that learns to automatically choose which type of model, population or personalized, works best for any given participant at any given time, depending on the data being analyzed. Metalearning has been used before for computer vision and machine translation tasks to learn new skills or adapt to new environments rapidly with a few training examples. But this is the first time it’s been applied to tracking cognitive decline of Alzheimer’s patients, where limited data is a main challenge, Rudovic says.

The scheme essentially simulates how the different models perform on a given task — such as predicting an ADAS-Cog13 score — and learns the best fit. During each visit of a new patient, the scheme assigns the appropriate model based on the previous data. For patients with noisy, sparse data during early visits, for instance, population models make more accurate predictions. When patients start with more data, or collect more through subsequent visits, personalized models perform better.
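The selector logic described above can be sketched as follows. The error histories here are invented, and the real scheme is trained on metaknowledge rather than this simple average-error rule; the sketch is only meant to show the switch from population to personalized predictions as data accumulates.

```python
def select_model(history):
    # history: list of (population_error, personalized_error) pairs,
    # one per past visit. With no data yet, fall back to the population model.
    if not history:
        return "population"
    pop_err = sum(p for p, _ in history) / len(history)
    per_err = sum(q for _, q in history) / len(history)
    return "personalized" if per_err < pop_err else "population"

# Early on, sparse data favors the population model; later, the
# personalized model's errors drop and the selector switches.
early = [(2.0, 3.5), (2.1, 3.0)]
later = early + [(2.2, 1.0), (2.3, 0.8)]
print(select_model([]), select_model(early), select_model(later))
```

The "model on top of a model" Rudovic describes plays the role of `select_model` here, but learns its decision rule from data instead of averaging errors.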

This helped reduce the error rate for predictions by a further 50 percent. “We couldn’t find a single model or fixed combination of models that could give us the best prediction,” Rudovic says. “So, we wanted to learn how to learn with this metalearning scheme. It’s like a model on top of a model that acts as a selector, trained using metaknowledge to decide which model is better to deploy.”

Next, the researchers are hoping to partner with pharmaceutical firms to implement the model into real-world Alzheimer’s clinical trials. Rudovic says the model can also be generalized to predict various metrics for Alzheimer’s and other diseases.

TESS discovers three new planets nearby, including temperate “sub-Neptune”

NASA’s Transiting Exoplanet Survey Satellite, or TESS, has discovered three new worlds that are among the smallest, nearest exoplanets known to date. The planets orbit a star just 73 light-years away and include a small, rocky super-Earth and two sub-Neptunes — planets about half the size of our own icy giant.

The sub-Neptune furthest out from the star appears to be within a “temperate” zone, meaning that the very top of the planet’s atmosphere is within a temperature range that could support some forms of life. However, scientists say the planet’s atmosphere is likely a thick, ultradense heat trap that renders the planet’s surface too hot to host water or life.

Nevertheless, this new planetary system, which astronomers have dubbed TOI-270, is proving to have other curious qualities. For instance, all three planets appear to be relatively close in size. In contrast, our own solar system is populated with planetary extremes, from the small, rocky worlds of Mercury, Venus, Earth, and Mars, to the much more massive Jupiter and Saturn, and the more remote ice giants of Neptune and Uranus.

There’s nothing in our solar system that resembles an intermediate planet, with a size and composition somewhere between those of Earth and Neptune. But TOI-270 appears to host two such planets: both sub-Neptunes are smaller than our own Neptune and not much larger than the rocky planet in the system.

Astronomers believe TOI-270’s sub-Neptunes may be a “missing link” in planetary formation, as they are of an intermediate size and could help researchers determine whether small, rocky planets like Earth and more massive, icy worlds like Neptune follow the same formation path or evolve separately.

TOI-270 is an ideal system for answering such questions, because the star itself is nearby and therefore bright, and also unusually quiet. The star is an M-dwarf, a type of star that is normally extremely active, with frequent flares and solar storms. TOI-270 appears to be an older M-dwarf that has since quieted down, giving off a steady brightness, against which scientists can measure many properties of the orbiting planets, such as their mass and atmospheric composition.

“There are a lot of little pieces of the puzzle that we can solve with this system,” says Maximilian Günther, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research and lead author of a study published today in Nature Astronomy that details the discovery. “You can really do all the things you want to do in exoplanet science, with this system.”

Compare and contrast worlds in the TOI 270 system with these illustrations. Temperatures given for TOI 270 planets are equilibrium temperatures, calculated without the warming effects of any possible atmospheres. Credit: NASA’s Goddard Space Flight Center

A planetary pattern

Günther and his colleagues detected the three new planets after looking through measurements of stellar brightness taken by TESS. The MIT-developed satellite stares at patches of the sky for 27 days at a time, monitoring thousands of stars for possible transits — characteristic dips in brightness that could signal a planet temporarily blocking the star’s light as it passes in front of it.
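The transit idea, flagging dips in a star's measured brightness, can be sketched with a toy light curve. The flux values and depth threshold below are invented for illustration; real pipelines additionally model noise, limb darkening, and the periodicity of repeated transits.

```python
def find_transits(flux, baseline=1.0, depth=0.01):
    # Indices where brightness drops at least `depth` below the baseline.
    return [i for i, f in enumerate(flux) if baseline - f >= depth]

# Synthetic light curve: flat near 1.0, with a transit-like dip at steps 4-6.
flux = [1.0, 1.0, 1.001, 0.999, 0.985, 0.984, 0.986, 1.0, 1.0]
print(find_transits(flux))  # [4, 5, 6]
```

A genuine planet candidate would show such dips recurring at a fixed interval, which is what sets the orbital period.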

The team isolated several such signals from a star located 73 light-years away in the southern sky. They named the star TOI-270, for the 270th “TESS Object of Interest” identified to date. The researchers used ground-based instruments to follow up on the star’s activity and confirmed that the signals are the result of three orbiting exoplanets: planet b, a rocky super-Earth with a roughly three-day orbit; planet c, a sub-Neptune with a five-day orbit; and planet d, another sub-Neptune slightly farther out, with an 11-day orbit.

Günther notes that the planets seem to line up in what astronomers refer to as a “resonant chain,” meaning that the ratios of their orbital periods are close to ratios of small whole numbers — in this case, 3:5 for the inner pair and 2:1 for the outer pair — and that the planets are therefore in “resonance” with each other. Astronomers have discovered other small stars with similarly resonant planetary formations. And in our own solar system, the moons of Jupiter also happen to line up in resonance with each other.
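The near-resonant ratios can be checked directly from the approximate periods quoted in the article (about 3, 5, and 11 days); exact catalog values differ slightly, so this is only a consistency check.

```python
from fractions import Fraction

def nearest_small_ratio(p_inner, p_outer, max_denominator=6):
    # Approximate the inner/outer period ratio by a small-integer fraction.
    return Fraction(p_inner / p_outer).limit_denominator(max_denominator)

# Approximate periods from the article: about 3, 5, and 11 days.
inner_pair = nearest_small_ratio(3.0, 5.0)   # planets b and c
outer_pair = nearest_small_ratio(5.0, 11.0)  # planets c and d

print(inner_pair, outer_pair)  # 3/5 and 1/2 (i.e., the 2:1 outer resonance)
```

The extrapolation Günther mentions works the same way: a hypothetical next planet would be expected at a period forming another small-integer ratio with planet d's 11 days.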

“For TOI-270, these planets line up like pearls on a string,” Günther says. “That’s a very interesting thing, because it lets us study their dynamical behavior. And you can almost expect, if there are more planets, the next one would be somewhere further out, at another integer ratio.”

“An exceptional laboratory”

TOI-270’s discovery initially caused a stir of excitement within the TESS science team, as it seemed, in the first analysis, that planet d might lie in the star’s habitable zone, a region that would be cool enough for the planet’s surface to support water, and possibly life. But the researchers soon realized that the planet’s atmosphere was probably extremely thick, and would therefore generate an intense greenhouse effect, causing the planet’s surface to be too hot to be habitable.

But Günther says there is a good possibility that the system hosts other planets, further out from planet d, that might well lie within the habitable zone. Planet d, with an 11-day orbit, is about 10 million kilometers out from the star. Günther says that, given that the star is small and relatively cool — about half as hot as the sun — its habitable zone could potentially begin at around 15 million kilometers. But whether a planet exists within this zone, and whether it is habitable, depends on a host of other parameters, such as its size, mass, and atmospheric conditions.
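The square-root scaling behind Günther's habitable-zone estimate can be sketched as follows: the distance at which a planet receives Earth-like stellar flux grows as the square root of the star's luminosity. The luminosity used here is an assumed value chosen only to illustrate the scaling, not a measurement of TOI-270.

```python
AU_KM = 1.496e8  # kilometers per astronomical unit

def earth_flux_distance_km(luminosity_solar):
    # Flux falls as 1/d^2, so Earth-like flux occurs at d = sqrt(L) AU
    # for a star of luminosity L in solar units.
    return AU_KM * luminosity_solar ** 0.5

# Assumed luminosity of 1 percent of the sun's, for illustration only.
d_km = earth_flux_distance_km(0.01)
print(f"{d_km / 1e6:.0f} million km")  # 15 million km
```

For a dim M-dwarf, this zone sits far closer to the star than Earth's orbit, which is why an 11-day planet at 10 million kilometers can still fall short of it.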

Fortunately, the team writes in their paper that “the host star, TOI-270, is remarkably well-suited for future habitability searches, as it is particularly quiet.” The researchers plan to focus other instruments, including the upcoming James Webb Space Telescope, on TOI-270, to pin down various properties of the three planets, as well as search for additional planets in the star’s habitable zone.

“TOI-270 is a true Disneyland for exoplanet science, and one of the prime systems TESS was set out to discover,” Günther says. “It is an exceptional laboratory for not one, but many reasons — it really ticks all the boxes.”

Seeking new physics, scientists borrow from social networks

When two protons collide, they release pyrotechnic jets of particles, the details of which can tell scientists something about the nature of physics and the fundamental forces that govern the universe.

Enormous particle accelerators such as the Large Hadron Collider can generate billions of such collisions per minute by smashing together beams of protons at close to the speed of light. Scientists then search through measurements of these collisions in hopes of unearthing weird, unpredictable behavior beyond the established playbook of physics known as the Standard Model.

Now MIT physicists have found a way to automate the search for strange and potentially new physics, with a technique that determines the degree of similarity between pairs of collision events. In this way, they can estimate the relationships among hundreds of thousands of collisions in a proton beam smashup, and create a geometric map of events according to their degree of similarity.

The researchers say their new technique is the first to relate multitudes of particle collisions to each other, similar to a social network.

“Maps of social networks are based on the degree of connectivity between people, and for example, how many neighbors you need before you get from one friend to another,” says Jesse Thaler, associate professor of physics at MIT. “It’s the same idea here.”

Thaler says this social networking of particle collisions can give researchers a sense of the more connected, and therefore more typical, events that occur when protons collide. They can also quickly spot the dissimilar events, on the outskirts of a collision network, which they can further investigate for potentially new physics. He and his collaborators, graduate students Patrick Komiske and Eric Metodiev, carried out the research at the MIT Center for Theoretical Physics and the MIT Laboratory for Nuclear Science. They detail their new technique this week in the journal Physical Review Letters.

Seeing the data agnostically

Thaler’s group focuses, in part, on developing techniques to analyze open data from the LHC and other particle collider facilities in hopes of digging up interesting physics that others might have initially missed.

“Having access to this public data has been wonderful,” Thaler says. “But it’s daunting to sift through this mountain of data to figure out what’s going on.”

Physicists normally look through collider data for specific patterns or energies of collisions that they believe to be of interest based on theoretical predictions. Such was the case for the discovery of the Higgs boson, the elusive elementary particle that was predicted by the Standard Model. The particle’s properties were theoretically outlined in detail but had not been observed until 2012, when physicists, knowing approximately what to look for, found signatures of the Higgs boson hidden amid trillions of proton collisions.

But what if particles exhibit behavior beyond what the Standard Model predicts, that physicists have no theory to anticipate?

Thaler, Komiske, and Metodiev have landed on a novel way to sift through collider data without knowing ahead of time what to look for. Rather than consider a single collision event at a time, they looked for ways to compare multiple events with each other, with the idea that perhaps by determining which events are more typical and which are less so, they might pick out outliers with potentially interesting, unexpected behavior.

“What we’re trying to do is to be agnostic about what we think is new physics or not,” says Metodiev. “We want to let the data speak for itself.”

Moving dirt

Particle collider data are jam-packed with billions of proton collisions, each of which comprises individual sprays of particles. The team realized these sprays are essentially point clouds — collections of dots, similar to the point clouds that represent scenes and objects in computer vision. Researchers in that field have developed an arsenal of techniques to compare point clouds, for example to enable robots to accurately identify objects and obstacles in their environment.

Metodiev and Komiske utilized similar techniques to compare point clouds between pairs of collisions in particle collider data. In particular, they adapted an existing algorithm designed to calculate the optimal amount of energy, or “work,” needed to transform one point cloud into another. The crux of the algorithm is based on an abstract idea known as the “earth mover’s distance.”

“You can imagine deposits of energy as being dirt, and you’re the earth mover who has to move that dirt from one place to another,” Thaler explains. “The amount of sweat that you expend getting from one configuration to another is the notion of distance that we’re calculating.”

In other words, the more energy it takes to rearrange one point cloud to resemble another, the farther apart they are in terms of their similarity. Applying this idea to particle collider data, the team was able to calculate the optimal energy it would take to transform a given point cloud into another, one pair at a time. For each pair, they assigned a number, based on the “distance,” or degree of similarity they calculated between the two. They then considered each point cloud as a single point and arranged these points in a social network of sorts.
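A minimal one-dimensional version of the earth mover's distance makes the idea concrete. Real collider events are higher-dimensional point clouds and the paper uses a more general optimal-transport formulation; the "events" below are toy distributions of energy (the "dirt") at angular positions.

```python
def emd_1d(positions_a, weights_a, positions_b, weights_b):
    # 1-D earth mover's distance between two weighted distributions with
    # equal total weight. Sweep left to right, tracking the net "dirt"
    # that must flow past each point; work = flow * distance moved.
    deposits = sorted(
        [(p, w, 0) for p, w in zip(positions_a, weights_a)]
        + [(p, w, 1) for p, w in zip(positions_b, weights_b)]
    )
    work, flow, prev = 0.0, 0.0, deposits[0][0]
    for pos, w, which in deposits:
        work += abs(flow) * (pos - prev)
        flow += w if which == 0 else -w
        prev = pos
    return work

# Two toy events: one central energy deposit vs. the same energy split
# into two prongs one unit to either side.
d = emd_1d([0.0], [2.0], [-1.0, 1.0], [1.0, 1.0])
print(d)  # 2.0: one unit of energy moved by 1 in each direction
```

Each pairwise distance computed this way becomes one edge weight in the event network the team constructs.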

Three particle collision events, in the form of jets, obtained from the CMS Open Data, form a triangle to represent an abstract “space of events.” The animation depicts how one jet can be optimally rearranged into another.

The team has been able to construct a social network of 100,000 pairs of collision events, from open data provided by the LHC, using their technique. The researchers hope that by looking at collision datasets as networks, scientists may be able to quickly flag potentially interesting events at the edges of a given network.

“We’d like to have an Instagram page for all the craziest events, or point clouds, recorded by the LHC on a given day,” says Komiske. “This technique is an ideal way to determine that image. Because you just find the thing that’s farthest away from everything else.”

Typical collider datasets that are made publicly available normally include several million events, which have been preselected from an original chaos of billions of collisions that occurred at any given moment in a particle accelerator. Thaler says the team is working on ways to scale up their technique to construct larger networks, to potentially visualize the “shape,” or general relationships within an entire dataset of particle collisions.

In the near future, he envisions testing the technique on historical data that physicists now know contain milestone discoveries, such as the first detection in 1995 of the top quark, the most massive of all known elementary particles.

“The top quark is an object that gives rise to these funny, three-pronged sprays of radiation, which are very dissimilar from typical sprays of one or two prongs,” Thaler says. “If we could rediscover the top quark in this archival data, with this technique that doesn’t need to know what new physics it is looking for, it would be very exciting and could give us confidence in applying this to current datasets, to find more exotic objects.”

Biologists and mathematicians team up to explore tissue folding

As embryos develop, they follow predetermined patterns of tissue folding, so that individuals of the same species end up with nearly identically shaped organs and very similar body shapes.

MIT scientists have now discovered a key feature of embryonic tissue that helps explain how this process is carried out so faithfully each time. In a study of fruit flies, they found that the reproducibility of tissue folding is generated by a network of proteins that connect like a fishing net, creating many alternative pathways that tissues can use to fold the right way.

“What we found is that there’s a lot of redundancy in the network,” says Adam Martin, an MIT associate professor of biology and the senior author of the study. “The cells are interacting and connecting with each other mechanically, but you don’t see individual cells taking on an all-important role. This means that if one cell gets damaged, other cells can still connect to disparate parts of the tissue.”

To uncover these network features, Martin worked with Jörn Dunkel, an MIT associate professor of physical applied mathematics and an author of the paper, to apply an algorithm normally used by astronomers to study the structure of galaxies.

Hannah Yevick, an MIT postdoc, is the lead author of the study, which appears today in Developmental Cell. Graduate student Pearson Miller is also an author of the paper.

A safety net

During embryonic development, tissues change their shape through a process known as morphogenesis. One important way tissues change shape is to fold, which allows flat sheets of embryonic cells to become tubes and other important shapes for organs and other body parts. Previous studies in fruit flies have shown that even when some of these embryonic cells are damaged, sheets can still fold into their correct shapes.

“This is a process that’s fairly reproducible, and so we wanted to know what makes it so robust,” Martin says.

In this study, the researchers focused on the process of gastrulation, during which the embryo is reorganized from a single-layered sphere to a more complex structure with multiple layers. This process, and other morphogenetic processes similar to fruit fly tissue folding, also occur in human embryos. The embryonic cells involved in gastrulation contain in their cytoplasm proteins called myosin and actin, which form cables and connect at junctions between cells to form a network across the tissue. Martin and Yevick had hypothesized that the network of cell connectivity might play a role in the robustness of the tissue folding, but until now, there was no good way to trace the connections of the network.

To achieve that, Martin’s lab joined forces with Dunkel, who studies the physics of soft surfaces and flowing matter — for example, wrinkle formation and patterns of bacterial streaming. For this study, Dunkel had the idea to apply a mathematical procedure that can identify topological features of a three-dimensional structure, analogous to ridges and valleys in a landscape. Astronomers use this algorithm to identify galaxies, and in this case, the researchers used it to trace the actomyosin networks across and between the cells in a sheet of tissue.

“Once you have the network, you can apply standard methods from network analysis — the same kind of analysis that you would apply to streets or other transport networks, or the blood circulation network, or any other form of network,” Dunkel says.

Among other things, this kind of analysis can reveal the structure of the network and how efficiently information flows along it. One important question is how well a network adapts if part of it gets damaged or blocked. The MIT team found that the actomyosin network contains a great deal of redundancy — that is, most of the “nodes” of the network are connected to many other nodes.

This built-in redundancy is analogous to a good public transit system, where if one bus or train line goes down, you can still get to your destination. Because cells can generate mechanical tension along many different pathways, they can fold the right way even if many of the cells in the network are damaged.

“If you and I are holding a single rope, and then we cut it in the middle, it would come apart. But if you have a net, and cut it in some places, it still stays globally connected and can transmit forces, as long as you don’t cut all of it,” Dunkel says.
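Dunkel's rope-versus-net picture can be made concrete with a small connectivity check on toy graphs. The graphs and the choice of which node to "cut" are invented for illustration; the paper's analysis runs on actomyosin networks traced from microscopy data.

```python
def connected_after_removal(adj, removed):
    # True if the graph stays connected once the `removed` nodes are cut.
    # Depth-first search from any surviving node, skipping removed ones.
    nodes = [n for n in adj if n not in removed]
    if not nodes:
        return False
    seen, stack = {nodes[0]}, [nodes[0]]
    while stack:
        for nbr in adj[stack.pop()]:
            if nbr not in removed and nbr not in seen:
                seen.add(nbr)
                stack.append(nbr)
    return len(seen) == len(nodes)

# A "rope": a simple chain. Cutting the middle disconnects it.
rope = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
# A "net": four fully interconnected nodes, with many alternative paths.
net = {0: [1, 2, 3], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2]}

print(connected_after_removal(rope, {1}))  # False: the chain falls apart
print(connected_after_removal(net, {1}))   # True: redundant paths remain
```

The redundancy the team measured is exactly this property at tissue scale: most nodes have enough alternative connections that local damage does not sever the force-transmitting network.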

Folding framework

The researchers also found that the connections between cells preferentially organize themselves to run in the same direction as the furrow that forms in the early stages of folding.

“We think this is setting up a frame around which the tissue will adopt its shape,” Martin says. “If you prevent the directionality of the connections, then what happens is you can still get folding but it will fold along the wrong axis.”

Although this study was done in fruit flies, similar folding occurs in vertebrates (including humans) during the formation of the neural tube, which is the precursor to the brain and spinal cord. Martin now plans to apply the techniques he used in fruit flies to see if the actomyosin network is organized the same way in the neural tube of mice. Defects in the closure of the neural tube can lead to birth defects such as spina bifida.

“We would like to understand how it goes wrong,” Martin says. “It’s still not clear whether it’s the sealing up of the tube that’s problematic or whether there are defects in the folding process.”

Materials provided by Massachusetts Institute of Technology

Hydration sensor could improve dialysis

For patients with kidney failure who need dialysis, removing fluid at the correct rate and stopping at the right time is critical. This typically requires guessing how much water to remove and carefully monitoring the patient for sudden drops in blood pressure.

Currently, there is no reliable, easy way to measure hydration levels in these patients, who number around half a million in the United States. However, researchers from MIT and Massachusetts General Hospital have now developed a portable sensor that can accurately measure patients’ hydration levels using a technique known as nuclear magnetic resonance (NMR) relaxometry.

Such a device could be useful for not only dialysis patients but also people with congestive heart failure, as well as athletes and elderly people who may be in danger of becoming dehydrated, says Michael Cima, the David H. Koch Professor of Engineering in MIT’s Department of Materials Science and Engineering.

“There’s a tremendous need across many different patient populations to know whether they have too much water or too little water,” says Cima, who is the senior author of the study and a member of MIT’s Koch Institute for Integrative Cancer Research. “This is a way we could measure directly, in every patient, how close they are to a normal hydration state.”

The portable device is based on the same technology as magnetic resonance imaging (MRI) scanners but can obtain measurements at a fraction of the cost of MRI, and in much less time, because there is no imaging involved.

Lina Colucci, a former graduate student in health sciences and technology, is the lead author of the paper, which appears in the July 24 issue of Science Translational Medicine. Other authors of the paper include MIT graduate student Matthew Li; MGH nephrologists Kristin Corapi, Andrew Allegretti, and Herbert Lin; MGH research fellow Xavier Vela Parada; MGH Chief of Medicine Dennis Ausiello; and Harvard Medical School assistant professor in radiology Matthew Rosen.


Hydration status

Cima began working on this project about 10 years ago, after realizing that there was a critical need for an accurate, noninvasive way to measure hydration. Currently, the available methods are either invasive, subjective, or unreliable. Doctors most frequently assess overload (hypervolemia) by a few physical signs such as examining the size of the jugular vein, pressing on the skin, or examining the ankles where water might pool.

The MIT team decided to try a different approach, based on NMR. Cima had previously launched a company called T2 Biosystems that uses small NMR devices to diagnose bacterial infections by analyzing patient blood samples. One day, he had the idea to use the devices to try to measure water content in tissue, and a few years ago, the researchers got a grant from the MIT-MGH Strategic Partnership to do a small clinical trial for monitoring hydration. They studied both healthy controls and patients with end-stage renal disease who regularly underwent dialysis.

One of the main goals of dialysis is to remove fluid in order to bring patients to their “dry weight,” which is the weight at which their fluid levels are optimized. Determining a patient’s dry weight is extremely challenging, however. Doctors currently estimate dry weight based on physical signs as well as through trial-and-error over multiple dialysis sessions.

The MIT/MGH team showed that quantitative NMR, which works by measuring a property of hydrogen atoms called T2 relaxation time, can provide much more accurate measurements. The T2 signal measures both the environment and quantity of hydrogen atoms (or water molecules) present.

“The beauty of magnetic resonance compared to other modalities for assessing hydration is that the magnetic resonance signal comes exclusively from hydrogen atoms. And most of the hydrogen atoms in the human body are found in water molecules,” Colucci says.
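The core of the relaxometry idea is that the NMR signal decays exponentially, S(t) = S0·exp(−t/T2), and T2 is recovered from the measured decay curve. The sketch below uses invented, noiseless numbers (not the device's actual data) to show how a T2 value can be fit from samples of the decay:

```python
import math

# Hedged sketch of T2 relaxometry: the signal decays as S(t) = S0*exp(-t/T2),
# and T2 is recovered from the curve. All values are illustrative only.

def simulate_decay(t2_ms, times_ms, s0=1.0):
    return [s0 * math.exp(-t / t2_ms) for t in times_ms]

def fit_t2(times_ms, signal):
    """Log-linear least-squares fit: ln S = ln S0 - t/T2, so slope = -1/T2."""
    ys = [math.log(s) for s in signal]
    n = len(times_ms)
    tbar = sum(times_ms) / n
    ybar = sum(ys) / n
    slope = sum((t - tbar) * (y - ybar) for t, y in zip(times_ms, ys)) / \
            sum((t - tbar) ** 2 for t in times_ms)
    return -1.0 / slope

times = [float(t) for t in range(10, 200, 10)]          # sample times, ms
signal = simulate_decay(t2_ms=60.0, times_ms=times)      # assumed T2 of 60 ms
print(round(fit_t2(times, signal), 1))                   # recovers 60.0
```

In tissue, different water environments (e.g., intracellular versus extracellular) contribute different T2 components, which is what lets the measurement track hydration state.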

The researchers used their device to measure fluid volume in patients before and after they underwent dialysis. The results showed that this technique could distinguish healthy patients from those needing dialysis with just the first measurement. In addition, the measurement correctly showed dialysis patients moving closer to a normal hydration state over the course of their treatment.

Furthermore, the NMR measurements were able to detect the presence of excess fluid in the body before traditional clinical signs — such as visible fluid accumulation below the skin — were present. The sensor could be used by physicians to determine when a patient has reached their true dry weight, and this determination could be personalized at each dialysis treatment.

Better monitoring

The researchers are now planning additional clinical trials with dialysis patients. They expect that dialysis, which currently costs the United States more than $40 billion per year, would be one of the biggest applications for this technology. This kind of monitoring could also be useful for patients with congestive heart failure, which affects about 5 million people in the United States.

“The water retention issues of congestive heart failure patients are very significant,” Cima says. “Our sensor may offer the possibility of a direct measure of how close they are to a normal fluid state. This is important because identifying fluid accumulation early has been shown to reduce hospitalization, but right now there are no ways to quantify low-level fluid accumulation in the body. Our technology could potentially be used at home as a way for the care team to get that early warning.”

Sahir Kalim, a nephrologist and assistant professor of medicine at Massachusetts General Hospital, described the MIT approach as “highly novel.”

“The development of a bedside device that can accurately inform providers about how much fluid a patient should ideally have removed during their dialysis treatment would likely be one of the most significant developments in dialysis care in many years,” says Kalim, who was not involved in the study. “Colucci and colleagues have made a promising innovation that may one day yield this impact.”

In their study of the healthy control subjects, the researchers also incidentally discovered that they could detect dehydration. This could make the device useful for monitoring elderly people, who often become dehydrated because their sense of thirst lessens with age, or athletes taking part in marathons or other endurance events. The researchers are planning future clinical trials to test the potential of their technology to detect dehydration.

Materials provided by Massachusetts Institute of Technology

Microfluidics device helps diagnose sepsis in minutes

A novel sensor designed by MIT researchers could dramatically accelerate the process of diagnosing sepsis, a leading cause of death in U.S. hospitals that kills nearly 250,000 patients annually.

Sepsis occurs when the body’s immune response to infection triggers an inflammation chain reaction throughout the body, causing high heart rate, high fever, shortness of breath, and other issues. If left unchecked, it can lead to septic shock, where blood pressure falls and organs shut down. To diagnose sepsis, doctors traditionally rely on various diagnostic tools, including vital signs, blood tests, and other imaging and lab tests.

In recent years, researchers have found protein biomarkers in the blood that are early indicators of sepsis. One promising candidate is interleukin-6 (IL-6), a protein produced in response to inflammation. In sepsis patients, IL-6 levels can rise hours before other symptoms begin to show. But even at these elevated levels, the concentration of this protein in the blood is too low overall for traditional assay devices to detect it quickly.

In a paper being presented this week at the Engineering in Medicine and Biology Conference, MIT researchers describe a microfluidics-based system that automatically detects clinically significant levels of IL-6 for sepsis diagnosis in about 25 minutes, using less than a finger prick of blood.

In one microfluidic channel, microbeads laced with antibodies mix with a blood sample to capture the IL-6 biomarker. In another channel, only beads containing the biomarker attach to an electrode. Running voltage through the electrode produces an electrical signal for each biomarker-laced bead, which is then converted into the biomarker concentration level.

“For an acute disease, such as sepsis, which progresses very rapidly and can be life-threatening, it’s helpful to have a system that rapidly measures these nonabundant biomarkers,” says first author Dan Wu, a PhD student in the Department of Mechanical Engineering. “You can also frequently monitor the disease as it progresses.”

Joining Wu on the paper is Joel Voldman, a professor and associate head of the Department of Electrical Engineering and Computer Science, co-director of the Medical Electronic Device Realization Center, and a principal investigator in the Research Laboratory of Electronics and the Microsystems Technology Laboratories.

Integrated, automated design

Traditional assays that detect protein biomarkers are bulky, expensive machines relegated to labs, requiring about a milliliter of blood and producing results in hours. In recent years, portable “point-of-care” systems have been developed that use microliters of blood to get similar results in about 30 minutes.

But point-of-care systems can be very expensive, since most use pricey optical components to detect the biomarkers. They also capture only a small number of proteins, many of which are among the more abundant ones in blood. Any effort to decrease the price, shrink down components, or broaden the range of proteins detected negatively impacts their sensitivity.

In their work, the researchers wanted to shrink components of the magnetic-bead-based assay, which is often used in labs, onto an automated microfluidics device roughly a few square centimeters in size. That required manipulating beads in micron-sized channels and fabricating a device in the Microsystems Technology Laboratories that automated the movement of fluids.

The beads are coated with an antibody that attracts IL-6, as well as a catalyzing enzyme called horseradish peroxidase. The beads and blood sample are injected into the device, entering into an “analyte-capture zone,” which is basically a loop. Along the loop is a peristaltic pump — commonly used for controlling liquids — with valves automatically controlled by an external circuit. Opening and closing the valves in specific sequences circulates the blood and beads to mix together. After about 10 minutes, the IL-6 proteins have bound to the antibodies on the beads.

Automatically reconfiguring the valves at that time forces the mixture into a smaller loop, called the “detection zone,” where they stay trapped. A tiny magnet collects the beads for a brief wash before releasing them around the loop. After about 10 minutes, many beads have stuck on an electrode coated with a separate antibody that attracts IL-6. At that time, a solution flows into the loop and washes the untethered beads, while the ones with IL-6 protein remain on the electrode.

The solution carries a specific molecule that reacts to the horseradish enzyme to create a compound that responds to electricity. When a voltage is applied to the solution, each remaining bead creates a small current. A common chemistry technique called “amperometry” converts that current into a readable signal. The device counts the signals and calculates the concentration of IL-6.
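The final conversion step, from measured current to concentration, typically runs through a calibration curve built from known standards. The sketch below illustrates that idea with invented numbers (the calibration points, currents, and units are assumptions, not values from the paper):

```python
# Sketch of the readout step described above: amperometry yields a current
# proportional to the number of IL-6-bearing beads on the electrode, and a
# calibration curve maps current back to concentration. Numbers are invented.

def fit_line(xs, ys):
    """Ordinary least squares for current = a * concentration + b."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    a = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    return a, ybar - a * xbar

# Hypothetical calibration standards: concentration (pg/mL) vs. current (nA)
conc = [0.0, 50.0, 100.0, 200.0]
current = [0.5, 10.5, 20.5, 40.5]

a, b = fit_line(conc, current)
measured_current = 16.0
concentration = (measured_current - b) / a   # invert the calibration curve
print(round(concentration, 1))               # -> 77.5 pg/mL
```

A real device would also validate linearity near the limit of detection, which for this sensor is reported as 16 picograms per milliliter.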

“On their end, doctors just load in a blood sample using a pipette. Then, they press a button and 25 minutes later they know the IL-6 concentration,” Wu says.

The device uses about 5 microliters of blood, which is about a quarter the volume of blood drawn from a finger prick and a fraction of the 100 microliters required to detect protein biomarkers in lab-based assays. The device captures IL-6 concentrations as low as 16 picograms per milliliter, which is below the concentrations that signal sepsis, meaning the device is sensitive enough to provide clinically relevant detection.

A general platform

The current design has eight separate microfluidics channels to measure as many different biomarkers or blood samples in parallel. Different antibodies and enzymes can be used in separate channels to detect different biomarkers, or different antibodies can be used in the same channel to detect several biomarkers simultaneously.

Next, the researchers plan to create a panel of important sepsis biomarkers for the device to capture, including interleukin-6, interleukin-8, C-reactive protein, and procalcitonin. But there’s really no limit to how many different biomarkers the device can measure, for any disease, Wu says. Notably, more than 200 protein biomarkers for various diseases and conditions have been approved by the U.S. Food and Drug Administration.

“This is a very general platform,” Wu says. “If you want to increase the device’s physical footprint, you can scale up and design more channels to detect as many biomarkers as you want.”

Materials provided by Massachusetts Institute of Technology

How expectation influences perception

For decades, research has shown that our perception of the world is influenced by our expectations. These expectations, also called “prior beliefs,” help us make sense of what we are perceiving in the present, based on similar past experiences. Consider, for instance, how a shadow on a patient’s X-ray image, easily missed by a less experienced intern, jumps out at a seasoned physician. The physician’s prior experience helps her arrive at the most probable interpretation of a weak signal.

The process of combining prior knowledge with uncertain evidence is known as Bayesian integration and is believed to widely impact our perceptions, thoughts, and actions. Now, MIT neuroscientists have discovered distinctive brain signals that encode these prior beliefs. They have also found how the brain uses these signals to make judicious decisions in the face of uncertainty.

“How these beliefs come to influence brain activity and bias our perceptions was the question we wanted to answer,” says Mehrdad Jazayeri, the Robert A. Swanson Career Development Professor of Life Sciences, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

The researchers trained animals to perform a timing task in which they had to reproduce different time intervals. Performing this task is challenging because our sense of time is imperfect and can go too fast or too slow. However, when intervals are consistently within a fixed range, the best strategy is to bias responses toward the middle of the range. This is exactly what animals did. Moreover, recording from neurons in the frontal cortex revealed a simple mechanism for Bayesian integration: Prior experience warped the representation of time in the brain so that patterns of neural activity associated with different intervals were biased toward those that were within the expected range.

MIT postdoc Hansem Sohn, former postdoc Devika Narain, and graduate student Nicolas Meirhaeghe are the lead authors of the study, which appears in the July 15 issue of Neuron.

Ready, set, go

Statisticians have known for centuries that Bayesian integration is the optimal strategy for handling uncertain information. When we are uncertain about something, we automatically rely on our prior experiences to optimize behavior.

“If you can’t quite tell what something is, but from your prior experience you have some expectation of what it ought to be, then you will use that information to guide your judgment,” Jazayeri says. “We do this all the time.”

In this new study, Jazayeri and his team wanted to understand how the brain encodes prior beliefs, and put those beliefs to use in the control of behavior. To that end, the researchers trained animals to reproduce a time interval, using a task called “ready-set-go.” In this task, animals measure the time between two flashes of light (“ready” and “set”) and then generate a “go” signal by making a delayed response after the same amount of time has elapsed.

They trained the animals to perform this task in two contexts. In the “Short” context, intervals varied between 480 and 800 milliseconds, and in the “Long” context, intervals were between 800 and 1,200 milliseconds. At the beginning of the task, the animals were given the information about the context (via a visual cue) and therefore knew to expect intervals from either the shorter or longer range.

Jazayeri had previously shown that humans performing this task tend to bias their responses toward the middle of the range. Here, they found that animals do the same. For example, if animals believed the interval would be short, and were given an interval of 800 milliseconds, the interval they produced was a little shorter than 800 milliseconds. Conversely, if they believed it would be longer, and were given the same 800-millisecond interval, they produced an interval a bit longer than 800 milliseconds.
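This bias is exactly what a Bayesian observer predicts: a uniform prior over the context's range, combined with a noisy measurement, yields a posterior-mean estimate pulled toward the middle of the range. The sketch below is a minimal illustration; the noise level and grid are assumptions, not the study's fitted parameters.

```python
import math

# Minimal Bayesian-integration sketch for the "Short" context (480-800 ms).
# Prior: uniform on [lo, hi]. Likelihood: the noisy measurement m of the
# true interval t is Gaussian with timing noise sigma (an assumed value).

def bayes_estimate(m, lo=480.0, hi=800.0, sigma=80.0, steps=2000):
    """Posterior-mean estimate of the interval given measurement m."""
    num = den = 0.0
    for i in range(steps + 1):
        t = lo + (hi - lo) * i / steps                  # candidate interval
        like = math.exp(-0.5 * ((m - t) / sigma) ** 2)  # Gaussian likelihood
        num += t * like                                 # uniform prior weight
        den += like
    return num / den

# A measurement at the top of the range is pulled back toward the middle,
# just as the animals' 800 ms productions were slightly shorter than 800 ms:
print(bayes_estimate(800.0))   # strictly less than 800
```

Running the same computation with the “Long” prior (800 to 1,200 ms) pulls an 800-millisecond measurement upward instead, reproducing the opposite bias described above.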

“Trials that were identical in almost every possible way, except the animal’s belief, led to different behaviors,” Jazayeri says. “That was compelling experimental evidence that the animal is relying on its own belief.”

Once they had established that the animals relied on their prior beliefs, the researchers set out to find how the brain encodes prior beliefs to guide behavior. They recorded activity from about 1,400 neurons in a region of the frontal cortex, which they have previously shown is involved in timing.

During the “ready-set” epoch, the activity profile of each neuron evolved in its own way, and about 60 percent of the neurons had different activity patterns depending on the context (Short versus Long). To make sense of these signals, the researchers analyzed the evolution of neural activity across the entire population over time and found that prior beliefs bias behavioral responses by warping the neural representation of time toward the middle of the expected range.

“We have never seen such a concrete example of how the brain uses prior experience to modify the neural dynamics by which it generates sequences of neural activities, to correct for its own imprecision. This is the unique strength of this paper: bringing together perception, neural dynamics, and Bayesian computation into a coherent framework, supported by both theory and measurements of behavior and neural activities,” says Mate Lengyel, a professor of computational neuroscience at Cambridge University, who was not involved in the study.

Embedded knowledge

Researchers believe that prior experiences change the strength of connections between neurons. The strength of these connections, also known as synapses, determines how neurons act upon one another and constrains the patterns of activity that a network of interconnected neurons can generate. The finding that prior experiences warp the patterns of neural activity provides a window onto how experience alters synaptic connections. “The brain seems to embed prior experiences into synaptic connections so that patterns of brain activity are appropriately biased,” Jazayeri says.

As an independent test of these ideas, the researchers developed a computer model consisting of a network of neurons that could perform the same ready-set-go task. Using techniques borrowed from machine learning, they were able to modify the synaptic connections and create a model that behaved like the animals.

These models are extremely valuable as they provide a substrate for the detailed analysis of the underlying mechanisms, a procedure that is known as “reverse-engineering.” Remarkably, reverse-engineering the model revealed that it solved the task the same way the monkeys’ brains did. The model also had a warped representation of time according to prior experience.

The researchers used the computer model to further dissect the underlying mechanisms using perturbation experiments that are currently impossible to do in the brain. Using this approach, they were able to show that unwarping the neural representations removes the bias in the behavior. This important finding validated the critical role of warping in Bayesian integration of prior knowledge.

The researchers now plan to study how the brain builds up and slowly fine-tunes the synaptic connections that encode prior beliefs as an animal is learning to perform the timing task.

Reference: Neuron

Materials provided by Massachusetts Institute of Technology

Automated system generates robotic parts for novel tasks

An automated system developed by MIT researchers designs and 3-D prints complex robotic parts called actuators that are optimized according to an enormous number of specifications. In short, the system does automatically what is virtually impossible for humans to do by hand.

In a paper published today in Science Advances, the researchers demonstrate the system by fabricating actuators — devices that mechanically control robotic systems in response to electrical signals — that show different black-and-white images at different angles. One actuator, for instance, portrays a Vincent van Gogh portrait when laid flat. Tilted at an angle when it’s activated, however, it portrays the famous Edvard Munch painting “The Scream.” The researchers also 3-D printed floating water lilies with petals equipped with arrays of actuators and hinges that fold up in response to magnetic fields run through conductive fluids.

The actuators are made from a patchwork of three different materials, each with a different light or dark color and a property — such as flexibility and magnetization — that controls the actuator’s angle in response to a control signal. Software first breaks down the actuator design into millions of three-dimensional pixels, or “voxels,” that can each be filled with any of the materials. Then, it runs millions of simulations, filling different voxels with different materials. Eventually, it lands on the optimal placement of each material in each voxel to generate two different images at two different angles. A custom 3-D printer then fabricates the actuator by dropping the right material into the right voxel, layer by layer.

“Our ultimate goal is to automatically find an optimal design for any problem, and then use the output of our optimized design to fabricate it,” says first author Subramanian Sundaram PhD ’18, a former graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We go from selecting the printing materials, to finding the optimal design, to fabricating the final product in almost a completely automated way.”

The shifting images demonstrate what the system can do. But actuators optimized for appearance and function could also be used for biomimicry in robotics. For instance, other researchers are designing underwater robotic skins with actuator arrays meant to mimic denticles on shark skin. Denticles collectively deform to decrease drag for faster, quieter swimming. “You can imagine underwater robots having whole arrays of actuators coating the surface of their skins, which can be optimized for drag and turning efficiently, and so on,” Sundaram says.

Joining Sundaram on the paper are: Melina Skouras, a former MIT postdoc; David S. Kim, a former researcher in the Computational Fabrication Group; Louise van den Heuvel ’14, SM ’16; and Wojciech Matusik, an MIT associate professor in electrical engineering and computer science and head of the Computational Fabrication Group.

Navigating the “combinatorial explosion”

Robotic actuators today are becoming increasingly complex. Depending on the application, they must be optimized for weight, efficiency, appearance, flexibility, power consumption, and various other functions and performance metrics. Generally, experts manually calculate all those parameters to find an optimal design.

Adding to that complexity, new 3-D-printing techniques can now use multiple materials to create one product. That means the design’s dimensionality becomes incredibly high. “What you’re left with is what’s called a ‘combinatorial explosion,’ where you essentially have so many combinations of materials and properties that you don’t have a chance to evaluate every combination to create an optimal structure,” Sundaram says.

In their work, the researchers first customized three polymer materials with specific properties they needed to build their actuators: color, magnetization, and rigidity. In the end, they produced a near-transparent rigid material, an opaque flexible material used as a hinge, and a brown nanoparticle material that responds to a magnetic signal. They plugged all that characterization data into a property library.

The system takes as input grayscale image examples — such as the flat actuator that displays the Van Gogh portrait but tilts at an exact angle to show “The Scream.” It basically executes a complex form of trial and error that’s somewhat like rearranging a Rubik’s Cube, but in this case around 5.5 million voxels are iteratively reconfigured to match an image and meet a measured angle.

Initially, the system draws from the property library to randomly assign different materials to different voxels. Then, it runs a simulation to see if that arrangement portrays the two target images, straight on and at an angle. If not, it gets an error signal. That signal lets it know which voxels are on the mark and which should be changed. Adding, removing, and shifting around brown magnetic voxels, for instance, will change the actuator’s angle when a magnetic field is applied. But, the system also has to consider how aligning those brown voxels will affect the image.

Voxel by voxel

To compute the actuator’s appearances at each iteration, the researchers adopted a computer graphics technique called “ray-tracing,” which simulates the path of light interacting with objects. Simulated light beams shoot through the actuator at each column of voxels. Actuators can be fabricated with more than 100 voxel layers. Columns can contain more than 100 voxels, with different sequences of the materials that radiate a different shade of gray when flat or at an angle.

When the actuator is flat, for instance, the light beam may shine down on a column containing many brown voxels, producing a dark tone. But when the actuator tilts, the beam will shine on misaligned voxels. Brown voxels may shift away from the beam, while more clear voxels may shift into the beam, producing a lighter tone. The system uses that technique to align dark and light voxel columns where they need to be in the flat and angled image. After 100 million or more iterations, and anywhere from a few to dozens of hours, the system will find an arrangement that fits the target images.

“We’re comparing what that [voxel column] looks like when it’s flat or when it’s tilted, to match the target images,” Sundaram says. “If not, you can swap, say, a clear voxel with a brown one. If that’s an improvement, we keep this new suggestion and make other changes over and over again.”
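The accept-if-better loop described above can be sketched in miniature. The toy below is an assumption-laden stand-in for the authors' optimizer, not their actual system: a small grid of binary voxels, a column-sum "render" in place of ray-tracing, a shifted render standing in for tilting, and random single-voxel flips kept only when they reduce the error against the two target images.

```python
import random

# Toy voxel-placement search in the spirit of the system described above.
# Each voxel is 0 (clear) or 1 (brown); a column's shade is the number of
# brown voxels a vertical beam passes through. Tilting is mimicked by
# shifting deeper rows sideways before summing. All targets are invented.

ROWS, COLS = 4, 8

def render(grid, tilt=0):
    """Shade of each column = brown voxels along the (possibly tilted) beam."""
    shades = [0] * COLS
    for r, row in enumerate(grid):
        for c, v in enumerate(row):
            shades[(c + tilt * r) % COLS] += v   # tilting shifts deeper rows
    return shades

def error(grid, target_flat, target_tilt):
    flat, tilted = render(grid, 0), render(grid, 1)
    return (sum(abs(a - b) for a, b in zip(flat, target_flat))
            + sum(abs(a - b) for a, b in zip(tilted, target_tilt)))

random.seed(0)
grid = [[random.randint(0, 1) for _ in range(COLS)] for _ in range(ROWS)]
target_flat = [2, 2, 2, 2, 0, 0, 0, 0]   # dark left half when flat
target_tilt = [0, 0, 0, 0, 2, 2, 2, 2]   # dark right half when tilted

initial = best = error(grid, target_flat, target_tilt)
for _ in range(20000):
    r, c = random.randrange(ROWS), random.randrange(COLS)
    grid[r][c] ^= 1                              # propose flipping one voxel
    e = error(grid, target_flat, target_tilt)
    if e <= best:
        best = e                                 # keep improvements
    else:
        grid[r][c] ^= 1                          # revert regressions
print("error:", initial, "->", best)
```

The real system does this over roughly 5.5 million three-material voxels with a full ray-tracing simulation per iteration, which is why it can take from a few hours to dozens of hours to converge.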

To fabricate the actuators, the researchers built a custom 3-D printer that uses a technique called “drop-on-demand.” Tubs of the three materials are connected to print heads with hundreds of nozzles that can be individually controlled. The printer fires a 30-micron-sized droplet of the designated material into its respective voxel location. Once the droplet lands on the substrate, it’s solidified. In that way, the printer builds an object, layer by layer.

The work could be used as a stepping stone for designing larger structures, such as airplane wings, Sundaram says. Researchers, for instance, have similarly started breaking down airplane wings into smaller voxel-like blocks to optimize their designs for weight, lift, and other metrics. “We’re not yet able to print wings or anything on that scale, or with those materials. But I think this is a first step toward that goal,” Sundaram says.

Materials provided by Massachusetts Institute of Technology

Artificial “muscles” achieve powerful pulling force

As a cucumber plant grows, it sprouts tightly coiled tendrils that seek out supports in order to pull the plant upward. This ensures the plant receives as much sunlight exposure as possible. Now, researchers at MIT have found a way to imitate this coiling-and-pulling mechanism to produce contracting fibers that could be used as artificial muscles for robots, prosthetic limbs, or other mechanical and biomedical applications.

While many different approaches have been used for creating artificial muscles, including hydraulic systems, servo motors, shape-memory metals, and polymers that respond to stimuli, they all have limitations, including high weight or slow response times. The new fiber-based system, by contrast, is extremely lightweight and can respond very quickly, the researchers say. The findings are being reported today in the journal Science.

The new fibers were developed by MIT postdoc Mehmet Kanik and MIT graduate student Sirma Örgüç, working with professors Polina Anikeeva, Yoel Fink, Anantha Chandrakasan, and C. Cem Taşan, and five others, using a fiber-drawing technique to combine two dissimilar polymers into a single strand of fiber.

The key to the process is mating together two materials that have very different thermal expansion coefficients — meaning they have different rates of expansion when they are heated. This is the same principle used in many thermostats, for example, using a bimetallic strip as a way of measuring temperature. As the joined material heats up, the side that wants to expand faster is held back by the other material. As a result, the bonded material curls up, bending toward the side that is expanding more slowly.
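The bending principle can be put in rough numbers. For the simplified case of two bonded layers of equal thickness and equal stiffness (the fiber's actual polymers differ in stiffness, so this is only an order-of-magnitude sketch), the classic bimetal-strip result gives a curvature of 3·(α2 − α1)·ΔT / (2h), where α1 and α2 are the thermal expansion coefficients and h is the total thickness. The material values below are invented for illustration, not measured properties of the fiber.

```python
# Back-of-the-envelope bimorph bending estimate. Assumes equal layer
# thicknesses and equal moduli (a simplification); all coefficients are
# invented, not the fiber's actual material properties.

def curvature(a1, a2, dT, h):
    """Curvature (1/m) of an equal-thickness, equal-modulus bimorph."""
    return 3.0 * (a2 - a1) * dT / (2.0 * h)

a_elastomer = 300e-6   # 1/K, assumed high-expansion elastomer
a_pe = 100e-6          # 1/K, assumed stiffer polyethylene
h = 100e-6             # 100-micrometer total fiber thickness

kappa = curvature(a_pe, a_elastomer, dT=1.0, h=h)   # 1-degree C rise
print(round(1.0 / kappa, 3))                         # bend radius in meters
```

Even a 1-degree rise produces a visible bend at this scale, which is consistent with the low activation temperatures the researchers report below.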

Using two different polymers bonded together, a very stretchable cyclic copolymer elastomer and a much stiffer thermoplastic polyethylene, Kanik, Örgüç and colleagues produced a fiber that, when stretched out to several times its original length, naturally forms itself into a tight coil, very similar to the tendrils that cucumbers produce. What happened next came as a surprise to the researchers. “There was a lot of serendipity in this,” Anikeeva recalls.

As soon as Kanik picked up the coiled fiber for the first time, the warmth of his hand alone caused the fiber to curl up more tightly. Following up on that observation, he found that even a small increase in temperature could make the coil tighten up, producing a surprisingly strong pulling force. Then, as soon as the temperature went back down, the fiber returned to its original length. In later testing, the team showed that this process of contracting and expanding could be repeated 10,000 times “and it was still going strong,” Anikeeva says.

One of the reasons for that longevity, she says, is that “everything is operating under very moderate conditions,” including low activation temperatures. Just a 1-degree Celsius increase can be enough to start the fiber contraction.

The fibers can span a wide range of sizes, from a few micrometers (millionths of a meter) to a few millimeters (thousandths of a meter) in width, and can easily be manufactured in batches up to hundreds of meters long. Tests have shown that a single fiber is capable of lifting loads of up to 650 times its own weight. For these experiments on individual fibers, Örgüç and Kanik have developed dedicated, miniaturized testing setups.

The degree of tightening that occurs when the fiber is heated can be “programmed” by determining how much of an initial stretch to give the fiber. This allows the material to be tuned to exactly the amount of force needed and the amount of temperature change needed to trigger that force.

The fibers are made using a fiber-drawing system, which makes it possible to incorporate other components into the fiber itself. Fiber drawing is done by creating an oversized version of the material, called a preform, which is then heated to a specific temperature at which the material becomes viscous. It can then be pulled, much like pulling taffy, to create a fiber that retains its internal structure but is a small fraction of the width of the preform.

For testing purposes, the researchers coated the fibers with meshes of conductive nanowires. These meshes can be used as sensors to reveal the exact tension experienced or exerted by the fiber. In the future, these fibers could also include heating elements such as optical fibers or electrodes, providing a way of heating the fibers internally without having to rely on any outside heat source to activate the contraction of the “muscle.”

Such fibers could find uses as actuators in robotic arms, legs, or grippers, and in prosthetic limbs, where their slight weight and fast response times could provide a significant advantage.

Some prosthetic limbs today can weigh as much as 30 pounds, with much of the weight coming from actuators, which are often pneumatic or hydraulic; lighter-weight actuators could thus make life much easier for those who use prosthetics. “Such fibers might also find uses in tiny biomedical devices, such as a medical robot that works by going into an artery and then being activated,” Anikeeva suggests. “We have activation times on the order of tens of milliseconds to seconds,” depending on the dimensions, she says.

To provide greater strength for lifting heavier loads, the fibers can be bundled together, much as muscle fibers are bundled in the body. The team successfully tested bundles of 100 fibers. Through the fiber drawing process, sensors could also be incorporated in the fibers to provide feedback on conditions they encounter, such as in a prosthetic limb. Örgüç says bundled muscle fibers with a closed-loop feedback mechanism could find applications in robotic systems where automated and precise control are required.
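One way to picture the closed-loop idea Örgüç describes: a tension sensor on the bundle feeds a controller that modulates heating power to hold a target pull. The sketch below uses a simple proportional controller; all of the names, gains, and limits are hypothetical illustrations, not the team's actual design.

```python
# Hypothetical sketch of closed-loop control for a heated fiber bundle:
# a proportional controller that raises heating power when sensed tension
# falls below target.  All values here are assumptions for illustration.

def heater_power(target_tension_n, sensed_tension_n, gain=2.0, max_power_w=1.0):
    """Proportional control: apply more heat when tension is below target.

    Returns the commanded heater power in watts, clamped to [0, max_power_w].
    """
    error = target_tension_n - sensed_tension_n  # positive when pulling too weakly
    power = gain * error
    return min(max(power, 0.0), max_power_w)  # heater can't cool or exceed its rating

# Under target -> heat; over target -> let the bundle cool and relax.
print(heater_power(0.5, 0.3))  # 0.4 (watts)
print(heater_power(0.5, 0.9))  # 0.0
```

In a real system the sensed tension would come from the nanowire meshes described above, and the fast thermal response of thin fibers is what would make such a loop practical.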

Kanik says that the possibilities for materials of this type are virtually limitless because almost any combination of two materials with different thermal expansion rates could work, leaving a vast realm of possible combinations to explore. He adds that this new finding was like opening a new window, only to see “a bunch of other windows” waiting to be opened.

“The strength of this work is coming from its simplicity,” he says.

The team also included MIT graduate student Georgios Varnavides, postdoc Jinwoo Kim, and undergraduate students Thomas Benavides, Dani Gonzalez, and Timothy Akintilo.

Materials provided by Massachusetts Institute of Technology