More stories

  • Why boiling droplets can race across hot oily surfaces

    When you’re frying something in a skillet and some droplets of water fall into the pan, you may have noticed those droplets skittering around on top of the film of hot oil. Now, that seemingly trivial phenomenon has been analyzed and understood for the first time by researchers at MIT — and may have important implications for microfluidic devices, heat transfer systems, and other useful functions.

    A droplet of boiling water on a hot surface will sometimes levitate on a thin vapor film, a well-studied phenomenon called the Leidenfrost effect. Because it is suspended on a cushion of vapor, the droplet can move across the surface with little friction. If the surface is coated with hot oil, which has much greater friction than the vapor film under a Leidenfrost droplet, the droplet should be expected to move much more slowly. But, counterintuitively, a series of experiments at MIT has shown that the opposite happens: the droplet on oil zooms away much more rapidly than on bare metal.

    This effect, which propels droplets across a heated oily surface 10 to 100 times faster than on bare metal, could potentially be used for self-cleaning or de-icing systems, or to propel tiny amounts of liquid through the tiny tubing of microfluidic devices used for biomedical and chemical research and testing. The findings are described today in a paper in the journal Physical Review Letters, written by graduate student Victor Julio Leon and professor of mechanical engineering Kripa Varanasi.

    In previous research, Varanasi and his team showed that it would be possible to harness this phenomenon for some of these potential applications. The new work, which produces velocities approximately 50 times faster, could open up even more new uses, Varanasi says.

    After long and painstaking analysis, Leon and Varanasi were able to determine the reason for the rapid ejection of these droplets from the hot surface. Under the right conditions of high temperature, oil viscosity, and oil thickness, the oil will form a kind of thin cloak coating the outside of each water droplet. As the droplet heats up, tiny bubbles of vapor form along the interface between the droplet and the oil. Because these minuscule bubbles accumulate randomly along the droplet’s base, asymmetries develop, and the lowered friction under the bubble loosens the droplet’s attachment to the surface and propels it away.

    The oily film acts almost like the rubber of a balloon, and when the tiny vapor bubbles burst through, they impart a force and “the balloon just flies off because the air is going out one side, creating a momentum transfer,” Varanasi says. Without the oil cloak, the vapor bubbles would just flow out of the droplet in all directions, preventing self-propulsion, but the cloaking effect holds them in like the skin of the balloon.

    The phenomenon sounds simple, but it turns out to depend on a complex interplay between events happening at different timescales.

    This newly analyzed self-ejection phenomenon depends on a number of factors, including the droplet size, the thickness and viscosity of the oil film, the thermal conductivity of the surface, the surface tension of the different liquids in the system, the type of oil, and the texture of the surface.

    In their experiments, even the least viscous of the several oils they tested was about 100 times more viscous than the surrounding air, so the droplets would have been expected to move much more slowly than on the air cushion of the Leidenfrost effect. “That gives an idea of how surprising it is that this droplet is moving faster,” Leon says.

    As boiling starts, bubbles will randomly form at some nucleation site that is not right at the center of the droplet’s base. Bubble formation then increases on that side, propelling the droplet off in one direction. So far, the researchers have not been able to control the direction of that randomly induced propulsion, but they are now working on possible ways to control the directionality in the future. “We have ideas of how to trigger the propulsion in controlled directions,” Leon says.

    Remarkably, the tests showed that even though the oil film on the surface (a silicon wafer) was only 10 to 100 microns thick — about the thickness of a human hair — its behavior didn’t match the equations for a thin film. Instead, because of the vaporization, the film was actually behaving like an infinitely deep pool of oil. “We were kind of astounded” by that finding, Leon says. While a thin film should have caused the droplet to stick, the virtually infinite pool gave the droplet much lower friction, allowing it to move more rapidly than expected, Leon says.

    The effect depends on the fact that the formation of the tiny bubbles is a much more rapid process than the transfer of heat through the oil film, about a thousand times faster, leaving plenty of time for the asymmetries within the droplet to accumulate. When the bubbles of vapor initially form at the oil-water interface, they are much more insulating than the liquid of the droplet, leading to significant thermal disturbances in the oil film. These disturbances cause the droplet to vibrate, reducing friction and increasing the vaporization rate.

    It took extreme high-speed photography to reveal the details of this rapid effect, Leon says, using a 100,000 frames per second video camera. “You can actually see the fluctuations on the surface,” Leon says.

    Initially, Varanasi says, “we were stumped at multiple levels as to what was going on, because the effect was so unexpected. … It’s a fairly complex answer to what may look seemingly simple, but it really creates this fast propulsion.”

    In practice, the effect means that in certain situations, simply heating a surface by the right amount, with the right kind of oily coating, could cause corrosive, scale-forming droplets to be cleared away. Further down the line, once the researchers have more control over directionality, the system could potentially substitute for some high-tech pumps in microfluidic devices to propel droplets through the right tubes at the right time. This might be especially useful in microgravity situations, where ordinary pumps don’t function as usual.

    It may also be possible to attach a payload to the droplets, creating a kind of microscale robotic delivery system, Varanasi says. And while their tests focused on water droplets, potentially it could apply to many different kinds of liquids and sublimating solids, he says.

    The work was supported by the National Science Foundation.

  • Using aluminum and water to make clean hydrogen fuel — when and where it’s needed

    As the world works to move away from fossil fuels, many researchers are investigating whether clean hydrogen fuel can play an expanded role in sectors from transportation and industry to buildings and power generation. It could be used in fuel cell vehicles, heat-producing boilers, electricity-generating gas turbines, systems for storing renewable energy, and more.

    But while using hydrogen doesn’t generate carbon emissions, making it typically does. Today, almost all hydrogen is produced using fossil fuel-based processes that together generate more than 2 percent of all global greenhouse gas emissions. In addition, hydrogen is often produced in one location and consumed in another, which means its use also presents logistical challenges.

    A promising reaction

    Another option for producing hydrogen comes from a perhaps surprising source: reacting aluminum with water. Aluminum metal will readily react with water at room temperature to form aluminum hydroxide and hydrogen. That reaction doesn’t typically take place because a layer of aluminum oxide naturally coats the raw metal, preventing it from coming directly into contact with water.
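
    For reference, the reaction described above balances as follows (standard chemistry, included here for context rather than drawn from the study):

    ```latex
    2\,\mathrm{Al} + 6\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{Al(OH)_3} + 3\,\mathrm{H_2}
    ```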

    Using the aluminum-water reaction to generate hydrogen doesn’t produce any greenhouse gas emissions, and it promises to solve the transportation problem for any location with available water. Simply move the aluminum and then react it with water on-site. “Fundamentally, the aluminum becomes a mechanism for storing hydrogen — and a very effective one,” says Douglas P. Hart, professor of mechanical engineering at MIT. “Using aluminum as our source, we can ‘store’ hydrogen at a density that’s 10 times greater than if we just store it as a compressed gas.”
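
    A rough back-of-the-envelope check of that storage-density claim is sketched below, using textbook reference values rather than figures from the study; whether the advantage works out to roughly a factor of 10 depends on the compressed-gas pressure chosen for comparison.

    ```python
    # Back-of-the-envelope check of the storage-density comparison quoted above.
    # All reference values are textbook figures, not numbers from the MIT study.
    M_AL = 26.98            # g/mol, molar mass of aluminum
    M_H2 = 2.016            # g/mol, molar mass of molecular hydrogen
    RHO_AL = 2.70           # kg/L, density of solid aluminum
    RHO_H2_GAS = {350: 0.024, 700: 0.042}   # kg/L, compressed H2 at ~350 and ~700 bar

    # 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2, i.e. 1.5 mol H2 per mol Al
    h2_per_kg_al = 1.5 * M_H2 / M_AL            # ~0.11 kg H2 per kg Al
    h2_per_liter_al = h2_per_kg_al * RHO_AL     # ~0.30 kg H2 per liter of Al

    print(f"H2 yield: {h2_per_kg_al:.2f} kg per kg of aluminum")
    for bar, rho in RHO_H2_GAS.items():
        print(f"vs. {bar}-bar gas: {h2_per_liter_al / rho:.0f}x the volumetric density")
    ```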

    Two problems have kept aluminum from being employed as a safe, economical source for hydrogen generation. The first problem is ensuring that the aluminum surface is clean and available to react with water. To that end, a practical system must include a means of first modifying the oxide layer and then keeping it from re-forming as the reaction proceeds.

    The second problem is that pure aluminum is energy-intensive to mine and produce, so any practical approach needs to use scrap aluminum from various sources. But scrap aluminum is not an easy starting material. It typically occurs in an alloyed form, meaning that it contains other elements that are added to change the properties or characteristics of the aluminum for different uses. For example, adding magnesium increases strength and corrosion resistance, adding silicon lowers the melting point, and adding a little of both makes an alloy that’s moderately strong and corrosion-resistant.

    Despite considerable research on aluminum as a source of hydrogen, two key questions remain: What’s the best way to prevent the adherence of an oxide layer on the aluminum surface, and how do alloying elements in a piece of scrap aluminum affect the total amount of hydrogen generated and the rate at which it is generated?

    “If we’re going to use scrap aluminum for hydrogen generation in a practical application, we need to be able to better predict what hydrogen generation characteristics we’re going to observe from the aluminum-water reaction,” says Laureen Meroueh PhD ’20, who earned her doctorate in mechanical engineering.

    Since the fundamental steps in the reaction aren’t well understood, it’s been hard to predict the rate and volume at which hydrogen forms from scrap aluminum, which can contain varying types and concentrations of alloying elements. So Hart, Meroueh, and Thomas W. Eagar, a professor of materials engineering and engineering management in the MIT Department of Materials Science and Engineering, decided to examine — in a systematic fashion — the impacts of those alloying elements on the aluminum-water reaction and on a promising technique for preventing the formation of the interfering oxide layer.

    To prepare, they had experts at Novelis Inc. fabricate samples of pure aluminum and of specific aluminum alloys made of commercially pure aluminum combined with either 0.6 percent silicon (by weight), 1 percent magnesium, or both — compositions that are typical of scrap aluminum from a variety of sources. Using those samples, the MIT researchers performed a series of tests to explore different aspects of the aluminum-water reaction.

    Pre-treating the aluminum

    The first step was to demonstrate an effective means of penetrating the oxide layer that forms on aluminum in the air. Solid aluminum is made up of tiny grains that are packed together with occasional boundaries where they don’t line up perfectly. To maximize hydrogen production, researchers would need to prevent the formation of the oxide layer on all those interior grain surfaces.

    Research groups have already tried various ways of keeping the aluminum grains “activated” for reaction with water. Some have crushed scrap samples into particles so tiny that the oxide layer doesn’t adhere. But aluminum powders are dangerous, as they can react with humidity and explode. Another approach calls for grinding up scrap samples and adding liquid metals to prevent oxide deposition. But grinding is a costly and energy-intensive process.

    To Hart, Meroueh, and Eagar, the most promising approach — first introduced by Jonathan Slocum ScD ’18 while he was working in Hart’s research group — involved pre-treating the solid aluminum by painting liquid metals on top and allowing them to permeate through the grain boundaries.

    To determine the effectiveness of that approach, the researchers needed to confirm that the liquid metals would reach the internal grain surfaces, with and without alloying elements present. And they had to establish how long it would take for the liquid metal to coat all of the grains in pure aluminum and its alloys.

    They started by combining two metals — gallium and indium — in specific proportions to create a “eutectic” mixture; that is, a mixture that would remain in liquid form at room temperature. They coated their samples with the eutectic and allowed it to penetrate for time periods ranging from 48 to 96 hours. They then exposed the samples to water and monitored the hydrogen yield (the amount formed) and flow rate for 250 minutes. After 48 hours, they also took high-magnification scanning electron microscope (SEM) images so they could observe the boundaries between adjacent aluminum grains.

    Based on the hydrogen yield measurements and the SEM images, the MIT team concluded that the gallium-indium eutectic does naturally permeate and reach the interior grain surfaces. However, the rate and extent of penetration vary with the alloy. The permeation rate was the same in silicon-doped aluminum samples as in pure aluminum samples but slower in magnesium-doped samples.

    Perhaps most interesting were the results from samples doped with both silicon and magnesium — an aluminum alloy often found in recycling streams. Silicon and magnesium chemically bond to form magnesium silicide, which occurs as solid deposits on the internal grain surfaces. Meroueh hypothesized that when both silicon and magnesium are present in scrap aluminum, those deposits can act as barriers that impede the flow of the gallium-indium eutectic.

    The experiments and images confirmed her hypothesis: The solid deposits did act as barriers, and images of samples pre-treated for 48 hours showed that permeation wasn’t complete. Clearly, a lengthy pre-treatment period would be critical for maximizing the hydrogen yield from scraps of aluminum containing both silicon and magnesium.

    Meroueh cites several benefits to the process they used. “You don’t have to apply any energy for the gallium-indium eutectic to work its magic on aluminum and get rid of that oxide layer,” she says. “Once you’ve activated your aluminum, you can drop it in water, and it’ll generate hydrogen — no energy input required.” Even better, the eutectic doesn’t chemically react with the aluminum. “It just physically moves around in between the grains,” she says. “At the end of the process, I could recover all of the gallium and indium I put in and use it again” — a valuable feature as gallium and (especially) indium are costly and in relatively short supply.

    Impacts of alloying elements on hydrogen generation

    The researchers next investigated how the presence of alloying elements affects hydrogen generation. They tested samples that had been treated with the eutectic for 96 hours; by then, the hydrogen yield and flow rates had leveled off in all the samples.

    The presence of 0.6 percent silicon increased the hydrogen yield for a given weight of aluminum by 20 percent compared to pure aluminum — even though the silicon-containing sample had less aluminum than the pure aluminum sample. In contrast, the presence of 1 percent magnesium produced far less hydrogen, while adding both silicon and magnesium pushed the yield up, but not to the level of pure aluminum.

    The presence of silicon also greatly accelerated the reaction rate, producing a far higher peak in the flow rate but cutting short the duration of hydrogen output. The presence of magnesium produced a lower flow rate but allowed the hydrogen output to remain fairly steady over time. And once again, aluminum with both alloying elements produced a flow rate between that of magnesium-doped and pure aluminum.

    Those results provide practical guidance on how to adjust the hydrogen output to match the operating needs of a hydrogen-consuming device. If the starting material is commercially pure aluminum, adding small amounts of carefully selected alloying elements can tailor the hydrogen yield and flow rate. If the starting material is scrap aluminum, careful choice of the source can be key. For high, brief bursts of hydrogen, pieces of silicon-containing aluminum from an auto junkyard could work well. For lower but longer flows, magnesium-containing scraps from the frame of a demolished building might be better. For results somewhere in between, aluminum containing both silicon and magnesium should work well; such material is abundantly available from scrapped cars and motorcycles, yachts, bicycle frames, and even smartphone cases.

    It should also be possible to combine scraps of different aluminum alloys to tune the outcome, notes Meroueh. “If I have a sample of activated aluminum that contains just silicon and another sample that contains just magnesium, I can put them both into a container of water and let them react,” she says. “So I get the fast ramp-up in hydrogen production from the silicon and then the magnesium takes over and has that steady output.”

    Another opportunity for tuning: Reducing grain size

    Another practical way to affect hydrogen production could be to reduce the size of the aluminum grains — a change that should increase the total surface area available for reactions to occur.

    To investigate that approach, the researchers requested specially customized samples from their supplier. Using standard industrial procedures, the Novelis experts first fed each sample through two rollers, squeezing it from the top and bottom so that the internal grains were flattened. They then heated each sample until the long, flat grains had reorganized and shrunk to a targeted size.

    In a series of carefully designed experiments, the MIT team found that reducing the grain size increased the efficiency and decreased the duration of the reaction to varying degrees in the different samples. Again, the presence of particular alloying elements had a major effect on the outcome.

    Needed: A revised theory that explains observations

    Throughout their experiments, the researchers encountered some unexpected results. For example, standard corrosion theory predicts that pure aluminum will generate more hydrogen than silicon-doped aluminum will — the opposite of what they observed in their experiments.

    To shed light on the underlying chemical reactions, Hart, Meroueh, and Eagar investigated hydrogen “flux,” that is, the volume of hydrogen generated over time on each square centimeter of aluminum surface, including the interior grains. They examined three grain sizes for each of their four compositions and collected thousands of data points measuring hydrogen flux.
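
    As a purely illustrative sketch of how such a flux can be computed from raw measurements (the time series and surface area below are hypothetical, not the team’s data):

    ```python
    import numpy as np

    # Hypothetical illustration of the "hydrogen flux" metric described above:
    # the volume of hydrogen generated per unit time per square centimeter of
    # reactive aluminum surface. All numbers below are made up for demonstration.
    time_min = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 80.0])         # minutes after immersion
    h2_volume_ml = np.array([0.0, 12.0, 30.0, 70.0, 120.0, 160.0])  # cumulative H2 collected (mL)
    surface_area_cm2 = 25.0   # estimated reactive surface area, including interior grains

    flux = np.gradient(h2_volume_ml, time_min) / surface_area_cm2   # mL / (min * cm^2)
    i_peak = int(np.argmax(flux))
    print(f"Peak flux of {flux[i_peak]:.2f} mL/min/cm^2 at t = {time_min[i_peak]:.0f} min")
    ```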

    Their results show that reducing grain size has significant effects. It increases the peak hydrogen flux from silicon-doped aluminum as much as 100 times and from the other three compositions by 10 times. With both pure aluminum and silicon-containing aluminum, reducing grain size also decreases the delay before the peak flux and increases the rate of decline afterward. With magnesium-containing aluminum, reducing the grain size brings about an increase in peak hydrogen flux and results in a slightly faster decline in the rate of hydrogen output. With both silicon and magnesium present, the hydrogen flux over time resembles that of magnesium-containing aluminum when the grain size is not manipulated. When the grain size is reduced, the hydrogen output characteristics begin to resemble behavior observed in silicon-containing aluminum. That outcome was unexpected because when silicon and magnesium are both present, they react to form magnesium silicide, resulting in a new type of aluminum alloy with its own properties.

    The researchers stress the benefits of developing a better fundamental understanding of the underlying chemical reactions involved. In addition to guiding the design of practical systems, it might help them find a replacement for the expensive indium in their pre-treatment mixture. Other work has shown that gallium will naturally permeate through the grain boundaries of aluminum. “At this point, we know that the indium in our eutectic is important, but we don’t really understand what it does, so we don’t know how to replace it,” says Hart.

    But already Hart, Meroueh, and Eagar have demonstrated two practical ways of tuning the hydrogen reaction rate: by adding certain elements to the aluminum and by manipulating the size of the interior aluminum grains. In combination, those approaches can deliver significant results. “If you go from magnesium-containing aluminum with the largest grain size to silicon-containing aluminum with the smallest grain size, you get a hydrogen reaction rate that differs by two orders of magnitude,” says Meroueh. “That’s huge if you’re trying to design a real system that would use this reaction.”

    This research was supported through the MIT Energy Initiative by ExxonMobil-MIT Energy Fellowships awarded to Laureen Meroueh PhD ’20 from 2018 to 2020.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Global warming begets more warming, new paleoclimate study finds

    It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans’ addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth’s ancient history suggests that today’s planet may become more volatile as it continues to warm.

    The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth’s climate showed a surprising “warming bias.” In other words, there were far more warming events — periods of prolonged global warming, lasting thousands to tens of thousands of years — than cooling events. What’s more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.

    The researchers say a possible explanation for this warming bias may lie in a “multiplier effect,” whereby a modest degree of warming — for instance from volcanoes releasing carbon dioxide into the atmosphere — naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.

    Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It’s unclear what effect the ice has had on the Earth’s response to climate shifts. But as today’s Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.

    “The Northern Hemisphere’s ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions,” says the study’s lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Our research suggests that this may make the Earth’s climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past.”

    Arnscheidt’s study co-author is Daniel Rothman, professor of geophysics at MIT and co-founder and co-director of MIT’s Lorenz Center.

    A volatile push

    For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera — single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by the ocean temperatures as organisms are growing; the shells are therefore considered a reliable proxy for the Earth’s ancient temperatures.

    For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth’s temperature has fluctuated over millions of years. 

    “When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming,” Arnscheidt says. “Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones.”

    The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn’t resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a noticeably longer tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.

    “This indicates there’s some sort of amplification relative to what you would otherwise have expected,” Arnscheidt says. “Everything’s pointing to something fundamental that’s causing this push, or bias toward warming events.”

    “It’s fair to say that the Earth system becomes more volatile, in a warming sense,” Rothman adds.

    A warming multiplier

    The team wondered whether this warming bias might have been a result of “multiplicative noise” in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.

    In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.
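
    As a toy illustration of that idea (not the authors’ model), the sketch below compares a simple relaxation process driven by additive noise with one whose noise amplitude grows during warm excursions; only the latter develops the warm-skewed fluctuations described above.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def simulate(multiplicative, n_steps=500_000, dt=0.01, tau=1.0, sigma=0.2, beta=2.0):
        """Toy temperature-anomaly process: relaxation toward zero plus noise.
        If `multiplicative` is True, the noise amplitude grows during warm
        excursions (an exaggerated stand-in for the 'multiplier effect'
        described in the article); otherwise the noise is purely additive."""
        noise = rng.standard_normal(n_steps) * np.sqrt(dt)
        T, out = 0.0, np.empty(n_steps)
        for i in range(n_steps):
            amp = sigma * (1.0 + beta * max(T, 0.0)) if multiplicative else sigma
            T += -(T / tau) * dt + amp * noise[i]
            out[i] = T
        return out

    for label, mult in [("additive noise      ", False), ("multiplicative noise", True)]:
        x = simulate(mult)
        skew = np.mean((x - x.mean()) ** 3) / np.std(x) ** 3
        print(f"{label}: skewness of fluctuations = {skew:+.2f}")
    ```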

    In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it’s very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.

    As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth’s orbit. Over hundreds of thousands of years, Earth’s orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth’s orbit could have wrought on its own.

    So, Arnscheidt and Rothman incorporated the Earth’s orbital changes into the multiplicative model and their analysis of Earth’s temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth’s orbit.

    “Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate,” Rothman says. “But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes.”

    “Humans are forcing the system in a new way,” Arnscheidt adds. “And this study is showing that, when we increase temperature, we’re likely going to interact with these natural, amplifying effects.”

    This research was supported, in part, by MIT’s School of Science.

  • Electrifying cars and light trucks to meet Paris climate goals

    On Aug. 5, the White House announced that it seeks to ensure that 50 percent of all new passenger vehicles sold in the United States by 2030 are powered by electricity. The purpose of this target is to enable the U.S. to remain competitive with China in the growing electric vehicle (EV) market and meet its international climate commitments. Setting ambitious EV sales targets and transitioning to zero-carbon power sources in the United States and other nations could lead to significant reductions in carbon dioxide and other greenhouse gas emissions in the transportation sector and move the world closer to achieving the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius relative to preindustrial levels.

    At this time, electrification of the transportation sector is occurring primarily in private light-duty vehicles (LDVs). In 2020, the global EV fleet exceeded 10 million, but that’s a tiny fraction of the cars and light trucks on the road. How much of the LDV fleet will need to go electric to keep the Paris climate goal in play? 

    To help answer that question, researchers at the MIT Joint Program on the Science and Policy of Global Change and MIT Energy Initiative have assessed the potential impacts of global efforts to reduce carbon dioxide emissions on the evolution of LDV fleets over the next three decades.

    Using an enhanced version of the multi-region, multi-sector MIT Economic Projection and Policy Analysis (EPPA) model that includes a representation of the household transportation sector, they projected changes for the 2020-50 period in LDV fleet composition, carbon dioxide emissions, and related impacts for 18 different regions. Projections were generated under four increasingly ambitious climate mitigation scenarios: a “Reference” scenario based on current market trends and fuel efficiency policies, a “Paris Forever” scenario in which current Paris Agreement commitments (Nationally Determined Contributions, or NDCs) are maintained but not strengthened after 2030, a “Paris to 2 C” scenario in which decarbonization actions are enhanced to be consistent with capping global warming at 2 C, and an “Accelerated Actions” scenario that caps global warming at 1.5 C through much more aggressive emissions targets than the current NDCs.

    Based on projections spanning the first three scenarios, the researchers found that the global EV fleet will likely grow to about 95-105 million EVs by 2030, and 585-823 million EVs by 2050. In the Accelerated Actions scenario, global EV stock reaches more than 200 million vehicles in 2030, and more than 1 billion in 2050, accounting for two-thirds of the global LDV fleet. The research team also determined that EV uptake will likely grow but vary across regions over the 30-year study time frame, with China, the United States, and Europe remaining the largest markets. Finally, the researchers found that while EVs play a role in reducing oil use, a more substantial reduction in oil consumption comes from economy-wide carbon pricing. The results appear in a study in the journal Economics of Energy & Environmental Policy.

    “Our study shows that EVs can contribute significantly to reducing global carbon emissions at a manageable cost,” says MIT Joint Program Deputy Director and MIT Energy Initiative Senior Research Scientist Sergey Paltsev, the lead author. “We hope that our findings will help decision-makers to design efficient pathways to reduce emissions.”  

    To boost the EV share of the global LDV fleet, the study’s co-authors recommend more ambitious policies to mitigate climate change and decarbonize the electric grid. They also envision an “integrated system approach” to transportation that emphasizes making internal combustion engine vehicles more efficient, a long-term shift to low- and net-zero carbon fuels, and systemic efficiency improvements through digitalization, smart pricing, and multi-modal integration. While the study focuses on EV deployment, the authors also stress the need for investment in all possible decarbonization options related to transportation, including enhancing public transportation, avoiding urban sprawl through strategic land-use planning, and reducing the use of private motorized transport by mode switching to walking, biking, and mass transit.

    This research is an extension of the authors’ contribution to the MIT Mobility of the Future study.

  • Using graphene foam to filter toxins from drinking water

    Some kinds of water pollution, such as algal blooms and plastics that foul rivers, lakes, and marine environments, lie in plain sight. But other contaminants are not so readily apparent, which makes their impact potentially more dangerous. Among these invisible substances is uranium. Leaching into water resources from mining operations, nuclear waste sites, or from natural subterranean deposits, the element can now be found flowing out of taps worldwide.

    In the United States alone, “many areas are affected by uranium contamination, including the High Plains and Central Valley aquifers, which supply drinking water to 6 million people,” says Ahmed Sami Helal, a postdoc in the Department of Nuclear Science and Engineering. This contamination poses a near and present danger. “Even small concentrations are bad for human health,” says Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering.

    Now, a team led by Li has devised a highly efficient method for removing uranium from drinking water. Applying an electric charge to graphene oxide foam, the researchers can capture uranium in solution, which precipitates out as a condensed solid crystal. The foam may be reused up to seven times without losing its electrochemical properties. “Within hours, our process can purify a large quantity of drinking water below the EPA limit for uranium,” says Li.

    A paper describing this work was published this week in Advanced Materials. The two first co-authors are Helal and Chao Wang, a postdoc at MIT during the study, who is now with the School of Materials Science and Engineering at Tongji University, Shanghai. Researchers from Argonne National Laboratory, Taiwan’s National Chiao Tung University, and the University of Tokyo also participated in the research. The Defense Threat Reduction Agency (U.S. Department of Defense) funded later stages of this work.

    Targeting the contaminant

    The project, launched three years ago, began as an effort to find better approaches to environmental cleanup of heavy metals from mining sites. To date, remediation methods for such metals as chromium, cadmium, arsenic, lead, mercury, radium, and uranium have proven limited and expensive. “These techniques are highly sensitive to organics in water, and are poor at separating out the heavy metal contaminants,” explains Helal. “So they involve long operation times, high capital costs, and at the end of extraction, generate more toxic sludge.”

    To the team, uranium seemed a particularly attractive target. Field testing from the U.S. Geological Survey and the Environmental Protection Agency (EPA) has revealed unhealthy levels of uranium moving into reservoirs and aquifers from natural rock sources in the northeastern United States, from ponds and pits storing old nuclear weapons and fuel in places like Hanford, Washington, and from mining activities located in many western states. This kind of contamination is prevalent in many other nations as well. An alarming number of these sites show uranium concentrations close to or above the EPA’s recommended ceiling of 30 parts per billion (ppb) — a level linked to kidney damage, cancer risk, and neurobehavioral changes in humans.

    The critical challenge lay in finding a practical remediation process exclusively sensitive to uranium, capable of extracting it from solution without producing toxic residues. And while earlier research showed that electrically charged carbon fiber could filter uranium from water, the results were partial and imprecise.

    Wang managed to crack these problems — based on her investigation of the behavior of graphene foam used for lithium-sulfur batteries. “The physical performance of this foam was unique because of its ability to attract certain chemical species to its surface,” she says. “I thought the ligands in graphene foam would work well with uranium.”

    Simple, efficient, and clean

    The team set to work transforming graphene foam into the equivalent of a uranium magnet. They learned that by sending an electric charge through the foam, splitting water and releasing hydrogen, they could increase the local pH and induce a chemical change that pulled uranium ions out of solution. The researchers found that the uranium would graft itself onto the foam’s surface, where it formed a never-before-seen crystalline uranium hydroxide. On reversal of the electric charge, the mineral, which resembles fish scales, slipped easily off the foam.

    It took hundreds of tries to get the chemical composition and electrolysis just right. “We kept changing the functional chemical groups to get them to work correctly,” says Helal. “And the foam was initially quite fragile, tending to break into pieces, so we needed to make it stronger and more durable,” says Wang.

    This uranium filtration process is simple, efficient, and clean, according to Li: “Each time it’s used, our foam can capture four times its own weight of uranium, and we can achieve an extraction capacity of 4,000 mg per gram, which is a major improvement over other methods,” he says. “We’ve also made a major breakthrough in reusability, because the foam can go through seven cycles without losing its extraction efficiency.” The graphene foam functions as well in seawater, where it reduces uranium concentrations from 3 parts per million to 19.9 ppb, showing that other ions in the brine do not interfere with filtration.
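
    A quick arithmetic check of the figures quoted above, using only the numbers already cited in the article:

    ```python
    # Quick arithmetic check of the figures quoted above, using only the numbers
    # already cited in the article (no additional data from the paper).
    capacity_mg_per_g = 4000                   # quoted extraction capacity
    print(f"Capacity: {capacity_mg_per_g / 1000:.0f}x the foam's own weight in uranium")

    seawater_in_ppb = 3 * 1000                 # 3 parts per million, expressed in ppb
    seawater_out_ppb = 19.9                    # residual concentration after treatment
    epa_limit_ppb = 30                         # EPA ceiling cited earlier in the article
    removed_fraction = 1 - seawater_out_ppb / seawater_in_ppb
    status = "below" if seawater_out_ppb < epa_limit_ppb else "above"
    print(f"Seawater test: {removed_fraction:.1%} of the uranium removed; "
          f"{seawater_out_ppb} ppb is {status} the {epa_limit_ppb} ppb EPA limit")
    ```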

    The team believes its low-cost, effective device could become a new kind of home water filter, fitting on faucets like those of commercial brands. “Some of these filters already have activated carbon, so maybe we could modify these, add low-voltage electricity to filter uranium,” says Li.

    “The uranium extraction this device achieves is very impressive when compared to existing methods,” says Ho Jin Ryu, associate professor of nuclear and quantum engineering at the Korea Advanced Institute of Science and Technology. Ryu, who was not involved in the research, believes that the demonstration of graphene foam reusability is a “significant advance,” and that “the technology of local pH control to enhance uranium deposition will be impactful because the scientific principle can be applied more generally to heavy metal extraction from polluted water.”

    The researchers have already begun investigating broader applications of their method. “There is a science to this, so we can modify our filters to be selective for other heavy metals such as lead, mercury, and cadmium,” says Li. He notes that radium is another significant danger for locales in the United States and elsewhere that lack resources for reliable drinking water infrastructure.

    “In the future, instead of a passive water filter, we could be using a smart filter powered by clean electricity that turns on electrolytic action, which could extract multiple toxic metals, tell you when to regenerate the filter, and give you quality assurance about the water you’re drinking.”

  • Vapor-collection technology saves water while clearing the air

    About two-fifths of all the water that gets withdrawn from lakes, rivers, and wells in the U.S. is used not for agriculture, drinking, or sanitation, but to cool the power plants that provide electricity from fossil fuels or nuclear power. Over 65 percent of these plants use evaporative cooling, leading to huge white plumes that billow from their cooling towers, which can be a nuisance and, in some cases, even contribute to dangerous driving conditions.

    Now, a small company based on technology recently developed at MIT by the Varanasi Research Group is hoping to reduce both the water needs at these plants and the resultant plumes — and to potentially help alleviate water shortages in areas where power plants put pressure on local water systems.

    The technology is surprisingly simple in principle, but developing it to the point where it can now be tested at full scale on industrial plants was a more complex proposition. That required the real-world experience that the company’s founders gained from installing prototype systems, first on MIT’s natural-gas-powered cogeneration plant and then on MIT’s nuclear research reactor.

    In these demanding tests, which involved exposure to not only the heat and vibrations of a working industrial plant but also the rigors of New England winters, the system proved its effectiveness at both eliminating the vapor plume and recapturing water. And, it purified the water in the process, so that it was 100 times cleaner than the incoming cooling water. The system is now being prepared for full-scale tests in a commercial power plant and in a chemical processing plant.

    “Campus as a living laboratory”

    The technology was originally envisioned by professor of mechanical engineering Kripa Varanasi to develop efficient water-recovery systems by capturing water droplets from both natural fog and plumes from power plant cooling towers. The project began as part of the doctoral thesis research of Maher Damak PhD ’18, with funding from the MIT Tata Center for Technology and Design, to improve the efficiency of fog-harvesting systems like the ones used in some arid coastal regions as a source of potable water. Those systems, which generally consist of plastic or metal mesh hung vertically in the path of fogbanks, are extremely inefficient, capturing only about 1 to 3 percent of the water droplets that pass through them.

    Varanasi and Damak found that vapor collection could be made much more efficient by first zapping the tiny droplets of water with a beam of electrically charged particles, or ions, to give each droplet a slight electric charge. Then, the stream of droplets passes through a wire mesh, like a window screen, that has an opposite electrical charge. This causes the droplets to be strongly attracted to the mesh, where they fall away due to gravity and can be collected in trays placed below the mesh.

    Lab tests showed the concept worked, and the researchers, joined by Karim Khalil PhD ’18, won the MIT $100K Entrepreneurship Competition in 2018 for the basic concept. The nascent company, which they called Infinite Cooling, with Damak as CEO, Khalil as CTO, and Varanasi as chairperson, immediately went to work setting up a test installation on one of the cooling towers of MIT’s natural-gas-powered Central Utility Plant, with funding from the MIT Office of Sustainability. After experimenting with various configurations, they were able to show that the system could indeed eliminate the plume and produce water of high purity.

    Professor Jacopo Buongiorno in the Department of Nuclear Science and Engineering immediately spotted a good opportunity for collaboration, offering the use of MIT’s Nuclear Reactor Laboratory research facility for further testing of the system with the help of NRL engineer Ed Block. With its 24/7 operation and its higher-temperature vapor emissions, the plant would provide a more stringent real-world test of the system, as well as proving its effectiveness in an actual operating reactor licensed by the Nuclear Regulatory Commission, an important step in “de-risking” the technology so that electric utilities could feel confident in adopting the system.

    After the system was installed above one of the plant’s four cooling towers, testing showed that the water being collected was more than 100 times cleaner than the feedwater coming into the cooling system. It also proved that the installation — which, unlike the earlier version, had its mesh screens mounted vertically, parallel to the vapor stream — had no effect at all on the operation of the plant. Video of the tests dramatically illustrates how, as soon as the power is switched on to the collecting mesh, the white plume of vapor disappears completely.

    The high temperature and volume of the vapor plume from the reactor’s cooling towers represented “kind of a worst-case scenario in terms of plumes,” Damak says, “so if we can capture that, we can basically capture anything.”

    Working with MIT’s Nuclear Reactor Laboratory, Varanasi says, “has been quite an important step because it helped us to test it at scale. … It really both validated the water quality and the performance of the system.” The process, he says, “shows the importance of using the campus as a living laboratory. It allows us to do these kinds of experiments at scale, and also showed the ability to sustainably reduce the water footprint of the campus.”

    Far-reaching benefits

    Power plant plumes are often considered an eyesore and can lead to local opposition to new power plants because of the potential for obscured views, and even potential traffic hazards when the obscuring plumes blow across roadways. “The ability to eliminate the plumes could be an important benefit, allowing plants to be sited in locations that might otherwise be restricted,” Buongiorno says. At the same time, the system could eliminate a significant amount of water used by the plants and then lost to the sky, potentially alleviating pressure on local water systems, which could be especially helpful in arid regions.

    The system is essentially a distillation process, and the pure water it produces could go into power plant boilers — which are separate from the cooling system — that require high-purity water. That might reduce the need for both fresh water and purification systems for the boilers.

    What’s more, in many arid coastal areas power plants are cooled directly with seawater. This system would essentially add a water desalination capability to the plant, at a fraction of the cost of building a new standalone desalination plant, and at an even smaller fraction of its operating costs since the heat would essentially be provided for free.

    Contamination of water is typically measured by testing its electrical conductivity, which increases with the amount of salts and other contaminants it contains. Water used in power plant cooling systems typically measures 3,000 microsiemens per centimeter, Khalil explains, while the water supply in the City of Cambridge is typically around 500 or 600 microsiemens per centimeter. The water captured by this system, he says, typically measures below 50 microsiemens per centimeter.

    Thanks to the validation provided by the testing on MIT’s plants, the company has now been able to secure arrangements for its first two installations on operating commercial plants, which should begin later this year. One is a 900-megawatt power plant where the system’s clean water production will be a major advantage, and the other is at a chemical manufacturing plant in the Midwest.

    In many locations power plants have to pay for the water they use for cooling, Varanasi says, and the new system is expected to reduce the need for water by up to 20 percent. For a typical power plant, that alone could account for about a million dollars saved in water costs per year, he says.

    “Innovation has been a hallmark of the U.S. commercial industry for more than six decades,” says Maria G. Korsnick, president and CEO of the Nuclear Energy Institute, who was not involved in the research. “As the changing climate impacts every aspect of life, including global water supplies, companies across the supply chain are innovating for solutions. The testing of this innovative technology at MIT provides a valuable basis for its consideration in commercial applications.”

  • A new way to detect the SARS-CoV-2 Alpha variant in wastewater

    Researchers from the Antimicrobial Resistance (AMR) interdisciplinary research group at the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, alongside collaborators from Biobot Analytics, Nanyang Technological University (NTU), and MIT, have successfully developed an innovative, open-source molecular detection method that is able to detect and quantify the B.1.1.7 (Alpha) variant of SARS-CoV-2. The breakthrough paves the way for rapid, inexpensive surveillance of other SARS-CoV-2 variants in wastewater.

    As the world continues to battle and contain Covid-19, the recent identification of SARS-CoV-2 variants with higher transmissibility and increased severity has made developing convenient variant tracking methods essential. Currently, identified variants include the B.1.1.7 (Alpha) variant first identified in the United Kingdom and the B.1.617.2 (Delta) variant first detected in India.

    Wastewater surveillance has emerged as a critical public health tool to safely and efficiently track the SARS-CoV-2 pandemic in a non-intrusive manner, providing complementary information that enables health authorities to acquire actionable community-level information. Most recently, viral fragments of SARS-CoV-2 were detected in housing estates in Singapore through a proactive wastewater surveillance program. This information, alongside surveillance testing, allowed Singapore’s Ministry of Health to swiftly respond, isolate, and conduct swab tests as part of precautionary measures.

    However, detecting variants through wastewater surveillance is less commonplace due to challenges in existing technology. Next-generation sequencing for wastewater surveillance is time-consuming and expensive. Tests also lack the sensitivity required to detect low variant abundances in dilute and mixed wastewater samples due to inconsistent and/or low sequencing coverage.

    The method developed by the researchers is uniquely tailored to address these challenges and expands the utility of wastewater surveillance beyond testing for SARS-CoV-2, toward tracking the spread of SARS-CoV-2 variants of concern.

    Wei Lin Lee, research scientist at SMART AMR and first author on the paper, adds, “This is especially important in countries battling SARS-CoV-2 variants. Wastewater surveillance will help find out the true proportion and spread of the variants in the local communities. Our method is sensitive enough to detect variants in highly diluted SARS-CoV-2 concentrations typically seen in wastewater samples, and produces reliable results even for samples which contain multiple SARS-CoV-2 lineages.”

    Led by Janelle Thompson, NTU associate professor, and Eric Alm, MIT professor and SMART AMR principal investigator, the team’s study, “Quantitative SARS-CoV-2 Alpha variant B.1.1.7 Tracking in Wastewater by Allele-Specific RT-qPCR” has been published in Environmental Science & Technology Letters. The research explains the innovative, open-source molecular detection method based on allele-specific RT-qPCR that detects and quantifies the B.1.1.7 (Alpha) variant. The developed assay, tested and validated in wastewater samples across 19 communities in the United States, is able to reliably detect and quantify low levels of the B.1.1.7 (Alpha) variant with low cross-reactivity, and at variant proportions down to 1 percent in a background of mixed SARS-CoV-2 viruses.

    Targeting spike protein mutations that are highly predictive of the B.1.1.7 (Alpha) variant, the method can be implemented using commercially available RT-qPCR protocols. Unlike commercially available products that use proprietary primers and probes for wastewater surveillance, the paper details the open-source method and its development that can be freely used by other organizations and research institutes for their work on wastewater surveillance of SARS-CoV-2 and its variants.
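
    As a generic illustration of how allele-specific RT-qPCR signals map to a variant proportion (the standard delta-Ct relationship with hypothetical cycle-threshold values, not the exact quantification procedure used in the paper):

    ```python
    # Minimal sketch of how allele-specific RT-qPCR readouts translate into a
    # variant proportion, using the generic delta-Ct relationship. This is not
    # the exact quantification procedure in the paper, and the cycle-threshold
    # (Ct) values below are hypothetical.
    def variant_fraction(ct_variant_allele, ct_total, efficiency=2.0):
        """Template abundance scales roughly as efficiency**(-Ct), so the ratio
        of the allele-specific signal to the total SARS-CoV-2 signal estimates
        the fraction of genomes carrying the targeted mutation."""
        return efficiency ** (ct_total - ct_variant_allele)

    ct_total = 30.1   # hypothetical Ct for a target shared by all SARS-CoV-2 lineages
    ct_alpha = 36.5   # hypothetical Ct for a B.1.1.7-specific mutation target
    print(f"Estimated B.1.1.7 proportion: {variant_fraction(ct_alpha, ct_total):.1%}")
    ```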

    The breakthrough by the research team in Singapore is currently used by Biobot Analytics, an MIT startup and global leader in wastewater epidemiology headquartered in Cambridge, Massachusetts, serving states and localities throughout the United States. Using the method, Biobot Analytics is able to accept and analyze wastewater samples for the B.1.1.7 (Alpha) variant and plans to add additional variants to its analysis as methods are developed. For example, the SMART AMR team is currently developing specific assays that will be able to detect and quantify the B.1.617.2 (Delta) variant, which has recently been identified as a variant of concern by the World Health Organization.

    “Using the team’s innovative method, we have been able to monitor the B.1.1.7 (Alpha) variant in local populations in the U.S. — empowering leaders with information about Covid-19 trends in their communities and allowing them to make considered recommendations and changes to control measures,” says Mariana Matus PhD ’18, Biobot Analytics CEO and co-founder.

    “This method can be rapidly adapted to detect new variants of concern beyond B.1.1.7,” adds MIT’s Alm. “Our partnership with Biobot Analytics has translated our research into real-world impact beyond the shores of Singapore and aided in the detection of Covid-19 and its variants, serving as an early warning system and guidance for policymakers as they trace infection clusters and consider suitable public health measures.”

    The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program.

    SMART was established by MIT in partnership with the National Research Foundation of Singapore (NRF) in 2007. SMART is the first entity in CREATE developed by NRF. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five IRGs: AMR, Critical Analytics for Manufacturing Personalized-Medicine, Disruptive and Sustainable Technologies for Agricultural Precision, Future Urban Mobility, and Low Energy Electronic Systems.

    The AMR interdisciplinary research group is a translational research and entrepreneurship program that tackles the growing threat of antimicrobial resistance. By leveraging talent and convergent technologies across Singapore and MIT, AMR aims to develop multiple innovative and disruptive approaches to identify, respond to, and treat drug-resistant microbial infections. Through strong scientific and clinical collaborations, its goal is to provide transformative, holistic solutions for Singapore and the world.

  • A new approach to preventing human-induced earthquakes

    When humans pump large volumes of fluid into the ground, they can set off potentially damaging earthquakes, depending on the underlying geology. This has been the case in certain oil- and gas-producing regions, where wastewater, often mixed with oil, is disposed of by injecting it back into the ground — a process that has triggered sizable seismic events in recent years.

    Now MIT researchers, working with an interdisciplinary team of scientists from industry and academia, have developed a method to manage such human-induced seismicity, and have demonstrated that the technique successfully reduced the number of earthquakes occurring in an active oil field.

    Their results, appearing today in Nature, could help mitigate earthquakes caused by the oil and gas industry, not just from the injection of wastewater produced with oil, but also that produced from hydraulic fracturing, or “fracking.” The team’s approach could also help prevent quakes from other human activities, such as the filling of water reservoirs and aquifers, and the sequestration of carbon dioxide in deep geologic formations.

    “Triggered seismicity is a problem that goes way beyond producing oil,” says study lead author Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “This is a huge problem for society that will have to be confronted if we are to safely inject carbon dioxide into the subsurface. We demonstrated the kind of study that will be necessary for doing this.”

    The study’s co-authors include Ruben Juanes, professor of civil and environmental engineering at MIT, and collaborators from the University of California at Riverside, the University of Texas at Austin, Harvard University, and Eni, a multinational oil and gas company based in Italy.

    Safe injections

    Both natural and human-induced earthquakes occur along geologic faults, or fractures between two blocks of rock in the Earth’s crust. In stable periods, the rocks on either side of a fault are held in place by the pressures generated by surrounding rocks. But when a large volume of fluid is suddenly injected at high rates, it can upset a fault’s fluid stress balance. In some cases, this sudden injection can lubricate a fault and cause rocks on either side to slip and trigger an earthquake.

    The most common source of such fluid injections is from the oil and gas industry’s disposal of wastewater that is brought up along with oil. Field operators dispose of this water through injection wells that continuously pump the water back into the ground at high pressures.

    “There’s a lot of water produced with the oil, and that water is injected into the ground, which has caused a large number of quakes,” Hager notes. “So, for a while, oil-producing regions in Oklahoma had more magnitude 3 quakes than California, because of all this wastewater that was being injected.”

    In recent years, a similar problem arose in southern Italy, where injection wells on oil fields operated by Eni triggered microseisms in an area where large naturally occurring earthquakes had previously occurred. The company, looking for ways to address the problem, sought consultation from Hager and Juanes, both leading experts in seismicity and subsurface flows.

    “This was an opportunity for us to get access to high-quality seismic data about the subsurface, and learn how to do these injections safely,” Juanes says.

    Seismic blueprint

    The team made use of detailed information, accumulated by the oil company over years of operation in the Val D’Agri oil field, a region of southern Italy that lies in a tectonically active basin. The data included information about the region’s earthquake record, dating back to the 1600s, as well as the structure of rocks and faults, and the state of the subsurface corresponding to the various injection rates of each well.

    This video shows the change in stress on the geologic faults of the Val d’Agri field from 2001 to 2019, as predicted by a new MIT-derived model. Video credit: A. Plesch (Harvard University)

    This video shows small earthquakes occurring on the Costa Molina fault within the Val d’Agri field from 2004 to 2016. Each event is shown for two years fading from an initial bright color to the final dark color. Video credit: A. Plesch (Harvard University)

    The researchers integrated these data into a coupled subsurface flow and geomechanical model, which predicts how the stresses and strains of underground structures evolve as the volume of pore fluid, such as from the injection of water, changes. They connected this model to an earthquake mechanics model in order to translate the changes in underground stress and fluid pressure into a likelihood of triggering earthquakes. They then quantified the rate of earthquakes associated with various rates of water injection, and identified scenarios that were unlikely to trigger large quakes.
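
    For context, earthquake-mechanics models of this kind commonly build on the Coulomb failure criterion; the sketch below shows that textbook relation with hypothetical numbers, not the study’s coupled model.

    ```python
    # Sketch of the textbook Coulomb-failure-stress criterion that
    # earthquake-mechanics models of this kind commonly build on. This is not
    # the study's coupled model, and the numbers below are hypothetical.
    def coulomb_stress_change(d_shear_mpa, d_normal_mpa, d_pore_pressure_mpa,
                              friction=0.6):
        """Change in Coulomb failure stress on a fault, in MPa.
        d_normal_mpa is the change in compressive (clamping) normal stress;
        a rise in pore pressure offsets that clamping, so a positive result
        means the perturbation pushes the fault toward slip."""
        return d_shear_mpa + friction * (d_pore_pressure_mpa - d_normal_mpa)

    # Example: injection raises pore pressure on a fault by 0.5 MPa with
    # negligible change in shear or total normal stress.
    print(f"dCFS = {coulomb_stress_change(0.0, 0.0, 0.5):+.2f} MPa "
          f"(positive = destabilizing)")
    ```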

    When they ran the models using data from 1993 through 2016, the predictions of seismic activity matched with the earthquake record during this period, validating their approach. They then ran the models forward in time, through the year 2025, to predict the region’s seismic response to three different injection rates: 2,000, 2,500, and 3,000 cubic meters per day. The simulations showed that large earthquakes could be avoided if operators kept injection rates at 2,000 cubic meters per day — a flow rate comparable to a small public fire hydrant.
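
    For scale, a simple unit conversion (not study data) puts that recommended rate at a few hundred gallons per minute, which is indeed in the range a small hydrant delivers:

    ```python
    # Simple unit conversion (not study data) for the recommended injection rate.
    rate_m3_per_day = 2000
    liters_per_second = rate_m3_per_day * 1000 / 86400          # ~23 L/s
    us_gallons_per_minute = liters_per_second * 60 / 3.785      # ~370 gal/min
    print(f"{rate_m3_per_day} m^3/day ≈ {liters_per_second:.0f} L/s "
          f"≈ {us_gallons_per_minute:.0f} US gal/min")
    ```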

    Eni field operators implemented the team’s recommended rate at the oil field’s single water injection well over a 30-month period between January 2017 and June 2019. In this time, the team observed only a few tiny seismic events, which coincided with brief periods when operators went above the recommended injection rate.

    “The seismicity in the region has been very low in these two-and-a-half years, with around four quakes of 0.5 magnitude, as opposed to hundreds of quakes, of up to 3 magnitude, that were happening between 2006 and 2016,” Hager says. 

    The results demonstrate that operators can successfully manage earthquakes by adjusting injection rates, based on the underlying geology. Juanes says the team’s modeling approach may help to prevent earthquakes related to other processes, such as the building of water reservoirs and the sequestration of carbon dioxide — as long as there is detailed information about a region’s subsurface.

    “A lot of effort needs to go into understanding the geologic setting,” says Juanes, who notes that, if carbon sequestration were carried out on depleted oil fields, “such reservoirs could have this type of history, seismic information, and geologic interpretation that you could use to build similar models for carbon sequestration. We show it’s at least possible to manage seismicity in an operational setting. And we offer a blueprint for how to do it.”

    This research was supported, in part, by Eni.