More stories

  • Advancing the energy transition amidst global crises

    “The past six years have been the warmest on the planet, and our track record on climate change mitigation is drastically short of what it needs to be,” said Robert C. Armstrong, MIT Energy Initiative (MITEI) director and the Chevron Professor of Chemical Engineering, introducing MITEI’s 15th Annual Research Conference.

    At the symposium, participants from academia, industry, and finance acknowledged the deepening difficulties of decarbonizing a world rocked by geopolitical conflicts and suffering from supply chain disruptions, energy insecurity, inflation, and a persistent pandemic. In spite of this grim backdrop, the conference offered evidence of significant progress in the energy transition. Researchers provided glimpses of a low-carbon future, presenting advances in such areas as long-duration energy storage, carbon capture, and renewable technologies.

    In his keynote remarks, Ernest J. Moniz, the Cecil and Ida Green Professor of Physics and Engineering Systems Emeritus, founding director of MITEI, and former U.S. secretary of energy, highlighted “four areas that have materially changed in the last year” that could shake up, and possibly accelerate, efforts to address climate change.

    Extreme weather seems to be propelling the public and policy makers of both U.S. parties toward “convergence … at least in recognition of the challenge,” Moniz said. He perceives a growing consensus that climate goals will require — in diminishing order of certainty — firm (always-on) power to complement renewable energy sources, a fuel (such as hydrogen) flowing alongside electricity, and removal of atmospheric carbon dioxide (CO2).

    Russia’s invasion of Ukraine, with its “weaponization of natural gas” and global energy impacts, underscores the idea that climate, energy security, and geopolitics “are now more or less recognized widely as one conversation.” Moniz pointed as well to new U.S. laws on climate change and infrastructure that will amplify the role of science and technology and “address the drive to technological dominance by China.”

    The rapid transformation of energy systems will require a comprehensive industrial policy, Moniz said. Government and industry must select and rapidly develop low-carbon fuels, firm power sources (possibly including nuclear power), CO2 removal systems, and long-duration energy storage technologies. “We will need to make progress on all fronts literally in this decade to come close to our goals for climate change mitigation,” he concluded.

    Global cooperation?

    Over two days, conference participants delved into many of the issues Moniz raised. In one of the first panels, scholars pondered whether the international community could forge a coordinated climate change response. The United States’ rift with China, especially over technology trade policies, loomed large.

    “Hatred of China is a bipartisan hobby and passion, but a blanket approach isn’t right, even for the sake of national security,” said Yasheng Huang, the Epoch Foundation Professor of Global Economics and Management at the MIT Sloan School of Management. “Although the United States and China working together would have huge effects for both countries, it is politically unpalatable in the short term,” said F. Taylor Fravel, the Arthur and Ruth Sloan Professor of Political Science and director of the MIT Security Studies Program. John E. Parsons, deputy director for research at the MIT Center for Energy and Environmental Policy Research, suggested that the United States should use this moment “to get our own act together … and start doing things,” such as building nuclear power plants in a cost-effective way.

    Debating carbon removal

    Several panels took up the matter of carbon emissions and the most promising technologies for contending with them. Charles Harvey, MIT professor of civil and environmental engineering, and Howard Herzog, a senior research engineer at MITEI, set the stage early, debating whether capturing carbon was essential to reaching net-zero targets.

    “I have no trouble getting to net zero without carbon capture and storage,” said David Keith, the Gordon McKay Professor of Applied Physics at Harvard University, in a subsequent roundtable. Carbon capture seems riskier to Keith than solar geoengineering, which involves injecting sulfur into the stratosphere to offset the heat-trapping effects of CO2.

    There are new ways of moving carbon from where it’s a problem to where it’s safer. Kripa K. Varanasi, MIT professor of mechanical engineering, described a process for modulating the pH of ocean water to remove CO2. Timothy Krysiek, managing director for Equinor Ventures, talked about construction of a 900-kilometer pipeline transporting CO2 from northern Germany to a large-scale storage site located in Norwegian waters 3,000 meters below the seabed. “We can use these offshore Norwegian assets as a giant carbon sink for Europe,” he said.

    A startup showcase featured additional approaches to the carbon challenge. Mantel, which received MITEI Seed Fund money, is developing molten salt material to capture carbon for long-term storage or for use in generating electricity. Verdox has come up with an electrochemical process for capturing dilute CO2 from the atmosphere.

    But while much of the global warming discussion focuses on CO2, other greenhouse gases are menacing. Another panel discussed measuring and mitigating these pollutants. “Methane has 82 times more warming power than CO2 from the point of emission,” said Desirée L. Plata, MIT associate professor of civil and environmental engineering. “Cutting methane is the strongest lever we have to slow climate change in the next 25 years — really the only lever.”

    Steven Hamburg, chief scientist and senior vice president of the Environmental Defense Fund, cautioned that emission of hydrogen molecules into the atmosphere can cause increases in other greenhouse gases such as methane, ozone, and water vapor. As researchers and industry turn to hydrogen as a fuel or as a feedstock for commercial processes, “we will need to minimize leakage … or risk increasing warming,” he said.

    Supply chains, markets, and new energy ventures

    In panels on energy storage and the clean energy supply chain, participants discussed the challenges ahead. High-density energy materials such as lithium, cobalt, nickel, copper, and vanadium for grid-scale energy storage, electric vehicles (EVs), and other clean energy technologies can be difficult to source. “These often come from water-stressed regions, and we need to be super thoughtful about environmental stresses,” said Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. She also noted that, in light of the explosive growth in demand for metals such as lithium, recycling EVs won’t be of much help until EVs are much further along in their adoption cycle. “The amount of material coming back from end-of-life batteries is minor,” she said.

    Arvind Sanger, founder and managing partner of Geosphere Capital, said that the United States should be developing its own rare earths and minerals, although gaining the know-how will take time, and overcoming “NIMBYism” (not-in-my-backyard-ism) is a challenge. Sanger emphasized that we must continue to use “denser sources of energy” to catalyze the energy transition over the next decade. In particular, Sanger noted that “for every transition technology, steel is needed,” and steel is made in furnaces that use coal and natural gas. “It’s completely woolly-headed to think we can just go to a zero-fossil fuel future in a hurry,” he said.

    The topic of power markets occupied another panel, which focused on ways to ensure the distribution of reliable and affordable zero-carbon energy. Integrating intermittent resources such as wind and solar into the grid requires a suite of retail markets and new digital tools, said Anuradha Annaswamy, director of MIT’s Active-Adaptive Control Laboratory. Tim Schittekatte, a postdoc at the MIT Sloan School of Management, proposed auctions as a way of insuring consumers against periods of high market costs.

    Another panel described the very different investment needs of new energy startups, such as longer research and development phases. Hooisweng Ow, technology principal at Eni Next LLC Ventures, which is developing drilling technology for geothermal energy, recommends joint development and partnerships to reduce risk. Michael Kearney SM ’11, PhD ’19, SM ’19 is a partner at The Engine, a venture firm built by MIT that invests in path-breaking technology to solve the toughest challenges in climate and other problems. The emergence of new technologies and markets, Kearney said, will bring on “a labor transition on an order of magnitude never seen before in this country. Workforce development is not a natural zone for startups … and this will have to change.”

    Supporting the global South

    The opportunities and challenges of the energy transition look quite different in the developing world. In conversation with Robert Armstrong, Luhut Binsar Pandjaitan, the coordinating minister for maritime affairs and investment of the Republic of Indonesia, reported that his “nation is rich with solar, wind, and energy transition minerals like nickel and copper,” but cannot on its own develop renewable energy, reduce carbon emissions, and improve grid infrastructure. “Education is a top priority, and we are very far behind in high technologies,” he said. “We need help and support from MIT to achieve our target.”

    Technologies that could springboard Indonesia and other nations of the global South toward their climate goals are emerging in MITEI-supported projects and at young companies MITEI helped spawn. Among the promising innovations unveiled at the conference are new materials and designs for cooling buildings in hot climates and reducing the environmental costs of construction, and a sponge-like substance that passively sucks moisture out of the air to lower the energy required for running air conditioners in humid climates.

    Other ideas on the move from lab to market have great potential for industrialized nations as well, such as a computational framework for maximizing the energy output of ocean-based wind farms; a process for using ammonia as a renewable fuel with no CO2 emissions; long-duration energy storage derived from the oxidation of iron; and a laser-based method for unlocking geothermal steam to drive power plants.

  • Ocean microbes get their diet through a surprising mix of sources, study finds

    One of the smallest and mightiest organisms on the planet is a plant-like bacterium known to marine biologists as Prochlorococcus. The green-tinted microbe measures less than a micron across, and its populations suffuse the upper layers of the ocean, where a single teaspoon of seawater can hold millions of the tiny organisms.

    Prochlorococcus grows through photosynthesis, using sunlight to convert the atmosphere’s carbon dioxide into organic carbon molecules. The microbe is responsible for 5 percent of the world’s photosynthesizing activity, and scientists have assumed that photosynthesis is the microbe’s go-to strategy for acquiring the carbon it needs to grow.

    But a new MIT study published today in Nature Microbiology has found that Prochlorococcus relies on another carbon-feeding strategy more than previously thought.

    Organisms that use a mix of strategies to acquire carbon are known as mixotrophs. Most marine plankton are mixotrophs. And while Prochlorococcus is known to occasionally dabble in mixotrophy, scientists have assumed the microbe primarily lives a phototrophic lifestyle.

    The new MIT study shows that in fact, Prochlorococcus may be more of a mixotroph than it lets on. The microbe may get as much as one-third of its carbon through a second strategy: consuming the dissolved remains of other dead microbes.

    The new estimate may have implications for climate models, as the microbe is a significant force in capturing and “fixing” carbon in the Earth’s atmosphere and ocean.

    “If we wish to predict what will happen to carbon fixation in a different climate, or predict where Prochlorococcus will or will not live in the future, we probably won’t get it right if we’re missing a process that accounts for one-third of the population’s carbon supply,” says Mick Follows, a professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), and its Department of Civil and Environmental Engineering.

    The study’s co-authors include first author and MIT postdoc Zhen Wu, along with collaborators from the University of Haifa, the Leibniz-Institute for Baltic Sea Research, the Leibniz-Institute of Freshwater Ecology and Inland Fisheries, and Potsdam University.

    Persistent plankton

    Since Prochlorococcus was first discovered in the Sargasso Sea in 1986, by MIT Institute Professor Sallie “Penny” Chisholm and others, the microbe has been observed throughout the world’s oceans, inhabiting the upper sunlit layers ranging from the surface down to about 160 meters. Within this range, light levels vary, and the microbe has evolved a number of ways to photosynthesize carbon in even low-lit regions.

    The organism has also evolved ways to consume organic compounds including glucose and certain amino acids, which could help the microbe survive for limited periods of time in dark ocean regions. But surviving on organic compounds alone is a bit like only eating junk food, and there is evidence that Prochlorococcus will die after a week in regions where photosynthesis is not an option.

    And yet, researchers including Daniel Sher of the University of Haifa, who is a co-author of the new study, have observed healthy populations of Prochlorococcus that persist deep in the sunlit zone, where the light intensity should be too low to maintain a population. This suggests that the microbes must be switching to a non-photosynthesizing, mixotrophic lifestyle in order to consume other organic sources of carbon.

    “It seems that at least some Prochlorococcus are using existing organic carbon in a mixotrophic way,” Follows says. “That stimulated the question: How much?”

    What light cannot explain

    In their new paper, Follows, Wu, Sher, and their colleagues looked to quantify the amount of carbon that Prochlorococcus is consuming through processes other than photosynthesis.

    The team looked first to measurements taken by Sher’s team, which previously took ocean samples at various depths in the Mediterranean Sea and measured the concentration of phytoplankton, including Prochlorococcus, along with the associated intensity of light and the concentration of nitrogen — an essential nutrient that is richly available in deeper layers of the ocean and that plankton can assimilate to make proteins.

    Wu and Follows used this data, and similar information from the Pacific Ocean, along with previous work from Chisholm’s lab, which established the rate of photosynthesis that Prochlorococcus could carry out in a given intensity of light.

    “We converted that light intensity profile into a potential growth rate — how fast the population of Prochlorococcus could grow if it was acquiring all its carbon by photosynthesis, and light is the limiting factor,” Follows explains.

    The team then compared this calculated rate to growth rates that were previously observed in the Pacific Ocean by several other research teams.

    “This data showed that, below a certain depth, there’s a lot of growth happening that photosynthesis simply cannot explain,” Follows says. “Some other process must be at work to make up the difference in carbon supply.”

    The researchers inferred that, in deeper, darker regions of the ocean, Prochlorococcus populations are able to survive and thrive by resorting to mixotrophy, including consuming organic carbon from detritus. Specifically, the microbe may be carrying out osmotrophy — a process by which an organism passively absorbs organic carbon molecules via osmosis.

    Judging by how fast the microbe is estimated to be growing below the sunlit zone, the team calculates that Prochlorococcus obtains up to one-third of its carbon diet through mixotrophic strategies.
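    That back-of-the-envelope logic can be sketched in code. The toy calculation below uses entirely made-up numbers (the surface light, attenuation coefficient, maximum growth rate, half-saturation light level, and observed growth rate are all illustrative assumptions, not the study's data) to show how a light-limited growth estimate and an observed growth rate combine into a mixotrophic carbon fraction:

    ```python
    import math

    # Illustrative toy numbers only -- not the study's data or model.
    # Light decays roughly exponentially with depth (Beer-Lambert attenuation).
    def light_at_depth(depth_m, surface_light=1500.0, attenuation=0.04):
        """Photon flux (arbitrary units) at a given depth."""
        return surface_light * math.exp(-attenuation * depth_m)

    def photosynthetic_growth(light, mu_max=0.5, half_sat=100.0):
        """Light-limited potential growth rate (per day), saturating response."""
        return mu_max * light / (light + half_sat)

    # Hypothetical observed growth rate deep in the sunlit zone (per day).
    depth = 120  # meters
    observed_growth = 0.08

    potential = photosynthetic_growth(light_at_depth(depth))
    # The growth that photosynthesis alone cannot explain is attributed
    # to carbon acquired through mixotrophy.
    mixotrophic_fraction = max(0.0, 1 - potential / observed_growth)
    print(f"light-limited potential growth: {potential:.3f}/day")
    print(f"fraction of carbon from mixotrophy: {mixotrophic_fraction:.2f}")
    ```

    With these invented inputs the unexplained share comes out near one-third, mirroring the shape of the team's inference; the study's actual estimate rests on measured light profiles and observed population growth rates.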

    “It’s kind of like going from a specialist to a generalist lifestyle,” Follows says. “If I only eat pizza, then if I’m 20 miles from a pizza place, I’m in trouble, whereas if I eat burgers as well, I could go to the nearby McDonald’s. People had thought of Prochlorococcus as a specialist, where they do this one thing (photosynthesis) really well. But it turns out they may have more of a generalist lifestyle than we previously thought.”

    Chisholm, who has both literally and figuratively written the book on Prochlorococcus, says the group’s findings “expand the range of conditions under which their populations can not only survive, but also thrive. This study changes the way we think about the role of Prochlorococcus in the microbial food web.”

    This research was supported, in part, by the Israel Science Foundation, the U.S. National Science Foundation, and the Simons Foundation.

  • Methane research takes on new urgency at MIT

    One of the most notable climate change provisions in the 2022 Inflation Reduction Act is the first U.S. federal tax on a greenhouse gas (GHG). That the fee targets methane (CH4), rather than carbon dioxide (CO2), emissions is indicative of the urgency the scientific community has placed on reducing this short-lived but powerful gas. Methane persists in the air about 12 years — compared to more than 1,000 years for CO2 — yet it immediately causes about 120 times more warming upon release. The gas is responsible for at least a quarter of today’s gross warming. 
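    The interplay between those two numbers, a roughly 12-year lifetime and about 120 times CO2's warming at the moment of release, is what makes methane's advantage shrink over longer horizons. A minimal sketch (it ignores carbon-cycle feedbacks and crudely treats CO2 as permanent, so the results are illustrative rather than official global warming potential values):

    ```python
    import math

    CH4_LIFETIME_YEARS = 12.0  # approximate atmospheric lifetime of methane
    INSTANT_RATIO = 120.0      # CH4 vs. CO2 warming at the moment of release

    def warming_ratio(horizon_years):
        """Cumulative warming of a CH4 pulse relative to the same mass of CO2,
        integrated over the horizon. CO2 forcing is crudely held constant."""
        tau = CH4_LIFETIME_YEARS
        ch4_integral = tau * (1 - math.exp(-horizon_years / tau))  # decaying pulse
        co2_integral = horizon_years                               # persistent unit forcing
        return INSTANT_RATIO * ch4_integral / co2_integral

    for horizon in (20, 100):
        print(horizon, round(warming_ratio(horizon), 1))
    ```

    Under these simplifications the ratio falls from roughly 58 over 20 years to roughly 14 over 100 years; the published GWP values (around 80 and 30, respectively) are higher mainly because they include indirect effects this sketch omits.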

    “Methane has a disproportionate effect on near-term warming,” says Desirée Plata, director of the MIT Methane Network. “CH4 does more damage than CO2 no matter how long you run the clock. By removing methane, we could potentially avoid critical climate tipping points.”

    Because GHGs have a runaway effect on climate, reductions made now will have a far greater impact than the same reductions made in the future. Cutting methane emissions will slow the thawing of permafrost, which could otherwise lead to massive methane releases, as well as reduce increasing emissions from wetlands.  

    “The goal of MIT Methane Network is to reduce methane emissions by 45 percent by 2030, which would save up to 0.5 degree C of warming by 2100,” says Plata, an associate professor of civil and environmental engineering at MIT and director of the Plata Lab. “When you consider that governments are trying for a 1.5-degree reduction of all GHGs by 2100, this is a big deal.” 

    Under normal concentrations, methane, like CO2, poses no health risks. Yet methane assists in the creation of high levels of ozone. In the lower atmosphere, ozone is a key component of air pollution, which leads to “higher rates of asthma and increased emergency room visits,” says Plata. 

    Methane-related projects at the Plata Lab include a filter made of zeolite — the same clay-like material used in cat litter — designed to convert methane into CO2 at dairy farms and coal mines. At first glance, the technology would appear to be a bit of a hard sell, since it converts one GHG into another. Yet the zeolite filter’s low carbon and dollar costs, combined with the disproportionate warming impact of methane, make it a potential game-changer.

    The sense of urgency about methane has been amplified by recent studies that show humans are generating far more methane emissions than previously estimated, and that the rates are rising rapidly. Exactly how much methane is in the air is uncertain. Current methods for measuring atmospheric methane, such as ground, drone, and satellite sensors, “are not readily abundant and do not always agree with each other,” says Plata.  

    The Plata Lab is collaborating with Tim Swager in the MIT Department of Chemistry to develop low-cost methane sensors. “We are developing chemiresistive sensors that cost about a dollar that you could place near energy infrastructure to back-calculate where leaks are coming from,” says Plata.

    The researchers are working on improving the accuracy of the sensors using machine learning techniques and are planning to integrate internet-of-things technology to transmit alerts. Plata and Swager are not alone in focusing on data collection: the Inflation Reduction Act adds significant funding for methane sensor research. 

    Other research at the Plata Lab includes the development of nanomaterials and heterogeneous catalysis techniques for environmental applications. The lab also explores mitigation solutions for industrial waste, particularly those related to the energy transition. Plata is the co-founder of a lithium-ion battery recycling startup called Nth Cycle.

    On a more fundamental level, the Plata Lab is exploring how to develop products with environmental and social sustainability in mind. “Our overarching mission is to change the way that we invent materials and processes so that environmental objectives are incorporated along with traditional performance and cost metrics,” says Plata. “It is important to do that rigorous assessment early in the design process.”

    Video: MIT amps up methane research

    The MIT Methane Network brings together 26 researchers from MIT along with representatives of other institutions “that are dedicated to the idea that we can reduce methane levels in our lifetime,” says Plata. The organization supports research such as Plata’s zeolite and sensor projects, as well as designing pipeline-fixing robots, developing methane-based fuels for clean hydrogen, and researching the capture and conversion of methane into liquid chemical precursors for pharmaceuticals and plastics. Other members are researching policies to encourage more sustainable agriculture and land use, as well as methane-related social justice initiatives. 

    “Methane is an especially difficult problem because it comes from all over the place,” says Plata. A recent Global Carbon Project study estimated that half of methane emissions are caused by humans. This is led by waste and agriculture (28 percent), including cow and sheep belching, rice paddies, and landfills.  

    Fossil fuels represent 18 percent of the total budget. Of this, about 63 percent is derived from oil and gas production and pipelines, 33 percent from coal mining activities, and 5 percent from industry and transportation. Human-caused biomass burning, primarily from slash-and-burn agriculture, emits about 4 percent of the global total.  

    The other half of the methane budget includes natural methane emissions from wetlands (20 percent) and other natural sources (30 percent). The latter includes permafrost melting and natural biomass burning, such as forest fires started by lightning.  
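    A quick arithmetic check of the budget shares quoted above confirms that the anthropogenic and natural sides each total roughly half of emissions, and exposes a small rounding artifact in the fossil fuel sub-split (the figures are the Global Carbon Project estimates as reported in this article):

    ```python
    # Shares of the global methane budget as quoted in the article (percent of total).
    anthropogenic = {"waste and agriculture": 28, "fossil fuels": 18, "biomass burning": 4}
    natural = {"wetlands": 20, "other natural sources": 30}

    print(sum(anthropogenic.values()))  # 50
    print(sum(natural.values()))        # 50

    # The fossil fuel slice is itself split 63 / 33 / 5 percent; those sub-shares
    # total 101 rather than 100 because of rounding in the source figures.
    fossil_split = {"oil and gas": 63, "coal mining": 33, "industry and transport": 5}
    print(sum(fossil_split.values()))   # 101
    ```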

    With increases in global warming and population, the line between anthropogenic and natural causes is getting fuzzier. “Human activities are accelerating natural emissions,” says Plata. “Climate change increases the release of methane from wetlands and permafrost and leads to larger forest and peat fires.”  

    The calculations can get complicated. For example, wetlands provide benefits, including CO2 capture, biological diversity, and sea level rise resiliency, that more than compensate for their methane releases. Meanwhile, draining swamps for development increases emissions.

    Over 100 nations have signed on to the U.N.’s Global Methane Pledge to cut anthropogenic methane emissions by at least 30 percent within the next 10 years. A U.N. report estimates that this goal can be achieved using proven technologies and that about 60 percent of these reductions can be accomplished at low cost.

    Much of the savings would come from greater efficiencies in fossil fuel extraction, processing, and delivery. The methane fees in the Inflation Reduction Act are primarily focused on encouraging fossil fuel companies to accelerate ongoing efforts to cap old wells, flare off excess emissions, and tighten pipeline connections.  

    Fossil fuel companies have already made far greater pledges to reduce methane than they have for CO2, which is central to their business. This is due in part to the potential savings, and in part to preparation for methane regulations expected from the Environmental Protection Agency in late 2022. The regulations build upon existing EPA oversight of drilling operations and will likely be exempt from the U.S. Supreme Court’s ruling that limits the federal government’s ability to regulate GHGs.

    Zeolite filter targets methane in dairy and coal 

    The “low-hanging fruit” of gas stream mitigation addresses most of the 20 percent of total methane emissions in which the gas is released in sufficiently high concentrations for flaring. Plata’s zeolite filter aims to address the thornier challenge of reducing the 80 percent of non-flammable dilute emissions. 

    Plata found inspiration in decades-old catalysis research for turning methane into methanol. One strategy has been to use an abundant, low-cost aluminosilicate clay called zeolite.  

    “The methanol creation process is challenging because you need to separate a liquid, and it has very low efficiency,” says Plata. “Yet zeolite can be very efficient at converting methane into CO2, and it is much easier because it does not require liquid separation. Converting methane to CO2 sounds like a bad thing, but there is a major anti-warming benefit. And because methane is much more dilute than CO2, the relative CO2 contribution is minuscule.”  

    Using zeolite to create methanol requires highly concentrated methane, high temperatures and pressures, and industrial processing conditions. Yet Plata’s process, which dopes the zeolite with copper, operates in the presence of oxygen at much lower temperatures under typical pressures. “We let the methane proceed the way it wants from a thermodynamic perspective from methane to methanol down to CO2,” says Plata. 

    Researchers around the world are working on other dilute methane removal technologies. Projects include spraying iron salt aerosols into sea air, where they react with natural chlorine or bromine radicals, thereby removing methane. Most of these geoengineering solutions, however, are difficult to measure and would require massive scale to make a difference.

    Plata is focusing her zeolite filters on environments where concentrations are high, but not so high as to be flammable. “We are trying to scale zeolite into filters that you could snap onto the side of a cross-ventilation fan in a dairy barn or in a ventilation air shaft in a coal mine,” says Plata. “For every packet of air we bring in, we take a lot of methane out, so we get more bang for our buck.”  

    The major challenge is creating a filter that can handle high flow rates without getting clogged or falling apart. Dairy barn air handlers can push air at up to 5,000 cubic feet per minute and coal mine handlers can approach 500,000 CFM. 

    Plata is exploring engineering options including fluidized bed reactors with floating catalyst particles. Another filter solution, based in part on catalytic converters, features “higher-order geometric structures where you have a porous material with a long path length where the gas can interact with the catalyst,” says Plata. “This avoids the challenge with fluidized beds of containing catalyst particles in the reactor. Instead, they are fixed within a structured material.”  

    Competing technologies for removing methane from mine shafts “operate at temperatures of 1,000 to 1,200 degrees C, requiring a lot of energy and risking explosion,” says Plata. “Our technology avoids safety concerns by operating at 300 to 400 degrees C. It reduces energy use and provides more tractable deployment costs.” 

    Potentially, energy and dollar costs could be further reduced in coal mines by capturing the heat generated by the conversion process. “In coal mines, you have enrichments above a half-percent methane, but below the 4 percent flammability threshold,” says Plata. “The excess heat from the process could be used to generate electricity using off-the-shelf converters.” 

    Plata’s dairy barn research is funded by the Gerstner Family Foundation and the coal mining project by the U.S. Department of Energy. “The DOE would like us to spin out the technology for scale-up within three years,” says Plata. “We cannot guarantee we will hit that goal, but we are trying to develop this as quickly as possible. Our society needs to start reducing methane emissions now.”

  • Studying floods to better predict their dangers

    “My job is basically flooding Cambridge,” says Katerina “Katya” Boukin, a graduate student in civil and environmental engineering at MIT and the MIT Concrete Sustainability Hub’s resident expert on flood simulations. 

    You can often find her fine-tuning high-resolution flood risk models for the City of Cambridge, Massachusetts, or talking about hurricanes with fellow researcher Ipek Bensu Manav.

    Flooding represents one of the world’s gravest natural hazards. Extreme climate events inducing flooding, like severe storms, winter storms, and tropical cyclones, caused an estimated $128.1 billion of damages in 2021 alone. 

    Climate simulation models suggest that severe storms will become more frequent in the coming years, necessitating a better understanding of which parts of cities are most vulnerable — an understanding that can be improved through modeling.

    A problem with current flood models is that they struggle to account for an oft-misunderstood type of flooding known as pluvial flooding. 

    “You might think of flooding as the overflowing of a body of water, like a river. This is fluvial flooding. This can be somewhat predictable, as you can think of proximity to water as a risk factor,” Boukin explains.

    However, the “flash flooding” that causes many deaths each year can happen even in places nowhere near a body of water. This is an example of pluvial flooding, which is affected by terrain, urban infrastructure, and the dynamic nature of storm loads.

    “If we don’t know how a flood is propagating, we don’t know the risk it poses to the urban environment. And if we don’t understand the risk, we can’t really discuss mitigation strategies,” says Boukin. “That’s why I pursue improving flood propagation models.”

    Boukin is leading development of a new flood prediction method that seeks to address these shortcomings. By better representing the complex morphology of cities, Boukin’s approach may provide a clearer forecast of future urban flooding.

    Katya Boukin developed this model of the City of Cambridge, Massachusetts. The base model was provided through a collaboration between MIT, the City of Cambridge, and Dewberry Engineering.

    Image: Katya Boukin

    “In contrast to the more traditional catchment model, our method has rainwater spread around the urban environment based on the city’s topography, below-the-surface features like sewer pipes, and the characteristics of local soils,” notes Boukin.

    “We can simulate the flooding of regions with local rain forecasts. Our results can show how flooding propagates by the foot and by the second,” she adds.
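    The ingredients Boukin describes, surface topography, below-ground drainage, and soil infiltration, can be illustrated with a generic toy routing scheme. This is a textbook-style sketch with invented elevations and capacities, not the Concrete Sustainability Hub's actual model:

    ```python
    # Toy 1-D pluvial flood routing across a city cross-section.
    # Per cell: ground elevation (m) and combined soil/sewer capacity (m per step).
    elevation = [5.0, 4.0, 3.5, 3.8, 4.5]
    drainage  = [0.02, 0.02, 0.01, 0.02, 0.02]
    water     = [0.0] * len(elevation)

    def step(rain_m):
        # 1) rain falls uniformly, 2) soils and sewers remove what they can,
        # 3) remaining surface water relaxes toward the lower neighbor.
        for i in range(len(water)):
            water[i] = max(0.0, water[i] + rain_m - drainage[i])
        surface = [elevation[i] + water[i] for i in range(len(water))]
        moved = [0.0] * len(water)
        for i in range(len(water)):
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < len(water)]
            lowest = min(nbrs, key=lambda j: surface[j])
            if surface[lowest] < surface[i]:
                flow = min(water[i], (surface[i] - surface[lowest]) / 2)
                moved[i] -= flow
                moved[lowest] += flow
        for i in range(len(water)):
            water[i] += moved[i]

    for _ in range(50):
        step(rain_m=0.03)
    print([round(w, 2) for w in water])  # water ponds in the low cell (index 2)
    ```

    Even this crude scheme shows why pluvial risk does not track proximity to a river: water collects wherever topography and limited drainage let it pond, which is the behavior a full model resolves foot by foot and second by second.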

    While Boukin’s current focus is flood simulation, her unconventional academic career has taken her research in many directions, like examining structural bottlenecks in dense urban rail systems and forecasting ground displacement due to tunneling. 

    “I’ve always been interested in the messy side of problem-solving. I think that difficult problems present a real chance to gain a deeper understanding,” says Boukin.

    Boukin credits her upbringing for giving her this perspective. A native of Israel, Boukin says that civil engineering is the family business. “My parents are civil engineers, my mom’s parents are, too, her grandfather was a professor in civil engineering, and so on. Civil engineering is my bloodline.”

    However, the decision to follow the family tradition did not come so easily. “After I took the Israeli equivalent of the SAT, I was at a decision point: Should I go to engineering school or medical school?” she recalls.

    “I decided to go on a backpacking trip to help make up my mind. It’s sort of an Israeli rite to explore internationally, so I spent six months in South America. I think backpacking is something everyone should do.”

    After this soul searching, Boukin landed on engineering school, where she fell in love with structural engineering. “It was the option that felt most familiar and interesting. I grew up playing with AutoCAD on the family computer, and now I use AutoCAD professionally!” she notes.

    “For my master’s degree, I was looking to study in a department that would help me integrate knowledge from fields like climatology and civil engineering. I found the MIT Department of Civil and Environmental Engineering to be an excellent fit,” she says.

    “I am lucky that MIT has so many people that work together as well as they do. I ended up at the Concrete Sustainability Hub, where I’m working on projects which are the perfect fit between what I wanted to do and what the department wanted to do.” 

    Boukin’s move to Cambridge has given her a new perspective on her family and childhood. 

    “My parents brought me to Israel when I was just 1 year old. In moving here as a second-time immigrant, I have a new perspective on what my parents went through during the move to Israel. I moved when I was 27 years old, the same age as they were. They didn’t have a support network and worked any job they could find,” she explains.

    “I am incredibly grateful to them for the morals they instilled in my sister, who recently graduated from medical school, and me. I know I can call my parents if I ever need something, and they will do whatever they can to help.”

    Boukin hopes to honor her parents’ efforts through her research.

    “Not only do I want to help stakeholders understand flood risks, I want to make awareness of flooding more accessible. Each community needs different things to be resilient, and different cultures have different ways of delivering and receiving information,” she says.

    “Everyone should understand that they, in addition to the buildings and infrastructure around them, are part of a complex ecosystem. Any change to a city can affect the rest of it. If designers and residents are aware of this when considering flood mitigation strategies, we can better design cities and understand the consequences of damage.”

    3Q: Why Europe is so vulnerable to heat waves

    This year saw high-temperature records shattered across much of Europe, as crops withered in the fields due to widespread drought. Is this a harbinger of things to come as the Earth’s climate steadily warms up?

    Elfatih Eltahir, MIT professor of civil and environmental engineering and H. M. King Bhumibol Professor of Hydrology and Climate, and former doctoral student Alexandre Tuel PhD ’20 recently published a piece in the Bulletin of the Atomic Scientists describing how their research helps explain this anomalous European weather. The findings are based in part on analyses described in their book “Future Climate of the Mediterranean and Europe,” published earlier this year. MIT News asked the two authors to describe the dynamics behind these extreme weather events.

    Q: Was the European heat wave this summer anticipated based on existing climate models?

    Eltahir: Climate models project increasingly dry summers over Europe. This is especially true for the second half of the 21st century, and for southern Europe. Extreme dryness is often associated with hot conditions and heat waves, since any reduction in evaporation heats the soil and the air above it. In general, models agree in making such projections about European summers. However, understanding the physical mechanisms responsible for these projections is an active area of research.

    The same models that project dry summers over southern Europe also project dry winters over the neighboring Mediterranean Sea. In fact, the Mediterranean Sea stands out as one of the most significantly impacted regions — a literal “hot spot” — for winter droughts triggered by climate change. Again, until recently, the association between the projections of summer dryness over Europe and dry winters over the Mediterranean was not understood.

    In recent MIT doctoral research, carried out in the Department of Civil and Environmental Engineering, a hypothesis was developed to explain why the Mediterranean stands out as a hot spot for winter droughts under climate change. Further, the same theory offers a mechanistic understanding that connects the projections of dry summers over southern Europe and dry winters over the Mediterranean.

    What is exciting about the observed climate over Europe last summer is the fact that the observed drought started and developed with spatial and temporal patterns that are consistent with our proposed theory, and in particular the connection to the dry conditions observed over the Mediterranean during the previous winter.

    Q: What is it about the area around the Mediterranean basin that produces such unusual weather extremes?

    Eltahir: Multiple factors come together to cause extreme heat waves such as the one that Europe has experienced this summer, as well as previously, in 2003, 2015, 2018, 2019, and 2020. Among these, however, mutual influences between atmospheric dynamics and surface conditions, known as land-atmosphere feedbacks, seem to play a very important role.

    In the current climate, southern Europe is located in the transition zone between the dry subtropics (the Sahara Desert in North Africa) and the relatively wet midlatitudes (with a climate similar to that of the Pacific Northwest). High summertime temperatures tend to make the precipitation that falls to the ground evaporate quickly, and as a consequence soil moisture during summer is very dependent on springtime precipitation. A dry spring in Europe (such as the 2022 one) causes dry soils in late spring and early summer. This lack of surface water in turn limits surface evaporation during summer. Two important consequences follow: First, incoming radiative energy from the sun preferentially goes into increasing air temperature rather than evaporating water; and second, the inflow of water into air layers near the surface decreases, which makes the air drier and precipitation less likely. Combined, these two influences increase the likelihood of heat waves and droughts.
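
    The energy partition Eltahir describes can be sketched as a back-of-envelope calculation: the drier the soil, the smaller the share of incoming radiative energy consumed by evaporation (latent heat) and the larger the share that directly heats the air (sensible heat). The linear relationship and the coefficients below are invented for illustration, not taken from the research.

```python
def partition_energy(net_radiation, soil_moisture, moisture_max=1.0):
    """Toy land-atmosphere feedback: split net radiation (W/m^2) into
    latent heat (evaporation) and sensible heat (warming the air),
    with the evaporative fraction scaling linearly with soil moisture.
    Assumed relationship, for illustration only."""
    wetness = max(0.0, min(soil_moisture / moisture_max, 1.0))
    evaporative_fraction = 0.8 * wetness     # assumed cap of 0.8 when saturated
    latent = net_radiation * evaporative_fraction
    sensible = net_radiation - latent        # what is left heats the air
    return latent, sensible
```

Under this sketch, a dry spring that halves soil moisture shifts hundreds of W/m^2 of midsummer radiation from evaporation into heating, which is the feedback that primes a heat wave.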

    Tuel: Through land-atmosphere feedbacks, dry springs provide a favorable environment for persistent warm and dry summers but are of course not enough to directly cause heat waves. A spark is required to ignite the fuel. In Europe and elsewhere, this spark is provided by large-scale atmospheric dynamics. If an anticyclone sets over an area with very dry soils, surface temperature can quickly shoot up as land-atmosphere feedbacks come into play, developing into a heat wave that can persist for weeks.

    The sensitivity to springtime precipitation makes southern Europe and the Mediterranean particularly prone to persistent summer heat waves. This will play an increasingly important role in the future, as spring precipitation is expected to decline, making scorching summers even more likely in this corner of the world. The decline in spring precipitation, which originates as an anomalously dry winter around the Mediterranean, is very robust across climate projections. Southern Europe and the Mediterranean really stand out from most other land areas, where precipitation will on average increase with global warming.

    In our work, we showed that this Mediterranean winter decline was driven by two independent factors: on the one hand, trends in the large-scale circulation, notably stationary atmospheric waves, and on the other hand, reduced warming of the Mediterranean Sea relative to the surrounding continents — a well-known feature of global warming. Both factors lead to increased surface air pressure and reduced precipitation over the Mediterranean and Southern Europe.

    Q: What can we expect over the coming decades in terms of the frequency and severity of these kinds of droughts, floods, and other extremes in European weather?

    Tuel: Climate models have long shown that the frequency and intensity of heat waves was bound to increase as the global climate warms, and Europe is no exception. The reason is simple: As the global temperature rises, the temperature distribution shifts toward higher values, and heat waves become more intense and more frequent. Southern Europe and the Mediterranean, however, will be hit particularly hard. The reason for this is related to the land-atmosphere feedbacks we just discussed. Winter precipitation over the Mediterranean and spring precipitation over southern Europe will decline significantly, which will lead to a decrease in early summer soil moisture over southern Europe and will push average summer temperatures even higher; the region will become a true climate change hot spot. In that sense, 2022 may really be a taste of the future. The succession of recent heat waves in Europe, however, suggests that things may be going faster than climate model projections imply. Decadal variability or badly understood trends in large-scale atmospheric dynamics may play a role here, though that is still debated. Another possibility is that climate models tend to underestimate the magnitude of land-atmosphere feedbacks and downplay the influence of dry soil moisture anomalies on summertime weather.
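
    Tuel's point about the shifting temperature distribution can be made concrete with a toy normal distribution: a modest shift of the mean multiplies the probability of exceeding a fixed heat-wave threshold severalfold. The numbers below are illustrative, not European climatology.

```python
import math

def exceedance_prob(threshold, mean, sd):
    """P(T > threshold) for a normally distributed daily temperature,
    via the complementary error function."""
    z = (threshold - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# illustrative numbers only: a +2 C shift in mean summer temperature,
# with a fixed 30 C heat-wave threshold and 3 C standard deviation
p_today = exceedance_prob(30.0, 22.0, 3.0)
p_warmer = exceedance_prob(30.0, 24.0, 3.0)
```

Because the threshold sits in the tail of the distribution, the small shift in the mean produces a disproportionately large jump in how often it is exceeded.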

    Potential trends in floods are more difficult to assess because floods result from a multiplicity of factors, like extreme precipitation, soil moisture levels, or land cover. Extreme precipitation is generally expected to increase in most regions, but very high uncertainties remain, notably because extreme precipitation is highly dependent on atmospheric dynamics about which models do not always agree. What is almost certain is that with warming, the water content of the atmosphere increases (following a law of thermodynamics known as the Clausius-Clapeyron relationship). Thus, if the dynamics are favorable to precipitation, a lot more of it may fall in a warmer climate. Last year’s floods in Germany, for example, were triggered by unprecedented heavy rainfall which climate change made more likely.
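
    The Clausius-Clapeyron scaling can be checked numerically with a standard empirical fit for saturation vapor pressure (Bolton's 1980 approximation); it yields roughly a 6 to 7 percent rise in the atmosphere's water-holding capacity per degree Celsius of warming.

```python
import math

def saturation_vapor_pressure(temp_c):
    """Saturation vapor pressure in hPa via Bolton's (1980) empirical fit
    to the Clausius-Clapeyron relation; accurate to a fraction of a
    percent at ordinary surface temperatures."""
    return 6.112 * math.exp(17.67 * temp_c / (temp_c + 243.5))

# fractional increase in water-holding capacity per +1 C, near 20 C
increase_per_degree = (
    saturation_vapor_pressure(21.0) / saturation_vapor_pressure(20.0) - 1.0
)
```

This is the thermodynamic reason a warmer atmosphere can deliver substantially heavier downpours whenever the dynamics favor precipitation.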

    From bridges to DNA: civil engineering across disciplines

    How is DNA like a bridge? This question is not a riddle or a logic game; it is a central concern of Johannes Kalliauer’s doctoral thesis.

    As a student at TU Wien in Austria, Kalliauer was faced with a monumental task: combining approaches from civil engineering and theoretical physics to better understand the forces that act on DNA.

    Kalliauer, now a postdoc at the MIT Concrete Sustainability Hub, says he modeled DNA as though it were a beam, using molecular dynamics principles to understand its structural properties.

    “The mechanics of very small objects, like DNA helices, and large ones, like bridges, are quite similar. Each may be understood in terms of Newtonian mechanics. Forces and moments act on each system, subjecting each to deformations like twisting, stretching, and warping,” says Kalliauer.
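
    Kalliauer's point that the same Newtonian relations govern both scales can be illustrated with the textbook linear-elastic formulas for stretching and twisting a prismatic member; the inputs below are arbitrary illustrative values, not numbers from his thesis.

```python
def axial_extension(force, length, youngs_modulus, area):
    """delta = F*L / (E*A): elastic stretch of a prismatic member,
    whether a bridge girder or a DNA helix modeled as a beam."""
    return force * length / (youngs_modulus * area)

def twist_angle(torque, length, shear_modulus, polar_moment):
    """theta = T*L / (G*J): twist of a circular member under torque."""
    return torque * length / (shear_modulus * polar_moment)

# arbitrary illustrative inputs in SI units: a 2 m steel member,
# E = 200 GPa, cross-section 0.01 m^2, under a 1 kN axial load
stretch = axial_extension(1_000.0, 2.0, 200e9, 0.01)
```

The formulas are scale-free: swap in piconewton forces, nanometer lengths, and molecular stiffnesses and the same relations describe a DNA helix, which is what makes the beam analogy useful in the first place.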

    As a 2020 article from TU Wien noted, Kalliauer observed a counterintuitive behavior when examining DNA at an atomic level. Unlike a typical spring, which becomes less coiled as it is stretched, DNA was observed to become more tightly wound as its length was increased.

    In situations like these where conventional logic appears to break down, Kalliauer relies on the intuition he has gained as an engineer.

    “To understand this strange behavior in DNA, I turned to a fundamental approach: I examined what was the same about DNA and macroscopic structures and what was different. Civil engineers use methods and calculations which have been developed over centuries and which are very similar to the ones I employed for my thesis,” Kalliauer explains. 

    As Kalliauer continues, “Structural engineering is an incredibly versatile discipline. If you understand it, you can understand atomistic objects like DNA strands and very large ones like galaxies. As a researcher, I rely on it to help me bring new viewpoints to fields like biology. Other civil engineers can and should do the same.”

    Kalliauer, who grew up in a small town in Austria, has spent his life applying unconventional approaches like this across disciplines. “I grew up in a math family. While none of us were engineers, my parents instilled an appreciation for the discipline in me and my two older sisters.”

    After middle school, Kalliauer attended a technical school for civil engineering, where he discovered a fascination for mechanics. He also worked on a construction site to gain practical experience and see engineering applied in a real-world context.

    Out of sheer interest, Kalliauer studied intensely, working upwards of 100 hours per week at university to better understand his coursework. “I asked teachers and professors many questions, often challenging their ideas. Above everything else, I needed to understand things for myself. Doing well on exams was a secondary concern.”

    In university, he studied topics ranging from car crash testing to concrete hinges to biology. As a new member of the CSHub, he is studying how floods may be modeled with the statistical physics-based model provided by lattice density functional theory.

    In doing this, he builds on the work of past and present CSHub researchers like Elli Vartziotis and Katerina Boukin. 

    “It’s important to me that this research has a real impact in the world. I hope my approach to engineering can help researchers and stakeholders understand how floods propagate in urban contexts, so that we may make cities more resilient,” he says.

    A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by MIT Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering Michael F. Howland.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model which predicts the power production of each turbine in the farm depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow-physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing anything about the physical turbine locations and hardware systems of existing wind farms, they have used the physics-based, data-assisted modeling of the flow within the wind farm and the resulting power production of each turbine, given different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output from the whole farm, not just the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw (vertical axis) angle position to align as closely as possible to the wind. But in the new system, for example, the team has found that by turning one turbine just slightly away from its own maximum output position — perhaps 20 degrees away from its individual peak output angle — the resulting increase in power output from one or more downwind units will more than make up for the slight reduction in output from the first unit. By using a centralized control system that takes all of these interactions into account, the collection of turbines was operated at power output levels that were as much as 32 percent higher under some conditions.
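
    A toy two-turbine model captures the trade-off described above, though with made-up wake coefficients rather than the paper's validated flow model: yawing the upstream turbine costs it some power but steers its wake away from the downstream unit, and a simple sweep over yaw angles finds the farm-level optimum.

```python
import math

def farm_power(yaw_deg, wake_loss=0.4, deflect=0.025):
    """Toy two-turbine farm, normalized units. Upstream power falls
    roughly as cos^3 of its yaw misalignment; the wake deficit felt
    downstream shrinks as the yawed rotor deflects its wake aside.
    All coefficients are invented for illustration and are not from
    the Nature Energy study."""
    yaw = math.radians(yaw_deg)
    p_upstream = math.cos(yaw) ** 3
    deficit = wake_loss * max(0.0, 1.0 - deflect * abs(yaw_deg))
    p_downstream = 1.0 - deficit
    return p_upstream + p_downstream

# greedy control is yaw = 0 (best for the upstream unit alone);
# cooperative control sweeps yaw angles to maximize the whole farm
best_yaw = max(range(0, 41), key=farm_power)
```

Even in this crude sketch the optimum is a nonzero yaw: the upstream turbine sacrifices a few percent so the downstream turbine gains more, which is the essence of the cooperative strategy.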

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million in extra revenue for the wind farm operators per year, he says.
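
    A quick back-of-envelope check, using only the figures quoted in the article, shows the headline numbers are internally consistent:

```python
# figures quoted above: 31 TWh/yr of extra energy, equivalent to
# roughly 3,600 turbines, and $950 million/yr in extra revenue
extra_energy_twh = 31.0
equivalent_turbines = 3_600
extra_revenue_usd = 950e6

# implied annual output per "equivalent" turbine, in GWh
gwh_per_turbine = extra_energy_twh * 1_000 / equivalent_turbines

# implied average value of the electricity, in dollars per MWh
usd_per_mwh = extra_revenue_usd / (extra_energy_twh * 1e6)
```

The implied figures, roughly 8.6 GWh per turbine per year and about $31 per megawatt-hour, are both plausible for utility-scale wind, so the quoted equivalences hang together.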

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, therefore increasing the power density of wind energy, saving on land (or sea) footprints. This power density increase and footprint reduction could help to achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and working to improve the operational instructions they derive from them, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram, formerly at ReNew Power Private Limited and presently at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

    Silk offers an alternative to some microplastics

    Microplastics, tiny particles of plastic that are now found worldwide in the air, water, and soil, are increasingly recognized as a serious pollution threat, and have been found in the bloodstream of animals and people around the world.

    Some of these microplastics are intentionally added to a variety of products, including agricultural chemicals, paints, cosmetics, and detergents — amounting to an estimated 50,000 tons a year in the European Union alone, according to the European Chemicals Agency. The EU has already declared that these added, nonbiodegradable microplastics must be eliminated by 2025, so the search is on for suitable replacements, which do not currently exist.

    Now, a team of scientists at MIT and elsewhere has developed a system based on silk that could provide an inexpensive and easily manufactured substitute. The new process is described in a paper in the journal Small, written by MIT postdoc Muchun Liu, MIT professor of civil and environmental engineering Benedetto Marelli, and five others at the chemical company BASF in Germany and the U.S.

    The microplastics widely used in industrial products generally protect some specific active ingredient (or ingredients) from being degraded by exposure to air or moisture, until the time they are needed. They provide a slow release of the active ingredient for a targeted period of time and minimize adverse effects to its surroundings. For example, vitamins are often delivered in the form of microcapsules packed into a pill or capsule, and pesticides and herbicides are similarly enveloped. But the materials used today for such microencapsulation are plastics that persist in the environment for a long time. Until now, there has been no practical, economical substitute available that would biodegrade naturally.

    Much of the burden of environmental microplastics comes from other sources, such as the degradation over time of larger plastic objects such as bottles and packaging, and from the wear of car tires. Each of these sources may require its own kind of solutions for reducing its spread, Marelli says. The European Chemicals Agency has estimated that the intentionally added microplastics represent approximately 10-15 percent of the total amount in the environment, but this source may be relatively easy to address using this nature-based biodegradable replacement, he says.

    “We cannot solve the whole microplastics problem with one solution that fits them all,” he says. “Ten percent of a big number is still a big number. … We’ll solve climate change and pollution of the world one percent at a time.”

    Unlike the high-quality silk threads used for fine fabrics, the silk protein used in the new alternative material is widely available and less expensive, Liu says. While silkworm cocoons must be painstakingly unwound to produce the fine threads needed for fabric, for this use, non-textile-quality cocoons can be used, and the silk fibers can simply be dissolved using a scalable water-based process. The processing is so simple and tunable that the resulting material can be adapted to work on existing manufacturing equipment, potentially providing a simple “drop in” solution using existing factories.

    Silk is recognized as safe for food or medical use, as it is nontoxic and degrades naturally in the body. In lab tests, the researchers demonstrated that the silk-based coating material could be used in existing, standard spray-based manufacturing equipment to make a standard water-soluble microencapsulated herbicide product, which was then tested in a greenhouse on a corn crop. The test showed it worked even better than an existing commercial product, inflicting less damage to the plants, Liu says.

    While other groups have proposed degradable encapsulation materials that may work at a small laboratory scale, Marelli says, “there is a strong need to achieve encapsulation of high-content actives to open the door to commercial use. The only way to have an impact is where we can not only replace a synthetic polymer with a biodegradable counterpart, but also achieve performance that is the same, if not better.”

    The secret to making the material compatible with existing equipment, Liu explains, is in the tunability of the silk material. By precisely adjusting the polymer chain arrangements of silk materials and adding a surfactant, it is possible to fine-tune the properties of the resulting coatings once they dry out and harden. The material can be hydrophobic (water-repelling) even though it is made and processed in a water solution, or it can be hydrophilic (water-attracting), or anywhere in between, and for a given application it can be made to match the characteristics of the material it is being used to replace.

    In order to arrive at a practical solution, Liu had to develop a way of freezing the forming droplets of encapsulated materials as they were forming, to study the formation process in detail. She did this using a special spray-freezing system, and was able to observe exactly how the encapsulation works in order to control it better. Some of the encapsulated “payload” materials, whether they be pesticides or nutrients or enzymes, are water-soluble and some are not, and they interact in different ways with the coating material.

    “To encapsulate different materials, we have to study how the polymer chains interact and whether they are compatible with different active materials in suspension,” she says. The payload material and the coating material are mixed together in a solution and then sprayed. As droplets form, the payload tends to be embedded in a shell of the coating material, whether that’s the original synthetic plastic or the new silk material.

    The new method can make use of low-grade silk that is unusable for fabrics, and large quantities of which are currently discarded because they have no significant uses, Liu says. It can also use used, discarded silk fabric, diverting that material from being disposed of in landfills.

    Currently, 90 percent of the world’s silk production takes place in China, Marelli says, but that’s largely because China has perfected the production of the high-quality silk threads needed for fabrics. But because this process uses bulk silk and has no need for that level of quality, production could easily be ramped up in other parts of the world to meet local demand if this process becomes widely used, he says.

    “This elegant and clever study describes a sustainable and biodegradable silk-based replacement for microplastic encapsulants, which are a pressing environmental challenge,” says Alon Gorodetsky, an associate professor of chemical and biomolecular engineering at the University of California at Irvine, who was not associated with this research. “The modularity of the described materials and the scalability of the manufacturing processes are key advantages that portend well for translation to real-world applications.”

    This process “represents a potentially highly significant advance in active ingredient delivery for a range of industries, particularly agriculture,” says Jason White, director of the Connecticut Agricultural Experiment Station, who also was not associated with this work. “Given the current and future challenges related to food insecurity, agricultural production, and a changing climate, novel strategies such as this are greatly needed.”

    The research team also included Pierre-Eric Millard, Ophelie Zeyons, Henning Urch, Douglas Findley, and Rupert Konradi from the BASF corporation, in Germany and in the U.S. The work was supported by BASF through the Northeast Research Alliance (NORA).