

    Methane research takes on new urgency at MIT

    One of the most notable climate change provisions in the 2022 Inflation Reduction Act is the first U.S. federal tax on a greenhouse gas (GHG). That the fee targets methane (CH4) rather than carbon dioxide (CO2) emissions is indicative of the urgency the scientific community has placed on reducing this short-lived but powerful gas. Methane persists in the air for about 12 years — compared to more than 1,000 years for CO2 — yet upon release it causes about 120 times more warming. The gas is responsible for at least a quarter of today’s gross warming.
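Those two numbers, a roughly 12-year lifetime and a roughly 120-fold initial punch, are enough to sketch why methane's warming potential depends so strongly on the chosen time horizon (a toy integration only; real warming-potential accounting also models CO2's own slow decay):

```python
from math import exp

def methane_gwp_toy(horizon_years, punch=120.0, lifetime=12.0):
    """Toy warming-potential estimate for methane relative to CO2.

    Integrates methane's initial ~120x forcing advantage (figure from
    the article) as it decays exponentially with a ~12-year lifetime,
    then compares it to CO2 treated as a constant over the horizon.
    """
    # Integral of punch * exp(-t / lifetime) from 0 to the horizon,
    # divided by the horizon (CO2's constant forcing integrates to horizon * 1).
    integrated = punch * lifetime * (1.0 - exp(-horizon_years / lifetime))
    return integrated / horizon_years

# Methane looks far more potent over short horizons than long ones.
print(round(methane_gwp_toy(20)))   # 20-year horizon
print(round(methane_gwp_toy(100)))  # 100-year horizon
```

The steep drop between the two horizons is the arithmetic behind "disproportionate effect on near-term warming."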

    “Methane has a disproportionate effect on near-term warming,” says Desiree Plata, director of the MIT Methane Network. “CH4 does more damage than CO2 no matter how long you run the clock. By removing methane, we could potentially avoid critical climate tipping points.” 

    Because GHG-driven warming can trigger self-reinforcing feedbacks in the climate system, reductions made now will have a far greater impact than the same reductions made later. Cutting methane emissions will slow the thawing of permafrost, which could otherwise release massive amounts of methane, and will curb rising emissions from wetlands.  

    “The goal of MIT Methane Network is to reduce methane emissions by 45 percent by 2030, which would save up to 0.5 degree C of warming by 2100,” says Plata, an associate professor of civil and environmental engineering at MIT and director of the Plata Lab. “When you consider that governments are trying for a 1.5-degree reduction of all GHGs by 2100, this is a big deal.” 

    At normal concentrations, methane, like CO2, poses no direct health risks. But methane contributes to the formation of ozone, which in the lower atmosphere is a key component of air pollution, leading to “higher rates of asthma and increased emergency room visits,” says Plata. 

    Methane-related projects at the Plata Lab include a filter made of zeolite — the same clay-like material used in cat litter — designed to convert methane into CO2 at dairy farms and coal mines. At first glance, the technology would appear to be a bit of a hard sell, since it converts one GHG into another. Yet the zeolite filter’s low carbon and dollar costs, combined with the disproportionate warming impact of methane, make it a potential game-changer.

    The sense of urgency about methane has been amplified by recent studies that show humans are generating far more methane emissions than previously estimated, and that the rates are rising rapidly. Exactly how much methane is in the air is uncertain. Current methods for measuring atmospheric methane, such as ground, drone, and satellite sensors, “are not readily abundant and do not always agree with each other,” says Plata.  

    The Plata Lab is collaborating with Tim Swager in the MIT Department of Chemistry to develop low-cost methane sensors. “We are developing chemiresistive sensors that cost about a dollar that you could place near energy infrastructure to back-calculate where leaks are coming from,” says Plata.  

    The researchers are working on improving the accuracy of the sensors using machine learning techniques and are planning to integrate internet-of-things technology to transmit alerts. Plata and Swager are not alone in focusing on data collection: the Inflation Reduction Act adds significant funding for methane sensor research. 

    Other research at the Plata Lab includes the development of nanomaterials and heterogeneous catalysis techniques for environmental applications. The lab also explores mitigation solutions for industrial waste, particularly those related to the energy transition. Plata is the co-founder of a lithium-ion battery recycling startup called Nth Cycle. 

    On a more fundamental level, the Plata Lab is exploring how to develop products with environmental and social sustainability in mind. “Our overarching mission is to change the way that we invent materials and processes so that environmental objectives are incorporated along with traditional performance and cost metrics,” says Plata. “It is important to do that rigorous assessment early in the design process.”


    MIT amps up methane research 

    The MIT Methane Network brings together 26 researchers from MIT along with representatives of other institutions “that are dedicated to the idea that we can reduce methane levels in our lifetime,” says Plata. The organization supports research such as Plata’s zeolite and sensor projects, as well as designing pipeline-fixing robots, developing methane-based fuels for clean hydrogen, and researching the capture and conversion of methane into liquid chemical precursors for pharmaceuticals and plastics. Other members are researching policies to encourage more sustainable agriculture and land use, as well as methane-related social justice initiatives. 

    “Methane is an especially difficult problem because it comes from all over the place,” says Plata. A recent Global Carbon Project study estimated that half of methane emissions are caused by humans, led by waste and agriculture (28 percent of the global total), including cow and sheep belching, rice paddies, and landfills.  

    Fossil fuels represent 18 percent of the total budget. Of this, about 63 percent is derived from oil and gas production and pipelines, 33 percent from coal mining activities, and 5 percent from industry and transportation. Human-caused biomass burning, primarily from slash-and-burn agriculture, emits about 4 percent of the global total.  

    The other half of the methane budget includes natural methane emissions from wetlands (20 percent) and other natural sources (30 percent). The latter includes permafrost melting and natural biomass burning, such as forest fires started by lightning.  
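As a quick consistency check on the shares quoted in the last three paragraphs, the human-caused and natural halves of the budget can be tallied (numbers taken directly from the article; rounding in the fossil-fuel sub-breakdown is ignored):

```python
# Shares of the global methane budget as quoted above (percent of total).
methane_budget = {
    "waste and agriculture": 28,
    "fossil fuels": 18,
    "human-caused biomass burning": 4,
    "wetlands": 20,
    "other natural sources": 30,
}

human_caused = (methane_budget["waste and agriculture"]
                + methane_budget["fossil fuels"]
                + methane_budget["human-caused biomass burning"])
natural = methane_budget["wetlands"] + methane_budget["other natural sources"]

print(human_caused, natural)  # the two halves of the budget
```

Both halves come out to 50 percent, matching the article's half-and-half split.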

    With increases in global warming and population, the line between anthropogenic and natural causes is getting fuzzier. “Human activities are accelerating natural emissions,” says Plata. “Climate change increases the release of methane from wetlands and permafrost and leads to larger forest and peat fires.”  

    The calculations can get complicated. Wetlands, for example, provide benefits in CO2 capture, biological diversity, and resilience to sea level rise that more than compensate for their methane releases, while draining swamps for development increases emissions. 

    Over 100 nations have signed the U.N.’s Global Methane Pledge to cut anthropogenic methane emissions by at least 30 percent within the next 10 years. The U.N. estimates that this goal can be achieved using proven technologies, and that about 60 percent of the reductions can be accomplished at low cost. 

    Much of the savings would come from greater efficiencies in fossil fuel extraction, processing, and delivery. The methane fees in the Inflation Reduction Act are primarily focused on encouraging fossil fuel companies to accelerate ongoing efforts to cap old wells, flare off excess emissions, and tighten pipeline connections.  

    Fossil fuel companies have already made far greater pledges to reduce methane than they have for CO2, which is more central to their business. This is due in part to the potential savings, and in part to preparation for methane regulations expected from the Environmental Protection Agency in late 2022. The regulations build upon existing EPA oversight of drilling operations and will likely be exempt from the U.S. Supreme Court’s ruling that limits the federal government’s ability to regulate GHGs. 

    Zeolite filter targets methane in dairy and coal 

    The “low-hanging fruit” of gas stream mitigation addresses the roughly 20 percent of total methane emissions that are released at concentrations high enough for flaring. Plata’s zeolite filter aims at the thornier challenge: the other 80 percent of emissions, which are too dilute to burn. 

    Plata found inspiration in decades-old catalysis research for turning methane into methanol. One strategy has been to use an abundant, low-cost aluminosilicate clay called zeolite.  

    “The methanol creation process is challenging because you need to separate a liquid, and it has very low efficiency,” says Plata. “Yet zeolite can be very efficient at converting methane into CO2, and it is much easier because it does not require liquid separation. Converting methane to CO2 sounds like a bad thing, but there is a major anti-warming benefit. And because methane is much more dilute than CO2, the relative CO2 contribution is minuscule.”  
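Plata's point about the minuscule relative contribution can be made concrete with a quick mass balance (the molar masses are standard chemistry; the warming-potential figure is my assumption, roughly the IPCC 100-year value for methane, not a number from the article):

```python
# Back-of-envelope CO2-equivalent accounting for oxidizing methane.
M_CH4 = 16.04    # g/mol, methane
M_CO2 = 44.01    # g/mol, carbon dioxide
GWP100_CH4 = 30  # kg CO2e per kg CH4 over 100 years (assumed, IPCC-like)

def net_co2e_per_kg_ch4_oxidized(gwp=GWP100_CH4):
    """Net change in CO2-equivalent emissions when 1 kg of CH4 is
    fully converted to CO2 (CH4 + 2 O2 -> CO2 + 2 H2O)."""
    co2_produced = M_CO2 / M_CH4  # ~2.74 kg CO2 per kg CH4 oxidized
    return co2_produced - gwp     # negative => net climate benefit

print(net_co2e_per_kg_ch4_oxidized())
```

Each kilogram of methane converted adds under 3 kg of CO2 but removes tens of kilograms of CO2-equivalent warming, which is why trading one GHG for the other pays off.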

    Using zeolite to create methanol requires highly concentrated methane, high temperatures and pressures, and industrial processing conditions. Yet Plata’s process, which dopes the zeolite with copper, operates in the presence of oxygen at much lower temperatures under typical pressures. “We let the methane proceed the way it wants from a thermodynamic perspective from methane to methanol down to CO2,” says Plata. 

    Researchers around the world are working on other dilute methane removal technologies. Projects include spraying iron salt aerosols into sea air where they react with natural chlorine or bromine radicals, thereby capturing methane. Most of these geoengineering solutions, however, are difficult to measure and would require massive scale to make a difference.  

    Plata is focusing her zeolite filters on environments where concentrations are high, but not so high as to be flammable. “We are trying to scale zeolite into filters that you could snap onto the side of a cross-ventilation fan in a dairy barn or in a ventilation air shaft in a coal mine,” says Plata. “For every packet of air we bring in, we take a lot of methane out, so we get more bang for our buck.”  

    The major challenge is creating a filter that can handle high flow rates without getting clogged or falling apart. Dairy barn air handlers can push air at up to 5,000 cubic feet per minute and coal mine handlers can approach 500,000 CFM. 

    Plata is exploring engineering options including fluidized bed reactors with floating catalyst particles. Another filter solution, based in part on catalytic converters, features “higher-order geometric structures where you have a porous material with a long path length where the gas can interact with the catalyst,” says Plata. “This avoids the challenge with fluidized beds of containing catalyst particles in the reactor. Instead, they are fixed within a structured material.”  

    Competing technologies for removing methane from mine shafts “operate at temperatures of 1,000 to 1,200 degrees C, requiring a lot of energy and risking explosion,” says Plata. “Our technology avoids safety concerns by operating at 300 to 400 degrees C. It reduces energy use and provides more tractable deployment costs.” 

    Potentially, energy and dollar costs could be further reduced in coal mines by capturing the heat generated by the conversion process. “In coal mines, you have enrichments above a half-percent methane, but below the 4 percent flammability threshold,” says Plata. “The excess heat from the process could be used to generate electricity using off-the-shelf converters.” 
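Plugging the article's own figures together suggests why that excess heat is worth capturing (a back-of-envelope sketch; the methane density and heating value are standard property values, not from the article):

```python
# Rough methane flow and heat release for a coal mine ventilation shaft,
# using the airflow (500,000 CFM) and concentration (0.5 percent) above.
CFM_TO_M3_PER_S = 0.000471947             # 1 ft^3/min in m^3/s
airflow_m3_s = 500_000 * CFM_TO_M3_PER_S  # ~236 m^3/s of vent air
ch4_fraction = 0.005                      # 0.5 percent methane by volume
ch4_density = 0.657                       # kg/m^3 near room temperature
ch4_lhv = 50e6                            # J/kg, lower heating value

ch4_kg_s = airflow_m3_s * ch4_fraction * ch4_density
thermal_power_w = ch4_kg_s * ch4_lhv
print(ch4_kg_s, thermal_power_w / 1e6)  # kg/s of methane, MW of heat
```

Full oxidation of that stream would release on the order of tens of megawatts of heat, a plausible input for off-the-shelf heat-to-electricity converters.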

    Plata’s dairy barn research is funded by the Gerstner Family Foundation and the coal mining project by the U.S. Department of Energy. “The DOE would like us to spin out the technology for scale-up within three years,” says Plata. “We cannot guarantee we will hit that goal, but we are trying to develop this as quickly as possible. Our society needs to start reducing methane emissions now.”


    Studying floods to better predict their dangers

    “My job is basically flooding Cambridge,” says Katerina “Katya” Boukin, a graduate student in civil and environmental engineering at MIT and the MIT Concrete Sustainability Hub’s resident expert on flood simulations. 

    You can often find her fine-tuning high-resolution flood risk models for the City of Cambridge, Massachusetts, or talking about hurricanes with fellow researcher Ipek Bensu Manav.

    Flooding is one of the world’s gravest natural hazards. Extreme climate events that induce flooding, such as severe storms, winter storms, and tropical cyclones, caused an estimated $128.1 billion in damages in 2021 alone. 

    Climate simulation models suggest that severe storms will become more frequent in the coming years, necessitating a better understanding of which parts of cities are most vulnerable — an understanding that can be improved through modeling.

    A problem with current flood models is that they struggle to account for an oft-misunderstood type of flooding known as pluvial flooding. 

    “You might think of flooding as the overflowing of a body of water, like a river. This is fluvial flooding. This can be somewhat predictable, as you can think of proximity to water as a risk factor,” Boukin explains.

    However, the “flash flooding” that causes many deaths each year can happen even in places nowhere near a body of water. This is an example of pluvial flooding, which is affected by terrain, urban infrastructure, and the dynamic nature of storm loads.

    “If we don’t know how a flood is propagating, we don’t know the risk it poses to the urban environment. And if we don’t understand the risk, we can’t really discuss mitigation strategies,” says Boukin. “That’s why I pursue improving flood propagation models.”

    Boukin is leading development of a new flood prediction method that seeks to address these shortcomings. By better representing the complex morphology of cities, Boukin’s approach may provide a clearer forecast of future urban flooding.

    Katya Boukin developed this model of the City of Cambridge, Massachusetts. The base model was provided through a collaboration between MIT, the City of Cambridge, and Dewberry Engineering.

    Image: Katya Boukin


    “In contrast to a more traditional catchment model, our method has rainwater spread around the urban environment based on the city’s topography, below-the-surface features like sewer pipes, and the characteristics of local soils,” notes Boukin.

    “We can simulate the flooding of regions with local rain forecasts. Our results can show how flooding propagates by the foot and by the second,” she adds.
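The idea of rainfall loading an urban surface and water finding low ground, foot by foot and second by second, can be sketched with a toy model (purely illustrative; the 1-D terrain, rain rate, and relaxation scheme below are inventions of mine, not the CSHub simulator):

```python
# Toy pluvial-flood sketch: rain falls on a terrain profile and water
# repeatedly relaxes toward lower total surface (ground elevation plus
# water depth), so depressions fill first.
def simulate_rain(terrain, rain_per_step, steps, relax_iters=200):
    water = [0.0] * len(terrain)
    for _ in range(steps):
        water = [w + rain_per_step for w in water]  # uniform rainfall
        for _ in range(relax_iters):                # let water flow downhill
            for i in range(len(terrain) - 1):
                a = terrain[i] + water[i]
                b = terrain[i + 1] + water[i + 1]
                move = (a - b) / 2
                # Only water that actually exists in the higher cell can move.
                if move > 0:
                    move = min(move, water[i])
                    water[i] -= move
                    water[i + 1] += move
                else:
                    move = min(-move, water[i + 1])
                    water[i + 1] -= move
                    water[i] += move
    return water

# A depression at index 2 collects runoff from its higher neighbors.
terrain = [3.0, 2.0, 0.0, 2.0, 3.0]
depths = simulate_rain(terrain, rain_per_step=0.1, steps=5)
print([round(d, 2) for d in depths])
```

Even this crude sketch shows the signature of pluvial flooding: the deepest water collects in the low spot, far from any river.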

    While Boukin’s current focus is flood simulation, her unconventional academic career has taken her research in many directions, like examining structural bottlenecks in dense urban rail systems and forecasting ground displacement due to tunneling. 

    “I’ve always been interested in the messy side of problem-solving. I think that difficult problems present a real chance to gain a deeper understanding,” says Boukin.

    Boukin credits her upbringing for giving her this perspective. A native of Israel, Boukin says that civil engineering is the family business. “My parents are civil engineers, my mom’s parents are, too, and her grandfather was a professor of civil engineering. Civil engineering is my bloodline.”

    However, the decision to follow the family tradition did not come so easily. “After I took the Israeli equivalent of the SAT, I was at a decision point: Should I go to engineering school or medical school?” she recalls.

    “I decided to go on a backpacking trip to help make up my mind. It’s sort of an Israeli rite to explore internationally, so I spent six months in South America. I think backpacking is something everyone should do.”

    After this soul searching, Boukin landed on engineering school, where she fell in love with structural engineering. “It was the option that felt most familiar and interesting. I grew up playing with AutoCAD on the family computer, and now I use AutoCAD professionally!” she notes.

    “For my master’s degree, I was looking to study in a department that would help me integrate knowledge from fields like climatology and civil engineering. I found the MIT Department of Civil and Environmental Engineering to be an excellent fit,” she says.

    “I am lucky that MIT has so many people that work together as well as they do. I ended up at the Concrete Sustainability Hub, where I’m working on projects which are the perfect fit between what I wanted to do and what the department wanted to do.” 

    Boukin’s move to Cambridge has given her a new perspective on her family and childhood. 

    “My parents brought me to Israel when I was just 1 year old. In moving here as a second-time immigrant, I have a new perspective on what my parents went through during the move to Israel. I moved when I was 27 years old, the same age as they were. They didn’t have a support network and worked any job they could find,” she explains.

    “I am incredibly grateful to them for the morals they instilled in my sister, who recently graduated from medical school, and me. I know I can call my parents if I ever need something, and they will do whatever they can to help.”

    Boukin hopes to honor her parents’ efforts through her research.

    “Not only do I want to help stakeholders understand flood risks, I want to make awareness of flooding more accessible. Each community needs different things to be resilient, and different cultures have different ways of delivering and receiving information,” she says.

    “Everyone should understand that they, in addition to the buildings and infrastructure around them, are part of a complex ecosystem. Any change to a city can affect the rest of it. If designers and residents are aware of this when considering flood mitigation strategies, we can better design cities and understand the consequences of damage.”


    3Q: Why Europe is so vulnerable to heat waves

    This year saw high-temperature records shattered across much of Europe, as crops withered in the fields due to widespread drought. Is this a harbinger of things to come as the Earth’s climate steadily warms up?

    Elfatih Eltahir, MIT professor of civil and environmental engineering and H. M. King Bhumibol Professor of Hydrology and Climate, and former doctoral student Alexandre Tuel PhD ’20 recently published a piece in the Bulletin of the Atomic Scientists describing how their research helps explain this anomalous European weather. The findings are based in part on analyses described in their book “Future Climate of the Mediterranean and Europe,” published earlier this year. MIT News asked the two authors to describe the dynamics behind these extreme weather events.

    Q: Was the European heat wave this summer anticipated based on existing climate models?

    Eltahir: Climate models project increasingly dry summers over Europe. This is especially true for the second half of the 21st century, and for southern Europe. Extreme dryness is often associated with hot conditions and heat waves, since any reduction in evaporation heats the soil and the air above it. In general, models agree in making such projections about European summers. However, understanding the physical mechanisms responsible for these projections is an active area of research.

    The same models that project dry summers over southern Europe also project dry winters over the neighboring Mediterranean Sea. In fact, the Mediterranean Sea stands out as one of the most significantly impacted regions — a veritable “hot spot” — for winter droughts triggered by climate change. Again, until recently, the association between the projections of summer dryness over Europe and dry winters over the Mediterranean was not understood.

    In recent MIT doctoral research, carried out in the Department of Civil and Environmental Engineering, a hypothesis was developed to explain why the Mediterranean stands out as a hot spot for winter droughts under climate change. Further, the same theory offers a mechanistic understanding that connects the projections of dry summers over southern Europe and dry winters over the Mediterranean.

    What is exciting about the climate observed over Europe last summer is that the drought started and developed with spatial and temporal patterns consistent with our proposed theory, in particular the connection to the dry conditions observed over the Mediterranean during the previous winter. 

    Q: What is it about the area around the Mediterranean basin that produces such unusual weather extremes?

    Eltahir: Multiple factors come together to cause extreme heat waves such as the one that Europe has experienced this summer, as well as previously, in 2003, 2015, 2018, 2019, and 2020. Among these, however, mutual influences between atmospheric dynamics and surface conditions, known as land-atmosphere feedbacks, seem to play a very important role.

    In the current climate, southern Europe is located in the transition zone between the dry subtropics (the Sahara Desert in North Africa) and the relatively wet midlatitudes (with a climate similar to that of the Pacific Northwest). High summertime temperatures tend to make the precipitation that falls to the ground evaporate quickly, and as a consequence soil moisture during summer is very dependent on springtime precipitation. A dry spring in Europe (such as the 2022 one) causes dry soils in late spring and early summer. This lack of surface water in turn limits surface evaporation during summer. Two important consequences follow: First, incoming radiative energy from the sun preferentially goes into increasing air temperature rather than evaporating water; and second, the inflow of water into air layers near the surface decreases, which makes the air drier and precipitation less likely. Combined, these two influences increase the likelihood of heat waves and droughts.

    Tuel: Through land-atmosphere feedbacks, dry springs provide a favorable environment for persistent warm and dry summers but are of course not enough to directly cause heat waves. A spark is required to ignite the fuel. In Europe and elsewhere, this spark is provided by large-scale atmospheric dynamics. If an anticyclone sets over an area with very dry soils, surface temperature can quickly shoot up as land-atmosphere feedbacks come into play, developing into a heat wave that can persist for weeks.

    The sensitivity to springtime precipitation makes southern Europe and the Mediterranean particularly prone to persistent summer heat waves. This will play an increasingly important role in the future, as spring precipitation is expected to decline, making scorching summers even more likely in this corner of the world. The decline in spring precipitation, which originates as an anomalously dry winter around the Mediterranean, is very robust across climate projections. Southern Europe and the Mediterranean really stand out from most other land areas, where precipitation will on average increase with global warming.

    In our work, we showed that this Mediterranean winter decline was driven by two independent factors: on the one hand, trends in the large-scale circulation, notably stationary atmospheric waves, and on the other hand, reduced warming of the Mediterranean Sea relative to the surrounding continents — a well-known feature of global warming. Both factors lead to increased surface air pressure and reduced precipitation over the Mediterranean and Southern Europe.

    Q: What can we expect over the coming decades in terms of the frequency and severity of these kinds of droughts, floods, and other extremes in European weather?

    Tuel: Climate models have long shown that the frequency and intensity of heat waves were bound to increase as the global climate warms, and Europe is no exception. The reason is simple: As the global temperature rises, the temperature distribution shifts toward higher values, and heat waves become more intense and more frequent. Southern Europe and the Mediterranean, however, will be hit particularly hard. The reason for this is related to the land-atmosphere feedbacks we just discussed. Winter precipitation over the Mediterranean and spring precipitation over southern Europe will decline significantly, which will lead to a decrease in early summer soil moisture over southern Europe and will push average summer temperatures even higher; the region will become a true climate change hot spot. In that sense, 2022 may really be a taste of the future. The succession of recent heat waves in Europe, however, suggests that things may be going faster than climate model projections imply. Decadal variability or badly understood trends in large-scale atmospheric dynamics may play a role here, though that is still debated. Another possibility is that climate models tend to underestimate the magnitude of land-atmosphere feedbacks and downplay the influence of dry soil moisture anomalies on summertime weather.

    Potential trends in floods are more difficult to assess because floods result from a multiplicity of factors, like extreme precipitation, soil moisture levels, or land cover. Extreme precipitation is generally expected to increase in most regions, but very high uncertainties remain, notably because extreme precipitation is highly dependent on atmospheric dynamics about which models do not always agree. What is almost certain is that with warming, the water content of the atmosphere increases (following a law of thermodynamics known as the Clausius-Clapeyron relationship). Thus, if the dynamics are favorable to precipitation, a lot more of it may fall in a warmer climate. Last year’s floods in Germany, for example, were triggered by unprecedented heavy rainfall which climate change made more likely.
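The Clausius-Clapeyron scaling Tuel mentions is easy to check numerically with a standard Magnus-type approximation for saturation vapor pressure (the constants below are a widely used empirical fit; the roughly 6 to 7 percent increase per degree is the well-known consequence):

```python
from math import exp

def saturation_vapor_pressure_hpa(temp_c):
    """Magnus-type approximation for the saturation vapor pressure
    of water over a liquid surface, in hPa."""
    return 6.112 * exp(17.67 * temp_c / (temp_c + 243.5))

# Each degree of warming raises the air's moisture-holding
# capacity by roughly 6-7 percent.
ratio = saturation_vapor_pressure_hpa(21) / saturation_vapor_pressure_hpa(20)
print(ratio)
```

That compounding few-percent-per-degree increase is why warmer storms can dump substantially more rain when the dynamics cooperate.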


    From bridges to DNA: civil engineering across disciplines

    How is DNA like a bridge? This question is not a riddle or a logic game; it is a central concern of Johannes Kalliauer’s doctoral thesis.

    As a student at TU Wien in Austria, Kalliauer was faced with a monumental task: combining approaches from civil engineering and theoretical physics to better understand the forces that act on DNA.

    Kalliauer, now a postdoc at the MIT Concrete Sustainability Hub, says he modeled DNA as though it were a beam, using molecular dynamics principles to understand its structural properties.

    “The mechanics of very small objects, like DNA helices, and large ones, like bridges, are quite similar. Each may be understood in terms of Newtonian mechanics. Forces and moments act on each system, subjecting each to deformations like twisting, stretching, and warping,” says Kalliauer.

    As a 2020 article from TU Wien noted, Kalliauer observed a counterintuitive behavior when examining DNA at an atomic level. Unlike a typical spring, which becomes less coiled as it is stretched, DNA was observed to become more tightly wound as its length increased. 

    In situations like these where conventional logic appears to break down, Kalliauer relies on the intuition he has gained as an engineer.

    “To understand this strange behavior in DNA, I turned to a fundamental approach: I examined what was the same about DNA and macroscopic structures and what was different. Civil engineers use methods and calculations which have been developed over centuries and which are very similar to the ones I employed for my thesis,” Kalliauer explains. 

    As Kalliauer continues, “Structural engineering is an incredibly versatile discipline. If you understand it, you can understand atomistic objects like DNA strands and very large ones like galaxies. As a researcher, I rely on it to help me bring new viewpoints to fields like biology. Other civil engineers can and should do the same.”

    Kalliauer, who grew up in a small town in Austria, has spent his life applying unconventional approaches like this across disciplines. “I grew up in a math family. While none of us were engineers, my parents instilled an appreciation for the discipline in me and my two older sisters.”

    After middle school, Kalliauer attended a technical school for civil engineering, where he discovered a fascination for mechanics. He also worked on a construction site to gain practical experience and see engineering applied in a real-world context.

    Kalliauer studied intensely out of personal interest, working upwards of 100 hours per week to better understand his university coursework. “I asked teachers and professors many questions, often challenging their ideas. Above everything else, I needed to understand things for myself. Doing well on exams was a secondary concern.”

    In university, he studied topics ranging from car crash testing to concrete hinges to biology. As a new member of the CSHub, he is studying how floods may be modeled with the statistical physics-based model provided by lattice density functional theory.

    In doing this, he builds on the work of past and present CSHub researchers like Elli Vartziotis and Katerina Boukin. 

    “It’s important to me that this research has a real impact in the world. I hope my approach to engineering can help researchers and stakeholders understand how floods propagate in urban contexts, so that we may make cities more resilient,” he says.


    A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.
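The homes-powered figure can be sanity-checked in a few lines (an order-of-magnitude sketch only; the global wind generation and per-home consumption values are round assumptions of mine, not numbers from the study):

```python
# Order-of-magnitude check on the 1.2 percent claim.
global_wind_twh = 1800  # assumed annual global wind generation, TWh
gain_twh = global_wind_twh * 0.012  # the study's 1.2 percent improvement
kwh_per_home = 10_000   # assumed annual use of one (U.S.-sized) home

homes_powered = gain_twh * 1e9 / kwh_per_home  # TWh -> kWh, then per home
print(round(homes_powered / 1e6, 1), "million homes")
```

With a smaller global-average household consumption, the estimate moves toward the article's figure of about 3 million homes.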

    The research is published today in the journal Nature Energy, in a study led by Michael F. Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering at MIT.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model which predicts the power production of each turbine in the farm depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow-physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing anything about the physical turbine locations and hardware systems of existing wind farms, they have used the physics-based, data-assisted modeling of the flow within the wind farm and the resulting power production of each turbine, given different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output from the whole farm, not just the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle — its orientation about the vertical axis — to align as closely as possible with the wind. But in the new system, the team found that by turning one turbine slightly away from its own maximum output position — perhaps 20 degrees away from its individual peak output angle — the resulting increase in power output from one or more downwind units can more than make up for the slight reduction in output from the first unit. Using a centralized control system that takes all of these interactions into account, the team operated the collection of turbines at power output levels that were as much as 32 percent higher under some conditions.
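    The intuition behind this trade-off can be captured in a toy two-turbine sketch. This is purely illustrative — the team's actual physics-based, data-assisted flow model is far more sophisticated, and every number below (the cosine-cubed power falloff, the wake deficit, the recovery rate) is an assumption chosen for demonstration, not a figure from the study:

    ```python
    import math

    # Toy two-turbine wake-steering illustration (all parameters invented).
    # Yawing the upstream turbine cuts its own power roughly as cos^3 of
    # the misalignment angle, but it also deflects its wake away from the
    # downwind rotor, shrinking the velocity deficit that turbine sees.

    def upstream_power(yaw_deg):
        # Fractional power of the yawed upstream turbine (1.0 = aligned).
        return math.cos(math.radians(yaw_deg)) ** 3

    def downstream_power(yaw_deg, base_deficit=0.4, recovery_per_deg=0.012):
        # Wake deficit decreases as the wake is steered aside; it can
        # never drop below zero.
        deficit = max(0.0, base_deficit - recovery_per_deg * abs(yaw_deg))
        return 1.0 - deficit

    def farm_power(yaw_deg):
        # Total (fractional) output of the two-turbine "farm."
        return upstream_power(yaw_deg) + downstream_power(yaw_deg)

    # Sweep integer yaw angles and pick the farm-level optimum.
    best = max(range(0, 41), key=farm_power)
    print(best, round(farm_power(best), 3), round(farm_power(0), 3))
    ```

    Even in this crude sketch, the farm-level optimum lands at a nonzero yaw angle: the upstream turbine sacrifices a few percent of its own output, and the pair together produces more than when each turbine greedily faces the wind.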

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million per year in extra revenue for wind farm operators, he says.
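    Those fleet-wide figures can be sanity-checked with back-of-envelope arithmetic. The baseline generation, turbine rating, and capacity factor below are our own rough assumptions for illustration, not numbers taken from the study:

    ```python
    # Rough consistency check of the fleet-wide estimates quoted above.
    # Assumptions (ours, not the study's): ~2,600 TWh of annual global
    # wind generation, a 2.8 MW reference turbine at a 35% capacity factor.

    baseline_twh = 2600                      # assumed global wind output, TWh/year
    gain = 0.012                             # the study's 1.2 percent improvement
    extra_twh = baseline_twh * gain          # additional energy, TWh/year

    # Annual output of one assumed reference turbine, in GWh:
    per_turbine_gwh = 2.8 * 8760 * 0.35 / 1000

    # How many such turbines the extra energy is equivalent to:
    equiv_turbines = extra_twh * 1000 / per_turbine_gwh

    # Implied average revenue per MWh if the extra energy is worth $950M:
    price_per_mwh = 950e6 / (extra_twh * 1e6)

    print(round(extra_twh, 1), round(equiv_turbines), round(price_per_mwh, 1))
    ```

    Under these assumptions the arithmetic lands close to the article's figures: roughly 31 TWh of extra energy per year, on the order of 3,600 equivalent turbines, and an implied average price near $30 per megawatt-hour.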

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, thereby increasing the power density of wind energy and reducing its land (or sea) footprint. This increase in power density could help meet pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on- and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and are working to improve the operational instructions derived from them, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram formerly at ReNew Power Private Limited in Haryana, India and presently at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy. More

  • in

    Silk offers an alternative to some microplastics

    Microplastics, tiny particles of plastic that are now found worldwide in the air, water, and soil, are increasingly recognized as a serious pollution threat, and have been found in the bloodstream of animals and people around the world.

    Some of these microplastics are intentionally added to a variety of products, including agricultural chemicals, paints, cosmetics, and detergents — amounting to an estimated 50,000 tons a year in the European Union alone, according to the European Chemicals Agency. The EU has already declared that these added, nonbiodegradable microplastics must be eliminated by 2025, so the search is on for suitable replacements, which do not currently exist.

    Now, a team of scientists at MIT and elsewhere has developed a system based on silk that could provide an inexpensive and easily manufactured substitute. The new process is described in a paper in the journal Small, written by MIT postdoc Muchun Liu, MIT professor of civil and environmental engineering Benedetto Marelli, and five others at the chemical company BASF in Germany and the U.S.

    The microplastics widely used in industrial products generally protect some specific active ingredient (or ingredients) from being degraded by exposure to air or moisture, until the time they are needed. They provide a slow release of the active ingredient for a targeted period of time and minimize adverse effects to its surroundings. For example, vitamins are often delivered in the form of microcapsules packed into a pill or capsule, and pesticides and herbicides are similarly enveloped. But the materials used today for such microencapsulation are plastics that persist in the environment for a long time. Until now, there has been no practical, economical substitute available that would biodegrade naturally.

    Much of the burden of environmental microplastics comes from other sources, such as the degradation over time of larger plastic objects like bottles and packaging, and from the wear of car tires. Each of these sources may require its own kind of solution for reducing its spread, Marelli says. The European Chemicals Agency has estimated that intentionally added microplastics represent approximately 10 to 15 percent of the total amount in the environment, but this source may be relatively easy to address using this nature-based biodegradable replacement, he says.

    “We cannot solve the whole microplastics problem with one solution that fits them all,” he says. “Ten percent of a big number is still a big number. … We’ll solve climate change and pollution of the world one percent at a time.”

    Unlike the high-quality silk threads used for fine fabrics, the silk protein used in the new alternative material is widely available and less expensive, Liu says. While silkworm cocoons must be painstakingly unwound to produce the fine threads needed for fabric, for this use, non-textile-quality cocoons can be used, and the silk fibers can simply be dissolved using a scalable water-based process. The processing is so simple and tunable that the resulting material can be adapted to work on existing manufacturing equipment, potentially providing a simple “drop in” solution using existing factories.

    Silk is recognized as safe for food or medical use, as it is nontoxic and degrades naturally in the body. In lab tests, the researchers demonstrated that the silk-based coating material could be used in existing, standard spray-based manufacturing equipment to make a standard water-soluble microencapsulated herbicide product, which was then tested in a greenhouse on a corn crop. The test showed it worked even better than an existing commercial product, inflicting less damage to the plants, Liu says.

    While other groups have proposed degradable encapsulation materials that may work at a small laboratory scale, Marelli says, “there is a strong need to achieve encapsulation of high-content actives to open the door to commercial use. The only way to have an impact is where we can not only replace a synthetic polymer with a biodegradable counterpart, but also achieve performance that is the same, if not better.”

    The secret to making the material compatible with existing equipment, Liu explains, is the tunability of the silk material. By precisely adjusting the arrangement of the silk's polymer chains and adding a surfactant, it is possible to fine-tune the properties of the resulting coatings once they dry and harden. The material can be hydrophobic (water-repelling) even though it is made and processed in a water solution, or it can be hydrophilic (water-attracting), or anywhere in between, and for a given application it can be made to match the characteristics of the material it is replacing.

    In order to arrive at a practical solution, Liu had to develop a way of freezing droplets of the encapsulated materials as they formed, so that she could study the formation process in detail. She did this using a special spray-freezing system, and was able to observe exactly how the encapsulation works in order to control it better. Some of the encapsulated “payload” materials, whether pesticides, nutrients, or enzymes, are water-soluble and some are not, and they interact in different ways with the coating material.

    “To encapsulate different materials, we have to study how the polymer chains interact and whether they are compatible with different active materials in suspension,” she says. The payload material and the coating material are mixed together in a solution and then sprayed. As droplets form, the payload tends to be embedded in a shell of the coating material, whether that’s the original synthetic plastic or the new silk material.

    The new method can make use of low-grade silk that is unusable for fabrics, large quantities of which are currently discarded because they have no significant uses, Liu says. It can also make use of discarded silk fabric, diverting that material from landfills.

    Currently, 90 percent of the world’s silk production takes place in China, Marelli says, but that’s largely because China has perfected the production of the high-quality silk threads needed for fabrics. But because this process uses bulk silk and has no need for that level of quality, production could easily be ramped up in other parts of the world to meet local demand if this process becomes widely used, he says.

    “This elegant and clever study describes a sustainable and biodegradable silk-based replacement for microplastic encapsulants, which are a pressing environmental challenge,” says Alon Gorodetsky, an associate professor of chemical and biomolecular engineering at the University of California at Irvine, who was not associated with this research. “The modularity of the described materials and the scalability of the manufacturing processes are key advantages that portend well for translation to real-world applications.”

    This process “represents a potentially highly significant advance in active ingredient delivery for a range of industries, particularly agriculture,” says Jason White, director of the Connecticut Agricultural Experiment Station, who also was not associated with this work. “Given the current and future challenges related to food insecurity, agricultural production, and a changing climate, novel strategies such as this are greatly needed.”

    The research team also included Pierre-Eric Millard, Ophelie Zeyons, Henning Urch, Douglas Findley and Rupert Konradi from the BASF corporation, in Germany and in the U.S. The work was supported by BASF through the Northeast Research Alliance (NORA). More

  • in

    Four researchers with MIT ties earn Schmidt Science Fellowships

    Four researchers with MIT ties — Juncal Arbelaiz, Xiangkun (Elvis) Cao, Sandya Subramanian, and Hannah Zlotnick ’17 — have been honored with competitive Schmidt Science Fellowships.

    Created in 2017, the fellows program aims to bring together the world’s brightest minds “to solve society’s toughest challenges.”

    The four MIT-affiliated researchers are among 29 Schmidt Science Fellows from around the world who will receive postdoctoral support for either one or two years with an annual stipend of $100,000, along with individualized mentoring and participation in the program’s Global Meeting Series. The fellows will also have opportunities to engage with thought-leaders from science, business, policy, and society. According to the award announcement, the fellows are expected to pursue research that shifts from the focus of their PhDs, to help expand and enhance their futures as scientific leaders.

    Juncal Arbelaiz is a PhD candidate in applied mathematics at MIT, who is completing her doctorate this summer. Her doctoral research at MIT is advised by Ali Jadbabaie, the JR East Professor of Engineering and head of the Department of Civil and Environmental Engineering; Anette Hosoi, the Neil and Jane Pappalardo Professor of Mechanical Engineering and associate dean of the School of Engineering; and Bassam Bamieh, professor of mechanical engineering and associate director of the Center for Control, Dynamical Systems, and Computation at the University of California at Santa Barbara. Arbelaiz’s research revolves around the design of optimal decentralized intelligence for spatially-distributed dynamical systems.

    “I cannot think of a better way to start my independent scientific career. I feel very excited and grateful for this opportunity,” says Arbelaiz. With her fellowship, she will enlist systems biology to explore how the nervous system encodes and processes sensory information to address future safety-critical artificial intelligence applications. “The Schmidt Science Fellowship will provide me with a unique opportunity to work at the intersection of biological and machine intelligence for two years and will be a steppingstone towards my longer-term objective of becoming a researcher in bio-inspired machine intelligence,” she says.

    Xiangkun (Elvis) Cao is currently a postdoc in the lab of T. Alan Hatton, the Ralph Landau Professor in Chemical Engineering, and an Impact Fellow at the MIT Climate and Sustainability Consortium. Cao received his PhD in mechanical engineering from Cornell University in 2021, during which he focused on microscopic precision in the simultaneous delivery of light and fluids by optofluidics, with advances relevant to health and sustainability applications. As a Schmidt Science Fellow, he plans to be co-advised by Hatton on carbon capture, and Ted Sargent, professor of chemistry at Northwestern University, on carbon utilization. Cao is passionate about integrated carbon capture and utilization (CCU) from molecular to process levels, machine learning to inspire smart CCU, and the nexus of technology, business, and policy for CCU.

    “The Schmidt Science Fellowship provides the perfect opportunity for me to work across disciplines to study integrated carbon capture and utilization from molecular to process levels,” Cao explains. “My vision is that by integrating carbon capture and utilization, we can concurrently make scientific discoveries and unlock economic opportunities while mitigating global climate change. This way, we can turn our carbon liability into an asset.”

    Sandya Subramanian, a 2021 PhD graduate of the Harvard-MIT Program in Health Sciences and Technology (HST) in the area of medical engineering and medical physics, is currently a postdoc at Stanford Data Science. She is focused on the topics of biomedical engineering, statistics, machine learning, neuroscience, and health care. Her research is on developing new technologies and methods to study the interactions between the brain, the autonomic nervous system, and the gut. “I’m extremely honored to receive the Schmidt Science Fellowship and to join the Schmidt community of leaders and scholars,” says Subramanian. “I’ve heard so much about the fellowship and the fact that it can open doors and give people confidence to pursue challenging or unique paths.”

    According to Subramanian, the autonomic nervous system and its interactions with other body systems are poorly understood but thought to be involved in several disorders, such as functional gastrointestinal disorders, Parkinson’s disease, diabetes, migraines, and eating disorders. The goal of her research is to improve our ability to monitor and quantify these physiologic processes. “I’m really interested in understanding how we can use physiological monitoring technologies to inform clinical decision-making, especially around the autonomic nervous system, and I look forward to continuing the work that I’ve recently started at Stanford as Schmidt Science Fellow,” she says. “A huge thank you to all of the mentors, colleagues, friends, and leaders I had the pleasure of meeting and working with at HST and MIT; I couldn’t have done this without everything I learned there.”

    Hannah Zlotnick ’17 attended MIT for her undergraduate studies, majoring in biological engineering with a minor in mechanical engineering. At MIT, Zlotnick was a student-athlete on the women’s varsity soccer team, a UROP student in Alan Grodzinsky’s laboratory, and a member of Pi Beta Phi. For her PhD, Zlotnick attended the University of Pennsylvania, and worked in Robert Mauck’s laboratory within the departments of Bioengineering and Orthopaedic Surgery.

    Zlotnick’s PhD research focused on harnessing remote forces, such as magnetism or gravity, to enhance engineered cartilage and osteochondral repair both in vitro and in large animal models. Zlotnick now plans to pivot to the field of biofabrication to create tissue models of the knee joint to assess potential therapeutics for osteoarthritis. “I am humbled to be a part of the Schmidt Science Fellows community, and excited to venture into the field of biofabrication,” Zlotnick says. “Hopefully this work uncovers new therapies for patients with inflammatory joint diseases.” More

  • in

    Better living through multicellular life cycles

    Cooperation is a core part of life for many organisms, ranging from microbes to complex multicellular life. It emerges when individuals share resources or partition a task in such a way that each derives a greater benefit when acting together than they could on their own. For example, birds and fish flock to evade predators, slime mold swarms to hunt for food and reproduce, and bacteria form biofilms to resist stress.

    Individuals must live in the same “neighborhood” to cooperate. For bacteria, this neighborhood can be as small as tens of microns. But in environments like the ocean, it’s rare for cells with the same genetic makeup to co-occur in the same neighborhood on their own. And this necessity poses a puzzle to scientists: In environments where survival hinges on cooperation, how do bacteria build their neighborhood?

    To study this problem, MIT professor Otto X. Cordero and colleagues took inspiration from nature: They developed a model system around a common coastal seawater bacterium that requires cooperation to eat sugars from brown algae. In the system, single cells were initially suspended in seawater too far away from other cells to cooperate. To share resources and grow, the cells had to find a mechanism of creating a neighborhood. “Surprisingly, each cell was able to divide and create its own neighborhood of clones by forming tightly packed clusters,” says Cordero, associate professor in the Department of Civil and Environmental Engineering.

    A new paper, published today in Current Biology, demonstrates how an algae-eating bacterium solves the engineering challenge of creating local cell density starting from a single-celled state.

    “A key discovery was the importance of phenotypic heterogeneity in supporting this surprising mechanism of clonal cooperation,” says Cordero, lead author of the new paper.

    Using a combination of microscopy, transcriptomics, and labeling experiments to profile a cellular metabolic state, the researchers found that cells phenotypically differentiate into a sticky “shell” population and a motile, carbon-storing “core.” The researchers propose that shell cells create the cellular neighborhood needed to sustain cooperation while core cells accumulate stores of carbon that support further clonal reproduction when the multicellular structure ruptures.

    This work addresses a key piece in the bigger challenge of understanding the bacterial processes that shape our earth, such as the cycling of carbon from dead organic matter back into food webs and the atmosphere. “Bacteria are fundamentally single cells, but often what they accomplish in nature is done through cooperation. We have much to uncover about what bacteria can accomplish together and how that differs from their capacity as individuals,” adds Cordero.

    Co-authors include Julia Schwartzman and Ali Ebrahimi, former postdocs in the Cordero Lab. Other co-authors are Gray Chadwick, a former graduate student at Caltech; Yuya Sato, a senior researcher at Japan’s National Institute of Advanced Industrial Science and Technology; Benjamin Roller, a current postdoc at the University of Vienna; and Victoria Orphan of Caltech.

    Funding was provided by the Simons Foundation. Individual authors received support from the Swiss National Science Foundation, Japan Society for the Promotion of Science, the U.S. National Science Foundation, the Kavli Institute of Theoretical Physics, and the National Institutes of Health. More