More stories


    Making agriculture more resilient to climate change

    As Earth’s temperature rises, agricultural practices will need to adapt. Droughts will likely become more frequent, and some land may no longer be arable. On top of that is the challenge of feeding an ever-growing population without expanding the production of fertilizer and other agrochemicals, which have a large carbon footprint that is contributing to the overall warming of the planet.

    Researchers across MIT are taking on these agricultural challenges from a variety of angles, from engineering plants that sound an alarm when they’re under stress to making seeds more resilient to drought. These types of technologies, and more yet to be devised, will be essential to feed the world’s population as the climate changes.

    “After water, the first thing we need is food. In terms of priority, there is water, food, and then everything else. As we are trying to find new strategies to support a world of 10 billion people, it will require us to invent new ways of making food,” says Benedetto Marelli, an associate professor of civil and environmental engineering at MIT.

    Marelli is the director of one of the six missions of the recently launched Climate Project at MIT, which focus on research areas such as decarbonizing industry and building resilient cities. Marelli directs the Wild Cards mission, which aims to identify unconventional solutions that are high-risk and high-reward.

    Drawing on expertise from a breadth of fields, MIT is well-positioned to tackle the challenges posed by climate change, Marelli says. “Bringing together our strengths across disciplines, including engineering, processing at scale, biological engineering, and infrastructure engineering, along with humanities, science, and economics, presents a great opportunity.”

    Protecting seeds from drought

    Marelli, who began his career as a biomedical engineer working on regenerative medicine, is now developing ways to boost crop yields by helping seeds to survive and germinate during drought conditions, or in soil that has been depleted of nutrients. To achieve that, he has devised seed coatings, based on silk and other polymers, that can envelop and nourish seeds during the critical germination process.

    A new seed-coating process could facilitate agriculture on marginal arid lands by enabling the seeds to retain any available water.

    In healthy soil, plants have access to nitrogen, phosphates, and other nutrients that they need, many of which are supplied by microbes that live in the soil. However, in soil that has suffered from drought or overfarming, these nutrients are lacking. Marelli’s idea was to coat the seeds with a polymer that can be embedded with plant-growth-promoting bacteria that “fix” nitrogen by absorbing it from the air and making it available to plants. The microbes can also make other necessary nutrients available to plants.

    For the first generation of the seed coatings, he embedded these microbes in coatings made of silk — a material that he had previously shown can extend the shelf life of produce, meat, and other foods. In his lab at MIT, Marelli has shown that the seed coatings can help germinating plants survive drought, ultraviolet light exposure, and high salinity.

    Now, working with researchers at the Mohammed VI Polytechnic University in Morocco, he is adapting the approach to crops native to Morocco, a country that has experienced six consecutive years of drought due to a drop in rainfall linked to climate change.

    For these studies, the researchers are using a biopolymer coating derived from food waste that can be easily obtained in Morocco, instead of silk.

    “We’re working with local communities to extract the biopolymers, to try to have a process that works at scale so that we make materials that work in that specific environment,” Marelli says. “We may come up with an idea here at MIT within a high-resource environment, but then to work there, we need to talk with the local communities, with local stakeholders, and use their own ingenuity and try to match our solution with something that could actually be applied in the local environment.”

    Microbes as fertilizers

    Whether they are experiencing drought or not, crops grow much better when synthetic fertilizers are applied. Although it’s essential to most farms, applying fertilizer is expensive and has environmental consequences. Most of the world’s fertilizer is produced using the Haber-Bosch process, which converts nitrogen and hydrogen to ammonia at high temperatures and pressures. This energy-intensive process accounts for about 1.5 percent of the world’s greenhouse gas emissions, and the transportation required to deliver it to farms around the world adds even more emissions.

    Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering at MIT, is developing a microbial alternative to the Haber-Bosch process. Some farms have experimented with applying nitrogen-fixing bacteria directly to the roots of their crops, which has shown some success. However, the microbes are too delicate to be stored long-term or shipped anywhere, so they must be produced in a bioreactor on the farm.

    MIT chemical engineers devised a metal-organic coating that protects bacterial cells from damage without impeding their growth or function.

    To overcome those challenges, Furst has developed a way to coat the microbes with a protective shell that prevents them from being destroyed by heat or other stresses. The coating also protects microbes from damage caused by freeze-drying — a process that would make them easier to transport.

    The coatings can vary in composition, but they all consist of two components. One is a metal such as iron, manganese, or zinc, and the other is a polyphenol — a type of plant-derived organic compound that includes tannins and other antioxidants. These two components self-assemble into a protective shell that encapsulates bacteria.


    Mighty Microbes: The Power of Protective Polymers (Video: Chemistry Shorts)

    “These microbes would be delivered with the seeds, so it would remove the need for fertilizing mid-growing. It also reduces the cost and provides more autonomy to the farmers and decreases carbon emissions associated with agriculture,” Furst says. “We think it’ll be a way to make agriculture completely regenerative, so to bring back soil health while also boosting crop yields and the nutrient density of the crops.”

    Furst has founded a company called Seia Bio, which is working on commercializing the coated microbes and has begun testing them on farms in Brazil. In her lab, Furst is also working on adapting the approach to coat microbes that can capture carbon dioxide from the atmosphere and turn it into limestone, which helps to raise the soil pH.

    “It can help change the pH of soil to stabilize it, while also being a way to effectively perform direct air capture of CO2,” she says. “Right now, farmers may truck in limestone to change the pH of soil, and so you’re creating a lot of emissions to bring something in that microbes can do on their own.”

    Distress sensors for plants

    Several years ago, Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, began to explore the idea of using plants themselves as sensors that could reveal when they’re in distress. When plants experience drought, attack by pests, or other kinds of stress, they produce hormones and other signaling molecules to defend themselves.

    Strano, whose lab specializes in developing tiny sensors for a variety of molecules, wondered if such sensors could be deployed inside plants to pick up those distress signals. To create their sensors, Strano’s lab takes advantage of the special properties of single-walled carbon nanotubes, which emit fluorescent light. By wrapping the tubes with different types of polymers, the sensors can be tuned to detect specific targets, giving off a fluorescent signal when the target is present.

    For use in plants, Strano and his colleagues created sensors that could detect signaling molecules such as salicylic acid and hydrogen peroxide. They then showed that these sensors could be inserted into the underside of plant leaves, without harming the plants. Once embedded in the mesophyll of the leaves, the sensors can pick up a variety of signals, which can be read with an infrared camera.

    Sensors that detect plant signaling molecules can reveal when crops are experiencing too much light or heat, or attack from insects or microbes.

    These sensors can reveal, in real-time, whether a plant is experiencing a variety of stresses. Until now, there hasn’t been a way to get that information fast enough for farmers to act on it.

    “What we’re trying to do is make tools that get information into the hands of farmers very quickly, fast enough for them to make adaptive decisions that can increase yield,” Strano says. “We’re in the middle of a revolution of really understanding the way in which plants internally communicate and communicate with other plants.”

    This kind of sensing could be deployed in fields, where it could help farmers respond more quickly to drought and other stresses, or in greenhouses, vertical farms, and other types of indoor farms that use technology to grow crops in a controlled environment.

    Much of Strano’s work in this area has been conducted with the support of the U.S. Department of Agriculture (USDA) and as part of the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) program at the Singapore-MIT Alliance for Research and Technology (SMART), and sensors have been deployed in tests in crops at a controlled environment farm in Singapore called Growy.

    “The same basic kinds of tools can help detect problems in open field agriculture or in controlled environment agriculture,” Strano says. “They both suffer from the same problem, which is that the farmers get information too late to prevent yield loss.”

    Reducing pesticide use

    Pesticides represent another huge financial expense for farmers: Worldwide, farmers spend about $60 billion per year on pesticides. Much of this pesticide ends up accumulating in water and soil, where it can harm many species, including humans. But, without using pesticides, farmers may lose more than half of their crops.

    Kripa Varanasi, an MIT professor of mechanical engineering, is working on tools that can help farmers measure how much pesticide is reaching their plants, as well as technologies that can help pesticides adhere to plants more efficiently, reducing the amount that runs off into soil and water.

    Varanasi, whose research focuses on interactions between liquid droplets and surfaces, began to think about applying his work to agriculture more than a decade ago, after attending a conference at the USDA. There, he was inspired to begin developing ways to improve the efficiency of pesticide application by optimizing the interactions that occur at leaf surfaces.

    “Billions of drops of pesticide are being sprayed on every acre of crop, and only a small fraction is ultimately reaching and staying on target. This seemed to me like a problem that we could help to solve,” he says.

    Varanasi and his students began exploring strategies to make drops of pesticide stick to leaves better, instead of bouncing off. They found that if they added polymers with positive and negative charges, the oppositely charged droplets would form a hydrophilic (water-attracting) coating on the leaf surface, which helps the next droplets applied to stick to the leaf.

    AgZen has developed a system for farming that can monitor exactly how much of the sprayed chemicals adheres to plants, in real time, as the sprayer drives through a field.

    Later, they developed an easier-to-use technology in which a surfactant is added to the pesticide before spraying. When this mixture is sprayed through a special nozzle, it forms tiny droplets that are “cloaked” in surfactant. The surfactant helps the droplets to stick to the leaves within a few milliseconds, without bouncing off.

    In 2020, Varanasi and Vishnu Jayaprakash SM ’19, PhD ’22 founded a company called AgZen to commercialize their technologies and get them into the hands of farmers. They incorporated their ideas for improving pesticide adhesion into a product called EnhanceCoverage.

    During the testing for this product, they realized that there weren’t any good ways to measure how many of the droplets were staying on the plant. That led them to develop a product known as RealCoverage, which is based on machine vision. It can be attached to any pesticide sprayer and offer real-time feedback on what percentage of the pesticide droplets are sticking to and staying on every leaf.

    RealCoverage was used on 65,000 acres of farmland across the United States in 2024, from soybeans in Iowa to cotton in Georgia. Farmers who used the product were able to reduce their pesticide use by 30 to 50 percent, by using the data to optimize delivery and, in some cases, even change what chemicals were sprayed.

    He hopes that the EnhanceCoverage product, which is expected to become available in 2025, will help farmers further reduce their pesticide use.

    “Our mission here is to help farmers with savings while helping them achieve better yields. We have found a way to do all this while also reducing waste and the amount of chemicals that we put into our atmosphere and into our soils and into our water,” Varanasi says. “This is the MIT approach: to figure out what are the real issues and how to come up with solutions. Now we have a tool and I hope that it’s deployed everywhere and everyone gets the benefit from it.”


    Study: Fusion energy could play a major role in the global response to climate change

    For many decades, fusion has been touted as the ultimate source of abundant, clean electricity. Now, as the world faces the need to reduce carbon emissions to prevent catastrophic climate change, making commercial fusion power a reality takes on new importance. In a power system dominated by low-carbon variable renewable energy sources (VREs) such as solar and wind, “firm” electricity sources are needed to kick in whenever demand exceeds supply — for example, when the sun isn’t shining or the wind isn’t blowing and energy storage systems aren’t up to the task. What is the potential role and value of fusion power plants (FPPs) in such a future electric power system — a system that is not only free of carbon emissions but also capable of meeting the dramatically increased global electricity demand expected in the coming decades?

    Working together for a year and a half, investigators in the MIT Energy Initiative (MITEI) and the MIT Plasma Science and Fusion Center (PSFC) have been collaborating to answer that question. They found that — depending on its future cost and performance — fusion has the potential to be critically important to decarbonization. Under some conditions, the availability of FPPs could reduce the global cost of decarbonizing by trillions of dollars. More than 25 experts together examined the factors that will impact the deployment of FPPs, including costs, climate policy, operating characteristics, and other factors. They present their findings in a new report funded through MITEI and entitled “The Role of Fusion Energy in a Decarbonized Electricity System.”

    “Right now, there is great interest in fusion energy in many quarters — from the private sector to government to the general public,” says the study’s principal investigator (PI) Robert C. Armstrong, MITEI’s former director and the Chevron Professor of Chemical Engineering, Emeritus. “In undertaking this study, our goal was to provide a balanced, fact-based, analysis-driven guide to help us all understand the prospects for fusion going forward.” Accordingly, the study takes a multidisciplinary approach that combines economic modeling, electric grid modeling, techno-economic analysis, and more to examine important factors that are likely to shape the future deployment and utilization of fusion energy. The investigators from MITEI provided the energy systems modeling capability, while the PSFC participants provided the fusion expertise.

    Fusion technologies may be a decade away from commercial deployment, so the detailed technology and costs of future commercial FPPs are not known at this point. As a result, the MIT research team focused on determining what cost levels fusion plants must reach by 2050 to achieve strong market penetration and make a significant contribution to the decarbonization of global electricity supply in the latter half of the century.

    The value of having FPPs available on an electric grid will depend on what other options are available, so to perform their analyses, the researchers needed estimates of the future cost and performance of those options, including conventional fossil fuel generators, nuclear fission power plants, VRE generators, and energy storage technologies, as well as electricity demand for specific regions of the world. To find the most reliable data, they searched the published literature as well as results of previous MITEI and PSFC analyses.

    Overall, the analyses showed that — while the technology demands of harnessing fusion energy are formidable — so are the potential economic and environmental payoffs of adding this firm, low-carbon technology to the world’s portfolio of energy options.

    Perhaps the most remarkable finding is the “societal value” of having commercial FPPs available. “Limiting warming to 1.5 degrees C requires that the world invest in wind, solar, storage, grid infrastructure, and everything else needed to decarbonize the electric power system,” explains Randall Field, executive director of the fusion study and MITEI’s director of research. “The cost of that task can be far lower when FPPs are available as a source of clean, firm electricity.” And the benefit varies depending on the cost of the FPPs. For example, assuming that the cost of building an FPP is $8,000 per kilowatt (kW) in 2050 and falls to $4,300/kW in 2100, the global cost of decarbonizing electric power drops by $3.6 trillion. If the cost of an FPP is $5,600/kW in 2050 and falls to $3,000/kW in 2100, the savings from having the fusion plants available would be $8.7 trillion. (Those calculations are based on differences in global gross domestic product and assume a discount rate of 6 percent. The undiscounted value is about 20 times larger.)

    The goal of other analyses was to determine the scale of deployment worldwide at selected FPP costs. Again, the results are striking. For a deep decarbonization scenario, the total global share of electricity generation from fusion in 2100 ranges from less than 10 percent if the cost of fusion is high to more than 50 percent if the cost of fusion is low.

    Other analyses showed that the scale and timing of fusion deployment vary in different parts of the world. Early deployment of fusion can be expected in wealthy nations such as European countries and the United States that have the most aggressive decarbonization policies. But certain other locations — for example, India and the continent of Africa — will have great growth in fusion deployment in the second half of the century due to a large increase in demand for electricity during that time. “In the U.S. and Europe, the amount of demand growth will be low, so it’ll be a matter of switching away from dirty fuels to fusion,” explains Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy and a senior research scientist at MITEI. “But in India and Africa, for example, the tremendous growth in overall electricity demand will be met with significant amounts of fusion along with other low-carbon generation resources in the later part of the century.”

    A set of analyses focusing on nine subregions of the United States showed that the availability and cost of other low-carbon technologies, as well as how tightly carbon emissions are constrained, have a major impact on how FPPs would be deployed and used. In a decarbonized world, FPPs will have the highest penetration in locations with poor diversity, capacity, and quality of renewable resources, and limits on carbon emissions will have a big impact. For example, the Atlantic and Southeast subregions have low renewable resources. In those subregions, wind can produce only a small fraction of the electricity needed, even with maximum onshore wind buildout. Thus, fusion is needed in those subregions, even when carbon constraints are relatively lenient, and any available FPPs would be running much of the time. In contrast, the Central subregion of the United States has excellent renewable resources, especially wind. Thus, fusion competes in the Central subregion only when limits on carbon emissions are very strict, and FPPs will typically be operated only when the renewables can’t meet demand.

    An analysis of the power system that serves the New England states provided remarkably detailed results. Using a modeling tool developed at MITEI, the fusion team explored the impact of using different assumptions about not just cost and emissions limits but even such details as potential land-use constraints affecting the use of specific VREs. This approach enabled them to calculate the FPP cost at which fusion units begin to be installed. They were also able to investigate how that “threshold” cost changed with changes in the cap on carbon emissions. The method can even show at what price FPPs begin to replace other specific generating sources. In one set of runs, they determined the cost at which FPPs would begin to displace floating platform offshore wind and rooftop solar.

    “This study is an important contribution to fusion commercialization because it provides economic targets for the use of fusion in the electricity markets,” notes Dennis G. Whyte, co-PI of the fusion study, former director of the PSFC, and the Hitachi America Professor of Engineering in the Department of Nuclear Science and Engineering. “It better quantifies the technical design challenges for fusion developers with respect to pricing, availability, and flexibility to meet changing demand in the future.”

    The researchers stress that while fission power plants are included in the analyses, they did not perform a “head-to-head” comparison between fission and fusion, because there are too many unknowns. Fusion and nuclear fission are both firm, low-carbon electricity-generating technologies; but unlike fission, fusion doesn’t use fissile materials as fuels, and it doesn’t generate long-lived nuclear fuel waste that must be managed. As a result, the regulatory requirements for FPPs will be very different from the regulations for today’s fission power plants — but precisely how they will differ is unclear. Likewise, the future public perception and social acceptance of each of these technologies cannot be projected, but could have a major influence on what generation technologies are used to meet future demand.

    The results of the study convey several messages about the future of fusion. For example, it’s clear that regulation can be a potentially large cost driver. This should motivate fusion companies to minimize their regulatory and environmental footprint with respect to fuels and activated materials. It should also encourage governments to adopt appropriate and effective regulatory policies to maximize their ability to use fusion energy in achieving their decarbonization goals. And for companies developing fusion technologies, the study’s message is clearly stated in the report: “If the cost and performance targets identified in this report can be achieved, our analysis shows that fusion energy can play a major role in meeting future electricity needs and achieving global net-zero carbon goals.”
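    The headline savings figures above are present values: streams of avoided costs out to 2100, discounted back at 6 percent, which is why the undiscounted totals are so much larger. The sketch below illustrates only that discounting arithmetic; the savings stream, base year, and resulting ratio are made-up assumptions, not numbers from the MIT report.

        def present_value(savings_by_year, base_year=2025, rate=0.06):
            """Discount a {year: savings} stream back to base_year at the given rate."""
            return sum(s / (1 + rate) ** (yr - base_year) for yr, s in savings_by_year.items())

        # Hypothetical stream: $0.2 trillion of avoided system cost per year, 2050 through 2100
        stream = {yr: 0.2 for yr in range(2050, 2101)}

        undiscounted = sum(stream.values())   # 10.2 (trillion $) for this made-up stream
        discounted = present_value(stream)    # far smaller once pushed through (1.06)**(25..75)
        print(f"undiscounted: {undiscounted:.1f}T, discounted to 2025: {discounted:.1f}T, "
              f"ratio: {undiscounted / discounted:.0f}x")
        # Ratio comes out around 13x here; the report's roughly 20x reflects its own base year
        # and the shape of its savings stream.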


    Study: Marshes provide cost-effective coastal protection

    Images of coastal houses being carried off into the sea due to eroding coastlines and powerful storm surges are becoming more commonplace as climate change brings a rising sea level coupled with more powerful storms. In the U.S. alone, coastal storms caused $165 billion in losses in 2022.

    Now, a study from MIT shows that protecting and enhancing salt marshes in front of protective seawalls can significantly help protect some coastlines, at a cost that makes this approach reasonable to implement.

    The new findings are being reported in the journal Communications Earth and Environment, in a paper by MIT graduate student Ernie I. H. Lee and professor of civil and environmental engineering Heidi Nepf. This study, Nepf says, shows that restoring coastal marshes “is not just something that would be nice to do, but it’s actually economically justifiable.” The researchers found that, among other things, the wave-attenuating effects of salt marsh mean that the seawall behind it can be built significantly lower, reducing construction cost while still providing as much protection from storms.

    “One of the other exciting things that the study really brings to light,” Nepf says, “is that you don’t need a huge marsh to get a good effect. It could be a relatively short marsh, just tens of meters wide, that can give you benefit.” That makes her hopeful, Nepf says, that this information might be applied in places where planners may have thought saving a smaller marsh was not worth the expense. “We show that it can make enough of a difference to be financially viable,” she says.

    While other studies have previously shown the benefits of natural marshes in attenuating damaging storms, Lee says that such studies “mainly focus on landscapes that have a wide marsh on the order of hundreds of meters. But we want to show that it also applies in urban settings where not as much marsh land is available, especially since in these places existing gray infrastructure (seawalls) tends to already be in place.”

    The study was based on computer modeling of waves propagating over different shore profiles, using the morphology of various salt marsh plants — the height and stiffness of the plants, and their spatial density — rather than an empirical drag coefficient. “It’s a physically based model of plant-wave interaction, which allowed us to look at the influence of plant species and changes in morphology across seasons,” without having to go out and calibrate the vegetation drag coefficient with field measurements for each different condition, Nepf says.

    The researchers based their benefit-cost analysis on a simple metric: To protect a certain length of shoreline, how much could the height of a given seawall be reduced if it were accompanied by a given amount of marsh? Other ways of assessing the value, such as including the value of real estate that might be damaged by a given amount of flooding, “vary a lot depending on how you value the assets if a flood happens,” Lee says. “We use a more concrete value to quantify the benefits of salt marshes, which is the equivalent height of seawall you would need to deliver the same protection value.”

    They used models of a variety of plants, reflecting differences in height and stiffness across different seasons. They found a twofold variation in the various plants’ effectiveness in attenuating waves, but all provided a useful benefit.

    To demonstrate the details in a real-world example and help to validate the simulations, Nepf and Lee studied local salt marshes in Salem, Massachusetts, where projects are already underway to try to restore marshes that had been degraded. Including the specific example provided a template for others, Nepf says. In Salem, their model showed that a healthy salt marsh could offset the need for an additional seawall height of 1.7 meters (about 5.5 feet), based on satisfying a rate of wave overtopping that was set for the safety of pedestrians.

    However, the real-world data needed to model a marsh, including maps of salt marsh species, plant height, and shoots per bed area, are “very labor-intensive” to put together, Nepf says. Lee is now developing a method to use drone imaging and machine learning to facilitate this mapmaking. Nepf says this will enable researchers or planners to evaluate a given area of marshland and say, “How much is this marsh worth in terms of its ability to reduce flooding?”

    The White House Office of Information and Regulatory Affairs recently released guidance for assessing the value of ecosystem services in planning of federal projects, Nepf explains. “But in many scenarios, it lacks specific methods for quantifying value, and this study is meeting that need,” she says.

    The Federal Emergency Management Agency also has a benefit-cost analysis (BCA) toolkit, Lee notes. “They have guidelines on how to quantify each of the environmental services, and one of the novelties of this paper is quantifying the cost and the protection value of marshes. This is one of the applications that policymakers can consider on how to quantify the environmental service values of marshes,” he says.

    The software that environmental engineers can apply to specific sites has been made available online for free on GitHub. “It’s a one-dimensional model accessible by a standard consulting firm,” Nepf says.

    “This paper presents a practical tool for translating the wave attenuation capabilities of marshes into economic values, which could assist decision-makers in the adaptation of marshes for nature-based coastal defense,” says Xiaoxia Zhang, a professor at Shenzhen University in China who was not involved in this work. “The results indicate that salt marshes are not only environmentally beneficial but also cost-effective.”

    The study “is a very important and crucial step to quantifying the protective value of marshes,” adds Bas Borsje, an associate professor of nature-based flood protection at the University of Twente in the Netherlands, who was not associated with this work. “The most important step missing at the moment is how to translate our findings to the decision makers. This is the first time I’m aware of that decision-makers are quantitatively informed on the protection value of salt marshes.”

    Lee received support for this work from the Schoettler Scholarship Fund, administered by the MIT Department of Civil and Environmental Engineering.
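    The study’s metric lends itself to a simple back-of-the-envelope calculation: the avoided cost is the seawall height you can forgo (1.7 meters in the Salem example) multiplied by the length of shoreline and the cost of each extra meter of wall height. A minimal sketch of that arithmetic follows; the shoreline length and unit cost are hypothetical placeholders, not figures from the paper, and real seawall costs scale nonlinearly with height.

        def seawall_savings(height_offset_m, shoreline_m, cost_per_m_height_per_m_length):
            """Construction cost avoided when a fronting marsh lets the wall be built lower."""
            return height_offset_m * shoreline_m * cost_per_m_height_per_m_length

        # 1.7 m is the offset reported for the Salem case; the other two inputs are made up
        savings = seawall_savings(height_offset_m=1.7,
                                  shoreline_m=1_000,
                                  cost_per_m_height_per_m_length=5_000)  # $ per m height per m of wall
        print(f"Avoided seawall cost: ${savings:,.0f}")  # $8,500,000 under these assumptions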


    How climate change will impact outdoor activities in the US

    It can be hard to connect a certain amount of average global warming with one’s everyday experience, so researchers at MIT have devised a different approach to quantifying the direct impact of climate change. Instead of focusing on global averages, they came up with the concept of “outdoor days”: the number of days per year in a given location when the temperature is not too hot or cold to enjoy normal outdoor activities, such as going for a walk, playing sports, working in the garden, or dining outdoors.

    In a study published earlier this year, the researchers applied this method to compare the impact of global climate change on different countries around the world, showing that much of the global south would suffer major losses in the number of outdoor days, while some northern countries could see a slight increase. Now, they have applied the same approach to comparing the outcomes for different parts of the United States, dividing the country into nine climatic regions, and finding similar results: Some states, especially Florida and other parts of the Southeast, should see a significant drop in outdoor days, while some, especially in the Northwest, should see a slight increase.

    The researchers also looked at correlations between economic activity, such as tourism trends, and changing climate conditions, and examined how numbers of outdoor days could result in significant social and economic impacts. Florida’s economy, for example, is highly dependent on tourism and on people moving there for its pleasant climate; a major drop in days when it is comfortable to spend time outdoors could make the state less of a draw.

    The new findings were published this month in the journal Geophysical Research Letters, in a paper by researchers Yeon-Woo Choi and Muhammad Khalifa and professor of civil and environmental engineering Elfatih Eltahir.

    “This is something very new in our attempt to understand impacts of climate change impact, in addition to the changing extremes,” Choi says. It allows people to see how these global changes may impact them on a very personal level, as opposed to focusing on global temperature changes or on extreme events such as powerful hurricanes or increased wildfires. “To the best of my knowledge, nobody else takes this same approach” in quantifying the local impacts of climate change, he says. “I hope that many others will parallel our approach to better understand how climate may affect our daily lives.”

    The study looked at two different climate scenarios — one where maximum efforts are made to curb global emissions of greenhouse gases and one “worst case” scenario where little is done and global warming continues to accelerate. They used these two scenarios with every available global climate model, 32 in all, and the results were broadly consistent across all 32 models.

    The reality may lie somewhere in between the two extremes that were modeled, Eltahir suggests. “I don’t think we’re going to act as aggressively” as the low-emissions scenarios suggest, he says, “and we may not be as careless” as the high-emissions scenario. “Maybe the reality will emerge in the middle, toward the end of the century,” he says.

    The team looked at the difference in temperatures and other conditions over various ranges of decades. The data already showed some slight differences in outdoor days from the 1961-1990 period compared to 1991-2020. The researchers then compared these most recent 30 years with the last 30 years of this century, as projected by the models, and found much greater differences ahead for some regions. The strongest effects in the modeling were seen in the Southeastern states. “It seems like climate change is going to have a significant impact on the Southeast in terms of reducing the number of outdoor days,” Eltahir says, “with implications for the quality of life of the population, and also for the attractiveness of tourism and for people who want to retire there.”

    He adds that “surprisingly, one of the regions that would benefit a little bit is the Northwest.” But the gain there is modest: an increase of about 14 percent in outdoor days projected for the last three decades of this century, compared to the period from 1976 to 2005. The Southwestern U.S., by comparison, faces an average loss of 23 percent of its outdoor days.

    The study also digs into the relationship between climate and economic activity by looking at tourism trends from U.S. National Park Service visitation data, and how that aligned with differences in climate conditions. “Accounting for seasonal variations, we find a clear connection between the number of outdoor days and the number of tourist visits in the United States,” Choi says.

    For much of the country, there will be little overall change in the total number of annual outdoor days, the study found, but the seasonal pattern of those days could change significantly. While most parts of the country now see the most outdoor days in summertime, that will shift as summers get hotter, and spring and fall will become the preferred seasons for outdoor activity.

    In a way, Eltahir says, “what we are talking about that will happen in the future [for most of the country] is already happening in Florida.” There, he says, “the really enjoyable time of year is in the spring and fall, and summer is not the best time of year.”

    People’s level of comfort with temperatures varies somewhat among individuals and among regions, so the researchers designed a tool, now freely available online, that allows people to set their own definitions of the lowest and highest temperatures they consider suitable for outdoor activities, and then see what the climate models predict would be the change in the number of outdoor days for their location, using their own standards of comfort. For their study, they used a widely accepted range of 10 degrees Celsius (50 degrees Fahrenheit) to 25 C (77 F), which is the “thermoneutral zone” in which the human body does not require either metabolic heat generation or evaporative cooling to maintain its core temperature — in other words, in that range there is generally no need to either shiver or sweat.

    The model mainly focuses on temperature but also allows people to include humidity or precipitation in their definition of what constitutes a comfortable outdoor day. The model could be extended to incorporate other variables such as air quality, but the researchers say temperature tends to be the major determinant of comfort for most people.

    Using their software tool, “If you disagree with how we define an outdoor day, you could define one for yourself, and then you’ll see what the impacts of that are on your number of outdoor days and their seasonality,” Eltahir says.

    This work was inspired by the realization, he says, that “people’s understanding of climate change is based on the assumption that climate change is something that’s going to happen sometime in the future and going to happen to someone else. It’s not going to impact them directly. And I think that contributes to the fact that we are not doing enough.”

    Instead, the concept of outdoor days “brings the concept of climate change home, brings it to personal everyday activities,” he says. “I hope that people will find that useful to bridge that gap, and provide a better understanding and appreciation of the problem. And hopefully that would help lead to sound policies that are based on science, regarding climate change.”

    The research was based on work supported by the Community Jameel for Jameel Observatory CREWSnet and Abdul Latif Jameel Water and Food Systems Lab at MIT.
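    The counting itself is simple: a day qualifies as an “outdoor day” when its temperature falls inside a comfort window, 10 to 25 degrees Celsius in the study, optionally tightened by a precipitation or humidity condition. The sketch below shows that logic in minimal form; the daily series and the 1 mm rain cutoff are illustrative assumptions, and the team’s public tool may use different inputs and thresholds.

        def count_outdoor_days(daily_temps_c, daily_precip_mm=None,
                               t_min=10.0, t_max=25.0, max_precip_mm=1.0):
            """Count days whose temperature sits in [t_min, t_max]; optionally require little rain."""
            count = 0
            for i, t in enumerate(daily_temps_c):
                if not (t_min <= t <= t_max):
                    continue
                if daily_precip_mm is not None and daily_precip_mm[i] > max_precip_mm:
                    continue
                count += 1
            return count

        # Toy example: a week of daily mean temperatures (deg C) and rainfall (mm)
        temps = [8.0, 12.5, 18.0, 26.5, 22.0, 15.0, 9.5]
        rain  = [0.0,  0.2,  5.0,  0.0,  0.0,  0.4, 0.0]
        print(count_outdoor_days(temps, rain))  # -> 3 (days 2, 5, and 6 qualify)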


    MIT Energy and Climate Club mobilizes future leaders to address global climate issues

    One of MIT’s missions is helping to solve the world’s greatest problems — with a large focus on one of the most pressing topics facing the world today, climate change.

    The MIT Energy and Climate Club (MITEC), formerly known as the MIT Energy Club, has been working since 2004 to inform and educate the entire MIT community about this urgent issue and other related matters.

    MITEC, one of the largest clubs on campus, has hundreds of active members from every major, including both undergraduate and graduate students. With a broad reach across the Institute, MITEC is the hub for thought leadership and relationship-building across campus.

    The club’s co-presidents Laurențiu Anton, doctoral candidate in electrical engineering and computer science; Rosie Keller, an MBA student in the MIT Sloan School of Management; and Thomas Lee, doctoral candidate in the Institute for Data, Systems, and Society, say that faculty, staff, and alumni are also welcome to join and interact with the continuously growing club.

    While they closely collaborate on all aspects of the club, each of the co-presidents has a focus area to support the student managing directors and vice presidents for several of the club’s committees. Keller oversees the External Relations, Social, Launchpad, and Energy and Climate Hackathon leadership teams. Lee supports the leadership team for next spring’s Energy Conference. He also assists the club treasurer on budget and finance and guides the industry Sponsorships team. Anton oversees marketing, community, and education, as well as the Energy and Climate Night and Energy and Climate Career Fair leadership teams.

    “We think of MITEC as the umbrella of all things related to energy and climate on campus. Our goal is to share actionable information and not just have discussions. We work with other organizations on campus, including the MIT Environmental Solutions Initiative, to bring awareness,” says Anton. “Our Community and Education team is currently working with the MIT ESI [Environmental Solutions Initiative] to create an ecosystem map that we’re excited to produce for the MIT community.”

    To share their knowledge and get more people interested in solving climate and energy problems, each year MITEC hosts a variety of events including the MIT Energy and Climate Night, the MIT Energy and Climate Hack, the MIT Energy and Climate Career Fair, and the MIT Energy Conference to be held next spring, March 3-4. The club also offers students the opportunity to gain valuable work experience while engaging with top companies, such as Constellation Energy and GE Vernova, on real climate and energy issues through their Launchpad Program.

    Founded in 2006, the annual MIT Energy Conference is the largest student-run conference in North America focused on energy and climate issues, where hundreds of participants gather every year with the CEOs, policymakers, investors, and scholars at the forefront of the global energy transition.

    “The 2025 MIT Energy Conference’s theme is ‘Breakthrough to Deployment: Driving Climate Innovation to Market’ — which focuses on the importance of both cutting-edge research innovation as well as large-scale commercial deployment to successfully reach climate goals,” says Lee.

    Anton notes that the first of four MITEC flagship events is the MIT Energy and Climate Night. This research symposium, which takes place every year in the fall at the MIT Museum, will be held on Nov. 8. The club invites a select number of keynote speakers and features several dozen student posters. Guests are allowed to walk around and engage with students, and in return students get practice showcasing their research. The club’s career fair will take place in the spring semester, shortly after Independent Activities Period.

    MITEC also provides members opportunities to meet with companies that are working to improve the energy sector, which helps to slow down, as well as adapt to, the effects of climate change.

    “We recently went to Provincetown and toured Eversource’s battery energy storage facility. This helped open doors for club members,” says Keller. “The Provincetown battery helps address grid reliability problems after extreme storms on Cape Cod — which speaks to energy’s connection to both the mitigation and adaptation aspects of climate change,” adds Lee.

    “MITEC is also a great way to meet other students at MIT that you might not otherwise have a chance to,” says Keller.

    “We’d always welcome more undergraduate students to join MITEC. There are lots of leadership opportunities within the club for them to take advantage of and build their resumes. We also have good and growing collaboration between different centers on campus such as the Sloan Sustainability Initiative and the MIT Energy Initiative. They support us with resources, introductions, and help amplify what we’re doing. But students are the drivers of the club and set the agendas,” says Lee.

    All three co-presidents are excited to hear that MIT President Sally Kornbluth wants to bring climate change solutions to the next level, and that she recently launched The Climate Project at MIT to kick off the Institute’s major new effort to accelerate and scale up climate change solutions.

    “We look forward to connecting with the new directors of the Climate Project at MIT and Interim Vice President for Climate Change Richard Lester in the near future. We are eager to explore how MITEC can support and collaborate with the Climate Project at MIT,” says Anton.

    Lee, Keller, and Anton want MITEC to continue fostering solutions to climate issues. They emphasize that while individual actions like bringing your own thermos, using public transportation, or recycling are necessary, there’s a bigger picture to consider. They encourage the MIT community to think critically about the infrastructure and extensive supply chains behind the products everyone uses daily.

    “It’s not just about bringing a thermos; it’s also understanding the life cycle of that thermos, from production to disposal, and how our everyday choices are interconnected with global climate impacts,” says Anton.

    “Everyone should get involved with this worldwide problem. We’d like to see more people think about how they can use their careers for change. To think how they can navigate the type of role they can play — whether it’s in finance or on the technical side. I think exploring what that looks like as a career is also a really interesting way of thinking about how to get involved with the problem,” says Keller.

    “MITEC’s newsletter reaches more than 4,000 people. We’re grateful that so many people are interested in energy and climate change,” says Anton.


    The changing geography of “energy poverty”

    A growing portion of Americans who are struggling to pay for their household energy live in the South and Southwest, reflecting a climate-driven shift away from heating needs and toward air conditioning use, an MIT study finds.

    The newly published research also reveals that a major U.S. federal program that provides energy subsidies to households, by assigning block grants to states, does not yet fully match these recent trends.

    The work evaluates the “energy burden” on households, which reflects the percentage of income needed to pay for energy necessities, from 2015 to 2020. Households with an energy burden greater than 6 percent of income are considered to be in “energy poverty.” With climate change, rising temperatures are expected to add financial stress in the South, where air conditioning is increasingly needed. Meanwhile, milder winters are expected to reduce heating costs in some colder regions.

    “From 2015 to 2020, there is an increase in burden generally, and you do also see this southern shift,” says Christopher Knittel, an MIT energy economist and co-author of a new paper detailing the study’s results. About federal aid, he adds, “When you compare the distribution of the energy burden to where the money is going, it’s not aligned too well.”

    The paper, “U.S. federal resource allocations are inconsistent with concentrations of energy poverty,” is published today in Science Advances.

    The authors are Carlos Batlle, a professor at Comillas University in Spain and a senior lecturer with the MIT Energy Initiative; Peter Heller SM ’24, a recent graduate of the MIT Technology and Policy Program; Knittel, the George P. Shultz Professor at the MIT Sloan School of Management and associate dean for climate and sustainability at MIT; and Tim Schittekatte, a senior lecturer at MIT Sloan.

    A scorching decade

    The study, which grew out of graduate research that Heller conducted at MIT, deploys a machine-learning estimation technique that the scholars applied to U.S. energy use data.

    Specifically, the researchers took a sample of about 20,000 households from the U.S. Energy Information Administration’s Residential Energy Consumption Survey, which includes a wide variety of demographic characteristics about residents, along with building-type and geographic information. Then, using the U.S. Census Bureau’s American Community Survey data for 2015 and 2020, the research team estimated the average household energy burden for every census tract in the lower 48 states — 73,057 in 2015, and 84,414 in 2020.

    That allowed the researchers to chart the changes in energy burden in recent years, including the shift toward a greater energy burden in southern states. In 2015, Maine, Mississippi, Arkansas, Vermont, and Alabama were the five states (ranked in descending order) with the highest energy burden across census bureau tracts. In 2020, that had shifted somewhat, with Maine and Vermont dropping on the list and southern states increasingly having a larger energy burden. That year, the top five states in descending order were Mississippi, Arkansas, Alabama, West Virginia, and Maine.

    The data also reflect an urban-rural shift. In 2015, 23 percent of the census tracts where the average household is living in energy poverty were urban. That figure shrank to 14 percent by 2020.

    All told, the data are consistent with the picture of a warming world, in which milder winters in the North, Northwest, and Mountain West require less heating fuel, while more extreme summer temperatures in the South require more air conditioning.

    “Who’s going to be harmed most from climate change?” asks Knittel. “In the U.S., not surprisingly, it’s going to be the southern part of the U.S. And our study is confirming that, but also suggesting it’s the southern part of the U.S. that’s least able to respond. If you’re already burdened, the burden’s growing.”

    An evolution for LIHEAP?

    In addition to identifying the shift in energy needs during the last decade, the study also illuminates a longer-term change in U.S. household energy needs, dating back to the 1980s. The researchers compared the present-day geography of U.S. energy burden to the help currently provided by the federal Low Income Home Energy Assistance Program (LIHEAP), which dates to 1981.

    Federal aid for energy needs actually predates LIHEAP, but the current program was introduced in 1981, then updated in 1984 to include cooling needs such as air conditioning. When the formula was updated in 1984, two “hold harmless” clauses were also adopted, guaranteeing states a minimum amount of funding.

    Still, LIHEAP’s parameters also predate the rise of temperatures over the last 40 years, and the current study shows that, compared to the current landscape of energy poverty, LIHEAP distributes relatively less of its funding to southern and southwestern states.

    “The way Congress uses formulas set in the 1980s keeps funding distributions nearly the same as it was in the 1980s,” Heller observes. “Our paper illustrates the shift in need that has occurred over the decades since then.”

    Currently, it would take a fourfold increase in LIHEAP to ensure that no U.S. household experiences energy poverty. But the researchers tested out a new funding design, which would help the worst-off households first, nationally, ensuring that no household would have an energy burden of greater than 20.3 percent.

    “We think that’s probably the most equitable way to allocate the money, and by doing that, you now have a different amount of money that should go to each state, so that no one state is worse off than the others,” Knittel says.

    And while the new distribution concept would require a certain amount of subsidy reallocation among states, it would be with the goal of helping all households avoid a certain level of energy poverty, across the country, at a time of changing climate, warming weather, and shifting energy needs in the U.S.

    “We can optimize where we spend the money, and that optimization approach is an important thing to think about,” Knittel says.
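    The burden metric itself is simple arithmetic: annual household energy spending divided by income, with values above 6 percent counted as energy poverty (and 20.3 percent the ceiling in the reallocation experiment). A minimal sketch of that calculation on made-up households follows; the dollar figures are hypothetical and are not drawn from the study’s data.

        ENERGY_POVERTY_THRESHOLD = 0.06   # burden above 6% of income = energy poverty (study definition)

        def energy_burden(annual_energy_cost, annual_income):
            """Share of household income spent on energy necessities."""
            return annual_energy_cost / annual_income

        # Hypothetical households: (annual energy bill $, annual income $)
        households = [(2400, 30000), (1800, 90000), (3200, 22000)]

        for cost, income in households:
            burden = energy_burden(cost, income)
            flag = "energy poverty" if burden > ENERGY_POVERTY_THRESHOLD else "ok"
            print(f"burden = {burden:.1%} -> {flag}")
        # Prints 8.0% -> energy poverty, 2.0% -> ok, 14.5% -> energy poverty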


    Study finds mercury pollution from human activities is declining

    MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.“Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.“It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.Mercury mismatchThe Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.This was puzzling news for environmental scientists like Selin. 
Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.“The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.“One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.Multifaceted modelsThe researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline.  Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.For instance, they tested one hypothesis that there may be an additional environmental sink that is removing more mercury from the atmosphere than previously thought. The models would indicate the feasibility of an unknown sink of that magnitude.“As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. 
    But there was still a gap between observations and estimates.

    They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and from mercury-containing products.

    Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.

    In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

    “We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

    In the future, researchers from multiple countries, including a team at MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

    This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.
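
    The following is a minimal, hypothetical sketch of the kind of station-to-region aggregation and trend fitting described above. The station names and concentration values are invented placeholders, and the study’s actual statistical methods are considerably more sophisticated.

```python
# Minimal sketch (not the study's code): combine annual mean mercury
# concentrations from several nearby stations into one regional series,
# then fit a linear trend to estimate the fractional change 2005-2020.
# All station names and numbers below are invented placeholders.
import numpy as np
import pandas as pd

years = np.arange(2005, 2021)
rng = np.random.default_rng(0)
stations = pd.DataFrame(
    {
        "station_A": 1.60 - 0.010 * (years - 2005) + rng.normal(0, 0.02, len(years)),
        "station_B": 1.55 - 0.012 * (years - 2005) + rng.normal(0, 0.02, len(years)),
        "station_C": 1.65 - 0.009 * (years - 2005) + rng.normal(0, 0.02, len(years)),
    },
    index=years,
)
stations.loc[2005:2008, "station_C"] = np.nan  # simulate a record with missing early years

# Regional series: median across whichever stations reported in a given year.
regional = stations.median(axis=1, skipna=True)

# Ordinary least-squares trend for the regional series.
slope, intercept = np.polyfit(regional.index.to_numpy(), regional.to_numpy(), deg=1)
start, end = intercept + slope * 2005, intercept + slope * 2020
print(f"Estimated change, 2005-2020: {100 * (end - start) / start:.1f}%")
```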

  • in

    Bubble findings could unlock better electrode and electrolyzer designs

    Industrial electrochemical processes that use electrodes to produce fuels and chemical products are hampered by the formation of bubbles that block parts of the electrode surface, reducing the area available for the active reaction. Such blockage reduces the performance of the electrodes by anywhere from 10 to 25 percent.

    But new research reveals a decades-long misunderstanding about the extent of that interference. The findings show exactly how the blocking effect works and could lead to new ways of designing electrode surfaces to minimize inefficiencies in these widely used electrochemical processes.

    It has long been assumed that the entire area of the electrode shadowed by each bubble would be effectively inactivated. But it turns out that a much smaller area — roughly the area where the bubble actually contacts the surface — is blocked from electrochemical activity. The new insights could lead directly to new ways of patterning the surfaces to minimize the contact area and improve overall efficiency.

    The findings are reported today in the journal Nanoscale, in a paper by recent MIT graduate Jack Lake PhD ’23, graduate student Simon Rufer, professor of mechanical engineering Kripa Varanasi, research scientist Ben Blaiszik, and six others at the University of Chicago and Argonne National Laboratory. The team has made available an open-source, AI-based software tool that engineers and scientists can now use to automatically recognize and quantify bubbles formed on a given surface, as a first step toward controlling the electrode material’s properties.
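
    As a rough, hypothetical illustration of what such a tool produces (per-bubble positions and sizes extracted from an image), the following sketch uses classical computer vision rather than the team’s machine-learning approach; the file name and detection parameters are placeholders.

```python
# Simplified stand-in for automated bubble quantification: detect roughly
# circular bubbles in a single grayscale electrode image and report their
# centers and projected radii. The team's open-source tool is ML-based and
# far more robust to varying materials and lighting; this only illustrates
# the kind of per-bubble statistics such software extracts.
import cv2

def measure_bubbles(image_path: str):
    """Return a list of (x, y, radius_px) for roughly circular bubbles."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    blurred = cv2.medianBlur(gray, 5)  # suppress noise before circle detection
    circles = cv2.HoughCircles(
        blurred,
        cv2.HOUGH_GRADIENT,
        dp=1.2,        # inverse ratio of accumulator resolution
        minDist=20,    # minimum spacing between detected centers, in pixels
        param1=100,    # upper Canny edge threshold
        param2=30,     # accumulator threshold; lower values find more circles
        minRadius=5,
        maxRadius=200,
    )
    if circles is None:
        return []
    return [(float(x), float(y), float(r)) for x, y, r in circles[0]]

# Hypothetical usage:
# bubbles = measure_bubbles("electrode_frame_0001.png")
# print(len(bubbles), "bubbles detected")
```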


    Gas-evolving electrodes, often with catalytic surfaces that promote chemical reactions, are used in a wide variety of processes, including the production of “green” hydrogen without the use of fossil fuels, carbon-capture processes that can reduce greenhouse gas emissions, aluminum production, and the chlor-alkali process that is used to make widely used chemical products.

    These are very widespread processes. The chlor-alkali process alone accounts for 2 percent of all U.S. electricity usage; aluminum production accounts for 3 percent of global electricity; and both carbon capture and hydrogen production are likely to grow rapidly in coming years as the world strives to meet greenhouse-gas reduction targets. So, the new findings could make a real difference, Varanasi says.

    “Our work demonstrates that engineering the contact and growth of bubbles on electrodes can have dramatic effects” on how bubbles form and how they leave the surface, he says. “The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes to avoid the deleterious effects of bubbles.”

    “The broader literature built over the last couple of decades has suggested that not only that small area of contact but the entire area under the bubble is passivated,” Rufer says. The new study reveals “a significant difference between the two models because it changes how you would develop and design an electrode to minimize these losses.”

    To test and demonstrate the implications of this effect, the team produced different versions of electrode surfaces with patterns of dots that nucleated and trapped bubbles at different sizes and spacings. They were able to show that surfaces with widely spaced dots promoted large bubble sizes but only tiny areas of surface contact, which helped to make clear the difference between the expected and actual effects of bubble coverage.

    Developing the software to detect and quantify bubble formation was necessary for the team’s analysis, Rufer explains. “We wanted to collect a lot of data and look at a lot of different electrodes and different reactions and different bubbles, and they all look slightly different,” he says. Creating a program that could deal with different materials and different lighting and reliably identify and track the bubbles was a tricky process, and machine learning was key to making it work, he says.

    Using that tool, he says, they were able to collect “really significant amounts of data about the bubbles on a surface, where they are, how big they are, how fast they’re growing, all these different things.” The tool is now freely available for anyone to use via the GitHub repository.

    By using that tool to correlate the visual measures of bubble formation and evolution with electrical measurements of the electrode’s performance, the researchers were able to disprove the accepted theory and to show that only the area of direct contact is affected. Videos further proved the point, revealing new bubbles actively evolving directly under parts of a larger bubble.

    The researchers developed a very general methodology that can be applied to characterize and understand the impact of bubbles on any electrode or catalyst surface. They were able to quantify the bubble passivation effects in a new performance metric they call BECSA (bubble-induced electrochemically active surface area), in contrast to the ECSA (electrochemically active surface area) metric used in the field.

    “The BECSA metric was a concept we defined in an earlier study but did not have an effective method to estimate until this work,” says Varanasi.

    The knowledge that the area under bubbles can be significantly active ushers in a new set of design rules for high-performance electrodes. It means that electrode designers should seek to minimize bubble contact area rather than simply bubble coverage, which can be achieved by controlling the morphology and chemistry of the electrodes. Surfaces engineered to control bubbles can not only improve the overall efficiency of these processes, and thus reduce energy use, but can also save on upfront materials costs. Many of these gas-evolving electrodes are coated with catalysts made of expensive metals like platinum or iridium, and the findings from this work can be used to engineer electrodes to reduce material wasted by reaction-blocking bubbles.

    Varanasi says that “the insights from this work could inspire new electrode architectures that not only reduce the usage of precious materials, but also improve the overall electrolyzer performance,” both of which would provide large-scale environmental benefits.

    The research team included Jim James, Nathan Pruyne, Aristana Scourtas, Marcus Schwarting, Aadit Ambalkar, Ian Foster, and Ben Blaiszik at the University of Chicago and Argonne National Laboratory. The work was supported by the U.S. Department of Energy under the ARPA-E program.
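
    To make the practical difference between the two assumptions concrete, here is a minimal numerical sketch (not the paper’s BECSA formulation): it compares the fraction of electrode area counted as inactive when each bubble’s full projected footprint is blocked versus when only its much smaller contact patch is blocked. The bubble dimensions below are invented for illustration.

```python
# Minimal sketch contrasting the two assumptions: the older view treats each
# bubble's full projected footprint (radius R) as inactive, while the new
# findings suggest only the solid-surface contact patch (radius r_contact)
# is blocked. Numbers below are invented for illustration.
import math

def blocked_fraction(bubbles, electrode_area_cm2, contact_only=True):
    """Fraction of geometric electrode area treated as electrochemically inactive.

    bubbles: iterable of (R_cm, r_contact_cm) pairs for bubbles on the surface.
    """
    blocked = sum(
        math.pi * (r_contact if contact_only else R) ** 2
        for R, r_contact in bubbles
    )
    return blocked / electrode_area_cm2

# Hypothetical example: ten 0.5 mm bubbles whose contact patches are ~10x smaller.
bubbles = [(0.05, 0.005)] * 10
area_cm2 = 1.0
print("Full-footprint assumption:", f"{blocked_fraction(bubbles, area_cm2, contact_only=False):.1%}")
print("Contact-area-only assumption:", f"{blocked_fraction(bubbles, area_cm2, contact_only=True):.1%}")
```

    With these made-up dimensions, the two assumptions differ by roughly a factor of 100 (the square of the 10x radius ratio), which is why the choice of model changes how a designer would pattern an electrode surface.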