More stories

  • China-based emissions of three potent climate-warming greenhouse gases spiked in past decade

    When it comes to heating up the planet, not all greenhouse gases are created equal. They vary widely in their global warming potential (GWP), a measure of how much infrared thermal radiation a greenhouse gas would absorb over a given time frame once it enters the atmosphere. For example, measured over a 100-year period, the GWP of methane is about 28 times that of carbon dioxide (CO2), and the GWPs of a class of greenhouse gases known as perfluorocarbons (PFCs) are thousands of times that of CO2. The lifespans of greenhouse gases in the atmosphere also vary widely: methane persists for around 10 years, CO2 for over 100 years, and PFCs for up to tens of thousands of years.

    Given the high GWPs and long lifespans of PFCs, their emissions could pose a major roadblock to achieving the aspirational goal of the Paris Agreement on climate change: limiting the increase in global average surface temperature to 1.5 degrees Celsius above preindustrial levels. Now, two new studies based on atmospheric observations inside China and high-resolution atmospheric models show a rapid rise in Chinese emissions over the last decade (2011 to 2020 or 2021) of three PFCs: tetrafluoromethane (PFC-14) and hexafluoroethane (PFC-116) (results in PNAS), and perfluorocyclobutane (PFC-318) (results in Environmental Science & Technology).

    Both studies find that Chinese emissions have played a dominant role in driving up global emission levels for all three PFCs.

    The PNAS study identifies substantial PFC-14 and PFC-116 emission sources in the less-populated western regions of China from 2011 to 2021, likely due to the concentration of aluminum production in these regions. The semiconductor industry also contributes to some of the emissions detected in the more economically developed eastern regions. These emissions are byproducts of aluminum smelting, or occur during the use of the two PFCs in the production of semiconductors and flat panel displays. During the observation period, emissions of both gases in China rose by 78 percent, accounting for most of the increase in global emissions of these gases.

    The ES&T study finds that Chinese PFC-318 emissions rose by 70 percent during 2011-20, contributing more than half of the global increase in emissions of this gas, and originated primarily in eastern China. The regions with high emissions of PFC-318 in China overlap with geographical areas densely populated with factories that produce polytetrafluoroethylene (PTFE, commonly used for nonstick cookware coatings), implying that PTFE factories are major sources of PFC-318 emissions in China. In these factories, PFC-318 is formed as a byproduct.

    “Using atmospheric observations from multiple monitoring sites, we not only determined the magnitudes of PFC emissions, but also pinpointed the possible locations of their sources,” says Minde An, a postdoc at the MIT Center for Global Change Science (CGCS), and corresponding author of both studies. “Identifying the actual source industries contributing to these PFC emissions, and understanding the reasons for these largely byproduct emissions, can provide guidance for developing region- or industry-specific mitigation strategies.”

    “These three PFCs are largely produced as unwanted byproducts during the manufacture of otherwise widely used industrial products,” says MIT professor of atmospheric sciences Ronald Prinn, director of both the MIT Joint Program on the Science and Policy of Global Change and CGCS, and a co-author of both studies. “Phasing out emissions of PFCs as early as possible is highly beneficial for achieving global climate mitigation targets and is likely achievable by recycling programs and targeted technological improvements in these industries.”

    Findings in both studies were obtained, in part, from atmospheric observations collected from nine stations within a Chinese network, including one station from the Advanced Global Atmospheric Gases Experiment (AGAGE) network. For comparison, global total emissions were determined from five globally distributed, relatively unpolluted “background” AGAGE stations, as reported in the latest United Nations Environment Program and World Meteorological Organization Ozone Assessment report.
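
    The arithmetic behind GWP-based comparisons is simple enough to sketch. Below is a minimal, illustrative Python example of converting a mixed emissions inventory into CO2-equivalents using 100-year GWPs; the methane value follows the roughly 28x figure cited above, while the PFC values and the example inventory are rounded, assumed numbers for illustration, not figures from the two studies.

    ```python
    # Minimal sketch: converting a mixed emissions inventory to CO2-equivalents
    # using 100-year global warming potentials (GWP-100).
    # The CH4 value follows the ~28x figure cited in the article; the PFC values
    # are rounded, assumed-for-illustration figures, not numbers from the studies.

    GWP_100 = {
        "CO2": 1,
        "CH4": 28,
        "PFC-14": 7_400,    # tetrafluoromethane (CF4), assumed approximate value
        "PFC-116": 12_400,  # hexafluoroethane (C2F6), assumed approximate value
    }

    def co2_equivalent(inventory_tonnes: dict[str, float]) -> float:
        """Total emissions in tonnes of CO2-equivalent."""
        return sum(mass * GWP_100[gas] for gas, mass in inventory_tonnes.items())

    # Hypothetical annual inventory, in tonnes per year
    inventory = {"CO2": 1_000_000, "CH4": 5_000, "PFC-14": 100}
    print(f"{co2_equivalent(inventory):,.0f} tonnes CO2e per year")
    ```

    Even at this toy scale, the studies' concern is visible: the 100 tonnes of PFC-14 in the hypothetical inventory above contribute 740,000 tonnes of CO2-equivalent, dwarfing the CO2 line item.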

  • Q&A: What past environmental success can teach us about solving the climate crisis

    Susan Solomon, MIT professor of Earth, atmospheric, and planetary sciences (EAPS) and of chemistry, played a critical role in understanding how a class of chemicals known as chlorofluorocarbons were creating a hole in the ozone layer. Her research was foundational to the creation of the Montreal Protocol, an international agreement established in the 1980s that phased out products releasing chlorofluorocarbons. Since then, scientists have documented signs that the ozone hole is recovering thanks to these measures.

    Having witnessed this historical process first-hand, Solomon, the Lee and Geraldine Martin Professor of Environmental Studies, is aware of how people can come together to make successful environmental policy happen. Using her story, as well as other examples of success — including combating smog, getting rid of DDT, and more — Solomon draws parallels from then to now as the climate crisis comes into focus in her new book, “Solvable: How We Healed the Earth and How We Can Do It Again.”

    Solomon took a moment to talk about why she picked the stories in her book, the students who inspired her, and why we need hope and optimism now more than ever.

    Q: You have first-hand experience seeing how we’ve altered the Earth, as well as the process of creating international environmental policy. What prompted you to write a book about your experiences?

    A: Lots of things, but one of the main ones is what I see in teaching. I have taught a class called Science, Politics and Environmental Policy for many years here at MIT. Because my emphasis is always on how we’ve actually fixed problems, students come away from that class feeling hopeful, like they really want to stay engaged with the problem.

    It strikes me that students today have grown up in a very contentious and difficult era in which they feel like nothing ever gets done. But stuff does get done, even now. Looking at how we did things so far really helps you to see how we can do things in the future.

    Q: In the book, you use five different stories as examples of successful environmental policy, and then end talking about how we can apply these lessons to climate change. Why did you pick these five stories?

    A: I picked some of them because I’m closer to those problems in my own professional experience, like ozone depletion and smog. I did other issues partly because I wanted to show that even in the 21st century, we’ve actually got some stuff done — that’s the story of the Kigali Amendment to the Montreal Protocol, which is a binding international agreement on some greenhouse gases.

    Another chapter is on DDT. One of the reasons I included that is because it had an enormous effect on the birth of the environmental movement in the United States. Plus, that story allows you to see how important the environmental groups can be.

    Lead in gasoline and paint is the other one. I find it a very moving story because the idea that we were poisoning millions of children and not even realizing it is so very, very sad. But it’s so uplifting that we did figure out the problem, and it happened partly because of the civil rights movement, which made us aware that the problem was striking minority communities much more than non-minority communities.

    Q: What surprised you the most during your research for the book?

    A: One of the things that I didn’t realize and should have was the outsized role played by one single senator, Ed Muskie of Maine. He made pollution control his big issue and devoted incredible energy to it. He clearly had the passion and wanted to do it for many years, but until other factors helped him, he couldn’t. That’s where I began to understand the role of public opinion and the way in which policy is only possible when public opinion demands change.

    Another thing about Muskie was the way in which his engagement with these issues demanded that science be strong. When I read what he put into congressional testimony, I realized how highly he valued the science. Science alone is never enough, but it’s always necessary. Over the years, science got a lot stronger, and we developed ways of evaluating what the scientific wisdom across many different studies and many different views actually is. That’s what scientific assessment is all about, and it’s crucial to environmental progress.

    Q: Throughout the book you argue that for environmental action to succeed, three conditions must be met, which you call the three Ps: a threat must be personal, perceptible, and practical. Where did this idea come from?

    A: My observations. You have to perceive the threat: In the case of the ozone hole, you could perceive it because those false-color images of the ozone loss were so easy to understand, and it was personal because few things are scarier than cancer, and a reduced ozone layer leads to too much sun, increasing skin cancers. Science plays a role in communicating what can be readily understood by the public, and that’s important to them perceiving it as a serious problem.

    Nowadays, we certainly perceive the reality of climate change. We also see that it’s personal. People are dying because of heat waves in much larger numbers than they used to; there are horrible problems in the Boston area, for example, with flooding and sea level rise. People perceive the reality of the problem and they feel personally threatened.

    The third P is practical: People have to believe that there are practical solutions. It’s interesting to watch how the battle for hearts and minds has shifted. There was a time when the skeptics would just attack the whole idea that the climate was changing. Eventually, they decided ‘we better accept that because people perceive it, so let’s tell them that it’s not caused by human activity.’ But it’s clear enough now that human activity does play a role. So they’ve moved on to attacking that third P, claiming that somehow it’s not practical to have any kind of solutions. This is progress! So what about that third P?

    What I tried to do in the book is to point out some of the ways in which the problem has also become eminently practical to deal with in the last 10 years, and will continue to move in that direction. We’re right on the cusp of success, and we just have to keep going. People should not give in to eco-despair; that’s the worst thing you could do, because then nothing will happen. If we continue to move at the rate we have, we will certainly get to where we need to be.

    Q: That ties in very nicely with my next question. The book is very optimistic; what gives you hope?

    A: I’m optimistic because I’ve seen so many examples of where we have succeeded, and because I see so many signs of movement right now that are going to push us in the same direction.

    If we had kept conducting business as usual as we did in the year 2000, we’d be looking at 4 degrees of future warming. Right now, I think we’re looking at 3 degrees. I think we can get to 2 degrees. We have to really work on it, and we have to get going seriously in the next decade, but globally right now over 30 percent of our energy is from renewables. That’s fantastic! Let’s just keep going.

    Q: Throughout the book, you show that environmental problems won’t be solved by individual actions alone, but require policy and technology to drive change. What individual actions can people take to help push for those bigger changes?

    A: A big one is to choose to eat more sustainably, and to choose alternative transportation methods like public transportation, or to reduce the number of trips that you make. Older people usually have retirement investments; you can shift them over to social choice funds and away from index funds that end up funding companies that you might not be interested in. You can use your money to put pressure: Amazon has been under a huge amount of pressure to cut down on their plastic packaging, mainly coming from consumers. They’ve just announced they’re not going to use those plastic pillows anymore. I think you can see lots of ways in which people really do matter, and we can matter more.

    Q: What do you hope people take away from the book?

    A: Hope for their future, and resolve to do the best they can by getting engaged with it.

  • Study finds health risks in switching ships from diesel to ammonia fuel

    As container ships the size of city blocks cross the oceans to deliver cargo, their huge diesel engines emit large quantities of air pollutants that drive climate change and harm human health. It has been estimated that maritime shipping accounts for almost 3 percent of global carbon dioxide emissions, and that the industry’s negative impacts on air quality cause about 100,000 premature deaths each year.

    Decarbonizing shipping to reduce these detrimental effects is a goal of the International Maritime Organization, a U.N. agency that regulates maritime transport. One potential solution is switching the global fleet from fossil fuels to sustainable fuels such as ammonia, which could be nearly carbon-free when considering its production and use.

    But in a new study, an interdisciplinary team of researchers from MIT and elsewhere caution that burning ammonia for maritime fuel could worsen air quality further and lead to devastating public health impacts, unless it is adopted alongside strengthened emissions regulations.

    Ammonia combustion generates nitrous oxide (N2O), a greenhouse gas that is about 300 times more potent than carbon dioxide. It also emits nitrogen in the form of nitrogen oxides (NO and NO2, referred to as NOx), and unburnt ammonia may slip out; both eventually form fine particulate matter in the atmosphere. These tiny particles can be inhaled deep into the lungs, causing health problems like heart attacks, strokes, and asthma.

    The new study indicates that, under current legislation, switching the global fleet to ammonia fuel could cause up to about 600,000 additional premature deaths each year. However, with stronger regulations and cleaner engine technology, the switch could lead to about 66,000 fewer premature deaths than are currently caused by maritime shipping emissions, with far less impact on global warming.

    “Not all climate solutions are created equal. There is almost always some price to pay. We have to take a more holistic approach and consider all the costs and benefits of different climate solutions, rather than just their potential to decarbonize,” says Anthony Wong, a postdoc in the MIT Center for Global Change Science and lead author of the study.

    His co-authors include Noelle Selin, an MIT professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); Sebastian Eastham, a former principal research scientist who is now a senior lecturer at Imperial College London; Christine Mounaïm-Rouselle, a professor at the University of Orléans in France; Yiqi Zhang, a researcher at the Hong Kong University of Science and Technology; and Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics. The research appears this week in Environmental Research Letters.

    Greener, cleaner ammonia

    Traditionally, ammonia is made by stripping hydrogen from natural gas and then combining it with nitrogen at extremely high temperatures, a process often associated with a large carbon footprint. The maritime shipping industry is betting on the development of “green ammonia,” which is produced by using renewable energy to make hydrogen via electrolysis and to generate heat.

    “In theory, if you are burning green ammonia in a ship engine, the carbon emissions are almost zero,” Wong says.

    But even the greenest ammonia generates nitrous oxide (N2O) and nitrogen oxides (NOx) when combusted, and some of the ammonia may slip out, unburnt. The nitrous oxide would escape into the atmosphere, where the greenhouse gas would remain for more than 100 years. At the same time, the nitrogen emitted as NOx and ammonia would fall to Earth, damaging fragile ecosystems; as these emissions are digested by bacteria, additional N2O is produced.

    NOx and ammonia also mix with gases in the air to form fine particulate matter. A primary contributor to air pollution, fine particulate matter kills an estimated 4 million people each year.

    “Saying that ammonia is a ‘clean’ fuel is a bit of an overstretch. Just because it is carbon-free doesn’t necessarily mean it is clean and good for public health,” Wong says.

    A multifaceted model

    The researchers wanted to paint the whole picture, capturing the environmental and public health impacts of switching the global fleet to ammonia fuel. To do so, they designed scenarios to measure how pollutant impacts change under certain technology and policy assumptions.

    From a technological point of view, they considered two ship engines. The first burns pure ammonia, which generates higher levels of unburnt ammonia but emits fewer nitrogen oxides. The second engine technology involves mixing ammonia with hydrogen to improve combustion and optimize the performance of a catalytic converter, which controls both nitrogen oxides and unburnt ammonia pollution.

    They also considered three policy scenarios: current regulations, which only limit NOx emissions in some parts of the world; a scenario that adds ammonia emission limits over North America and Western Europe; and a scenario that adds global limits on ammonia and NOx emissions.

    The researchers used a ship track model to calculate how pollutant emissions change under each scenario, and then fed the results into an air quality model that calculates the impact of ship emissions on particulate matter and ozone pollution. Finally, they estimated the effects on global public health.

    One of the biggest challenges came from a lack of real-world data, since no ammonia-powered ships are yet sailing the seas. Instead, the researchers relied on experimental ammonia combustion data from collaborators to build their model.

    “We had to come up with some clever ways to make that data useful and informative to both the technology and regulatory situations,” he says.

    A range of outcomes

    In the end, they found that with no new regulations and ship engines that burn pure ammonia, switching the entire fleet would cause 681,000 additional premature deaths each year.

    “While a scenario with no new regulations is not very realistic, it serves as a good warning of how dangerous ammonia emissions could be. And unlike NOx, ammonia emissions from shipping are currently unregulated,” Wong says.

    However, even without new regulations, using cleaner engine technology would cut the number of premature deaths down to about 80,000, which is about 20,000 fewer than are currently attributed to maritime shipping emissions. With stronger global regulations and cleaner engine technology, the number of people killed by air pollution from shipping could be reduced by about 66,000.

    “The results of this study show the importance of developing policies alongside new technologies,” Selin says. “There is a potential for ammonia in shipping to be beneficial for both climate and air quality, but that requires that regulations be designed to address the entire range of potential impacts, including both climate and air quality.”

    Ammonia’s air quality impacts would not be felt uniformly across the globe, and addressing them fully would require coordinated strategies across very different contexts. Most premature deaths would occur in East Asia, since air quality regulations are less stringent in this region. Higher levels of existing air pollution cause the formation of more particulate matter from ammonia emissions. In addition, shipping volume over East Asia is far greater than elsewhere on Earth, compounding these negative effects.

    In the future, the researchers want to continue refining their analysis. They hope to use these findings as a starting point to urge the marine industry to share engine data that researchers can use to better evaluate air quality and climate impacts. They also hope to inform policymakers about the importance and urgency of updating shipping emission regulations.

    This research was funded by the MIT Climate and Sustainability Consortium.
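
    The scenario grid the team describes (two engine technologies crossed with three policy regimes) lends itself to a simple tabulation. Here is a minimal, illustrative Python sketch, not the study's actual model: the baseline of roughly 100,000 deaths per year and the three scenario outcomes come from the figures reported above, while the data structure and everything else is assumed for illustration.

    ```python
    # Organizing the article's headline scenario outcomes; NOT the study's model.
    from dataclasses import dataclass

    BASELINE_DEATHS = 100_000  # premature deaths/yr attributed to current shipping

    @dataclass
    class Scenario:
        engine: str   # "pure ammonia" or "ammonia-hydrogen blend" (with catalyst)
        policy: str   # regulatory regime assumed in the scenario
        deaths: int   # premature deaths per year reported for the scenario

    scenarios = [
        Scenario("pure ammonia", "current regulations", 681_000),
        Scenario("ammonia-hydrogen blend", "current regulations", 80_000),
        Scenario("ammonia-hydrogen blend", "global NH3+NOx limits",
                 BASELINE_DEATHS - 66_000),
    ]

    for s in scenarios:
        delta = s.deaths - BASELINE_DEATHS
        print(f"{s.engine:24s} | {s.policy:22s} | {delta:+,} deaths/yr vs. today")
    ```

    Laying the numbers out this way makes the study's core point plain: the same fuel swings from a public health catastrophe to a net improvement depending entirely on engine technology and regulation.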

  • Study: Weaker ocean circulation could enhance CO2 buildup in the atmosphere

    As climate change advances, the ocean’s overturning circulation is predicted to weaken substantially. With such a slowdown, scientists estimate the ocean will pull down less carbon dioxide from the atmosphere. However, a slower circulation should also dredge up less carbon from the deep ocean that would otherwise be released back into the atmosphere. On balance, the ocean should maintain its role in reducing carbon emissions from the atmosphere, if at a slower pace.

    However, a new study by an MIT researcher finds that scientists may have to rethink the relationship between the ocean’s circulation and its long-term capacity to store carbon. As the circulation weakens, the ocean could instead release more carbon from the deep ocean into the atmosphere.

    The reason has to do with a previously uncharacterized feedback between the ocean’s available iron, upwelling carbon and nutrients, surface microorganisms, and a little-known class of molecules known generally as “ligands.” When the ocean circulates more slowly, all these players interact in a self-perpetuating cycle that ultimately increases the amount of carbon that the ocean outgases back to the atmosphere.

    “By isolating the impact of this feedback, we see a fundamentally different relationship between ocean circulation and atmospheric carbon levels, with implications for the climate,” says study author Jonathan Lauderdale, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “What we thought is going on in the ocean is completely overturned.”

    Lauderdale says the findings show that “we can’t count on the ocean to store carbon in the deep ocean in response to future changes in circulation. We must be proactive in cutting emissions now, rather than relying on these natural processes to buy us time to mitigate climate change.”

    His study appears today in the journal Nature Communications.

    Box flow

    In 2020, Lauderdale led a study that explored ocean nutrients, marine organisms, and iron, and how their interactions influence the growth of phytoplankton around the world. Phytoplankton are microscopic, plant-like organisms that live on the ocean surface and consume a diet of carbon and nutrients that upwell from the deep ocean, along with iron that drifts in from desert dust.

    The more phytoplankton that can grow, the more carbon dioxide they can absorb from the atmosphere via photosynthesis, and this plays a large role in the ocean’s ability to sequester carbon.

    For the 2020 study, the team developed a simple “box” model, representing conditions in different parts of the ocean as general boxes, each with a different balance of nutrients, iron, and ligands — organic molecules that are thought to be byproducts of phytoplankton. The team modeled a general flow between the boxes to represent the ocean’s larger circulation — the way seawater sinks, then is buoyed back up to the surface in different parts of the world.

    This modeling revealed that, even if scientists were to “seed” the oceans with extra iron, that iron wouldn’t have much of an effect on global phytoplankton growth. The reason was a limit set by ligands. It turns out that, if left on its own, iron is insoluble in the ocean and therefore unavailable to phytoplankton. Iron only becomes soluble at “useful” levels when linked with ligands, which keep iron in a form that plankton can consume. Lauderdale found that adding iron to one ocean region to consume additional nutrients robs other regions of nutrients that phytoplankton there need to grow. This lowers the production of ligands and the supply of iron back to the original ocean region, limiting the amount of extra carbon that would be taken up from the atmosphere.

    Unexpected switch

    Once the team published their study, Lauderdale worked the box model into a form that he could make publicly accessible, adding ocean-atmosphere carbon exchange and extending the boxes to represent more diverse environments, such as conditions similar to the Pacific, the North Atlantic, and the Southern Ocean. In the process, he tested other interactions within the model, including the effect of varying ocean circulation.

    He ran the model with different circulation strengths, expecting to see less atmospheric carbon dioxide with weaker ocean overturning — a relationship that previous studies have supported, dating back to the 1980s. But what he found instead was a clear and opposite trend: the weaker the ocean’s circulation, the more CO2 built up in the atmosphere.

    “I thought there was some mistake,” Lauderdale recalls. “Why were atmospheric carbon levels trending the wrong way?”

    When he checked the model, he found that the parameter describing ocean ligands had been left “on” as a variable. In other words, the model was calculating ligand concentrations as changing from one ocean region to another.

    On a hunch, Lauderdale turned this parameter “off,” which set ligand concentrations as constant in every modeled ocean environment, an assumption that many ocean models typically make. That one change reversed the trend, back to the assumed relationship: a weaker circulation led to reduced atmospheric carbon dioxide. But which trend was closer to the truth?

    Lauderdale looked to the scant available data on ocean ligands to see whether their concentrations were more constant or variable in the actual ocean. He found confirmation in GEOTRACES, an international study that coordinates measurements of trace elements and isotopes across the world’s oceans, which scientists can use to compare concentrations from region to region. Indeed, the molecules’ concentrations varied. If ligand concentrations do change from one region to another, then his surprising new result was likely representative of the real ocean: a weaker circulation leads to more carbon dioxide in the atmosphere.

    “It’s this one weird trick that changed everything,” Lauderdale says. “The ligand switch has revealed this completely different relationship between ocean circulation and atmospheric CO2 that we thought we understood pretty well.”

    Slow cycle

    To see what might explain the overturned trend, Lauderdale analyzed biological activity and carbon, nutrient, iron, and ligand concentrations from the ocean model under different circulation strengths, comparing scenarios where ligands were variable or constant across the various boxes.

    This revealed a new feedback: the weaker the ocean’s circulation, the less carbon and nutrients the ocean pulls up from the deep. Any phytoplankton at the surface would then have fewer resources to grow and would produce fewer byproducts (including ligands) as a result. With fewer ligands available, less iron at the surface would be usable, further reducing the phytoplankton population. There would then be fewer phytoplankton available to absorb carbon dioxide from the atmosphere and consume upwelled carbon from the deep ocean.

    “My work shows that we need to look more carefully at how ocean biology can affect the climate,” Lauderdale points out. “Some climate models predict a 30 percent slowdown in the ocean circulation due to melting ice sheets, particularly around Antarctica. This huge slowdown in overturning circulation could actually be a big problem: In addition to a host of other climate issues, not only would the ocean take up less anthropogenic CO2 from the atmosphere, but that could be amplified by a net outgassing of deep ocean carbon, leading to an unanticipated increase in atmospheric CO2 and unexpected further climate warming.”
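
    The core of the feedback can be captured in a few lines. The toy calculation below is a deliberately crude illustration of the "ligand switch" described above, not Lauderdale's published box model: production is co-limited by upwelled nutrients and ligand-bound iron, ligands are a byproduct of production, and the CO2 index is whatever upwelled deep carbon the biology fails to take up. Every parameter value is invented for illustration.

    ```python
    # Toy illustration of the ligand feedback; NOT the published MIT box model.
    K = 0.5     # half-saturation for iron bioavailability (arbitrary units)
    BETA = 0.8  # deep carbon upwelled per unit circulation (arbitrary units)

    def surface_production(circulation, variable_ligands, initial_ligand=1.0):
        """Relax the production/ligand loop to a steady state; return production."""
        ligand = initial_ligand
        production = 0.0
        for _ in range(500):
            iron_factor = ligand / (ligand + K)      # only ligand-bound iron is usable
            production = circulation * iron_factor   # nutrient x iron co-limitation
            if variable_ligands:
                ligand = production                  # ligands track biology (feedback)
        return production

    def co2_index(circulation, variable_ligands):
        """Upwelled deep carbon minus biological uptake: a crude stand-in for the
        CO2 left over to outgas (higher = more atmospheric CO2)."""
        return BETA * circulation - surface_production(circulation, variable_ligands)

    print("circulation | CO2 (variable ligands) | CO2 (constant ligands)")
    for circ in (1.0, 0.8, 0.6):
        print(f"{circ:11.1f} | {co2_index(circ, True):22.3f} | "
              f"{co2_index(circ, False):22.3f}")
    ```

    With ligands allowed to vary, weakening the circulation raises the CO2 index (0.300 to 0.380 in this toy); holding them constant recovers the traditional expectation of falling CO2, mirroring the switch described in the article.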

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to respond appropriately. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method that leverages machine learning to retain the benefits of current climate models while reducing the computational costs needed to run them.

    “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: a global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area) and supplementing them with statistical data taken from historical observations. But this method is computationally taxing: it takes a lot of time and computing power to run, and it is also expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning, which uses two machines: one generates data to go into the photo, while the other judges the sample by comparing it to actual data. If the judging machine thinks the image is fake, then the first machine has to try again until it convinces the second machine. The end goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is in its inability to handle large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in, and supplementing it with statistics from the historical data, was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical records to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. It only takes a few hours to train, and it can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: by seeing how extreme weather events will impact the country, decisions about which crops should be grown, or where populations should migrate, can be made considering a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as by the potential applications that it could lead to,” he says.
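
    To make the adversarial setup concrete, here is a minimal PyTorch sketch of adversarial downscaling. This is not the architecture from the JAMES paper; the grid sizes, network shapes, synthetic stand-in data, and the "simplified physics" consistency penalty are all assumptions chosen only to illustrate the generator-versus-discriminator loop described above.

    ```python
    # Minimal adversarial-downscaling sketch (illustrative only; NOT the paper's model).
    import torch
    import torch.nn as nn

    COARSE, FINE = 8, 32  # assumed grid sizes for the coarse and fine fields

    generator = nn.Sequential(  # coarse field -> candidate fine field
        nn.Upsample(size=(FINE, FINE), mode="bilinear", align_corners=False),
        nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
        nn.Conv2d(16, 1, 3, padding=1),
    )
    discriminator = nn.Sequential(  # fine field -> real/fake logit
        nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
        nn.Flatten(), nn.Linear(16 * (FINE // 2) ** 2, 1),
    )

    opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(200):
        fine_real = torch.randn(16, 1, FINE, FINE)  # stand-in for observations
        coarse = nn.functional.avg_pool2d(fine_real, FINE // COARSE)  # synthetic input

        # Discriminator: distinguish real fine fields from generated ones
        fake = generator(coarse).detach()
        loss_d = bce(discriminator(fine_real), torch.ones(16, 1)) + \
                 bce(discriminator(fake), torch.zeros(16, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator: fool the discriminator, plus a crude consistency penalty
        # (a stand-in for "simplified physics": the fine field should average
        # back to the coarse input it was generated from)
        fake = generator(coarse)
        consistency = (nn.functional.avg_pool2d(fake, FINE // COARSE) - coarse).pow(2).mean()
        loss_g = bce(discriminator(fake), torch.ones(16, 1)) + 10.0 * consistency
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    ```

    The consistency term is where a constraint such as a conservation law would enter in spirit: the generator is free to invent fine-scale detail, but only detail that remains compatible with the coarse field it started from.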

  • Microscopic defects in ice influence how massive glaciers flow, study shows

    As they seep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

    Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice.

    The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

    Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

    “This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

    “Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment. That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

    Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

    Micro flow

    Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level.

    In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

    “Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

    The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed. Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

    The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant. Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding.

    Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

    “Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

    A mapping match

    For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress.

    The scientists fed into the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

    “As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says. “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.”
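
    As a rough illustration of the kind of calculation such a model performs, the sketch below weighs the two deformation mechanisms and reports an effective stress sensitivity. The stress exponents follow textbook-style composite flow laws for ice (an exponent near 4 for dislocation creep; near 1.8, with grain-size dependence, for grain boundary sliding), but every prefactor and input value here is invented, and this is not the model from the PNAS study.

    ```python
    # Illustrative mechanism-weighting sketch; NOT the study's model.
    import math

    def strain_rates(stress_kpa, temp_c, grain_size_mm):
        """Contributions of the two micro-scale mechanisms (arbitrary units)."""
        arrhenius = math.exp(-6000.0 / (temp_c + 273.15))  # made-up activation term
        dislocation = 1e-8 * arrhenius * stress_kpa ** 4.0          # grain-size independent
        sliding = 2.4e-3 * arrhenius * stress_kpa ** 1.8 / grain_size_mm ** 1.4
        return dislocation, sliding

    def effective_stress_exponent(stress_kpa, temp_c, grain_size_mm):
        """Blend each mechanism's stress exponent by its share of total deformation.
        A higher exponent means the ice responds more strongly to added stress."""
        disl, gbs = strain_rates(stress_kpa, temp_c, grain_size_mm)
        return (4.0 * disl + 1.8 * gbs) / (disl + gbs)

    # Finer-grained ice favors grain boundary sliding and is less stress-sensitive
    for grain_mm in (1.0, 5.0, 20.0):
        n = effective_stress_exponent(stress_kpa=100.0, temp_c=-20.0,
                                      grain_size_mm=grain_mm)
        print(f"grain size {grain_mm:5.1f} mm -> effective stress exponent n = {n:.2f}")
    ```

    With these invented constants, the effective exponent slides from about 2 for fine-grained ice toward 4 for coarse-grained ice, which is the kind of region-to-region variability in stress sensitivity the article describes.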

  • Getting to systemic sustainability

    Add up the commitments from the Paris Agreement, the Glasgow Climate Pact, and various pledges made by cities, countries, and businesses, and the world would be able to hold the global average temperature increase to 1.9 degrees Celsius above preindustrial levels, says Ani Dasgupta, the president and chief executive officer of the World Resources Institute (WRI).

    While that is well above the 1.5 C threshold that many scientists agree would limit the most severe impacts of climate change, it is below the 2.0 C threshold beyond which even more catastrophic impacts, such as the collapse of ice sheets and a 30-foot rise in sea levels, could occur.

    However, Dasgupta notes, actions have so far not matched up with commitments.

    “There’s a huge gap between commitment and outcomes,” Dasgupta said during his talk, “Energizing the global transition,” at the 2024 Earth Day Colloquium, co-hosted by the MIT Energy Initiative and the MIT Department of Earth, Atmospheric and Planetary Sciences and sponsored by the Climate Nucleus.

    Dasgupta noted that oil companies did $6 trillion worth of business across the world last year — $1 trillion more than they were planning. About 7 percent of the world’s remaining tropical forests were destroyed during that same time, he added, and global inequality grew even worse than before.

    “None of these things were illegal, because the system we have today produces these outcomes,” he said. “My point is that it’s not one thing that needs to change. The whole system needs to change.”

    People, climate, and nature

    Dasgupta, who previously held positions in nonprofits in India and at the World Bank, is a recognized leader in sustainable cities, poverty alleviation, and building cultures of inclusion. Under his leadership, WRI, a global research nonprofit that studies sustainable practices with the goal of fundamentally transforming the world’s food, land and water, energy, and cities, adopted a new five-year strategy called “Getting the Transition Right for People, Nature, and Climate 2023-2027.” It focuses on creating new economic opportunities to meet people’s essential needs, restore nature, and rapidly lower emissions, while building resilient communities. In fact, during his talk, Dasgupta said that his organization has moved away from talking about initiatives solely in terms of their impact on greenhouse gas emissions, instead taking a more holistic view of sustainability.

    “There is no net zero without nature,” Dasgupta said. He showed a slide with a graphic illustrating potential progress toward net-zero goals. “If nature gets diminished, that chart becomes even steeper. It’s very steep right now, but natural systems absorb carbon dioxide. So, if the natural systems keep getting destroyed, that curve becomes harder and harder.”

    A focus on people is necessary, Dasgupta said, in part because of the unequal climate impacts that the rich and the poor are likely to face in the coming years. “If you made it to this room, you will not be impacted by climate change,” he said. “You have resources to figure out what to do about it. The people who get impacted are people who don’t have resources. It is immensely unfair. Our belief is, if we don’t do climate policy that helps people directly, we won’t be able to make progress.”

    Where to start?

    Although Dasgupta stressed that systemic change is needed to bring carbon emissions in line with long-term climate goals, he made the case that it is unrealistic to implement this change around the globe all at once. “This transition will not happen in 196 countries at the same time,” he said. “The question is, how do we get to the tipping point so that it happens at scale? We’ve worked the past few years to ask the question, what is it you need to do to create this tipping point for change?”

    Analysts at WRI looked for countries that are large producers of carbon, those with substantial tropical forest cover, and those with large numbers of people living in poverty. “We basically tried to draw a map of, where are the biggest challenges for climate change?” Dasgupta said.

    That map features a relative handful of countries, including the United States, Mexico, China, Brazil, South Africa, India, and Indonesia. Dasgupta said, “Our argument is that, if we could figure out and focus all our efforts to help these countries transition, that will create a ripple effect — of understanding technology, understanding the market, understanding capacity, and understanding the politics of change that will unleash how the rest of these regions will bring change.”

    Spotlight on the subcontinent

    Dasgupta used one of these countries, his native India, to illustrate the nuanced challenges and opportunities presented by various markets around the globe. In India, he noted, there are around 3 million projected jobs tied to the country’s transition to renewable energy. However, that number is dwarfed by the 10 to 12 million jobs per year the Indian economy needs to create simply to keep up with population growth.

    “Every developing country faces this question — how to keep growing in a way that reduces their carbon footprint,” Dasgupta said.

    Five states in India worked with WRI to pool their buying power and procure 5,000 electric buses, saving 60 percent of the cost as a result. Over the next two decades, Dasgupta said, the fleet of electric buses in those five states is expected to increase to 800,000.

    In the Indian state of Rajasthan, Dasgupta said, 59 percent of power already comes from solar energy. At times, Rajasthan produces more solar power than it can use, and officials are exploring ways to either store the excess energy or sell it to other states. But in another state, Jharkhand, where much of the country’s coal is sourced, only 5 percent of power comes from solar. Officials in Jharkhand have reached out to WRI to discuss how to transition their energy economy, as they recognize that coal will fall out of favor in the future, Dasgupta said.

    “The complexities of the transition are enormous in a country this big,” Dasgupta said. “This is true in most large countries.”

    The road ahead

    Despite the challenges ahead, the colloquium was also marked by notes of optimism. In his opening remarks, Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, pointed out how much progress has been made on environmental cleanup since the first Earth Day in 1970. “The world was a very different, much dirtier, place in many ways,” Stoner said. “Our air was a mess, our waterways were a mess, and it was beginning to be noticeable. Since then, Earth Day has become an important part of the fabric of American and global society.”

    While Dasgupta said that the world presently lacks the “orchestration” among various stakeholders needed to bring climate change under control, he expressed hope that collaboration in key countries could accelerate progress.

    “I strongly believe that what we need is a very different way of collaborating radically — across organizations like yours, organizations like ours, businesses, and governments,” Dasgupta said. “Otherwise, this transition will not happen at the scale and speed we need.”

  • H2 underground

    In 1987, in a village in Mali, workers were digging a water well when they felt a rush of air. One of the workers was smoking a cigarette, and the air caught fire, burning with a clear blue flame. The well was capped at the time, but in 2012 it was tapped to provide energy for the village, powering a generator for nine years.

    The fuel source: geologic hydrogen.

    For decades, hydrogen has been discussed as a potentially revolutionary fuel. But efforts to produce “green” hydrogen (splitting water into hydrogen and oxygen using renewable electricity), “grey” hydrogen (making hydrogen from methane and releasing the byproduct carbon dioxide (CO2) into the atmosphere), “brown” hydrogen (produced through the gasification of coal), and “blue” hydrogen (making hydrogen from methane but capturing the CO2) have thus far proven expensive, energy-intensive, or both.

    Enter geologic hydrogen. Also known as “orange,” “gold,” “white,” “natural,” and even “clear” hydrogen, geologic hydrogen is generated by natural geochemical processes in the Earth’s crust. While there is still much to learn, a growing number of researchers and industry leaders are hopeful that it may turn out to be an abundant and affordable resource lying right beneath our feet.

    “There’s a tremendous amount of uncertainty about this,” noted Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, in his opening remarks at the MIT Energy Initiative (MITEI) Spring Symposium. “But the prospect of readily producible clean hydrogen showing up all over the world is a potential near-term game changer.”

    A new hope for hydrogen

    This April, MITEI gathered researchers, industry leaders, and academic experts from around MIT and the world to discuss the challenges and opportunities posed by geologic hydrogen in a daylong symposium entitled “Geologic hydrogen: Are orange and gold the new green?” The field is so new that, until a year ago, the U.S. Department of Energy (DOE)’s website incorrectly claimed that hydrogen occurs naturally on Earth only in compound forms, chemically bonded to other elements.

    “There’s a common misconception that hydrogen doesn’t occur naturally on Earth,” said Geoffrey Ellis, a research geologist with the U.S. Geological Survey. He noted that natural hydrogen production tends to occur in different locations from where oil and natural gas are likely to be discovered, which explains why geologic hydrogen discoveries have been relatively rare, at least until recently.

    “Petroleum exploration is not targeting hydrogen,” Ellis said. “Companies are simply not really looking for it, they’re not interested in it, and oftentimes they don’t measure for it. The energy industry spends billions of dollars every year on exploration with very sophisticated technology, and still they drill dry holes all the time. So I think it’s naive to think that we would suddenly be finding hydrogen all the time when we’re not looking for it.”

    In fact, the number of researchers and startup energy companies making targeted efforts to characterize geologic hydrogen has increased over the past several years — and these searches have uncovered new prospects, said Mary Haas, a venture partner at Breakthrough Energy Ventures. “We’ve seen a dramatic uptick in exploratory activity, now that there is a focused effort by a small community worldwide. At Breakthrough Energy, we are excited about the potential of this space, as well as our role in accelerating its progress,” she said.

    Haas noted that if geologic hydrogen could be produced at $1 per kilogram, this would be consistent with the DOE’s targeted “liftoff” point for the energy source. “If that happens,” she said, “it would be transformative.” She noted that only a small portion of identified hydrogen sites are currently under commercial exploration, and she cautioned that it’s not yet clear how large a role the resource might play in the transition to green energy. But, she said, “It’s worthwhile and important to find out.”

    Inventing a new energy subsector

    Geologic hydrogen is produced when water reacts with iron-rich minerals in rock. Researchers and industry are exploring how to stimulate this natural production by pumping water into promising deposits.

    In any new exploration area, teams must ask a series of questions to qualify the site, said Avon McIntyre, the executive director of HyTerra Ltd., an Australian company focused on the exploration and production of geologic hydrogen. These questions include: Is the geology favorable? Does local legislation allow for exploration and production? Does the site offer a clear path to value? And what are the carbon implications of producing hydrogen at the site?

    “We have to be humble,” McIntyre said. “We can’t be too prescriptive and think that we’ll leap straight into success. We have a unique opportunity to stop and think about what this industry will look like, how it will work, and how we can bring together various disciplines.” This was a theme that arose multiple times over the course of the symposium: the idea that many different stakeholders — including those from academia, industry, and government — will need to work together to explore the viability of geologic hydrogen and bring it to market at scale.

    In addition to the potential for hydrogen production to give rise to greenhouse gas emissions (in cases, for instance, where hydrogen deposits are contaminated with natural gas), researchers and industry must also consider landscape deformation and even potential seismic implications, said Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the MIT Department of Earth, Atmospheric and Planetary Sciences. The surface impacts of hydrogen exploration and production will likely be similar to those caused by the hydro-fracturing process (“fracking”) used in oil and natural gas extraction, Hager said.

    “There will be unavoidable surface deformation. In most places, you don’t want this if there’s infrastructure around,” Hager said. “Seismicity in the stimulated zone itself should not be a problem, because the areas are tested first. But we need to avoid stressing surrounding brittle rocks.”

    McIntyre noted that the commercial case for hydrogen remains a challenge to quantify, since there is not even a “spot” price that companies can use to make economic calculations. Early on, he said, capturing helium at hydrogen exploration sites could be a path to early cash flow, but that may ultimately serve as a “distraction” as teams attempt to scale up to the primary goal of hydrogen production. He also noted that it is not yet clear whether hard rock, soft rock, or underwater environments hold the most potential for geologic hydrogen, though all show promise.

    “If you stack all of these things together,” McIntyre said, “what we end up doing may look very different from what we think we’re going to do right now.”

    The path ahead

    While the long-term prospects for geologic hydrogen are shrouded in uncertainty, most speakers at the symposium struck a tone of optimism. Ellis noted that the DOE has dedicated $20 million in funding to a stimulated hydrogen program. Paris Smalls, the co-founder and CEO of Eden GeoPower Inc., said “we think there is a path” to producing geologic hydrogen below the $1 per kilogram threshold. And Iwnetim Abate, an assistant professor in the MIT Department of Materials Science and Engineering, said that geologic hydrogen opens up the idea of Earth as a “factory to produce clean fuels,” utilizing subsurface heat and pressure instead of relying on burning fossil fuels or natural gas for the same purpose.

    “Earth has had 4.6 billion years to do these experiments,” said Oliver Jagoutz, a professor of geology in the MIT Department of Earth, Atmospheric and Planetary Sciences. “So there is probably a very good solution out there.”

    Alexis Templeton, a professor of geological sciences at the University of Colorado at Boulder, made the case for moving quickly. “Let’s go to pilot, faster than you might think,” she said. “Why? Because we do have some systems that we understand. We could test the engineering approaches and make sure that we are doing the right tool development, the right technology development, the right experiments in the lab. To do that, we desperately need data from the field.”

    “This is growing so fast,” Templeton added. “The momentum and the development of geologic hydrogen is really quite substantial. We need to start getting data at scale. And then, I think, more people will jump off the sidelines very quickly.”
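
    For readers wondering what "water reacts with iron-rich minerals" looks like chemically, one commonly cited pathway (a representative textbook example, not necessarily the reaction at any particular prospect discussed at the symposium) is the serpentinization-style oxidation of the iron-rich olivine end-member fayalite, in which water is reduced to hydrogen gas:

    \[
    3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
    \]

    Here the ferrous iron in fayalite is oxidized to magnetite, and the liberated electrons reduce water to H2; analogous reactions in other iron-bearing rocks are what the stimulation approaches described above aim to accelerate by pumping water into promising deposits.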