More stories

  • Atmospheric observations in China show rise in emissions of a potent greenhouse gas

    Achieving the aspirational goal of the Paris Agreement on climate change — limiting the increase in global average surface temperature to 1.5 degrees Celsius above preindustrial levels — will require its 196 signatories to dramatically reduce their greenhouse gas (GHG) emissions. These greenhouse gases differ widely in their global warming potential (GWP), or ability to absorb radiative energy and thereby warm the Earth’s surface. For example, measured over a 100-year period, the GWP of methane is about 28 times that of carbon dioxide (CO2), and the GWP of sulfur hexafluoride (SF6) is 24,300 times that of CO2, according to the Intergovernmental Panel on Climate Change (IPCC) Sixth Assessment Report.
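    To make those multipliers concrete, here is a minimal sketch, not drawn from the report itself, that converts a mass of each gas into CO2-equivalents using the 100-year GWP values quoted above (the one-tonne quantities are hypothetical):

    ```python
    # Hypothetical illustration: converting emissions to CO2-equivalents using the
    # 100-year GWP values cited above from the IPCC Sixth Assessment Report.
    GWP_100 = {"CO2": 1, "CH4": 28, "SF6": 24_300}

    def co2_equivalent(gas: str, tonnes: float) -> float:
        """Return CO2-equivalent tonnes for a given mass of a greenhouse gas."""
        return tonnes * GWP_100[gas]

    # One tonne of SF6 warms roughly as much as 24,300 tonnes of CO2 over 100 years.
    for gas in GWP_100:
        print(f"1 t {gas} = {co2_equivalent(gas, 1):,.0f} t CO2-eq")
    ```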

    Used primarily in high-voltage electrical switchgear in electric power grids, SF6 is one of the most potent greenhouse gases on Earth. In the 21st century, atmospheric concentrations of SF6 have risen sharply along with global electric power demand, threatening the world’s efforts to stabilize the climate. This heightened demand for electric power is particularly pronounced in China, which has dominated the expansion of the global power industry in the past decade. Quantifying China’s contribution to global SF6 emissions — and pinpointing its sources in the country — could lead that nation to implement new measures to reduce them, and thereby reduce, if not eliminate, an impediment to the Paris Agreement’s aspirational goal. 

    To that end, a new study by researchers at the MIT Joint Program on the Science and Policy of Global Change, Fudan University, Peking University, University of Bristol, and Meteorological Observation Center of China Meteorological Administration determined total SF6 emissions in China over 2011-21 from atmospheric observations collected from nine stations within a Chinese network, including one station from the Advanced Global Atmospheric Gases Experiment (AGAGE) network. For comparison, global total emissions were determined from five globally distributed, relatively unpolluted “background” AGAGE stations, involving additional researchers from the Scripps Institution of Oceanography and CSIRO, Australia’s National Science Agency.

    The researchers found that SF6 emissions in China almost doubled from 2.6 gigagrams (Gg) per year in 2011, when they accounted for 34 percent of global SF6 emissions, to 5.1 Gg per year in 2021, when they accounted for 57 percent of global total SF6 emissions. This increase from China over the 10-year period — some of it emerging from the country’s less-populated western regions — was larger than the global total SF6 emissions rise, highlighting the importance of lowering SF6 emissions from China in the future.
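    Those percentages also imply the global totals, which can be back-calculated with a few lines of arithmetic. The sketch below simply reproduces the comparison made in the paragraph above from the quoted figures; it is an illustration, not data or code from the study:

    ```python
    # Back-of-the-envelope check of the figures quoted above (Gg = gigagrams per year).
    china_2011, share_2011 = 2.6, 0.34
    china_2021, share_2021 = 5.1, 0.57

    global_2011 = china_2011 / share_2011   # roughly 7.6 Gg/yr
    global_2021 = china_2021 / share_2021   # roughly 8.9 Gg/yr

    china_rise = china_2021 - china_2011     # roughly 2.5 Gg/yr
    global_rise = global_2021 - global_2011  # roughly 1.3 Gg/yr

    # China's rise exceeds the rise in the global total, implying emissions fell elsewhere.
    print(f"Global total: {global_2011:.1f} -> {global_2021:.1f} Gg/yr")
    print(f"China rise: {china_rise:.1f} Gg/yr vs global rise: {global_rise:.1f} Gg/yr")
    ```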

    The open-access study, which appears in the journal Nature Communications, explores prospects for future SF6 emissions reduction in China.

    “Adopting maintenance practices that minimize SF6 leakage rates or using SF6-free equipment or SF6 substitutes in the electric power grid will benefit greenhouse-gas mitigation in China,” says Minde An, a postdoc at the MIT Center for Global Change Science (CGCS) and the study’s lead author. “We see our findings as a first step in quantifying the problem and identifying how it can be addressed.”

    Once emitted, SF6 is expected to persist in the atmosphere for more than 1,000 years, raising the stakes for policymakers in China and around the world.

    “Any increase in SF6 emissions this century will effectively alter our planet’s radiative budget — the balance between incoming energy from the sun and outgoing energy from the Earth — far beyond the multi-decadal time frame of current climate policies,” says MIT Joint Program and CGCS Director Ronald Prinn, a coauthor of the study. “So it’s imperative that China and all other nations take immediate action to reduce, and ultimately eliminate, their SF6 emissions.”

    The study was supported by the National Key Research and Development Program of China and Shanghai B&R Joint Laboratory Project, the U.S. National Aeronautics and Space Administration, and other funding agencies.

  • A delicate dance

    In early 2022, economist Catherine Wolfram was at her desk in the U.S. Treasury building. She could see the east wing of the White House, just steps away.

    Russia had just invaded Ukraine, and Wolfram was thinking about Russia, oil, and sanctions. She and her colleagues had been tasked with figuring out how to restrict the revenues that Russia was using to fuel its brutal war while keeping Russian oil available and affordable to the countries that depended on it.

    Now the William F. Pounds Professor of Energy Economics at MIT, Wolfram was on leave from academia to serve as deputy assistant secretary for climate and energy economics.

    Working for Treasury Secretary Janet L. Yellen, Wolfram and her colleagues developed dozens of models, forecasts, and projections. It struck her, she said later, that “huge decisions [affecting the global economy] would be made on the basis of spreadsheets that I was helping create.” Wolfram composed a memo to the Biden administration and hoped her projections would pan out the way she believed they would.

    Tackling conundrums that weigh competing, sometimes contradictory, interests has defined much of Wolfram’s career.

    Wolfram specializes in the economics of energy markets. She looks at ways to decarbonize global energy systems while recognizing that energy drives economic development, especially in the developing world.

    “The way we’re currently making energy is contributing to climate change. There’s a delicate dance we have to do to make sure that we treat this important industry carefully, but also transform it rapidly to a cleaner, decarbonized system,” she says.

    Economists as influencers

    While Wolfram was growing up in a suburb of St. Paul, Minnesota, her father was a law professor and her mother taught English as a second language. Her mother helped spawn Wolfram’s interest in other cultures and her love of travel, but it was an experience closer to home that sparked her awareness of the effect of human activities on the state of the planet.

    Minnesota’s nickname is “Land of 10,000 Lakes.” Wolfram remembers swimming in a nearby lake sometimes covered by a thick sludge of algae. “Thinking back on it, it must’ve had to do with fertilizer runoff,” she says. “That was probably the first thing that made me think about the environment and policy.”

    In high school, Wolfram liked “the fact that you could use math to understand the world. I also was interested in the types of questions about human behavior that economists were thinking about.

    “I definitely think economics is good at sussing out how different actors are likely to react to a particular policy and then designing policies with that in mind.”

    After receiving a bachelor’s degree in economics from Harvard University in 1989, Wolfram worked with a Massachusetts agency that governed rate hikes for utilities. Seeing its reliance on research, she says, illuminated the role academics could play in policy setting. It made her think she could make a difference from within academia.

    While pursuing a PhD in economics from MIT, Wolfram counted Paul L. Joskow, the Elizabeth and James Killian Professor of Economics and former director of the MIT Center for Energy and Environmental Policy Research, and Nancy L. Rose, the Charles P. Kindleberger Professor of Applied Economics, among her mentors and influencers.

    After spending 1996 to 2000 as an assistant professor of economics at Harvard, she joined the faculty at the Haas School of Business at the University of California at Berkeley.

    At Berkeley, it struck Wolfram that while she labored over ways to marginally boost the energy efficiency of U.S. power plants, the economies of China and India were growing rapidly, with a corresponding growth in energy use and carbon dioxide emissions. “It hit home that to understand the climate issue, I needed to understand energy demand in the developing world,” she says.

    The problem was that the developing world didn’t always offer up the kind of neatly packaged, comprehensive data economists relied on. She wondered if, by relying on readily accessible data, the field was looking under the lamppost — while losing sight of what the rest of the street looked like.

    To make up for a lack of available data on the state of electrification in sub-Saharan Africa, for instance, Wolfram developed and administered surveys to individual, remote rural households using on-the-ground field teams.

    Her results suggested that in the world’s poorest countries, the challenges involved in expanding the grid in rural areas should be weighed against potentially greater economic and social returns on investments in the transportation, education, or health sectors.

    Taking the lead

    Within months of Wolfram’s memo to the Biden administration, leaders of the intergovernmental political forum Group of Seven (G7) agreed to a price cap on Russian oil. Tankers from coalition countries would only transport Russian crude sold at or below the price cap level, initially set at $60 per barrel.

    “A price cap was not something that had ever been done before,” Wolfram says. “In some ways, we were making it up out of whole cloth. It was exciting to see that I wrote one of the original memos about it, and then literally three-and-a-half months later, the G7 was making an announcement.

    “As economists and as policymakers, we must set the parameters and get the incentives right. The price cap was basically asking developing countries to buy cheap oil, which was consistent with their incentives.”

    In May 2023, the U.S. Department of the Treasury reported that despite widespread initial skepticism about the price cap, market participants and geopolitical analysts believe it is accomplishing its goals of restricting Russia’s oil revenues while maintaining the supply of Russian oil and keeping energy costs in check for consumers and businesses around the world.

    Wolfram held the U.S. Treasury post from March 2021 to October 2022 while on leave from UC Berkeley. In July 2023, she joined MIT Sloan School of Management partly to be geographically closer to the policymakers of the nation’s capital. She’s also excited about the work taking place elsewhere at the Institute to stay ahead of climate change.

    Her time in D.C. was eye-opening, particularly in terms of the leadership power of the United States. She worries that the United States is falling prey to “lost opportunities” in terms of addressing climate change. “We were showing real leadership on the price cap, and if we could only do that on climate, I think we could make faster inroads on a global agreement,” she says.

    Now focused on structuring global agreements in energy policy among developed and developing countries, she’s considering how the United States can take advantage of its position as a world leader. “We need to be thinking about how what we do in the U.S. affects the rest of the world from a climate perspective. We can’t go it alone.

    “The U.S. needs to be more aligned with the European Union, Canada, and Japan to try to find areas where we’re taking a common approach to addressing climate change,” she says. She will touch on some of those areas in the class she will teach in spring 2024 titled “Climate and Energy in the Global Economy,” offered through MIT Sloan.

    Looking ahead, she says, “I’m a techno optimist. I believe in human innovation. I’m optimistic that we’ll find ways to live with climate change and, hopefully, ways to minimize it.”

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Engineers find a new way to convert carbon dioxide into useful products

    MIT chemical engineers have devised an efficient way to convert carbon dioxide to carbon monoxide, a chemical precursor that can be used to generate useful compounds such as ethanol and other fuels.

    If scaled up for industrial use, this process could help to remove carbon dioxide from power plants and other sources, reducing the amount of greenhouse gases that are released into the atmosphere.

    “This would allow you to take carbon dioxide from emissions or dissolved in the ocean, and convert it into profitable chemicals. It’s really a path forward for decarbonization because we can take CO2, which is a greenhouse gas, and turn it into things that are useful for chemical manufacture,” says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering and the senior author of the study.

    The new approach uses electricity to perform the chemical conversion, with help from a catalyst that is tethered to the electrode surface by strands of DNA. This DNA acts like Velcro to keep all the reaction components in close proximity, making the reaction much more efficient than if all the components were floating in solution.

    Furst has started a company called Helix Carbon to further develop the technology. Former MIT postdoc Gang Fan is the lead author of the paper, which appears in the Journal of the American Chemical Society Au. Other authors include Nathan Corbin PhD ’21, Minju Chung PhD ’23, former MIT postdocs Thomas Gill and Amruta Karbelkar, and Evan Moore ’23.

    Breaking down CO2

    Converting carbon dioxide into useful products requires first turning it into carbon monoxide. One way to do this is with electricity, but the amount of energy required for that kind of electrochemical conversion makes the process prohibitively expensive.

    To try to bring down those costs, researchers have tried using electrocatalysts, which can speed up the reaction and reduce the amount of energy that needs to be added to the system. One type of catalyst used for this reaction is a class of molecules known as porphyrins, which contain metals such as iron or cobalt and are similar in structure to the heme molecules that carry oxygen in blood. 

    During this type of electrochemical reaction, carbon dioxide is dissolved in water within an electrochemical device, which contains an electrode that drives the reaction. The catalysts are also suspended in the solution. However, this setup isn’t very efficient because the carbon dioxide and the catalysts need to encounter each other at the electrode surface, which doesn’t happen very often.

    To make the reaction occur more frequently, which would boost the efficiency of the electrochemical conversion, Furst began working on ways to attach the catalysts to the surface of the electrode. DNA seemed to be the ideal choice for this application.

    “DNA is relatively inexpensive, you can modify it chemically, and you can control the interaction between two strands by changing the sequences,” she says. “It’s like a sequence-specific Velcro that has very strong but reversible interactions that you can control.”

    To attach single strands of DNA to a carbon electrode, the researchers used two “chemical handles,” one on the DNA and one on the electrode. These handles can be snapped together, forming a permanent bond. A complementary DNA sequence is then attached to the porphyrin catalyst, so that when the catalyst is added to the solution, it will bind reversibly to the DNA that’s already attached to the electrode — just like Velcro.

    Once this system is set up, the researchers apply a potential (or bias) to the electrode, and the catalyst uses this energy to convert carbon dioxide in the solution into carbon monoxide. The reaction also generates a small amount of hydrogen gas, from the water. After the catalysts wear out, they can be released from the surface by heating the system to break the reversible bonds between the two DNA strands, and replaced with new ones.

    An efficient reaction

    Using this approach, the researchers were able to boost the Faradaic efficiency of the reaction to 100 percent, meaning that essentially all of the electrical charge that flows into the system drives the desired chemical reaction, with none wasted on side reactions. When the catalysts are not tethered by DNA, the Faradaic efficiency is only about 40 percent.
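    The article does not spell out the formula, but the standard definition of Faradaic efficiency is the fraction of charge passed that ends up in the desired product. The sketch below uses that textbook definition with hypothetical numbers; it is not code from the paper:

    ```python
    # Standard (textbook) definition of Faradaic efficiency, with hypothetical numbers.
    F = 96_485  # Faraday constant, coulombs per mole of electrons

    def faradaic_efficiency(mol_product: float, electrons_per_mol: int, charge_passed_c: float) -> float:
        """FE = (moles of product * electrons per molecule * F) / total charge passed."""
        return mol_product * electrons_per_mol * F / charge_passed_c

    # CO2 -> CO is a two-electron reduction. If 1 mmol of CO is produced while
    # about 193 coulombs of charge are passed, the Faradaic efficiency is 100 percent.
    print(faradaic_efficiency(1e-3, 2, 1e-3 * 2 * F))  # -> 1.0
    ```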

    This technology could be scaled up for industrial use fairly easily, Furst says, because the carbon electrodes the researchers used are much less expensive than conventional metal electrodes. The catalysts are also inexpensive, as they don’t contain any precious metals, and only a small concentration of the catalyst is needed on the electrode surface.

    By swapping in different catalysts, the researchers plan to try making other products such as methanol and ethanol using this approach. Helix Carbon, the company started by Furst, is also working on further developing the technology for potential commercial use.

    The research was funded by the U.S. Army Research Office, the CIFAR Azrieli Global Scholars Program, the MIT Energy Initiative, and the MIT Deshpande Center.

  • Making the clean energy transition work for everyone

    The clean energy transition is already underway, but how do we make sure it happens in a manner that is affordable, sustainable, and fair for everyone?

    That was the overarching question at this year’s MIT Energy Conference, which took place March 11 and 12 in Boston and was titled “Short and Long: A Balanced Approach to the Energy Transition.”

    Each year, the student-run conference brings together leaders in the energy sector to discuss the progress and challenges they see in their work toward a greener future. Participants come from research, industry, government, academia, and the investment community to network and exchange ideas over two whirlwind days of keynote talks, fireside chats, and panel discussions.

    Several participants noted that clean energy technologies are already cost-competitive with fossil fuels, but changing the way the world works requires more than just technology.

    “None of this is easy, but I think developing innovative new technologies is really easy compared to the things we’re talking about here, which is how to blend social justice, soft engineering, and systems thinking that puts people first,” Daniel Kammen, a distinguished professor of energy at the University of California at Berkeley, said in a keynote talk. “While clean energy has a long way to go, it is more than ready to transition us from fossil fuels.”

    The event also featured a keynote discussion between MIT President Sally Kornbluth and MIT’s Kyocera Professor of Ceramics Yet-Ming Chiang, in which Kornbluth discussed her first year at MIT as well as a recently announced, campus-wide effort to solve critical climate problems known as the Climate Project at MIT.

    “The reason I wanted to come to MIT was I saw that MIT has the potential to solve the world’s biggest problems, and first among those for me was the climate crisis,” Kornbluth said. “I’m excited about where we are, I’m excited about the enthusiasm of the community, and I think we’ll be able to make really impactful discoveries through this project.”

    Fostering new technologies

    Several panels convened experts in new or emerging technology fields to discuss what it will take for their solutions to contribute to deep decarbonization.

    “The fun thing and challenging thing about first-of-a-kind technologies is they’re all kind of different,” said Jonah Wagner, principal assistant director for industrial innovation and clean energy in the U.S. Office of Science and Technology Policy. “You can map their growth against specific challenges you expect to see, but every single technology is going to face their own challenges, and every single one will have to defy an engineering barrier to get off the ground.”

    Among the emerging technologies discussed was next-generation geothermal energy, which uses new techniques to extract heat from the Earth’s crust in new places.

    A promising aspect of the technology is that it can leverage existing infrastructure and expertise from the oil and gas industry. Many newly developed techniques for geothermal production, for instance, use the same drills and rigs as those used for hydraulic fracturing.

    “The fact that we have a robust ecosystem of oil and gas labor and technology in the U.S. makes innovation in geothermal much more accessible compared to some of the challenges we’re seeing in nuclear or direct-air capture, where some of the supply chains are disaggregated around the world,” said Gabrial Malek, chief of staff at the geothermal company Fervo Energy.

    Another technology generating excitement — if not net energy quite yet — is fusion, the process of combining, or fusing, light atoms to form heavier ones for a net energy gain, the same process that powers the sun. MIT spinout Commonwealth Fusion Systems (CFS) has already validated many aspects of its approach for achieving fusion power, and the company’s unique partnership with MIT was discussed in a panel on the industry’s progress.

    “We’re standing on the shoulders of decades of research from the scientific community, and we want to maintain those ties even as we continue developing our technology,” CFS Chief Science Officer Brandon Sorbom PhD ’17 said, noting that CFS is one of the largest company sponsors of research at MIT and collaborates with institutions around the world. “Engaging with the community is a really valuable lever to get new ideas and to sanity check our own ideas.”

    Sorbom said that as CFS advances fusion energy, the company is thinking about how it can replicate its processes to lower costs and maximize the technology’s impact around the planet.

    “For fusion to work, it has to work for everyone,” Sorbom said. “I think the affordability piece is really important. We can’t just build this technological jewel that only one class of nations can afford. It has to be a technology that can be deployed throughout the entire world.”

    The event also gave students — many from MIT — a chance to learn more about careers in energy and featured a startup showcase, in which dozens of companies displayed their energy and sustainability solutions.

    “More than 700 people are here from every corner of the energy industry, so there are so many folks to connect with and help me push my vision into reality,” says GreenLIB CEO Fred Rostami, whose company recycles lithium-ion batteries. “The good thing about the energy transition is that a lot of these technologies and industries overlap, so I think we can enable this transition by working together at events like this.”

    A focused climate strategy

    Kornbluth noted that when she came to MIT, a large percentage of students and faculty were already working on climate-related technologies. With the Climate Project at MIT, she wanted to help ensure the whole of those efforts is greater than the sum of its parts.

    The project is organized around six distinct missions, including decarbonizing energy and industry, empowering frontline communities, and building healthy, resilient cities. Kornbluth says the mission areas will help MIT community members collaborate around multidisciplinary challenges. Her team, which includes a committee of faculty advisors, has begun to search for the leads of each mission area, and Kornbluth said she is planning to appoint a vice president for climate at the Institute.

    “I want someone who has the purview of the whole Institute and will report directly to me to help make sure this project stays on track,” Kornbluth explained.

    In his conversation about the initiative with Kornbluth, Yet-Ming Chiang said projects will be funded based on their potential to reduce emissions and make the planet more sustainable at scale.

    “Projects should be very high risk, with very high impact,” Chiang explained. “They should have a chance to prove themselves, and those efforts should not be limited by resources, only by time.”

    In discussing her vision of the climate project, Kornbluth alluded to the “short and long” theme of the conference.

    “It’s about balancing research and commercialization,” Kornbluth said. “The climate project has a very variable timeframe, and I think universities are the sector that can think about the things that might be 30 years out. We have to think about the incentives across the entire innovation pipeline and how we can keep an eye on the long term while making sure the short-term things get out rapidly.”

  • Study finds lands used for grazing can worsen or help climate change

    When it comes to global climate change, livestock grazing can be either a blessing or a curse, according to a new study, which offers clues on how to tell the difference.

    If managed properly, the study shows, grazing can actually increase the amount of carbon from the air that gets stored in the ground and sequestered for the long run. But if there is too much grazing, soil erosion can result, and the net effect is to cause more carbon losses, so that the land becomes a net carbon source, instead of a carbon sink. And the study found that the latter is far more common around the world today.

    The new work, published today in the journal Nature Climate Change, provides ways to determine the tipping point between the two, for grazing lands in a given climate zone and soil type. It also provides an estimate of the total amount of carbon that has been lost over past decades due to livestock grazing, and how much could be removed from the atmosphere if optimized grazing management were implemented. The study was carried out by Cesar Terrer, an assistant professor of civil and environmental engineering at MIT; Shuai Ren, a PhD student at the Chinese Academy of Sciences whose thesis is co-supervised by Terrer; and four others.

    “This has been a matter of debate in the scientific literature for a long time,” Terrer says. “In general experiments, grazing decreases soil carbon stocks, but surprisingly, sometimes grazing increases soil carbon stocks, which is why it’s been puzzling.”

    What happens, he explains, is that “grazing could stimulate vegetation growth through easing resource constraints such as light and nutrients, thereby increasing root carbon inputs to soils, where carbon can stay for centuries or millennia.”

    But that only works up to a certain point, the team found after a careful analysis of 1,473 soil carbon observations from different grazing studies from many locations around the world. “When you cross a threshold in grazing intensity, or the amount of animals grazing there, that is when you start to see sort of a tipping point — a strong decrease in the amount of carbon in the soil,” Terrer explains.

    That loss is thought to be primarily from increased soil erosion on the denuded land. And with that erosion, Terrer says, “basically you lose a lot of the carbon that you have been locking in for centuries.”

    The various studies the team compiled, although they differed somewhat, essentially used similar methodology, which is to fence off a portion of land so that livestock can’t access it, and then after some time take soil samples from within the enclosure area, and from comparable nearby areas that have been grazed, and compare the content of carbon compounds.

    “Along with the data on soil carbon for the control and grazed plots,” he says, “we also collected a bunch of other information, such as the mean annual temperature of the site, mean annual precipitation, plant biomass, and properties of the soil, like pH and nitrogen content. And then, of course, we estimate the grazing intensity — aboveground biomass consumed, because that turns out to be the key parameter.”  

    Using artificial intelligence models, the authors quantified the importance of each of these drivers — grazing intensity, temperature, precipitation, and soil properties — in modulating the sign (positive or negative) and magnitude of the impact of grazing on soil carbon stocks. “Interestingly, we found soil carbon stocks increase and then decrease with grazing intensity, rather than the expected linear response,” says Ren.
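    The study describes this step only as using artificial intelligence models, without naming a specific algorithm here. As a rough illustration of how such a variable-importance analysis can be set up, the sketch below fits a tree-ensemble regression to synthetic data with placeholder variable names; none of it reproduces the study's actual model or observations:

    ```python
    # Illustrative sketch only: synthetic data and a generic tree-ensemble model,
    # used to rank drivers (grazing intensity, temperature, precipitation, soil pH)
    # of a change in soil carbon stocks. Not the study's model or data.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    n = 500
    X = np.column_stack([
        rng.uniform(0, 1, n),       # grazing intensity (fraction of biomass consumed)
        rng.uniform(-5, 25, n),     # mean annual temperature, deg C
        rng.uniform(100, 2000, n),  # mean annual precipitation, mm
        rng.uniform(4, 9, n),       # soil pH
    ])
    # Synthetic response: soil carbon change rises, then falls, with grazing intensity.
    y = 2 * X[:, 0] - 4 * X[:, 0] ** 2 + 0.02 * X[:, 1] + rng.normal(0, 0.2, n)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for name, score in zip(["grazing", "temperature", "precipitation", "pH"], imp.importances_mean):
        print(f"{name}: {score:.3f}")
    ```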

    Having developed the model through AI methods and validated it, including by comparing its predictions with those based on underlying physical principles, they can then apply the model to estimating both past and future effects. “In this case,” Terrer says, “we use the model to quantify the historical losses in soil carbon stocks from grazing. And we found that 46 petagrams [billion metric tons] of soil carbon, down to a depth of one meter, have been lost in the last few decades due to grazing.”

    By way of comparison, the total amount of carbon emitted per year from all fossil fuels is about 10 petagrams, so the loss from grazing equals more than four years’ worth of the world’s fossil carbon emissions.
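    That comparison is simple enough to check in one line, using the rounded figures quoted in the text:

    ```python
    # Quick check of the comparison above (units: petagrams of carbon).
    historical_loss_from_grazing = 46   # Pg C, cumulative over recent decades
    annual_fossil_carbon = 10           # Pg C per year, approximate
    print(historical_loss_from_grazing / annual_fossil_carbon)  # -> 4.6 years' worth
    ```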

    What they found was “an overall decline in soil carbon stocks, but with a lot of variability,” Terrer says. The analysis showed that the interplay between grazing intensity and environmental conditions such as temperature could explain the variability, with higher grazing intensity and hotter climates resulting in greater carbon loss. “This means that policy-makers should take into account local abiotic and biotic factors to manage rangelands efficiently,” Ren notes. “By ignoring such complex interactions, we found that using IPCC [Intergovernmental Panel on Climate Change] guidelines would underestimate grazing-induced soil carbon loss by a factor of three globally.”

    Using an approach that incorporates local environmental conditions, the team produced global, high-resolution maps of optimal grazing intensity and the threshold of intensity at which carbon starts to decrease very rapidly. These maps are expected to serve as important benchmarks for evaluating existing grazing practices and provide guidance to local farmers on how to effectively manage their grazing lands.

    Then, using that map, the team estimated how much carbon could be captured if all grazing lands were limited to their optimum grazing intensity. Currently, the authors found, about 20 percent of all pasturelands have crossed the thresholds, leading to severe carbon losses. However, they found that under the optimal levels, global grazing lands would sequester 63 petagrams of carbon. “It is amazing,” Ren says. “This value is roughly equivalent to a 30-year carbon accumulation from global natural forest regrowth.”

    That would be no simple task, of course. To achieve optimal levels, the team found that approximately 75 percent of all grazing areas need to reduce grazing intensity. Overall, if the world seriously reduces the amount of grazing, “you have to reduce the amount of meat that’s available for people,” Terrer says.

    “Another option is to move cattle around,” he says, “from areas that are more severely affected by grazing intensity, to areas that are less affected. Those rotations have been suggested as an opportunity to avoid the more drastic declines in carbon stocks without necessarily reducing the availability of meat.”

    This study didn’t delve into these social and economic implications, Terrer says. “Our role is to just point out what would be the opportunity here. It shows that shifts in diets can be a powerful way to mitigate climate change.”

    “This is a rigorous and careful analysis that provides our best look to date at soil carbon changes due to livestock grazing practiced worldwide,” says Ben Bond-Lamberty, a terrestrial ecosystem research scientist at Pacific Northwest National Laboratory, who was not associated with this work. “The authors’ analysis gives us a unique estimate of soil carbon losses due to grazing and, intriguingly, where and how the process might be reversed.”

    He adds: “One intriguing aspect to this work is the discrepancies between its results and the guidelines currently used by the IPCC — guidelines that affect countries’ commitments, carbon-market pricing, and policies.” However, he says, “As the authors note, the amount of carbon historically grazed soils might be able to take up is small relative to ongoing human emissions. But every little bit helps!”

    “Improved management of working lands can be a powerful tool to combat climate change,” says Jonathan Sanderman, carbon program director of the Woodwell Climate Research Center in Falmouth, Massachusetts, who was not associated with this work. He adds, “This work demonstrates that while, historically, grazing has been a large contributor to climate change, there is significant potential to decrease the climate impact of livestock by optimizing grazing intensity to rebuild lost soil carbon.”

    Terrer states that for now, “we have started a new study, to evaluate the consequences of shifts in diets for carbon stocks. I think that’s the million-dollar question: How much carbon could you sequester, compared to business as usual, if diets shift to more vegan or vegetarian?” The answers will not be simple, because a shift to more vegetable-based diets would require more cropland, which can also have different environmental impacts. Pastures take more land than crops, but produce different kinds of emissions. “What’s the overall impact for climate change? That is the question we’re interested in,” he says.

    The research team included Juan Li, Yingfao Cao, Sheshan Yang, and Dan Liu, all with the Chinese Academy of Sciences. The work was supported by the Second Tibetan Plateau Scientific Expedition and Research Program, and the Science and Technology Major Project of Tibetan Autonomous Region of China.

  • Cutting carbon emissions on the US power grid

    To help curb climate change, the United States is working to reduce carbon emissions from all sectors of the energy economy. Much of the current effort involves electrification — switching to electric cars for transportation, electric heat pumps for home heating, and so on. But in the United States, the electric power sector already generates about a quarter of all carbon emissions. “Unless we decarbonize our electric power grids, we’ll just be shifting carbon emissions from one source to another,” says Amanda Farnsworth, a PhD candidate in chemical engineering and research assistant at the MIT Energy Initiative (MITEI).

    But decarbonizing the nation’s electric power grids will be challenging. The availability of renewable energy resources such as solar and wind varies in different regions of the country. Likewise, patterns of energy demand differ from region to region. As a result, the least-cost pathway to a decarbonized grid will differ from one region to another.

    Over the past two years, Farnsworth and Emre Gençer, a principal research scientist at MITEI, developed a power system model that would allow them to investigate the importance of regional differences — and would enable experts and laypeople alike to explore their own regions and make informed decisions about the best way to decarbonize. “With this modeling capability you can really understand regional resources and patterns of demand, and use them to do a ‘bespoke’ analysis of the least-cost approach to decarbonizing the grid in your particular region,” says Gençer.

    To demonstrate the model’s capabilities, Gençer and Farnsworth performed a series of case studies. Their analyses confirmed that strategies must be designed for specific regions and that all the costs and carbon emissions associated with manufacturing and installing solar and wind generators must be included for accurate accounting. But the analyses also yielded some unexpected insights, including a correlation between a region’s wind energy and the ease of decarbonizing, and the important role of nuclear power in decarbonizing the California grid.

    A novel model

    For many decades, researchers have been developing “capacity expansion models” to help electric utility planners tackle the problem of designing power grids that are efficient, reliable, and low-cost. More recently, many of those models also factor in the goal of reducing or eliminating carbon emissions. While those models can provide interesting insights relating to decarbonization, Gençer and Farnsworth believe they leave some gaps that need to be addressed.

    For example, most focus on conditions and needs in a single U.S. region without highlighting the unique peculiarities of their chosen area of focus. Hardly any consider the carbon emitted in fabricating and installing such “zero-carbon” technologies as wind turbines and solar panels. And finally, most of the models are challenging to use. Even experts in the field must search out and assemble various complex datasets in order to perform a study of interest.

    Gençer and Farnsworth’s capacity expansion model — called Ideal Grid, or IG — addresses those and other shortcomings. IG is built within the framework of MITEI’s Sustainable Energy System Analysis Modeling Environment (SESAME), an energy system modeling platform that Gençer and his colleagues at MITEI have been developing since 2017. SESAME models the levels of greenhouse gas emissions from multiple, interacting energy sectors in future scenarios.

    Importantly, SESAME includes both techno-economic analyses and life-cycle assessments of various electricity generation and storage technologies. It thus considers costs and emissions incurred at each stage of the life cycle (manufacture, installation, operation, and retirement) for all generators. Most capacity expansion models only account for emissions from operation of fossil fuel-powered generators. As Farnsworth notes, “While this is a good approximation for our current grid, emissions from the full life cycle of all generating technologies become non-negligible as we transition to a highly renewable grid.”

    Through its connection with SESAME, the IG model has access to data on costs and emissions associated with many technologies critical to power grid operation. To explore regional differences in the cost-optimized decarbonization strategies, the IG model also includes conditions within each region, notably details on demand profiles and resource availability.

    In one recent study, Gençer and Farnsworth selected nine of the standard North American Electric Reliability Corporation (NERC) regions. For each region, they incorporated hourly electricity demand into the IG model. Farnsworth also gathered meteorological data for the nine U.S. regions for seven years — 2007 to 2013 — and calculated hourly power output profiles for the renewable energy sources, including solar and wind, taking into account the geography-limited maximum capacity of each technology.

    The availability of wind and solar resources differs widely from region to region. To permit a quick comparison, the researchers use a measure called “annual capacity factor,” which is the ratio between the electricity produced by a generating unit in a year and the electricity that could have been produced if that unit operated continuously at full power for that year. Values for the capacity factors in the nine U.S. regions vary between 20 percent and 30 percent for solar power and between 25 percent and 45 percent for wind.
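    Written out, the annual capacity factor is just a ratio. The function below is a generic illustration of that definition, not code from the IG model, and the example numbers are hypothetical:

    ```python
    # Generic definition of annual capacity factor; example values are hypothetical.
    def annual_capacity_factor(energy_produced_mwh: float, nameplate_mw: float, hours: int = 8760) -> float:
        """Ratio of actual annual output to output if the unit ran at full power all year."""
        return energy_produced_mwh / (nameplate_mw * hours)

    # A 100 MW solar farm that produces 219,000 MWh in a year has a capacity factor of 25 percent.
    print(annual_capacity_factor(219_000, 100))  # -> 0.25
    ```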

    Calculating optimized grids for different regions

    For their first case study, Gençer and Farnsworth used the IG model to calculate cost-optimized regional grids to meet defined caps on carbon dioxide (CO2) emissions. The analyses were based on cost and emissions data for 10 technologies: nuclear, wind, solar, three types of natural gas, three types of coal, and energy storage using lithium-ion batteries. Hydroelectric was not considered in this study because there was no comprehensive study outlining potential expansion sites with their respective costs and expected power output levels.

    To make region-to-region comparisons easy, the researchers used several simplifying assumptions. Their focus was on electricity generation, so the model calculations assume the same transmission and distribution costs and efficiencies for all regions. Also, the calculations did not consider the generator fleet currently in place. The goal was to investigate what happens if each region were to start from scratch and generate an “ideal” grid.

    To begin, Gençer and Farnsworth calculated the most economic combination of technologies for each region if it limits its total carbon emissions to 100, 50, and 25 grams of CO2 per kilowatt-hour (kWh) generated. For context, the current U.S. average emissions intensity is 386 grams of CO2 emissions per kWh.
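    To give a sense of the kind of optimization being described, the toy sketch below chooses a least-cost generation mix subject to an emissions cap, using a simple linear program. It is not the IG model, and every cost, emissions rate, and resource limit in it is a placeholder:

    ```python
    # Toy capacity-expansion sketch, not the IG model: choose annual generation (MWh)
    # from three technologies to meet demand at minimum cost while keeping average
    # emissions below a cap in grams of CO2 per kWh. All numbers are placeholders.
    from scipy.optimize import linprog

    demand_mwh = 1_000_000
    cap_g_per_kwh = 100

    techs = ["natural gas", "wind", "nuclear"]
    cost_per_mwh = [60, 40, 110]      # $/MWh (illustrative)
    emis_g_per_kwh = [400, 11, 12]    # life-cycle g CO2/kWh (illustrative)

    res = linprog(
        c=cost_per_mwh,
        # Emissions cap in linear form: sum_i (e_i - cap) * x_i <= 0
        A_ub=[[e - cap_g_per_kwh for e in emis_g_per_kwh]],
        b_ub=[0],
        A_eq=[[1, 1, 1]],             # total generation must meet annual demand
        b_eq=[demand_mwh],
        # Wind limited by resource availability (illustrative bound)
        bounds=[(0, None), (0, 0.5 * demand_mwh), (0, None)],
    )
    for tech, mwh in zip(techs, res.x):
        print(f"{tech}: {mwh:,.0f} MWh")
    ```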

    Given the wide variation in regional demand, the researchers needed to use a new metric to normalize their results and permit a one-to-one comparison between regions. Accordingly, the model calculates the required generating capacity divided by the average demand for each region. The required capacity accounts for both the variation in demand and the inability of generating systems — particularly solar and wind — to operate at full capacity all of the time.
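    In other words, the reported quantity is installed capacity per unit of average demand, a unitless ratio that can be compared across regions of very different sizes. A minimal illustration, with hypothetical numbers:

    ```python
    # Hypothetical illustration of the normalization described above (not the IG model).
    def normalized_capacity(installed_capacity_mw: float, avg_demand_mw: float) -> float:
        return installed_capacity_mw / avg_demand_mw

    # A region installing 45 GW to serve an average load of 15 GW has a ratio of 3.0,
    # reflecting demand peaks plus solar and wind units that rarely run at full output.
    print(normalized_capacity(45_000, 15_000))  # -> 3.0
    ```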

    The analysis was based on regional demand data for 2021 — the most recent data available. And for each region, the model calculated the cost-optimized power grid seven times, using weather data from seven years. This discussion focuses on mean values for cost and total capacity installed and also total values for coal and for natural gas, although the analysis considered three separate technologies for each fuel.

    The results of the analyses confirm that there’s a wide variation in the cost-optimized system from one region to another. Most notable is that some regions require a lot of energy storage while others don’t require any at all. The availability of wind resources turns out to play an important role, while the use of nuclear is limited: the carbon intensity of nuclear (including uranium mining and transportation) is lower than that of either solar or wind, but nuclear is the most expensive technology option, so it’s added only when necessary. Finally, the change in the CO2 emissions cap brings some interesting responses.

    Under the most lenient limit on emissions — 100 grams of CO2 per kWh — there’s no coal in the mix anywhere. It’s the first to go, in general being replaced by the lower-carbon-emitting natural gas. Texas, Central, and North Central — the regions with the most wind — don’t need energy storage, while the other six regions do. The regions with the least wind — California and the Southwest — have the highest energy storage requirements. Unlike the other regions modeled, California begins installing nuclear, even at the most lenient limit.

    As the model plays out, under the moderate cap — 50 grams of CO2 per kWh — most regions bring in nuclear power. California and the Southeast — regions with low wind capacity factors — rely on nuclear the most. In contrast, wind-rich Texas, Central, and North Central don’t incorporate nuclear yet but instead add energy storage — a less-expensive option — to their mix. There’s still a bit of natural gas everywhere, in spite of its CO2 emissions.

    Under the most restrictive cap — 25 grams of CO2 per kWh — nuclear is in the mix everywhere. The highest use of nuclear is again correlated with low wind capacity factor. Central and North Central depend on nuclear the least. All regions continue to rely on a little natural gas to keep prices from skyrocketing due to the necessary but costly nuclear component. With nuclear in the mix, the need for storage declines in most regions.

    Results of the cost analysis are also interesting. Texas, Central, and North Central all have abundant wind resources, and they can delay incorporating the costly nuclear option, so the cost of their optimized system tends to be lower than costs for the other regions. In addition, their total capacity deployment — including all sources — tends to be lower than for the other regions. California and the Southwest both rely heavily on solar, and in both regions, costs and total deployment are relatively high.

    Lessons learned

    One unexpected result is the benefit of combining solar and wind resources. The problem with relying on solar alone is obvious: “Solar energy is available only five or six hours a day, so you need to build a lot of other generating sources and abundant storage capacity,” says Gençer. But an analysis of unit-by-unit operations at an hourly resolution yielded a less-intuitive trend: While solar installations only produce power in the midday hours, wind turbines generate the most power in the nighttime hours. As a result, solar and wind power are complementary. Having both resources available is far more valuable than having either one or the other. And having both impacts the need for storage, says Gençer: “Storage really plays a role either when you’re targeting a very low carbon intensity or where your resources are mostly solar and they’re not complemented by wind.”
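    A stylized day of hourly profiles makes the point about complementarity; the shapes below are invented for illustration and are not data from the study:

    ```python
    # Stylized 24-hour profiles (fractions of nameplate) illustrating why solar and wind
    # complement each other; the shapes are invented, not data from the study.
    import numpy as np

    hours = np.arange(24)
    solar = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None)   # midday peak
    wind = 0.5 + 0.3 * np.cos(hours / 24 * 2 * np.pi)            # stronger at night
    demand = 0.6 + 0.2 * np.sin((hours - 9) / 12 * np.pi)

    for mix, name in [(solar, "solar only"), (0.5 * solar + 0.5 * wind, "solar + wind")]:
        shortfall = np.clip(demand - mix, 0, None).sum()
        print(f"{name}: unmet demand over the day = {shortfall:.1f} (arbitrary units)")
    ```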

    Gençer notes that the target for the U.S. electricity grid is to reach net zero by 2035. But the analysis showed that reaching just 100 grams of CO2 per kWh would require at least 50 percent of system capacity to be wind and solar. “And we’re nowhere near that yet,” he says.

    Indeed, Gençer and Farnsworth’s analysis doesn’t even include a zero emissions case. Why not? As Gençer says, “We cannot reach zero.” Wind and solar are usually considered to be net zero, but that’s not true. Wind, solar, and even storage have embedded carbon emissions due to materials, manufacturing, and so on. “To go to true net zero, you’d need negative emission technologies,” explains Gençer, referring to techniques that remove carbon from the air or ocean. That observation confirms the importance of performing life-cycle assessments.

    Farnsworth voices another concern: Coal quickly disappears in all regions because natural gas is an easy substitute for coal and has lower carbon emissions. “People say they’ve decreased their carbon emissions by a lot, but most have done it by transitioning from coal to natural gas power plants,” says Farnsworth. “But with that pathway for decarbonization, you hit a wall. Once you’ve transitioned from coal to natural gas, you’ve got to do something else. You need a new strategy — a new trajectory to actually reach your decarbonization target, which most likely will involve replacing the newly installed natural gas plants.”

    Gençer makes one final point: The availability of cheap nuclear — whether fission or fusion — would completely change the picture. When the tighter caps require the use of nuclear, the cost of electricity goes up. “The impact is quite significant,” says Gençer. “When we go from 100 grams down to 25 grams of CO2 per kWh, we see a 20 percent to 30 percent increase in the cost of electricity.” If it were available, a less-expensive nuclear option would likely be included in the technology mix under more lenient caps, significantly reducing the cost of decarbonizing power grids in all regions.

    The special case of California

    In another analysis, Gençer and Farnsworth took a closer look at California. In California, about 10 percent of total demand is now met with nuclear power. Yet the state’s existing nuclear plants are scheduled for retirement very soon, and a 1976 law forbids the construction of new nuclear plants. (The state recently extended the lifetime of one nuclear plant to prevent the grid from becoming unstable.) “California is very motivated to decarbonize their grid,” says Farnsworth. “So how difficult will that be without nuclear power?”

    To find out, the researchers performed a series of analyses to investigate the challenge of decarbonizing in California with nuclear power versus without it. At 200 grams of CO2 per kWh — about a 50 percent reduction — the optimized mix and cost look the same with and without nuclear. Nuclear doesn’t appear due to its high cost. At 100 grams of CO2 per kWh — about a 75 percent reduction — nuclear does appear in the cost-optimized system, reducing the total system capacity while having little impact on the cost.

    But at 50 grams of CO2 per kWh, the ban on nuclear makes a significant difference. “Without nuclear, there’s about a 45 percent increase in total system size, which is really quite substantial,” says Farnsworth. “It’s a vastly different system, and it’s more expensive.” Indeed, the cost of electricity would increase by 7 percent.

    Going one step further, the researchers performed an analysis to determine the most decarbonized system possible in California. Without nuclear, the state could reach 40 grams of CO2 per kWh. “But when you allow for nuclear, you can get all the way down to 16 grams of CO2 per kWh,” says Farnsworth. “We found that California needs nuclear more than any other region due to its poor wind resources.”

    Impacts of a carbon tax

    One more case study examined a policy approach to incentivizing decarbonization. Instead of imposing a ceiling on carbon emissions, this strategy would tax every ton of carbon that’s emitted. Proposed taxes range from zero to $100 per ton.

    To investigate the effectiveness of different levels of carbon tax, Farnsworth and Gençer used the IG model to calculate the minimum-cost system for each region, assuming a certain cost for emitting each ton of carbon. The analyses show that a low carbon tax — just $10 per ton — significantly reduces emissions in all regions by phasing out all coal generation. In the Northwest region, for example, a carbon tax of $10 per ton decreases system emissions by 65 percent while increasing system cost by just 2.8 percent (relative to an untaxed system).
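    Mechanically, a carbon tax enters this kind of model as an extra cost per unit of generation, proportional to each technology's emissions rate. The sketch below uses placeholder costs and emissions rates (not values from the IG model) to show how even a modest tax can flip the cost ranking of coal and natural gas:

    ```python
    # Illustrative only: placeholder costs and emission rates, not values from the IG model.
    # A carbon tax adds (tonnes of CO2 per MWh) x (tax per tonne) to each technology's cost.
    def taxed_cost(cost_per_mwh: float, g_co2_per_kwh: float, tax_per_tonne: float) -> float:
        tonnes_per_mwh = g_co2_per_kwh / 1000.0   # 1,000 g/kWh equals 1 tonne of CO2 per MWh
        return cost_per_mwh + tonnes_per_mwh * tax_per_tonne

    for tax in (0, 10, 50, 100):
        coal = taxed_cost(40, 1000, tax)   # cheap fuel, high emissions (illustrative)
        gas = taxed_cost(45, 400, tax)     # costlier fuel, lower emissions (illustrative)
        print(f"tax ${tax}/t: coal ${coal:.0f}/MWh, gas ${gas:.0f}/MWh")
    ```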

    After coal has been phased out of all regions, every increase in the carbon tax brings a slow but steady linear decrease in emissions and a linear increase in cost. But the rates of those changes vary from region to region. For example, the rate of decrease in emissions for each added tax dollar is far lower in the Central region than in the Northwest, largely due to the Central region’s already low emissions intensity without a carbon tax. Indeed, the Central region without a carbon tax has a lower emissions intensity than the Northwest region with a tax of $100 per ton.

    As Farnsworth summarizes, “A low carbon tax — just $10 per ton — is very effective in quickly incentivizing the replacement of coal with natural gas. After that, it really just incentivizes the replacement of natural gas technologies with more renewables and more energy storage.” She concludes, “If you’re looking to get rid of coal, I would recommend a carbon tax.”

    Future extensions of IG

    The researchers have already added hydroelectric to the generating options in the IG model, and they are now planning further extensions. For example, they will include additional regions for analysis, add other long-term energy storage options, and make changes that allow analyses to take into account the generating infrastructure that already exists. Also, they will use the model to examine the cost and value of interregional transmission to take advantage of the diversity of available renewable resources.

    Farnsworth emphasizes that the analyses reported here are just samples of what’s possible using the IG model. The model is a web-based tool that includes embedded data covering the whole United States, and the output from an analysis includes an easy-to-understand display of the required installations, hourly operation, and overall techno-economic analysis and life-cycle assessment results. “The user is able to go in and explore a vast number of scenarios with no data collection or pre-processing,” she says. “There’s no barrier to begin using the tool. You can just hop on and start exploring your options so you can make an informed decision about the best path forward.”

    This work was supported by the International Energy Agency Gas and Oil Technology Collaboration Program and the MIT Energy Initiative Low-Carbon Energy Centers.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Moving past the Iron Age

    MIT graduate student Sydney Rose Johnson has never seen the steel mills in central India. She’s never toured the American Midwest’s hulking steel plants or the mini mills dotting the Mississippi River. But in the past year, she’s become more familiar with steel production than she ever imagined.

    A fourth-year dual degree MBA and PhD candidate in chemical engineering and a graduate research assistant with the MIT Energy Initiative (MITEI) as well as a 2022-23 Shell Energy Fellow, Johnson looks at ways to reduce carbon dioxide (CO2) emissions generated by industrial processes in hard-to-abate industries. Those include steel.

    Almost every aspect of infrastructure and transportation — buildings, bridges, cars, trains, mass transit — contains steel. The manufacture of steel hasn’t changed much since the Iron Age, with some steel plants in the United States and India operating almost continually for more than a century, their massive blast furnaces re-lined periodically with carbon and graphite to keep them going.

    According to the World Economic Forum, steel demand is projected to increase 30 percent by 2050, spurred in part by population growth and economic development in China, India, Africa, and Southeast Asia.

    The steel industry is among the three biggest producers of CO2 worldwide. Every ton of steel produced in 2020 emitted, on average, 1.89 tons of CO2 into the atmosphere — around 8 percent of global CO2 emissions, according to the World Steel Association.

    A combination of technical strategies and financial investments, Johnson notes, will be needed to wrestle that 8 percent figure down to something more planet-friendly.

    Johnson’s thesis focuses on modeling and analyzing ways to decarbonize steel. Using data mined from academic and industry sources, she builds models to calculate emissions, costs, and energy consumption for plant-level production.

    “I optimize steel production pathways using emission goals, industry commitments, and cost,” she says. Based on the projected growth of India’s steel industry, she applies this approach to case studies that predict outcomes for some of the country’s thousand-plus factories, which together have a production capacity of 154 million metric tons of steel. For the United States, she looks at the effect of Inflation Reduction Act (IRA) credits. The 2022 IRA provides incentives that could accelerate the steel industry’s efforts to minimize its carbon emissions.

    Johnson compares emissions and costs across different production pathways, asking questions such as: “If we start today, what would a cost-optimal production scenario look like years from now? How would it change if we added in credits? What would have to happen to cut 2005 levels of emissions in half by 2030?”
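    As a rough sketch of what such a pathway comparison can look like (this is not Johnson's model, and every number in it is a placeholder), one can weigh cost against emissions for a few production routes and apply a per-ton incentive in the spirit of an IRA-style credit:

    ```python
    # Illustrative pathway comparison for one tonne of steel; every number is a placeholder.
    # "credit" mimics a per-tonne-of-CO2-avoided incentive, loosely in the spirit of IRA credits.
    pathways = {
        # name: (cost in $/t steel, emissions in t CO2/t steel)
        "blast furnace (BF-BOF)": (400, 2.0),
        "natural-gas DRI + EAF":  (450, 1.1),
        "hydrogen DRI + EAF":     (550, 0.3),
    }
    baseline_emissions = 2.0  # t CO2/t steel for the conventional route (placeholder)

    def effective_cost(cost: float, emissions: float, credit_per_tonne_co2: float) -> float:
        avoided = baseline_emissions - emissions
        return cost - credit_per_tonne_co2 * avoided

    for credit in (0, 50, 100):
        best = min(pathways, key=lambda name: effective_cost(*pathways[name], credit))
        print(f"credit ${credit}/t CO2 avoided -> cheapest pathway: {best}")
    ```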

    “My goal is to gain an understanding of how current and emerging decarbonization strategies will be integrated into the industry,” Johnson says.

    Grappling with industrial problems

    Growing up in Marietta, Georgia, outside Atlanta, Johnson came closest to a plant of any kind through her father, a chemical engineer working in logistics and procuring steel for an aerospace company, and during high school, when she spent a semester working alongside chemical engineers tweaking the pH of an anti-foaming agent.

    At Kennesaw Mountain High School, a STEM magnet program in Cobb County, students devote an entire semester of their senior year to an internship and research project.

    Johnson chose to work at Kemira Chemicals, which develops chemical solutions for water-intensive industries with a focus on pulp and paper, water treatment, and energy systems.

    “My goal was to understand why a polymer product was falling out of suspension — essentially, why it was less stable,” she recalls. She learned how to formulate a lab-scale version of the product and conduct tests to measure its viscosity and acidity. Comparing the lab-scale and regular product results revealed that acidity was an important factor. “Through conversations with my mentor, I learned this was connected with the holding conditions, which led to the product being oxidized,” she says. With the anti-foaming agent’s problem identified, steps could be taken to fix it.

    “I learned how to apply problem-solving. I got to learn more about working in an industrial environment by connecting with the team in quality control as well as with R&D and chemical engineers at the plant site,” Johnson says. “This experience confirmed I wanted to pursue engineering in college.”

    As an undergraduate at Stanford University, she learned about the different fields — biotechnology, environmental science, electrochemistry, and energy, among others — open to chemical engineers. “It seemed like a very diverse field and application range,” she says. “I was just so intrigued by the different things I saw people doing and all these different sets of issues.”

    Turning up the heat

    At MIT, she turned her attention to how certain industries can offset their detrimental effects on climate.

    “I’m interested in the impact of technology on global communities, the environment, and policy. Energy applications affect every field. My goal as a chemical engineer is to have a broad perspective on problem-solving and to find solutions that benefit as many people, especially those under-resourced, as possible,” says Johnson, who has served on the MIT Chemical Engineering Graduate Student Advisory Board and the MIT Energy and Climate Club and is involved with diversity and inclusion initiatives.

    The steel industry, Johnson acknowledges, is not what she first imagined when she saw herself working toward mitigating climate change.

    “But now, understanding the role the material has in infrastructure development, combined with its heavy use of coal, has illuminated how the sector, along with other hard-to-abate industries, is important in the climate change conversation,” Johnson says.

    Despite the advanced age of many steel mills, some are quite energy-efficient, she notes. Yet these operations, which run at temperatures upwards of 3,000 degrees Fahrenheit, are still emission-intensive.

    Steel is made from iron ore, a mixture of iron, oxygen, and other minerals found on virtually every continent, with Brazil and Australia alone exporting millions of metric tons per year. Iron is extracted from the ore through smelting, a process whose modern form dates to the 19th century: the ore is heated in blast furnaces until the metal becomes spongy and its chemical components begin to break down.

    A reducing agent is needed to release the oxygen trapped in the ore, transforming it from its raw form to pure iron. That’s where most emissions come from, Johnson notes.
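    In a conventional blast furnace, that reducing agent is carbon in the form of coke, and the oxygen stripped from the ore leaves the furnace chemically bound to that carbon. The simplified, textbook reactions below show where the process emissions originate; a real furnace proceeds through several intermediate iron oxides, so this is only a sketch:

    ```latex
    % Simplified blast-furnace chemistry (standard textbook reactions, shown for illustration)
    \begin{align*}
      \mathrm{C} + \mathrm{O_2} &\rightarrow \mathrm{CO_2}
        && \text{coke burns, supplying heat} \\
      \mathrm{CO_2} + \mathrm{C} &\rightarrow 2\,\mathrm{CO}
        && \text{the reducing gas, carbon monoxide, is regenerated} \\
      \mathrm{Fe_2O_3} + 3\,\mathrm{CO} &\rightarrow 2\,\mathrm{Fe} + 3\,\mathrm{CO_2}
        && \text{iron oxide is reduced to metallic iron}
    \end{align*}
    ```

    Because the ore’s oxygen leaves the furnace bound to carbon, efficiency gains alone cannot eliminate these process emissions; that requires a different reducing agent, such as hydrogen, or capturing the CO2.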

    “We want to reduce emissions, and we want to make a cleaner and safer environment for everyone,” she says. “It’s not just the CO2 emissions. It’s also sometimes NOx and SOx [nitrogen oxides and sulfur oxides] and air pollution particulate matter at some of these production facilities that can affect people as well.”

    In 2020, the International Energy Agency released a roadmap exploring potential technologies and strategies that would make the iron and steel sector more compatible with the agency’s vision of increased sustainability. Emission reductions can be accomplished with more modern technology, the agency suggests, or by substituting the fuels producing the immense heat needed to process ore. Traditionally, the fuels used for iron reduction have been coal and natural gas. Alternative fuels include clean hydrogen, electricity, and biomass.

    Using the MITEI Sustainable Energy System Analysis Modeling Environment (SESAME), Johnson analyzes various decarbonization strategies. She considers options such as switching furnace fuel to hydrogen blended with a small amount of natural gas, or adding carbon-capture equipment. The models show how effective these tactics are likely to be. The answers aren’t always encouraging.

    “Upstream emissions can determine how effective the strategies are,” Johnson says. Charcoal derived from forestry biomass seemed to be a promising alternative fuel, but her models showed that processing the charcoal for use in the blast furnace limited its effectiveness in negating emissions.
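    That result reflects full supply-chain accounting rather than just what happens at the furnace. The snippet below is not SESAME; it is a hypothetical illustration of the bookkeeping, in which each pathway’s footprint is the sum of direct, on-site emissions and the upstream emissions embedded in producing its fuel or reducing agent. All figures are invented placeholders.

    ```python
    # Illustrative pathway comparison that includes upstream fuel-supply emissions.
    # Emission factors are invented placeholders, not SESAME outputs or industry data.
    pathways = {
        # name:                      t CO2 per t steel at the plant / upstream
        "coal blast furnace":        {"direct": 2.0, "upstream": 0.2},
        "charcoal blast furnace":    {"direct": 0.2, "upstream": 1.1},  # biogenic CO2 treated
                                                                        # as near-zero on site;
                                                                        # charcoal processing and
                                                                        # transport dominate
        "hydrogen DRI (grid H2)":    {"direct": 0.1, "upstream": 1.4},  # depends on the grid mix
        "hydrogen DRI (clean H2)":   {"direct": 0.1, "upstream": 0.2},
    }

    print(f"{'pathway':<26}{'direct':>8}{'upstream':>10}{'total':>8}   (t CO2 / t steel)")
    for name, f in sorted(pathways.items(), key=lambda kv: sum(kv[1].values())):
        print(f"{name:<26}{f['direct']:>8.1f}{f['upstream']:>10.1f}{sum(f.values()):>8.1f}")
    ```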

    Despite the challenges, “there are definitely ways of moving forward,” Johnson says. “It’s been an intriguing journey in terms of understanding where the industry is at. There’s still a long way to go, but it’s doable.”

    Johnson is heartened by the steel industry’s efforts to recycle scrap into new steel products and to adopt lower-emission technologies and practices, some of which cut CO2 emissions significantly compared with conventional production.

    A major issue is that low-carbon steel can be more than 50 percent more costly than conventionally produced steel. “There are costs associated with making the transition, but in the context of the environmental implications, I think it’s well worth it to adopt these technologies,” she says.

    After graduation, Johnson plans to continue to work in the energy field. “I definitely want to use a combination of engineering knowledge and business knowledge to work toward mitigating climate change, potentially in the startup space with clean technology or even in a policy context,” she says. “I’m interested in connecting the private and public sectors to implement measures for improving our environment and benefiting as many people as possible.”

  • in

    Explained: Carbon credits

    One of the most contentious issues faced at the 28th United Nations Conference of the Parties (COP28) on climate change last December was a proposal for a U.N.-sanctioned market for trading carbon credits. Such a mechanism would allow nations and industries making slow progress in reducing their own carbon emissions to pay others to take emissions-reducing measures, such as improving energy efficiency or protecting forests.

    Such trading systems have already grown to a multibillion-dollar market despite a lack of clear international regulations to define and monitor the claimed emissions reductions. During weeks of feverish negotiations, some nations, including the U.S., advocated a somewhat looser approach to regulation in the interest of getting a system in place quickly. Others, including the European Union, advocated much tighter regulation, in light of a history of questionable or even counterproductive projects of this kind. In the end, no agreement was reached on the subject, which will be revisited at a later meeting.

    The concept seems simple enough: Offset emissions in one place by preventing or capturing an equal amount of emissions elsewhere. But implementing that idea has turned out to be far more complex and fraught with problems than many expected.

    For example, projects that aim to preserve a section of forest — which can remove carbon dioxide from the air and sequester it in the soil — face numerous issues. Will the preservation of one parcel just lead to the clearcutting of an adjacent parcel? Would the preserved land have been left uncut anyway? And what if it ends up being destroyed by wildfire, drought, or insect infestation — all of which are expected to become more likely with climate change?

    Similarly, projects that aim to capture carbon dioxide emissions and inject them into the ground are sometimes used to justify increasing the production of petroleum or natural gas, negating the intended climate mitigation of the process.

    Several experts at MIT now say that such a system could be effective, at least in certain circumstances, but only if it is thoroughly evaluated and regulated.

    Carbon removal, natural or mechanical

    Sergey Paltsev, deputy director of MIT’s Joint Program on the Science and Policy of Global Change, co-led a study and workshop last year that included policymakers, industry representatives, and researchers. They focused on one kind of carbon offset: those based on natural climate solutions, the restoration or preservation of natural systems that not only sequester carbon but also provide other benefits, such as greater biodiversity. “We find a lot of confusion and misperceptions and misinformation, even about how you define the term carbon credit or offset,” he says.

    He points out that there has been a lot of criticism of the whole idea of carbon offsets, “and that criticism is well-placed. I think that’s a very healthy conversation, to clarify what makes sense and what doesn’t make sense. What are the real actions versus what is greenwashing?”

    He says that government-mandated and managed carbon trading programs in some places, including British Columbia and parts of Europe, have been somewhat effective because they have clear standards in place, whereas unregulated carbon credit systems have often been abused.

    Charles Harvey, an MIT professor of civil and environmental engineering, should know, having been actively involved in both sides of the issue over the last two decades. He co-founded a company in 2008 that was the first private U.S. company to attempt to remove carbon dioxide from emissions on a commercial scale, a process called carbon capture and sequestration, or CCS. Such projects have been major recipients of federal subsidies aimed at combating climate change, but Harvey now says they are largely a waste of money and in most cases do not achieve their stated objective.

    In fact, he says that according to industry sources, as of 2021 more than 90 percent of CCS projects in the U.S. have been used for the production of more fossil fuels — oil and natural gas. Here’s how it works: Natural gas wells often produce methane mixed with carbon dioxide, which must be removed to produce marketable natural gas. This carbon dioxide is then injected into oil wells to stimulate more production. So the net effect is the creation of more total greenhouse gas emissions rather than less, explains Harvey, who recently received a grant from the Rockefeller Foundation to explore CCS projects and whether they can be made to contribute to true emissions reductions.

    What went wrong with the ambitious startup CCS company Harvey co-founded? “What happened is that the prices of renewables and energy storage are now incredibly cheap,” he says. “It makes no sense to do this, ever, on power plants because honestly, fossil fuel power plants don’t even really make economic sense anymore.”

    Where does Harvey see potential for carbon credits to work? One possibility is the preservation or restoration of tropical peatlands, which he has received another grant to study. These are vast areas of permanently waterlogged land in which dead plant matter — and the carbon it contains — remains in place because the water prevents the normal decomposition processes that would otherwise release the stored carbon back into the air.

    While it is virtually impossible to quantify the amount of carbon stored in the soil of forest or farmland, in peatlands that’s easy to do because essentially all of the submerged material is carbon-based. Simply measuring changes in the elevation of such land, which can be done remotely by plane or satellite, gives a precise measure of how much carbon has been stored or released. When a patch of peat forest that has been clear-cut to build plantations or roads is reforested, the amount of carbon emissions that were prevented can be measured accurately.
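    The arithmetic behind that claim is simple. As a purely hypothetical illustration (not Harvey’s method or data), the measured drop in a drained peatland’s surface, multiplied by the area and by an assumed carbon density of the peat, gives the mass of carbon lost, which converts to CO2 by the ratio of molecular weights:

    ```python
    # Back-of-envelope conversion from measured peat subsidence to CO2 released.
    # The carbon density and the assumption that all lost volume is oxidized peat
    # are illustrative simplifications, not measured values.

    def peat_subsidence_to_co2(area_ha: float, subsidence_m: float,
                               carbon_density_kg_per_m3: float = 50.0) -> float:
        """Return tonnes of CO2 implied by a given subsidence over a given area."""
        area_m2 = area_ha * 10_000                      # hectares to square meters
        carbon_kg = area_m2 * subsidence_m * carbon_density_kg_per_m3
        co2_kg = carbon_kg * 44.0 / 12.0                # mass of C to mass of CO2
        return co2_kg / 1_000.0                         # kilograms to tonnes

    # Example: 1,000 hectares subsiding 3 centimeters in a year (made-up numbers).
    print(f"{peat_subsidence_to_co2(1_000, 0.03):,.0f} t CO2")
    ```

    The same bookkeeping run in reverse, with the land rewetted and the surface stable, is what allows prevented emissions to be credited with some confidence.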

    Because of that potential for accurate documentation, protecting or restoring peat bogs can also be a good way to achieve meaningful offsets for carbon emissions elsewhere, Harvey says. Rewetting a previously drained peat forest can immediately counteract the release of its stored carbon and can keep it there as long as it is not drained again — something that can be verified using satellite data.

    Paltsev adds that while such nature-based systems for countering carbon emissions can be a key component of addressing climate change, especially in very difficult-to-decarbonize industries such as aviation, carbon credits for such programs “shouldn’t be a replacement for our efforts at emissions reduction. It should be in addition.”

    Criteria for meaningful offsets

    John Sterman, the Jay W. Forrester Professor of Management at the MIT Sloan School of Management, has published a set of criteria for evaluating proposed carbon offset plans to make sure they would provide the benefits they claim. At present, “there’s no regulation, there’s no oversight” for carbon offsets, he says. “There have been many scandals over this.”

    For example, one company was providing what it claimed was certification for carbon offset projects but was found to have such lax standards that the claimed offsets were often not real. Among other problems, there were multiple claims to protect the same piece of forest, and claims to protect land that was already legally protected.

    Sterman’s proposed set of criteria goes by the acronym AVID+. “It stands for four principles that you have to meet in order for your offset to be legitimate: It has to be additional, verifiable, immediate, and durable,” he says. “And then I call it AVID+,” he adds, the “plus” being for plans that have additional benefits as well, such as improving health, creating jobs, or helping historically disadvantaged communities.

    Offsets can be useful, he says, for addressing especially hard-to-abate industries such as steel or cement manufacturing, or aviation. But it is essential to meet all four criteria; otherwise emissions are not truly being offset. For example, planting trees today, while often a good thing to do, would take decades to offset emissions going into the atmosphere now, where they may persist for centuries, so it fails the “immediate” requirement.

    And protecting existing forests, while also desirable, is very hard to prove as being additional, because “that requires a counterfactual that you can never observe,” he says. “That’s where a lot of squirrely accounting and a lot of fraud comes in, because how do you know that the forest would have been cut down but for the offset?” In one well-documented case, he points out, a company tried to sell carbon offsets for a section of forest that was already an established nature preserve.
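    The criteria lend themselves to a simple checklist. The sketch below is only a toy rendering of the AVID+ idea, with invented project attributes; it flags the two failure modes just described, a tree-planting project that is not immediate and protection of an already-protected forest that is not additional:

    ```python
    # Toy checklist applying the AVID+ criteria; project attributes are invented.
    from dataclasses import dataclass, field

    @dataclass
    class OffsetProject:
        name: str
        additional: bool        # would not have happened without the offset payment
        verifiable: bool        # reductions can be measured and audited
        immediate: bool         # reductions happen now, not decades from now
        durable: bool           # reductions persist as long as the offset emissions
        co_benefits: list = field(default_factory=list)   # the "+": health, jobs, equity

        def legitimate(self) -> bool:
            return all([self.additional, self.verifiable, self.immediate, self.durable])

    projects = [
        OffsetProject("plant trees today", True, True, False, True),
        OffsetProject("protect a forest already in a preserve", False, True, True, True),
        OffsetProject("retrofit low-income housing", True, True, True, True,
                      ["lower bills", "health", "jobs"]),
    ]

    for p in projects:
        verdict = "meets AVID" if p.legitimate() else "fails AVID"
        extras = f" (+ {', '.join(p.co_benefits)})" if p.co_benefits else ""
        print(f"{p.name}: {verdict}{extras}")
    ```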

    Are there offsets that can meet all the criteria and provide real benefits in helping to address climate change? Yes, Sterman and Harvey say, but they need to be evaluated carefully.

    “My favorite example,” Sterman says, “is doing deep energy retrofits and putting solar panels on low-income housing.” These measures can help address the so-called landlord-tenant problem: If tenants typically pay the utility bills, landlords have little incentive to pay for efficiency improvements, and the tenants don’t have the capital to make such improvements on their own. “Policies that would make this possible are pretty good candidates for legitimate offsets, because they are additional — low-income households can’t afford to do it without assistance, so it’s not going to happen without a program. It’s verifiable, because you’ve got the utility bills pre and post.” They are also quite immediate, typically taking only a year or so to implement, and “they’re pretty durable,” he says.
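    The verification he describes can be as simple as comparing metered energy use before and after the work and multiplying the difference by an emission factor. A hypothetical back-of-envelope, with the consumption figures and grid emission factor assumed rather than drawn from any real program:

    ```python
    # Rough verification of a retrofit-plus-solar offset from pre/post utility bills.
    # All numbers are illustrative assumptions.
    pre_kwh_per_year = 12_000       # grid electricity drawn before the retrofit and solar
    post_kwh_per_year = 4_000       # grid electricity drawn afterward
    grid_kg_co2_per_kwh = 0.4       # assumed emission factor for the local grid

    avoided_kwh = pre_kwh_per_year - post_kwh_per_year
    avoided_t_co2 = avoided_kwh * grid_kg_co2_per_kwh / 1_000.0
    print(f"Avoided roughly {avoided_t_co2:.1f} t CO2 per household per year")
    ```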

    Another example is a recent plan in Alaska that allows cruise ships to offset the emissions caused by their trips by paying into a fund that provides subsidies for Alaskan citizens to install heat pumps in their homes, thus preventing emissions from wood or fossil fuel heating systems. “I think this is a pretty good candidate to meet the criteria, certainly a lot better than much of what’s being done today,” Sterman says.

    But eventually, what is really needed, the researchers agree, are real, enforceable standards. After COP28, carbon offsets are still allowed, Sterman says, “but there is still no widely accepted mandatory regulation. We’re still in the wild West.”

    Paltsev nevertheless sees reasons for optimism about nature-based carbon offset systems. For example, he says the aviation industry has recently agreed to implement a set of standards for offsetting its emissions, known as CORSIA, the Carbon Offsetting and Reduction Scheme for International Aviation. “It’s a point for optimism,” he says, “because they issued very tough guidelines as to what projects are eligible and what projects are not.”

    He adds, “There is a solution if you want to find a good solution. It is doable, when there is a will and there is the need.”