More stories

  • Reality check on technologies to remove carbon dioxide from the air

    In 2015, 195 nations plus the European Union signed the Paris Agreement and pledged to undertake plans designed to limit the global temperature increase to 1.5 degrees Celsius. Yet in 2023, the world exceeded that target for most, if not all, of the year — calling into question the long-term feasibility of achieving it. To do so, the world must reduce the levels of greenhouse gases in the atmosphere, and strategies for achieving levels that will “stabilize the climate” have been both proposed and adopted. Many of those strategies combine dramatic cuts in carbon dioxide (CO2) emissions with the use of direct air capture (DAC), a technology that removes CO2 from the ambient air. As a reality check, a team of researchers in the MIT Energy Initiative (MITEI) examined those strategies, and what they found was alarming: The strategies rely on overly optimistic — indeed, unrealistic — assumptions about how much CO2 could be removed by DAC. As a result, the strategies won’t perform as predicted. Nevertheless, the MITEI team recommends that work to develop the DAC technology continue so that it’s ready to help with the energy transition — even if it’s not the silver bullet that solves the world’s decarbonization challenge.

    DAC: The promise and the reality

    Including DAC in plans to stabilize the climate makes sense. Much work is now under way to develop DAC systems, and the technology looks promising. While companies may never run their own DAC systems, they can already buy “carbon credits” based on DAC. Today, a multibillion-dollar market exists in which entities or individuals that face high costs or excessive disruptions to reduce their own carbon emissions can pay others to take emissions-reducing actions on their behalf. Those actions can involve undertaking new renewable energy projects or “carbon-removal” initiatives such as DAC or afforestation/reforestation (planting trees in areas that have never been forested or that were forested in the past).

    DAC-based credits are especially appealing for several reasons, explains Howard Herzog, a senior research engineer at MITEI. With DAC, measuring and verifying the amount of carbon removed is straightforward; the removal is immediate, unlike with planting forests, which may take decades to have an impact; and when DAC is coupled with CO2 storage in geologic formations, the CO2 is kept out of the atmosphere essentially permanently — in contrast to, for example, sequestering it in trees, which may one day burn and release the stored CO2.

    Will current plans that rely on DAC be effective in stabilizing the climate in the coming years? To find out, Herzog and his colleagues Jennifer Morris and Angelo Gurgel, both MITEI principal research scientists, and Sergey Paltsev, a MITEI senior research scientist — all affiliated with the MIT Center for Sustainability Science and Strategy (CS3) — took a close look at the modeling studies on which those plans are based. Their investigation identified three unavoidable engineering challenges that together lead to a fourth challenge — high costs for removing a single ton of CO2 from the atmosphere. The details of their findings are reported in a paper published in the journal One Earth on Sept. 20.

    Challenge 1: Scaling up

    When it comes to removing CO2 from the air, nature presents “a major, non-negotiable challenge,” notes the MITEI team: The concentration of CO2 in the air is extremely low — just 420 parts per million, or roughly 0.04 percent. In contrast, the CO2 concentration in flue gases emitted by power plants and industrial processes ranges from 3 percent to 20 percent. Companies now use various carbon capture and sequestration (CCS) technologies to capture CO2 from their flue gases, but capturing CO2 from the air is much more difficult. To explain, the researchers offer the following analogy: “The difference is akin to needing to find 10 red marbles in a jar of 25,000 marbles of which 24,990 are blue [the task representing DAC] versus needing to find about 10 red marbles in a jar of 100 marbles of which 90 are blue [the task for CCS].”

    Given that low concentration, removing a single metric ton (tonne) of CO2 from air requires processing about 1.8 million cubic meters of air, which is roughly equivalent to the volume of 720 Olympic-sized swimming pools. And all that air must be moved across a CO2-capturing sorbent — a feat requiring large equipment. For example, one recently proposed design for capturing 1 million tonnes of CO2 per year would require an “air contactor” equivalent in size to a structure about three stories high and three miles long.

    Recent modeling studies project DAC deployment on the scale of 5 to 40 gigatonnes of CO2 removed per year. (A gigatonne equals 1 billion metric tonnes.) But in their paper, the researchers conclude that the likelihood of deploying DAC at the gigatonne scale is “highly uncertain.”

    Challenge 2: Energy requirement

    Given the low concentration of CO2 in the air and the need to move large quantities of air to capture it, it’s no surprise that even the best DAC processes proposed today would consume large amounts of energy — energy that’s generally supplied by a combination of electricity and heat. Including the energy needed to compress the captured CO2 for transportation and storage, most proposed processes require an equivalent of at least 1.2 megawatt-hours of electricity for each tonne of CO2 removed.

    The source of that electricity is critical. For example, using coal-based electricity to drive an all-electric DAC process would generate 1.2 tonnes of CO2 for each tonne of CO2 captured. The result would be a net increase in emissions, defeating the whole purpose of DAC. So clearly, the energy requirement must be satisfied using either low-carbon electricity or electricity generated using fossil fuels with CCS. All-electric DAC deployed at large scale — say, 10 gigatonnes of CO2 removed annually — would require 12,000 terawatt-hours of electricity, which is more than 40 percent of total global electricity generation today.

    Electricity consumption is expected to grow due to increasing overall electrification of the world economy, so low-carbon electricity will be in high demand for many competing uses — for example, in power generation, transportation, industry, and building operations. Using clean electricity for DAC instead of for reducing CO2 emissions in other critical areas raises concerns about the best uses of clean electricity.

    Many studies assume that a DAC unit could also get energy from “waste heat” generated by some industrial process or facility nearby. In the MITEI researchers’ opinion, “that may be more wishful thinking than reality.” The heat source would need to be within a few miles of the DAC plant for transporting the heat to be economical; given its high capital cost, the DAC plant would need to run nonstop, requiring constant heat delivery; and heat at the temperature required by the DAC plant would have competing uses, for example, for heating buildings. Finally, if DAC is deployed at the gigatonne-per-year scale, waste heat will likely be able to provide only a small fraction of the needed energy.

    Challenge 3: Siting

    Some analysts have asserted that, because air is everywhere, DAC units can be located anywhere. But in reality, siting a DAC plant involves many complex issues. As noted above, DAC plants require significant amounts of energy, so having access to enough low-carbon energy is critical. Likewise, having nearby options for storing the removed CO2 is also critical. If storage sites or pipelines to such sites don’t exist, major new infrastructure will need to be built, and building new infrastructure of any kind is expensive and complicated, involving issues related to permitting, environmental justice, and public acceptability — issues that are, in the words of the researchers, “commonly underestimated in the real world and neglected in models.”

    Two more siting needs must be considered. First, meteorological conditions must be acceptable. By definition, any DAC unit will be exposed to the elements, and factors like temperature and humidity will affect process performance and availability. And second, a DAC plant will require some dedicated land — though how much is unclear, as the optimal spacing of units is as yet unresolved. Like wind turbines, DAC units need to be properly spaced so that one unit is not drawing in CO2-depleted air from another.

    Challenge 4: Cost

    Considering the first three challenges, the final challenge is clear: The cost per tonne of CO2 removed is inevitably high. Recent modeling studies assume DAC costs as low as $100 to $200 per tonne of CO2 removed. But the researchers found evidence suggesting far higher costs.

    To start, they cite typical costs for power plants and industrial sites that now use CCS to remove CO2 from their flue gases. The cost of CCS in such applications is estimated to be in the range of $50 to $150 per tonne of CO2 removed. As explained above, the far lower concentration of CO2 in the air will lead to substantially higher costs.

    As explained under Challenge 1, the DAC units needed to process the required amount of air are massive. The capital cost of building them will be high, given labor, materials, permitting costs, and so on. Some estimates in the literature exceed $5,000 of capital per tonne of annual capture capacity.

    Then there are the ongoing costs of energy. As noted under Challenge 2, removing 1 tonne of CO2 requires the equivalent of 1.2 megawatt-hours of electricity. If that electricity costs $0.10 per kilowatt-hour, the cost of just the electricity needed to remove 1 tonne of CO2 is $120. The researchers point out that assuming such a low price is “questionable,” given the expected increase in electricity demand, future competition for clean energy, and higher costs on a system dominated by renewable — but intermittent — energy sources.

    Then there’s the cost of storage, which is ignored in many DAC cost estimates.

    Clearly, many considerations show that prices of $100 to $200 per tonne are unrealistic, and assuming such low prices will distort assessments of climate strategies, leading them to underperform going forward.

    The bottom line

    In their paper, the MITEI team calls DAC a “very seductive concept.” Using DAC to suck CO2 out of the air and generate high-quality carbon-removal credits can offset reduction requirements for industries that have hard-to-abate emissions. By doing so, DAC would minimize disruptions to key parts of the world’s economy, including air travel, certain carbon-intensive industries, and agriculture. However, the world would need to generate billions of tonnes of CO2 credits at an affordable price. That prospect doesn’t look likely. The largest DAC plant in operation today removes just 4,000 tonnes of CO2 per year, and the price to buy the company’s carbon-removal credits on the market today is $1,500 per tonne.

    The researchers recognize that there is room for energy-efficiency improvements in the future, but DAC units will always be subject to higher work requirements than CCS applied to power plant or industrial flue gases, and there is not a clear pathway to reducing work requirements much below the levels of current DAC technologies.

    Nevertheless, the researchers recommend that work to develop DAC continue “because it may be needed for meeting net-zero emissions goals, especially given the current pace of emissions.” But their paper concludes with this warning: “Given the high stakes of climate change, it is foolhardy to rely on DAC to be the hero that comes to our rescue.”
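
    The scaling, energy, and cost figures cited above can be checked with back-of-the-envelope arithmetic. The short sketch below uses only numbers quoted in the story (420 parts per million, 1.2 megawatt-hours per tonne, $0.10 per kilowatt-hour, a 2,500-cubic-meter Olympic pool); the roughly 75 percent capture fraction is an assumption added here to reconcile the theoretical minimum air volume with the article’s 1.8 million cubic meters.

    ```python
    # Back-of-the-envelope check of the DAC figures cited above.
    # All inputs come from the article except CAPTURE_FRACTION, an illustrative
    # assumption chosen to reconcile the ideal air volume with the article's
    # "1.8 million cubic meters per tonne" figure.

    CO2_PPM = 420                # ambient CO2 concentration (mole fraction, ppm)
    MOLAR_VOLUME_M3 = 0.0245     # m^3 per mole of air at ~25 C, 1 atm
    CO2_MOLAR_MASS_G = 44.01     # grams per mole of CO2
    CAPTURE_FRACTION = 0.75      # assumed fraction of CO2 captured from processed air
    OLYMPIC_POOL_M3 = 2_500      # 50 m x 25 m x 2 m pool

    # Air volume that must be processed to capture one tonne of CO2 (Challenge 1)
    mol_co2 = 1e6 / CO2_MOLAR_MASS_G                       # moles of CO2 in 1 tonne
    mol_air = mol_co2 / (CO2_PPM * 1e-6) / CAPTURE_FRACTION
    air_volume_m3 = mol_air * MOLAR_VOLUME_M3
    print(f"Air processed per tonne: {air_volume_m3 / 1e6:.1f} million m^3 "
          f"(~{air_volume_m3 / OLYMPIC_POOL_M3:.0f} Olympic pools)")

    # Electricity for gigatonne-scale deployment (Challenge 2)
    MWH_PER_TONNE = 1.2
    gigatonnes = 10
    twh = gigatonnes * 1e9 * MWH_PER_TONNE / 1e6           # 1 TWh = 1e6 MWh
    print(f"Electricity for {gigatonnes} Gt/yr: {twh:,.0f} TWh per year")

    # Electricity cost per tonne at $0.10 per kWh (Challenge 4)
    usd_per_kwh = 0.10
    print(f"Electricity cost per tonne: ${MWH_PER_TONNE * 1000 * usd_per_kwh:.0f}")
    ```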

  • MIT engineers make converting CO2 into useful products more practical

    As the world struggles to reduce greenhouse gas emissions, researchers are seeking practical, economical ways to capture carbon dioxide and convert it into useful products, such as transportation fuels, chemical feedstocks, or even building materials. But so far, such attempts have struggled to reach economic viability.

    New research by engineers at MIT could lead to rapid improvements in a variety of electrochemical systems that are under development to convert carbon dioxide into a valuable commodity. The team developed a new design for the electrodes used in these systems, which increases the efficiency of the conversion process.

    The findings are reported today in the journal Nature Communications, in a paper by MIT doctoral student Simon Rufer, professor of mechanical engineering Kripa Varanasi, and three others.

    “The CO2 problem is a big challenge for our times, and we are using all kinds of levers to solve and address this problem,” Varanasi says. It will be essential to find practical ways of removing the gas, he says, either from sources such as power plant emissions, or straight out of the air or the oceans. But then, once the CO2 has been removed, it has to go somewhere.

    A wide variety of systems have been developed for converting that captured gas into a useful chemical product, Varanasi says. “It’s not that we can’t do it — we can do it. But the question is how can we make this efficient? How can we make this cost-effective?”

    In the new study, the team focused on the electrochemical conversion of CO2 to ethylene, a widely used chemical that can be made into a variety of plastics as well as fuels, and which today is made from petroleum. But the approach they developed could also be applied to producing other high-value chemical products, including methane, methanol, and carbon monoxide, the researchers say.

    Currently, ethylene sells for about $1,000 per ton, so the goal is to be able to meet or beat that price. The electrochemical process that converts CO2 into ethylene involves a water-based solution and a catalyst material, which come into contact along with an electric current in a device called a gas diffusion electrode.

    There are two competing characteristics of the gas diffusion electrode materials that affect their performance: They must be good electrical conductors so that the current that drives the process doesn’t get wasted through resistance heating, but they must also be “hydrophobic,” or water repelling, so the water-based electrolyte solution doesn’t leak through and interfere with the reactions taking place at the electrode surface.

    Unfortunately, it’s a tradeoff. Improving the conductivity reduces the hydrophobicity, and vice versa. Varanasi and his team set out to see if they could find a way around that conflict, and after many months of trying, they did just that.

    The solution, devised by Rufer and Varanasi, is elegant in its simplicity. They used a plastic material, PTFE (essentially Teflon), known for its good hydrophobic properties. However, PTFE’s lack of conductivity means that electrons must travel through a very thin catalyst layer, leading to a significant voltage drop with distance. To overcome this limitation, the researchers wove a series of conductive copper wires through the thin PTFE sheet.

    “This work really addressed this challenge, as we can now get both conductivity and hydrophobicity,” Varanasi says.

    Research on potential carbon conversion systems tends to be done on very small, lab-scale samples, typically less than 1-inch (2.5-centimeter) squares. To demonstrate the potential for scaling up, Varanasi’s team produced a sheet 10 times larger in area and demonstrated its effective performance.

    To get to that point, they had to do some basic tests that had apparently never been done before, running tests under identical conditions but using electrodes of different sizes to analyze the relationship between conductivity and electrode size. They found that conductivity dropped off dramatically with size, which would mean much more energy, and thus cost, would be needed to drive the reaction.

    “That’s exactly what we would expect, but it was something that nobody had really dedicatedly investigated before,” Rufer says. In addition, the larger sizes produced more unwanted chemical byproducts besides the intended ethylene.

    Real-world industrial applications would require electrodes that are perhaps 100 times larger than the lab versions, so adding the conductive wires will be necessary for making such systems practical, the researchers say. They also developed a model that captures the spatial variability in voltage and product distribution on electrodes due to ohmic losses. The model, along with the experimental data they collected, enabled them to calculate the optimal spacing of the conductive wires to counteract the drop-off in conductivity.

    In effect, by weaving the wire through the material, the material is divided into smaller subsections determined by the spacing of the wires. “We split it into a bunch of little subsegments, each of which is effectively a smaller electrode,” Rufer says. “And as we’ve seen, small electrodes can work really well.”

    Because the copper wire is so much more conductive than the PTFE material, it acts as a kind of superhighway for electrons passing through, bridging the areas where they are confined to the substrate and face greater resistance.

    To demonstrate that their system is robust, the researchers ran a test electrode for 75 hours continuously, with little change in performance. Overall, Rufer says, their system “is the first PTFE-based electrode which has gone beyond the lab scale on the order of 5 centimeters or smaller. It’s the first work that has progressed into a much larger scale and has done so without sacrificing efficiency.”

    The weaving process for incorporating the wire can be easily integrated into existing manufacturing processes, even in a large-scale roll-to-roll process, he adds.

    “Our approach is very powerful because it doesn’t have anything to do with the actual catalyst being used,” Rufer says. “You can sew this micrometric copper wire into any gas diffusion electrode you want, independent of catalyst morphology or chemistry. So, this approach can be used to scale anybody’s electrode.”

    “Given that we will need to process gigatons of CO2 annually to combat the CO2 challenge, we really need to think about solutions that can scale,” Varanasi says. “Starting with this mindset enables us to identify critical bottlenecks and develop innovative approaches that can make a meaningful impact in solving the problem. Our hierarchically conductive electrode is a result of such thinking.”

    The research team included MIT graduate students Michael Nitzsche and Sanjay Garimella, as well as Jack Lake PhD ’23. The work was supported by Shell through the MIT Energy Initiative.
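
    The wire-spacing logic described above can be illustrated with a simple sheet-resistance estimate. The sketch below is not the authors’ model; it only shows how the worst-case ohmic voltage drop across a thin catalyst layer grows with the square of the distance between current-collecting wires. The sheet resistance and current density values are illustrative assumptions, not numbers from the paper.

    ```python
    # Illustrative estimate of ohmic voltage drop in a thin catalyst layer
    # between two current-collecting wires. Assumes uniform current generation
    # and one-dimensional conduction; parameter values are placeholders.

    def max_voltage_drop(current_density_A_m2, wire_spacing_m, sheet_resistance_ohm_sq):
        """Worst-case drop at the midpoint between wires: j * s^2 * R_sheet / 8."""
        return current_density_A_m2 * wire_spacing_m**2 * sheet_resistance_ohm_sq / 8

    J = 2000.0        # A/m^2 (200 mA/cm^2), a typical CO2-electrolysis current density
    R_SHEET = 10.0    # ohms per square, assumed sheet resistance of the catalyst layer

    for spacing_mm in (1, 5, 20, 50):
        dv = max_voltage_drop(J, spacing_mm / 1000, R_SHEET)
        print(f"wire spacing {spacing_mm:>3} mm -> max ohmic drop {dv * 1000:8.1f} mV")
    ```

    Because the drop scales with the square of the spacing, subdividing a large sheet with closely spaced wires keeps each segment behaving like the small electrodes that are known to work well — the effect Rufer describes above.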

  • Study: Weaker ocean circulation could enhance CO2 buildup in the atmosphere

    As climate change advances, the ocean’s overturning circulation is predicted to weaken substantially. With such a slowdown, scientists estimate the ocean will pull down less carbon dioxide from the atmosphere. However, a slower circulation should also dredge up less carbon from the deep ocean that would otherwise be released back into the atmosphere. On balance, the ocean should maintain its role in reducing carbon emissions from the atmosphere, if at a slower pace.

    However, a new study by an MIT researcher finds that scientists may have to rethink the relationship between the ocean’s circulation and its long-term capacity to store carbon. As its circulation weakens, the ocean could instead release more carbon from the deep ocean into the atmosphere.

    The reason has to do with a previously uncharacterized feedback between the ocean’s available iron, upwelling carbon and nutrients, surface microorganisms, and a little-known class of molecules known generally as “ligands.” When the ocean circulates more slowly, all these players interact in a self-perpetuating cycle that ultimately increases the amount of carbon that the ocean outgases back to the atmosphere.

    “By isolating the impact of this feedback, we see a fundamentally different relationship between ocean circulation and atmospheric carbon levels, with implications for the climate,” says study author Jonathan Lauderdale, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “What we thought is going on in the ocean is completely overturned.”

    Lauderdale says the findings show that “we can’t count on the ocean to store carbon in the deep ocean in response to future changes in circulation. We must be proactive in cutting emissions now, rather than relying on these natural processes to buy us time to mitigate climate change.”

    His study appears today in the journal Nature Communications.

    Box flow

    In 2020, Lauderdale led a study that explored ocean nutrients, marine organisms, and iron, and how their interactions influence the growth of phytoplankton around the world. Phytoplankton are microscopic, plant-like organisms that live on the ocean surface and consume a diet of carbon and nutrients that upwell from the deep ocean and iron that drifts in from desert dust.

    The more phytoplankton that can grow, the more carbon dioxide they can absorb from the atmosphere via photosynthesis, and this plays a large role in the ocean’s ability to sequester carbon.

    For the 2020 study, the team developed a simple “box” model, representing conditions in different parts of the ocean as general boxes, each with a different balance of nutrients, iron, and ligands — organic molecules that are thought to be byproducts of phytoplankton. The team modeled a general flow between the boxes to represent the ocean’s larger circulation — the way seawater sinks, then is buoyed back up to the surface in different parts of the world.

    This modeling revealed that, even if scientists were to “seed” the oceans with extra iron, that iron wouldn’t have much of an effect on global phytoplankton growth. The reason was a limit set by ligands. It turns out that, if left on its own, iron is insoluble in the ocean and therefore unavailable to phytoplankton. Iron only becomes soluble at “useful” levels when linked with ligands, which keep iron in a form that plankton can consume. Lauderdale found that adding iron to one ocean region to consume additional nutrients robs other regions of nutrients that phytoplankton there need to grow. This lowers the production of ligands and the supply of iron back to the original ocean region, limiting the amount of extra carbon that would be taken up from the atmosphere.

    Unexpected switch

    Once the team published their study, Lauderdale worked the box model into a form that he could make publicly accessible, including ocean and atmosphere carbon exchange and extending the boxes to represent more diverse environments, such as conditions similar to the Pacific, the North Atlantic, and the Southern Ocean. In the process, he tested other interactions within the model, including the effect of varying ocean circulation.

    He ran the model with different circulation strengths, expecting to see less atmospheric carbon dioxide with weaker ocean overturning — a relationship that previous studies have supported, dating back to the 1980s. But what he found instead was a clear and opposite trend: The weaker the ocean’s circulation, the more CO2 built up in the atmosphere.

    “I thought there was some mistake,” Lauderdale recalls. “Why were atmospheric carbon levels trending the wrong way?”

    When he checked the model, he found that the parameter describing ocean ligands had been left “on” as a variable. In other words, the model was calculating ligand concentrations as changing from one ocean region to another.

    On a hunch, Lauderdale turned this parameter “off,” which set ligand concentrations as constant in every modeled ocean environment, an assumption that many ocean models typically make. That one change reversed the trend, back to the assumed relationship: A weaker circulation led to reduced atmospheric carbon dioxide. But which trend was closer to the truth?

    Lauderdale looked to the scant available data on ocean ligands to see whether their concentrations were more constant or variable in the actual ocean. He found confirmation in GEOTRACES, an international study that coordinates measurements of trace elements and isotopes across the world’s oceans, which scientists can use to compare concentrations from region to region. Indeed, the molecules’ concentrations varied. If ligand concentrations do change from one region to another, then his surprising new result was likely representative of the real ocean: A weaker circulation leads to more carbon dioxide in the atmosphere.

    “It’s this one weird trick that changed everything,” Lauderdale says. “The ligand switch has revealed this completely different relationship between ocean circulation and atmospheric CO2 that we thought we understood pretty well.”

    Slow cycle

    To see what might explain the overturned trend, Lauderdale analyzed biological activity and carbon, nutrient, iron, and ligand concentrations from the ocean model under different circulation strengths, comparing scenarios where ligands were variable or constant across the various boxes.

    This revealed a new feedback: The weaker the ocean’s circulation, the less carbon and nutrients the ocean pulls up from the deep. Any phytoplankton at the surface would then have fewer resources to grow and would produce fewer byproducts (including ligands) as a result. With fewer ligands available, less iron at the surface would be usable, further reducing the phytoplankton population. There would then be fewer phytoplankton available to absorb carbon dioxide from the atmosphere and consume upwelled carbon from the deep ocean.

    “My work shows that we need to look more carefully at how ocean biology can affect the climate,” Lauderdale points out. “Some climate models predict a 30 percent slowdown in the ocean circulation due to melting ice sheets, particularly around Antarctica. This huge slowdown in overturning circulation could actually be a big problem: In addition to a host of other climate issues, not only would the ocean take up less anthropogenic CO2 from the atmosphere, but that could be amplified by a net outgassing of deep ocean carbon, leading to an unanticipated increase in atmospheric CO2 and unexpected further climate warming.”
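
    The self-reinforcing loop described above — less upwelling, less phytoplankton growth, fewer ligands, less usable iron, even less growth — can be illustrated with a toy fixed-point calculation. This is not Lauderdale’s published box model; it is a minimal sketch with made-up parameter values, showing only how a variable-ligand assumption amplifies a given reduction in upwelling compared with a constant-ligand assumption.

    ```python
    # Toy illustration of the ligand-iron feedback described above. This is NOT
    # the study's box model; parameter values are arbitrary and chosen only to
    # show how the feedback amplifies a reduction in upwelling.

    def production(nutrient_supply, variable_ligands, background_iron=0.05,
                   constant_iron=0.60, half_saturation=0.5, n_iter=500):
        """Steady-state phytoplankton production with Michaelis-Menten iron limitation.
        With variable ligands, usable iron scales with production itself (the feedback);
        with constant ligands, usable iron is fixed."""
        p = nutrient_supply  # initial guess
        for _ in range(n_iter):
            iron = (p + background_iron) if variable_ligands else constant_iron
            p = nutrient_supply * iron / (iron + half_saturation)
        return p

    for label, variable in (("constant ligands", False), ("variable ligands", True)):
        strong = production(1.0, variable)   # full-strength circulation
        weak = production(0.7, variable)     # 30 percent weaker upwelling
        drop = 100 * (1 - weak / strong)
        print(f"{label}: production falls by {drop:.0f}% for a 30% weaker circulation")
    ```

    In this toy setup, the same 30 percent slowdown roughly halves production once ligands are allowed to respond to biology — the qualitative amplification the study identifies. The published model additionally tracks air-sea carbon exchange, which is how that amplification translates into higher atmospheric CO2.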

  • Engineers find a new way to convert carbon dioxide into useful products

    MIT chemical engineers have devised an efficient way to convert carbon dioxide to carbon monoxide, a chemical precursor that can be used to generate useful compounds such as ethanol and other fuels.

    If scaled up for industrial use, this process could help to remove carbon dioxide from power plants and other sources, reducing the amount of greenhouse gases that are released into the atmosphere.

    “This would allow you to take carbon dioxide from emissions or dissolved in the ocean, and convert it into profitable chemicals. It’s really a path forward for decarbonization because we can take CO2, which is a greenhouse gas, and turn it into things that are useful for chemical manufacture,” says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering and the senior author of the study.

    The new approach uses electricity to perform the chemical conversion, with help from a catalyst that is tethered to the electrode surface by strands of DNA. This DNA acts like Velcro to keep all the reaction components in close proximity, making the reaction much more efficient than if all the components were floating in solution.

    Furst has started a company called Helix Carbon to further develop the technology. Former MIT postdoc Gang Fan is the lead author of the paper, which appears in the Journal of the American Chemical Society Au. Other authors include Nathan Corbin PhD ’21, Minju Chung PhD ’23, former MIT postdocs Thomas Gill and Amruta Karbelkar, and Evan Moore ’23.

    Breaking down CO2

    Converting carbon dioxide into useful products requires first turning it into carbon monoxide. One way to do this is with electricity, but the amount of energy required makes that type of electrocatalysis prohibitively expensive.

    To bring down those costs, researchers have turned to electrocatalysts, which can speed up the reaction and reduce the amount of energy that needs to be added to the system. One type of catalyst used for this reaction is a class of molecules known as porphyrins, which contain metals such as iron or cobalt and are similar in structure to the heme molecules that carry oxygen in blood.

    During this type of electrochemical reaction, carbon dioxide is dissolved in water within an electrochemical device, which contains an electrode that drives the reaction. The catalysts are also suspended in the solution. However, this setup isn’t very efficient because the carbon dioxide and the catalysts need to encounter each other at the electrode surface, which doesn’t happen very often.

    To make the reaction occur more frequently, which would boost the efficiency of the electrochemical conversion, Furst began working on ways to attach the catalysts to the surface of the electrode. DNA seemed to be the ideal choice for this application.

    “DNA is relatively inexpensive, you can modify it chemically, and you can control the interaction between two strands by changing the sequences,” she says. “It’s like a sequence-specific Velcro that has very strong but reversible interactions that you can control.”

    To attach single strands of DNA to a carbon electrode, the researchers used two “chemical handles,” one on the DNA and one on the electrode. These handles can be snapped together, forming a permanent bond. A complementary DNA sequence is then attached to the porphyrin catalyst, so that when the catalyst is added to the solution, it will bind reversibly to the DNA that’s already attached to the electrode — just like Velcro.

    Once this system is set up, the researchers apply a potential (or bias) to the electrode, and the catalyst uses this energy to convert carbon dioxide in the solution into carbon monoxide. The reaction also generates a small amount of hydrogen gas, from the water. After the catalysts wear out, they can be released from the surface by heating the system to break the reversible bonds between the two DNA strands, and replaced with new ones.

    An efficient reaction

    Using this approach, the researchers were able to boost the Faradaic efficiency of the reaction to 100 percent, meaning that essentially all of the electrons passing through the system go into the desired chemical reactions, with none wasted on side reactions. When the catalysts are not tethered by DNA, the Faradaic efficiency is only about 40 percent.
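
    Faradaic efficiency can be computed from the charge passed and the amount of CO produced. The sketch below shows the standard calculation (two electrons per CO2 molecule converted to CO); the charge and product amounts are made-up example values, not data from the paper.

    ```python
    # Faradaic efficiency: fraction of the charge passed that ends up in the
    # desired product (CO). Example values below are illustrative only.

    FARADAY = 96_485          # coulombs per mole of electrons
    ELECTRONS_PER_CO = 2      # CO2 + 2 H+ + 2 e- -> CO + H2O

    def faradaic_efficiency(charge_coulombs, moles_co):
        """Share of electrons that went into making CO rather than side products."""
        moles_co_max = charge_coulombs / (ELECTRONS_PER_CO * FARADAY)
        return moles_co / moles_co_max

    # Example: 100 C of charge passed. 5.18e-4 mol CO corresponds to ~100 percent
    # Faradaic efficiency, while 2.07e-4 mol CO corresponds to roughly 40 percent.
    for moles in (5.18e-4, 2.07e-4):
        print(f"{moles:.2e} mol CO -> Faradaic efficiency "
              f"{faradaic_efficiency(100, moles) * 100:.0f}%")
    ```

    A 100 percent Faradaic efficiency means no charge is lost to side reactions such as hydrogen evolution; it does not by itself say how much total energy (voltage) the conversion requires.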

    This technology could be scaled up for industrial use fairly easily, Furst says, because the carbon electrodes the researchers used are much less expensive than conventional metal electrodes. The catalysts are also inexpensive, as they don’t contain any precious metals, and only a small concentration of the catalyst is needed on the electrode surface.

    By swapping in different catalysts, the researchers plan to try making other products such as methanol and ethanol using this approach. Helix Carbon, the company started by Furst, is also working on further developing the technology for potential commercial use.

    The research was funded by the U.S. Army Research Office, the CIFAR Azrieli Global Scholars Program, the MIT Energy Initiative, and the MIT Deshpande Center.

  • Study finds lands used for grazing can worsen or help climate change

    When it comes to global climate change, livestock grazing can be either a blessing or a curse, according to a new study, which offers clues on how to tell the difference.

    If managed properly, the study shows, grazing can actually increase the amount of carbon from the air that gets stored in the ground and sequestered for the long run. But if there is too much grazing, soil erosion can result, and the net effect is to cause more carbon losses, so that the land becomes a net carbon source, instead of a carbon sink. And the study found that the latter is far more common around the world today.

    The new work, published today in the journal Nature Climate Change, provides ways to determine the tipping point between the two, for grazing lands in a given climate zone and soil type. It also provides an estimate of the total amount of carbon that has been lost over past decades due to livestock grazing, and how much could be removed from the atmosphere if optimized grazing management were implemented. The study was carried out by Cesar Terrer, an assistant professor of civil and environmental engineering at MIT; Shuai Ren, a PhD student at the Chinese Academy of Sciences whose thesis is co-supervised by Terrer; and four others.

    “This has been a matter of debate in the scientific literature for a long time,” Terrer says. “In general experiments, grazing decreases soil carbon stocks, but surprisingly, sometimes grazing increases soil carbon stocks, which is why it’s been puzzling.”

    What happens, he explains, is that “grazing could stimulate vegetation growth through easing resource constraints such as light and nutrients, thereby increasing root carbon inputs to soils, where carbon can stay there for centuries or millennia.”

    But that only works up to a certain point, the team found after a careful analysis of 1,473 soil carbon observations from different grazing studies from many locations around the world. “When you cross a threshold in grazing intensity, or the amount of animals grazing there, that is when you start to see sort of a tipping point — a strong decrease in the amount of carbon in the soil,” Terrer explains.

    That loss is thought to be primarily from increased soil erosion on the denuded land. And with that erosion, Terrer says, “basically you lose a lot of the carbon that you have been locking in for centuries.”

    The various studies the team compiled, although they differed somewhat, essentially used similar methodology, which is to fence off a portion of land so that livestock can’t access it, and then after some time take soil samples from within the enclosure area, and from comparable nearby areas that have been grazed, and compare the content of carbon compounds.

    “Along with the data on soil carbon for the control and grazed plots,” he says, “we also collected a bunch of other information, such as the mean annual temperature of the site, mean annual precipitation, plant biomass, and properties of the soil, like pH and nitrogen content. And then, of course, we estimate the grazing intensity — aboveground biomass consumed, because that turns out to be the key parameter.”  

    With artificial intelligence models, the authors quantified the importance of each of these factors — grazing intensity, temperature, precipitation, and soil properties — in modulating the sign (positive or negative) and magnitude of the impact of grazing on soil carbon stocks. “Interestingly, we found soil carbon stocks increase and then decrease with grazing intensity, rather than the expected linear response,” says Ren.
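
    To make the shape of that response concrete: if the change in soil carbon first rises and then falls with grazing intensity, there is an optimum intensity and, beyond it, a threshold where the land flips from carbon sink to carbon source. The quadratic form and coefficients below are invented for illustration; the study derived its response curves from machine-learning models fit to the field data, not from this formula.

    ```python
    # Hypothetical illustration of an increase-then-decrease response of soil
    # carbon to grazing intensity. The functional form and coefficients are
    # invented; the study used ML models fit to field observations.

    A, B = 2.0, 4.0   # invented coefficients for the illustrative response curve

    def soil_carbon_change(intensity):
        """Net change in soil carbon (arbitrary units) at a given grazing intensity (0-1)."""
        return A * intensity - B * intensity**2

    optimum = A / (2 * B)    # intensity giving the largest carbon gain
    threshold = A / B        # beyond this, the land flips from carbon sink to source

    print(f"Optimal grazing intensity:  {optimum:.2f}")
    print(f"Sink-to-source threshold:   {threshold:.2f}")
    print(f"Gain at the optimum:        {soil_carbon_change(optimum):+.2f}")
    print(f"Change when overgrazed:     {soil_carbon_change(0.75):+.2f}")
    ```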

    Having developed the model through AI methods and validated it, including by comparing its predictions with those based on underlying physical principles, the team could then apply it to estimate both past and future effects. “In this case,” Terrer says, “we use the model to quantify the historical losses in soil carbon stocks from grazing. And we found that 46 petagrams [billion metric tons] of soil carbon, down to a depth of one meter, have been lost in the last few decades due to grazing.”

    By way of comparison, total carbon emissions per year from all fossil fuels amount to about 10 petagrams, so the loss from grazing equals more than four years’ worth of all the world’s fossil emissions combined.

    What they found was “an overall decline in soil carbon stocks, but with a lot of variability,” Terrer says. The analysis showed that the interplay between grazing intensity and environmental conditions such as temperature could explain the variability, with higher grazing intensity and hotter climates resulting in greater carbon loss. “This means that policy-makers should take into account local abiotic and biotic factors to manage rangelands efficiently,” Ren notes. “By ignoring such complex interactions, we found that using IPCC [Intergovernmental Panel on Climate Change] guidelines would underestimate grazing-induced soil carbon loss by a factor of three globally.”

    Using an approach that incorporates local environmental conditions, the team produced global, high-resolution maps of optimal grazing intensity and the threshold of intensity at which carbon starts to decrease very rapidly. These maps are expected to serve as important benchmarks for evaluating existing grazing practices and provide guidance to local farmers on how to effectively manage their grazing lands.

    Then, using that map, the team estimated how much carbon could be captured if all grazing lands were limited to their optimum grazing intensity. Currently, the authors found, about 20 percent of all pasturelands have crossed the thresholds, leading to severe carbon losses. However, they found that under the optimal levels, global grazing lands would sequester 63 petagrams of carbon. “It is amazing,” Ren says. “This value is roughly equivalent to a 30-year carbon accumulation from global natural forest regrowth.”

    That would be no simple task, of course. To achieve optimal levels, the team found that approximately 75 percent of all grazing areas need to reduce grazing intensity. Overall, if the world seriously reduces the amount of grazing, “you have to reduce the amount of meat that’s available for people,” Terrer says.

    “Another option is to move cattle around,” he says, “from areas that are more severely affected by grazing intensity, to areas that are less affected. Those rotations have been suggested as an opportunity to avoid the more drastic declines in carbon stocks without necessarily reducing the availability of meat.”

    This study didn’t delve into these social and economic implications, Terrer says. “Our role is to just point out what would be the opportunity here. It shows that shifts in diets can be a powerful way to mitigate climate change.”

    “This is a rigorous and careful analysis that provides our best look to date at soil carbon changes due to livestock grazing practiced worldwide,” say Ben Bond-Lamberty, a terrestrial ecosystem research scientist at Pacific Northwest National Laboratory, who was not associated with this work. “The authors’ analysis gives us a unique estimate of soil carbon losses due to grazing and, intriguingly, where and how the process might be reversed.”

    He adds: “One intriguing aspect to this work is the discrepancies between its results and the guidelines currently used by the IPCC — guidelines that affect countries’ commitments, carbon-market pricing, and policies.” However, he says, “As the authors note, the amount of carbon historically grazed soils might be able to take up is small relative to ongoing human emissions. But every little bit helps!”

    “Improved management of working lands can be a powerful tool to combat climate change,” says Jonathan Sanderman, carbon program director of the Woodwell Climate Research Center in Falmouth, Massachusetts, who was not associated with this work. He adds, “This work demonstrates that while, historically, grazing has been a large contributor to climate change, there is significant potential to decrease the climate impact of livestock by optimizing grazing intensity to rebuild lost soil carbon.”

    Terrer states that for now, “we have started a new study, to evaluate the consequences of shifts in diets for carbon stocks. I think that’s the million-dollar question: How much carbon could you sequester, compared to business as usual, if diets shift to more vegan or vegetarian?” The answers will not be simple, because a shift to more vegetable-based diets would require more cropland, which can also have different environmental impacts. Pastures take more land than crops, but produce different kinds of emissions. “What’s the overall impact for climate change? That is the question we’re interested in,” he says.

    The research team included Juan Li, Yingfao Cao, Sheshan Yang, and Dan Liu, all with the Chinese Academy of Sciences. The work was supported by the Second Tibetan Plateau Scientific Expedition and Research Program, and the Science and Technology Major Project of Tibetan Autonomous Region of China.

  • Local journalism is a critical “gate” to engage Americans on climate change

    Last year, Pew Research Center data revealed that only 37 percent of Americans said addressing climate change should be a top priority for the president and Congress. Furthermore, climate change was ranked 17th out of 21 national issues included in a Pew survey. 

    But in reality, it’s not that Americans don’t care about climate change, says celebrated climate scientist and communicator MIT Professor Katharine Hayhoe. It’s that they don’t know that they already do. 

    To get Americans to care about climate change, she adds, it’s imperative to guide them to their gate. At first, it might not be clear where that gate is. But it exists. 

    That message was threaded through the Connecting with Americans on Climate Change webinar last fall, which featured a discussion with Hayhoe and the five journalists who made up the 2023 cohort of the MIT Environmental Solutions Journalism Fellowship. Hayhoe referred to a “gate” as a conversational entry point about climate impacts and solutions. The catch? It doesn’t have to be climate-specific. Instead, it can focus on the things that people already hold close to their heart.

    “If you show people … whether it’s a military veteran or a parent or a fiscal conservative or somebody who is in a rural farming area or somebody who loves kayaking or birds or who just loves their kids … how they’re the perfect person to care [about climate change], then it actually enhances their identity to advocate for and adopt climate solutions,” said Hayhoe. “It makes them a better parent, a more frugal fiscal conservative, somebody who’s more invested in the security of their country. It actually enhances who they already are instead of trying to turn them into someone else.”

    The MIT Environmental Solutions Journalism Fellowship provides financial and technical support to journalists dedicated to connecting local stories to broader climate contexts, especially in parts of the country where climate change is disputed or underreported. 

    Climate journalism is typically limited to larger national news outlets that have the resources to employ dedicated climate reporters. And since many local papers are already struggling — with the country on track to lose a third of its papers by the end of next year, leaving over 50 percent of counties in the United States with just one or no local news outlets — local climate beats can be neglected. That makes the work of the MIT Environmental Solutions Initiative’s (ESI’s) fellows all the more important, because for many Americans, the relevance of these stories to their own community is their gate to climate action.

    “This is the only climate journalism fellowship that focuses exclusively on local storytelling,” says Laur Hesse Fisher, program director at MIT ESI and founder of the fellowship. “It’s a model for engaging some of the hardest audiences to reach: people who don’t think they care much about climate change. These talented journalists tell powerful, impactful stories that resonate directly with these audiences.”

    From March to June, the second cohort of ESI Journalism Fellows pursued local, high-impact climate reporting in Montana, Arizona, Maine, West Virginia, and Kentucky. 

    Collectively, their 26 stories had over 70,000 direct visits on their host outlets’ websites as of August 2023, gaining hundreds of responses from local voters, lawmakers, and citizen groups. Even though they targeted local audiences, they also had national appeal, as they were republished by 46 outlets — including Vox, Grist, WNYC, WBUR, the NPR homepage, and three separate stories on NPR’s “Here & Now” program, which is broadcast by 45 additional partner radio stations across the country — with a collective reach in the hundreds of thousands. 

    Micah Drew published an eight-part series in The Flathead Beacon titled, “Montana’s Climate Change Lawsuit.” It followed a landmark case of 16 young people in Montana suing the state for violating their right to a “clean and healthful environment.” Of the plaintiffs, Drew said, “They were able to articulate very clearly what they’ve seen, what they’ve lived through in a pretty short amount of life. Some of them talked about wildfires — which we have a lot of here in Montana — and [how] wildfire smoke has canceled soccer games at the high school level. It cancels cross-country practice; it cancels sporting events. I mean, that’s a whole section of your livelihood when you’re that young that’s now being affected.”

    Joan Meiners is a climate news reporter for the Arizona Republic. Her five-part series was situated at the intersection of Phoenix’s extreme heat and housing crises. “I found that we are building three times more sprawling, single-family detached homes … as the number of apartment building units,” she says. “And with an affordability crisis, with a climate crisis, we really need to rethink that. The good news, which I also found through research for this series … is that Arizona doesn’t have a statewide building code, so each municipality decides on what they’re going to require builders to follow … and there’s a lot that different municipalities can do just by showing up to their city council meetings [and] revising the building codes.”

    For The Maine Monitor, freelance journalist Annie Ropeik generated a four-part series, called “Hooked on Heating Oil,” on how Maine came to rely on oil for home heating more than any other state. When asked about solutions, Ropeik says, “Access to fossil fuel alternatives was really the central equity issue that I was looking at in my project, beyond just, ‘Maine is really relying on heating oil, that obviously has climate impacts, it’s really expensive.’ What does that mean for people in different financial situations, and what does that access to solutions look like for those different communities? What are the barriers there and how can we address those?”

    Energy and environment reporter Mike Tony created a four-part series in The Charleston Gazette-Mail on West Virginia’s flood vulnerabilities and the state’s lack of climate action. On connecting with audiences, Tony says, “The idea was to pick a topic like flooding that really affects the whole state, and from there, use that as a sort of an inroad to collect perspectives from West Virginians on how it’s affecting them. And then use that as a springboard to scrutinizing the climate politics that are precluding more aggressive action.”

    Finally, Ryan Van Velzer, Louisville Public Media’s energy and environment reporter, covered the decline of Kentucky’s fossil fuel industry and offered solutions for a sustainable future in a four-part series titled, “Coal’s Dying Light.” For him, it was “really difficult to convince people that climate change is real when the economy is fundamentally intertwined with fossil fuels. To a lot of these people, climate change, and the changes necessary to mitigate climate change, can cause real and perceived economic harm to these communities.” 

    With these projects in mind, someone’s gate to caring about climate change is probably nearby — in their own home, community, or greater region. 

    It’s likely closer than they think. 

    To learn more about the next fellowship cohort — which will support projects that report on climate solutions being implemented locally and how they reduce emissions while simultaneously solving pertinent local issues — sign up for the MIT Environmental Solutions Initiative newsletter. Questions about the fellowship can be directed to Laur Hesse Fisher at climate@mit.edu.

  • Faculty, staff, students to evaluate ways to decarbonize MIT’s campus

    With a goal to decarbonize the MIT campus by 2050, the Institute must look at “new ideas, transformed into practical solutions, in record time,” as stated in “Fast Forward: MIT’s Climate Action Plan for the Decade.” This charge calls on the MIT community to explore game-changing and evolving technologies with the potential to move campuses like MIT away from carbon emissions-based energy systems.

    To help meet this tremendous challenge, the Decarbonization Working Group — a new subset of the Climate Nucleus — recently launched. Composed of appointed MIT faculty, researchers, and students, the working group is leveraging its members’ expertise to meet the charge of exploring and assessing existing and in-development solutions to decarbonize the MIT campus by 2050. The group is specifically charged with informing MIT’s efforts to decarbonize the campus’s district energy system.

    Co-chaired by Director of Sustainability Julie Newman and Department of Architecture Professor Christoph Reinhart, the working group includes members with deep knowledge of low- and zero-carbon technologies and grid-level strategies. In convening the group, Newman and Reinhart sought out members researching these technologies as well as exploring their practical use. “In my work on multiple projects on campus, I have seen how cutting-edge research often relies on energy-intensive equipment,” shares PhD student and group member Ippolyti Dellatolas. “It’s clear how new energy-efficiency strategies and technologies could use campus as a living lab and then broadly deploy these solutions across campus for scalable emissions reductions.” This approach is one of MIT’s strong suits and a recurring theme in its climate action plans — using the MIT campus as a test bed for learning and application. “We seek to study and analyze solutions for our campus, with the understanding that our findings have implications far beyond our campus boundaries,” says Newman.

    The efforts of the working group represent just one part of the multipronged approach to identify ways to decarbonize the MIT campus. The group will work in parallel and at times collaboratively with the team from the Office of the Vice President for Campus Services and Stewardship that is managing the development plan for potential zero-carbon pathways for campus buildings and the district energy system. In May 2023, MIT engaged Affiliated Engineers, Inc. (AEI), to support the Institute’s efforts to identify, evaluate, and model various carbon-reduction strategies and technologies to provide MIT with a series of potential decarbonization pathways. Each of the pathways must demonstrate how to manage the generation of energy and its distribution and use on campus. As MIT explores electrification, a significant challenge will be the availability of resilient clean power from the grid to help generate heat for our campus without reliance on natural gas.

    When the Decarbonization Working Group began work this fall, members took the time to learn more about current systems and baseline information. Beginning this month, members will organize analysis around each of their individual areas of expertise and interest and begin to evaluate existing and emerging carbon reduction technologies. “We are fortunate that there are constantly new ideas and technologies being tested in this space and that we have a committed group of faculty working together to evaluate them,” Newman says. “We are aware that not every technology is the right fit for our unique dense urban campus, nor are we solving for a zero-carbon campus as an island, but rather in the context of an evolving regional power grid.”

    Supported by funding from the Climate Nucleus, the group’s technology evaluations will include site visits to locations where priority technologies are currently deployed or being tested. These site visits may range from university campuses implementing district geothermal and heat pumps to test sites of deep geothermal or microgrid infrastructure manufacturers. “This is a unique moment for MIT to demonstrate leadership by combining best decarbonization practices, such as retrofitting building systems to achieve deep energy reductions and converting to low-temperature district heating systems with ‘nearly there’ technologies such as deep geothermal, micronuclear, energy storage, and ubiquitous occupancy-driven temperature control,” says Reinhart. “As first adopters, we can find out what works, allowing other campuses to follow us at reduced risks.”

    The findings and recommendations of the working group will be delivered in a report to the community at the end of 2024. There will be opportunities for the MIT community to learn more about MIT’s decarbonization efforts at community events on Jan. 24 and March 14, as well as MIT’s Sustainability Connect forum on Feb. 8.

  • Meeting the clean energy needs of tomorrow

    Yuri Sebregts, chief technology officer at Shell, succinctly laid out the energy dilemma facing the world over the rest of this century. On one hand, demand for energy is quickly growing as countries in the developing world modernize and the global population grows, with 100 gigajoules of energy per person needed annually to enable quality-of-life benefits and industrialization around the globe. On the other, traditional energy sources are quickly warming the planet, with the world already seeing the devastating effects of increasingly frequent extreme weather events. 
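
    For a sense of scale, that 100-gigajoule figure can be turned into a global total with one line of arithmetic. The sketch below assumes a global population of roughly 8 billion people, which is an added assumption rather than a number from the talk.

    ```python
    # Rough scale of the demand side of the dilemma described above.
    # The 100 GJ/person figure is from the talk; the population is an assumption.

    GJ_PER_PERSON = 100            # annual energy per person cited in the talk
    POPULATION = 8_000_000_000     # assumed global population (~8 billion)

    total_ej = GJ_PER_PERSON * POPULATION / 1e9      # 1 EJ = 1e9 GJ
    total_twh = total_ej * 1e18 / 3.6e15             # 1 TWh = 3.6e15 J

    print(f"Implied global demand: {total_ej:.0f} EJ per year (~{total_twh:,.0f} TWh)")
    ```

    That implied 800 exajoules a year sits somewhat above today’s global primary energy use of roughly 600 exajoules, which is the gap Sebregts points to.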

    While the goals of energy security and energy sustainability are seemingly at odds with one another, the two must be pursued in tandem, Sebregts said during his address at the MIT Energy Initiative Fall Colloquium.

    “An environmentally sustainable energy system that isn’t also a secure energy system is not sustainable,” Sebregts said. “And conversely, a secure energy system that is not environmentally sustainable will do little to ensure long-term energy access and affordability. Therefore, security and sustainability must go hand-in-hand. You can’t trade off one for the other.”

    Sebregts noted that there are several potential pathways to help strike this balance, including investments in renewable energy sources, the use of carbon offsets, and the creation of more efficient tools, products, and processes. However, he acknowledged that meeting growing energy demands while minimizing environmental impacts is a global challenge requiring an unprecedented level of cooperation among countries and corporations across the world. 

    “At Shell, we recognize that this will require a lot of collaboration between governments, businesses, and civil society,” Sebregts said. “That’s not always easy.”

    Global conflict and global warming

    In 2021, Sebregts noted, world leaders gathered in Glasgow, Scotland and collectively promised to deliver on the “stretch goal” of the 2015 Paris Agreement, which would limit global warming to 1.5 degrees Celsius — a level that scientists believe will help avoid the worst potential impacts of climate change. But, just a few months later, Russia invaded Ukraine, resulting in chaos in global energy markets and illustrating the massive impact that geopolitical friction can have on efforts to reduce carbon emissions.

    “Even though global volatility has been a near constant of this century, the situation in Ukraine is proving to be a turning point,” Sebregts said. “The stress it placed on the global supply of energy, food, and other critical materials was enormous.”

    In Europe, Sebregts noted, countries affected by the loss of Russia’s natural gas supply began importing from the Middle East and the United States. This, in turn, drove up prices. While this did result in some efforts to limit energy use, such as Europeans lowering their thermostats in the winter, it also caused some energy buyers to turn to coal. For instance, the German government approved additional coal mining to boost its energy security — temporarily reversing a decades-long transition away from the fuel. To put this into wider perspective, in a single quarter, China increased its coal generation capacity by as much as Germany had reduced its own over the previous 20 years.

    The promise of electrification

    Sebregts noted the strides being made toward electrification, which is expected to have a significant impact on global carbon emissions. To reach net-zero emissions (the point at which humans are adding no more carbon to the atmosphere than they are removing) by 2050, electricity’s share of total worldwide energy consumption must reach 37 percent by 2030, up from 20 percent in 2020, Sebregts said.

    He pointed out that Shell has become one of the world’s largest electric vehicle charging companies, with more than 30,000 public charge points. By 2025, that number will increase to 70,000, and it is expected to soar to 200,000 by 2030. While demand and infrastructure for electric vehicles are growing, Sebregts said that the “real needle-mover” will be industrial electrification, especially in so-called “hard-to-abate” sectors.

    This progress will depend heavily on global cooperation — Sebregts pointed out that China dominates the international market for many rare elements that are key components of electrification infrastructure. “It shouldn’t be a surprise that the political instability, shifting geopolitical tensions, and environmental and social governance issues are significant risks for the energy transition,” he said. “It is imperative that we reduce, control, and mitigate these risks as much as possible.”

    Two possible paths

    For decades, Sebregts said, Shell has created scenarios to help senior managers think through the long-term challenges facing the company. While Sebregts stressed that these scenarios are not predictions, they do take into account real-world conditions, and they are meant to give leaders the opportunity to grapple with plausible situations.

    With this in mind, Sebregts outlined Shell’s most recent Energy Security Scenarios, describing the potential future consequences of attempts to balance growing energy demand with sustainability — scenarios that envision vastly different levels of global cooperation, with huge differences in projected results. 

    The first scenario, dubbed “Archipelagos,” imagines countries pursuing energy security through self-interest — a fragmented, competitive process that would result in a global temperature increase of 2.2 degrees Celsius by the end of this century. The second scenario, “Sky 2050,” envisions countries around the world collaborating to change the energy system for their mutual benefit. This more optimistic scenario would see a much lower global temperature increase of 1.2 C by 2100.

    “The good news is that in both scenarios, the world is heading for net-zero emissions at some point,” Sebregts said. “The difference is a question of when it gets there. In Sky 2050, it is the middle of the century. In Archipelagos, it is early in the next century.”

    On the other hand, Sebregts added, the average global temperature will increase by more than 1.5 C for some period of time in either scenario. But, in the Archipelagos scenario, this overshoot will be much larger, and will take much longer to come down. “So, two very different futures,” Sebregts said. “Two very different worlds.”

    The work ahead

    Questioned about the costs of transitioning to a net-zero energy ecosystem, Sebregts said that it is “very hard” to provide an accurate answer. “If you impose an additional constraint … you’re going to have to add some level of cost,” he said. “But then, of course, there’s 30 years of technology development pathway that might counteract some of that.”

    In some cases, such as air travel, Sebregts said, it will likely remain impractical to either rely on electrification or sequester carbon at the source of emission. Direct air capture (DAC) methods, which mechanically pull carbon directly from the atmosphere, will have a role to play in offsetting these emissions, he said. Sebregts predicted that the price of DAC could come down significantly by the middle of this century. “I would venture that a price of $200 to $250 a ton of CO2 by 2050 is something that the world would be willing to spend, at least in developed economies, to offset those very hard-to-abate instances.”

    Sebregts noted that Shell is working on demonstrating DAC technologies in Houston, Texas, constructing what will become Europe’s largest hydrogen plant in the Netherlands, and taking other steps to profitably transition to a net-zero emissions energy company by 2050. “We need to understand what can help our customers transition quicker and how we can continue to satisfy their needs,” he said. “We must ensure that energy is affordable, accessible, and sustainable, as soon as possible.”