    Cutting carbon emissions on the US power grid

    To help curb climate change, the United States is working to reduce carbon emissions from all sectors of the energy economy. Much of the current effort involves electrification — switching to electric cars for transportation, electric heat pumps for home heating, and so on. But in the United States, the electric power sector already generates about a quarter of all carbon emissions. “Unless we decarbonize our electric power grids, we’ll just be shifting carbon emissions from one source to another,” says Amanda Farnsworth, a PhD candidate in chemical engineering and research assistant at the MIT Energy Initiative (MITEI).

    But decarbonizing the nation’s electric power grids will be challenging. The availability of renewable energy resources such as solar and wind varies in different regions of the country. Likewise, patterns of energy demand differ from region to region. As a result, the least-cost pathway to a decarbonized grid will differ from one region to another.

    Over the past two years, Farnsworth and Emre Gençer, a principal research scientist at MITEI, developed a power system model that would allow them to investigate the importance of regional differences — and would enable experts and laypeople alike to explore their own regions and make informed decisions about the best way to decarbonize. “With this modeling capability you can really understand regional resources and patterns of demand, and use them to do a ‘bespoke’ analysis of the least-cost approach to decarbonizing the grid in your particular region,” says Gençer.

    To demonstrate the model’s capabilities, Gençer and Farnsworth performed a series of case studies. Their analyses confirmed that strategies must be designed for specific regions and that all the costs and carbon emissions associated with manufacturing and installing solar and wind generators must be included for accurate accounting. But the analyses also yielded some unexpected insights, including a correlation between a region’s wind energy and the ease of decarbonizing, and the important role of nuclear power in decarbonizing the California grid.

    A novel model

    For many decades, researchers have been developing “capacity expansion models” to help electric utility planners tackle the problem of designing power grids that are efficient, reliable, and low-cost. More recently, many of those models also factor in the goal of reducing or eliminating carbon emissions. While those models can provide interesting insights relating to decarbonization, Gençer and Farnsworth believe they leave some gaps that need to be addressed.

    For example, most focus on conditions and needs in a single U.S. region without drawing out what makes that region distinctive. Hardly any consider the carbon emitted in fabricating and installing such “zero-carbon” technologies as wind turbines and solar panels. And finally, most of the models are challenging to use. Even experts in the field must search out and assemble various complex datasets in order to perform a study of interest.

    Gençer and Farnsworth’s capacity expansion model — called Ideal Grid, or IG — addresses those and other shortcomings. IG is built within the framework of MITEI’s Sustainable Energy System Analysis Modeling Environment (SESAME), an energy system modeling platform that Gençer and his colleagues at MITEI have been developing since 2017. SESAME models the levels of greenhouse gas emissions from multiple, interacting energy sectors in future scenarios.

    Importantly, SESAME includes both techno-economic analyses and life-cycle assessments of various electricity generation and storage technologies. It thus considers costs and emissions incurred at each stage of the life cycle (manufacture, installation, operation, and retirement) for all generators. Most capacity expansion models only account for emissions from operation of fossil fuel-powered generators. As Farnsworth notes, “While this is a good approximation for our current grid, emissions from the full life cycle of all generating technologies become non-negligible as we transition to a highly renewable grid.”
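
    The difference between the two accounting approaches can be pictured with a small sketch. The stage values and the `lifecycle_intensity` helper below are illustrative inventions, not SESAME data or code:

```python
# Sketch: life-cycle vs. operation-only emissions accounting for a generator.
# All stage values are illustrative placeholders, not SESAME data.

def lifecycle_intensity(stage_emissions):
    """Sum emissions over all life-cycle stages (g CO2 per kWh generated)."""
    return sum(stage_emissions.values())

# Hypothetical stage breakdown for a solar farm (g CO2/kWh).
solar = {"manufacture": 30.0, "installation": 5.0,
         "operation": 0.0, "retirement": 3.0}

operation_only = solar["operation"]   # what operation-only models report
full = lifecycle_intensity(solar)     # what a life-cycle assessment reports

print(operation_only, full)  # 0.0 38.0
```

    In a fossil-heavy grid the two accountings nearly coincide; in a highly renewable grid, the gap between them is the whole story.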

    Through its connection with SESAME, the IG model has access to data on costs and emissions associated with many technologies critical to power grid operation. To explore regional differences in the cost-optimized decarbonization strategies, the IG model also includes conditions within each region, notably details on demand profiles and resource availability.

    In one recent study, Gençer and Farnsworth selected nine of the standard North American Electric Reliability Corporation (NERC) regions. For each region, they incorporated hourly electricity demand into the IG model. Farnsworth also gathered meteorological data for the nine U.S. regions for seven years — 2007 to 2013 — and calculated hourly power output profiles for the renewable energy sources, including solar and wind, taking into account the geography-limited maximum capacity of each technology.

    The availability of wind and solar resources differs widely from region to region. To permit a quick comparison, the researchers use a measure called “annual capacity factor,” which is the ratio between the electricity produced by a generating unit in a year and the electricity that could have been produced if that unit had operated continuously at full power for that year. Values for the capacity factors in the nine U.S. regions vary between 20 and 30 percent for solar power and between 25 and 45 percent for wind.
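
    That definition translates directly into code. A minimal sketch with a synthetic 24-hour profile (real profiles come from years of hourly meteorological data):

```python
# Sketch: capacity factor = energy produced / energy at continuous full
# power. The profile below is synthetic, not a real regional profile.

def capacity_factor(hourly_output_mw, nameplate_mw):
    produced_mwh = sum(hourly_output_mw)               # one sample per hour
    potential_mwh = nameplate_mw * len(hourly_output_mw)
    return produced_mwh / potential_mwh

# Toy solar-like day for a 100 MW plant: dark at night, full output midday.
profile = [0]*6 + [20, 50, 80, 100, 100, 100, 100, 80, 50, 20] + [0]*8
print(f"{capacity_factor(profile, nameplate_mw=100):.1%}")  # 29.2%
```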

    Calculating optimized grids for different regions

    For their first case study, Gençer and Farnsworth used the IG model to calculate cost-optimized regional grids to meet defined caps on carbon dioxide (CO2) emissions. The analyses were based on cost and emissions data for 10 technologies: nuclear, wind, solar, three types of natural gas, three types of coal, and energy storage using lithium-ion batteries. Hydroelectric was not considered in this study because there was no comprehensive study outlining potential expansion sites with their respective costs and expected power output levels.

    To make region-to-region comparisons easy, the researchers used several simplifying assumptions. Their focus was on electricity generation, so the model calculations assume the same transmission and distribution costs and efficiencies for all regions. Also, the calculations did not consider the generator fleet currently in place. The goal was to investigate what happens if each region were to start from scratch and generate an “ideal” grid.

    To begin, Gençer and Farnsworth calculated the most economic combination of technologies for each region if it limits its total carbon emissions to 100, 50, and 25 grams of CO2 per kilowatt-hour (kWh) generated. For context, the current U.S. average emissions intensity is 386 grams of CO2 per kWh.
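
    The kind of trade-off the model optimizes can be shown with a deliberately tiny sketch: two technologies, invented costs and intensities, and a brute-force search for the cheapest mix that respects the cap. (The real IG model co-optimizes ten technologies against hourly demand; nothing below is from the study.)

```python
# Sketch: cheapest two-technology mix under an emissions cap.
# Costs and intensities are invented, not the study's inputs.

def cheapest_mix(cost, intensity, cap_g_per_kwh, step=0.01):
    """Brute-force the share of technology A (vs. B) that minimizes
    average cost while keeping average emissions under the cap."""
    (ca, cb), (ea, eb) = cost, intensity
    best = None
    for i in range(int(round(1 / step)) + 1):
        share_a = i * step
        avg_e = share_a * ea + (1 - share_a) * eb
        if avg_e > cap_g_per_kwh:
            continue
        avg_c = share_a * ca + (1 - share_a) * cb
        if best is None or avg_c < best[1]:
            best = (share_a, avg_c)
    return best  # None if the cap is infeasible

# Hypothetical: A = natural gas ($40/MWh, 400 g/kWh); B = wind ($50/MWh, 10 g/kWh).
share, cost_per_mwh = cheapest_mix((40, 50), (400, 10), cap_g_per_kwh=100)
print(share, round(cost_per_mwh, 1))  # gas capped at ~23% of the mix
```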

    Given the wide variation in regional demand, the researchers needed to use a new metric to normalize their results and permit a one-to-one comparison between regions. Accordingly, the model calculates the required generating capacity divided by the average demand for each region. The required capacity accounts for both the variation in demand and the inability of generating systems — particularly solar and wind — to operate at full capacity all of the time.
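
    The normalization itself is simple; a sketch with two invented regions of very different size:

```python
# Sketch: normalized capacity = installed capacity / average demand,
# so regions of different sizes can be compared head to head.
# Regions and numbers are invented for illustration.

def capacity_per_avg_demand(installed_gw, hourly_demand_gw):
    return installed_gw / (sum(hourly_demand_gw) / len(hourly_demand_gw))

# Two invented regions with a 10x size difference but the same ratio.
big = capacity_per_avg_demand(300, [90, 110, 100, 100])  # avg demand 100 GW
small = capacity_per_avg_demand(30, [9, 11, 10, 10])     # avg demand 10 GW
print(big, small)  # 3.0 3.0 -- directly comparable despite the size gap
```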

    The analysis was based on regional demand data for 2021 — the most recent data available. And for each region, the model calculated the cost-optimized power grid seven times, using weather data from seven years. This discussion focuses on mean values for cost and total capacity installed and also total values for coal and for natural gas, although the analysis considered three separate technologies for each fuel.

    The results of the analyses confirm that there’s a wide variation in the cost-optimized system from one region to another. Most notable is that some regions require a lot of energy storage while others don’t require any at all. The availability of wind resources turns out to play an important role, while the use of nuclear is limited: the carbon intensity of nuclear (including uranium mining and transportation) is lower than that of either solar or wind, but nuclear is the most expensive technology option, so it’s added only when necessary. Finally, the change in the CO2 emissions cap brings some interesting responses.

    Under the most lenient limit on emissions — 100 grams of CO2 per kWh — there’s no coal in the mix anywhere. It’s the first to go, in general being replaced by the lower-carbon-emitting natural gas. Texas, Central, and North Central — the regions with the most wind — don’t need energy storage, while the other six regions do. The regions with the least wind — California and the Southwest — have the highest energy storage requirements. Unlike the other regions modeled, California begins installing nuclear, even at the most lenient limit.

    As the model plays out, under the moderate cap — 50 grams of CO2 per kWh — most regions bring in nuclear power. California and the Southeast — regions with low wind capacity factors — rely on nuclear the most. In contrast, wind-rich Texas, Central, and North Central don’t incorporate nuclear yet but instead add energy storage — a less-expensive option — to their mix. There’s still a bit of natural gas everywhere, in spite of its CO2 emissions.

    Under the most restrictive cap — 25 grams of CO2 per kWh — nuclear is in the mix everywhere. The highest use of nuclear is again correlated with low wind capacity factor. Central and North Central depend on nuclear the least. All regions continue to rely on a little natural gas to keep prices from skyrocketing due to the necessary but costly nuclear component. With nuclear in the mix, the need for storage declines in most regions.

    Results of the cost analysis are also interesting. Texas, Central, and North Central all have abundant wind resources, and they can delay incorporating the costly nuclear option, so the cost of their optimized system tends to be lower than costs for the other regions. In addition, their total capacity deployment — including all sources — tends to be lower than for the other regions. California and the Southwest both rely heavily on solar, and in both regions, costs and total deployment are relatively high.

    Lessons learned

    One unexpected result is the benefit of combining solar and wind resources. The problem with relying on solar alone is obvious: “Solar energy is available only five or six hours a day, so you need to build a lot of other generating sources and abundant storage capacity,” says Gençer. But an analysis of unit-by-unit operations at an hourly resolution yielded a less-intuitive trend: While solar installations only produce power in the midday hours, wind turbines generate the most power in the nighttime hours. As a result, solar and wind power are complementary. Having both resources available is far more valuable than having either one or the other. And having both impacts the need for storage, says Gençer: “Storage really plays a role either when you’re targeting a very low carbon intensity or where your resources are mostly solar and they’re not complemented by wind.”
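
    The complementarity argument can be made concrete with a toy dispatch calculation. The 24-hour profiles and the crude storage-sizing heuristic below are invented for illustration (the IG model uses real hourly data over full years), but they show why night-peaking wind can eliminate the storage a solar-only system would need:

```python
# Sketch: a crude storage-sizing heuristic -- the deepest cumulative
# shortfall a battery would have to cover. Profiles are invented.

def storage_needed(supply_mw, demand_mw):
    deficit, worst = 0.0, 0.0
    for s, d in zip(supply_mw, demand_mw):
        deficit = max(0.0, deficit + d - s)  # shortfall grows, surplus recharges
        worst = max(worst, deficit)
    return worst  # MWh of storage required

solar = [0]*6 + [40, 80, 120, 160, 160, 160, 160, 120, 80, 40] + [0]*8
wind = [120]*6 + [60]*10 + [120]*8       # strongest at night
demand = [100]*24                        # flat 100 MW demand

solar_only = storage_needed(solar, demand)
combined = storage_needed([s + w for s, w in zip(solar, wind)], demand)
print(solar_only, combined)  # 1280.0 0.0 -- wind wipes out the storage need
```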

    Gençer notes that the target for the U.S. electricity grid is to reach net zero by 2035. But the analysis showed that reaching just 100 grams of CO2 per kWh would require at least 50 percent of system capacity to be wind and solar. “And we’re nowhere near that yet,” he says.

    Indeed, Gençer and Farnsworth’s analysis doesn’t even include a zero emissions case. Why not? As Gençer says, “We cannot reach zero.” Wind and solar are usually considered to be net zero, but that’s not true. Wind, solar, and even storage have embedded carbon emissions due to materials, manufacturing, and so on. “To go to true net zero, you’d need negative emission technologies,” explains Gençer, referring to techniques that remove carbon from the air or ocean. That observation confirms the importance of performing life-cycle assessments.

    Farnsworth voices another concern: Coal quickly disappears in all regions because natural gas is an easy substitute for coal and has lower carbon emissions. “People say they’ve decreased their carbon emissions by a lot, but most have done it by transitioning from coal to natural gas power plants,” says Farnsworth. “But with that pathway for decarbonization, you hit a wall. Once you’ve transitioned from coal to natural gas, you’ve got to do something else. You need a new strategy — a new trajectory to actually reach your decarbonization target, which most likely will involve replacing the newly installed natural gas plants.”

    Gençer makes one final point: The availability of cheap nuclear — whether fission or fusion — would completely change the picture. When the tighter caps require the use of nuclear, the cost of electricity goes up. “The impact is quite significant,” says Gençer. “When we go from 100 grams down to 25 grams of CO2 per kWh, we see a 20 percent to 30 percent increase in the cost of electricity.” If it were available, a less-expensive nuclear option would likely be included in the technology mix under more lenient caps, significantly reducing the cost of decarbonizing power grids in all regions.

    The special case of California

    In another analysis, Gençer and Farnsworth took a closer look at California. In California, about 10 percent of total demand is now met with nuclear power. Yet the state’s remaining nuclear plants are scheduled for retirement very soon, and a 1976 law forbids the construction of new nuclear plants. (The state recently extended the lifetime of one nuclear plant to prevent the grid from becoming unstable.) “California is very motivated to decarbonize their grid,” says Farnsworth. “So how difficult will that be without nuclear power?”

    To find out, the researchers performed a series of analyses to investigate the challenge of decarbonizing in California with nuclear power versus without it. At 200 grams of CO2 per kWh — about a 50 percent reduction — the optimized mix and cost look the same with and without nuclear. Nuclear doesn’t appear due to its high cost. At 100 grams of CO2 per kWh — about a 75 percent reduction — nuclear does appear in the cost-optimized system, reducing the total system capacity while having little impact on the cost.

    But at 50 grams of CO2 per kWh, the ban on nuclear makes a significant difference. “Without nuclear, there’s about a 45 percent increase in total system size, which is really quite substantial,” says Farnsworth. “It’s a vastly different system, and it’s more expensive.” Indeed, the cost of electricity would increase by 7 percent.

    Going one step further, the researchers performed an analysis to determine the most decarbonized system possible in California. Without nuclear, the state could reach 40 grams of CO2 per kWh. “But when you allow for nuclear, you can get all the way down to 16 grams of CO2 per kWh,” says Farnsworth. “We found that California needs nuclear more than any other region due to its poor wind resources.”

    Impacts of a carbon tax

    One more case study examined a policy approach to incentivizing decarbonization. Instead of imposing a ceiling on carbon emissions, this strategy would tax every ton of carbon that’s emitted. Proposed taxes range from zero to $100 per ton.

    To investigate the effectiveness of different levels of carbon tax, Farnsworth and Gençer used the IG model to calculate the minimum-cost system for each region, assuming a certain cost for emitting each ton of carbon. The analyses show that a low carbon tax — just $10 per ton — significantly reduces emissions in all regions by phasing out all coal generation. In the Northwest region, for example, a carbon tax of $10 per ton decreases system emissions by 65 percent while increasing system cost by just 2.8 percent (relative to an untaxed system).
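
    The coal-to-gas flip can be reproduced with a hypothetical dispatch-cost sketch. The plant costs and emission rates below are invented round numbers, not the study's inputs:

```python
# Sketch: how a per-ton carbon tax reorders dispatch costs.
# Plant figures are invented round numbers, not the study's inputs.

def taxed_cost(cost_per_mwh, tons_co2_per_mwh, tax_per_ton):
    return cost_per_mwh + tons_co2_per_mwh * tax_per_ton

coal = {"cost": 30.0, "tons": 1.0}   # cheap untaxed, ~1 t CO2/MWh
gas = {"cost": 35.0, "tons": 0.4}    # pricier untaxed, ~0.4 t CO2/MWh

for tax in (0, 10, 50):
    c = taxed_cost(coal["cost"], coal["tons"], tax)
    g = taxed_cost(gas["cost"], gas["tons"], tax)
    print(f"${tax}/ton: coal ${c:.0f}/MWh, gas ${g:.0f}/MWh ->",
          "coal wins" if c < g else "gas wins")
```

    Even with these made-up numbers, a $10-per-ton tax is enough to flip the ordering, echoing the model's finding that a modest tax phases out coal quickly.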

    After coal has been phased out of all regions, every increase in the carbon tax brings a slow but steady linear decrease in emissions and a linear increase in cost. But the rates of those changes vary from region to region. For example, the rate of decrease in emissions for each added tax dollar is far lower in the Central region than in the Northwest, largely due to the Central region’s already low emissions intensity without a carbon tax. Indeed, the Central region without a carbon tax has a lower emissions intensity than the Northwest region with a tax of $100 per ton.

    As Farnsworth summarizes, “A low carbon tax — just $10 per ton — is very effective in quickly incentivizing the replacement of coal with natural gas. After that, it really just incentivizes the replacement of natural gas technologies with more renewables and more energy storage.” She concludes, “If you’re looking to get rid of coal, I would recommend a carbon tax.”

    Future extensions of IG

    The researchers have already added hydroelectric to the generating options in the IG model, and they are now planning further extensions. For example, they will include additional regions for analysis, add other long-term energy storage options, and make changes that allow analyses to take into account the generating infrastructure that already exists. Also, they will use the model to examine the cost and value of interregional transmission to take advantage of the diversity of available renewable resources.

    Farnsworth emphasizes that the analyses reported here are just samples of what’s possible using the IG model. The model is a web-based tool that includes embedded data covering the whole United States, and the output from an analysis includes an easy-to-understand display of the required installations, hourly operation, and overall techno-economic analysis and life-cycle assessment results. “The user is able to go in and explore a vast number of scenarios with no data collection or pre-processing,” she says. “There’s no barrier to begin using the tool. You can just hop on and start exploring your options so you can make an informed decision about the best path forward.”

    This work was supported by the International Energy Agency Gas and Oil Technology Collaboration Program and the MIT Energy Initiative Low-Carbon Energy Centers.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

    Tests show high-temperature superconducting magnets are ready for fusion

    In the predawn hours of Sept. 5, 2021, engineers achieved a major milestone in the labs of MIT’s Plasma Science and Fusion Center (PSFC), when a new type of magnet, made from high-temperature superconducting material, achieved a world-record magnetic field strength of 20 tesla for a large-scale magnet. That’s the intensity needed to build a fusion power plant that is expected to produce a net output of power and potentially usher in an era of virtually limitless power production.

    The test was immediately declared a success, having met all the criteria established for the design of the new fusion device, dubbed SPARC, for which the magnets are the key enabling technology. Champagne corks popped as the weary team of experimenters, who had labored long and hard to make the achievement possible, celebrated their accomplishment.

    But that was far from the end of the process. Over the ensuing months, the team tore apart and inspected the components of the magnet, pored over and analyzed the data from hundreds of instruments that recorded details of the tests, and performed two additional test runs on the same magnet, ultimately pushing it to its breaking point in order to learn the details of any possible failure modes.

    All of this work has now culminated in a detailed report by researchers at PSFC and MIT spinout company Commonwealth Fusion Systems (CFS), published in a collection of six peer-reviewed papers in a special edition of the March issue of IEEE Transactions on Applied Superconductivity. Together, the papers describe the design and fabrication of the magnet and the diagnostic equipment needed to evaluate its performance, as well as the lessons learned from the process. Overall, the team found, the predictions and computer modeling were spot-on, verifying that the magnet’s unique design elements could serve as the foundation for a fusion power plant.

    Enabling practical fusion power

    The successful test of the magnet, says Hitachi America Professor of Engineering Dennis Whyte, who recently stepped down as director of the PSFC, was “the most important thing, in my opinion, in the last 30 years of fusion research.”

    Before the Sept. 5 demonstration, the best-available superconducting magnets were powerful enough to potentially achieve fusion energy — but only at sizes and costs that could never be practical or economically viable. Then, when the tests showed the practicality of such a strong magnet at a greatly reduced size, “overnight, it basically changed the cost per watt of a fusion reactor by a factor of almost 40 in one day,” Whyte says.

    “Now fusion has a chance,” Whyte adds. Tokamaks, the most widely used design for experimental fusion devices, “have a chance, in my opinion, of being economical because you’ve got a quantum change in your ability, with the known confinement physics rules, about being able to greatly reduce the size and the cost of objects that would make fusion possible.”

    The comprehensive data and analysis from the PSFC’s magnet test, as detailed in the six new papers, has demonstrated that plans for a new generation of fusion devices — the one designed by MIT and CFS, as well as similar designs by other commercial fusion companies — are built on a solid foundation in science.

    The superconducting breakthrough

    Fusion, the process of combining light atoms to form heavier ones, powers the sun and stars, but harnessing that process on Earth has proved to be a daunting challenge, with decades of hard work and many billions of dollars spent on experimental devices. The long-sought, but never yet achieved, goal is to build a fusion power plant that produces more energy than it consumes. Such a power plant could produce electricity without emitting greenhouse gases during operation, while generating very little radioactive waste. Fusion’s fuel, a form of hydrogen that can be derived from seawater, is virtually limitless.

    But to make it work requires compressing the fuel at extraordinarily high temperatures and pressures, and since no known material could withstand such temperatures, the fuel must be held in place by extremely powerful magnetic fields. Producing such strong fields requires superconducting magnets, but all previous fusion magnets have been made with a superconducting material that requires frigid temperatures of about 4 degrees above absolute zero (4 kelvins, or -270 degrees Celsius). In the last few years, a newer material nicknamed REBCO, for rare-earth barium copper oxide, has been incorporated into fusion magnets, allowing them to operate at 20 kelvins, a temperature that, despite being only 16 kelvins warmer, brings significant advantages in terms of material properties and practical engineering.

    Taking advantage of this new higher-temperature superconducting material was not just a matter of substituting it in existing magnet designs. Instead, “it was a rework from the ground up of almost all the principles that you use to build superconducting magnets,” Whyte says. The new REBCO material is “extraordinarily different than the previous generation of superconductors. You’re not just going to adapt and replace, you’re actually going to innovate from the ground up.” The new papers in Transactions on Applied Superconductivity describe the details of that redesign process, now that patent protection is in place.

    A key innovation: no insulation

    One of the dramatic innovations, which had many others in the field skeptical of its chances of success, was the elimination of insulation around the thin, flat ribbons of superconducting tape that formed the magnet. Like virtually all electrical wires, conventional superconducting magnets are fully protected by insulating material to prevent short-circuits between the wires. But in the new magnet, the tape was left completely bare; the engineers relied on REBCO’s much greater conductivity to keep the current flowing through the material.

    “When we started this project, in let’s say 2018, the technology of using high-temperature superconductors to build large-scale high-field magnets was in its infancy,” says Zach Hartwig, the Robert N. Noyce Career Development Professor in the Department of Nuclear Science and Engineering. Hartwig has a co-appointment at the PSFC and is the head of its engineering group, which led the magnet development project. “The state of the art was small benchtop experiments, not really representative of what it takes to build a full-size thing. Our magnet development project started at benchtop scale and ended up at full scale in a short amount of time,” he adds, noting that the team built a 20,000-pound magnet that produced a steady, even magnetic field of just over 20 tesla — far beyond any such field ever produced at large scale.

    “The standard way to build these magnets is you would wind the conductor and you have insulation between the windings, and you need insulation to deal with the high voltages that are generated during off-normal events such as a shutdown.” Eliminating the layers of insulation, he says, “has the advantage of being a low-voltage system. It greatly simplifies the fabrication processes and schedule.” It also leaves more room for other elements, such as more cooling or more structure for strength.

    The magnet assembly is a slightly smaller-scale version of the ones that will form the donut-shaped chamber of the SPARC fusion device now being built by CFS in Devens, Massachusetts. It consists of 16 plates, called pancakes, each bearing a spiral winding of the superconducting tape on one side and cooling channels for helium gas on the other.

    But the no-insulation design was considered risky, and a lot was riding on the test program. “This was the first magnet at any sufficient scale that really probed what is involved in designing and building and testing a magnet with this so-called no-insulation no-twist technology,” Hartwig says. “It was very much a surprise to the community when we announced that it was a no-insulation coil.”

    Pushing to the limit … and beyond

    The initial test, described in previous papers, proved that the design and manufacturing process not only worked but was highly stable — something that some researchers had doubted. The next two test runs, also performed in late 2021, then pushed the device to the limit by deliberately creating unstable conditions, including a complete shutoff of incoming power that can lead to catastrophic overheating. Known as quenching, this is considered a worst-case scenario for the operation of such magnets, with the potential to destroy the equipment.

    Part of the mission of the test program, Hartwig says, was “to actually go off and intentionally quench a full-scale magnet, so that we can get the critical data at the right scale and the right conditions to advance the science, to validate the design codes, and then to take the magnet apart and see what went wrong, why did it go wrong, and how do we take the next iteration toward fixing that. … It was a very successful test.”

    That final test, which ended with the melting of one corner of one of the 16 pancakes, produced a wealth of new information, Hartwig says. For one thing, the team had been using several different computational models to design the magnet and predict various aspects of its performance, and for the most part, the models agreed in their overall predictions and were well-validated by the series of tests and real-world measurements. But in predicting the effect of the quench, the model predictions diverged, so it was necessary to get the experimental data to evaluate the models’ validity.

    “The highest-fidelity models that we had predicted almost exactly how the magnet would warm up, to what degree it would warm up as it started to quench, and where the resulting damage to the magnet would be,” he says. As described in detail in one of the new reports, “That test actually told us exactly the physics that was going on, and it told us which models were useful going forward and which to leave by the wayside because they’re not right.”

    Whyte says, “Basically we did the worst thing possible to a coil, on purpose, after we had tested all other aspects of the coil performance. And we found that most of the coil survived with no damage,” while one isolated area sustained some melting. “It’s like a few percent of the volume of the coil that got damaged.” And that led to revisions in the design that are expected to prevent such damage in the actual fusion device magnets, even under the most extreme conditions.

    Hartwig emphasizes that a major reason the team was able to accomplish such a radical new record-setting magnet design, and get it right the very first time and on a breakneck schedule, was thanks to the deep level of knowledge, expertise, and equipment accumulated over decades of operation of the Alcator C-Mod tokamak, the Francis Bitter Magnet Laboratory, and other work carried out at PSFC. “This goes to the heart of the institutional capabilities of a place like this,” he says. “We had the capability, the infrastructure, and the space and the people to do these things under one roof.”

    The collaboration with CFS was also key, he says, with MIT and CFS combining the most powerful aspects of an academic institution and private company to do things together that neither could have done on their own. “For example, one of the major contributions from CFS was leveraging the power of a private company to establish and scale up a supply chain at an unprecedented level and timeline for the most critical material in the project: 300 kilometers (186 miles) of high-temperature superconductor, which was procured with rigorous quality control in under a year, and integrated on schedule into the magnet.”

    The integration of the two teams, those from MIT and those from CFS, was also crucial to the success, he says. “We thought of ourselves as one team, and that made it possible to do what we did.”

    Power when the sun doesn’t shine

    In 2016, at the huge Houston energy conference CERAWeek, MIT materials scientist Yet-Ming Chiang found himself talking to a Tesla executive about a thorny problem: how to store the output of solar panels and wind turbines for long durations.        

    Chiang, the Kyocera Professor of Materials Science and Engineering, and Mateo Jaramillo, a vice president at Tesla, knew that utilities lacked a cost-effective way to store renewable energy to cover peak levels of demand and to bridge the gaps during windless and cloudy days. They also knew that the scarcity of raw materials used in conventional energy storage devices needed to be addressed if renewables were ever going to displace fossil fuels on the grid at scale.

    Energy storage technologies can facilitate access to renewable energy sources, boost the stability and reliability of power grids, and ultimately accelerate grid decarbonization. The global market for these systems — essentially large batteries — is expected to grow tremendously in the coming years. A study by the nonprofit LDES (Long Duration Energy Storage) Council pegs the long-duration energy storage market at between 80 and 140 terawatt-hours by 2040. “That’s a really big number,” Chiang notes. “Every 10 people on the planet will need access to the equivalent of one EV [electric vehicle] battery to support their energy needs.”
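
    Chiang's rule of thumb checks out on the back of an envelope. The pack size below is an assumed round number (about 100 kWh, typical of a large EV), not a figure from the study:

```python
# Back-of-envelope check of "one EV battery per 10 people".
# The EV pack size is an assumed round number, not from the LDES study.

storage_twh = (80 + 140) / 2        # midpoint of the 2040 projection
people = 8e9                        # rough global population
ev_pack_kwh = 100                   # assumed typical large EV pack

kwh_per_person = storage_twh * 1e9 / people   # 1 TWh = 1e9 kWh
packs_per_10_people = 10 * kwh_per_person / ev_pack_kwh
print(kwh_per_person, packs_per_10_people)  # 13.75 1.375
```

    Roughly 1.4 packs per 10 people at the midpoint projection, so "one EV battery per 10 people" is the right order of magnitude.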

    In 2017, one year after they met in Houston, Chiang and Jaramillo joined forces to co-found Form Energy in Somerville, Massachusetts, with MIT graduates Marco Ferrara SM ’06, PhD ’08 and William Woodford PhD ’13, and energy storage veteran Ted Wiley.

    “There is a burgeoning market for electrical energy storage because we want to achieve decarbonization as fast and as cost-effectively as possible,” says Ferrara, Form’s senior vice president in charge of software and analytics.

    Investors agreed. Over the next six years, Form Energy would raise more than $800 million in venture capital.

    Bridging gaps

The simplest battery consists of an anode, a cathode, and an electrolyte. During discharge, ions move through the electrolyte while electrons flow through the external circuit from the negative anode to the positive cathode. During charge, an external voltage reverses the process, driving the electrons back to where they started. Materials used for the anode, cathode, and electrolyte determine the battery’s weight, power, and cost “entitlement,” which is the total cost at the component level.

    During the 1980s and 1990s, the use of lithium revolutionized batteries, making them smaller, lighter, and able to hold a charge for longer. The storage devices Form Energy has devised are rechargeable batteries based on iron, which has several advantages over lithium. A big one is cost.

    Chiang once declared to the MIT Club of Northern California, “I love lithium-ion.” Two of the four MIT spinoffs Chiang founded center on innovative lithium-ion batteries. But at hundreds of dollars a kilowatt-hour (kWh) and with a storage capacity typically measured in hours, lithium-ion was ill-suited for the use he now had in mind.

The approach Chiang envisioned had to be cost-effective enough to boost the attractiveness of renewables. Making solar and wind energy reliable enough for millions of customers meant storing it long enough to fill the gaps created by extreme weather, grid outages, lulls in the wind, and stretches of cloudy days.

    To be competitive with legacy power plants, Chiang’s method had to come in at around $20 per kilowatt-hour of stored energy — one-tenth the cost of lithium-ion battery storage.

    But how to transition from expensive batteries that store and discharge over a couple of hours to some as-yet-undefined, cheap, longer-duration technology?

    “One big ball of iron”

    That’s where Ferrara comes in. Ferrara has a PhD in nuclear engineering from MIT and a PhD in electrical engineering and computer science from the University of L’Aquila in his native Italy. In 2017, as a research affiliate at the MIT Department of Materials Science and Engineering, he worked with Chiang to model the grid’s need to manage renewables’ intermittency.

    How intermittent depends on where you are. In the United States, for instance, there’s the windy Great Plains; the sun-drenched, relatively low-wind deserts of Arizona, New Mexico, and Nevada; and the often-cloudy Pacific Northwest.

    Ferrara, in collaboration with Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society and her MIT team, modeled four representative locations in the United States and concluded that energy storage with capacity costs below roughly $20/kWh and discharge durations of multiple days would allow a wind-solar mix to provide cost-competitive, firm electricity in resource-abundant locations.
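The $20/kWh threshold matters because for multi-day storage the energy-capacity cost scales with hours of discharge. A toy comparison (not the Trancik-group model; the plant size, duration, and the ~$200/kWh lithium-ion figure are assumed round numbers for illustration) shows why lithium-ion prices are prohibitive at those durations:

```python
# Toy illustration: capital cost of the energy capacity alone, ignoring
# power-conversion and balance-of-plant costs. All figures are assumed
# round numbers, not values from the study.
def storage_capital_cost(power_mw, duration_h, usd_per_kwh):
    """Capital cost of the energy capacity, in millions of USD."""
    energy_kwh = power_mw * 1000 * duration_h
    return energy_kwh * usd_per_kwh / 1e6

# A 100 MW plant sized to bridge a 100-hour (multi-day) gap:
li_ion = storage_capital_cost(100, 100, 200)   # ~$200/kWh lithium-ion
target = storage_capital_cost(100, 100, 20)    # ~$20/kWh target
print(f"lithium-ion: ${li_ion:,.0f}M vs. low-cost target: ${target:,.0f}M")
```

At a 100-hour duration, the tenfold difference in $/kWh translates directly into a tenfold difference in capital cost for the same plant.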

Now that they had cost and duration targets, they turned their attention to materials. At the price point Form Energy was aiming for, lithium was out of the question. Chiang looked at plentiful and cheap sulfur. But a sulfur, sodium, water, and air battery had technical challenges.

    Thomas Edison once used iron as an electrode, and iron-air batteries were first studied in the 1960s. They were too heavy to make good transportation batteries. But this time, Chiang and team were looking at a battery that sat on the ground, so weight didn’t matter. Their priorities were cost and availability.

“Iron is produced, mined, and processed on every continent,” Chiang says. “The Earth is one big ball of iron. We wouldn’t ever have to worry about even the most ambitious projections of how much storage the world might use by mid-century.” If Form ever moves into the residential market, “it’ll be the safest battery you’ve ever parked at your house,” Chiang laughs. “Just iron, air, and water.”

    Scientists call it reversible rusting. While discharging, the battery takes in oxygen and converts iron to rust. Applying an electrical current converts the rusty pellets back to iron, and the battery “breathes out” oxygen as it charges. “In chemical terms, you have iron, and it becomes iron hydroxide,” Chiang says. “That means electrons were extracted. You get those electrons to go through the external circuit, and now you have a battery.”
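The “reversible rusting” Chiang describes can be sketched with the half-reactions commonly written for alkaline iron-air cells. This is a textbook simplification for illustration, not Form Energy’s published chemistry:

```latex
% Discharge (the cell "breathes in" oxygen and the iron rusts):
\begin{align*}
\text{anode:}   &\quad \mathrm{Fe} + 2\,\mathrm{OH^-} \rightarrow \mathrm{Fe(OH)_2} + 2\,e^- \\
\text{cathode:} &\quad \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{H_2O} + 2\,e^- \rightarrow 2\,\mathrm{OH^-} \\
\text{overall:} &\quad \mathrm{Fe} + \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{H_2O} \rightarrow \mathrm{Fe(OH)_2}
\end{align*}
% Charging reverses both half-reactions, converting the hydroxide back
% to iron while the cell "breathes out" oxygen.
```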

    Form Energy’s battery modules are approximately the size of a washer-and-dryer unit. They are stacked in 40-foot containers, and several containers are electrically connected with power conversion systems to build storage plants that can cover several acres.

    The right place at the right time

    The modules don’t look or act like anything utilities have contracted for before.

    That’s one of Form’s key challenges. “There is not widespread knowledge of needing these new tools for decarbonized grids,” Ferrara says. “That’s not the way utilities have typically planned. They’re looking at all the tools in the toolkit that exist today, which may not contemplate a multi-day energy storage asset.”

    Form Energy’s customers are largely traditional power companies seeking to expand their portfolios of renewable electricity. Some are in the process of decommissioning coal plants and shifting to renewables.

    Ferrara’s research pinpointing the need for very low-cost multi-day storage provides key data for power suppliers seeking to determine the most cost-effective way to integrate more renewable energy.

    Using the same modeling techniques, Ferrara and team show potential customers how the technology fits in with their existing system, how it competes with other technologies, and how, in some cases, it can operate synergistically with other storage technologies.

“They may need a portfolio of storage technologies to fully balance renewables on different timescales of intermittency,” he says. But other than the technology developed at Form, “there isn’t much out there, certainly not within the cost entitlement of what we’re bringing to market.” Thanks to Chiang and Jaramillo’s chance encounter in Houston, Form has a several-year lead on other companies working to address this challenge.

    In June 2023, Form Energy closed its biggest deal to date for a single project: Georgia Power’s order for a 15-megawatt/1,500-megawatt-hour system. That order brings Form’s total amount of energy storage under contracts with utility customers to 40 megawatts/4 gigawatt-hours. To meet the demand, Form is building a new commercial-scale battery manufacturing facility in West Virginia.
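The contract figures imply the discharge duration directly, since duration in hours is just energy capacity divided by power rating:

```python
# Discharge duration implied by the contract figures in the article:
# duration (hours) = energy capacity / power rating.
georgia_h = 1500 / 15        # Georgia Power: 1,500 MWh / 15 MW
total_h   = 4000 / 40        # total contracted: 4 GWh / 40 MW
print(georgia_h, total_h)    # both work out to 100 hours -- multi-day storage
```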

    The fact that Form Energy is creating jobs in an area that lost more than 10,000 steel jobs over the past decade is not lost on Chiang. “And these new jobs are in clean tech. It’s super exciting to me personally to be doing something that benefits communities outside of our traditional technology centers.

    “This is the right time for so many reasons,” Chiang says. He says he and his Form Energy co-founders feel “tremendous urgency to get these batteries out into the world.”

This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • in

    With just a little electricity, MIT researchers boost common catalytic reactions

A simple technique that uses small amounts of energy could boost the efficiency of some key chemical processing reactions by up to a factor of 100,000, MIT researchers report. These reactions are at the heart of petrochemical processing, pharmaceutical manufacturing, and many other industrial chemical processes.

    The surprising findings are reported today in the journal Science, in a paper by MIT graduate student Karl Westendorff, professors Yogesh Surendranath and Yuriy Roman-Leshkov, and two others.

    “The results are really striking,” says Surendranath, a professor of chemistry and chemical engineering. Rate increases of that magnitude have been seen before but in a different class of catalytic reactions known as redox half-reactions, which involve the gain or loss of an electron. The dramatically increased rates reported in the new study “have never been observed for reactions that don’t involve oxidation or reduction,” he says.

    The non-redox chemical reactions studied by the MIT team are catalyzed by acids. “If you’re a first-year chemistry student, probably the first type of catalyst you learn about is an acid catalyst,” Surendranath says. There are many hundreds of such acid-catalyzed reactions, “and they’re super important in everything from processing petrochemical feedstocks to making commodity chemicals to doing transformations in pharmaceutical products. The list goes on and on.”

    “These reactions are key to making many products we use daily,” adds Roman-Leshkov, a professor of chemical engineering and chemistry.

    But the people who study redox half-reactions, also known as electrochemical reactions, are part of an entirely different research community than those studying non-redox chemical reactions, known as thermochemical reactions. As a result, even though the technique used in the new study, which involves applying a small external voltage, was well-known in the electrochemical research community, it had not been systematically applied to acid-catalyzed thermochemical reactions.

    People working on thermochemical catalysis, Surendranath says, “usually don’t consider” the role of the electrochemical potential at the catalyst surface, “and they often don’t have good ways of measuring it. And what this study tells us is that relatively small changes, on the order of a few hundred millivolts, can have huge impacts — orders of magnitude changes in the rates of catalyzed reactions at those surfaces.”
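The scale of the effect is plausible on simple electrochemical grounds: if a potential shift of a few hundred millivolts lowers an activation barrier roughly linearly, as in Butler-Volmer-type kinetics, the rate grows exponentially in the shift. The sketch below is an illustrative back-of-envelope estimate under that assumption, not the analysis in the paper:

```python
import math

# Illustrative estimate (not the paper's model): if a potential shift dE
# lowers the activation barrier by alpha*F*dE, the rate scales as
# exp(alpha*F*dE / (R*T)) -- Butler-Volmer-style exponential kinetics.
F = 96485.0      # Faraday constant, C/mol
R = 8.314        # gas constant, J/(mol K)
T = 298.0        # room temperature, K
alpha = 1.0      # assumed transfer coefficient

dE = 0.3         # "a few hundred millivolts," expressed in volts
enhancement = math.exp(alpha * F * dE / (R * T))
print(f"{enhancement:.2e}")
```

With these assumed values the enhancement comes out near 10^5, the same order of magnitude as the rate increases reported in the study.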

    “This overlooked parameter of surface potential is something we should pay a lot of attention to because it can have a really, really outsized effect,” he says. “It changes the paradigm of how we think about catalysis.”

    Chemists traditionally think about surface catalysis based on the chemical binding energy of molecules to active sites on the surface, which influences the amount of energy needed for the reaction, he says. But the new findings show that the electrostatic environment is “equally important in defining the rate of the reaction.”

    The team has already filed a provisional patent application on parts of the process and is working on ways to apply the findings to specific chemical processes. Westendorff says their findings suggest that “we should design and develop different types of reactors to take advantage of this sort of strategy. And we’re working right now on scaling up these systems.”

    While their experiments so far were done with a two-dimensional planar electrode, most industrial reactions are run in three-dimensional vessels filled with powders. Catalysts are distributed through those powders, providing a lot more surface area for the reactions to take place. “We’re looking at how catalysis is currently done in industry and how we can design systems that take advantage of the already existing infrastructure,” Westendorff says.

    Surendranath adds that these new findings “raise tantalizing possibilities: Is this a more general phenomenon? Does electrochemical potential play a key role in other reaction classes as well? In our mind, this reshapes how we think about designing catalysts and promoting their reactivity.”

    Roman-Leshkov adds that “traditionally people who work in thermochemical catalysis would not associate these reactions with electrochemical processes at all. However, introducing this perspective to the community will redefine how we can integrate electrochemical characteristics into thermochemical catalysis. It will have a big impact on the community in general.”

    While there has typically been little interaction between electrochemical and thermochemical catalysis researchers, Surendranath says, “this study shows the community that there’s really a blurring of the line between the two, and that there is a huge opportunity in cross-fertilization between these two communities.”

Westendorff adds that to make it work, “you have to design a system that’s pretty unconventional to either community to isolate this effect.” And that helps explain why such a dramatic effect had never been seen before. He notes that even their paper’s editor asked them why this effect hadn’t been reported before. The answer has to do with “how disparate those two ideologies were before this,” he says. “It’s not just that people don’t really talk to each other. There are deep methodological differences between how the two communities conduct experiments. And this work is really, we think, a great step toward bridging the two.”

    In practice, the findings could lead to far more efficient production of a wide variety of chemical materials, the team says. “You get orders of magnitude changes in rate with very little energy input,” Surendranath says. “That’s what’s amazing about it.”

    The findings, he says, “build a more holistic picture of how catalytic reactions at interfaces work, irrespective of whether you’re going to bin them into the category of electrochemical reactions or thermochemical reactions.” He adds that “it’s rare that you find something that could really revise our foundational understanding of surface catalytic reactions in general. We’re very excited.”

“This research is of the highest quality,” says Costas Vayenas, a professor of engineering at the University of Patras, in Greece, who was not associated with the study. The work “is very promising for practical applications, particularly since it extends previous related work in redox catalytic systems,” he says.

The team included MIT postdoc Max Hulsey PhD ’22 and graduate student Thejas Wesley PhD ’23, and was supported by the Air Force Office of Scientific Research and the U.S. Department of Energy Basic Energy Sciences.

  • in

    Anushree Chaudhuri: Involving local communities in renewable energy planning

    Anushree Chaudhuri has a history of making bold decisions. In fifth grade, she biked across her home state of California with little prior experience. In her first year at MIT, she advocated for student recommendations in the preparation of the Institute’s Climate Action Plan for the Decade. And recently, she led a field research project throughout California to document the perspectives of rural and Indigenous populations affected by climate change and clean energy projects.

    “It doesn’t matter who you are or how young you are, you can get involved with something and inspire others to do so,” the senior says.

    Initially a materials science and engineering major, Chaudhuri was quickly drawn to environmental policy issues and later decided to double-major in urban studies and planning and in economics. Chaudhuri will receive her bachelor’s degrees this month, followed by a master’s degree in city planning in the spring.

    The importance of community engagement in policymaking has become one of Chaudhuri’s core interests. A 2024 Marshall Scholar, she is headed to the U.K. next year to pursue a PhD related to environment and development. She hopes to build on her work in California and continue to bring attention to impacts that energy transitions can have on local communities, which tend to be rural and low-income. Addressing resistance to these projects can be challenging, but “ignoring it leaves these communities in the dust and widens the urban-rural divide,” she says.

    Silliness and sustainability 

    Chaudhuri classifies her many activities into two groups: those that help her unwind, like her living community, Conner Two, and those that require intensive deliberation, like her sustainability-related organizing.

    Conner Two, in the Burton-Conner residence hall, is where Chaudhuri feels most at home on campus. She describes the group’s activities as “silly” and emphasizes their love of jokes, even in the floor’s nickname, “the British Floor,” which is intentionally absurd, as the residents are rarely British.

    Chaudhuri’s first involvement with sustainability issues on campus was during the preparation of MIT’s Fast Forward Climate Action Plan in the 2020-2021 academic year. As a co-lead of one of several student working groups, she helped organize key discussions between the administration, climate experts, and student government to push for six main goals in the plan, including an ethical investing framework. Being involved with a significant student movement so early on in her undergraduate career was a learning opportunity for Chaudhuri and impressed upon her that young people can play critical roles in making far-reaching structural changes.

    The experience also made her realize how many organizations on campus shared similar goals even if their perspectives varied, and she saw the potential for more synergy among them.

    Chaudhuri went on to co-lead the Student Sustainability Coalition to help build community across the sustainability-related organizations on campus and create a centralized system that would make it easier for outsiders and group members to access information and work together. Through the coalition, students have collaborated on efforts including campus events, and off-campus matters such as the Cambridge Green New Deal hearings.

    Another benefit to such a network: It creates a support system that recognizes even small-scale victories. “Community is so important to avoid burnout when you’re working on something that can be very frustrating and an uphill battle like negotiating with leadership or seeking policy changes,” Chaudhuri says.

    Fieldwork

    For the past year, Chaudhuri has been doing independent research in California with the support of several advisory organizations to host conversations with groups affected by renewable energy projects, which, as she has documented, are often concentrated in rural, low-income, and Indigenous communities. The introduction of renewable energy facilities, such as wind and solar farms, can perpetuate existing inequities if they ignore serious community concerns, Chaudhuri says.

    As state or federal policymakers and private developers carry out the permitting process for these projects, “they can repeat histories of extraction, sometimes infringing on the rights of a local or Tribal government to decide what happens with their land,” she says.

In her site visits, she is documenting community opposition to controversial solar and wind proposals and collecting oral histories. Doing fieldwork for the first time as an outsider was difficult for Chaudhuri, as she dealt with distrust, unpredictability, and the need to stay completely flexible for her sources. “A lot of it was just being willing to drop everything and go and be a little bit adventurous and take some risks,” she says.

    Role models and reading

    Chaudhuri is quick to credit many of the role models and other formative influences in her life.

    After working on the Climate Action Plan, Chaudhuri attended a public narrative workshop at Harvard University led by Marshall Ganz, a grassroots community organizer who worked with Cesar Chavez and on the 2008 Obama presidential campaign. “That was a big inspiration and kind of shaped how I viewed leadership in, for example, campus advocacy, but also in other projects and internships.”

Reading has also influenced Chaudhuri’s perspective on community organizing: “After the Climate Action Plan campaign, I realized that a lot of what made the campaign successful or not could track well with organizing and social change theories, and histories of social movements. So, that was a good experience for me, being able to critically reflect on it and tie it into these other things I was learning about.”

Since beginning her studies at MIT, Chaudhuri has become especially interested in social theory and political philosophy, starting with ancient Western and Eastern ethics and continuing through the 20th- and 21st-century philosophers who inspire her. Chaudhuri cites Amartya Sen and Olúfẹ́mi Táíwò as particularly influential. “I think [they’ve] provided a really compelling framework to guide a lot of my own values,” she says.

Another role model is Brenda Mallory, the current chair of the U.S. Council on Environmental Quality, whom Chaudhuri was grateful to meet at the United Nations COP27 Climate Conference. As an intern at the U.S. Department of Energy, Chaudhuri worked within a team on implementing the federal administration’s Justice40 initiative, which commits 40 percent of federal climate investments to disadvantaged communities. This initiative was largely directed by Mallory, and Chaudhuri admires how Mallory was able to make an impact at different levels of government through her leadership. Chaudhuri hopes to follow in Mallory’s footsteps someday, as a public official committed to just policies and programs.

“Good leaders are those who empower good leadership in others,” Chaudhuri says.

  • in

    Local journalism is a critical “gate” to engage Americans on climate change

    Last year, Pew Research Center data revealed that only 37 percent of Americans said addressing climate change should be a top priority for the president and Congress. Furthermore, climate change was ranked 17th out of 21 national issues included in a Pew survey. 

    But in reality, it’s not that Americans don’t care about climate change, says celebrated climate scientist and communicator MIT Professor Katharine Hayhoe. It’s that they don’t know that they already do. 

    To get Americans to care about climate change, she adds, it’s imperative to guide them to their gate. At first, it might not be clear where that gate is. But it exists. 

    That message was threaded through the Connecting with Americans on Climate Change webinar last fall, which featured a discussion with Hayhoe and the five journalists who made up the 2023 cohort of the MIT Environmental Solutions Journalism Fellowship. Hayhoe referred to a “gate” as a conversational entry point about climate impacts and solutions. The catch? It doesn’t have to be climate-specific. Instead, it can focus on the things that people already hold close to their heart.

    “If you show people … whether it’s a military veteran or a parent or a fiscal conservative or somebody who is in a rural farming area or somebody who loves kayaking or birds or who just loves their kids … how they’re the perfect person to care [about climate change], then it actually enhances their identity to advocate for and adopt climate solutions,” said Hayhoe. “It makes them a better parent, a more frugal fiscal conservative, somebody who’s more invested in the security of their country. It actually enhances who they already are instead of trying to turn them into someone else.”

    The MIT Environmental Solutions Journalism Fellowship provides financial and technical support to journalists dedicated to connecting local stories to broader climate contexts, especially in parts of the country where climate change is disputed or underreported. 

    Climate journalism is typically limited to larger national news outlets that have the resources to employ dedicated climate reporters. And since many local papers are already struggling — with the country on track to lose a third of its papers by the end of next year, leaving over 50 percent of counties in the United States with just one or no local news outlets — local climate beats can be neglected. This makes the work executed by the ESI’s fellows all the more imperative. Because for many Americans, the relevance of these stories to their own community is their gate to climate action. 

    “This is the only climate journalism fellowship that focuses exclusively on local storytelling,” says Laur Hesse Fisher, program director at MIT ESI and founder of the fellowship. “It’s a model for engaging some of the hardest audiences to reach: people who don’t think they care much about climate change. These talented journalists tell powerful, impactful stories that resonate directly with these audiences.”

    From March to June, the second cohort of ESI Journalism Fellows pursued local, high-impact climate reporting in Montana, Arizona, Maine, West Virginia, and Kentucky. 

    Collectively, their 26 stories had over 70,000 direct visits on their host outlets’ websites as of August 2023, gaining hundreds of responses from local voters, lawmakers, and citizen groups. Even though they targeted local audiences, they also had national appeal, as they were republished by 46 outlets — including Vox, Grist, WNYC, WBUR, the NPR homepage, and three separate stories on NPR’s “Here & Now” program, which is broadcast by 45 additional partner radio stations across the country — with a collective reach in the hundreds of thousands. 

    Micah Drew published an eight-part series in The Flathead Beacon titled, “Montana’s Climate Change Lawsuit.” It followed a landmark case of 16 young people in Montana suing the state for violating their right to a “clean and healthful environment.” Of the plaintiffs, Drew said, “They were able to articulate very clearly what they’ve seen, what they’ve lived through in a pretty short amount of life. Some of them talked about wildfires — which we have a lot of here in Montana — and [how] wildfire smoke has canceled soccer games at the high school level. It cancels cross-country practice; it cancels sporting events. I mean, that’s a whole section of your livelihood when you’re that young that’s now being affected.”

    Joan Meiners is a climate news reporter for the Arizona Republic. Her five-part series was situated at the intersection of Phoenix’s extreme heat and housing crises. “I found that we are building three times more sprawling, single-family detached homes … as the number of apartment building units,” she says. “And with an affordability crisis, with a climate crisis, we really need to rethink that. The good news, which I also found through research for this series … is that Arizona doesn’t have a statewide building code, so each municipality decides on what they’re going to require builders to follow … and there’s a lot that different municipalities can do just by showing up to their city council meetings [and] revising the building codes.”

    For The Maine Monitor, freelance journalist Annie Ropeik generated a four-part series, called “Hooked on Heating Oil,” on how Maine came to rely on oil for home heating more than any other state. When asked about solutions, Ropeik says, “Access to fossil fuel alternatives was really the central equity issue that I was looking at in my project, beyond just, ‘Maine is really relying on heating oil, that obviously has climate impacts, it’s really expensive.’ What does that mean for people in different financial situations, and what does that access to solutions look like for those different communities? What are the barriers there and how can we address those?”

    Energy and environment reporter Mike Tony created a four-part series in The Charleston Gazette-Mail on West Virginia’s flood vulnerabilities and the state’s lack of climate action. On connecting with audiences, Tony says, “The idea was to pick a topic like flooding that really affects the whole state, and from there, use that as a sort of an inroad to collect perspectives from West Virginians on how it’s affecting them. And then use that as a springboard to scrutinizing the climate politics that are precluding more aggressive action.”

    Finally, Ryan Van Velzer, Louisville Public Media’s energy and environment reporter, covered the decline of Kentucky’s fossil fuel industry and offered solutions for a sustainable future in a four-part series titled, “Coal’s Dying Light.” For him, it was “really difficult to convince people that climate change is real when the economy is fundamentally intertwined with fossil fuels. To a lot of these people, climate change, and the changes necessary to mitigate climate change, can cause real and perceived economic harm to these communities.” 

    With these projects in mind, someone’s gate to caring about climate change is probably nearby — in their own home, community, or greater region. 

    It’s likely closer than they think. 

To learn more about the next fellowship cohort — which will support projects that report on climate solutions being implemented locally and how they reduce emissions while simultaneously solving pertinent local issues — sign up for the MIT Environmental Solutions Initiative newsletter. Questions about the fellowship can be directed to Laur Hesse Fisher at climate@mit.edu.

  • in

    Reflecting on COP28 — and humanity’s progress toward meeting global climate goals

    With 85,000 delegates, the 2023 United Nations climate change conference, known as COP28, was the largest U.N. climate conference in history. It was held at the end of the hottest year in recorded history. And after 12 days of negotiations, from Nov. 30 to Dec. 12, it produced a decision that included, for the first time, language calling for “transitioning away from fossil fuels,” though it stopped short of calling for their complete phase-out.

    U.N. Climate Change Executive Secretary Simon Stiell said the outcome in Dubai, United Arab Emirates, COP28’s host city, signaled “the beginning of the end” of the fossil fuel era. 

COP stands for “conference of the parties” to the U.N. Framework Convention on Climate Change, held this year for the 28th time. Throughout the negotiations — and the immense conference and expo that take place alongside them — a delegation of faculty, students, and staff from MIT was in Dubai to observe, present new climate technologies, speak on panels, network, and conduct research.

    On Jan. 17, the MIT Center for International Studies (CIS) hosted a panel discussion with MIT delegates who shared their reflections on the experience. Asking what’s going on at COP is “like saying, ‘What’s going on in the city of Boston today?’” quipped Evan Lieberman, the Total Professor of Political Science and Contemporary Africa, director of CIS, and faculty director of MIT International Science and Technology Initiatives (MISTI). “The value added that all of us can provide for the MIT community is [to share] what we saw firsthand and how we experienced it.” 

    Phase-out, phase down, transition away?

    In the first week of COP28, over 100 countries issued a joint statement that included a call for “the global phase out of unabated fossil fuels.” The question of whether the COP28 decision — dubbed the “UAE Consensus” — would include this phase-out language animated much of the discussion in the days and weeks leading up to COP28. 

    Ultimately, the decision called for “transitioning away from fossil fuels in energy systems, in a just, orderly and equitable manner.” It also called for “accelerating efforts towards the phase down of unabated coal power,” referring to the combustion of coal without efforts to capture and store its emissions.

In Dubai to observe the negotiations, graduate student Alessandra Fabbri said she was “confronted” by the degree to which semantic differences could carry significant ramifications — for example, when negotiators referred to a “just transition,” or to “developed vs. developing nations” — particularly where evolution in recent scholarship has produced more nuanced understandings of the terms.

    COP28 also marked the conclusion of the first global stocktake, a core component of the 2015 Paris Agreement. Conducted every five years to assess the world’s progress in responding to climate change, the stocktake is intended as a basis for encouraging countries to strengthen their climate goals over time, a process often referred to as the Paris Agreement’s “ratchet mechanism.” 

    The technical report of the first global stocktake, published in September 2023, found that while the world has taken actions that have reduced forecasts of future warming, they are not sufficient to meet the goals of the Paris Agreement, which aims to limit global average temperature increase to “well below” 2 degrees Celsius, while pursuing efforts to limit the increase to 1.5 degrees above pre-industrial levels.

    “Despite minor, punctual advancements in climate action, parties are far from being on track to meet the long-term goals of the Paris Agreement,” said Fabbri, a graduate student in the School of Architecture and Planning and a fellow in MIT’s Leventhal Center for Advanced Urbanism. Citing a number of persistent challenges, including some parties’ fears that rapid economic transition may create or exacerbate vulnerabilities, she added, “There is a noted lack of accountability among certain countries in adhering to their commitments and responsibilities under international climate agreements.” 

    Climate and trade

    COP28 was the first climate summit to formally acknowledge the importance of international trade by featuring an official “Trade Day” on Dec. 4. Internationally traded goods account for about a quarter of global greenhouse gas emissions, raising complex questions of accountability and concerns about offshoring of industrial manufacturing, a phenomenon known as “emissions leakage.” Addressing the nexus of climate and trade is therefore considered essential for successful decarbonization, and a growing number of countries are leveraging trade policies — such as carbon fees applied to imported goods — to secure climate benefits. 

    Members of the MIT delegation participated in several related activities, sharing research and informing decision-makers. Catherine Wolfram, professor of applied economics in the MIT Sloan School of Management, and Michael Mehling, deputy director of the MIT Center for Energy and Environmental Policy Research (CEEPR), presented options for international cooperation on such trade policies at side events, including ones hosted by the World Trade Organization and European Parliament. 

    “While COPs are often criticized for highlighting statements that don’t have any bite, they are also tremendous opportunities to get people from around the world who care about climate and think deeply about these issues in one place,” said Wolfram.

    Climate and health

    For the first time in the conference’s nearly 30-year history, COP28 included a thematic “Health Day” that featured talks on the relationship between climate and health. Researchers from MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) have been testing policy solutions in this area for years through research funds such as the King Climate Action Initiative (K-CAI). 

    “An important but often-neglected area where climate action can lead to improved health is combating air pollution,” said Andre Zollinger, K-CAI’s senior policy manager. “COP28’s announcement on reducing methane leaks is an important step because action in this area could translate to relatively quick, cost-effective ways to curb climate change while improving air quality, especially for people living near these industrial sites.” K-CAI has an ongoing project in Colorado investigating the use of machine learning to predict leaks and improve the framework for regulating industrial methane emissions, Zollinger noted.

    This was J-PAL’s third time at COP, which Zollinger said typically presented an opportunity for researchers to share new findings and analysis with government partners, nongovernmental organizations, and companies. This year, he said, “We have [also] been working with negotiators in the [Middle East and North Africa] region in the months preceding COP to plug them into the latest evidence on water conservation, on energy access, on different challenging areas of adaptation that could be useful for them during the conference.”

    Sharing knowledge, learning from others

    MIT student Runako Gentles described COP28 as a “springboard” to greater impact. A senior from Jamaica studying civil and environmental engineering, Gentles said it was exciting to introduce himself as an MIT undergraduate to U.N. employees and Jamaican delegates in Dubai. “There’s a lot of talk on mitigation and cutting carbon emissions, but there needs to be much more going into climate adaptation, especially for small-island developing states like those in the Caribbean,” he said. “One of the things I can do, while I still try to finish my degree, is communicate — get the story out there to raise awareness.”

    At an official side event at COP28 hosted by MIT, Pennsylvania State University, and the American Geophysical Union, Maria T. Zuber, MIT’s vice president for research, stressed the importance of opportunities to share knowledge and learn from people around the world.

    “The reason this two-way learning is so important for us is simple: The ideas we come up with in a university setting, whether they’re technological or policy or any other kind of innovations — they only matter in the practical world if they can be put to good use and scaled up,” said Zuber. “And the only way we can know that our work has practical relevance for addressing climate is by working hand-in-hand with communities, industries, governments, and others.”

    Marcela Angel, research program director at the Environmental Solutions Initiative, and Sergey Paltsev, deputy director of MIT’s Joint Program on the Science and Policy of Global Change, also spoke at the event, which was moderated by Bethany Patten, director of policy and engagement for sustainability at the MIT Sloan School of Management.

    MIT researchers map the energy transition’s effects on jobs

    A new analysis by MIT researchers shows the places in the U.S. where jobs are most linked to fossil fuels. The research could help policymakers better identify and support areas affected over time by a switch to renewable energy.

    While many of the places most potentially affected have intensive drilling and mining operations, the study also measures how areas reliant on other industries, such as heavy manufacturing, could experience changes. The research examines the entire U.S. on a county-by-county level.

    “Our result is that you see a higher carbon footprint for jobs in places that drill for oil, mine for coal, and drill for natural gas, which is evident in our maps,” says Christopher Knittel, an economist at the MIT Sloan School of Management and co-author of a new paper detailing the findings. “But you also see high carbon footprints in areas where we do a lot of manufacturing, which is more likely to be missed by policymakers when examining how the transition to a zero-carbon economy will affect jobs.”

    So, while certain U.S. areas known for fossil-fuel production would certainly be affected — including west Texas, the Powder River Basin of Montana and Wyoming, parts of Appalachia, and more — a variety of industrial areas in the Great Plains and Midwest could see employment evolve as well.

    The paper, “Assessing the distribution of employment vulnerability to the energy transition using employment carbon footprints,” is published this week in Proceedings of the National Academy of Sciences. The authors are Kailin Graham, a master’s student in MIT’s Technology and Policy Program and graduate research assistant at MIT’s Center for Energy and Environmental Policy Research; and Knittel, who is the George P. Shultz Professor at MIT Sloan.

    “Our results are unique in that we cover close to the entire U.S. economy and consider the impacts on places that produce fossil fuels but also on places that consume a lot of coal, oil, or natural gas for energy,” says Graham. “This approach gives us a much more complete picture of where communities might be affected and how support should be targeted.”

    Adjusting the targets

    The current study stems from prior research Knittel has conducted, measuring carbon footprints at the household level across the U.S. The new project takes a conceptually related approach, but for jobs in a given county. To conduct the study, the researchers used several data sources measuring energy consumption by businesses, as well as detailed employment data from the U.S. Census Bureau.

    The study takes advantage of changes in energy supply and demand over time to estimate how strongly a full range of jobs, not just those in energy production, are linked to use of fossil fuels. The sectors accounted for in the study comprise 86 percent of U.S. employment, and 94 percent of U.S. emissions apart from the transportation sector.

    The Inflation Reduction Act, passed by Congress and signed into law by President Joe Biden in August 2022, is the first federal legislation seeking to provide an economic buffer for places affected by the transition away from fossil fuels. The act provides expanded tax credits for economic projects located in “energy community” areas — defined largely as places with high fossil-fuel industry employment or tax revenue and with high unemployment. Areas with recently closed or downsized coal mines or power plants also qualify.

    Graham and Knittel measured the “employment carbon footprint” (ECF) of each county in the U.S. and compared it against eligibility for federal assistance. Out of more than 3,000 counties, the researchers found that 124 are at the 90th percentile or above in ECF terms while not qualifying for Inflation Reduction Act assistance. Another 79 counties are eligible for Inflation Reduction Act assistance despite being in the bottom 20 percent nationally in ECF terms.

    Those may not seem like colossal differences, but the findings identify real communities potentially being left out of federal policy, and highlight the need for new targeting of such programs. The research by Graham and Knittel offers a precise way to assess the industrial composition of U.S. counties, potentially helping to target economic assistance programs.
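The targeting logic described above can be sketched in a few lines. The following is a minimal illustration with entirely hypothetical county figures (the study itself uses Census Bureau employment data and business energy-consumption sources): rank each county's employment carbon footprint within the national distribution, then flag high-footprint counties that miss assistance and low-footprint counties that qualify anyway.

```python
# Hypothetical county-level figures for illustration only:
# (county name, employment carbon footprint, IRA "energy community" eligibility)
counties = [
    ("A", 12.0, False),
    ("B", 3.5, True),
    ("C", 9.8, False),
    ("D", 1.2, True),
    ("E", 15.4, False),
]

# Percentile rank of each county's employment carbon footprint (ECF)
# within the national distribution.
ecfs = sorted(ecf for _, ecf, _ in counties)

def pctile(ecf):
    return 100 * sum(x <= ecf for x in ecfs) / len(ecfs)

# High-footprint counties that nonetheless miss federal assistance...
missed = [name for name, ecf, eligible in counties
          if pctile(ecf) >= 90 and not eligible]

# ...and low-footprint counties that qualify anyway.
low_but_eligible = [name for name, ecf, eligible in counties
                    if pctile(ecf) <= 20 and eligible]

print(missed)            # ['E']
print(low_but_eligible)  # ['D']
```

The real analysis applies this kind of comparison to more than 3,000 counties, which is how the 124 and 79 county counts above arise.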

    “The impact on jobs of the energy transition is not just going to be where oil and natural gas are drilled, it’s going to be all the way up and down the value chain of things we make in the U.S.,” Knittel says. “That’s a more extensive, but still focused, problem.”

    Graham adds: “It’s important that policymakers understand these economy-wide employment impacts. Our aim in providing these data is to help policymakers incorporate these considerations into future policies like the Inflation Reduction Act.”

    Adapting policy

    Graham and Knittel are still evaluating what the best policy measures might be to help places in the U.S. adapt to a move away from fossil fuels.

    “What we haven’t necessarily closed the loop on is the right way to build a policy that takes account of these factors,” Knittel says. “The Inflation Reduction Act is the first policy to think about a [fair] energy transition because it has these subsidies for energy-dependent counties.” But given enough political backing, there may be room for additional policy measures in this area.

    One thing the study’s data make clear is that U.S. counties face a wide variety of situations, so there may be no one-size-fits-all approach to encouraging economic growth while making a switch to clean energy. What suits west Texas or Wyoming best may not work for more manufacturing-based local economies. And even among primary energy-production areas, there may be distinctions between those drilling for oil or natural gas and those producing coal, based on the particular economics of those fuels. The study includes in-depth data about each county, characterizing its industrial portfolio, which may help tailor approaches to a range of economic situations.

    “The next step is using this data more specifically to design policies to protect these communities,” Knittel says.