More stories

  • Cutting carbon emissions on the US power grid

    To help curb climate change, the United States is working to reduce carbon emissions from all sectors of the energy economy. Much of the current effort involves electrification — switching to electric cars for transportation, electric heat pumps for home heating, and so on. But in the United States, the electric power sector already generates about a quarter of all carbon emissions. “Unless we decarbonize our electric power grids, we’ll just be shifting carbon emissions from one source to another,” says Amanda Farnsworth, a PhD candidate in chemical engineering and research assistant at the MIT Energy Initiative (MITEI).

    But decarbonizing the nation’s electric power grids will be challenging. The availability of renewable energy resources such as solar and wind varies in different regions of the country. Likewise, patterns of energy demand differ from region to region. As a result, the least-cost pathway to a decarbonized grid will differ from one region to another.

    Over the past two years, Farnsworth and Emre Gençer, a principal research scientist at MITEI, developed a power system model that would allow them to investigate the importance of regional differences — and would enable experts and laypeople alike to explore their own regions and make informed decisions about the best way to decarbonize. “With this modeling capability you can really understand regional resources and patterns of demand, and use them to do a ‘bespoke’ analysis of the least-cost approach to decarbonizing the grid in your particular region,” says Gençer.

    To demonstrate the model’s capabilities, Gençer and Farnsworth performed a series of case studies. Their analyses confirmed that strategies must be designed for specific regions and that all the costs and carbon emissions associated with manufacturing and installing solar and wind generators must be included for accurate accounting. But the analyses also yielded some unexpected insights, including a correlation between a region’s wind energy and the ease of decarbonizing, and the important role of nuclear power in decarbonizing the California grid.

    A novel model

    For many decades, researchers have been developing “capacity expansion models” to help electric utility planners tackle the problem of designing power grids that are efficient, reliable, and low-cost. More recently, many of those models also factor in the goal of reducing or eliminating carbon emissions. While those models can provide interesting insights relating to decarbonization, Gençer and Farnsworth believe they leave some gaps that need to be addressed.

    For example, most focus on conditions and needs in a single U.S. region without highlighting the characteristics that make their chosen area distinctive. Hardly any consider the carbon emitted in fabricating and installing such “zero-carbon” technologies as wind turbines and solar panels. And finally, most of the models are challenging to use. Even experts in the field must search out and assemble various complex datasets in order to perform a study of interest.

    Gençer and Farnsworth’s capacity expansion model — called Ideal Grid, or IG — addresses those and other shortcomings. IG is built within the framework of MITEI’s Sustainable Energy System Analysis Modeling Environment (SESAME), an energy system modeling platform that Gençer and his colleagues at MITEI have been developing since 2017. SESAME models the levels of greenhouse gas emissions from multiple, interacting energy sectors in future scenarios.

    Importantly, SESAME includes both techno-economic analyses and life-cycle assessments of various electricity generation and storage technologies. It thus considers costs and emissions incurred at each stage of the life cycle (manufacture, installation, operation, and retirement) for all generators. Most capacity expansion models only account for emissions from operation of fossil fuel-powered generators. As Farnsworth notes, “While this is a good approximation for our current grid, emissions from the full life cycle of all generating technologies become non-negligible as we transition to a highly renewable grid.”

    Through its connection with SESAME, the IG model has access to data on costs and emissions associated with many technologies critical to power grid operation. To explore regional differences in the cost-optimized decarbonization strategies, the IG model also includes conditions within each region, notably details on demand profiles and resource availability.

    In one recent study, Gençer and Farnsworth selected nine of the standard North American Electric Reliability Corporation (NERC) regions. For each region, they incorporated hourly electricity demand into the IG model. Farnsworth also gathered meteorological data for the nine U.S. regions for seven years — 2007 to 2013 — and calculated hourly power output profiles for the renewable energy sources, including solar and wind, taking into account the geography-limited maximum capacity of each technology.

    The availability of wind and solar resources differs widely from region to region. To permit a quick comparison, the researchers use a measure called “annual capacity factor,” which is the ratio between the electricity produced by a generating unit in a year and the electricity that could have been produced if that unit operated continuously at full power for that year. Values for the capacity factors in the nine U.S. regions vary between 20 percent and 30 percent for solar power and between 25 percent and 45 percent for wind.
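The annual capacity factor described above is straightforward to compute. A minimal sketch in Python, using hypothetical numbers rather than values from the study:

```python
HOURS_PER_YEAR = 8760

def annual_capacity_factor(energy_mwh: float, nameplate_mw: float) -> float:
    """Ratio of a unit's actual annual output to the output it would have
    produced running continuously at full power for the whole year."""
    return energy_mwh / (nameplate_mw * HOURS_PER_YEAR)

# A hypothetical 100 MW wind farm that produced 306,600 MWh in a year:
cf = annual_capacity_factor(306_600, 100)
print(f"{cf:.0%}")  # 35%
```

A value of 35 percent would sit comfortably within the 25 to 45 percent range the researchers report for wind.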

    Calculating optimized grids for different regions

    For their first case study, Gençer and Farnsworth used the IG model to calculate cost-optimized regional grids to meet defined caps on carbon dioxide (CO2) emissions. The analyses were based on cost and emissions data for 10 technologies: nuclear, wind, solar, three types of natural gas, three types of coal, and energy storage using lithium-ion batteries. Hydroelectric was not considered in this study because there was no comprehensive study outlining potential expansion sites with their respective costs and expected power output levels.

    To make region-to-region comparisons easy, the researchers used several simplifying assumptions. Their focus was on electricity generation, so the model calculations assume the same transmission and distribution costs and efficiencies for all regions. Also, the calculations did not consider the generator fleet currently in place. The goal was to investigate what happens if each region were to start from scratch and generate an “ideal” grid.

    To begin, Gençer and Farnsworth calculated the most economic combination of technologies for each region if it limits its total carbon emissions to 100, 50, and 25 grams of CO2 per kilowatt-hour (kWh) generated. For context, the current U.S. average emissions intensity is 386 grams of CO2 per kWh.

    Given the wide variation in regional demand, the researchers needed to use a new metric to normalize their results and permit a one-to-one comparison between regions. Accordingly, the model calculates the required generating capacity divided by the average demand for each region. The required capacity accounts for both the variation in demand and the inability of generating systems — particularly solar and wind — to operate at full capacity all of the time.
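The normalization the researchers use is simply installed capacity divided by average demand. A small sketch with hypothetical numbers (not values from the study):

```python
def normalized_capacity(installed_capacity_mw: float, average_demand_mw: float) -> float:
    """Required generating capacity divided by average demand for a region.
    Values well above 1 reflect capacity built to cover demand peaks and the
    inability of solar and wind to run at full power all of the time."""
    return installed_capacity_mw / average_demand_mw

# Hypothetical region: 45 GW of installed capacity serving 15 GW average demand
print(normalized_capacity(45_000, 15_000))  # 3.0
```

Because the metric is dimensionless, a small region and a large one can be compared directly even though their absolute capacity requirements differ by orders of magnitude.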

    The analysis was based on regional demand data for 2021 — the most recent data available. And for each region, the model calculated the cost-optimized power grid seven times, using weather data from seven years. The discussion below focuses on mean values for cost and total installed capacity, and on combined totals for coal and for natural gas, although the analysis considered three separate technologies for each fuel.

    The results of the analyses confirm that there’s a wide variation in the cost-optimized system from one region to another. Most notable is that some regions require a lot of energy storage while others don’t require any at all. The availability of wind resources turns out to play an important role, while the use of nuclear is limited: the carbon intensity of nuclear (including uranium mining and transportation) is lower than that of either solar or wind, but nuclear is the most expensive technology option, so it’s added only when necessary. Finally, the change in the CO2 emissions cap brings some interesting responses.

    Under the most lenient limit on emissions — 100 grams of CO2 per kWh — there’s no coal in the mix anywhere. It’s the first to go, in general being replaced by the lower-carbon-emitting natural gas. Texas, Central, and North Central — the regions with the most wind — don’t need energy storage, while the other six regions do. The regions with the least wind — California and the Southwest — have the highest energy storage requirements. Unlike the other regions modeled, California begins installing nuclear, even at the most lenient limit.

    As the model plays out, under the moderate cap — 50 grams of CO2 per kWh — most regions bring in nuclear power. California and the Southeast — regions with low wind capacity factors — rely on nuclear the most. In contrast, wind-rich Texas, Central, and North Central don’t incorporate nuclear yet but instead add energy storage — a less-expensive option — to their mix. There’s still a bit of natural gas everywhere, in spite of its CO2 emissions.

    Under the most restrictive cap — 25 grams of CO2 per kWh — nuclear is in the mix everywhere. The highest use of nuclear is again correlated with low wind capacity factor. Central and North Central depend on nuclear the least. All regions continue to rely on a little natural gas to keep prices from skyrocketing due to the necessary but costly nuclear component. With nuclear in the mix, the need for storage declines in most regions.

    Results of the cost analysis are also interesting. Texas, Central, and North Central all have abundant wind resources, and they can delay incorporating the costly nuclear option, so the cost of their optimized system tends to be lower than costs for the other regions. In addition, their total capacity deployment — including all sources — tends to be lower than for the other regions. California and the Southwest both rely heavily on solar, and in both regions, costs and total deployment are relatively high.

    Lessons learned

    One unexpected result is the benefit of combining solar and wind resources. The problem with relying on solar alone is obvious: “Solar energy is available only five or six hours a day, so you need to build a lot of other generating sources and abundant storage capacity,” says Gençer. But an analysis of unit-by-unit operations at an hourly resolution yielded a less-intuitive trend: While solar installations only produce power in the midday hours, wind turbines generate the most power in the nighttime hours. As a result, solar and wind power are complementary. Having both resources available is far more valuable than having either one or the other. And having both impacts the need for storage, says Gençer: “Storage really plays a role either when you’re targeting a very low carbon intensity or where your resources are mostly solar and they’re not complemented by wind.”

    Gençer notes that the target for the U.S. electricity grid is to reach net zero by 2035. But the analysis showed that reaching just 100 grams of CO2 per kWh would require at least 50 percent of system capacity to be wind and solar. “And we’re nowhere near that yet,” he says.

    Indeed, Gençer and Farnsworth’s analysis doesn’t even include a zero emissions case. Why not? As Gençer says, “We cannot reach zero.” Wind and solar are usually considered to be net zero, but that’s not true. Wind, solar, and even storage have embedded carbon emissions due to materials, manufacturing, and so on. “To go to true net zero, you’d need negative emission technologies,” explains Gençer, referring to techniques that remove carbon from the air or ocean. That observation confirms the importance of performing life-cycle assessments.

    Farnsworth voices another concern: Coal quickly disappears in all regions because natural gas is an easy substitute for coal and has lower carbon emissions. “People say they’ve decreased their carbon emissions by a lot, but most have done it by transitioning from coal to natural gas power plants,” says Farnsworth. “But with that pathway for decarbonization, you hit a wall. Once you’ve transitioned from coal to natural gas, you’ve got to do something else. You need a new strategy — a new trajectory to actually reach your decarbonization target, which most likely will involve replacing the newly installed natural gas plants.”

    Gençer makes one final point: The availability of cheap nuclear — whether fission or fusion — would completely change the picture. When the tighter caps require the use of nuclear, the cost of electricity goes up. “The impact is quite significant,” says Gençer. “When we go from 100 grams down to 25 grams of CO2 per kWh, we see a 20 percent to 30 percent increase in the cost of electricity.” If it were available, a less-expensive nuclear option would likely be included in the technology mix under more lenient caps, significantly reducing the cost of decarbonizing power grids in all regions.

    The special case of California

    In another analysis, Gençer and Farnsworth took a closer look at California. In California, about 10 percent of total demand is now met with nuclear power. Yet the state’s remaining nuclear plants are scheduled for retirement very soon, and a 1976 law forbids the construction of new nuclear plants. (The state recently extended the lifetime of one nuclear plant to prevent the grid from becoming unstable.) “California is very motivated to decarbonize their grid,” says Farnsworth. “So how difficult will that be without nuclear power?”

    To find out, the researchers performed a series of analyses to investigate the challenge of decarbonizing in California with nuclear power versus without it. At 200 grams of CO2 per kWh — about a 50 percent reduction — the optimized mix and cost look the same with and without nuclear. Nuclear doesn’t appear due to its high cost. At 100 grams of CO2 per kWh — about a 75 percent reduction — nuclear does appear in the cost-optimized system, reducing the total system capacity while having little impact on the cost.

    But at 50 grams of CO2 per kWh, the ban on nuclear makes a significant difference. “Without nuclear, there’s about a 45 percent increase in total system size, which is really quite substantial,” says Farnsworth. “It’s a vastly different system, and it’s more expensive.” Indeed, the cost of electricity would increase by 7 percent.

    Going one step further, the researchers performed an analysis to determine the most decarbonized system possible in California. Without nuclear, the state could reach 40 grams of CO2 per kWh. “But when you allow for nuclear, you can get all the way down to 16 grams of CO2 per kWh,” says Farnsworth. “We found that California needs nuclear more than any other region due to its poor wind resources.”

    Impacts of a carbon tax

    One more case study examined a policy approach to incentivizing decarbonization. Instead of imposing a ceiling on carbon emissions, this strategy would tax every ton of carbon that’s emitted. Proposed taxes range from zero to $100 per ton.

    To investigate the effectiveness of different levels of carbon tax, Farnsworth and Gençer used the IG model to calculate the minimum-cost system for each region, assuming a certain cost for emitting each ton of carbon. The analyses show that a low carbon tax — just $10 per ton — significantly reduces emissions in all regions by phasing out all coal generation. In the Northwest region, for example, a carbon tax of $10 per ton decreases system emissions by 65 percent while increasing system cost by just 2.8 percent (relative to an untaxed system).
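The way a carbon tax feeds into per-kWh generation costs can be sketched with rough, typical emissions intensities. The intensity figures below are illustrative ballpark numbers, not values from the study:

```python
def tax_cost_per_kwh(tax_usd_per_ton: float, intensity_g_per_kwh: float) -> float:
    """Carbon-tax cost added per kWh generated, given an emissions intensity.
    One metric ton = 1,000,000 grams."""
    return tax_usd_per_ton * intensity_g_per_kwh / 1_000_000

# Rough typical intensities: coal ~1,000 g CO2/kWh, natural gas ~450 g CO2/kWh
coal = tax_cost_per_kwh(10, 1_000)  # $0.0100 per kWh
gas = tax_cost_per_kwh(10, 450)     # $0.0045 per kWh
print(f"coal: ${coal:.4f}/kWh, gas: ${gas:.4f}/kWh")
```

Even at $10 per ton, the tax penalizes coal more than twice as heavily per kWh as natural gas, which is consistent with the model’s finding that coal is the first technology to leave the mix.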

    After coal has been phased out of all regions, every increase in the carbon tax brings a slow but steady linear decrease in emissions and a linear increase in cost. But the rates of those changes vary from region to region. For example, the rate of decrease in emissions for each added tax dollar is far lower in the Central region than in the Northwest, largely due to the Central region’s already low emissions intensity without a carbon tax. Indeed, the Central region without a carbon tax has a lower emissions intensity than the Northwest region with a tax of $100 per ton.

    As Farnsworth summarizes, “A low carbon tax — just $10 per ton — is very effective in quickly incentivizing the replacement of coal with natural gas. After that, it really just incentivizes the replacement of natural gas technologies with more renewables and more energy storage.” She concludes, “If you’re looking to get rid of coal, I would recommend a carbon tax.”

    Future extensions of IG

    The researchers have already added hydroelectric to the generating options in the IG model, and they are now planning further extensions. For example, they will include additional regions for analysis, add other long-term energy storage options, and make changes that allow analyses to take into account the generating infrastructure that already exists. Also, they will use the model to examine the cost and value of interregional transmission to take advantage of the diversity of available renewable resources.

    Farnsworth emphasizes that the analyses reported here are just samples of what’s possible using the IG model. The model is a web-based tool that includes embedded data covering the whole United States, and the output from an analysis includes an easy-to-understand display of the required installations, hourly operation, and overall techno-economic analysis and life-cycle assessment results. “The user is able to go in and explore a vast number of scenarios with no data collection or pre-processing,” she says. “There’s no barrier to begin using the tool. You can just hop on and start exploring your options so you can make an informed decision about the best path forward.”

    This work was supported by the International Energy Agency Gas and Oil Technology Collaboration Program and the MIT Energy Initiative Low-Carbon Energy Centers.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Power when the sun doesn’t shine

    In 2016, at the huge Houston energy conference CERAWeek, MIT materials scientist Yet-Ming Chiang found himself talking to a Tesla executive about a thorny problem: how to store the output of solar panels and wind turbines for long durations.        

    Chiang, the Kyocera Professor of Materials Science and Engineering, and Mateo Jaramillo, a vice president at Tesla, knew that utilities lacked a cost-effective way to store renewable energy to cover peak levels of demand and to bridge the gaps during windless and cloudy days. They also knew that the scarcity of raw materials used in conventional energy storage devices needed to be addressed if renewables were ever going to displace fossil fuels on the grid at scale.

    Energy storage technologies can facilitate access to renewable energy sources, boost the stability and reliability of power grids, and ultimately accelerate grid decarbonization. The global market for these systems — essentially large batteries — is expected to grow tremendously in the coming years. A study by the nonprofit LDES (Long Duration Energy Storage) Council pegs the long-duration energy storage market at between 80 and 140 terawatt-hours by 2040. “That’s a really big number,” Chiang notes. “Every 10 people on the planet will need access to the equivalent of one EV [electric vehicle] battery to support their energy needs.”
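Chiang’s framing can be checked with a quick back-of-envelope calculation, assuming a global population near 8 billion and roughly 100 kWh per large EV battery pack (both figures are assumptions, not from the article):

```python
population = 8_000_000_000  # rough global population (assumed)
ev_battery_kwh = 100        # typical large EV pack (assumed)

# One EV-battery equivalent per 10 people:
storage_kwh = (population / 10) * ev_battery_kwh
print(f"{storage_kwh / 1e9:.0f} TWh")  # 80 TWh
```

The result lands at the low end of the LDES Council’s 80 to 140 terawatt-hour projection, so the two framings are consistent.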

    In 2017, one year after they met in Houston, Chiang and Jaramillo joined forces to co-found Form Energy in Somerville, Massachusetts, with MIT graduates Marco Ferrara SM ’06, PhD ’08 and William Woodford PhD ’13, and energy storage veteran Ted Wiley.

    “There is a burgeoning market for electrical energy storage because we want to achieve decarbonization as fast and as cost-effectively as possible,” says Ferrara, Form’s senior vice president in charge of software and analytics.

    Investors agreed. Over the next six years, Form Energy would raise more than $800 million in venture capital.

    Bridging gaps

    The simplest battery consists of an anode, a cathode, and an electrolyte. During discharge, ions move through the electrolyte while electrons flow through the external circuit from the negative anode to the positive cathode. During charge, an external voltage reverses the process: the anode becomes the positive terminal, the cathode becomes the negative terminal, and electrons move back to where they started. Materials used for the anode, cathode, and electrolyte determine the battery’s weight, power, and cost “entitlement,” which is the total cost at the component level.

    During the 1980s and 1990s, the use of lithium revolutionized batteries, making them smaller, lighter, and able to hold a charge for longer. The storage devices Form Energy has devised are rechargeable batteries based on iron, which has several advantages over lithium. A big one is cost.

    Chiang once declared to the MIT Club of Northern California, “I love lithium-ion.” Two of the four MIT spinoffs Chiang founded center on innovative lithium-ion batteries. But at hundreds of dollars a kilowatt-hour (kWh) and with a storage capacity typically measured in hours, lithium-ion was ill-suited for the use he now had in mind.

    The approach Chiang envisioned had to be cost-effective enough to boost the attractiveness of renewables. Making solar and wind energy reliable enough for millions of customers meant storing it long enough to fill the gaps created by extreme weather, grid outages, lulls in the wind, and stretches of cloudy days.

    To be competitive with legacy power plants, Chiang’s method had to come in at around $20 per kilowatt-hour of stored energy — one-tenth the cost of lithium-ion battery storage.

    But how to transition from expensive batteries that store and discharge over a couple of hours to some as-yet-undefined, cheap, longer-duration technology?

    “One big ball of iron”

    That’s where Ferrara comes in. Ferrara has a PhD in nuclear engineering from MIT and a PhD in electrical engineering and computer science from the University of L’Aquila in his native Italy. In 2017, as a research affiliate at the MIT Department of Materials Science and Engineering, he worked with Chiang to model the grid’s need to manage renewables’ intermittency.

    How intermittent renewable output is depends on where you are. In the United States, for instance, there’s the windy Great Plains; the sun-drenched, relatively low-wind deserts of Arizona, New Mexico, and Nevada; and the often-cloudy Pacific Northwest.

    Ferrara, in collaboration with Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society and her MIT team, modeled four representative locations in the United States and concluded that energy storage with capacity costs below roughly $20/kWh and discharge durations of multiple days would allow a wind-solar mix to provide cost-competitive, firm electricity in resource-abundant locations.

    Now that they had cost and duration targets, they turned their attention to materials. At the price point Form Energy was aiming for, lithium was out of the question. Chiang looked at plentiful and cheap sulfur. But a sulfur, sodium, water, and air battery had technical challenges.

    Thomas Edison once used iron as an electrode, and iron-air batteries were first studied in the 1960s. They were too heavy to make good transportation batteries. But this time, Chiang and team were looking at a battery that sat on the ground, so weight didn’t matter. Their priorities were cost and availability.

    “Iron is produced, mined, and processed on every continent,” Chiang says. “The Earth is one big ball of iron. We wouldn’t ever have to worry about even the most ambitious projections of how much storage that the world might use by mid-century.” If Form ever moves into the residential market, “it’ll be the safest battery you’ve ever parked at your house,” Chiang laughs. “Just iron, air, and water.”

    Scientists call it reversible rusting. While discharging, the battery takes in oxygen and converts iron to rust. Applying an electrical current converts the rusty pellets back to iron, and the battery “breathes out” oxygen as it charges. “In chemical terms, you have iron, and it becomes iron hydroxide,” Chiang says. “That means electrons were extracted. You get those electrons to go through the external circuit, and now you have a battery.”
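The “reversible rusting” Chiang describes corresponds to the standard textbook half-reactions for an alkaline iron-air cell, written here for discharge (charging runs them in reverse). The article itself gives only the overall picture, so these equations are supplied for illustration:

```latex
% Iron (negative) electrode, discharge:
\mathrm{Fe + 2\,OH^- \rightarrow Fe(OH)_2 + 2\,e^-}
% Air (positive) electrode, discharge:
\mathrm{O_2 + 2\,H_2O + 4\,e^- \rightarrow 4\,OH^-}
% Overall cell reaction:
\mathrm{2\,Fe + O_2 + 2\,H_2O \rightarrow 2\,Fe(OH)_2}
```

The overall reaction consumes only iron, oxygen, and water — the “just iron, air, and water” Chiang refers to.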

    Form Energy’s battery modules are approximately the size of a washer-and-dryer unit. They are stacked in 40-foot containers, and several containers are electrically connected with power conversion systems to build storage plants that can cover several acres.

    The right place at the right time

    The modules don’t look or act like anything utilities have contracted for before.

    That’s one of Form’s key challenges. “There is not widespread knowledge of needing these new tools for decarbonized grids,” Ferrara says. “That’s not the way utilities have typically planned. They’re looking at all the tools in the toolkit that exist today, which may not contemplate a multi-day energy storage asset.”

    Form Energy’s customers are largely traditional power companies seeking to expand their portfolios of renewable electricity. Some are in the process of decommissioning coal plants and shifting to renewables.

    Ferrara’s research pinpointing the need for very low-cost multi-day storage provides key data for power suppliers seeking to determine the most cost-effective way to integrate more renewable energy.

    Using the same modeling techniques, Ferrara and team show potential customers how the technology fits in with their existing system, how it competes with other technologies, and how, in some cases, it can operate synergistically with other storage technologies.

    “They may need a portfolio of storage technologies to fully balance renewables on different timescales of intermittency,” he says. But other than the technology developed at Form, “there isn’t much out there, certainly not within the cost entitlement of what we’re bringing to market.” Thanks to Chiang and Jaramillo’s chance encounter in Houston, Form has a several-year lead on other companies working to address this challenge.

    In June 2023, Form Energy closed its biggest deal to date for a single project: Georgia Power’s order for a 15-megawatt/1,500-megawatt-hour system. That order brings Form’s total amount of energy storage under contracts with utility customers to 40 megawatts/4 gigawatt-hours. To meet the demand, Form is building a new commercial-scale battery manufacturing facility in West Virginia.
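The power and energy figures quoted for these contracts imply the multi-day discharge durations Form is targeting. Duration is simply energy divided by power:

```python
def discharge_duration_hours(energy_mwh: float, power_mw: float) -> float:
    """Rated discharge duration at full power: stored energy divided by power."""
    return energy_mwh / power_mw

# Georgia Power order: 15 MW / 1,500 MWh
print(discharge_duration_hours(1_500, 15))  # 100.0 hours

# Form's total contracted storage: 40 MW / 4 GWh (= 4,000 MWh)
print(discharge_duration_hours(4_000, 40))  # 100.0 hours
```

Both contracts work out to roughly 100 hours — about four days — of discharge at full power, in contrast to the few-hour durations typical of lithium-ion systems.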

    The fact that Form Energy is creating jobs in an area that lost more than 10,000 steel jobs over the past decade is not lost on Chiang. “And these new jobs are in clean tech. It’s super exciting to me personally to be doing something that benefits communities outside of our traditional technology centers.

    “This is the right time for so many reasons,” Chiang says. He says he and his Form Energy co-founders feel “tremendous urgency to get these batteries out into the world.”

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Anushree Chaudhuri: Involving local communities in renewable energy planning

    Anushree Chaudhuri has a history of making bold decisions. In fifth grade, she biked across her home state of California with little prior experience. In her first year at MIT, she advocated for student recommendations in the preparation of the Institute’s Climate Action Plan for the Decade. And recently, she led a field research project throughout California to document the perspectives of rural and Indigenous populations affected by climate change and clean energy projects.

    “It doesn’t matter who you are or how young you are, you can get involved with something and inspire others to do so,” the senior says.

    Initially a materials science and engineering major, Chaudhuri was quickly drawn to environmental policy issues and later decided to double-major in urban studies and planning and in economics. Chaudhuri will receive her bachelor’s degrees this month, followed by a master’s degree in city planning in the spring.

    The importance of community engagement in policymaking has become one of Chaudhuri’s core interests. A 2024 Marshall Scholar, she is headed to the U.K. next year to pursue a PhD related to environment and development. She hopes to build on her work in California and continue to bring attention to impacts that energy transitions can have on local communities, which tend to be rural and low-income. Addressing resistance to these projects can be challenging, but “ignoring it leaves these communities in the dust and widens the urban-rural divide,” she says.

    Silliness and sustainability 

    Chaudhuri classifies her many activities into two groups: those that help her unwind, like her living community, Conner Two, and those that require intensive deliberation, like her sustainability-related organizing.

    Conner Two, in the Burton-Conner residence hall, is where Chaudhuri feels most at home on campus. She describes the group’s activities as “silly” and emphasizes their love of jokes, even in the floor’s nickname, “the British Floor,” which is intentionally absurd, as the residents are rarely British.

    Chaudhuri’s first involvement with sustainability issues on campus was during the preparation of MIT’s Fast Forward Climate Action Plan in the 2020-2021 academic year. As a co-lead of one of several student working groups, she helped organize key discussions between the administration, climate experts, and student government to push for six main goals in the plan, including an ethical investing framework. Being involved with a significant student movement so early on in her undergraduate career was a learning opportunity for Chaudhuri and impressed upon her that young people can play critical roles in making far-reaching structural changes.

    The experience also made her realize how many organizations on campus shared similar goals even if their perspectives varied, and she saw the potential for more synergy among them.

    Chaudhuri went on to co-lead the Student Sustainability Coalition to help build community across the sustainability-related organizations on campus and create a centralized system that would make it easier for outsiders and group members to access information and work together. Through the coalition, students have collaborated on efforts including campus events, and off-campus matters such as the Cambridge Green New Deal hearings.

    Another benefit to such a network: It creates a support system that recognizes even small-scale victories. “Community is so important to avoid burnout when you’re working on something that can be very frustrating and an uphill battle like negotiating with leadership or seeking policy changes,” Chaudhuri says.

    Fieldwork

    For the past year, Chaudhuri has been doing independent research in California with the support of several advisory organizations to host conversations with groups affected by renewable energy projects, which, as she has documented, are often concentrated in rural, low-income, and Indigenous communities. The introduction of renewable energy facilities, such as wind and solar farms, can perpetuate existing inequities if they ignore serious community concerns, Chaudhuri says.

    As state or federal policymakers and private developers carry out the permitting process for these projects, “they can repeat histories of extraction, sometimes infringing on the rights of a local or Tribal government to decide what happens with their land,” she says.

    In her site visits, she is documenting community opposition to controversial solar and wind proposals and collecting oral histories. Doing fieldwork for the first time as an outsider was difficult for Chaudhuri, as she dealt with distrust, unpredictability, and needing to be completely flexible for her sources. “A lot of it was just being willing to drop everything and go and be a little bit adventurous and take some risks,” she says.

    Role models and reading

    Chaudhuri is quick to credit many of the role models and other formative influences in her life.

    After working on the Climate Action Plan, Chaudhuri attended a public narrative workshop at Harvard University led by Marshall Ganz, a grassroots community organizer who worked with Cesar Chavez and on the 2008 Obama presidential campaign. “That was a big inspiration and kind of shaped how I viewed leadership in, for example, campus advocacy, but also in other projects and internships.”

    Reading has also influenced Chaudhuri’s perspective on community organizing. “After the Climate Action Plan campaign, I realized that a lot of what made the campaign successful or not could track well with organizing and social change theories, and histories of social movements. So, that was a good experience for me, being able to critically reflect on it and tie it into these other things I was learning about.”

    Since beginning her studies at MIT, Chaudhuri has become especially interested in social theory and political philosophy, starting with ancient forms of Western and Eastern ethics and continuing up to the 20th and 21st century philosophers who inspire her. Chaudhuri cites Amartya Sen and Olúfẹ́mi Táíwò as particularly influential. “I think [they’ve] provided a really compelling framework to guide a lot of my own values,” she says.

    Another role model is Brenda Mallory, the current chair of the U.S. Council on Environmental Quality, whom Chaudhuri was grateful to meet at the United Nations COP27 Climate Conference. As an intern at the U.S. Department of Energy, Chaudhuri worked within a team on implementing the federal administration’s Justice40 initiative, which commits 40 percent of federal climate investments to disadvantaged communities. This initiative was largely directed by Mallory, and Chaudhuri admires how Mallory was able to make an impact at different levels of government through her leadership. Chaudhuri hopes to follow in Mallory’s footsteps someday, as a public official committed to just policies and programs.

    “Good leaders are those who empower good leadership in others,” Chaudhuri says.

  • in

    Accelerated climate action needed to sharply reduce current risks to life and life-support systems

    Hottest day on record. Hottest month on record. Extreme marine heatwaves. Record-low Antarctic sea-ice.

    While El Niño is a short-term factor in this year’s record-breaking heat, human-caused climate change is the long-term driver. Global warming is edging closer to 1.5 degrees Celsius — the aspirational upper limit set in the Paris Agreement in 2015 — and is ushering in more intense and frequent heatwaves, floods, wildfires, and other climate extremes much sooner than many expected. Current greenhouse gas emissions-reduction policies are far too weak to keep the planet from exceeding that threshold; in fact, on roughly one-third of days in 2023, the average global temperature was at least 1.5 C higher than pre-industrial levels. Faster and bolder action will be needed — from the in-progress United Nations Climate Change Conference (COP28) and beyond — to stabilize the climate and minimize risks to human (and nonhuman) lives and the life-support systems (e.g., food, water, and shelter) upon which they depend.

    Quantifying the risks posed by simply maintaining existing climate policies — and the benefits (i.e., avoided damages and costs) of accelerated climate action aligned with the 1.5 C goal — is the central task of the 2023 Global Change Outlook, recently released by the MIT Joint Program on the Science and Policy of Global Change.

    Based on a rigorous, integrated analysis of population and economic growth, technological change, Paris Agreement emissions-reduction pledges (Nationally Determined Contributions, or NDCs), geopolitical tensions, and other factors, the report presents the MIT Joint Program’s latest projections for the future of the earth’s energy, food, water, and climate systems, as well as prospects for achieving the Paris Agreement’s short- and long-term climate goals.

    The 2023 Global Change Outlook performs its risk-benefit analysis by focusing on two scenarios. The first, Current Trends, assumes that Paris Agreement NDCs are implemented through the year 2030, and maintained thereafter. While this scenario represents an unprecedented global commitment to limit greenhouse gas emissions, it neither stabilizes climate nor limits climate change. The second scenario, Accelerated Actions, extends from the Paris Agreement’s initial NDCs and aligns with its long-term goals. This scenario aims to limit and stabilize human-induced global climate warming to 1.5 C by the end of this century with at least a 50 percent probability. Uncertainty is quantified using 400-member ensembles of projections for each scenario.

    This year’s report also includes a visualization tool that enables a higher-resolution exploration of both scenarios.

    Energy

    Between 2020 and 2050, population and economic growth are projected to drive continued increases in energy needs and electrification. Successful achievement of current Paris Agreement pledges will reinforce a shift away from fossil fuels, but additional actions will be required to accelerate the energy transition needed to cap global warming at 1.5 C by 2100.

    During this 30-year period under the Current Trends scenario, the share of fossil fuels in the global energy mix drops from 80 percent to 70 percent. Variable renewable energy (wind and solar) is the fastest growing energy source with more than an 8.6-fold increase. In the Accelerated Actions scenario, the share of low-carbon energy sources grows from 20 percent to slightly more than 60 percent, a much faster growth rate than in the Current Trends scenario; wind and solar energy undergo more than a 13.3-fold increase.
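    The fold-increases above can be translated into implied annual growth rates with a quick back-of-envelope calculation. This is illustrative arithmetic over the 2020–2050 window, not a figure from the report:

    ```python
    # Compound annual growth rate (CAGR) implied by an N-fold increase
    # over a 30-year horizon (back-of-envelope, not from the report).

    def implied_cagr(fold_increase: float, years: int = 30) -> float:
        """Annual growth rate g such that (1 + g) ** years == fold_increase."""
        return fold_increase ** (1 / years) - 1

    current_trends = implied_cagr(8.6)   # wind/solar, Current Trends scenario
    accelerated = implied_cagr(13.3)     # wind/solar, Accelerated Actions scenario

    print(f"Current Trends:      {current_trends:.1%} per year")
    print(f"Accelerated Actions: {accelerated:.1%} per year")
    ```

    An 8.6-fold increase over 30 years works out to roughly 7 percent annual growth, and a 13.3-fold increase to roughly 9 percent.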

    While the electric power sector is expected to successfully scale up (with electricity production increasing by 73 percent under Current Trends, and 87 percent under Accelerated Actions) to accommodate increased demand (particularly for variable renewables), other sectors face stiffer challenges in their efforts to decarbonize.

    “Due to a sizeable need for hydrocarbons in the form of liquid and gaseous fuels for sectors such as heavy-duty long-distance transport, high-temperature industrial heat, agriculture, and chemical production, hydrogen-based fuels and renewable natural gas remain attractive options, but the challenges related to their scaling opportunities and costs must be resolved,” says MIT Joint Program Deputy Director Sergey Paltsev, a lead author of the 2023 Global Change Outlook.

    Water, food, and land

    With a global population projected to reach 9.9 billion by 2050, the Current Trends scenario indicates that more than half of the world’s population will experience pressure on their water supply, and that three of every 10 people will live in water basins subject to compounding societal and environmental pressures on water resources. Comparing populations under combined water stress across the scenarios reveals that the Accelerated Actions scenario can reduce by approximately 40 million the additional 570 million people projected to be living in water-stressed basins at mid-century.

    Under the Current Trends scenario, agriculture and food production will keep growing. This will increase pressure for land-use change, water use, and use of energy-intensive inputs, which will also lead to higher greenhouse gas emissions. Under the Accelerated Actions scenario, agricultural and food output by 2050 is lower than under Current Trends, since climate policy in this scenario dampens economic growth and raises production costs. Because livestock production is more greenhouse gas emissions-intensive than crop and food production, carbon-pricing policies raise its costs and prices and drive its demand downward. These impacts are transmitted to the food sector and imply lower consumption of livestock-based products.

    Land-use changes in the Accelerated Actions scenario are similar to those in the Current Trends scenario by 2050, except for land dedicated to bioenergy production. At the world level, the Accelerated Actions scenario requires cropland area to increase by 1 percent and pastureland to decrease by 4.2 percent, but land use for bioenergy must increase by 44 percent.

    Climate trends

    Under the Current Trends scenario, the world is likely (more than 50 percent probability) to exceed 2 C global climate warming by 2060, 2.8 C by 2100, and 3.8 C by 2150. Our latest climate-model information indicates that maximum temperatures will likely outpace mean temperature trends over much of North and South America, Europe, northern and southeast Asia, and southern parts of Africa and Australasia. So as human-forced climate warming intensifies, these regions are expected to experience more pronounced record-breaking extreme heat events.

    Under the Accelerated Actions scenario, global temperature will continue to rise through the next two decades. But by 2050, global temperature will stabilize, and then slightly decline through the latter half of the century.

    “By 2100, the Accelerated Actions scenario indicates that the world can be virtually assured of remaining below 2 C of global warming,” says MIT Joint Program Deputy Director C. Adam Schlosser, a lead author of the report. “Nevertheless, additional policy mechanisms must be designed with more comprehensive targets that also support a cleaner environment, sustainable resources, as well as improved and equitable human health.”

    The Accelerated Actions scenario not only stabilizes global precipitation increase (by 2060), but substantially reduces the magnitude and potential range of increases to almost one-third of Current Trends global precipitation changes. Any global increase in precipitation heightens flood risk worldwide, so policies aligned with the Accelerated Actions scenario would considerably reduce that risk.

    Prospects for meeting Paris Agreement climate goals

    Numerous countries and regions are progressing in fulfilling their Paris Agreement pledges, and many have declared more ambitious greenhouse gas emissions-mitigation goals. However, financing to assist the least-developed countries in sustainable development is not forthcoming at the levels needed. In this year’s Global Stocktake Synthesis Report, the U.N. Framework Convention on Climate Change evaluated emissions reductions communicated by the parties of the Paris Agreement and concluded that global emissions are not on track to fulfill the most ambitious long-term temperature goals of the Paris Agreement (to keep warming well below 2 C — and, ideally, 1.5 C — above pre-industrial levels), and that the window to raise ambition and implement existing commitments is rapidly narrowing. The Current Trends scenario arrives at the same conclusion.

    The 2023 Global Change Outlook finds that both global temperature targets remain achievable, but require much deeper near-term emissions reductions than those embodied in current NDCs.

    Reducing climate risk

    This report explores two well-known sets of risks posed by climate change. Research highlighted indicates that elevated climate-related physical risks will continue to evolve by mid-century, along with heightened transition risks that arise from shifts in the political, technological, social, and economic landscapes that are likely to occur during the transition to a low-carbon economy.

    “Our Outlook shows that without aggressive actions the world will surpass critical greenhouse gas concentration thresholds and climate targets in the coming decades,” says MIT Joint Program Director Ronald Prinn. “While the costs of inaction are getting higher, the costs of action are more manageable.”

  • in

    MIT design would harness 40 percent of the sun’s heat to produce clean hydrogen fuel

    MIT engineers aim to produce totally green, carbon-free hydrogen fuel with a new, train-like system of reactors that is driven solely by the sun.

    In a study appearing today in Solar Energy Journal, the engineers lay out the conceptual design for a system that can efficiently produce “solar thermochemical hydrogen.” The system harnesses the sun’s heat to directly split water and generate hydrogen — a clean fuel that can power long-distance trucks, ships, and planes while emitting no greenhouse gases in the process.

    Today, hydrogen is largely produced through processes that involve natural gas and other fossil fuels, making the otherwise green fuel more of a “grey” energy source when considered from the start of its production to its end use. In contrast, solar thermochemical hydrogen, or STCH, offers a totally emissions-free alternative, as it relies entirely on renewable solar energy to drive hydrogen production. But so far, existing STCH designs have limited efficiency: Only about 7 percent of incoming sunlight is used to make hydrogen. The results so far have been low-yield and high-cost.

    In a big step toward realizing solar-made fuels, the MIT team estimates its new design could harness up to 40 percent of the sun’s heat to generate that much more hydrogen. The increase in efficiency could drive down the system’s overall cost, making STCH a potentially scalable, affordable option to help decarbonize the transportation industry.

    “We’re thinking of hydrogen as the fuel of the future, and there’s a need to generate it cheaply and at scale,” says the study’s lead author, Ahmed Ghoniem, the Ronald C. Crane Professor of Mechanical Engineering at MIT. “We’re trying to achieve the Department of Energy’s goal, which is to make green hydrogen by 2030, at $1 per kilogram. To improve the economics, we have to improve the efficiency and make sure most of the solar energy we collect is used in the production of hydrogen.”

    Ghoniem’s study co-authors are Aniket Patankar, first author and MIT postdoc; Harry Tuller, MIT professor of materials science and engineering; Xiao-Yu Wu of the University of Waterloo; and Wonjae Choi at Ewha Womans University in South Korea.

    Solar stations

    Similar to other proposed designs, the MIT system would be paired with an existing source of solar heat, such as a concentrated solar plant (CSP) — a circular array of hundreds of mirrors that collect and reflect sunlight to a central receiving tower. An STCH system then absorbs the receiver’s heat and directs it to split water and produce hydrogen. This process is very different from electrolysis, which uses electricity instead of heat to split water.

    At the heart of a conceptual STCH system is a two-step thermochemical reaction. In the first step, water in the form of steam is exposed to a metal. This causes the metal to grab oxygen from steam, leaving hydrogen behind. This metal “oxidation” is similar to the rusting of iron in the presence of water, but it occurs much faster. Once hydrogen is separated, the oxidized (or rusted) metal is reheated in a vacuum, which acts to reverse the rusting process and regenerate the metal. With the oxygen removed, the metal can be cooled and exposed to steam again to produce more hydrogen. This process can be repeated hundreds of times.
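    The alternating steps above can be sketched as a simple state machine. This is a minimal illustration of the cycle's logic only, not a model of the chemistry; the state names, step functions, and three-cycle loop are hypothetical:

    ```python
    # Minimal state-machine sketch of the two-step redox cycle described above.
    # States and function names are illustrative; real STCH behavior depends on
    # the specific metal oxide and operating temperatures.

    REDUCED, OXIDIZED = "reduced", "oxidized"

    def oxidation_step(state: str) -> tuple[str, str]:
        """Expose reduced metal to steam: the metal grabs oxygen, freeing H2."""
        assert state == REDUCED, "metal must be reduced (de-rusted) first"
        return OXIDIZED, "H2"

    def reduction_step(state: str) -> tuple[str, str]:
        """Reheat oxidized metal in a vacuum: oxygen is driven off again."""
        assert state == OXIDIZED, "metal must be oxidized (rusted) first"
        return REDUCED, "O2"

    # One reactor cycling repeatedly, alternating the two steps.
    state, products = REDUCED, []
    for _ in range(3):                      # three full cycles for illustration
        state, gas = oxidation_step(state)  # steam in, hydrogen out
        products.append(gas)
        state, gas = reduction_step(state)  # vacuum reheat, oxygen out
        products.append(gas)

    print(products)  # alternating hydrogen and oxygen
    ```

    Each pass through the loop returns the metal to its reduced state, which is why the same material can, as the article notes, be reused hundreds of times.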

    The MIT system is designed to optimize this process. The system as a whole resembles a train of box-shaped reactors running on a circular track. In practice, this track would be set around a solar thermal source, such as a CSP tower. Each reactor in the train would house the metal that undergoes the redox, or reversible rusting, process.

    Each reactor would first pass through a hot station, where it would be exposed to the sun’s heat at temperatures of up to 1,500 degrees Celsius. This extreme heat would effectively pull oxygen out of a reactor’s metal. That metal would then be in a “reduced” state — ready to grab oxygen from steam. For this to happen, the reactor would move to a cooler station at temperatures around 1,000 C, where it would be exposed to steam to produce hydrogen.

    Rust and rails

    Other similar STCH concepts have run up against a common obstacle: what to do with the heat released by the reduced reactor as it is cooled. Without recovering and reusing this heat, the system’s efficiency is too low to be practical.

    A second challenge has to do with creating an energy-efficient vacuum where metal can de-rust. Some prototypes generate a vacuum using mechanical pumps, though the pumps are too energy-intensive and costly for large-scale hydrogen production.

    To address these challenges, the MIT design incorporates several energy-saving workarounds. To recover most of the heat that would otherwise escape from the system, reactors on opposite sides of the circular track are allowed to exchange heat through thermal radiation; hot reactors get cooled while cool reactors get heated. This keeps the heat within the system. The researchers also added a second set of reactors that would circle around the first train, moving in the opposite direction. This outer train of reactors would operate at generally cooler temperatures and would be used to evacuate oxygen from the hotter inner train, without the need for energy-consuming mechanical pumps.
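    The scale of that radiative exchange can be estimated with the Stefan-Boltzmann law. The sketch below assumes an idealized parallel-plate geometry and an emissivity of 0.9 — assumptions chosen for illustration, not parameters from the MIT design:

    ```python
    # Idealized estimate of radiative heat exchange between a hot reactor
    # (just reduced, ~1,500 C) and a cool one (~1,000 C) facing each other.
    # Parallel gray plates are assumed; emissivity is made up for illustration.

    SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def net_radiative_flux(t_hot_c: float, t_cold_c: float,
                           emissivity: float = 0.9) -> float:
        """Net flux (W/m^2) from hot to cold surface, parallel-plate model."""
        t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
        # Effective emissivity for two facing gray plates:
        eps_eff = 1 / (1 / emissivity + 1 / emissivity - 1)
        return eps_eff * SIGMA * (t_hot**4 - t_cold**4)

    flux = net_radiative_flux(1500, 1000)
    print(f"{flux / 1e3:.0f} kW per square meter of facing surface")
    ```

    Even under these rough assumptions the flux is on the order of hundreds of kilowatts per square meter, which suggests why passive radiative exchange between reactors can move useful amounts of heat without pumps or working fluids.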

    These outer reactors would carry a second type of metal that can also easily oxidize. As they circle around, the outer reactors would absorb oxygen from the inner reactors, effectively de-rusting the original metal without having to use energy-intensive vacuum pumps. Both reactor trains would run continuously and would generate separate streams of pure hydrogen and oxygen.

    The researchers carried out detailed simulations of the conceptual design, and found that it would significantly boost the efficiency of solar thermochemical hydrogen production, from 7 percent, as previous designs have demonstrated, to 40 percent.

    “We have to think of every bit of energy in the system, and how to use it, to minimize the cost,” Ghoniem says. “And with this design, we found that everything can be powered by heat coming from the sun. It is able to use 40 percent of the sun’s heat to produce hydrogen.”

    “If this can be realized, it could drastically change our energy future — namely, enabling hydrogen production, 24/7,” says Christopher Muhich, an assistant professor of chemical engineering at Arizona State University, who was not involved in the research. “The ability to make hydrogen is the linchpin to producing liquid fuels from sunlight.”

    In the next year, the team will be building a prototype of the system that they plan to test in concentrated solar power facilities at laboratories of the Department of Energy, which is currently funding the project.

    “When fully implemented, this system would be housed in a little building in the middle of a solar field,” Patankar explains. “Inside the building, there could be one or more trains each having about 50 reactors. And we think this could be a modular system, where you can add reactors to a conveyor belt, to scale up hydrogen production.”

    This work was supported by the Centers for Mechanical Engineering Research and Education at MIT and SUSTech.

  • in

    Tracking US progress on the path to a decarbonized economy

    Investments in new technologies and infrastructure that help reduce greenhouse gas emissions — everything from electric vehicles to heat pumps — are growing rapidly in the United States. Now, a new database enables these investments to be comprehensively monitored in real time, thereby helping to assess the efficacy of policies designed to spur clean investments and address climate change.

    The Clean Investment Monitor (CIM), developed by a team at MIT’s Center for Energy and Environmental Policy Research (CEEPR) led by Institute Innovation Fellow Brian Deese and in collaboration with the Rhodium Group, an independent research firm, provides a timely and methodologically consistent tracking of all announced public and private investments in the manufacture and deployment of clean technologies and infrastructure in the U.S. The CIM offers a means of assessing the country’s progress in transitioning to a cleaner economy and reducing greenhouse gas emissions.

    In the year from July 1, 2022, to June 30, 2023, data from the CIM show, clean investments nationwide totaled $213 billion. To put that figure in perspective, each of 18 U.S. states has a GDP below $213 billion.

    “As clean technology becomes a larger and larger sector in the United States, its growth will have far-reaching implications — for our economy, for our leadership in innovation, and for reducing our greenhouse gas emissions,” says Deese, who served as the director of the White House National Economic Council from January 2021 to February 2023. “The Clean Investment Monitor is a tool designed to help us understand and assess this growth in a real-time, comprehensive way. Our hope is that the CIM will enhance research and improve public policies designed to accelerate the clean energy transition.”

    Launched on Sept. 13, the CIM shows that the $213 billion invested over the last year reflects a 37 percent increase from the $155 billion invested in the previous 12-month period. According to CIM data, the fastest growth has been in the manufacturing sector, where investment grew 125 percent year-on-year, particularly in electric vehicle and solar manufacturing.
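    The 37 percent figure follows directly from the two annual totals — a quick check using the rounded numbers in the text:

    ```python
    # Year-on-year growth implied by the CIM's reported annual totals.
    prior, latest = 155e9, 213e9  # clean investment (USD) in successive 12-month periods
    growth = (latest - prior) / prior
    print(f"{growth:.0%}")  # → 37%
    ```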

    Beyond manufacturing, the CIM also provides data on investment in clean energy production, such as solar, wind, and nuclear; industrial decarbonization, such as sustainable aviation fuels; and retail investments by households and businesses in technologies like heat pumps and zero-emission vehicles. The CIM’s data goes back to 2018, providing a baseline before the passage of the legislation in 2021 and 2022.

    “We’re really excited to bring MIT’s analytical rigor to bear to help develop the Clean Investment Monitor,” says Christopher Knittel, the George P. Shultz Professor of Energy Economics at the MIT Sloan School of Management and CEEPR’s faculty director. “Bolstered by Brian’s keen understanding of the policy world, this tool is poised to become the go-to reference for anyone looking to understand clean investment flows and what drives them.”

    In 2021 and 2022, the U.S. federal government enacted a series of new laws that together aimed to catalyze the largest-ever national investment in clean energy technologies and related infrastructure. The Clean Investment Monitor can also be used to track how well the legislation is living up to expectations.

    The three pieces of federal legislation — the Infrastructure Investment and Jobs Act, enacted in 2021, and the Inflation Reduction Act (IRA) and the CHIPS and Science Act, both enacted in 2022 — provide grants, loans, loan guarantees, and tax incentives to spur investments in technologies that reduce greenhouse gas emissions.

    The effectiveness of the legislation in hastening the U.S. transition to a clean economy will be crucial in determining whether the country reaches its goal of reducing greenhouse gas emissions by 50 percent to 52 percent below 2005 levels in 2030. An analysis earlier this year estimated that the IRA will lead to a 43 percent to 48 percent decline in economywide emissions below 2005 levels by 2035, compared with 27 percent to 35 percent in a reference scenario without the law’s provisions, helping to bring the U.S. goal within reach.

    The Clean Investment Monitor is available at cleaninvestmentmonitor.org.

  • in

    Alumnus’ thermal battery helps industry eliminate fossil fuels

    The explosion of renewable energy projects around the globe is leading to a saturation problem. As more renewable power contributes to the grid, the value of electricity is plummeting during the times of day when wind and solar hit peak productivity. The problem is limiting renewable energy investments in some of the sunniest and windiest places in the world.

    Now Antora Energy, co-founded by David Bierman SM ’14, PhD ’17, is addressing the intermittent nature of wind and solar with a low-cost, highly efficient thermal battery that stores electricity as heat to allow manufacturers and other energy-hungry businesses to eliminate their use of fossil fuels.

    “We take electricity when it’s cheapest, meaning when wind gusts are strongest and the sun is shining brightest,” Bierman explains. “We run that electricity through a resistive heater to drive up the temperature of a very inexpensive material — we use carbon blocks, which are extremely stable, produced at incredible scales, and are some of the cheapest materials on Earth. When you need to pull energy from the battery, you open a large shutter to extract thermal radiation, which is used to generate process heat or power using our thermophotovoltaic, or TPV, technology. The end result is a zero-carbon, flexible, combined heat and power system for industry.”

    Antora’s battery could dramatically expand the application of renewable energy by enabling its use in industry, a sector of the U.S. economy that accounted for nearly a quarter of all greenhouse gas emissions in 2021.

    Antora says it is able to deliver on the long-sought promise of heat-to-power TPV technology because it has achieved new levels of efficiency and scalability with its cells. Earlier this year, Antora opened a new manufacturing facility that will be capable of producing 2 megawatts of its TPV cells each year — which the company says makes it the largest TPV production facility in the world.

    Antora’s thermal battery manufacturing facilities and demonstration unit are located in sun-soaked California, where renewables make up close to a third of all electricity. But Antora’s team says its technology holds promise in other regions as increasingly large renewable projects connect to grids across the globe.

    “We see places today [with high renewables] as a sign of where things are going,” Bierman says. “If you look at the tailwinds we have in the renewable industry, there’s a sense of inevitability about solar and wind, which will need to be deployed at incredible scales to avoid a climate catastrophe. We’ll see terawatts and terawatts of new additions of these renewables, so what you see today in California or Texas or Kansas, with significant periods of renewable overproduction, is just the tip of the iceberg.”

    Bierman has been working on thermal energy storage and thermophotovoltaics since his time at MIT, and Antora’s ties to MIT are especially strong because its progress is the result of two MIT startups becoming one.

    Alumni join forces

    Bierman did his master’s and doctoral work in MIT’s Department of Mechanical Engineering, where he worked on solid-state solar thermal energy conversion systems. In 2016, while taking course 15.366 (Climate and Energy Ventures), he met Jordan Kearns SM ’17, then a graduate student in the Technology and Policy Program and the Department of Nuclear Science and Engineering. The two were studying renewable energy when they began to think about the intermittent nature of wind and solar as an opportunity rather than a problem.

    “There are already places in the U.S. where we have more wind and solar at times than we know what to do with,” Kearns says. “That is an opportunity for not only emissions reductions but also for reducing energy costs. What’s the application? I don’t think the overproduction of energy was being talked about as much as the intermittency problem.”

    Kearns did research through the MIT Energy Initiative and the researchers received support from MIT’s Venture Mentoring Service and the MIT Sandbox Innovation Fund to further explore ways to capitalize on fluctuating power prices.

    Kearns officially founded a company called Medley Thermal in 2017 to help companies that use natural gas switch to energy produced by renewables when the price was right. To accomplish that, he combined an off-the-shelf electric boiler with novel control software so the companies could switch energy sources seamlessly from fossil fuel to electricity at especially windy or sunny times. Medley went on to become a finalist for the MIT Clean Energy Prize, and Kearns wanted Bierman to join him as a co-founder, but Bierman had received a fellowship to commercialize a thermal energy storage solution and decided to pursue that after graduation.

    The split ended up working out for both alumni. In the ensuing years, Kearns led Medley Thermal through a number of projects in which gradually larger companies switched from relying on natural gas or propane sources to renewable electricity from the grid. The work culminated in an installation at the Jay Peak resort in Vermont that Kearns says is one of the largest projects in the U.S. using renewable energy to produce heat. The project is expected to avoid about 2,500 tons of carbon dioxide emissions per year.

    Bierman, meanwhile, further developed a thermal energy storage solution for industrial decarbonization, which works by using renewable electricity to heat blocks of carbon, which are stored in insulation to retain energy for long periods of time. The heat from those blocks can then be used to deliver electricity or heat to customers, at temperatures that can exceed 1,500 C. When Antora raised a $50 million Series A funding round last year, Bierman asked Kearns if he could buy out Medley’s team, and the researchers finally became co-workers.

    “Antora and Medley Thermal have a similar value prop: There’s low-cost electricity, and we want to connect that to the industrial sector,” Kearns explains. “But whereas Medley used renewables on an as-available basis, and then when the winds stop we went back to burning fossil fuel with a boiler, Antora has a thermal battery that takes in the electricity, converts it to heat, but also stores it as heat so even when the wind stops blowing we have a reservoir of heat that we can continue to pull from to make steam or power or whatever the facility needs. So, we can now further reduce energy costs by offsetting more fuel and offer a 100 percent clean energy solution.”

    United we scale

    Today, Kearns runs the project development arm of Antora.

    “There are other, much larger projects in the pipeline,” Kearns says. “The Jay Peak project is about 3 megawatts of power, but some of the ones we’re working on now are 30, 60 megawatt projects. Those are more industrial focused, and they’re located in places where we have a strong industrial base and an abundance of renewables, everywhere from Texas to Kansas to the Dakotas — that heart of the country that our team lovingly calls the Wind Belt.”

    Antora’s future projects will be with companies in the chemicals, mining, food and beverage, and oil and gas industries. Some of those projects are expected to come online as early as 2025.          

    The company’s scaling strategy is centered on the inexpensive production process for its batteries.

    “We constantly ask ourselves, ‘What is the best product we can make here?’” Bierman says. “We landed on a compact, containerized, modular system that gets shipped to sites and is easily integrated into industrial processes. It means we don’t have huge construction projects, timelines, and budget overruns. Instead, it’s all about scaling up the factory that builds these thermal batteries and just churning them out.”

    It was a winding journey for Kearns and Bierman, but they now believe they’re positioned to help huge companies become carbon-free while promoting the growth of the solar and wind industries.

    “The more I dig into this, the more shocked I am at how important a piece of the decarbonization puzzle this is today,” Bierman says. “The need has become super real since we first started talking about this in 2016. The economic opportunity has grown, but more importantly the awareness from industries that they need to decarbonize is totally different. Antora can help with that, so we’re scaling up as rapidly as possible to meet the demand we see in the market.”

  • in

    To improve solar and other clean energy tech, look beyond hardware

    To continue reducing the costs of solar energy and other clean energy technologies, scientists and engineers will likely need to focus, at least in part, on improving technology features that are not based on hardware, according to MIT researchers. They describe this finding and the mechanisms behind it today in Nature Energy.

    While the cost of installing a solar energy system has dropped by more than 99 percent since 1980, this new analysis shows that “soft technology” features, such as the codified permitting practices, supply chain management techniques, and system design processes that go into deploying a solar energy plant, contributed only 10 to 15 percent of total cost declines. Improvements to hardware features were responsible for the lion’s share.

    But because soft technology is increasingly dominating the total costs of installing solar energy systems, this trend threatens to slow future cost savings and hamper the global transition to clean energy, says the study’s senior author, Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society (IDSS).

    Trancik’s co-authors include lead author Magdalena M. Klemun, a former IDSS graduate student and postdoc who is now an assistant professor at the Hong Kong University of Science and Technology; Goksin Kavlak, a former IDSS graduate student and postdoc who is now an associate at the Brattle Group; and James McNerney, a former IDSS postdoc and now senior research fellow at the Harvard Kennedy School.

    The team created a quantitative model to analyze the cost evolution of solar energy systems, which captures the contributions of both hardware technology features and soft technology features.

    The framework shows that soft technology hasn’t improved much over time — and that soft technology features contributed even less to overall cost declines than previously estimated.

    Their findings indicate that to reverse this trend and accelerate cost declines, engineers could look at making solar energy systems less reliant on soft technology to begin with, or they could tackle the problem directly by improving inefficient deployment processes.  

    “Really understanding where the efficiencies and inefficiencies are, and how to address those inefficiencies, is critical in supporting the clean energy transition. We are making huge investments of public dollars into this, and soft technology is going to be absolutely essential to making those funds count,” says Trancik.

    “However,” Klemun adds, “we haven’t been thinking about soft technology design as systematically as we have for hardware. That needs to change.”

    The hard truth about soft costs

    Researchers have observed that the so-called “soft costs” of building a solar power plant — the costs of designing and installing the plant — are becoming a much larger share of total costs. In fact, the share of soft costs now typically ranges from 35 to 64 percent.

    “We wanted to take a closer look at where these soft costs were coming from and why they weren’t coming down over time as quickly as the hardware costs,” Trancik says.

    In the past, scientists have modeled the change in solar energy costs by dividing total costs into additive components — hardware components and nonhardware components — and then tracking how these components changed over time.

    “But if you really want to understand where those rates of change are coming from, you need to go one level deeper to look at the technology features. Then things split out differently,” Trancik says.

    The researchers developed a quantitative approach that models the change in solar energy costs over time by assigning contributions to the individual technology features, including both hardware features and soft technology features.

    For instance, their framework would capture how much of the decline in system installation costs — a soft cost — is due to standardized practices of certified installers — a soft technology feature. It would also capture how that same soft cost is affected by increased photovoltaic module efficiency — a hardware technology feature.

    With this approach, the researchers saw that improvements in hardware had the greatest impacts on driving down soft costs in solar energy systems. For example, the efficiency of photovoltaic modules doubled between 1980 and 2017, reducing overall system costs by 17 percent. But about 40 percent of that overall decline could be attributed to reductions in soft costs tied to improved module efficiency.
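    The kind of attribution described above can be sketched with a toy decomposition: model total cost as a function of technology features, then estimate each feature's contribution by changing that feature alone while holding the others fixed. The cost function and the numbers below are hypothetical, not the paper's model or data:

```python
# Minimal sketch of a feature-level cost decomposition (hypothetical model
# and numbers, not the Nature Energy paper's). Note how the soft cost term
# depends on a hardware feature (module efficiency), mirroring the article's
# point that hardware improvements also pull soft costs down.

def system_cost(module_eff: float, install_hours: float) -> float:
    """Toy system cost in $/W: hardware cost scales inversely with module
    efficiency; the soft (installation-labor) term also shrinks as efficiency
    rises, since fewer modules must be handled per watt."""
    hardware = 0.8 / module_eff                # hardware feature
    soft = 0.02 * install_hours / module_eff   # soft cost, efficiency-dependent
    return hardware + soft

old = dict(module_eff=0.09, install_hours=20)  # early-era values (illustrative)
new = dict(module_eff=0.18, install_hours=15)  # later values (illustrative)

total_decline = system_cost(**old) - system_cost(**new)
for feat in old:
    changed = dict(old, **{feat: new[feat]})   # vary one feature at a time
    contrib = system_cost(**old) - system_cost(**changed)
    print(f"{feat}: {contrib / total_decline:.0%} of decline (one-at-a-time)")
```

    One caveat of this one-at-a-time attribution: the contributions need not sum to exactly 100 percent when features interact, which is precisely the kind of cross-dependency the researchers' framework is built to capture.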

    The framework shows that, while hardware technology features tend to improve many cost components, soft technology features affect only a few.

    “You can see this structural difference even before you collect data on how the technologies have changed over time. That’s why mapping out a technology’s network of cost dependencies is a useful first step to identify levers of change, for solar PV and for other technologies as well,” Klemun notes.  

    Static soft technology

    The researchers used their model to study several countries, since soft costs can vary widely around the world. For instance, solar energy soft costs in Germany are about 50 percent less than those in the U.S.

    The fact that hardware technology improvements are often shared globally led to dramatic declines in costs over the past few decades across locations, the analysis showed. Soft technology innovations typically aren’t shared across borders. Moreover, the team found that countries with better soft technology performance 20 years ago still have better performance today, while those with worse performance didn’t see much improvement.

    This country-by-country difference could be driven by regulation and permitting processes, cultural factors, or by market dynamics such as how firms interact with each other, Trancik says.

    “But not all soft technology variables are ones that you would want to change in a cost-reducing direction, like lower wages. So, there are other considerations, beyond just bringing the cost of the technology down, that we need to think about when interpreting these results,” she says.

    Their analysis points to two strategies for reducing soft costs. For one, scientists could focus on developing hardware improvements that make soft costs more dependent on hardware technology variables and less on soft technology variables, such as by creating simpler, more standardized equipment that could reduce on-site installation time.

    Or researchers could directly target soft technology features without changing hardware, perhaps by creating more efficient workflows for system installation or automated permitting platforms.

    “In practice, engineers will often pursue both approaches, but separating the two in a formal model makes it easier to target innovation efforts by leveraging specific relationships between technology characteristics and costs,” Klemun says.

    “Often, when we think about information processing, we are leaving out processes that still happen in a very low-tech way through people communicating with one another. But it is just as important to think about that as a technology as it is to design fancy software,” Trancik notes.

    In the future, she and her collaborators want to apply their quantitative model to study the soft costs related to other technologies, such as electric vehicle charging and nuclear fission. They are also interested in better understanding the limits of soft technology improvement, and how one could design better soft technology from the outset.

    This research is funded by the U.S. Department of Energy Solar Energy Technologies Office. More