More stories

  • Reducing carbon emissions from residential heating: A pathway forward

    In the race to reduce climate-warming carbon emissions, the buildings sector is falling behind. While carbon dioxide (CO2) emissions in the U.S. electric power sector dropped by 34 percent between 2005 and 2021, emissions in the building sector declined by only 18 percent in that same time period. Moreover, in extremely cold locations, burning natural gas to heat houses can make up a substantial share of the emissions portfolio. Therefore, steps to electrify buildings in general, and residential heating in particular, are essential for decarbonizing the U.S. energy system.

But that change will increase demand for electricity and decrease demand for natural gas. What will be the net impact of those two changes on carbon emissions and on the cost of decarbonizing? And how will the electric power and natural gas sectors handle the new challenges involved in their long-term planning for future operations and infrastructure investments?

A new study by MIT researchers with support from the MIT Energy Initiative (MITEI) Future Energy Systems Center unravels the impacts of various levels of electrification of residential space heating on the joint power and natural gas systems. A specially devised modeling framework enabled them to estimate not only the added costs and emissions for the power sector to meet the new demand, but also any changes in costs and emissions that result for the natural gas sector.

The analyses brought some surprising outcomes. For example, they show that — under certain conditions — switching 80 percent of homes to heating by electricity could cut carbon emissions and at the same time significantly reduce costs over the combined natural gas and electric power sectors relative to the case in which there is only modest switching. That outcome depends on two changes: Consumers must install high-efficiency heat pumps and take steps to prevent heat losses from their homes, and planners in the power and the natural gas sectors must work together as they make long-term infrastructure and operations decisions. Based on their findings, the researchers stress the need for strong state, regional, and national policies that encourage and support the steps that homeowners and industry planners can take to help decarbonize today’s building sector.

A two-part modeling approach

To analyze the impacts of electrification of residential heating on costs and emissions in the combined power and gas sectors, a team of MIT experts in building technology, power systems modeling, optimization techniques, and more developed a two-part modeling framework. Team members included Rahman Khorramfar, a senior postdoc in MITEI and the Laboratory for Information and Decision Systems (LIDS); Morgan Santoni-Colvin SM ’23, a former MITEI graduate research assistant, now an associate at Energy and Environmental Economics, Inc.; Saurabh Amin, a professor in the Department of Civil and Environmental Engineering and principal investigator in LIDS; Audun Botterud, a principal research scientist in LIDS; Leslie Norford, a professor in the Department of Architecture; and Dharik Mallapragada, a former MITEI principal research scientist, now an assistant professor at New York University, who led the project. They describe their new methods and findings in a paper published in the journal Cell Reports Sustainability on Feb. 6.

The first model in the framework quantifies how various levels of electrification will change end-use demand for electricity and for natural gas, and the impacts of possible energy-saving measures that homeowners can take to help. “To perform that analysis, we built a ‘bottom-up’ model — meaning that it looks at electricity and gas consumption of individual buildings and then aggregates their consumption to get an overall demand for power and for gas,” explains Khorramfar. By assuming a wide range of building “archetypes” — that is, groupings of buildings with similar physical characteristics and properties — coupled with trends in population growth, the team could explore how demand for electricity and for natural gas would change under each of five assumed electrification pathways: “business as usual” with modest electrification, medium electrification (about 60 percent of homes are electrified), high electrification (about 80 percent of homes make the change), and medium and high electrification with “envelope improvements,” such as sealing up heat leaks and adding insulation.

The second part of the framework consists of a model that takes the demand results from the first model as inputs and “co-optimizes” the overall electricity and natural gas system to minimize annual investment and operating costs while adhering to any constraints, such as limits on emissions or on resource availability. The modeling framework thus enables the researchers to explore the impact of each electrification pathway on the infrastructure and operating costs of the two interacting sectors.

The New England case study: A challenge for electrification

As a case study, the researchers chose New England, a region where the weather is sometimes extremely cold and where burning natural gas to heat houses contributes significantly to overall emissions. “Critics will say that electrification is never going to happen [in New England]. It’s just too expensive,” comments Santoni-Colvin. But he notes that most studies focus on the electricity sector in isolation. The new framework considers the joint operation of the two sectors and then quantifies their respective costs and emissions. “We know that electrification will require large investments in the electricity infrastructure,” says Santoni-Colvin. “But what hasn’t been well quantified in the literature is the savings that we generate on the natural gas side by doing that — so, the system-level savings.”

Using their framework, the MIT team performed model runs aimed at an 80 percent reduction in building-sector emissions relative to 1990 levels — a target consistent with regional policy goals for 2050. The researchers defined parameters including details about building archetypes, the regional electric power system, existing and potential renewable generating systems, battery storage, availability of natural gas, and other key factors describing New England.

They then performed analyses assuming various scenarios with different mixes of home improvements. While most studies assume typical weather, they instead developed 20 projections of annual weather data based on historical weather patterns and adjusted for the effects of climate change through 2050. They then analyzed their five levels of electrification.

Relative to business-as-usual projections, results from the framework showed that high electrification of residential heating could more than double the demand for electricity during peak periods and increase overall electricity demand by close to 60 percent. Assuming that building-envelope improvements are deployed in parallel with electrification reduces the magnitude and weather sensitivity of peak loads and creates overall efficiency gains that reduce the combined demand for electricity plus natural gas for home heating by up to 30 percent relative to the present day. Notably, a combination of high electrification and envelope improvements resulted in the lowest average cost for the overall electric power-natural gas system in 2050.

Lessons learned

Replacing existing natural gas-burning furnaces and boilers with heat pumps reduces overall energy consumption. Santoni-Colvin calls it “something of an intuitive result” that could be expected because heat pumps are “just that much more efficient than old, fossil fuel-burning systems. But even so, we were surprised by the gains.”

Other unexpected results include the importance of homeowners making more traditional energy efficiency improvements, such as adding insulation and sealing air leaks — steps supported by recent rebate policies. Those changes are critical to reducing costs that would otherwise be incurred for upgrading the electricity grid to accommodate the increased demand. “You can’t just go wild dropping heat pumps into everybody’s houses if you’re not also considering other ways to reduce peak loads. So it really requires an ‘all of the above’ approach to get to the most cost-effective outcome,” says Santoni-Colvin.

Testing a range of weather outcomes also provided important insights. Demand for heating fuel is very weather-dependent, yet most studies are based on a limited set of weather data — often a “typical year.” The researchers found that electrification can lead to extended peak electric load events that can last for a few days during cold winters. Accordingly, the researchers conclude that there will be a continuing need for a “firm, dispatchable” source of electricity; that is, a power-generating system that can be relied on to produce power any time it’s needed — unlike solar and wind systems. As examples, they modeled some possible technologies, including power plants fired by a low-carbon fuel or by natural gas equipped with carbon capture equipment. But they point out that there’s no way of knowing what types of firm generators will be available in 2050. It could be a system that’s not yet mature, or perhaps doesn’t even exist today.

In presenting their findings, the researchers note several caveats. For one thing, their analyses don’t include the estimated cost to homeowners of installing heat pumps. While that cost is widely discussed and debated, that issue is outside the scope of their current project.

In addition, the study doesn’t specify what happens to existing natural gas pipelines. “Some homes are going to electrify and get off the gas system and not have to pay for it, leaving other homes with increasing rates because the gas system cost now has to be divided among fewer customers,” says Khorramfar. “That will inevitably raise equity questions that need to be addressed by policymakers.”

Finally, the researchers note that policies are needed to drive residential electrification. Current financial support for installation of heat pumps and steps to make homes more thermally efficient is a good start, but such incentives must be coupled with a new approach to planning energy infrastructure investments. Traditionally, electric power planning and natural gas planning are performed separately. However, to decarbonize residential heating, the two sectors should coordinate when planning future operations and infrastructure needs. Results from the MIT analysis indicate that such cooperation could significantly reduce both emissions and costs for residential heating — a change that would yield a much-needed step toward decarbonizing the buildings sector as a whole.
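To make the idea of co-optimizing the two sectors concrete, here is a deliberately small sketch. It is not the team's Cell Reports Sustainability model; the costs, emissions factors, and the emissions cap are invented placeholders, chosen only to show how a shared emissions constraint splits a fixed heating demand between electric heat pumps and gas furnaces when both sectors are planned together.

```python
# Toy joint power-gas "co-optimization" sketch (illustrative numbers only).
# Decision variables: x[0] = heat delivered via electric heat pumps (MWh_th),
#                     x[1] = heat delivered via natural gas furnaces (MWh_th).
from scipy.optimize import linprog

heat_demand = 1000.0      # MWh of heat that must be served (assumed)
cost = [60.0, 45.0]       # $/MWh_th delivered: heat pump vs. gas furnace (assumed)
emis = [0.05, 0.20]       # tCO2/MWh_th: grid electricity vs. gas combustion (assumed)
emis_cap = 120.0          # tCO2 allowed across both sectors (assumed)

# Minimize total cost subject to: serve all heat, stay under the emissions cap.
res = linprog(
    c=cost,
    A_ub=[emis], b_ub=[emis_cap],              # shared emissions constraint
    A_eq=[[1.0, 1.0]], b_eq=[heat_demand],     # heat balance
    bounds=[(0, None), (0, None)],
)

hp, gas = res.x
print(f"heat pumps: {hp:.0f} MWh_th, furnaces: {gas:.0f} MWh_th, "
      f"total cost: ${res.fun:,.0f}")
```

Tightening the emissions cap in this toy problem shifts more of the load onto heat pumps; the framework described above makes analogous trade-offs across hourly operations and multi-year infrastructure investments in both networks.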

  • Smart carbon dioxide removal yields economic and environmental benefits

    Last year the Earth exceeded 1.5 degrees Celsius of warming above preindustrial times, a threshold beyond which wildfires, droughts, floods, and other climate impacts are expected to escalate in frequency, intensity, and lethality. To cap global warming at 1.5 C and avert that scenario, the nearly 200 signatory nations of the Paris Agreement on climate change will need to not only dramatically lower their greenhouse gas emissions, but also take measures to remove carbon dioxide (CO2) from the atmosphere and durably store it at or below the Earth’s surface.Past analyses of the climate mitigation potential, costs, benefits, and drawbacks of different carbon dioxide removal (CDR) options have focused primarily on three strategies: bioenergy with carbon capture and storage (BECCS), in which CO2-absorbing plant matter is converted into fuels or directly burned to generate energy, with some of the plant’s carbon content captured and then stored safely and permanently; afforestation/reforestation, in which CO2-absorbing trees are planted in large numbers; and direct air carbon capture and storage (DACCS), a technology that captures and separates CO2 directly from ambient air, and injects it into geological reservoirs or incorporates it into durable products. To provide a more comprehensive and actionable analysis of CDR, a new study by researchers at the MIT Center for Sustainability Science and Strategy (CS3) first expands the option set to include biochar (charcoal produced from plant matter and stored in soil) and enhanced weathering (EW) (spreading finely ground rock particles on land to accelerate storage of CO2 in soil and water). The study then evaluates portfolios of all five options — in isolation and in combination — to assess their capability to meet the 1.5 C goal, and their potential impacts on land, energy, and policy costs.The study appears in the journal Environmental Research Letters. Aided by their global multi-region, multi-sector Economic Projection and Policy Analysis (EPPA) model, the MIT CS3 researchers produce three key findings.First, the most cost-effective, low-impact strategy that policymakers can take to achieve global net-zero emissions — an essential step in meeting the 1.5 C goal — is to diversify their CDR portfolio, rather than rely on any single option. This approach minimizes overall cropland and energy consumption, and negative impacts such as increased food insecurity and decreased energy supplies.By diversifying across multiple CDR options, the highest CDR deployment of around 31.5 gigatons of CO2 per year is achieved in 2100, while also proving the most cost-effective net-zero strategy. The study identifies BECCS and biochar as most cost-competitive in removing CO2 from the atmosphere, followed by EW, with DACCS as uncompetitive due to high capital and energy requirements. While posing logistical and other challenges, biochar and EW have the potential to improve soil quality and productivity across 45 percent of all croplands by 2100.“Diversifying CDR portfolios is the most cost-effective net-zero strategy because it avoids relying on a single CDR option, thereby reducing and redistributing negative impacts on agriculture, forestry, and other land uses, as well as on the energy sector,” says Solene Chiquier, lead author of the study who was a CS3 postdoc during its preparation.The second finding: There is no optimal CDR portfolio that will work well at global and national levels. 
The ideal CDR portfolio for a particular region will depend on local technological, economic, and geophysical conditions. For example, afforestation and reforestation would be of great benefit in places like Brazil, Latin America, and Africa, by not only sequestering carbon in more acreage of protected forest but also helping to preserve planetary well-being and human health.

“In designing a sustainable, cost-effective CDR portfolio, it is important to account for regional availability of agricultural, energy, and carbon-storage resources,” says Sergey Paltsev, CS3 deputy director, MIT Energy Initiative senior research scientist, and supervising co-author of the study. “Our study highlights the need for enhancing knowledge about local conditions that favor some CDR options over others.”

Finally, the MIT CS3 researchers show that delaying large-scale deployment of CDR portfolios could be very costly, leading to considerably higher carbon prices across the globe — a development sure to deter the climate mitigation efforts needed to achieve the 1.5 C goal. They recommend near-term implementation of policy and financial incentives to help fast-track those efforts.
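The merit-order intuition behind diversifying a CDR portfolio can be sketched in a few lines. The per-tonne costs and deployment caps below are invented placeholders rather than EPPA outputs; only the roughly 31.5 gigatons per year removal figure comes from the article.

```python
# Illustrative CDR portfolio allocation by merit order (made-up numbers).
# Each option: ($/tCO2 removed, maximum GtCO2/yr it could plausibly supply).
options = {
    "BECCS":   (80.0, 10.0),
    "biochar": (90.0,  6.0),
    "EW":      (150.0, 5.0),
    "A/R":     (50.0,  6.0),
    "DACCS":   (400.0, 8.0),
}
target_gt = 31.5  # GtCO2/yr removal figure cited in the article

remaining = target_gt
portfolio, total_cost = {}, 0.0
for name, (cost, cap) in sorted(options.items(), key=lambda kv: kv[1][0]):
    take = min(cap, remaining)          # fill from the cheapest option upward
    portfolio[name] = take
    total_cost += take * cost * 1e9     # Gt -> t
    remaining -= take
    if remaining <= 0:
        break

print(portfolio)
print(f"average cost: ${total_cost / (target_gt * 1e9):.0f}/tCO2")
```

Because no single option's cap reaches the target in this sketch, hitting it forces a mix, which is the portfolio logic the study quantifies in far more detail.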

  • Toward sustainable decarbonization of aviation in Latin America

    According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal to keep global warming below 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal to achieve net-zero carbon emissions by 2050. Which raises the question: Are there technologically feasible and economically viable strategies to reach that goal within the next 25 years?To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050 and thereby double today’s aviation-related emissions in the region.Chief among those options is the development and deployment of sustainable aviation fuel. Currently produced from low- and zero-carbon sources (feedstock) including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, sustainable aviation fuel (SAF) has the potential to perform just as well as petroleum-based jet fuel with as low as 20 percent of its carbon footprint.Focused on Brazil, Chile, Colombia, Ecuador, Mexico and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely impact fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region to reach decarbonization targets. The team’s findings appear in a CS3 Special Report.SAF emissions, costs, and sourcesUnder an ambitious emissions mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected aviation emissions to be reduced by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect operating costs of the aviation sector and overall aviation demand unless strategies to manage price increases are implemented.Under the 1.5 C scenario, the total cumulative capital investments required to build new SAF producing plants between 2025 and 2050 were estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil). 
The researchers identified sugarcane- and corn-based ethanol-to-jet fuel and palm oil- and soybean-based hydro-processed esters and fatty acids as the most promising feedstock sources in the near term for SAF production in Latin America.

“Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.

Recommendations

The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America.

They stressed that government policy and regulatory mechanisms will be needed to create sufficient conditions to attract SAF investments in the region and make SAF commercially viable as the aviation industry decarbonizes operations. Without appropriate policy frameworks, SAF requirements will affect the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build demand for establishing economies of scale, and develop innovative pathways for producing SAF.

Finally, the research team recommended a region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region will help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.

“Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. “SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”

Financial support for this study was provided by LATAM Airlines and Airbus.
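A rough sense of how SAF prices feed through to fuel costs can be had from a simple volume-weighted average, using the article's price figures and its 65 percent SAF share for 2050. The sketch ignores taxes, logistics, and policy incentives, so the numbers are only illustrative.

```python
# Back-of-envelope blended fuel price under a SAF blending share.
# Prices come from the article; the plain volume-weighted average is an
# assumption of this sketch, not a result from the CS3 Special Report.
def blended_price(saf_share, jet_price=0.70, saf_price=1.11):
    """Price per liter of a jet fuel / SAF blend."""
    return (1 - saf_share) * jet_price + saf_share * saf_price

for saf_price in (1.11, 2.86):           # low and high ends of projected SAF cost
    p = blended_price(0.65, saf_price=saf_price)
    print(f"SAF at ${saf_price:.2f}/L, 65% blend: ${p:.2f}/L "
          f"({(p / 0.70 - 1) * 100:.0f}% above today's ~$0.70/L)")
```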

  • The multifaceted challenge of powering AI

    Artificial intelligence has become vital in business and financial dealings, medical care, technology development, research, and much more. Without realizing it, consumers rely on AI when they stream a video, do online banking, or perform an online search. Behind these capabilities are more than 10,000 data centers globally, each one a huge warehouse containing thousands of computer servers and other infrastructure for storing, managing, and processing data. There are now over 5,000 data centers in the United States, and new ones are being built every day — in the U.S. and worldwide. Often dozens are clustered together right near where people live, attracted by policies that provide tax breaks and other incentives, and by what looks like abundant electricity.And data centers do consume huge amounts of electricity. U.S. data centers consumed more than 4 percent of the country’s total electricity in 2023, and by 2030 that fraction could rise to 9 percent, according to the Electric Power Research Institute. A single large data center can consume as much electricity as 50,000 homes.The sudden need for so many data centers presents a massive challenge to the technology and energy industries, government policymakers, and everyday consumers. Research scientists and faculty members at the MIT Energy Initiative (MITEI) are exploring multiple facets of this problem — from sourcing power to grid improvement to analytical tools that increase efficiency, and more. Data centers have quickly become the energy issue of our day.Unexpected demand brings unexpected solutionsSeveral companies that use data centers to provide cloud computing and data management services are announcing some surprising steps to deliver all that electricity. Proposals include building their own small nuclear plants near their data centers and even restarting one of the undamaged nuclear reactors at Three Mile Island, which has been shuttered since 2019. (A different reactor at that plant partially melted down in 1979, causing the nation’s worst nuclear power accident.) Already the need to power AI is causing delays in the planned shutdown of some coal-fired power plants and raising prices for residential consumers. Meeting the needs of data centers is not only stressing power grids, but also setting back the transition to clean energy needed to stop climate change.There are many aspects to the data center problem from a power perspective. Here are some that MIT researchers are focusing on, and why they’re important.An unprecedented surge in the demand for electricity“In the past, computing was not a significant user of electricity,” says William H. Green, director of MITEI and the Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering. “Electricity was used for running industrial processes and powering household devices such as air conditioners and lights, and more recently for powering heat pumps and charging electric cars. But now all of a sudden, electricity used for computing in general, and by data centers in particular, is becoming a gigantic new demand that no one anticipated.”Why the lack of foresight? Usually, demand for electric power increases by roughly half-a-percent per year, and utilities bring in new power generators and make other investments as needed to meet the expected new demand. But the data centers now coming online are creating unprecedented leaps in demand that operators didn’t see coming. In addition, the new demand is constant. 
It’s critical that a data center provides its services all day, every day. There can be no interruptions in processing large datasets, accessing stored data, and running the cooling equipment needed to keep all the packed-together computers churning away without overheating.Moreover, even if enough electricity is generated, getting it to where it’s needed may be a problem, explains Deepjyoti Deka, a MITEI research scientist. “A grid is a network-wide operation, and the grid operator may have sufficient generation at another location or even elsewhere in the country, but the wires may not have sufficient capacity to carry the electricity to where it’s wanted.” So transmission capacity must be expanded — and, says Deka, that’s a slow process.Then there’s the “interconnection queue.” Sometimes, adding either a new user (a “load”) or a new generator to an existing grid can cause instabilities or other problems for everyone else already on the grid. In that situation, bringing a new data center online may be delayed. Enough delays can result in new loads or generators having to stand in line and wait for their turn. Right now, much of the interconnection queue is already filled up with new solar and wind projects. The delay is now about five years. Meeting the demand from newly installed data centers while ensuring that the quality of service elsewhere is not hampered is a problem that needs to be addressed.Finding clean electricity sourcesTo further complicate the challenge, many companies — including so-called “hyperscalers” such as Google, Microsoft, and Amazon — have made public commitments to having net-zero carbon emissions within the next 10 years. Many have been making strides toward achieving their clean-energy goals by buying “power purchase agreements.” They sign a contract to buy electricity from, say, a solar or wind facility, sometimes providing funding for the facility to be built. But that approach to accessing clean energy has its limits when faced with the extreme electricity demand of a data center.Meanwhile, soaring power consumption is delaying coal plant closures in many states. There are simply not enough sources of renewable energy to serve both the hyperscalers and the existing users, including individual consumers. As a result, conventional plants fired by fossil fuels such as coal are needed more than ever.As the hyperscalers look for sources of clean energy for their data centers, one option could be to build their own wind and solar installations. But such facilities would generate electricity only intermittently. Given the need for uninterrupted power, the data center would have to maintain energy storage units, which are expensive. They could instead rely on natural gas or diesel generators for backup power — but those devices would need to be coupled with equipment to capture the carbon emissions, plus a nearby site for permanently disposing of the captured carbon.Because of such complications, several of the hyperscalers are turning to nuclear power. As Green notes, “Nuclear energy is well matched to the demand of data centers, because nuclear plants can generate lots of power reliably, without interruption.”In a much-publicized move in September, Microsoft signed a deal to buy power for 20 years after Constellation Energy reopens one of the undamaged reactors at its now-shuttered nuclear plant at Three Mile Island, the site of the much-publicized nuclear accident in 1979. 
If approved by regulators, Constellation will bring that reactor online by 2028, with Microsoft buying all of the power it produces. Amazon also reached a deal to purchase power produced by another nuclear plant threatened with closure due to financial troubles. And in early December, Meta released a request for proposals to identify nuclear energy developers to help the company meet their AI needs and their sustainability goals.Other nuclear news focuses on small modular nuclear reactors (SMRs), factory-built, modular power plants that could be installed near data centers, potentially without the cost overruns and delays often experienced in building large plants. Google recently ordered a fleet of SMRs to generate the power needed by its data centers. The first one will be completed by 2030 and the remainder by 2035.Some hyperscalers are betting on new technologies. For example, Google is pursuing next-generation geothermal projects, and Microsoft has signed a contract to purchase electricity from a startup’s fusion power plant beginning in 2028 — even though the fusion technology hasn’t yet been demonstrated.Reducing electricity demandOther approaches to providing sufficient clean electricity focus on making the data center and the operations it houses more energy efficient so as to perform the same computing tasks using less power. Using faster computer chips and optimizing algorithms that use less energy are already helping to reduce the load, and also the heat generated.Another idea being tried involves shifting computing tasks to times and places where carbon-free energy is available on the grid. Deka explains: “If a task doesn’t have to be completed immediately, but rather by a certain deadline, can it be delayed or moved to a data center elsewhere in the U.S. or overseas where electricity is more abundant, cheaper, and/or cleaner? This approach is known as ‘carbon-aware computing.’” We’re not yet sure whether every task can be moved or delayed easily, says Deka. “If you think of a generative AI-based task, can it easily be separated into small tasks that can be taken to different parts of the country, solved using clean energy, and then be brought back together? What is the cost of doing this kind of division of tasks?”That approach is, of course, limited by the problem of the interconnection queue. It’s difficult to access clean energy in another region or state. But efforts are under way to ease the regulatory framework to make sure that critical interconnections can be developed more quickly and easily.What about the neighbors?A major concern running through all the options for powering data centers is the impact on residential energy consumers. When a data center comes into a neighborhood, there are not only aesthetic concerns but also more practical worries. Will the local electricity service become less reliable? Where will the new transmission lines be located? And who will pay for the new generators, upgrades to existing equipment, and so on? When new manufacturing facilities or industrial plants go into a neighborhood, the downsides are generally offset by the availability of new jobs. Not so with a data center, which may require just a couple dozen employees.There are standard rules about how maintenance and upgrade costs are shared and allocated. But the situation is totally changed by the presence of a new data center. 
As a result, utilities now need to rethink their traditional rate structures so as not to place an undue burden on residents to pay for the infrastructure changes needed to host data centers.

MIT’s contributions

At MIT, researchers are thinking about and exploring a range of options for tackling the problem of providing clean power to data centers. For example, they are investigating architectural designs that will use natural ventilation to facilitate cooling, equipment layouts that will permit better airflow and power distribution, and highly energy-efficient air conditioning systems based on novel materials. They are creating new analytical tools for evaluating the impact of data center deployments on the U.S. power system and for finding the most efficient ways to provide the facilities with clean energy. Other work looks at how to match the output of small nuclear reactors to the needs of a data center, and how to speed up the construction of such reactors.

MIT teams also focus on determining the best sources of backup power and long-duration storage, and on developing decision support systems for locating proposed new data centers, taking into account the availability of electric power and water and also regulatory considerations, and even the potential for using what can be significant waste heat, for example, for heating nearby buildings. Technology development projects include designing faster, more efficient computer chips and more energy-efficient computing algorithms.

In addition to providing leadership and funding for many research projects, MITEI is acting as a convenor, bringing together companies and stakeholders to address this issue. At MITEI’s 2024 Annual Research Conference, a panel of representatives from two hyperscalers and two companies that design and construct data centers together discussed their challenges, possible solutions, and where MIT research could be most beneficial.

As data centers continue to be built, and computing continues to create an unprecedented increase in demand for electricity, Green says, scientists and engineers are in a race to provide the ideas, innovations, and technologies that can meet this need, and at the same time continue to advance the transition to a decarbonized energy system.
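The "carbon-aware computing" idea that Deka describes, shifting deferrable jobs to cleaner hours or regions, can be sketched as a simple lookup over forecast grid carbon intensities. The regions, intensity values, and three-hour deadline below are invented; a real scheduler would pull live forecasts and respect data-transfer, latency, and interconnection constraints.

```python
# Sketch of carbon-aware scheduling for a deferrable batch job: pick the
# region/hour with the lowest forecast grid carbon intensity before a deadline.
forecasts = {  # gCO2/kWh by region, for the next 4 hours (assumed values)
    "us-east": [450, 430, 410, 420],
    "us-west": [320, 300, 280, 350],
    "nordics": [60,  80,  90,  70],
}
deadline_hours = 3  # the job must start within the next 3 hours (assumed)

best = min(
    ((region, hour, curve[hour])
     for region, curve in forecasts.items()
     for hour in range(deadline_hours)),
    key=lambda choice: choice[2],      # minimize carbon intensity
)
print(f"run in {best[0]}, {best[1]} h from now (~{best[2]} gCO2/kWh)")
```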

  • The role of modeling in the energy transition

    Joseph F. DeCarolis, administrator for the U.S. Energy Information Administration (EIA), has one overarching piece of advice for anyone poring over long-term energy projections.

“Whatever you do, don’t start believing the numbers,” DeCarolis said at the MIT Energy Initiative (MITEI) Fall Colloquium. “There’s a tendency when you sit in front of the computer and you’re watching the model spit out numbers at you … that you’ll really start to believe those numbers with high precision. Don’t fall for it. Always remain skeptical.”

This event was part of MITEI’s new speaker series, MITEI Presents: Advancing the Energy Transition, which connects the MIT community with the energy experts and leaders who are working on scientific, technological, and policy solutions that are urgently needed to accelerate the energy transition.

The point of DeCarolis’s talk, titled “Stay humble and prepare for surprises: Lessons for the energy transition,” was not that energy models are unimportant. On the contrary, DeCarolis said, energy models give stakeholders a framework that allows them to consider present-day decisions in the context of potential future scenarios. However, he repeatedly stressed the importance of accounting for uncertainty, and not treating these projections as “crystal balls.”

“We can use models to help inform decision strategies,” DeCarolis said. “We know there’s a bunch of future uncertainty. We don’t know what’s going to happen, but we can incorporate that uncertainty into our model and help come up with a path forward.”

Dialogue, not forecasts

EIA is the statistical and analytic agency within the U.S. Department of Energy, with a mission to collect, analyze, and disseminate independent and impartial energy information to help stakeholders make better-informed decisions. Although EIA analyzes the impacts of energy policies, the agency does not make or advise on policy itself. DeCarolis, who was previously professor and University Faculty Scholar in the Department of Civil, Construction, and Environmental Engineering at North Carolina State University, noted that EIA does not need to seek approval from anyone else in the federal government before publishing its data and reports. “That independence is very important to us, because it means that we can focus on doing our work and providing the best information we possibly can,” he said.

Among the many reports produced by EIA is the agency’s Annual Energy Outlook (AEO), which projects U.S. energy production, consumption, and prices. Every other year, the agency also produces the AEO Retrospective, which shows the relationship between past projections and actual energy indicators.

“The first question you might ask is, ‘Should we use these models to produce a forecast?’” DeCarolis said. “The answer for me to that question is: No, we should not do that. When models are used to produce forecasts, the results are generally pretty dismal.”

DeCarolis pointed to wildly inaccurate past projections about the proliferation of nuclear energy in the United States as an example of the problems inherent in forecasting. However, he noted, there are “still lots of really valuable uses” for energy models. Rather than using them to predict future energy consumption and prices, DeCarolis said, stakeholders should use models to inform their own thinking.

“[Models] can simply be an aid in helping us think and hypothesize about the future of energy,” DeCarolis said. “They can help us create a dialogue among different stakeholders on complex issues.
If we’re thinking about something like the energy transition, and we want to start a dialogue, there has to be some basis for that dialogue. If you have a systematic representation of the energy system that you can advance into the future, we can start to have a debate about the model and what it means. We can also identify key sources of uncertainty and knowledge gaps.”Modeling uncertaintyThe key to working with energy models is not to try to eliminate uncertainty, DeCarolis said, but rather to account for it. One way to better understand uncertainty, he noted, is to look at past projections, and consider how they ended up differing from real-world results. DeCarolis pointed to two “surprises” over the past several decades: the exponential growth of shale oil and natural gas production (which had the impact of limiting coal’s share of the energy market and therefore reducing carbon emissions), as well as the rapid rise in wind and solar energy. In both cases, market conditions changed far more quickly than energy modelers anticipated, leading to inaccurate projections.“For all those reasons, we ended up with [projected] CO2 [carbon dioxide] emissions that were quite high compared to actual,” DeCarolis said. “We’re a statistical agency, so we’re really looking carefully at the data, but it can take some time to identify the signal through the noise.”Although EIA does not produce forecasts in the AEO, people have sometimes interpreted the reference case in the agency’s reports as predictions. In an effort to illustrate the unpredictability of future outcomes in the 2023 edition of the AEO, the agency added “cones of uncertainty” to its projection of energy-related carbon dioxide emissions, with ranges of outcomes based on the difference between past projections and actual results. One cone captures 50 percent of historical projection errors, while another represents 95 percent of historical errors.“They capture whatever bias there is in our projections,” DeCarolis said of the uncertainty cones. “It’s being captured because we’re comparing actual [emissions] to projections. The weakness of this, though, is: who’s to say that those historical projection errors apply to the future? We don’t know that, but I still think that there’s something useful to be learned from this exercise.”The future of energy modelingLooking ahead, DeCarolis said, there is a “laundry list of things that keep me up at night as a modeler.” These include the impacts of climate change; how those impacts will affect demand for renewable energy; how quickly industry and government will overcome obstacles to building out clean energy infrastructure and supply chains; technological innovation; and increased energy demand from data centers running compute-intensive workloads.“What about enhanced geothermal? Fusion? Space-based solar power?” DeCarolis asked. “Should those be in the model? What sorts of technology breakthroughs are we missing? And then, of course, there are the unknown unknowns — the things that I can’t conceive of to put on this list, but are probably going to happen.”In addition to capturing the fullest range of outcomes, DeCarolis said, EIA wants to be flexible, nimble, transparent, and accessible — creating reports that can easily incorporate new model features and produce timely analyses. To that end, the agency has undertaken two new initiatives. 
First, the 2025 AEO will use a revamped version of the National Energy Modeling System that includes modules for hydrogen production and pricing, carbon management, and hydrocarbon supply. Second, an effort called Project BlueSky is aiming to develop the agency’s next-generation energy system model, which DeCarolis said will be modular and open source.

DeCarolis noted that the energy system is both highly complex and rapidly evolving, and he warned that “mental shortcuts” and the fear of being wrong can lead modelers to ignore possible future developments. “We have to remain humble and intellectually honest about what we know,” DeCarolis said. “That way, we can provide decision-makers with an honest assessment of what we think could happen in the future.”
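The "cones of uncertainty" concept can be illustrated with a small calculation: take the distribution of past projection errors and wrap the corresponding percentile ranges around a new projection. The error values and the projection below are invented, and the AEO's actual procedure is more involved; the sketch only shows the mechanic.

```python
# Sketch of building uncertainty "cones" around a projection from the
# distribution of historical projection errors (invented numbers).
import numpy as np

# Past relative errors: (actual - projected) / projected, from prior outlooks.
hist_errors = np.array([-0.18, -0.12, -0.09, -0.03, 0.02, 0.04, 0.07, 0.11])

projection = 4500.0  # e.g., projected CO2 emissions in some future year (MMt, assumed)

for coverage in (50, 95):
    lo, hi = np.percentile(hist_errors, [(100 - coverage) / 2,
                                         100 - (100 - coverage) / 2])
    print(f"{coverage}% cone: {projection * (1 + lo):.0f} "
          f"to {projection * (1 + hi):.0f} MMt")
```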

  • How hard is it to prevent recurring blackouts in Puerto Rico?

    Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.The researchers point out that the system they suggest might have prevented or at least lessened the kind of widespread power outage that Puerto Rico experienced last week by providing analysis to guide rerouting of power through different lines and thus limit the spread of the outage.The computer platform, which the researchers describe as DyMonDS, for Dynamic Monitoring and Decision Systems, can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary. It further lends itself to implementing cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.This general approach was illustrated using a set of publicly-available data on both meteorology and details of electricity production and distribution in Puerto Rico. An extended AC Optimal Power Flow software developed by SmartGridz Inc. is used for system-level optimization of controllable equipment. This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and in extreme cases, where to reduce or cut power in order to maintain physically-implementable service for as many customers as possible. 
The team found that the use of such a system can help to ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants thanks to more efficient use of existing resources.The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.“Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings resulting from this optimization, under everyday conditions, could be in the tens of percents, they say.The way utility systems plan currently, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayaguez in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly.With the new systems, “the software finds the optimal adjustments for set points,” for example, changing voltages can allow for power to be redirected through less-congested lines, or can be increased to lessen power losses, Anton says.The software also helps in the long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning for how to replace that power without having to resort to greenhouse gas-emitting sources is a key to achieving carbon-reduction goals. 
And by analyzing usage patterns, the software can guide the placement of new renewable power sources where they can most efficiently provide power where and when it’s needed.

As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.”

One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.

Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”

The project was a collaborative effort between the MIT LIDS researchers and others at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with overall help from SmartGridz software.
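The gap between today's N-minus-1 standard and the N-minus-K question Ilic raises can be illustrated with a toy screening exercise: enumerate every combination of K failed corridors into a load center and ask how much demand can still be served. The corridor names, ratings, and demand below are invented, and this capacity bookkeeping is far simpler than DyMonDS or the AC optimal power flow used in the study.

```python
# Toy N-minus-K screening over transmission corridors serving one load center.
from itertools import combinations

corridors = {"south-1": 800, "south-2": 600, "east-1": 400, "west-1": 300}  # MW ratings (assumed)
demand = 1500  # MW at the load center (assumed)

def worst_case_served(k):
    """Least load servable after the worst combination of k corridor outages."""
    served = []
    for out in combinations(corridors, k):
        capacity = sum(mw for name, mw in corridors.items() if name not in out)
        served.append(min(demand, capacity))
    return min(served)

for k in range(0, 3):
    mw = worst_case_served(k)
    print(f"N-{k}: worst case serves {mw} MW ({mw / demand:.0%} of demand)")
```

Even this crude count shows how quickly served load falls as K grows, which is why software that chooses set points and reroutes power under multiple simultaneous failures matters.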

  • New climate chemistry model finds “non-negligible” impacts of potential hydrogen fuel leakage

    As the world looks for ways to stop climate change, much discussion focuses on using hydrogen instead of fossil fuels, which emit climate-warming greenhouse gases (GHGs) when they’re burned. The idea is appealing. Burning hydrogen doesn’t emit GHGs to the atmosphere, and hydrogen is well-suited for a variety of uses, notably as a replacement for natural gas in industrial processes, power generation, and home heating.But while burning hydrogen won’t emit GHGs, any hydrogen that’s leaked from pipelines or storage or fueling facilities can indirectly cause climate change by affecting other compounds that are GHGs, including tropospheric ozone and methane, with methane impacts being the dominant effect. A much-cited 2022 modeling study analyzing hydrogen’s effects on chemical compounds in the atmosphere concluded that these climate impacts could be considerable. With funding from the MIT Energy Initiative’s Future Energy Systems Center, a team of MIT researchers took a more detailed look at the specific chemistry that poses the risks of using hydrogen as a fuel if it leaks.The researchers developed a model that tracks many more chemical reactions that may be affected by hydrogen and includes interactions among chemicals. Their open-access results, published Oct. 28 in Frontiers in Energy Research, showed that while the impact of leaked hydrogen on the climate wouldn’t be as large as the 2022 study predicted — and that it would be about a third of the impact of any natural gas that escapes today — leaked hydrogen will impact the climate. Leak prevention should therefore be a top priority as the hydrogen infrastructure is built, state the researchers.Hydrogen’s impact on the “detergent” that cleans our atmosphereGlobal three-dimensional climate-chemistry models using a large number of chemical reactions have also been used to evaluate hydrogen’s potential climate impacts, but results vary from one model to another, motivating the MIT study to analyze the chemistry. Most studies of the climate effects of using hydrogen consider only the GHGs that are emitted during the production of the hydrogen fuel. Different approaches may make “blue hydrogen” or “green hydrogen,” a label that relates to the GHGs emitted. Regardless of the process used to make the hydrogen, the fuel itself can threaten the climate. For widespread use, hydrogen will need to be transported, distributed, and stored — in short, there will be many opportunities for leakage. The question is, What happens to that leaked hydrogen when it reaches the atmosphere? The 2022 study predicting large climate impacts from leaked hydrogen was based on reactions between pairs of just four chemical compounds in the atmosphere. The results showed that the hydrogen would deplete a chemical species that atmospheric chemists call the “detergent of the atmosphere,” explains Candice Chen, a PhD candidate in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “It goes around zapping greenhouse gases, pollutants, all sorts of bad things in the atmosphere. So it’s cleaning our air.” Best of all, that detergent — the hydroxyl radical, abbreviated as OH — removes methane, which is an extremely potent GHG in the atmosphere. OH thus plays an important role in slowing the rate at which global temperatures rise. 
But any hydrogen leaked to the atmosphere would reduce the amount of OH available to clean up methane, so the concentration of methane would increase.

However, chemical reactions among compounds in the atmosphere are notoriously complicated. While the 2022 study used a “four-equation model,” Chen and her colleagues — Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry; and Kane Stone, a research scientist in EAPS — developed a model that includes 66 chemical reactions. Analyses using their 66-equation model showed that the four-equation system didn’t capture a critical feedback involving OH — a feedback that acts to protect the methane-removal process.

Here’s how that feedback works: As the hydrogen decreases the concentration of OH, the cleanup of methane slows down, so the methane concentration increases. However, that methane undergoes chemical reactions that can produce new OH radicals. “So the methane that’s being produced can make more of the OH detergent,” says Chen. “There’s a small countering effect. Indirectly, the methane helps produce the thing that’s getting rid of it.” And, says Chen, that’s a key difference between their 66-equation model and the four-equation one. “The simple model uses a constant value for the production of OH, so it misses that key OH-production feedback,” she says.

To explore the importance of including that feedback effect, the MIT researchers performed the following analysis: They assumed that a single pulse of hydrogen was injected into the atmosphere and predicted the change in methane concentration over the next 100 years, first using the four-equation model and then using the 66-equation model. With the four-equation system, the additional methane concentration peaked at nearly 2 parts per billion (ppb); with the 66-equation system, it peaked at just over 1 ppb.

Because the four-equation analysis assumes only that the injected hydrogen destroys the OH, the methane concentration increases unchecked for the first 10 years or so. In contrast, the 66-equation analysis goes one step further: the methane concentration does increase, but as the system re-equilibrates, more OH forms and removes methane. By not accounting for that feedback, the four-equation analysis overestimates the peak increase in methane due to the hydrogen pulse by about 85 percent. Spread over time, the simple model doubles the amount of methane that forms in response to the hydrogen pulse.

Chen cautions that the point of their work is not to present their result as “a solid estimate” of the impact of hydrogen. Their analysis is based on a simple “box” model that represents global average conditions and assumes that all the chemical species present are well mixed. Thus, the species can vary over time — that is, they can be formed and destroyed — but any species that are present are always perfectly mixed. As a result, a box model does not account for the impact of, say, wind on the distribution of species. “The point we’re trying to make is that you can go too simple,” says Chen. “If you’re going simpler than what we’re representing, you will get further from the right answer.” She goes on to note, “The utility of a relatively simple model like ours is that all of the knobs and levers are very clear.
That means you can explore the system and see what affects a value of interest.”Leaked hydrogen versus leaked natural gas: A climate comparisonBurning natural gas produces fewer GHG emissions than does burning coal or oil; but as with hydrogen, any natural gas that’s leaked from wells, pipelines, and processing facilities can have climate impacts, negating some of the perceived benefits of using natural gas in place of other fossil fuels. After all, natural gas consists largely of methane, the highly potent GHG in the atmosphere that’s cleaned up by the OH detergent. Given its potency, even small leaks of methane can have a large climate impact.So when thinking about replacing natural gas fuel — essentially methane — with hydrogen fuel, it’s important to consider how the climate impacts of the two fuels compare if and when they’re leaked. The usual way to compare the climate impacts of two chemicals is using a measure called the global warming potential, or GWP. The GWP combines two measures: the radiative forcing of a gas — that is, its heat-trapping ability — with its lifetime in the atmosphere. Since the lifetimes of gases differ widely, to compare the climate impacts of two gases, the convention is to relate the GWP of each one to the GWP of carbon dioxide. But hydrogen and methane leakage cause increases in methane, and that methane decays according to its lifetime. Chen and her colleagues therefore realized that an unconventional procedure would work: they could compare the impacts of the two leaked gases directly. What they found was that the climate impact of hydrogen is about three times less than that of methane (on a per mass basis). So switching from natural gas to hydrogen would not only eliminate combustion emissions, but also potentially reduce the climate effects, depending on how much leaks.Key takeawaysIn summary, Chen highlights some of what she views as the key findings of the study. First on her list is the following: “We show that a really simple four-equation system is not what should be used to project out the atmospheric response to more hydrogen leakages in the future.” The researchers believe that their 66-equation model is a good compromise for the number of chemical reactions to include. It generates estimates for the GWP of methane “pretty much in line with the lower end of the numbers that most other groups are getting using much more sophisticated climate chemistry models,” says Chen. And it’s sufficiently transparent to use in exploring various options for protecting the climate. Indeed, the MIT researchers plan to use their model to examine scenarios that involve replacing other fossil fuels with hydrogen to estimate the climate benefits of making the switch in coming decades.The study also demonstrates a valuable new way to compare the greenhouse effects of two gases. As long as their effects exist on similar time scales, a direct comparison is possible — and preferable to comparing each with carbon dioxide, which is extremely long-lived in the atmosphere. In this work, the direct comparison generates a simple look at the relative climate impacts of leaked hydrogen and leaked methane — valuable information to take into account when considering switching from natural gas to hydrogen.Finally, the researchers offer practical guidance for infrastructure development and use for both hydrogen and natural gas. Their analyses determine that hydrogen fuel itself has a “non-negligible” GWP, as does natural gas, which is mostly methane. 
Therefore, minimizing leakage of both fuels will be necessary to achieve net-zero carbon emissions by 2050, the goal set by both the European Commission and the U.S. Department of State. Their paper concludes, “If used nearly leak-free, hydrogen is an excellent option. Otherwise, hydrogen should only be a temporary step in the energy transition, or it must be used in tandem with carbon-removal steps [elsewhere] to counter its warming effects.”
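The OH feedback at the center of the study can be caricatured with a tiny box model run twice: once with a fixed OH source (the behavior of the simple four-equation treatment) and once with an OH source that grows with methane (the feedback the 66-equation model captures). Every rate constant and the size of the hydrogen pulse below are invented, so the sketch reproduces only the qualitative behavior, a lower methane peak when the feedback is included, not the paper's numbers.

```python
# Qualitative box-model caricature of the OH-production feedback.
import numpy as np

def run(feedback, years=100, dt=0.01):
    # Perturbations in arbitrary units; a hydrogen pulse (h2 = 1) enters at t = 0.
    ch4, oh, h2 = 0.0, 1.0, 1.0
    history = []
    for _ in np.arange(0, years, dt):
        p_oh = 0.5 + (0.2 * ch4 if feedback else 0.0)  # OH source (+ CH4 feedback)
        d_oh = p_oh - 0.5 * oh - 0.3 * h2              # OH lost to H2 and other sinks
        d_ch4 = 0.1 * (1.0 - oh) - 0.05 * ch4          # less OH -> slower CH4 removal
        d_h2 = -0.2 * h2                               # the H2 pulse decays away
        oh, ch4, h2 = oh + dt * d_oh, ch4 + dt * d_ch4, h2 + dt * d_h2
        history.append(ch4)
    return np.array(history)

simple, full = run(feedback=False), run(feedback=True)
print(f"peak CH4 perturbation, no feedback:   {simple.max():.2f}")
print(f"peak CH4 perturbation, with feedback: {full.max():.2f}")
```

Running it shows the feedback case peaking lower than the fixed-OH case, mirroring the direction, though not the magnitude, of the 66-equation result.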

  • Is there enough land on Earth to fight climate change and feed the world?

    Capping global warming at 1.5 degrees Celsius is a tall order. Achieving that goal will not only require a massive reduction in greenhouse gas emissions from human activities, but also a substantial reallocation of land to support that effort and sustain the biosphere, including humans. More land will be needed to accommodate a growing demand for bioenergy and nature-based carbon sequestration while ensuring sufficient acreage for food production and ecological sustainability.The expanding role of land in a 1.5 C world will be twofold — to remove carbon dioxide from the atmosphere and to produce clean energy. Land-based carbon dioxide removal strategies include bioenergy with carbon capture and storage; direct air capture; and afforestation/reforestation and other nature-based solutions. Land-based clean energy production includes wind and solar farms and sustainable bioenergy cropland. Any decision to allocate more land for climate mitigation must also address competing needs for long-term food security and ecosystem health.Land-based climate mitigation choices vary in terms of costs — amount of land required, implications for food security, impact on biodiversity and other ecosystem services — and benefits — potential for sequestering greenhouse gases and producing clean energy.Now a study in the journal Frontiers in Environmental Science provides the most comprehensive analysis to date of competing land-use and technology options to limit global warming to 1.5 C. Led by researchers at the MIT Center for Sustainability Science and Strategy (CS3), the study applies the MIT Integrated Global System Modeling (IGSM) framework to evaluate costs and benefits of different land-based climate mitigation options in Sky2050, a 1.5 C climate-stabilization scenario developed by Shell.Under this scenario, demand for bioenergy and natural carbon sinks increase along with the need for sustainable farming and food production. To determine if there’s enough land to meet all these growing demands, the research team uses the global hectare (gha) — an area of 10,000 square meters, or 2.471 acres — as the standard unit of measurement, and current estimates of the Earth’s total habitable land area (about 10 gha) and land area used for food production and bioenergy (5 gha).The team finds that with transformative changes in policy, land management practices, and consumption patterns, global land is sufficient to provide a sustainable supply of food and ecosystem services throughout this century while also reducing greenhouse gas emissions in alignment with the 1.5 C goal. These transformative changes include policies to protect natural ecosystems; stop deforestation and accelerate reforestation and afforestation; promote advances in sustainable agriculture technology and practice; reduce agricultural and food waste; and incentivize consumers to purchase sustainably produced goods.If such changes are implemented, 2.5–3.5 gha of land would be used for NBS practices to sequester 3–6 gigatonnes (Gt) of CO2 per year, and 0.4–0.6 gha of land would be allocated for energy production — 0.2–0.3 gha for bioenergy and 0.2–0.35 gha for wind and solar power generation.“Our scenario shows that there is enough land to support a 1.5 degree C future as long as effective policies at national and global levels are in place,” says CS3 Principal Research Scientist Angelo Gurgel, the study’s lead author. 
“These policies must not only promote efficient use of land for food, energy, and nature, but also be supported by long-term commitments from government and industry decision-makers.”
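The land budget quoted above can be tallied directly from the article's figures. The sum below ignores overlaps between categories and says nothing about land quality or location, which the study itself tracks in detail; it simply checks that the upper-end allocations fit within the roughly 10 global hectares of habitable land.

```python
# Simple bookkeeping of the land-budget figures quoted above (global hectares).
habitable_gha = 10.0
allocations = {
    "food and bioenergy today": 5.0,
    "nature-based solutions":   (2.5, 3.5),
    "bioenergy cropland":       (0.2, 0.3),
    "wind and solar":           (0.2, 0.35),
}

low = sum(v if isinstance(v, float) else v[0] for v in allocations.values())
high = sum(v if isinstance(v, float) else v[1] for v in allocations.values())
print(f"allocated: {low:.1f}-{high:.1f} gha of {habitable_gha:.0f} gha habitable")
print(f"remaining at the high end: {habitable_gha - high:.2f} gha")
```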