More stories

  • How can India decarbonize its coal-dependent electric power system?

    As the world struggles to reduce climate-warming carbon emissions, India has pledged to do its part, and its success is critical: In 2023, India was the third-largest carbon emitter worldwide. The Indian government has committed to having net-zero carbon emissions by 2070.

    To fulfill that promise, India will need to decarbonize its electric power system, and that will be a challenge: Fully 60 percent of India’s electricity comes from coal-burning power plants that are extremely inefficient. To make matters worse, the demand for electricity in India is projected to more than double in the coming decade due to population growth and increased use of air conditioning, electric cars, and so on.

    Despite having set an ambitious target, the Indian government has not proposed a plan for getting there. Indeed, as in other countries, in India the government continues to permit new coal-fired power plants to be built, and aging plants to be renovated and their retirement postponed.

    To help India define an effective — and realistic — plan for decarbonizing its power system, key questions must be addressed. For example, India is already rapidly developing carbon-free solar and wind power generators. What opportunities remain for further deployment of renewable generation? Are there ways to retrofit or repurpose India’s existing coal plants that can substantially and affordably reduce their greenhouse gas emissions? And do the responses to those questions differ by region?

    With funding from IHI Corp. through the MIT Energy Initiative (MITEI), Yifu Ding, a postdoc at MITEI, and her colleagues set out to answer those questions by first using machine learning to determine the efficiency of each of India’s current 806 coal plants, and then investigating the impacts that different decarbonization approaches would have on the mix of power plants and the price of electricity in 2035 under increasingly stringent caps on emissions.

    First step: Develop the needed dataset

    An important challenge in developing a decarbonization plan for India has been the lack of a complete dataset describing the current power plants in India. While other studies have generated plans, they haven’t taken into account the wide variation in the coal-fired power plants in different regions of the country. “So, we first needed to create a dataset covering and characterizing all of the operating coal plants in India. Such a dataset was not available in the existing literature,” says Ding.

    Making a cost-effective plan for expanding the capacity of a power system requires knowing the efficiencies of all the power plants operating in the system. For this study, the researchers used as their metric the “station heat rate,” a standard measurement of the overall fuel efficiency of a given power plant. The station heat rate of each plant is needed in order to calculate the fuel consumption and power output of that plant as plans for capacity expansion are being developed.

    Some of the Indian coal plants’ efficiencies were recorded before 2022, so Ding and her team used machine-learning models to predict the efficiencies of all the Indian coal plants operating now. In 2024, they created and posted online the first comprehensive, open-source dataset for all 806 power plants in 30 regions of India. The work won the 2024 MIT Open Data Prize.
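    To give a concrete flavor of that machine-learning step, here is a minimal sketch of one way to fit such a predictor. Everything in it (the feature set, the gradient-boosted model, and the synthetic data) is an assumption of this sketch, not the team’s actual pipeline or dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500  # synthetic "plants"

# Illustrative features: capacity (MW), age (years), load factor (0-1),
# boiler type (0 = subcritical, 1 = supercritical).
X = np.column_stack([
    rng.uniform(100, 1000, n),
    rng.uniform(0, 40, n),
    rng.uniform(0.3, 0.9, n),
    rng.integers(0, 2, n),
])
# Synthetic target: station heat rate (kJ/kWh), worsening with age and
# improving with supercritical boilers and higher load factors.
y = 10500 + 25 * X[:, 1] - 800 * X[:, 3] - 1500 * X[:, 2] + rng.normal(0, 200, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print(f"held-out R^2: {model.score(X_test, y_test):.2f}")
```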
    This dataset includes each plant’s power capacity, efficiency, age, load factor (a measure indicating how much of the time it operates), water stress, and more.

    In addition, they categorized each plant according to its boiler design. A “supercritical” plant operates at a relatively high temperature and pressure, which makes it thermodynamically efficient, so it produces a lot of electricity for each unit of heat in the fuel. A “subcritical” plant runs at a lower temperature and pressure, so it’s less thermodynamically efficient. Most of the Indian coal plants are still subcritical plants running at low efficiency.

    Next step: Investigate decarbonization options

    Equipped with their detailed dataset covering all the coal power plants in India, the researchers were ready to investigate options for responding to tightening limits on carbon emissions. For that analysis, they turned to GenX, a modeling platform that was developed at MITEI to help guide decision-makers as they make investments and other plans for the future of their power systems.

    Ding built a GenX model based on India’s power system in 2020, including details about each power plant and transmission network across 30 regions of the country. She also entered the coal price, potential resources for wind and solar power installations, and other attributes of each region. Based on the parameters given, the GenX model would calculate the lowest-cost combination of equipment and operating conditions that can fulfill a defined future level of demand while also meeting specified policy constraints, including limits on carbon emissions. The model and all data sources were also released as open-source tools for anyone to use.

    Ding and her colleagues — Dharik Mallapragada, a former principal research scientist at MITEI who is now an assistant professor of chemical and biomolecular engineering at NYU Tandon School of Engineering and a MITEI visiting scientist; and Robert J. Stoner, the founding director of the MIT Tata Center for Technology and Design and former deputy director of MITEI for science and technology — then used the model to explore options for meeting demands in 2035 under progressively tighter carbon emissions caps, taking into account region-to-region variations in the efficiencies of the coal plants, the price of coal, and other factors. They describe their methods and their findings in a paper published in the journal Energy for Sustainable Development.

    In separate runs, they explored plans involving various combinations of current coal plants, possible new renewable plants, and more, to see their outcome in 2035. Specifically, they assumed the following four “grid-evolution scenarios”:

    Baseline: The baseline scenario assumes limited onshore wind and solar photovoltaics development and excludes retrofitting options, representing a business-as-usual pathway.

    High renewable capacity: This scenario calls for the development of onshore wind and solar power without any supply chain constraints.

    Biomass co-firing: This scenario assumes the baseline limits on renewables, but here all coal plants — both subcritical and supercritical — can be retrofitted for “co-firing” with biomass, an approach in which clean-burning biomass replaces some of the coal fuel. Certain coal power plants in India already co-fire coal and biomass, so the technology is known.

    Carbon capture and sequestration plus biomass co-firing: This scenario is based on the same assumptions as the biomass co-firing scenario with one addition: All of the high-efficiency supercritical plants are also retrofitted for carbon capture and sequestration (CCS), a technology that captures and removes carbon from a power plant’s exhaust stream and prepares it for permanent disposal. Thus far, CCS has not been used in India. This study specifies that 90 percent of all carbon in the power plant exhaust is captured.

    Ding and her team investigated power system planning under each of those grid-evolution scenarios and four assumptions about carbon caps: no cap, which is the current situation; 1,000 million tons (Mt) of carbon dioxide (CO2) emissions, which reflects India’s announced targets for 2035; and two more-ambitious targets, namely 800 Mt and 500 Mt. For context, CO2 emissions from India’s power sector totaled about 1,100 Mt in 2021. (Note that transmission network expansion is allowed in all scenarios.)

    Key findings

    Assuming the adoption of carbon caps under the four scenarios generated a vast array of detailed numerical results. But taken together, the results show interesting trends in the cost-optimal mix of generating capacity and the cost of electricity under the different scenarios.

    Even without any limits on carbon emissions, most new capacity additions will be wind and solar generators — the lowest-cost option for expanding India’s electricity-generation capacity. Indeed, this is observed to be the case now in India. However, the increasing demand for electricity will still require some new coal plants to be built. Model results show a 10 to 20 percent increase in coal plant capacity by 2035 relative to 2020.

    Under the baseline scenario, renewables are expanded up to the maximum allowed under the assumptions, implying that more deployment would be economical. More coal capacity is built, and as the cap on emissions tightens, there is also investment in natural gas power plants, as well as batteries to help compensate for the now-large amount of intermittent solar and wind generation. When a 500 Mt cap on carbon is imposed, the cost of electricity generation is twice as high as it was with no cap.

    The high renewable capacity scenario reduces the development of new coal capacity and produces the lowest electricity cost of the four scenarios. Under the most stringent cap — 500 Mt — onshore wind farms play an important role in bringing the cost down. “Otherwise, it’ll be very expensive to reach such stringent carbon constraints,” notes Ding. “Certain coal plants that remain run only a few hours per year, so are inefficient as well as financially unviable. But they still need to be there to support wind and solar.” She explains that other backup sources of electricity, such as batteries, are even more costly.

    The biomass co-firing scenario assumes the same capacity limit on renewables as in the baseline scenario, and the results are much the same, in part because the biomass replaces such a low fraction — just 20 percent — of the coal in the fuel feedstock. “This scenario would be most similar to the current situation in India,” says Ding. “It won’t bring down the cost of electricity, so we’re basically saying that adding this technology doesn’t contribute effectively to decarbonization.”

    But CCS plus biomass co-firing is a different story.
    It also assumes the limits on renewables development, yet it is the second-best option in terms of reducing costs. Under the 500 Mt cap on CO2 emissions, retrofitting for both CCS and biomass co-firing produces a 22 percent reduction in the cost of electricity compared to the baseline scenario. In addition, as the carbon cap tightens, this option reduces the extent of deployment of natural gas plants and significantly improves overall coal plant utilization. That increased utilization “means that coal plants have switched from just meeting the peak demand to supplying part of the baseline load, which will lower the cost of coal generation,” explains Ding.

    Some concerns

    While those trends are enlightening, the analyses also uncovered some concerns for India to consider, in particular, with the two approaches that yielded the lowest electricity costs.

    The high renewables scenario is, Ding notes, “very ideal.” It assumes that there will be little limiting the development of wind and solar capacity, so there won’t be any issues with supply chains, which is unrealistic. More importantly, the analyses showed that implementing the high renewables approach would create uneven investment in renewables across the 30 regions. Resources for onshore and offshore wind farms are mainly concentrated in a few regions in western and southern India. “So all the wind farms would be put in those regions, near where the rich cities are,” says Ding. “The poorer cities on the eastern side, where the coal power plants are, will have little renewable investment.”

    So the approach that’s best in terms of cost is not best in terms of social welfare, because it tends to benefit the rich regions more than the poor ones. “It’s like [the government will] need to consider the trade-off between energy justice and cost,” says Ding. Enacting state-level renewable generation targets could encourage a more even distribution of renewable capacity installation. Also, as transmission expansion is planned, coordination among power system operators and renewable energy investors in different regions could help in achieving the best outcome.

    CCS plus biomass co-firing — the second-best option for reducing prices — solves the equity problem posed by high renewables, and it assumes a more realistic level of renewable power adoption. However, CCS hasn’t been used in India, so there is no precedent in terms of costs. The researchers therefore based their cost estimates on the cost of CCS in China and then increased the required investment by 10 percent, the “first-of-a-kind” index developed by the U.S. Energy Information Administration. Based on those costs and other assumptions, the researchers conclude that coal plants with CCS could come into use by 2035 when the carbon cap for power generation is less than 1,000 Mt.

    But will CCS actually be implemented in India? While there’s been discussion about using CCS in heavy industry, the Indian government has not announced any plans for implementing the technology in coal-fired power plants. Indeed, India is currently “very conservative about CCS,” says Ding. “Some researchers say CCS won’t happen because it’s so expensive, and as long as there’s no direct use for the captured carbon, the only thing you can do is put it in the ground.” She adds, “It’s really controversial to talk about whether CCS will be implemented in India in the next 10 years.”

    Ding and her colleagues hope that other researchers and policymakers — especially those working in developing countries — may benefit from gaining access to their datasets and learning about their methods. Based on their findings for India, she stresses the importance of understanding the detailed geographical situation in a country in order to design plans and policies that are both realistic and equitable.
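    For readers curious what a “lowest-cost combination of equipment and operating conditions” looks like in miniature, the sketch below poses a toy capacity-expansion problem as a linear program: two technologies, two representative hours, and an emissions cap. The numbers are hypothetical and the formulation is vastly simpler than GenX’s.

```python
from scipy.optimize import linprog

# Decision variables:
# [coal_cap, solar_cap, coal_gen_h1, coal_gen_h2, solar_gen_h1, solar_gen_h2]
demand = [100.0, 80.0]      # MW in each representative hour
cap_cost = [70.0, 40.0]     # annualized $/MW of coal, solar capacity
fuel_cost = 30.0            # $/MWh for coal; solar fuel is free
emis_rate = 0.9             # tCO2 per MWh of coal generation
emis_cap = 60.0             # tCO2 allowed across both hours
solar_avail = [1.0, 0.2]    # fraction of solar capacity usable each hour

c = cap_cost + [fuel_cost, fuel_cost, 0.0, 0.0]  # minimize total cost

A_ub, b_ub = [], []
for h in range(2):
    coal_row = [-1.0, 0.0, 0.0, 0.0, 0.0, 0.0]
    coal_row[2 + h] = 1.0               # coal_gen_h <= coal_cap
    solar_row = [0.0, -solar_avail[h], 0.0, 0.0, 0.0, 0.0]
    solar_row[4 + h] = 1.0              # solar_gen_h <= avail * solar_cap
    A_ub += [coal_row, solar_row]
    b_ub += [0.0, 0.0]
A_ub.append([0.0, 0.0, emis_rate, emis_rate, 0.0, 0.0])  # emissions cap
b_ub.append(emis_cap)

# Generation must meet demand exactly in each hour.
A_eq = [[0, 0, 1, 0, 1, 0], [0, 0, 0, 1, 0, 1]]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=demand)
print(f"build {res.x[0]:.0f} MW coal and {res.x[1]:.0f} MW solar; "
      f"total cost ${res.fun:,.0f}")
```

    Tightening emis_cap in this toy forces more solar capacity into the solution and raises total cost, the same qualitative trade-off the study reports under its 800 Mt and 500 Mt caps.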

  • Hundred-year storm tides will occur every few decades in Bangladesh, scientists report

    Tropical cyclones are hurricanes that brew over the tropical ocean and can travel over land, inundating coastal regions. The most extreme cyclones can generate devastating storm tides — seawater that is heightened by the tides and swells onto land, causing catastrophic flood events in coastal regions. A new study by MIT scientists finds that, as the planet warms, the recurrence of destructive storm tides will increase tenfold for one of the hardest-hit regions of the world.

    In a study appearing today in One Earth, the scientists report that, for the highly populated coastal country of Bangladesh, what was once a 100-year event could now strike every 10 years — or more often — by the end of the century. In a future where fossil fuels continue to burn as they do today, what was once considered a catastrophic, once-in-a-century storm tide will hit Bangladesh, on average, once per decade. And the kind of storm tides that have occurred every decade or so will likely batter the country’s coast more frequently, every few years.

    Bangladesh is one of the most densely populated countries in the world, with more than 171 million people living in a region roughly the size of New York state. The country has been historically vulnerable to tropical cyclones, as it is a low-lying delta that is easily flooded by storms and experiences a seasonal monsoon. Some of the most destructive floods in the world have occurred in Bangladesh, where it’s been increasingly difficult for agricultural economies to recover.

    The study also finds that Bangladesh will likely experience tropical cyclones that overlap with the months-long monsoon season. Until now, cyclones and the monsoon have occurred at separate times during the year. But as the planet warms, the scientists’ modeling shows that cyclones will push into the monsoon season, causing back-to-back flooding events across the country.

    “Bangladesh is very active in preparing for climate hazards and risks, but the problem is, everything they’re doing is more or less based on what they’re seeing in the present climate,” says study co-author Sai Ravela, principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “We are now seeing an almost tenfold rise in the recurrence of destructive storm tides almost anywhere you look in Bangladesh. This cannot be ignored. So, we think this is timely, to say they have to pause and revisit how they protect against these storms.”

    Ravela’s co-authors are Jiangchao Qiu, a postdoc in EAPS, and Kerry Emanuel, professor emeritus of atmospheric science at MIT.

    Height of tides

    In recent years, Bangladesh has invested significantly in storm preparedness, for instance in improving its early-warning system, fortifying village embankments, and increasing access to community shelters. But such preparations have generally been based on the current frequency of storms.

    In this new study, the MIT team aimed to provide detailed projections of extreme storm tide hazards, which are flooding events where tidal effects amplify cyclone-induced storm surge, in Bangladesh under various climate-warming scenarios and sea-level rise projections.

    “A lot of these events happen at night, so tides play a really strong role in how much additional water you might get, depending on what the tide is,” Ravela explains.

    To evaluate the risk of storm tide, the team first applied a method of physics-based downscaling, which Emanuel’s group first developed over 20 years ago and has been using since to study hurricane activity in different parts of the world. The technique involves a low-resolution model of the global ocean and atmosphere that is embedded with a finer-resolution model that simulates weather patterns as detailed as a single hurricane. The researchers then scatter hurricane “seeds” in a region of interest and run the model forward to observe which seeds grow and make landfall over time.

    To the downscaled model, the researchers incorporated a hydrodynamical model, which simulates the height of a storm surge, given the pattern and strength of winds at the time of a given storm. For any given simulated storm, the team also tracked the tides, as well as effects of sea level rise, and incorporated this information into a numerical model that calculated the storm tide, or the height of the water, with tidal effects as a storm makes landfall.

    Extreme overlap

    With this framework, the scientists simulated tens of thousands of potential tropical cyclones near Bangladesh, under several future climate scenarios, ranging from one that resembles the current day to one in which the world experiences further warming as a result of continued fossil fuel burning. For each simulation, they recorded the maximum storm tides along the coast of Bangladesh and noted the frequency of storm tides of various heights in a given climate scenario.

    “We can look at the entire bucket of simulations and see, for this storm tide of say, 3 meters, we saw this many storms, and from that you can figure out the relative frequency of that kind of storm,” Qiu says. “You can then invert that number to a return period.”

    A return period is the time it takes for a storm of a particular type to make landfall again. A storm that is considered a “100-year event” is typically more powerful and destructive, and in this case, creates more extreme storm tides, and therefore more catastrophic flooding, compared to a 10-year event.

    From their modeling, Ravela and his colleagues found that under a scenario of increased global warming, the storms that previously were considered 100-year events, producing the highest storm tide values, can recur every decade or less by late-century. They also observed that, toward the end of this century, tropical cyclones in Bangladesh will occur across a broader seasonal window, potentially overlapping in certain years with the seasonal monsoon.

    “If the monsoon rain has come in and saturated the soil, a cyclone then comes in and it makes the problem much worse,” Ravela says. “People won’t have any reprieve between the extreme storm and the monsoon. There are so many compound and cascading effects between the two. And this only emerges because warming happens.”

    Ravela and his colleagues are using their modeling to help experts in Bangladesh better evaluate and prepare for a future of increasing storm risk. And he says that the climate future for Bangladesh is in some ways not unique to this part of the world.

    “This climate change story that is playing out in Bangladesh in a certain way will be playing out in a different way elsewhere,” Ravela notes. “Maybe where you are, the story is about heat stress, or amplifying droughts, or wildfires. The peril is different. But the underlying catastrophe story is not that different.”

    This research is supported in part by the MIT Climate Resilience Early Warning Systems Climate Grand Challenges project; the Jameel Observatory JO-CREWSNet project; the MIT Weather and Climate Extremes Climate Grand Challenges project; and Schmidt Sciences, LLC.
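    To make the return-period inversion Qiu describes concrete, here is a minimal sketch: count how often simulated annual-maximum storm tides exceed a given height, and invert that frequency. The synthetic tide heights are placeholders, not the study’s simulations.

```python
import numpy as np

rng = np.random.default_rng(1)
years_simulated = 10_000
# Hypothetical maximum storm tide (meters) for each simulated year; a Gumbel
# distribution is a common stand-in for annual maxima.
annual_max_tide = rng.gumbel(loc=1.5, scale=0.6, size=years_simulated)

def return_period(threshold_m: float) -> float:
    """Average number of years between storm tides exceeding threshold_m."""
    exceedance_frequency = np.mean(annual_max_tide > threshold_m)  # per year
    if exceedance_frequency == 0:
        return float("inf")
    return 1.0 / exceedance_frequency

for height in (3.0, 4.0):
    print(f"{height} m storm tide: roughly a {return_period(height):.0f}-year event")
```

    In this framing, the study’s headline result corresponds to the same threshold height moving from a roughly 100-year return period in today’s climate to a roughly 10-year return period in a warmer one.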

  • Reducing carbon emissions from residential heating: A pathway forward

    In the race to reduce climate-warming carbon emissions, the buildings sector is falling behind. While carbon dioxide (CO2) emissions in the U.S. electric power sector dropped by 34 percent between 2005 and 2021, emissions in the building sector declined by only 18 percent in that same time period. Moreover, in extremely cold locations, burning natural gas to heat houses can make up a substantial share of the emissions portfolio. Therefore, steps to electrify buildings in general, and residential heating in particular, are essential for decarbonizing the U.S. energy system.

    But that change will increase demand for electricity and decrease demand for natural gas. What will be the net impact of those two changes on carbon emissions and on the cost of decarbonizing? And how will the electric power and natural gas sectors handle the new challenges involved in their long-term planning for future operations and infrastructure investments?

    A new study by MIT researchers with support from the MIT Energy Initiative (MITEI) Future Energy Systems Center unravels the impacts of various levels of electrification of residential space heating on the joint power and natural gas systems. A specially devised modeling framework enabled them to estimate not only the added costs and emissions for the power sector to meet the new demand, but also any changes in costs and emissions that result for the natural gas sector.

    The analyses brought some surprising outcomes. For example, they show that — under certain conditions — switching 80 percent of homes to heating by electricity could cut carbon emissions and at the same time significantly reduce costs over the combined natural gas and electric power sectors relative to the case in which there is only modest switching. That outcome depends on two changes: Consumers must install high-efficiency heat pumps plus take steps to prevent heat losses from their homes, and planners in the power and the natural gas sectors must work together as they make long-term infrastructure and operations decisions. Based on their findings, the researchers stress the need for strong state, regional, and national policies that encourage and support the steps that homeowners and industry planners can take to help decarbonize today’s building sector.

    A two-part modeling approach

    To analyze the impacts of electrification of residential heating on costs and emissions in the combined power and gas sectors, a team of MIT experts in building technology, power systems modeling, optimization techniques, and more developed a two-part modeling framework. Team members included Rahman Khorramfar, a senior postdoc in MITEI and the Laboratory for Information and Decision Systems (LIDS); Morgan Santoni-Colvin SM ’23, a former MITEI graduate research assistant, now an associate at Energy and Environmental Economics, Inc.; Saurabh Amin, a professor in the Department of Civil and Environmental Engineering and principal investigator in LIDS; Audun Botterud, a principal research scientist in LIDS; Leslie Norford, a professor in the Department of Architecture; and Dharik Mallapragada, a former MITEI principal research scientist, now an assistant professor at New York University, who led the project. They describe their new methods and findings in a paper published in the journal Cell Reports Sustainability on Feb. 6.

    The first model in the framework quantifies how various levels of electrification will change end-use demand for electricity and for natural gas, and the impacts of possible energy-saving measures that homeowners can take to help. “To perform that analysis, we built a ‘bottom-up’ model — meaning that it looks at electricity and gas consumption of individual buildings and then aggregates their consumption to get an overall demand for power and for gas,” explains Khorramfar. By assuming a wide range of building “archetypes” — that is, groupings of buildings with similar physical characteristics and properties — coupled with trends in population growth, the team could explore how demand for electricity and for natural gas would change under each of five assumed electrification pathways: “business as usual” with modest electrification, medium electrification (about 60 percent of homes are electrified), high electrification (about 80 percent of homes make the change), and medium and high electrification with “envelope improvements,” such as sealing up heat leaks and adding insulation.

    The second part of the framework consists of a model that takes the demand results from the first model as inputs and “co-optimizes” the overall electricity and natural gas system to minimize annual investment and operating costs while adhering to any constraints, such as limits on emissions or on resource availability. The modeling framework thus enables the researchers to explore the impact of each electrification pathway on the infrastructure and operating costs of the two interacting sectors.

    The New England case study: A challenge for electrification

    As a case study, the researchers chose New England, a region where the weather is sometimes extremely cold and where burning natural gas to heat houses contributes significantly to overall emissions. “Critics will say that electrification is never going to happen [in New England]. It’s just too expensive,” comments Santoni-Colvin. But he notes that most studies focus on the electricity sector in isolation. The new framework considers the joint operation of the two sectors and then quantifies their respective costs and emissions. “We know that electrification will require large investments in the electricity infrastructure,” says Santoni-Colvin. “But what hasn’t been well quantified in the literature is the savings that we generate on the natural gas side by doing that — so, the system-level savings.”

    Using their framework, the MIT team performed model runs aimed at an 80 percent reduction in building-sector emissions relative to 1990 levels — a target consistent with regional policy goals for 2050. The researchers defined parameters including details about building archetypes, the regional electric power system, existing and potential renewable generating systems, battery storage, availability of natural gas, and other key factors describing New England.

    They then performed analyses assuming various scenarios with different mixes of home improvements. While most studies assume typical weather, they instead developed 20 projections of annual weather data based on historical weather patterns and adjusted for the effects of climate change through 2050. They then analyzed their five levels of electrification.

    Relative to business-as-usual projections, results from the framework showed that high electrification of residential heating could more than double the demand for electricity during peak periods and increase overall electricity demand by close to 60 percent.
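    As a rough illustration of the “bottom-up” aggregation Khorramfar describes, the sketch below scales per-building hourly loads by archetype counts and an assumed electrification share, then sums them into a system-level profile. The archetypes and all numbers are hypothetical, not the study’s inputs.

```python
import numpy as np

HOURS = 24  # one illustrative cold winter day

# Per-building hourly electricity demand (kW) for three made-up archetypes.
shape = np.sin(np.linspace(0.0, np.pi, HOURS))  # crude daily load shape
archetype_load_kw = {
    "old_single_family": 2.0 + 3.0 * shape,
    "new_single_family": 1.5 + 1.8 * shape,
    "apartment_unit": 0.8 + 0.9 * shape,
}
building_counts = {
    "old_single_family": 120_000,
    "new_single_family": 80_000,
    "apartment_unit": 300_000,
}
heat_pump_share = 0.8     # "high electrification" pathway
heat_pump_extra_kw = 4.0  # added load per electrified building in deep cold

system_load_mw = np.zeros(HOURS)
for name, profile in archetype_load_kw.items():
    n = building_counts[name]
    electrified = n * heat_pump_share
    # Aggregate: baseline load for all buildings plus heat-pump load for the
    # electrified fraction, converted from kW to MW.
    system_load_mw += (n * profile + electrified * heat_pump_extra_kw) / 1000.0

print(f"peak system load: {system_load_mw.max():,.0f} MW")
```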
    Assuming that building-envelope improvements are deployed in parallel with electrification reduces the magnitude and weather sensitivity of peak loads and creates overall efficiency gains that reduce the combined demand for electricity plus natural gas for home heating by up to 30 percent relative to the present day. Notably, a combination of high electrification and envelope improvements resulted in the lowest average cost for the overall electric power-natural gas system in 2050.

    Lessons learned

    Replacing existing natural gas-burning furnaces and boilers with heat pumps reduces overall energy consumption. Santoni-Colvin calls it “something of an intuitive result” that could be expected because heat pumps are “just that much more efficient than old, fossil fuel-burning systems. But even so, we were surprised by the gains.”

    Other unexpected results include the importance of homeowners making more traditional energy efficiency improvements, such as adding insulation and sealing air leaks — steps supported by recent rebate policies. Those changes are critical to reducing costs that would otherwise be incurred for upgrading the electricity grid to accommodate the increased demand. “You can’t just go wild dropping heat pumps into everybody’s houses if you’re not also considering other ways to reduce peak loads. So it really requires an ‘all of the above’ approach to get to the most cost-effective outcome,” says Santoni-Colvin.

    Testing a range of weather outcomes also provided important insights. Demand for heating fuel is very weather-dependent, yet most studies are based on a limited set of weather data — often a “typical year.” The researchers found that electrification can lead to extended peak electric load events that can last for a few days during cold winters. Accordingly, the researchers conclude that there will be a continuing need for a “firm, dispatchable” source of electricity; that is, a power-generating system that can be relied on to produce power any time it’s needed — unlike solar and wind systems. As examples, they modeled some possible technologies, including power plants fired by a low-carbon fuel or by natural gas equipped with carbon capture equipment. But they point out that there’s no way of knowing what types of firm generators will be available in 2050. It could be a system that’s not yet mature, or perhaps doesn’t even exist today.

    In presenting their findings, the researchers note several caveats. For one thing, their analyses don’t include the estimated cost to homeowners of installing heat pumps. While that cost is widely discussed and debated, that issue is outside the scope of their current project.

    In addition, the study doesn’t specify what happens to existing natural gas pipelines. “Some homes are going to electrify and get off the gas system and not have to pay for it, leaving other homes with increasing rates because the gas system cost now has to be divided among fewer customers,” says Khorramfar. “That will inevitably raise equity questions that need to be addressed by policymakers.”

    Finally, the researchers note that policies are needed to drive residential electrification. Current financial support for installation of heat pumps and steps to make homes more thermally efficient are a good start. But such incentives must be coupled with a new approach to planning energy infrastructure investments. Traditionally, electric power planning and natural gas planning are performed separately. However, to decarbonize residential heating, the two sectors should coordinate when planning future operations and infrastructure needs. Results from the MIT analysis indicate that such cooperation could significantly reduce both emissions and costs for residential heating — a change that would yield a much-needed step toward decarbonizing the buildings sector as a whole.

  • Unlocking the secrets of fusion’s core with AI-enhanced simulations

    Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.

    Howard is a member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. MFE-IM and Howard’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure their accuracy, these models are continuously validated using data from previous experiments, keeping their simulations grounded in reality.

    In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on. He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.

    The biggest and best of what’s never been built

    Forty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power, and a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly more powerful simulations like the modeling framework Howard used.

    In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by Howard’s collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations on how plasma behaves at different locations within a fusion device.

    The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms. This predict-first approach allows us to create more efficient plasmas in a device like ITER.”

    After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.” Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.

    “Just dropped in to see what condition my condition was in”

    Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”

    The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It only took three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power. The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t overly affected by less power input; less power input equals more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.

    Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.”
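    The refine-and-revalidate loop described above can be sketched generically: fit a cheap surrogate to a handful of expensive runs, test it against fresh ones, and add samples where it disagrees most. The stand-in function, model, and tolerance below are assumptions of this sketch, not CGYRO or PORTALS internals.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def expensive_sim(x: np.ndarray) -> np.ndarray:
    """Stand-in for a costly high-fidelity run (not a real turbulence code)."""
    return np.sin(3 * x).ravel() + 0.5 * x.ravel() ** 2

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, (5, 1))   # initial expensive design points
y = expensive_sim(X)

for iteration in range(20):
    surrogate = GaussianProcessRegressor().fit(X, y)
    # Validate the cheap model against fresh expensive points.
    X_check = rng.uniform(-1, 1, (20, 1))
    error = np.abs(surrogate.predict(X_check) - expensive_sim(X_check))
    if error.max() < 0.05:       # surrogate now tracks the full model
        break
    # Refine: add the point where the surrogate disagrees most.
    worst = X_check[[error.argmax()]]
    X = np.vstack([X, worst])
    y = np.append(y, expensive_sim(worst))

print(f"stopped after {iteration + 1} rounds with {len(X)} expensive samples")
```

    The payoff mirrors what the article describes: once the surrogate passes validation, new operating conditions can be screened at a tiny fraction of the cost of the full simulations.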

  • Smart carbon dioxide removal yields economic and environmental benefits

    Last year the Earth exceeded 1.5 degrees Celsius of warming above preindustrial times, a threshold beyond which wildfires, droughts, floods, and other climate impacts are expected to escalate in frequency, intensity, and lethality. To cap global warming at 1.5 C and avert that scenario, the nearly 200 signatory nations of the Paris Agreement on climate change will need to not only dramatically lower their greenhouse gas emissions, but also take measures to remove carbon dioxide (CO2) from the atmosphere and durably store it at or below the Earth’s surface.

    Past analyses of the climate mitigation potential, costs, benefits, and drawbacks of different carbon dioxide removal (CDR) options have focused primarily on three strategies: bioenergy with carbon capture and storage (BECCS), in which CO2-absorbing plant matter is converted into fuels or directly burned to generate energy, with some of the plant’s carbon content captured and then stored safely and permanently; afforestation/reforestation, in which CO2-absorbing trees are planted in large numbers; and direct air carbon capture and storage (DACCS), a technology that captures and separates CO2 directly from ambient air, and injects it into geological reservoirs or incorporates it into durable products.

    To provide a more comprehensive and actionable analysis of CDR, a new study by researchers at the MIT Center for Sustainability Science and Strategy (CS3) first expands the option set to include biochar (charcoal produced from plant matter and stored in soil) and enhanced weathering, or EW (spreading finely ground rock particles on land to accelerate storage of CO2 in soil and water). The study then evaluates portfolios of all five options — in isolation and in combination — to assess their capability to meet the 1.5 C goal, and their potential impacts on land, energy, and policy costs.

    The study appears in the journal Environmental Research Letters. Aided by their global multi-region, multi-sector Economic Projection and Policy Analysis (EPPA) model, the MIT CS3 researchers produce three key findings.

    First, the most cost-effective, low-impact strategy that policymakers can take to achieve global net-zero emissions — an essential step in meeting the 1.5 C goal — is to diversify their CDR portfolio, rather than rely on any single option. This approach minimizes overall cropland and energy consumption, and negative impacts such as increased food insecurity and decreased energy supplies.

    By diversifying across multiple CDR options, the highest CDR deployment of around 31.5 gigatons of CO2 per year is achieved in 2100, while also proving the most cost-effective net-zero strategy. The study identifies BECCS and biochar as most cost-competitive in removing CO2 from the atmosphere, followed by EW, with DACCS as uncompetitive due to high capital and energy requirements. While posing logistical and other challenges, biochar and EW have the potential to improve soil quality and productivity across 45 percent of all croplands by 2100.

    “Diversifying CDR portfolios is the most cost-effective net-zero strategy because it avoids relying on a single CDR option, thereby reducing and redistributing negative impacts on agriculture, forestry, and other land uses, as well as on the energy sector,” says Solene Chiquier, lead author of the study who was a CS3 postdoc during its preparation.

    The second finding: There is no optimal CDR portfolio that will work well at both the global and national levels. The ideal CDR portfolio for a particular region will depend on local technological, economic, and geophysical conditions. For example, afforestation and reforestation would be of great benefit in places like Brazil, Latin America, and Africa, by not only sequestering carbon in more acreage of protected forest but also helping to preserve planetary well-being and human health.

    “In designing a sustainable, cost-effective CDR portfolio, it is important to account for regional availability of agricultural, energy, and carbon-storage resources,” says Sergey Paltsev, CS3 deputy director, MIT Energy Initiative senior research scientist, and supervising co-author of the study. “Our study highlights the need for enhancing knowledge about local conditions that favor some CDR options over others.”

    Finally, the MIT CS3 researchers show that delaying large-scale deployment of CDR portfolios could be very costly, leading to considerably higher carbon prices across the globe — a development sure to deter the climate mitigation efforts needed to achieve the 1.5 C goal. They recommend near-term implementation of policy and financial incentives to help fast-track those efforts.
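    To illustrate why diversification can beat reliance on any single option, here is a toy merit-order sketch: each CDR option gets a rising marginal cost and a capacity limit, and removal is purchased in small increments from whichever option is currently cheapest. All figures are hypothetical, not EPPA outputs.

```python
import heapq

# (name, capacity limit GtCO2/yr, starting cost $/tCO2, cost growth $/t per Gt)
options = [
    ("BECCS",   8.0,  60.0, 15.0),
    ("biochar", 6.0,  70.0, 12.0),
    ("EW",      6.0,  90.0, 10.0),
    ("A/R",     5.0,  40.0, 30.0),
    ("DACCS",   8.0, 180.0, 20.0),
]
target_gt, step = 20.0, 0.5  # removal target and purchase increment, GtCO2/yr

heap = [(cost0, name, 0.0, cap, cost0, slope) for name, cap, cost0, slope in options]
heapq.heapify(heap)
deployed, total_cost, removed = {}, 0.0, 0.0

while removed < target_gt and heap:
    cost, name, used, cap, cost0, slope = heapq.heappop(heap)
    take = min(step, cap - used, target_gt - removed)
    total_cost += cost * take * 1e9  # $/t times Gt gives dollars
    removed += take
    deployed[name] = deployed.get(name, 0.0) + take
    if used + take < cap:  # re-queue at the new, higher marginal cost
        heapq.heappush(heap, (cost0 + slope * (used + take), name,
                              used + take, cap, cost0, slope))

print({k: round(v, 1) for k, v in deployed.items()})
print(f"average cost: ${total_cost / (removed * 1e9):.0f}/tCO2")
```

    Because each option’s marginal cost rises with deployment, the cheapest way to hit the target naturally spreads removal across several options, which is the qualitative mechanism behind the study’s diversification finding.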

  • How to make small modular reactors more cost-effective

    When Youyeon Choi was in high school, she discovered she really liked “thinking in geometry.” The shapes, the dimensions … she was into all of it. Today, geometry plays a prominent role in her doctoral work under the guidance of Professor Koroush Shirvan, as she explores ways to increase the competitiveness of small modular reactors (SMRs).

    Central to the thesis is metallic nuclear fuel in a helical cruciform shape, which increases surface area and lowers heat flux as compared to the traditional cylindrical equivalent.

    A childhood in a prominent nuclear energy country

    Her passion for geometry notwithstanding, Choi admits she was not “really into studying” in middle school. But that changed when she started excelling in technical subjects in her high school years. And because it was the natural sciences that first caught Choi’s eye, she assumed she would major in the subject when she went to university.

    This focus, too, would change. Growing up in Seoul, Choi was becoming increasingly aware of the critical role nuclear energy played in meeting her native country’s energy needs. Twenty-six reactors provide nearly a third of South Korea’s electricity, according to the World Nuclear Association. The country is also one of the world’s most prominent nuclear energy entities.

    In such an ecosystem, Choi understood the stakes at play, especially with electricity-guzzling technologies such as AI and electric vehicles on the rise. Her father also discussed energy-related topics with Choi when she was in high school. Being steeped in that atmosphere eventually led Choi to nuclear engineering.


    Early work in South Korea

    Excelling in high school math and science, Choi was a shoo-in for college at Seoul National University. Initially intent on studying nuclear fusion, Choi switched to fission because she saw that the path to fusion was more convoluted and was still in the early stages of exploration.

    Choi went on to complete her bachelor’s and master’s degrees in nuclear engineering from the university. As part of her master’s thesis, she worked on a multi-physics modeling project involving high-fidelity simulations of reactor physics and thermal hydraulics to analyze reactor cores.

    South Korea exports its nuclear know-how widely, so work in the field can be immensely rewarding. Indeed, after graduate school, Choi moved to Daejeon, which has the moniker “Science City.” As an intern at the Korea Atomic Energy Research Institute (KAERI), she conducted experimental studies on the passive safety systems of nuclear reactors. Choi then moved to the Korea Institute of Nuclear Nonproliferation and Control, where she worked as a researcher developing nuclear security programs for countries. Given South Korea’s dominance in the field, other countries would tap its expertise to develop their own nuclear energy programs. The focus was on international training programs, an arm of which involved cybersecurity and physical protection.

    While the work was impactful, Choi found she missed the modeling work she did as part of her master’s thesis. Looking to return to technical research, she applied to the MIT Department of Nuclear Science and Engineering (NSE). “MIT has the best nuclear engineering program in the States, and maybe even the world,” Choi says, explaining her decision to enroll as a doctoral student.

    Innovative research at MIT

    At NSE, Choi is working to make SMRs more price-competitive with traditional nuclear energy power plants.

    Due to their smaller size, SMRs are able to serve areas where larger reactors might not work, but they’re more expensive. One way to address costs is to squeeze more electricity out of a unit of fuel — to increase the power density. Choi is doing so by replacing the traditional uranium dioxide ceramic fuel in a cylindrical shape with a metal one in a helical cruciform. Such a replacement potentially offers twin advantages: The metal fuel has high thermal conductivity, which means the fuel will operate even more safely at lower temperatures. And the twisted shape gives more surface area and lower heat flux. The net result is more electricity for the same volume.

    The project receives funding from a collaboration between Lightbridge Corp., which is exploring how advanced fuel technologies can improve the performance of water-cooled SMRs, and the U.S. Department of Energy Nuclear Energy University Program.

    With SMR efficiencies in mind, Choi is indulging her love of multi-physics modeling, and focusing on reactor physics, thermal hydraulics, and fuel performance simulation. “The goal of this modeling and simulation is to see if we can really use this fuel in the SMR,” Choi says. “I’m really enjoying doing the simulations because the geometry is really hard to model. Because the shape is twisted, there’s no symmetry at all,” she says. Always up for a challenge, Choi learned the various aspects of physics and a variety of computational tools, including the Monte Carlo code for reactor physics.

    Being at MIT has a whole roster of advantages, Choi says, and she especially appreciates the respect researchers have for each other. She appreciates being able to discuss projects with Shirvan and his focus on practical applications of research. At the same time, Choi appreciates the “exotic” nature of her project. “Even assessing if this SMR fuel is at all feasible is really hard, but I think it’s all possible because it’s MIT and my PI [principal investigator] is really invested in innovation,” she says.

    It’s an exciting time to be in nuclear engineering, Choi says. She serves as one of the board members of the student section of the American Nuclear Society and is an NSE representative of the Graduate Student Council for the 2024-25 academic year.

    Choi is excited about the global momentum toward nuclear as more countries are exploring the energy source and trying to build more nuclear power plants on the path to decarbonization. “I really do believe nuclear energy is going to be a leading carbon-free energy. It’s very important for our collective futures,” Choi says.

  • The role of modeling in the energy transition

    Joseph F. DeCarolis, administrator for the U.S. Energy Information Administration (EIA), has one overarching piece of advice for anyone poring over long-term energy projections.

    “Whatever you do, don’t start believing the numbers,” DeCarolis said at the MIT Energy Initiative (MITEI) Fall Colloquium. “There’s a tendency when you sit in front of the computer and you’re watching the model spit out numbers at you … that you’ll really start to believe those numbers with high precision. Don’t fall for it. Always remain skeptical.”

    This event was part of MITEI’s new speaker series, MITEI Presents: Advancing the Energy Transition, which connects the MIT community with the energy experts and leaders who are working on scientific, technological, and policy solutions that are urgently needed to accelerate the energy transition.

    The point of DeCarolis’s talk, titled “Stay humble and prepare for surprises: Lessons for the energy transition,” was not that energy models are unimportant. On the contrary, DeCarolis said, energy models give stakeholders a framework that allows them to consider present-day decisions in the context of potential future scenarios. However, he repeatedly stressed the importance of accounting for uncertainty, and not treating these projections as “crystal balls.”

    “We can use models to help inform decision strategies,” DeCarolis said. “We know there’s a bunch of future uncertainty. We don’t know what’s going to happen, but we can incorporate that uncertainty into our model and help come up with a path forward.”

    Dialogue, not forecasts

    EIA is the statistical and analytic agency within the U.S. Department of Energy, with a mission to collect, analyze, and disseminate independent and impartial energy information to help stakeholders make better-informed decisions. Although EIA analyzes the impacts of energy policies, the agency does not make or advise on policy itself. DeCarolis, who was previously professor and University Faculty Scholar in the Department of Civil, Construction, and Environmental Engineering at North Carolina State University, noted that EIA does not need to seek approval from anyone else in the federal government before publishing its data and reports. “That independence is very important to us, because it means that we can focus on doing our work and providing the best information we possibly can,” he said.

    Among the many reports produced by EIA is the agency’s Annual Energy Outlook (AEO), which projects U.S. energy production, consumption, and prices. Every other year, the agency also produces the AEO Retrospective, which shows the relationship between past projections and actual energy indicators.

    “The first question you might ask is, ‘Should we use these models to produce a forecast?’” DeCarolis said. “The answer for me to that question is: No, we should not do that. When models are used to produce forecasts, the results are generally pretty dismal.”

    DeCarolis pointed to wildly inaccurate past projections about the proliferation of nuclear energy in the United States as an example of the problems inherent in forecasting. However, he noted, there are “still lots of really valuable uses” for energy models. Rather than using them to predict future energy consumption and prices, DeCarolis said, stakeholders should use models to inform their own thinking.

    “[Models] can simply be an aid in helping us think and hypothesize about the future of energy,” DeCarolis said. “They can help us create a dialogue among different stakeholders on complex issues. If we’re thinking about something like the energy transition, and we want to start a dialogue, there has to be some basis for that dialogue. If you have a systematic representation of the energy system that you can advance into the future, we can start to have a debate about the model and what it means. We can also identify key sources of uncertainty and knowledge gaps.”

    Modeling uncertainty

    The key to working with energy models is not to try to eliminate uncertainty, DeCarolis said, but rather to account for it. One way to better understand uncertainty, he noted, is to look at past projections, and consider how they ended up differing from real-world results. DeCarolis pointed to two “surprises” over the past several decades: the exponential growth of shale oil and natural gas production (which had the impact of limiting coal’s share of the energy market and therefore reducing carbon emissions), as well as the rapid rise in wind and solar energy. In both cases, market conditions changed far more quickly than energy modelers anticipated, leading to inaccurate projections.

    “For all those reasons, we ended up with [projected] CO2 [carbon dioxide] emissions that were quite high compared to actual,” DeCarolis said. “We’re a statistical agency, so we’re really looking carefully at the data, but it can take some time to identify the signal through the noise.”

    Although EIA does not produce forecasts in the AEO, people have sometimes interpreted the reference case in the agency’s reports as predictions. In an effort to illustrate the unpredictability of future outcomes in the 2023 edition of the AEO, the agency added “cones of uncertainty” to its projection of energy-related carbon dioxide emissions, with ranges of outcomes based on the difference between past projections and actual results. One cone captures 50 percent of historical projection errors, while another represents 95 percent of historical errors.

    “They capture whatever bias there is in our projections,” DeCarolis said of the uncertainty cones. “It’s being captured because we’re comparing actual [emissions] to projections. The weakness of this, though, is: who’s to say that those historical projection errors apply to the future? We don’t know that, but I still think that there’s something useful to be learned from this exercise.”

    The future of energy modeling

    Looking ahead, DeCarolis said, there is a “laundry list of things that keep me up at night as a modeler.” These include the impacts of climate change; how those impacts will affect demand for renewable energy; how quickly industry and government will overcome obstacles to building out clean energy infrastructure and supply chains; technological innovation; and increased energy demand from data centers running compute-intensive workloads.

    “What about enhanced geothermal? Fusion? Space-based solar power?” DeCarolis asked. “Should those be in the model? What sorts of technology breakthroughs are we missing? And then, of course, there are the unknown unknowns — the things that I can’t conceive of to put on this list, but are probably going to happen.”

    In addition to capturing the fullest range of outcomes, DeCarolis said, EIA wants to be flexible, nimble, transparent, and accessible — creating reports that can easily incorporate new model features and produce timely analyses. To that end, the agency has undertaken two new initiatives. First, the 2025 AEO will use a revamped version of the National Energy Modeling System that includes modules for hydrogen production and pricing, carbon management, and hydrocarbon supply. Second, an effort called Project BlueSky is aiming to develop the agency’s next-generation energy system model, which DeCarolis said will be modular and open source.

    DeCarolis noted that the energy system is both highly complex and rapidly evolving, and he warned that “mental shortcuts” and the fear of being wrong can lead modelers to ignore possible future developments. “We have to remain humble and intellectually honest about what we know,” DeCarolis said. “That way, we can provide decision-makers with an honest assessment of what we think could happen in the future.”
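    To make the “cones of uncertainty” idea concrete, here is a minimal sketch of the underlying computation: collect historical projection errors at each horizon, then band a new projection with the ranges that contain 50 and 95 percent of those errors. The error samples are synthetic placeholders, not EIA data.

```python
import numpy as np

rng = np.random.default_rng(3)
horizons = np.arange(1, 11)  # projection horizons, years ahead

# Synthetic history: errors (projected minus actual) from 40 past projection
# vintages, growing with horizon and biased slightly high.
past_errors = rng.normal(0.02, 0.01, (40, 10)) * horizons

new_projection = np.linspace(4.8, 4.2, 10)  # hypothetical CO2 path, Gt/yr

for pct in (50, 95):
    tail = (100 - pct) / 2
    low_err, high_err = np.percentile(past_errors, [tail, 100 - tail], axis=0)
    # Subtracting the error band from the projection recovers a band of
    # plausible actual outcomes at each horizon.
    print(f"{pct}% cone at year 10: "
          f"{new_projection[-1] - high_err[-1]:.2f} to "
          f"{new_projection[-1] - low_err[-1]:.2f} Gt/yr")
```

    Because the band is built from signed errors, it also absorbs any systematic bias in past projections, which is the property DeCarolis highlights, along with its key caveat: past errors may not represent future ones.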

  • How hard is it to prevent recurring blackouts in Puerto Rico?

    Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.

    The researchers point out that the system they suggest might have prevented or at least lessened the kind of widespread power outage that Puerto Rico experienced last week by providing analysis to guide rerouting of power through different lines and thus limit the spread of the outage.

    The computer platform, which the researchers describe as DyMonDS, for Dynamic Monitoring and Decision Systems, can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems, and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary. It further lends itself to implementing cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.

    This general approach was illustrated using a set of publicly available data on both meteorology and details of electricity production and distribution in Puerto Rico. An extended AC optimal power flow software package developed by SmartGridz Inc. is used for system-level optimization of controllable equipment. This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and in extreme cases, where to reduce or cut power in order to maintain physically implementable service for as many customers as possible.
    The team found that the use of such a system can help to ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants thanks to more efficient use of existing resources.

    The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.

    “Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”

    Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings resulting from this optimization, under everyday conditions, could be in the tens of percents, they say.

    The way utility systems plan currently, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.

    In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.

    Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by Hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayaguez in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly.

    With the new systems, “the software finds the optimal adjustments for set points,” for example, changing voltages can allow for power to be redirected through less-congested lines, or can be increased to lessen power losses, Anton says.

    The software also helps in the long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning for how to replace that power without having to resort to greenhouse gas-emitting sources is a key to achieving carbon-reduction goals. And by analyzing usage patterns, the software can guide the placement of new renewable power sources where they can most efficiently provide power where and when it’s needed.

    As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.”

    One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.

    Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”

    The project was a collaborative effort between the MIT LIDS researchers and others at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with overall help from SmartGridz software.
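    To illustrate the N-minus-K idea in miniature, the sketch below knocks out each combination of lines in a toy network and checks how much load can still be served, using max-flow as a crude stand-in for redispatch. The topology and numbers are hypothetical; they are not the Puerto Rico system or the SmartGridz software.

```python
import itertools
import networkx as nx

G = nx.DiGraph()
# Generators in the south and loads in the north and west, echoing the
# article's description of Puerto Rico's geography (values are made up).
gens = {"south_1": 900, "south_2": 500}
loads = {"san_juan": 700, "mayaguez": 400}
lines = [("south_1", "san_juan", 500), ("south_1", "mayaguez", 300),
         ("south_2", "san_juan", 400), ("south_2", "mayaguez", 300),
         ("san_juan", "mayaguez", 200), ("mayaguez", "san_juan", 200)]

for u, v, cap in lines:
    G.add_edge(u, v, capacity=cap)
for g, p in gens.items():
    G.add_edge("SOURCE", g, capacity=p)   # generation limits
for d, p in loads.items():
    G.add_edge(d, "SINK", capacity=p)     # load to be served

def mw_served(outaged_lines) -> float:
    """Max load deliverable after removing the given transmission lines."""
    H = G.copy()
    H.remove_edges_from(outaged_lines)
    return nx.maximum_flow_value(H, "SOURCE", "SINK")

total = sum(loads.values())
endpoints = [(u, v) for u, v, _ in lines]
for k in (1, 2):  # N-1, then N-2 contingencies
    worst = min(mw_served(combo) for combo in itertools.combinations(endpoints, k))
    print(f"worst-case N-{k}: {worst:.0f} of {total} MW still served")
```

    Even this toy shows the pattern the researchers describe: with enough rerouting paths, single and even double outages still leave a large share of load served, and the binding problem is delivery, not total generating capacity.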