More stories

  • Mining for the clean energy transition

    In a world powered increasingly by clean energy, drilling for oil and gas will gradually give way to digging for metals and minerals. Today, the “critical minerals” used to make electric cars, solar panels, wind turbines, and grid-scale battery storage are facing soaring demand — and some acute bottlenecks as miners race to catch up.

    According to a report from the International Energy Agency, by 2040, the worldwide demand for copper is expected to roughly double; demand for nickel and cobalt will grow at least sixfold; and the world’s hunger for lithium could reach 40 times what we use today.

    “Society is looking to the clean energy transition as a way to solve the environmental and social harms of climate change,” says Scott Odell, a visiting scientist at the MIT Environmental Solutions Initiative (ESI), where he helps run the ESI Mining, Environment, and Society Program; he is also a visiting assistant professor at George Washington University. “Yet mining the materials needed for that transition would also cause social and environmental impacts. So we need to look for ways to reduce our demand for minerals, while also improving current mining practices to minimize social and environmental impacts.”

    ESI recently hosted the inaugural MIT Conference on Mining, Environment, and Society to discuss how the clean energy transition may affect mining and the people and environments in mining areas. The conference convened representatives of mining companies, environmental and human rights groups, policymakers, and social and natural scientists to identify key concerns and possible collaborative solutions.

    “We can’t replace an abusive fossil fuel industry with an abusive mining industry that expands as we move through the energy transition,” said Jim Wormington, a senior researcher at Human Rights Watch, in a panel on the first day of the conference. “There’s a recognition from governments, civil society, and companies that this transition potentially has a really significant human rights and social cost, both in terms of emissions […] but also for communities and workers who are on the front lines of mining.”

    That focus on communities and workers was consistent throughout the three-day conference, as participants outlined the economic and social dimensions of standing up large numbers of new mines. Corporate mines can bring large influxes of government revenue and local investment, but the income is volatile and can leave policymakers and communities stranded when production declines or mineral prices fall. “Artisanal” mining operations, on the other hand, are an important source of critical minerals, but are hard to regulate and subject to abuses from brokers. And large mineral reserves lie in conservation areas, in regions with fragile ecosystems and water shortages that mining can exacerbate, and in particular on Indigenous-controlled lands and other places where opening a mine is deeply fraught.

    “One of the real triggers of conflict is a dissatisfaction with the current model of resource extraction,” said Jocelyn Fraser of the University of British Columbia in a panel discussion. “One that’s failed to support the long-term sustainable development of regions that host mining operations, and yet imposes significant local social and environmental impacts.”

    All these challenges point toward solutions in policy and in mining companies’ relationships with the communities where they work. Participants highlighted newer models of mining governance that can create better incentives for the ways mines operate — from full community ownership of mines to recognizing community rights to the benefits of mining to end-of-life planning for mines at the time they open.

    Many of the conference speakers also shared technological innovations that may help reduce mining challenges. Some operations are investing in desalination as an alternative water source in water-scarce regions; low-carbon alternatives are emerging to many of the fossil fuel-powered heavy machines that are mainstays of the industry; and work is being done to reclaim valuable minerals from mine tailings, helping to minimize both waste and the need to open new extraction sites.

    Increasingly, the mining industry itself is recognizing that reforms will allow it to thrive in a rapid clean-energy transition. “Decarbonization is really a profitability imperative,” said Kareemah Mohammed, managing director for sustainability services at the technology consultancy Accenture, on the conference’s second day. “It’s about securing a low-cost and steady supply of either minerals or metals, but it’s also doing so in an optimal way.”

    The three-day conference attracted over 350 attendees, from large mining companies, industry groups, consultancies, multilateral institutions, universities, nongovernmental organizations (NGOs), government, and more. It was held entirely virtually, a choice that helped make the conference not only truly international — participants joined from over 27 countries on six continents — but also accessible to members of nonprofits and professionals in the developing world.

    “Many people are concerned about the environmental and social challenges of supplying the clean energy revolution, and we’d heard repeatedly that there wasn’t a forum for government, industry, academia, NGOs, and communities to all sit at the same table and explore collaborative solutions,” says Christopher Noble, ESI’s director of corporate engagement. “Convening, and researching best practices, are roles that universities can play. The conversations at this conference have generated valuable ideas and consensus to pursue three parallel programs: best-in-class models for community engagement, improving ESG metrics and their use, and civil-society contributions to government/industry relations. We are developing these programs to keep the momentum going.”

    The MIT Conference on Mining, Environment, and Society was funded, in part, by Accenture, as part of the MIT/Accenture Convergence Initiative. Additional funding was provided by the Inter-American Development Bank.

  • 3 Questions: Janelle Knox-Hayes on producing renewable energy that communities want

    Wind power accounted for 8 percent of U.S. electricity consumption in 2020, and is growing rapidly in the country’s energy portfolio. But some projects, like the now-defunct Cape Wind proposal for offshore power in Massachusetts, have run aground due to local opposition. Are there ways to avoid this in the future?

    MIT professors Janelle Knox-Hayes and Donald Sadoway think so. In a perspective piece published today in the journal Joule, they and eight other professors call for a new approach to wind-power deployment, one that engages communities in a process of “co-design” and adapts solutions to local needs. That process, they say, could spur additional creativity in renewable energy engineering, while making communities more amenable to existing technologies. In addition to Knox-Hayes and Sadoway, the paper’s co-authors are Michael J. Aziz of Harvard University; Dennice F. Gayme of Johns Hopkins University; Kathryn Johnson of the Colorado School of Mines; Perry Li of the University of Minnesota; Eric Loth of the University of Virginia; Lucy Y. Pao of the University of Colorado; Jessica Smith of the Colorado School of Mines; and Sonya Smith of Howard University.

    Knox-Hayes is the Lister Brothers Associate Professor of Economic Geography and Planning in MIT’s Department of Urban Studies and Planning, and an expert on the social and political context of renewable energy adoption; Sadoway is the John F. Elliott Professor of Materials Chemistry in MIT’s Department of Materials Science and Engineering, and a leading global expert on developing new forms of energy storage. MIT News spoke with Knox-Hayes about the topic.

    Q: What is the core problem you are addressing in this article?

    A: It is problematic to act as if technology can be engineered in a silo and then delivered to society. To solve problems like climate change, we need to see technology as a socio-technical system, one that is integrated from its inception into society. From a design standpoint, that begins with conversations, values assessments, and understanding what communities need. If we can do that, we will have a much easier time delivering the technology in the end.

    What we have seen in the Northeast, in trying to meet our climate objectives and energy efficiency targets, is that we need a lot of offshore wind, and a lot of projects have stalled because a community was saying “no.” And part of the reason communities refuse projects is that they feel they have never been properly consulted. What form does the technology take, and how would it operate within a community? That conversation can push the boundaries of engineering.

    Q: The new paper makes the case for a new practice of “co-design” in the field of renewable energy. You call this the “STEP” process, standing for all the socio-technical-political-economic issues that an engineering project might encounter. How would you describe the STEP idea? And to what extent would industry be open to new attempts to design an established technology?

    A: The idea is to bring together all these elements in an interdisciplinary process, and engage stakeholders. The process could start with a series of community forums where we bring everyone together, and do a needs assessment, which is a common practice in planning. We might see that offshore wind energy needs to be considered in tandem with the local fishing industry, or servicing the installations, or providing local workforce training. The STEP process allows us to take a step back, and start with planners, policymakers, and community members on the ground.

    It is also about changing the nature of research and practice and teaching, so that students are not just in classrooms, they are also learning to work with communities. I think formalizing that piece is important. We are starting now to really feel the impacts of climate change, so we have to confront the reality of breaking through political boundaries, even in the United States. That is the only way to make this successful, and that comes back to how technology can be co-designed.

    At MIT, innovation is the spirit of the endeavor, and that is why MIT has so many industry partners engaged in initiatives like MITEI [the MIT Energy Initiative] and the Climate Consortium. The value of the partnership is that MIT pushes the boundaries of what is possible. It is the idea that we can advance and we can do something incredible, we can innovate the future. What we are suggesting with this work is that innovation isn’t something that happens exclusively in a laboratory, but something that is very much built in partnership with communities and other stakeholders.

    Q: How much does this approach also apply to solar power, as the other leading type of renewable energy? It seems like communities also wrestle with where to locate solar arrays, or how to compensate homeowners, communities, and other solar hosts for the power they generate.

    A: I would not say solar has the same set of challenges, but rather that renewable technologies face similar challenges. With solar, there are also questions of access and siting. Another big challenge is to create financing models that provide value and opportunity at different scales. For example, is solar viable for tenants in multi-family units who want to engage with clean energy? This is a similar question for micro-wind opportunities for buildings. With offshore wind, siting within sightlines of the shore can be problematic. But there are exciting technologies that have enabled deep wind, or the establishment of floating turbines up to 50 kilometers offshore. Storage solutions such as hydro-pneumatic energy storage, gravity energy storage, or buoyancy storage can help maintain the transmission rate while reducing the number of transmission lines needed.

    In a lot of communities, the reality of renewables is that if you can generate your own energy, you can establish a level of security and resilience that feeds other benefits. 

    Nevertheless, as demonstrated in the Cape Wind case, technology [may be rejected] unless a community is involved from the beginning. Community involvement also creates other opportunities. Suppose, for example, that high school students are working as interns on renewable energy projects with engineers from great universities in the region. This provides a point of access for families and allows them to take pride in the systems they create. It gives a further sense of purpose to the technology system, and vests the community in the system’s success. It is the difference between, “It was delivered to me,” and “I built it.” For researchers, the article is a reminder that engineering and design are more successful if they are inclusive. Engineering and design processes are also meant to be accessible and fun.

  • A simple way to significantly increase lifetimes of fuel cells and other devices

    In research that could jump-start work on a range of technologies including fuel cells, which are key to storing solar and wind energy, MIT researchers have found a relatively simple way to increase the lifetimes of these devices: changing the pH of the system.

    Fuel and electrolysis cells made of materials known as solid metal oxides are of interest for several reasons. For example, in the electrolysis mode, they are very efficient at converting electricity from a renewable source into a storable fuel like hydrogen or methane that can be used in the fuel cell mode to generate electricity when the sun isn’t shining or the wind isn’t blowing. They can also be made without using costly metals like platinum. However, their commercial viability has been hampered, in part, because they degrade over time. Metal atoms seeping from the interconnects used to construct banks of fuel/electrolysis cells slowly poison the devices.

    “What we’ve been able to demonstrate is that we can not only reverse that degradation, but actually enhance the performance above the initial value by controlling the acidity of the air-electrode interface,” says Harry L. Tuller, the R.P. Simmons Professor of Ceramics and Electronic Materials in MIT’s Department of Materials Science and Engineering (DMSE).

    The research, initially funded by the U.S. Department of Energy through the Office of Fossil Energy and Carbon Management’s (FECM) National Energy Technology Laboratory, should help the department meet its goal of significantly cutting the degradation rate of solid oxide fuel cells by 2035 to 2050.

    “Extending the lifetime of solid oxide fuel cells helps deliver the low-cost, high-efficiency hydrogen production and power generation needed for a clean energy future,” says Robert Schrecengost, acting director of FECM’s Division of Hydrogen with Carbon Management. “The department applauds these advancements to mature and ultimately commercialize these technologies so that we can provide clean and reliable energy for the American people.”

    “I’ve been working in this area my whole professional life, and what I’ve seen until now is mostly incremental improvements,” says Tuller, who was recently named a 2022 Materials Research Society Fellow for his career-long work in solid-state chemistry and electrochemistry. “People are normally satisfied with seeing improvements by factors of tens-of-percent. So, actually seeing much larger improvements and, as importantly, identifying the source of the problem and the means to work around it, issues that we’ve been struggling with for all these decades, is remarkable.”

    Says James M. LeBeau, the John Chipman Associate Professor of Materials Science and Engineering at MIT, who was also involved in the research, “This work is important because it could overcome [some] of the limitations that have prevented the widespread use of solid oxide fuel cells. Additionally, the basic concept can be applied to many other materials used for applications in the energy-related field.”

    A paper describing the work was published Aug. 11 in Energy & Environmental Science. Additional authors of the paper are Han Gil Seo, a DMSE postdoc; Anna Staerz, formerly a DMSE postdoc, now at Interuniversity Microelectronics Centre (IMEC) Belgium and soon to join the Colorado School of Mines faculty; Dennis S. Kim, a DMSE postdoc; Dino Klotz, a DMSE visiting scientist, now at Zurich Instruments; Michael Xu, a DMSE graduate student; and Clement Nicollet, formerly a DMSE postdoc, now at the Université de Nantes. Seo and Staerz contributed equally to the work.

    Changing the acidity

    A fuel/electrolysis cell has three principal parts: two electrodes (a cathode and anode) separated by an electrolyte. In the electrolysis mode, electricity from, say, the wind, can be used to generate storable fuel like methane or hydrogen. On the other hand, in the reverse fuel cell reaction, that storable fuel can be used to create electricity when the wind isn’t blowing.

    A working fuel/electrolysis cell is composed of many individual cells that are stacked together and connected by steel interconnects that include chromium to keep the metal from oxidizing. But “it turns out that at the high temperatures that these cells run, some of that chrome evaporates and migrates to the interface between the cathode and the electrolyte, poisoning the oxygen incorporation reaction,” Tuller says. Eventually, the cell’s efficiency drops to the point where it is no longer worth operating.

    “So if you can extend the life of the fuel/electrolysis cell by slowing down this process, or ideally reversing it, you could go a long way towards making it practical,” Tuller says.

    The team showed that you can do both by controlling the acidity of the cathode surface. They also explained what is happening.

    To achieve their results, the team coated the fuel/electrolysis cell cathode with lithium oxide, a compound that changes the relative acidity of the surface from being acidic to being more basic. “After adding a small amount of lithium, we were able to recover the initial performance of a poisoned cell,” Tuller says. When the engineers added even more lithium, the performance improved far beyond the initial value. “We saw improvements of three to four orders of magnitude in the key oxygen reduction reaction rate and attribute the change to populating the surface of the electrode with electrons needed to drive the oxygen incorporation reaction.”

    The engineers went on to explain what is happening by observing the material at the nanoscale, or billionths of a meter, with state-of-the-art transmission electron microscopy and electron energy loss spectroscopy at MIT.nano. “We were interested in understanding the distribution of the different chemical additives [chromium and lithium oxide] on the surface,” says LeBeau.

    They found that the lithium oxide effectively dissolves the chromium to form a glassy material that no longer serves to degrade the cathode performance.

    Applications for sensors, catalysts, and more

    Many technologies like fuel cells are based on the ability of the oxide solids to rapidly breathe oxygen in and out of their crystalline structures, Tuller says. The MIT work essentially shows how to recover — and speed up — that ability by changing the surface acidity. As a result, the engineers are optimistic that the work could be applied to other technologies including, for example, sensors, catalysts, and oxygen permeation-based reactors.

    The team is also exploring the effect of acidity on systems poisoned by different elements, like silica.

    Concludes Tuller: “As is often the case in science, you stumble across something and notice an important trend that was not appreciated previously. Then you test that concept further, and you discover that it is really very fundamental.”

    In addition to the DOE, this work was also funded by the National Research Foundation of Korea, the MIT Department of Materials Science and Engineering via Tuller’s appointment as the R.P. Simmons Professor of Ceramics and Electronic Materials, and the U.S. Air Force Office of Scientific Research.

  • A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by Michael F. Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering at MIT.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model that predicts the power production of each turbine in the farm depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing the physical turbine locations or hardware of existing wind farms, they used this physics-based, data-assisted model of the flow within the wind farm, and of the resulting power production of each turbine under different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output of the whole farm, not just of the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle (its orientation about the vertical axis) to align as closely as possible with the wind. But in the new system, for example, the team has found that by turning one turbine just slightly away from its own maximum output position — perhaps 20 degrees away from its individual peak output angle — the resulting increase in power output from one or more downwind units will more than make up for the slight reduction in output from the first unit. By using a centralized control system that takes all of these interactions into account, the collection of turbines was operated at power output levels that were as much as 32 percent higher under some conditions.
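    The trade-off described above can be sketched with a toy two-turbine model. This is not the model from the paper: the cosine-cubed yaw-power loss and the Gaussian wake-steering term are common textbook approximations, and every parameter value below is an invented assumption chosen only to make the effect visible.

```python
import math

def farm_power(yaw_up_deg, u_inf=8.0, deficit=0.3, sigma=12.0):
    """Toy two-turbine sketch (illustrative only, not the paper's model).

    The upstream turbine loses power roughly as cos^3 of its yaw
    misalignment; yawing also deflects its wake, modeled here as a
    Gaussian reduction of the velocity deficit seen downstream.
    All parameter values are arbitrary assumptions.
    """
    yaw = math.radians(yaw_up_deg)
    p_up = math.cos(yaw) ** 3                   # normalized upstream power
    # wake deficit at the downstream rotor shrinks as the wake is steered away
    overlap = math.exp(-(yaw_up_deg ** 2) / (2 * sigma ** 2))
    u_down = u_inf * (1 - deficit * overlap)
    p_down = (u_down / u_inf) ** 3              # normalized downstream power
    return p_up + p_down

greedy = farm_power(0.0)                        # both turbines face the wind
best = max(range(0, 41), key=farm_power)        # coordinated: search upstream yaw
print(best, farm_power(best) / greedy - 1)
```

    Even in this crude sketch, deliberately yawing the upstream turbine by a couple of tens of degrees raises total farm output above the greedy baseline, echoing the qualitative behavior the article describes; the real gains depend on farm layout and wind conditions.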

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million in extra revenue for the wind farm operators per year, he says.
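    A quick back-of-envelope check shows the quoted figures are mutually consistent. All inputs come from the article except the wholesale electricity price, which is a rough assumption on our part.

```python
# Figures quoted in the article for a 1.2% fleet-wide energy gain.
extra_twh = 31.0        # additional terawatt-hours per year
n_turbines = 3600       # stated equivalent number of new turbines
homes = 3_000_000       # homes said to be powered

# Implied per-turbine and per-home output (both plausible magnitudes).
per_turbine_gwh = extra_twh * 1000 / n_turbines   # ~8.6 GWh/turbine/year
per_home_mwh = extra_twh * 1e6 / homes            # ~10 MWh/home/year

# At an assumed wholesale price of $30/MWh, 31 TWh/year is worth:
revenue = extra_twh * 1e6 * 30                    # ~$0.93 billion/year
print(round(per_turbine_gwh, 1), round(per_home_mwh, 1), revenue)
```

    Around 8.6 GWh per turbine per year matches a utility-scale machine of a few megawatts, about 10 MWh per year is a typical household's consumption, and the revenue lands near the article's figure of almost a billion dollars; the price assumption is ours, not the researchers'.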

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, therefore increasing the power density of wind energy, saving on land (or sea) footprints. This power density increase and footprint reduction could help to achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and working to improve the operational instructions they derive from the model, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram, formerly at ReNew Power Private Limited and presently at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at the California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

  • Energy storage important to creating affordable, reliable, deeply decarbonized electricity systems

    In deeply decarbonized energy systems utilizing high penetrations of variable renewable energy (VRE), energy storage is needed to keep the lights on and the electricity flowing when the sun isn’t shining and the wind isn’t blowing — when generation from these VRE resources is low or demand is high. The MIT Energy Initiative’s Future of Energy Storage study makes clear the need for energy storage and explores pathways using VRE resources and storage to reach decarbonized electricity systems efficiently by 2050.

    “The Future of Energy Storage,” a new multidisciplinary report from the MIT Energy Initiative (MITEI), urges government investment in sophisticated analytical tools for planning, operation, and regulation of electricity systems in order to deploy and use storage efficiently. Because storage technologies will have the ability to substitute for or complement essentially all other elements of a power system, including generation, transmission, and demand response, these tools will be critical to electricity system designers, operators, and regulators in the future. The study also recommends additional support for complementary staffing and upskilling programs at regulatory agencies at the state and federal levels. 

    Why is energy storage so important?

    The MITEI report shows that energy storage makes deep decarbonization of reliable electric power systems affordable. “Fossil fuel power plant operators have traditionally responded to demand for electricity — in any given moment — by adjusting the supply of electricity flowing into the grid,” says MITEI Director Robert Armstrong, the Chevron Professor of Chemical Engineering and chair of the Future of Energy Storage study. “But VRE resources such as wind and solar depend on daily and seasonal variations as well as weather fluctuations; they aren’t always available to be dispatched to follow electricity demand. Our study finds that energy storage can help VRE-dominated electricity systems balance electricity supply and demand while maintaining reliability in a cost-effective manner — that in turn can support the electrification of many end-use activities beyond the electricity sector.”

    The three-year study is designed to help government, industry, and academia chart a path to developing and deploying electrical energy storage technologies as a way of encouraging electrification and decarbonization throughout the economy, while avoiding excessive or inequitable burdens.

    Focusing on three distinct regions of the United States, the study shows the need for a varied approach to energy storage and electricity system design in different parts of the country. Using modeling tools to look out to 2050, the study team also focuses beyond the United States, to emerging market and developing economy (EMDE) countries, particularly as represented by India. The findings highlight the powerful role storage can play in EMDE nations. These countries are expected to see massive growth in electricity demand over the next 30 years, due to rapid overall economic expansion and to increasing adoption of electricity-consuming technologies such as air conditioning. In particular, the study calls attention to the pivotal role battery storage can play in decarbonizing grids in EMDE countries that lack access to low-cost gas and currently rely on coal generation.

    The authors find that investment in VRE combined with storage is favored over new coal generation over the medium and long term in India, although existing coal plants may linger unless forced out by policy measures such as carbon pricing. 

    “Developing countries are a crucial part of the global decarbonization challenge,” says Robert Stoner, the deputy director for science and technology at MITEI and one of the report authors. “Our study shows how they can take advantage of the declining costs of renewables and storage in the coming decades to become climate leaders without sacrificing economic development and modernization.”

    The study examines four kinds of storage technologies: electrochemical, thermal, chemical, and mechanical. Some of these technologies, such as lithium-ion batteries, pumped storage hydro, and some thermal storage options, are proven and available for commercial deployment. The report recommends that the government focus R&D efforts on other storage technologies, which will require further development to be available by 2050 or sooner — among them, projects to advance alternative electrochemical storage technologies that rely on earth-abundant materials. It also suggests government incentives and mechanisms that reward success but don’t interfere with project management. The report calls for the federal government to change some of the rules governing technology demonstration projects to enable more projects on storage. Policies that require cost-sharing in exchange for intellectual property rights, the report argues, discourage the dissemination of knowledge. The report advocates for federal requirements for demonstration projects that share information with other U.S. entities.

    The report says many existing power plants that are being shut down can be converted to useful energy storage facilities by replacing their fossil fuel boilers with thermal storage and new steam generators. This retrofit can be done using commercially available technologies and may be attractive to plant owners and communities — using assets that would otherwise be abandoned as electricity systems decarbonize.  

    The study also looks at hydrogen and concludes that its use for storage will likely depend on the extent to which hydrogen is used in the overall economy. That broad use of hydrogen, the report says, will be driven by future costs of hydrogen production, transportation, and storage — and by the pace of innovation in hydrogen end-use applications. 

    The MITEI study predicts the distribution of hourly wholesale prices or the hourly marginal value of energy will change in deeply decarbonized power systems — with many more hours of very low prices and more hours of high prices compared to today’s wholesale markets. So the report recommends systems adopt retail pricing and retail load management options that reward all consumers for shifting electricity use away from times when high wholesale prices indicate scarcity, to times when low wholesale prices signal abundance. 
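
    The incentive behind that recommendation can be made concrete with a toy computation. This is an invented illustration, not data from the MITEI study: all prices and load profiles below are made up, and the point is only that retail prices indexed to wholesale scarcity reward consumers who move flexible consumption into low-price hours.

```python
# Toy illustration (not from the MITEI study): hourly retail prices
# indexed to wholesale prices reward a consumer for shifting flexible
# load out of scarcity hours. All numbers are invented.

def bill(load_kwh, price_per_kwh):
    """Total cost of an hourly load profile under hourly prices."""
    return sum(l * p for l, p in zip(load_kwh, price_per_kwh))

# Deeply decarbonized systems are expected to have many very-low-price
# hours and a few high-price hours; this 6-hour window mimics that.
prices = [0.02, 0.02, 0.02, 0.45, 0.45, 0.02]     # $/kWh
flat_load = [2.0] * 6                             # kWh per hour

# Same 12 kWh in total, moved out of the two scarcity hours.
shifted_load = [2.5, 2.5, 2.75, 0.5, 0.5, 3.25]

flat_bill = bill(flat_load, prices)        # $1.96
shifted_bill = bill(shifted_load, prices)  # $0.67
```

    With the flat profile the bill is $1.96; shifting the same 12 kWh into abundance hours cuts it to $0.67. That price gap is the kind of reward the report's recommended retail pricing would create.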

    The Future of Energy Storage study is the ninth in MITEI’s “Future of” series, exploring complex and vital issues involving energy and the environment. Previous studies have focused on nuclear power, solar energy, natural gas, geothermal energy, and coal (with capture and sequestration of carbon dioxide emissions), as well as on systems such as the U.S. electric power grid. The Alfred P. Sloan Foundation and the Heising-Simons Foundation provided core funding for MITEI’s Future of Energy Storage study. MITEI members Equinor and Shell provided additional support.

  •

    New England renewables + Canadian hydropower

    The urgent need to cut carbon emissions has prompted a growing number of U.S. states to commit to achieving 100 percent clean electricity by 2040 or 2050. But figuring out how to meet those commitments and still have a reliable and affordable power system is a challenge. Wind and solar installations will form the backbone of a carbon-free power system, but what technologies can meet electricity demand when those intermittent renewable sources are not adequate?

    In general, the options being discussed include nuclear power, natural gas with carbon capture and storage (CCS), and energy storage technologies such as new and improved batteries and chemical storage in the form of hydrogen. But in the northeastern United States, there is one more possibility being proposed: electricity imported from hydropower plants in the neighboring Canadian province of Quebec.

    The proposition makes sense. Those plants can produce as much electricity as about 40 large nuclear power plants, and some power generated in Quebec already comes to the Northeast. So, there could be abundant additional supply to fill any shortfall when New England’s intermittent renewables underproduce. However, U.S. wind and solar investors view Canadian hydropower as a competitor and argue that reliance on foreign supply discourages further U.S. investment.

    Two years ago, three researchers affiliated with the MIT Center for Energy and Environmental Policy Research (CEEPR) — Emil Dimanchev SM ’18, now a PhD candidate at the Norwegian University of Science and Technology; Joshua Hodge, CEEPR’s executive director; and John Parsons, a senior lecturer in the MIT Sloan School of Management — began wondering whether viewing Canadian hydro as another source of electricity might be too narrow. “Hydropower is a more-than-hundred-year-old technology, and plants are already built up north,” says Dimanchev. “We might not need to build something new. We might just need to use those plants differently or to a greater extent.”

    So the researchers decided to examine the potential role and economic value of Quebec’s hydropower resource in a future low-carbon system in New England. Their goal was to help inform policymakers, utility decision-makers, and others about how best to incorporate Canadian hydropower into their plans and to determine how much time and money New England should spend to integrate more hydropower into its system. What they found out was surprising, even to them.

    The analytical methods

    To explore possible roles for Canadian hydropower to play in New England’s power system, the MIT researchers first needed to predict how the regional power system might look in 2050 — both the resources in place and how they would be operated, given any policy constraints. To perform that analysis, they used GenX, a modeling tool originally developed by Jesse Jenkins SM ’14, PhD ’18 and Nestor Sepulveda SM ’16, PhD ’20 while they were researchers at the MIT Energy Initiative (MITEI).

    The GenX model is designed to support decision-making related to power system investment and real-time operation and to examine the impacts of possible policy initiatives on those decisions. Given information on current and future technologies — different kinds of power plants, energy storage technologies, and so on — GenX calculates the combination of equipment and operating conditions that can meet a defined future demand at the lowest cost. The GenX modeling tool can also incorporate specified policy constraints, such as limits on carbon emissions.
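
    As a rough sketch of the kind of least-cost choice such a model makes, the toy function below dispatches technologies in merit order (cheapest marginal cost first) subject to a carbon-emissions cap. It is not GenX, which solves a full investment-and-operations optimization; all costs, capacities, and emission rates here are illustrative.

```python
# Minimal merit-order dispatch sketch (NOT GenX itself): meet demand
# with the cheapest available technologies first, while keeping total
# CO2 under an optional cap. All numbers are illustrative.

def merit_order_dispatch(techs, demand_mw, co2_cap=None):
    """Return ({tech: MW dispatched}, unserved MW)."""
    dispatch, remaining, co2 = {}, demand_mw, 0.0
    # Sort by marginal cost, cheapest first.
    for name, cost, cap_mw, tons_per_mwh in sorted(techs, key=lambda t: t[1]):
        if remaining <= 0:
            break
        mw = min(cap_mw, remaining)
        if co2_cap is not None and tons_per_mwh > 0:
            # Limit emitting generation so cumulative CO2 stays under the cap.
            mw = min(mw, max(0.0, (co2_cap - co2) / tons_per_mwh))
        dispatch[name] = mw
        remaining -= mw
        co2 += mw * tons_per_mwh
    return dispatch, remaining

# (name, marginal cost $/MWh, capacity MW, tCO2/MWh) — all invented
techs = [("wind", 0, 400, 0.0), ("solar", 0, 300, 0.0),
         ("hydro", 5, 200, 0.0), ("gas", 40, 500, 0.4)]

dispatch, unserved = merit_order_dispatch(techs, demand_mw=1000, co2_cap=20)
```

    With a binding cap, the gas plant is curtailed (50 MW instead of the 100 MW needed) and 50 MW of demand goes unserved, illustrating why a real planning model would instead build more clean capacity or storage to close the gap.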

    For their study, Dimanchev, Hodge, and Parsons set parameters in the GenX model using data and assumptions derived from a variety of sources to build a representation of the interconnected power systems in New England, New York, and Quebec. (They included New York to account for that state’s existing demand on the Canadian hydro resources.) For data on the available hydropower, they turned to Hydro-Québec, the public utility that owns and operates most of the hydropower plants in Quebec.

    It’s standard in such analyses to include real-world engineering constraints on equipment, such as how quickly certain power plants can be ramped up and down. With help from Hydro-Québec, the researchers also put hour-to-hour operating constraints on the hydropower resource.

    Most of Hydro-Québec’s plants are “reservoir hydropower” systems. In them, when power isn’t needed, the flow on a river is restrained by a dam downstream of a reservoir, and the reservoir fills up. When power is needed, the dam is opened, and the water in the reservoir runs through downstream pipes, turning turbines and generating electricity. Proper management of such a system requires adhering to certain operating constraints. For example, to prevent flooding, reservoirs must not be allowed to overfill — especially prior to spring snowmelt. And generation can’t be increased too quickly because a sudden flood of water could erode the river edges or disrupt fishing or water quality.
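
    The operating constraints described above can be sketched as a simple hourly update rule. This is a simplified illustration with invented units and limits, not Hydro-Québec's actual rules or parameters: it enforces a ramp-rate limit on generation and keeps the reservoir volume between a minimum level and an overfill bound.

```python
# Hedged sketch of reservoir-hydro operating constraints: limit how
# fast generation changes, keep the reservoir within volume bounds.
# All units and limits are illustrative, not Hydro-Québec's.

def step(volume, gen_prev, inflow, gen_request,
         vol_min=100.0, vol_max=1000.0, ramp_limit=50.0, vol_per_mwh=1.0):
    """Advance one hour; return (new_volume, feasible_generation_mw)."""
    # Ramp constraint: generation may change by at most ramp_limit MW/h,
    # avoiding a sudden flood of water downstream.
    gen = max(gen_prev - ramp_limit, min(gen_request, gen_prev + ramp_limit))
    # Never draw the reservoir below its minimum volume.
    gen = min(gen, (volume + inflow - vol_min) / vol_per_mwh)
    gen = max(gen, 0.0)
    volume = volume + inflow - gen * vol_per_mwh
    # Overfill protection: excess water spills without generating.
    volume = min(volume, vol_max)
    return volume, gen

# Starting from zero output, a 200 MW request is capped by the ramp limit.
vol, gen = step(volume=900.0, gen_prev=0.0, inflow=30.0, gen_request=200.0)
```

    Here the ramp constraint binds: only 50 MW of the requested 200 MW is delivered in the first hour, and the reservoir drains accordingly.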

    Based on projections from the National Renewable Energy Laboratory and elsewhere, the researchers specified electricity demand for every hour of the year 2050, and the model calculated the cost-optimal mix of technologies and system operating regime that would satisfy that hourly demand, including the dispatch of the Hydro-Québec hydropower system. In addition, the model determined how electricity would be traded among New England, New York, and Quebec.

    Effects of decarbonization limits on technology mix and electricity trading

    To examine the impact of the emissions-reduction mandates in the New England states, the researchers ran the model assuming reductions in carbon emissions between 80 percent and 100 percent relative to 1990 levels. The results of those runs show that, as emissions limits get more stringent, New England uses more wind and solar and extends the lifetime of its existing nuclear plants. To balance the intermittency of the renewables, the region uses natural gas plants, demand-side management, battery storage (modeled as lithium-ion batteries), and trading with Quebec’s hydropower-based system. Meanwhile, the optimal mix in Quebec is mostly composed of existing hydro generation. Some solar is added, but new reservoirs are built only if renewable costs are assumed to be very high.

    The most significant — and perhaps surprising — outcome is that in all the scenarios, the hydropower-based system of Quebec is not only an exporter but also an importer of electricity, with the direction of flow on the Quebec-New England transmission lines changing over time.

    Historically, energy has always flowed from Quebec to New England. The model results for 2018 show electricity flowing from north to south, with the quantity capped by the current transmission capacity limit of 2,225 megawatts (MW).

    An analysis for 2050, assuming that New England decarbonizes 90 percent and the capacity of the transmission lines remains the same, finds electricity flows going both ways. Flows from north to south still dominate. But for nearly 3,500 of the 8,760 hours of the year, electricity flows in the opposite direction — from New England to Quebec. And for more than 2,200 of those hours, the flow going north is at the maximum the transmission lines can carry.

    The direction of flow is motivated by economics. When renewable generation is abundant in New England, prices are low, and it’s cheaper for Quebec to import electricity from New England and conserve water in its reservoirs. Conversely, when New England’s renewables are scarce and prices are high, New England imports hydro-generated electricity from Quebec.

    So rather than simply delivering electricity, Canadian hydro provides a means of storing the electricity generated by the intermittent renewables in New England.

    “We see this in our modeling because when we tell the model to meet electricity demand using these resources, the model decides that it is cost-optimal to use the reservoirs to store energy rather than anything else,” says Dimanchev. “We should be sending the energy back and forth, so the reservoirs in Quebec are in essence a battery that we use to store some of the electricity produced by our intermittent renewables and discharge it when we need it.”

    Given that outcome, the researchers decided to explore the impact of expanding the transmission capacity between New England and Quebec. Building transmission lines is always contentious, but what would be the impact if it could be done?

    Their model results show that when transmission capacity is increased from 2,225 MW to 6,225 MW, flows in both directions are greater, and in both cases the flow is at the new maximum for more than 1,000 hours.

    Results of the analysis thus confirm that the economic response to expanded transmission capacity is more two-way trading. To continue the battery analogy, more transmission capacity to and from Quebec effectively increases the rate at which the battery can be charged and discharged.

    Effects of two-way trading on the energy mix

    What impact would the advent of two-way trading have on the mix of energy-generating sources in New England and Quebec in 2050?

    Assuming current transmission capacity, in New England, the change from one-way to two-way trading increases both wind and solar power generation and to a lesser extent nuclear; it also decreases the use of natural gas with CCS. The hydro reservoirs in Canada can provide long-duration storage — over weeks, months, and even seasons — so there is less need for natural gas with CCS to cover any gaps in supply. The level of imports is slightly lower, but now there are also exports. Meanwhile, in Quebec, two-way trading reduces solar power generation, and the use of wind disappears. Exports are roughly the same, but now there are imports as well. Thus, two-way trading reallocates renewables from Quebec to New England, where it’s more economical to install and operate solar and wind systems.

    Another analysis examined the impact on the energy mix of assuming two-way trading plus expanded transmission capacity. For New England, greater transmission capacity allows wind, solar, and nuclear to expand further; natural gas with CCS all but disappears; and both imports and exports increase significantly. In Quebec, solar decreases still further, and both exports and imports of electricity increase.

    Those results assume that the New England power system decarbonizes by 99 percent in 2050 relative to 1990 levels. But at 90 percent and even 80 percent decarbonization levels, the model concludes that natural gas capacity decreases with the addition of new transmission relative to the current transmission scenario. Existing plants are retired, and new plants are not built as they are no longer economically justified. Since natural gas plants are the only source of carbon emissions in the 2050 energy system, the researchers conclude that the greater access to hydro reservoirs made possible by expanded transmission would accelerate the decarbonization of the electricity system.

    Effects of transmission changes on costs

    The researchers also explored how two-way trading with expanded transmission capacity would affect costs in New England and Quebec, assuming 99 percent decarbonization in New England. New England’s savings on fixed costs (investments in new equipment) are largely due to a decreased need to invest in more natural gas with CCS, and its savings on variable costs (operating costs) are due to a reduced need to run those plants. Quebec’s savings on fixed costs come from a reduced need to invest in solar generation. The increase in cost — borne by New England — reflects the construction and operation of the increased transmission capacity. The net benefit for the region is substantial.

    Thus, the analysis shows that everyone wins as transmission capacity increases — and the benefit grows as the decarbonization target tightens. At 99 percent decarbonization, the overall New England-Quebec region pays about $21 per megawatt-hour (MWh) of electricity with today’s transmission capacity but only $18/MWh with expanded transmission. Assuming 100 percent reduction in carbon emissions, the region pays $29/MWh with current transmission capacity and only $22/MWh with expanded transmission.

    Addressing misconceptions

    These results shed light on several misconceptions that policymakers, supporters of renewable energy, and others tend to have.

    The first misconception is that New England renewables and Canadian hydropower are competitors. The modeling results instead show that they’re complementary. When the power systems in New England and Quebec work together as an integrated system, the Canadian reservoirs are used part of the time to store the renewable electricity. And with more access to hydropower storage in Quebec, there’s generally more renewable investment in New England.

    The second misconception arises when policymakers refer to Canadian hydro as a “baseload resource,” which implies a dependable source of electricity — particularly one that supplies power all the time. “Our study shows that by viewing Canadian hydropower as a baseload source of electricity — or indeed a source of electricity at all — you’re not taking full advantage of what that resource can provide,” says Dimanchev. “What we show is that Quebec’s reservoir hydro can provide storage, specifically for wind and solar. It’s a solution to the intermittency problem that we foresee in carbon-free power systems for 2050.”

    While the MIT analysis focuses on New England and Quebec, the researchers believe that their results may have wider implications. As power systems in many regions expand production of renewables, the value of storage grows. Some hydropower systems have storage capacity that has not yet been fully utilized and could be a good complement to renewable generation. Taking advantage of that capacity can lower the cost of deep decarbonization and help move some regions toward a decarbonized supply of electricity.

    This research was funded by the MIT Center for Energy and Environmental Policy Research, which is supported in part by a consortium of industry and government associates.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  •

    New power sources

    In the mid-1990s, a few energy activists in Massachusetts had a vision: What if citizens had choice about the energy they consumed? Instead of being force-fed electricity sources selected by a utility company, what if cities, towns, and groups of individuals could purchase power that was cleaner and cheaper?

    The small group of activists — including a journalist, the head of a small nonprofit, a local county official, and a legislative aide — drafted model legislation along these lines that reached the state Senate in 1995. The measure stalled out. In 1997, they tried again. Massachusetts legislators were busy passing a bill to reform the state power industry in other ways, and this time the activists got their low-profile policy idea included in it — as a provision so marginal it only got a brief mention in The Boston Globe’s coverage of the bill.

    Today, this idea, known as Community Choice Aggregation (CCA), serves roughly 36 million people in the U.S., or 11 percent of the population. Under CCA, local residents purchase energy as a bloc, with certain specifications attached; over 1,800 communities across six states have adopted the policy, and others are testing pilot programs. From such modest beginnings, CCA has become a big deal.

    “It started small, then had a profound impact,” says David Hsu, an associate professor at MIT who studies energy policy issues. Indeed, the trajectory of CCA is so striking that Hsu has researched its origins, combing through a variety of archival sources and interviewing the principals. He has now written a journal article examining the lessons and implications of this episode.

    Hsu’s paper, “Straight out of Cape Cod: The origin of community choice aggregation and its spread to other states,” appears in advance online form in the journal Energy Research & Social Science, and in the April print edition of the publication.

    “I wanted to show people that a small idea could take off into something big,” Hsu says. “For me that’s a really hopeful democratic story, where people could do something without feeling they had to take on a whole giant system that wouldn’t immediately respond to only one person.”

    Local control

    Aggregating consumers to purchase energy was not a novelty in the 1990s. Companies within many industries have long joined forces to gain purchasing power for energy. And Rhode Island tried a form of CCA slightly earlier than Massachusetts did.

    However, it is the Massachusetts model that has been adopted widely: Cities or towns can require power purchases from, say, renewable sources, while individual citizens can opt out of those agreements. More state funding (for things like efficiency improvements) is redirected to cities and towns as well.

    In both ways, CCA policies provide more local control over energy delivery. They have been adopted in California, Illinois, New Jersey, New York, and Ohio. Meanwhile, Maryland, New Hampshire, and Virginia have recently passed similar legislation (also known as municipal or government aggregation, or community choice energy).

    For cities and towns, Hsu says, “Maybe you don’t own outright the whole energy system, but let’s take away one particular function of the utility, which is procurement.”

    That vision motivated a handful of Massachusetts activists and policy experts in the 1990s, including journalist Scott Ridley, who co-wrote a 1986 book, “Power Struggle,” with the University of Massachusetts historian Richard Rudolph and had spent years thinking about ways to reconfigure the energy system; Matt Patrick, chair of a local nonprofit focused on energy efficiency; Rob O’Leary, a local official in Barnstable County, on Cape Cod; and Paul Fenn, a staff aide to the state senator who chaired the legislature’s energy committee.

    “It started with these political activists,” Hsu says.

    Hsu’s research emphasizes several lessons to be learned from the fact the legislation first failed in 1995, before unexpectedly passing in 1997. Ridley remained an author and public figure; Patrick and O’Leary would each eventually be elected to the state legislature, but only after 2000; and Fenn had left his staff position by 1995 and worked with the group long-distance from California (where he became a long-term advocate about the issue). Thus, at the time CCA passed in 1997, none of its main advocates held an insider position in state politics. How did it succeed?

    Lessons of the legislation

    In the first place, Hsu believes, a legislative process resembles what the political theorist John Kingdon has called a “multiple streams framework,” in which “many elements of the policymaking process are separate, meandering, and uncertain.” Legislation isn’t entirely controlled by big donors or other interest groups, and “policy entrepreneurs” can find success in unpredictable windows of opportunity.

    “It’s the most true-to-life theory,” says Hsu.  

    Second, Hsu emphasizes, finding allies is crucial. In the case of CCA, that came about in a few ways. Many towns in Massachusetts have a town-level legislature known as Town Meeting; the activists got those bodies in about 20 towns to pass nonbinding resolutions in favor of community choice. O’Leary helped create a regional county commission in Barnstable County, while Patrick crafted an energy plan for it. High electricity rates were affecting all of Cape Cod at the time, so community choice also served as an economic benefit for Cape Cod’s working-class service-industry employees. The activists also found that adding an opt-out clause to the 1997 version appealed to legislators, who would support CCA if their constituents were not all bound to it.

    “You really have to stick with it, and you have to look for coalition partners,” Hsu says. “It’s fun to hear them [the activists] talk about going to Town Meetings, and how they tried to build grassroots support. If you look for allies, you can get things done. [I hope] the people can see [themselves] in other people’s activism even if they’re not exactly the same as you are.”

    By 1997, the CCA legislation had more geographic support, was understood as both an economic and environmental benefit for voters, and would not force membership upon anyone. Through media interviews and conferences, the activists had also found traction in the principle of citizen choice.

    “It’s interesting to me how the rhetoric of [citizen] choice and the rhetoric of democracy proves to be effective,” Hsu says. “Legislators feel like they have to give everyone some choice. And it expresses a collective desire for a choice that the utilities take away by being monopolies.”

    He adds: “We need to set out principles that shape systems, rather than just taking the system as a given and trying to justify principles that are 150 years old.”

    One last element in CCA passage was good timing. The governor and legislature in Massachusetts were already seeking a “grand bargain” to restructure electricity delivery and loosen the grip of utilities; the CCA fit in as part of this larger reform movement. Still, CCA adoption has been gradual; about one-third of Massachusetts towns with CCA have only adopted it within the last five years.

    CCA’s growth does not mean it’s invulnerable to repeal or utility-funded opposition efforts — “In California there’s been pretty intense pushback,” Hsu notes. Still, Hsu concludes, the fact that a handful of activists could start a national energy-policy movement is a useful reminder that everyone’s actions can make a difference.

    “It wasn’t like they went charging through a barricade; they just found a way around it,” Hsu says. “I want my students to know you can organize and rethink the future. It takes some commitment and work over a long time.”

  •

    Preparing global online learners for the clean energy transition

    After a career devoted to making the electric power system more efficient and resilient, Marija Ilic came to MIT in 2018 eager not just to extend her research in new directions, but to prepare a new generation for the challenges of the clean-energy transition.

    To that end, Ilic, a senior research scientist in MIT’s Laboratory for Information and Decision Systems (LIDS) and a senior staff member at Lincoln Laboratory in the Energy Systems Group, designed an edX course that captures her methods and vision: Principles of Modeling, Simulation, and Control for Electric Energy Systems.

    EdX is a provider of massive open online courses produced in partnership with MIT, Harvard University, and other leading universities. Ilic’s class made its online debut in June 2021, running for 12 weeks, and it is one of an expanding set of online courses funded by the MIT Energy Initiative (MITEI) to provide global learners with a view of the shifting energy landscape.

    Ilic first taught a version of the class while a professor at Carnegie Mellon University, rolled out a second iteration at MIT just as the pandemic struck, and then revamped the class for its current online presentation. But no matter the course location, Ilic focuses on a central theme: “With the need for decarbonization, which will mean accommodating new energy sources such as solar and wind, we must rethink how we operate power systems,” she says. “This class is about how to pose and solve the kinds of problems we will face during this transformation.”

    Hot global topic

    The edX class has been designed to welcome a broad mix of students. In summer 2021, more than 2,000 signed up from 109 countries, ranging from high school students to retirees. In surveys, some said they were drawn to the class by the opportunity to advance their knowledge of modeling. Many others hoped to learn about the move to decarbonize energy systems.

    “The energy transition is a hot topic everywhere in the world, not just in the U.S.,” says teaching assistant Miroslav Kosanic. “In the class, there were veterans of the oil industry and others working in investment and finance jobs related to energy who wanted to understand the potential impacts of changes in energy systems, as well as students from different fields and professors seeking to update their curricula — all gathered into a community.”

    Kosanic, who is currently a PhD student at MIT in electrical engineering and computer science, had taken this class remotely in the spring semester of 2021, while he was still in college in Serbia. “I knew I was interested in power systems, but this course was eye-opening for me, showing how to apply control theory and to model different components of these systems,” he says. “I finished the course and thought, this is just the beginning, and I’d like to learn a lot more.” Kosanic performed so well online that Ilic recruited him to MIT, as a LIDS researcher and edX course teaching assistant, where he grades homework assignments and moderates a lively learner community forum.

    A platform for problem-solving

    The course starts with fundamental concepts in electric power systems operations and management, and it steadily adds layers of complexity, posing real-world problems along the way. Ilic explains how voltage travels from point to point across transmission lines and how grid managers modulate systems to ensure that enough, but not too much, electricity flows. “To deliver power from one location to the next one, operators must constantly make adjustments to ensure that the receiving end can handle the voltage transmitted, optimizing voltage to avoid overheating the wires,” she says.

    In her early lectures, Ilic notes the fundamental constraints of current grid operations, organized around a hierarchy of regional managers dealing with a handful of very large oil, gas, coal, and nuclear power plants, and occupied primarily with the steady delivery of megawatt-hours to far-flung customers. Historically, this top-down structure has not done a good job of preventing energy losses due to suboptimal transmission conditions or outages related to extreme weather events.

    These issues promise to grow for grid operators as distributed resources such as solar and wind enter the picture, Ilic tells students. In the United States, under new rules dictated by the Federal Energy Regulatory Commission, utilities must begin to integrate the distributed, intermittent electricity produced by wind farms, solar complexes, and even by homes and cars, which flows at voltages much lower than electricity produced by large power plants.

    Finding ways to optimize existing energy systems and to accommodate low- and zero-carbon energy sources requires powerful new modes of analysis and problem-solving. This is where Ilic’s toolbox comes in: a mathematical modeling strategy and companion software that simplifies the input and output of electrical systems, no matter how large or how small. “In the last part of the course, we take up modeling different solutions to electric service in a way that is technology-agnostic, where it only matters how much a black-box energy source produces, and the rates of production and consumption,” says Ilic.

    This black-box modeling approach, which Ilic pioneered in her research, enables students to see, for instance, “what is happening with their own household consumption, and how it affects the larger system,” says Rupamathi Jaddivada PhD ’20, a co-instructor of the edX class and a postdoc in electrical engineering and computer science. “Without getting lost in details of current or voltage, or how different components work, we think about electric energy systems as dynamical components interacting with each other, at different spatial scales.” This means that with just a basic knowledge of physical laws, high school and undergraduate students can take advantage of the course “and get excited about cleaner and more reliable energy,” adds Ilic.
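One way to picture this technology-agnostic view is a sketch like the following (our own simplified assumption, not Ilic’s actual software): each component is described only by its net power, and a simple balance check shows how one household’s consumption affects the larger system.

```python
# Sketch of a technology-agnostic "black box" view (illustrative only, not
# the course's modeling software): each component is characterized solely by
# its net power in watts -- positive for production, negative for
# consumption -- regardless of the technology inside the box.

from dataclasses import dataclass

@dataclass
class BlackBox:
    name: str
    net_power_w: float  # + produces, - consumes

def system_imbalance(components: list[BlackBox]) -> float:
    """Power the wider grid must supply (+) or absorb (-) to stay balanced."""
    return -sum(c.net_power_w for c in components)

household = [
    BlackBox("rooftop solar", 4_000.0),
    BlackBox("household load", -3_200.0),
    BlackBox("EV charger", -7_000.0),
]

# Positive imbalance -> this neighborhood must import from the larger system.
print(f"Imbalance: {system_imbalance(household):.0f} W")
```

The same abstraction works at any spatial scale: a “component” can be a single appliance, a house, or an entire neighborhood aggregated into one box.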

    What Jaddivada and Ilic describe as “zoom in, zoom out” systems thinking leverages the ubiquity of digital communications and the so-called “internet of things.” Energy devices of all scales can link directly to other devices in a network, instead of only to a central operations hub, allowing for real-time adjustments in voltage, for instance, that vastly improve the potential for optimizing energy flows.

    “In the course, we discuss how information exchange will be key to integrating new end-to-end energy resources and, because of this interactivity, how we can model better ways of controlling entire energy networks,” says Ilic. “It’s a big lesson of the course to show the value of information and software in enabling us to decarbonize the system and build resilience, rather than just building hardware.”
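The value of that information exchange can be sketched in a toy control loop (our own assumption, not the course’s actual control scheme): flexible devices share a single broadcast signal, the system imbalance, and each one nudges its own output locally until supply matches demand, with no central operator dictating setpoints.

```python
# Toy decentralized balancing loop (illustrative assumption, not the
# course's method): devices see only one shared signal -- the current
# imbalance -- and each adjusts its own output proportionally.

outputs = {"battery": 0.0, "gas peaker": 0.0}  # adjustable devices, in kW
demand_kw = 90.0                               # fixed load to be served
GAIN = 0.4                                     # per-step response gain

for step in range(50):
    imbalance = demand_kw - sum(outputs.values())  # broadcast to all devices
    if abs(imbalance) < 1e-6:
        break
    for device in outputs:
        # Each device makes a purely local adjustment toward balance.
        outputs[device] += GAIN * imbalance / len(outputs)

print({device: round(kw, 1) for device, kw in outputs.items()})
```

Each pass shrinks the imbalance by a constant factor, so the devices converge on a shared split of the load, which is the kind of software-enabled coordination the course contrasts with hardware-only fixes.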

    By the end of the course, students are invited to pursue independent research projects. Some might model the impact of a new energy source on a local grid or investigate different options for reducing energy loss in transmission lines.

    “It would be nice if they see that we don’t have to rely on hardware or large-scale solutions to bring about improved electric service and a clean and resilient grid, but instead on information technologies such as smart components exchanging data in real time, or microgrids in neighborhoods that sustain themselves even when they lose power,” says Ilic. “I hope students walk away convinced that it does make sense to rethink how we operate our basic power systems and that with systematic, physics-based modeling and IT methods we can enable better, more flexible operation in the future.”

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.