More stories

  • Absent legislative victory, the president can still meet US climate goals

    The most recent United Nations climate change report indicates that without significant action to mitigate global warming, the extent and magnitude of climate impacts — from floods to droughts to the spread of disease — could outpace the world’s ability to adapt to them. The latest effort to introduce meaningful climate legislation in the United States Congress, the Build Back Better bill, has stalled. The climate package in that bill — $555 billion in funding for climate resilience and clean energy — aims to reduce U.S. greenhouse gas emissions by about 50 percent below 2005 levels by 2030, the nation’s current Paris Agreement pledge. With prospects of passing a standalone climate package in the Senate far from assured, is there another pathway to fulfilling that pledge?

    Recent detailed legal analysis shows that there is at least one viable option for the United States to achieve the 2030 target without legislative action. Under Section 115 on International Air Pollution of the Clean Air Act, the U.S. Environmental Protection Agency (EPA) could assign emissions targets to the states that collectively meet the national goal. The president could simply issue an executive order to empower the EPA to do just that. But would that be prudent?

    A new study led by researchers at the MIT Joint Program on the Science and Policy of Global Change explores how, under a federally coordinated carbon dioxide emissions cap-and-trade program aligned with the U.S. Paris Agreement pledge and implemented through Section 115 of the Clean Air Act, the EPA might allocate emissions cuts among states. Recognizing that the Biden or any future administration considering this strategy would need to carefully weigh its benefits against its potential political risks, the study highlights the policy’s net economic benefits to the nation.

    The researchers calculate those net benefits by subtracting the estimated total cost of reducing carbon dioxide emissions under the policy from the expenditures the policy’s implementation would avoid: spending on health care due to particulate air pollution, and costs borne by society at large due to climate impacts.
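
    The accounting behind that figure is simple in form. The sketch below restates it with made-up placeholder numbers, not values from the study, just to make the arithmetic explicit: the net benefit is the avoided health and climate expenditures minus the policy’s abatement cost.

    ```python
    # Minimal sketch of the net-benefit accounting described above.
    # All dollar figures are hypothetical placeholders, not values from the study.

    def net_benefit(abatement_cost, avoided_health_costs, avoided_climate_damages):
        """Net benefit = avoided expenditures minus the policy's abatement cost."""
        return (avoided_health_costs + avoided_climate_damages) - abatement_cost

    # Example with invented numbers (billions of dollars in 2030):
    print(net_benefit(abatement_cost=60.0,
                      avoided_health_costs=80.0,
                      avoided_climate_damages=90.0))   # -> 110.0
    ```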

    Assessing three carbon dioxide emissions allocation strategies for implementing Section 115, each with legal precedent, in which cap-and-trade program revenue is returned to the states and distributed to state residents on an equal per-capita basis, the study finds that, at the national level, the economic net benefits are substantial, ranging from $70 billion to $150 billion in 2030. The results appear in the journal Environmental Research Letters.

    “Our findings not only show significant net gains to the U.S. economy under a national emissions policy implemented through the Clean Air Act’s Section 115,” says Mei Yuan, a research scientist at the MIT Joint Program and lead author of the study. “They also show the policy impact on consumer costs may differ across states depending on the choice of allocation strategy.”

    The national price on carbon needed to achieve the policy’s emissions target, as well as the policy’s ultimate cost to consumers, are substantially lower than those found in studies a decade earlier, although in line with other recent studies. The researchers speculate that this is largely due to ongoing expansion of ambitious state policies in the electricity sector and declining renewable energy costs. The policy is also progressive, consistent with earlier studies, in that equal lump-sum distribution of allowance revenue to state residents generally leads to net benefits to lower-income households. Regional disparities in consumer costs can be moderated by the allocation of allowances among states.

    State-by-state emissions estimates for the study are derived from MIT’s U.S. Regional Energy Policy model, with electricity sector detail of the Renewable Energy Development System model developed by the U.S. National Renewable Energy Laboratory; air quality benefits are estimated using U.S. EPA and other models; and the climate benefits estimate is based on the social cost of carbon, the U.S. federal government’s assessment of the economic damages that would result from emitting one additional ton of carbon dioxide into the atmosphere (currently $51/ton, adjusted for inflation). 

    “In addition to illustrating the economic, health, and climate benefits of a Section 115 implementation, our study underscores the advantages of a policy that imposes a uniform carbon price across all economic sectors,” says John Reilly, former co-director of the MIT Joint Program and a study co-author. “A national carbon price would serve as a major incentive for all sectors to decarbonize.”

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference between the extremely high temperature of the plasma core and the relatively cool temperature of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimize just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was then used to guide an optimized search within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
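
    As a rough illustration of that workflow, the sketch below runs a surrogate-assisted optimization loop with a cheap analytic function standing in for the expensive first-principles runs. It is not the authors’ code: the Gaussian-process surrogate, the objective, and every parameter here are invented placeholders chosen only to show the iterate-fit-verify pattern.

    ```python
    # A minimal surrogate-assisted optimization loop, in the spirit of the approach
    # described above (not the authors' actual code). The expensive turbulence run
    # is stood in for by a cheap placeholder function; the surrogate is a
    # Gaussian-process fit that proposes where to run the expensive code next.

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    rng = np.random.default_rng(0)

    def expensive_simulation(x):
        # Placeholder for a first-principles run (e.g., a local turbulence flux
        # calculation); in reality each evaluation is extremely costly.
        return np.sin(3 * x) + 0.5 * x**2

    # Seed the surrogate with a handful of expensive evaluations.
    X = rng.uniform(-2, 2, size=(5, 1))
    y = expensive_simulation(X).ravel()

    for iteration in range(10):
        surrogate = GaussianProcessRegressor(kernel=RBF(length_scale=1.0)).fit(X, y)
        # Search the surrogate cheaply over many candidate points...
        candidates = np.linspace(-2, 2, 1000).reshape(-1, 1)
        mean, std = surrogate.predict(candidates, return_std=True)
        x_next = candidates[np.argmin(mean - std)]   # favor low mean and high uncertainty
        # ...then verify the proposed optimum with one more expensive run.
        X = np.vstack([X, [x_next]])
        y = np.append(y, expensive_simulation(x_next))

    best = X[np.argmin(y)]
    print("best parameters found:", best, "objective:", y.min())
    ```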

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.”

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • What choices does the world need to make to keep global warming below 2 C?

    When the 2015 Paris Agreement set a long-term goal of keeping global warming “well below 2 degrees Celsius, compared to pre-industrial levels” to avoid the worst impacts of climate change, it did not specify how its nearly 200 signatory nations could collectively achieve that goal. Each nation was left to its own devices to reduce greenhouse gas emissions in alignment with the 2 C target. Now a new modeling strategy developed at the MIT Joint Program on the Science and Policy of Global Change that explores hundreds of potential future development pathways provides new insights on the energy and technology choices needed for the world to meet that target.

    Described in a study appearing in the journal Earth’s Future, the new strategy combines two well-known computer modeling techniques to scope out the energy and technology choices needed over the coming decades to reduce emissions sufficiently to achieve the Paris goal.

    The first technique, Monte Carlo analysis, quantifies uncertainty levels for dozens of energy and economic indicators including fossil fuel availability, advanced energy technology costs, and population and economic growth; feeds that information into a multi-region, multi-economic-sector model of the world economy that captures the cross-sectoral impacts of energy transitions; and runs that model hundreds of times to estimate the likelihood of different outcomes. The MIT study focuses on projections through the year 2100 of economic growth and emissions for different sectors of the global economy, as well as energy and technology use.
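
    A schematic version of that Monte Carlo step, with a toy function standing in for the multi-region, multi-sector economic model and purely illustrative uncertainty ranges, might look like the following sketch.

    ```python
    # A schematic Monte Carlo loop in the spirit described above (not the Joint
    # Program's actual model). Uncertain inputs are drawn from assumed
    # distributions and fed into a placeholder "economy model"; repeating this
    # hundreds of times yields a distribution of outcomes rather than one forecast.

    import numpy as np

    rng = np.random.default_rng(42)
    N_RUNS = 500

    def economy_model(gdp_growth, renewable_cost, fossil_price):
        # Toy stand-in for a multi-region, multi-sector model: returns a crude
        # "2100 emissions index" that falls with cheap renewables and high fossil prices.
        return 100 * (1 + gdp_growth) ** 80 * renewable_cost / fossil_price

    results = []
    for _ in range(N_RUNS):
        gdp_growth = rng.normal(0.02, 0.005)         # assumed uncertainty ranges,
        renewable_cost = rng.lognormal(0.0, 0.3)     # purely illustrative
        fossil_price = rng.lognormal(0.0, 0.4)
        results.append(economy_model(gdp_growth, renewable_cost, fossil_price))

    results = np.array(results)
    print("median emissions index:", np.median(results))
    print("5th-95th percentile range:", np.percentile(results, [5, 95]))
    ```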

    The second technique, scenario discovery, uses machine learning tools to screen databases of model simulations in order to identify outcomes of interest and their conditions for occurring. The MIT study applies these tools in a unique way by combining them with the Monte Carlo analysis to explore how different outcomes are related to one another (e.g., do low-emission outcomes necessarily involve large shares of renewable electricity?). This approach can also identify individual scenarios, out of the hundreds explored, that result in specific combinations of outcomes of interest (e.g., scenarios with low emissions, high GDP growth, and limited impact on electricity prices), and also provide insight into the conditions needed for that combination of outcomes.
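
    The sketch below illustrates the scenario-discovery step on a synthetic ensemble, using a simple decision tree as the screening tool; the variables, thresholds, and data are invented for illustration and are not taken from the study.

    ```python
    # Schematic scenario discovery over an ensemble of model runs (illustrative
    # only). A simple classifier is trained to describe which input conditions
    # tend to produce an outcome of interest -- here, "low emissions AND high GDP
    # growth" -- mimicking the screening of a Monte Carlo database.

    import numpy as np
    from sklearn.tree import DecisionTreeClassifier, export_text

    rng = np.random.default_rng(1)
    n = 500

    # Synthetic stand-ins for the ensemble's inputs and outputs.
    renewable_cost = rng.uniform(0.5, 1.5, n)
    carbon_price = rng.uniform(0, 200, n)
    gdp_growth = rng.normal(0.02, 0.005, n)
    emissions = renewable_cost * 50 - 0.2 * carbon_price + rng.normal(0, 5, n)

    # Outcome of interest: low emissions together with healthy GDP growth.
    of_interest = (emissions < 20) & (gdp_growth > 0.018)

    X = np.column_stack([renewable_cost, carbon_price, gdp_growth])
    tree = DecisionTreeClassifier(max_depth=2).fit(X, of_interest)
    print(export_text(tree, feature_names=["renewable_cost", "carbon_price", "gdp_growth"]))
    ```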

    Using this unique approach, the MIT Joint Program researchers find several possible patterns of energy and technology development under a specified long-term climate target or economic outcome.

    “This approach shows that there are many pathways to a successful energy transition that can be a win-win for the environment and economy,” says Jennifer Morris, an MIT Joint Program research scientist and the study’s lead author. “Toward that end, it can be used to guide decision-makers in government and industry to make sound energy and technology choices and avoid biases in perceptions of what ‘needs’ to happen to achieve certain outcomes.”

    For example, while achieving the 2 C goal, the global level of combined wind and solar electricity generation by 2050 could be less than three times or more than 12 times the current level (which is just over 2,000 terawatt hours). These are very different energy pathways, but both can be consistent with the 2 C goal. Similarly, there are many different energy mixes that can be consistent with maintaining high GDP growth in the United States while also achieving the 2 C goal, with different possible roles for renewables, natural gas, carbon capture and storage, and bioenergy. The study finds renewables to be the most robust electricity investment option, with sizable growth projected under each of the long-term temperature targets explored.

    The researchers also find that long-term climate targets have little impact on economic output for most economic sectors through 2050, but do require each sector to significantly accelerate reduction of its greenhouse gas emissions intensity (emissions per unit of economic output) so as to reach near-zero levels by midcentury.

    “Given the range of development pathways that can be consistent with meeting a 2 degrees C goal, policies that target only specific sectors or technologies can unnecessarily narrow the solution space, leading to higher costs,” says former MIT Joint Program Co-Director John Reilly, a co-author of the study. “Our findings suggest that policies designed to encourage a portfolio of technologies and sectoral actions can be a wise strategy that hedges against risks.”

    The research was supported by the U.S. Department of Energy Office of Science.

  • Engineers enlist AI to help scale up advanced solar cell manufacturing

    Perovskites are a family of materials that are currently the leading contender to potentially replace today’s silicon-based solar photovoltaics. They hold the promise of panels that are far thinner and lighter, that could be made with ultra-high throughput at room temperature instead of at hundreds of degrees, and that are cheaper and easier to transport and install. But bringing these materials from controlled laboratory experiments into a product that can be manufactured competitively has been a long struggle.

    Manufacturing perovskite-based solar cells involves optimizing at least a dozen or so variables at once, even within one particular manufacturing approach among many possibilities. But a new system based on a novel approach to machine learning could speed up the development of optimized production methods and help make the next generation of solar power a reality.

    The system, developed by researchers at MIT and Stanford University over the last few years, makes it possible to integrate data from prior experiments, and information based on personal observations by experienced workers, into the machine learning process. This makes the outcomes more accurate and has already led to the manufacturing of perovskite cells with an energy conversion efficiency of 18.5 percent, a competitive level for today’s market.

    The research is reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, Stanford professor of materials science and engineering Reinhold Dauskardt, recent MIT research assistant Zhe Liu, Stanford doctoral graduate Nicholas Rolston, and three others.

    Perovskites are a group of layered crystalline compounds defined by the configuration of the atoms in their crystal lattice. There are thousands of such possible compounds and many different ways of making them. While most lab-scale development of perovskite materials uses a spin-coating technique, that’s not practical for larger-scale manufacturing, so companies and labs around the world have been searching for ways of translating these lab materials into a practical, manufacturable product.

    “There’s always a big challenge when you’re trying to take a lab-scale process and then transfer it to something like a startup or a manufacturing line,” says Rolston, who is now an assistant professor at Arizona State University. The team looked at a process that they felt had the greatest potential, a method called rapid spray plasma processing, or RSPP.

    The manufacturing process would involve a moving roll-to-roll surface, or series of sheets, on which the precursor solutions for the perovskite compound would be sprayed or ink-jetted as the sheet rolled by. The material would then move on to a curing stage, providing a rapid and continuous output “with throughputs that are higher than for any other photovoltaic technology,” Rolston says.

    “The real breakthrough with this platform is that it would allow us to scale in a way that no other material has allowed us to do,” he adds. “Even materials like silicon require a much longer timeframe because of the processing that’s done. Whereas you can think of [this approach as more] like spray painting.”

    Within that process, at least a dozen variables may affect the outcome, some of them more controllable than others. These include the composition of the starting materials, the temperature, the humidity, the speed of the processing path, the distance of the nozzle used to spray the material onto a substrate, and the methods of curing the material. Many of these factors can interact with each other, and if the process is in open air, then humidity, for example, may be uncontrolled. Evaluating all possible combinations of these variables through experimentation is impossible, so machine learning was needed to help guide the experimental process.

    But while most machine-learning systems use raw data such as measurements of the electrical and other properties of test samples, they don’t typically incorporate human experience such as qualitative observations made by the experimenters of the visual and other properties of the test samples, or information from other experiments reported by other researchers. So, the team found a way to incorporate such outside information into the machine learning model, using a probability factor based on a mathematical technique called Bayesian Optimization.
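
    The sketch below illustrates the general idea of weighting a Bayesian optimization search with prior knowledge. It is not the team’s released code: the process variable, the expert prior curve, and the stand-in efficiency measurement are all invented placeholders used only to show how outside information can bias where the next experiment is run.

    ```python
    # A hedged sketch of folding prior knowledge into Bayesian optimization
    # (not the authors' released code). A Gaussian-process surrogate models
    # measured efficiency versus one process knob, and the acquisition function
    # is weighted by a "prior" curve encoding, e.g., operators' belief that
    # moderate temperatures work best. All numbers are illustrative.

    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    rng = np.random.default_rng(0)

    def run_experiment(temperature):
        # Placeholder for fabricating and measuring a device at this setting.
        return -((temperature - 120.0) / 60.0) ** 2 + rng.normal(0, 0.02)

    def expert_prior(temperature):
        # Belief from prior literature / operator experience: peak near ~100 C.
        return norm.pdf(temperature, loc=100.0, scale=50.0)

    # A few initial experiments.
    T = rng.uniform(25, 250, size=(4, 1))
    eff = np.array([run_experiment(t[0]) for t in T])

    candidates = np.linspace(25, 250, 500).reshape(-1, 1)
    for _ in range(10):
        gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(T, eff)
        mu, sigma = gp.predict(candidates, return_std=True)
        # Expected improvement, weighted by the expert prior.
        best = eff.max()
        z = (mu - best) / np.maximum(sigma, 1e-9)
        ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)
        acquisition = ei * expert_prior(candidates.ravel())
        t_next = candidates[np.argmax(acquisition)]
        T = np.vstack([T, [t_next]])
        eff = np.append(eff, run_experiment(t_next[0]))

    print("best setting found:", T[np.argmax(eff)][0], "efficiency score:", eff.max())
    ```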

    Using the system, he says, “having a model that comes from experimental data, we can find out trends that we weren’t able to see before.” For example, they initially had trouble adjusting for uncontrolled variations in humidity in their ambient setting. But the model showed them “that we could overcome our humidity challenges by changing the temperature, for instance, and by changing some of the other knobs.”

    The system now allows experimenters to much more rapidly guide their process in order to optimize it for a given set of conditions or required outcomes. In their experiments, the team focused on optimizing the power output, but the system could also be used to simultaneously incorporate other criteria, such as cost and durability — something members of the team are continuing to work on, Buonassisi says.

    The researchers were encouraged by the Department of Energy, which sponsored the work, to commercialize the technology, and they’re currently focusing on tech transfer to existing perovskite manufacturers. “We are reaching out to companies now,” Buonassisi says, and the code they developed has been made freely available through an open-source server. “It’s now on GitHub, anyone can download it, anyone can run it,” he says. “We’re happy to help companies get started in using our code.”

    Already, several companies are gearing up to produce perovskite-based solar panels, even though they are still working out the details of how to produce them, says Liu, who is now at the Northwestern Polytechnical University in Xi’an, China. He says companies there are not yet doing large-scale manufacturing, but instead starting with smaller, high-value applications such as building-integrated solar tiles where appearance is important. Three of these companies “are on track or are being pushed by investors to manufacture 1 meter by 2-meter rectangular modules [comparable to today’s most common solar panels], within two years,” he says.

    “The problem is, they don’t have a consensus on what manufacturing technology to use,” Liu says. The RSPP method, developed at Stanford, “still has a good chance” to be competitive, he says. And the machine learning system the team developed could prove to be important in guiding the optimization of whatever process ends up being used.

    “The primary goal was to accelerate the process, so it required less time, less experiments, and less human hours to develop something that is usable right away, for free, for industry,” he says.

    “Existing work on machine-learning-driven perovskite PV fabrication largely focuses on spin-coating, a lab-scale technique,” says Ted Sargent, University Professor at the University of Toronto, who was not associated with this work, which he says demonstrates “a workflow that is readily adapted to the deposition techniques that dominate the thin-film industry. Only a handful of groups have the simultaneous expertise in engineering and computation to drive such advances.” Sargent adds that this approach “could be an exciting advance for the manufacture of a broader family of materials” including LEDs, other PV technologies, and graphene, “in short, any industry that uses some form of vapor or vacuum deposition.” 

    The team also included Austin Flick and Thomas Colburn at Stanford and Zekun Ren at the Singapore-MIT Alliance for Science and Technology (SMART). In addition to the Department of Energy, the work was supported by a fellowship from the MIT Energy Initiative, the Graduate Research Fellowship Program from the National Science Foundation, and the SMART program.

  • New England renewables + Canadian hydropower

    The urgent need to cut carbon emissions has prompted a growing number of U.S. states to commit to achieving 100 percent clean electricity by 2040 or 2050. But figuring out how to meet those commitments and still have a reliable and affordable power system is a challenge. Wind and solar installations will form the backbone of a carbon-free power system, but what technologies can meet electricity demand when those intermittent renewable sources are not adequate?

    In general, the options being discussed include nuclear power, natural gas with carbon capture and storage (CCS), and energy storage technologies such as new and improved batteries and chemical storage in the form of hydrogen. But in the northeastern United States, there is one more possibility being proposed: electricity imported from hydropower plants in the neighboring Canadian province of Quebec.

    The proposition makes sense. Those plants can produce as much electricity as about 40 large nuclear power plants, and some power generated in Quebec already comes to the Northeast. So, there could be abundant additional supply to fill any shortfall when New England’s intermittent renewables underproduce. However, U.S. wind and solar investors view Canadian hydropower as a competitor and argue that reliance on foreign supply discourages further U.S. investment.

    Two years ago, three researchers affiliated with the MIT Center for Energy and Environmental Policy Research (CEEPR) — Emil Dimanchev SM ’18, now a PhD candidate at the Norwegian University of Science and Technology; Joshua Hodge, CEEPR’s executive director; and John Parsons, a senior lecturer in the MIT Sloan School of Management — began wondering whether viewing Canadian hydro as another source of electricity might be too narrow. “Hydropower is a more-than-hundred-year-old technology, and plants are already built up north,” says Dimanchev. “We might not need to build something new. We might just need to use those plants differently or to a greater extent.”

    So the researchers decided to examine the potential role and economic value of Quebec’s hydropower resource in a future low-carbon system in New England. Their goal was to help inform policymakers, utility decision-makers, and others about how best to incorporate Canadian hydropower into their plans and to determine how much time and money New England should spend to integrate more hydropower into its system. What they found out was surprising, even to them.

    The analytical methods

    To explore possible roles for Canadian hydropower to play in New England’s power system, the MIT researchers first needed to predict how the regional power system might look in 2050 — both the resources in place and how they would be operated, given any policy constraints. To perform that analysis, they used GenX, a modeling tool originally developed by Jesse Jenkins SM ’14, PhD ’18 and Nestor Sepulveda SM ’16, PhD ’20 while they were researchers at the MIT Energy Initiative (MITEI).

    The GenX model is designed to support decision-making related to power system investment and real-time operation and to examine the impacts of possible policy initiatives on those decisions. Given information on current and future technologies — different kinds of power plants, energy storage technologies, and so on — GenX calculates the combination of equipment and operating conditions that can meet a defined future demand at the lowest cost. The GenX modeling tool can also incorporate specified policy constraints, such as limits on carbon emissions.
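
    A toy version of that least-cost logic, with two technologies, four representative hours, an emissions cap, and invented cost numbers (GenX itself is a far more detailed model), is sketched below.

    ```python
    # A toy least-cost capacity-expansion problem in the spirit of GenX
    # (illustrative only). Two technologies, four representative hours, and an
    # emissions cap; the solver picks capacities and hourly dispatch at minimum cost.

    import numpy as np
    from scipy.optimize import linprog

    demand = np.array([10.0, 12.0, 14.0, 11.0])       # MW in each representative hour
    wind_cf = np.array([0.6, 0.1, 0.4, 0.8])          # wind availability per hour
    T = len(demand)

    fixed_cost = {"gas": 50.0, "wind": 70.0}          # $/MW of capacity (toy numbers)
    var_cost_gas = 5.0                                 # $/MWh
    emis_rate = 0.4                                    # tCO2 per MWh of gas
    emis_cap = 10.0                                    # tCO2 allowed over the 4 hours

    # Decision vector: [cap_gas, cap_wind, gas_1..gas_T, wind_1..wind_T]
    n = 2 + 2 * T
    c = np.zeros(n)
    c[0], c[1] = fixed_cost["gas"], fixed_cost["wind"]
    c[2:2 + T] = var_cost_gas

    A_ub, b_ub = [], []
    for t in range(T):
        # Meet demand: gas_t + wind_t >= demand_t
        row = np.zeros(n); row[2 + t] = -1; row[2 + T + t] = -1
        A_ub.append(row); b_ub.append(-demand[t])
        # Dispatch limited by installed capacity (wind also by hourly availability)
        row = np.zeros(n); row[2 + t] = 1; row[0] = -1
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(n); row[2 + T + t] = 1; row[1] = -wind_cf[t]
        A_ub.append(row); b_ub.append(0.0)
    # Emissions cap on total gas generation
    row = np.zeros(n); row[2:2 + T] = emis_rate
    A_ub.append(row); b_ub.append(emis_cap)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub), bounds=[(0, None)] * n)
    print("gas capacity:", res.x[0], "wind capacity:", res.x[1], "total cost:", res.fun)
    ```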

    For their study, Dimanchev, Hodge, and Parsons set parameters in the GenX model using data and assumptions derived from a variety of sources to build a representation of the interconnected power systems in New England, New York, and Quebec. (They included New York to account for that state’s existing demand on the Canadian hydro resources.) For data on the available hydropower, they turned to Hydro-Québec, the public utility that owns and operates most of the hydropower plants in Quebec.

    It’s standard in such analyses to include real-world engineering constraints on equipment, such as how quickly certain power plants can be ramped up and down. With help from Hydro-Québec, the researchers also put hour-to-hour operating constraints on the hydropower resource.

    Most of Hydro-Québec’s plants are “reservoir hydropower” systems. In them, when power isn’t needed, the flow on a river is restrained by a dam downstream of a reservoir, and the reservoir fills up. When power is needed, the dam is opened, and the water in the reservoir runs through downstream pipes, turning turbines and generating electricity. Proper management of such a system requires adhering to certain operating constraints. For example, to prevent flooding, reservoirs must not be allowed to overfill — especially prior to spring snowmelt. And generation can’t be increased too quickly because a sudden flood of water could erode the river edges or disrupt fishing or water quality.
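
    The sketch below shows, with invented parameter values rather than Hydro-Québec’s, how such hour-to-hour constraints on a candidate hydro generation schedule might be checked.

    ```python
    # A minimal sketch of the kind of hour-to-hour operating constraints placed
    # on a reservoir hydro resource (illustrative parameter values only). A
    # dispatch schedule is feasible only if the reservoir stays within bounds
    # and generation ramps stay within limits.

    import numpy as np

    def hydro_feasible(generation, inflow, initial_storage,
                       storage_max=100.0, ramp_limit=5.0, efficiency=1.0):
        """Check reservoir water balance and ramp limits for an hourly schedule."""
        storage = initial_storage
        for t in range(len(generation)):
            storage += inflow[t] - generation[t] / efficiency   # simple water balance
            if not (0.0 <= storage <= storage_max):             # no emptying or overfilling
                return False
            if t > 0 and abs(generation[t] - generation[t - 1]) > ramp_limit:
                return False                                     # avoid sudden flow changes
        return True

    schedule = np.array([10.0, 12.0, 8.0, 6.0])
    inflows = np.array([9.0, 9.0, 9.0, 9.0])
    print(hydro_feasible(schedule, inflows, initial_storage=50.0))   # -> True
    ```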

    Based on projections from the National Renewable Energy Laboratory and elsewhere, the researchers specified electricity demand for every hour of the year 2050, and the model calculated the cost-optimal mix of technologies and system operating regime that would satisfy that hourly demand, including the dispatch of the Hydro-Québec hydropower system. In addition, the model determined how electricity would be traded among New England, New York, and Quebec.

    Effects of decarbonization limits on technology mix and electricity trading

    To examine the impact of the emissions-reduction mandates in the New England states, the researchers ran the model assuming reductions in carbon emissions between 80 percent and 100 percent relative to 1990 levels. The results of those runs show that, as emissions limits get more stringent, New England uses more wind and solar and extends the lifetime of its existing nuclear plants. To balance the intermittency of the renewables, the region uses natural gas plants, demand-side management, battery storage (modeled as lithium-ion batteries), and trading with Quebec’s hydropower-based system. Meanwhile, the optimal mix in Quebec is mostly composed of existing hydro generation. Some solar is added, but new reservoirs are built only if renewable costs are assumed to be very high.

    The most significant — and perhaps surprising — outcome is that in all the scenarios, the hydropower-based system of Quebec is not only an exporter but also an importer of electricity, with the direction of flow on the Quebec-New England transmission lines changing over time.

    Historically, energy has always flowed from Quebec to New England. The model results for 2018 show electricity flowing from north to south, with the quantity capped by the current transmission capacity limit of 2,225 megawatts (MW).

    An analysis for 2050, assuming that New England decarbonizes 90 percent and the capacity of the transmission lines remains the same, finds electricity flows going both ways. Flows from north to south still dominate. But for nearly 3,500 of the 8,760 hours of the year, electricity flows in the opposite direction — from New England to Quebec. And for more than 2,200 of those hours, the flow going north is at the maximum the transmission lines can carry.

    The direction of flow is motivated by economics. When renewable generation is abundant in New England, prices are low, and it’s cheaper for Quebec to import electricity from New England and conserve water in its reservoirs. Conversely, when New England’s renewables are scarce and prices are high, New England imports hydro-generated electricity from Quebec.

    So rather than delivering electricity, Canadian hydro provides a means of storing the electricity generated by the intermittent renewables in New England.

    “We see this in our modeling because when we tell the model to meet electricity demand using these resources, the model decides that it is cost-optimal to use the reservoirs to store energy rather than anything else,” says Dimanchev. “We should be sending the energy back and forth, so the reservoirs in Quebec are in essence a battery that we use to store some of the electricity produced by our intermittent renewables and discharge it when we need it.”

    Given that outcome, the researchers decided to explore the impact of expanding the transmission capacity between New England and Quebec. Building transmission lines is always contentious, but what would be the impact if it could be done?

    Their model results show that when transmission capacity is increased from 2,225 MW to 6,225 MW, flows in both directions are greater, and in both cases the flow is at the new maximum for more than 1,000 hours.

    Results of the analysis thus confirm that the economic response to expanded transmission capacity is more two-way trading. To continue the battery analogy, more transmission capacity to and from Quebec effectively increases the rate at which the battery can be charged and discharged.

    Effects of two-way trading on the energy mix

    What impact would the advent of two-way trading have on the mix of energy-generating sources in New England and Quebec in 2050?

    Assuming current transmission capacity, in New England, the change from one-way to two-way trading increases both wind and solar power generation and to a lesser extent nuclear; it also decreases the use of natural gas with CCS. The hydro reservoirs in Canada can provide long-duration storage — over weeks, months, and even seasons — so there is less need for natural gas with CCS to cover any gaps in supply. The level of imports is slightly lower, but now there are also exports. Meanwhile, in Quebec, two-way trading reduces solar power generation, and the use of wind disappears. Exports are roughly the same, but now there are imports as well. Thus, two-way trading reallocates renewables from Quebec to New England, where it’s more economical to install and operate solar and wind systems.

    Another analysis examined the impact on the energy mix of assuming two-way trading plus expanded transmission capacity. For New England, greater transmission capacity allows wind, solar, and nuclear to expand further; natural gas with CCS all but disappears; and both imports and exports increase significantly. In Quebec, solar decreases still further, and both exports and imports of electricity increase.

    Those results assume that the New England power system decarbonizes by 99 percent in 2050 relative to 1990 levels. But at 90 percent and even 80 percent decarbonization levels, the model concludes that natural gas capacity decreases with the addition of new transmission relative to the current transmission scenario. Existing plants are retired, and new plants are not built as they are no longer economically justified. Since natural gas plants are the only source of carbon emissions in the 2050 energy system, the researchers conclude that the greater access to hydro reservoirs made possible by expanded transmission would accelerate the decarbonization of the electricity system.

    Effects of transmission changes on costs

    The researchers also explored how two-way trading with expanded transmission capacity would affect costs in New England and Quebec, assuming 99 percent decarbonization in New England. New England’s savings on fixed costs (investments in new equipment) are largely due to a decreased need to invest in more natural gas with CCS, and its savings on variable costs (operating costs) are due to a reduced need to run those plants. Quebec’s savings on fixed costs come from a reduced need to invest in solar generation. The increase in cost — borne by New England — reflects the construction and operation of the increased transmission capacity. The net benefit for the region is substantial.

    Thus, the analysis shows that everyone wins as transmission capacity increases — and the benefit grows as the decarbonization target tightens. At 99 percent decarbonization, the overall New England-Quebec region pays about $21 per megawatt-hour (MWh) of electricity with today’s transmission capacity but only $18/MWh with expanded transmission. Assuming 100 percent reduction in carbon emissions, the region pays $29/MWh with current transmission capacity and only $22/MWh with expanded transmission.

    Addressing misconceptions

    These results shed light on several misconceptions that policymakers, supporters of renewable energy, and others tend to have.

    The first misconception is that New England renewables and Canadian hydropower are competitors. The modeling results instead show that they’re complementary. When the power systems in New England and Quebec work together as an integrated system, the Canadian reservoirs are used part of the time to store the renewable electricity. And with more access to hydropower storage in Quebec, there’s generally more renewable investment in New England.

    The second misconception arises when policymakers refer to Canadian hydro as a “baseload resource,” which implies a dependable source of electricity — particularly one that supplies power all the time. “Our study shows that by viewing Canadian hydropower as a baseload source of electricity — or indeed a source of electricity at all — you’re not taking full advantage of what that resource can provide,” says Dimanchev. “What we show is that Quebec’s reservoir hydro can provide storage, specifically for wind and solar. It’s a solution to the intermittency problem that we foresee in carbon-free power systems for 2050.”

    While the MIT analysis focuses on New England and Quebec, the researchers believe that their results may have wider implications. As power systems in many regions expand production of renewables, the value of storage grows. Some hydropower systems have storage capacity that has not yet been fully utilized and could be a good complement to renewable generation. Taking advantage of that capacity can lower the cost of deep decarbonization and help move some regions toward a decarbonized supply of electricity.

    This research was funded by the MIT Center for Energy and Environmental Policy Research, which is supported in part by a consortium of industry and government associates.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Finding the questions that guide MIT fusion research

    “One of the things I learned was, doing good science isn’t so much about finding the answers as figuring out what the important questions are.”

    As Martin Greenwald retires from the responsibilities of senior scientist and deputy director of the MIT Plasma Science and Fusion Center (PSFC), he reflects on his almost 50 years of science study, 43 of them as a researcher at MIT, pursuing the question of how to make the carbon-free energy of fusion a reality.

    Most of Greenwald’s important questions about fusion began after graduating from MIT with a BS in both physics and chemistry. Beginning graduate work at the University of California at Berkeley, he felt compelled to learn more about fusion as an energy source that could have “a real societal impact.” At the time, researchers were exploring new ideas for devices that could create and confine fusion plasmas. Greenwald worked on Berkeley’s “alternate concept” TORMAC, a Toroidal Magnetic Cusp. “It didn’t work out very well,” he laughs. “The first thing I was known for was making the measurements that shut down the program.”

    Believing the temperature of the plasma generated by the device would not be as high as his group leader expected, Greenwald developed hardware that could measure the low temperatures predicted by his own “back of the envelope calculations.” As he anticipated, his measurements showed that “this was not a fusion plasma; this was hardly a confined plasma at all.”

    With a PhD from Berkeley, Greenwald returned to MIT for a research position at the PSFC, attracted by the center’s “esprit de corps.”

    He arrived in time to participate in the final experiments on Alcator A, the first in a series of tokamaks built at MIT, all characterized by compact size and featuring high-field magnets. The tokamak design was then becoming favored as the most effective route to fusion: its doughnut-shaped vacuum chamber, surrounded by electromagnets, could confine the turbulent plasma long enough, while increasing its heat and density, to make fusion occur.

    Alcator A showed that the energy confinement time improves in relation to increasing plasma density. MIT’s succeeding device, Alcator C, was designed to use higher magnetic fields, boosting expectations that it would reach higher densities and better confinement. To attain these goals, however, Greenwald had to pursue a new technique that increased density by injecting pellets of frozen fuel into the plasma, a method he likens to throwing “snowballs in hell.” This work was notable for the creation of a new regime of enhanced plasma confinement on Alcator C. In those experiments, a confined plasma surpassed for the first time one of the two Lawson criteria — the minimum required value for the product of the plasma density and confinement time — for making net power from fusion. The Lawson criteria had been a benchmark for fusion research since John Lawson published them in 1957.
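
    For orientation, the Lawson product criterion for deuterium-tritium fuel is usually quoted in roughly the following form; these are standard textbook figures, not numbers from the Alcator experiments.

    ```latex
    % Standard textbook form of the Lawson criterion for D-T fusion
    % (order-of-magnitude figures for orientation only):
    \[
      n\,\tau_E \;\gtrsim\; 1.5\times10^{20}\ \mathrm{s\,m^{-3}}
      \quad\text{near the optimal temperature,}
    \]
    % often combined with the temperature requirement into the triple product
    \[
      n\,T\,\tau_E \;\gtrsim\; 3\times10^{21}\ \mathrm{keV\,s\,m^{-3}}.
    \]
    ```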

    Greenwald continued to make a name for himself as part of a larger study into the physics of the Compact Ignition Tokamak — a high-field burning plasma experiment that the U.S. program was proposing to build in the late 1980s. The result, unexpectedly, was a new scaling law, later known as the “Greenwald Density Limit,” and a new theory for the mechanism of the limit. It has been used to accurately predict performance on much larger machines built since.
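
    For reference, the limit that now bears his name is commonly written in the compact form below, with the limiting line-averaged density in units of 10^20 per cubic meter, the plasma current in megaamperes, and the minor radius in meters.

    ```latex
    % Greenwald density limit (standard form): line-averaged density limit n_G
    % in 10^20 m^-3, plasma current I_p in MA, minor radius a in m.
    \[
      n_G \;=\; \frac{I_p}{\pi a^{2}}
    \]
    ```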

    The center’s next tokamak, Alcator C-Mod, started operation in 1993 and ran for more than 20 years, with Greenwald as the chair of its Experimental Program Committee. Larger than Alcator C, the new device supported a highly shaped plasma, strong radiofrequency heating, and an all-metal plasma-facing first wall. All of these would eventually be required in a fusion power system.

    C-Mod proved to be MIT’s most enduring fusion experiment to date, producing important results for 20 years. During that time Greenwald contributed not only to the experiments, but to mentoring the next generation. Research scientist Ryan Sweeney notes that “Martin quickly gained my trust as a mentor, in part due to his often casual dress and slightly untamed hair, which are embodiments of his transparency and his focus on what matters. He can quiet a room of PhDs and demand attention not by intimidation, but rather by his calmness and his ability to bring clarity to complicated problems, be they scientific or human in nature.”

    Greenwald worked closely with the group of students who, in PSFC Director Dennis Whyte’s class, came up with the tokamak concept that evolved into SPARC. MIT is now pursuing this compact, high-field tokamak with Commonwealth Fusion Systems, a startup that grew out of the collective enthusiasm for this concept, and the growing realization it could work. Greenwald now heads the Physics Group for the SPARC project at MIT. He has helped confirm the device’s physics basis in order to predict performance and guide engineering decisions.

    “Martin’s multifaceted talents are thoroughly embodied by, and imprinted on, SPARC,” says Whyte. “First, his leadership in its plasma confinement physics validation and publication places SPARC on a firm scientific footing. Second, the impact of the density limit he discovered, which shows that fuel density increases with magnetic field and with decreasing tokamak size, is critical in obtaining high fusion power density not just in SPARC, but in future power plants. Third, and perhaps most impressive, is Martin’s mentorship of the SPARC generation of leadership.”

    Greenwald’s expertise and easygoing personality have made him an asset as head of the PSFC Office for Computer Services and group leader for data acquisition and computing, and a sought-after member of many professional committees. He has been an APS Fellow since 2000, and was an APS Distinguished Lecturer in Plasma Physics (2001-02). He was also presented in 2014 with a Leadership Award from Fusion Power Associates. He is currently an associate editor for Physics of Plasmas and a member of the Lawrence Livermore National Laboratory Physical Sciences Directorate External Review Committee.

    Although leaving his full-time responsibilities, Greenwald will remain at MIT as a visiting scientist, a role he says will allow him to “stick my nose into everything without being responsible for anything.”

    “At some point in the race you have to hand off the baton,” he says. “And it doesn’t mean you’re not interested in the outcome; and it doesn’t mean you’re just going to walk away into the stands. I want to be there at the end when we succeed.”

  • Chemical reactions for the energy transition

    One challenge in decarbonizing the energy system is knowing how to deal with new types of fuels. Traditional fuels such as natural gas and oil can be combined with other materials and then heated to high temperatures so they chemically react to produce other useful fuels or substances, or even energy to do work. But new materials such as biofuels can’t take as much heat without breaking down.

    A key ingredient in such chemical reactions is a specially designed solid catalyst that is added to encourage the reaction to happen but isn’t itself consumed in the process. With traditional materials, the solid catalyst typically interacts with a gas; but with fuels derived from biomass, for example, the catalyst must work with a liquid — a special challenge for those who design catalysts.

    For nearly a decade, Yogesh Surendranath, an associate professor of chemistry at MIT, has been focusing on chemical reactions between solid catalysts and liquids, but in a different situation: rather than using heat to drive reactions, he and his team input electricity from a battery or a renewable source such as wind or solar to give chemically inactive molecules more energy so they react. And key to their research is designing and fabricating solid catalysts that work well for reactions involving liquids.

    Recognizing the need to use biomass to develop sustainable liquid fuels, Surendranath wondered whether he and his team could take the principles they have learned about designing catalysts to drive liquid-solid reactions with electricity and apply them to reactions that occur at liquid-solid interfaces without any input of electricity.

    To their surprise, they found that their knowledge is directly relevant. Why? “What we found — amazingly — is that even when you don’t hook up wires to your catalyst, there are tiny internal ‘wires’ that do the reaction,” says Surendranath. “So, reactions that people generally think operate without any flow of current actually do involve electrons shuttling from one place to another.” And that means that Surendranath and his team can bring the powerful techniques of electrochemistry to bear on the problem of designing catalysts for sustainable fuels.

    A novel hypothesis

    Their work has focused on a class of chemical reactions important in the energy transition that involve adding oxygen to small organic (carbon-containing) molecules such as ethanol, methanol, and formic acid. The conventional assumption is that the reactant and oxygen chemically react to form the product plus water. And a solid catalyst — often a combination of metals — is present to provide sites on which the reactant and oxygen can interact.

    But Surendranath proposed a different view of what’s going on. In the usual setup, two catalysts, each one composed of many nanoparticles, are mounted on a conductive carbon substrate and submerged in water. In that arrangement, negatively charged electrons can flow easily through the carbon, while positively charged protons can flow easily through water.

    Surendranath’s hypothesis was that the conversion of reactant to product progresses by means of two separate “half-reactions” on the two catalysts. On one catalyst, the reactant turns into a product, in the process sending electrons into the carbon substrate and protons into the water. Those electrons and protons are picked up by the other catalyst, where they drive the oxygen-to-water conversion. So, instead of a single reaction, two separate but coordinated half-reactions together achieve the net conversion of reactant to product.
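
    As a purely illustrative example of that decomposition, consider formic acid, one of the small molecules named above; the stoichiometry below is textbook chemistry chosen here for concreteness, not a reaction scheme reported from the study.

    ```latex
    % Illustrative half-reaction pair for formic acid oxidation (textbook
    % stoichiometry, used here only to make the decomposition concrete):
    \[
      \mathrm{HCOOH} \;\longrightarrow\; \mathrm{CO_2} + 2\,\mathrm{H^+} + 2\,e^-
      \qquad\text{(on the first catalyst)}
    \]
    \[
      \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \;\longrightarrow\; 2\,\mathrm{H_2O}
      \qquad\text{(on the second catalyst)}
    \]
    % Net thermal reaction, with no external current flow:
    \[
      2\,\mathrm{HCOOH} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{CO_2} + 2\,\mathrm{H_2O}
    \]
    ```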

    As a result, the overall reaction doesn’t actually involve any net electron production or consumption. It is a standard “thermal” reaction resulting from the energy in the molecules and maybe some added heat. The conventional approach to designing a catalyst for such a reaction would focus on increasing the rate of that reactant-to-product conversion. And the best catalyst for that kind of reaction could turn out to be, say, gold or palladium or some other expensive precious metal.

    However, if that reaction actually involves two half-reactions, as Surendranath proposed, there is a flow of electrical charge (the electrons and protons) between them. So Surendranath and others in the field could instead use techniques of electrochemistry to design not a single catalyst for the overall reaction but rather two separate catalysts — one to speed up one half-reaction and one to speed up the other half-reaction. “That means we don’t have to design one catalyst to do all the heavy lifting of speeding up the entire reaction,” says Surendranath. “We might be able to pair up two low-cost, earth-abundant catalysts, each of which does half of the reaction well, and together they carry out the overall transformation quickly and efficiently.”

    But there’s one more consideration: Electrons can flow through the entire catalyst composite, which encompasses the catalyst particle(s) and the carbon substrate. For the chemical conversion to happen as quickly as possible, the rate at which electrons are put into the catalyst composite must exactly match the rate at which they are taken out. Focusing on just the electrons, if the reactant-to-product conversion on the first catalyst sends the same number of electrons per second into the “bath of electrons” in the catalyst composite as the oxygen-to-water conversion on the second catalyst takes out, the two half-reactions will be balanced, and the electron flow — and the rate of the combined reaction — will be fast. The trick is to find good catalysts for each of the half-reactions that are perfectly matched in terms of electrons in and electrons out.
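
    The sketch below illustrates that current-matching idea using simple Tafel-type rate laws, an assumed functional form rather than the kinetics measured in the study. The catalyst composite settles at the potential where the two currents cancel, and that balanced current sets the overall reaction rate.

    ```python
    # A small sketch of the current-matching idea described above, using simple
    # Tafel-type rate laws (an assumed functional form, not the kinetics measured
    # in the study). The catalyst composite settles at the "mixed potential" where
    # the electrons produced by one half-reaction are consumed at exactly the same
    # rate by the other.

    import numpy as np
    from scipy.optimize import brentq

    def anodic_current(E, i0=1e-3, E_eq=0.0, tafel_slope=0.06):
        # Reactant-to-product half-reaction: releases electrons (positive current).
        return i0 * np.exp((E - E_eq) / tafel_slope)

    def cathodic_current(E, i0=1e-6, E_eq=1.0, tafel_slope=0.06):
        # Oxygen-to-water half-reaction: consumes electrons (negative current).
        return -i0 * np.exp(-(E - E_eq) / tafel_slope)

    def net_current(E):
        return anodic_current(E) + cathodic_current(E)

    # The mixed potential is where the net current through the composite is zero.
    E_mixed = brentq(net_current, 0.0, 1.0)
    rate = anodic_current(E_mixed)   # proportional to the overall thermal reaction rate
    print(f"mixed potential ~ {E_mixed:.3f} V, balanced current ~ {rate:.2e} A")
    ```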

    “A good catalyst or pair of catalysts can maintain an electrical potential — essentially a voltage — at which both half-reactions are fast and are balanced,” says Jaeyune Ryu PhD ’21, a former member of the Surendranath lab and lead author of the study; Ryu is now a postdoc at Harvard University. “The rates of the reactions are equal, and the voltage in the catalyst composite won’t change during the overall thermal reaction.”

    Drawing on electrochemistry

    Based on their new understanding, Surendranath, Ryu, and their colleagues turned to electrochemistry techniques to identify a good catalyst for each half-reaction that would also pair up to work well together. Their analytical framework for guiding catalyst development for systems that combine two half-reactions is based on a theory that has been used to understand corrosion for almost 100 years, but has rarely been applied to understand or design catalysts for reactions involving small molecules important for the energy transition.

    Key to their work is a potentiostat, a type of voltmeter that can either passively measure the voltage of a system or actively change the voltage to cause a reaction to occur. In their experiments, Surendranath and his team use the potentiostat to measure the voltage of the catalyst in real time, monitoring how it changes millisecond to millisecond. They then correlate those voltage measurements with simultaneous but separate measurements of the overall rate of catalysis to understand the reaction pathway.

    For their study of the conversion of small, energy-related molecules, they first tested a series of catalysts to find good ones for each half-reaction — one to convert the reactant to product, producing electrons and protons, and another to convert the oxygen to water, consuming electrons and protons. In each case, a promising candidate would yield a rapid reaction — that is, a fast flow of electrons and protons out or in.

    To help identify an effective catalyst for performing the first half-reaction, the researchers used their potentiostat to input carefully controlled voltages and measured the resulting current that flowed through the catalyst. A good catalyst will generate lots of current for little applied voltage; a poor catalyst will require high applied voltage to get the same amount of current. The team then followed the same procedure to identify a good catalyst for the second half-reaction.

    To expedite the overall reaction, the researchers needed to find two catalysts that matched well — where the amount of current at a given applied voltage was high for each of them, ensuring that as one produced a rapid flow of electrons and protons, the other one consumed them at the same rate.

    To test promising pairs, the researchers used the potentiostat to measure the voltage of the catalyst composite during net catalysis — not changing the voltage as before, but now just measuring it from tiny samples. In each test, the voltage will naturally settle at a certain level, and the goal is for that to happen when the rate of both reactions is high.

    Validating their hypothesis and looking ahead

    By testing the two half-reactions, the researchers could measure how the reaction rate for each one varied with changes in the applied voltage. From those measurements, they could predict the voltage at which the full reaction would proceed fastest. Measurements of the full reaction matched their predictions, supporting their hypothesis.

    The team’s novel approach of using electrochemistry techniques to examine reactions thought to be strictly thermal in nature provides new insights into the detailed steps by which those reactions occur and therefore into how to design catalysts to speed them up. “We can now use a divide-and-conquer strategy,” says Ryu. “We know that the net thermal reaction in our study happens through two ‘hidden’ but coupled half-reactions, so we can aim to optimize one half-reaction at a time” — possibly using low-cost catalyst materials for one or both.

    Adds Surendranath, “One of the things that we’re excited about in this study is that the result is not final in and of itself. It has really seeded a brand-new thrust area in our research program, including new ways to design catalysts for the production and transformation of renewable fuels and chemicals.”

    This research was supported primarily by the Air Force Office of Scientific Research. Jaeyune Ryu PhD ’21 was supported by a Samsung Scholarship. Additional support was provided by a National Science Foundation Graduate Research Fellowship.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Finding her way to fusion

    “I catch myself startling people in public.”

    Zoe Fisher’s animated hands carry part of the conversation as she describes how her naturally loud and expressive laughter turned heads in the streets of Yerevan. There during MIT’s Independent Activities Period (IAP), she was helping teach nuclear science at the American University of Armenia, before returning to MIT to pursue fusion research at the Plasma Science and Fusion Center (PSFC).

    Startling people may simply be in Fisher’s DNA. She admits that when she first arrived at MIT, knowing nothing about nuclear science and engineering (NSE), she chose to join that department’s Freshman Pre-Orientation Program (FPOP) “for the shock value.” It was a choice unexpected by family, friends, and mostly herself. Now in her senior year, a 2021 recipient of NSE’s Irving Kaplan Award for academic achievements by a junior and entering a fifth-year master of science program in nuclear fusion, Fisher credits that original spontaneous impulse for introducing her to a subject she found so compelling that, after exploring multiple possibilities, she had to return to it.

    Fisher’s venture to Armenia, under the guidance of NSE associate professor Areg Danagoulian, is not the only time she has taught overseas with MISTI’s Global Teaching Labs, though it is the first time she has taught nuclear science, not to mention thermodynamics and materials science. During IAP 2020 she was a student teacher at a German high school, teaching life sciences, mathematics, and even English to grades five through 12. And after her first year she explored the transportation industry with a mechanical engineering internship in Tuscany, Italy.

    By the time she was ready to declare her NSE major she had sampled the alternatives both overseas and at home, taking advantage of MIT’s Undergraduate Research Opportunities Program (UROP). Drawn to fusion’s potential as an endless source of carbon-free energy on earth, she decided to try research at the PSFC, to see if the study was a good fit. 

    Much fusion research at MIT has favored heating hydrogen fuel inside a donut-shaped device called a tokamak, creating plasma that is hot and dense enough for fusion to occur. Because plasma will follow magnetic field lines, these devices are wrapped with magnets to keep the hot fuel from damaging the chamber walls.

    Fisher was assigned to SPARC, the PSFC’s new tokamak collaboration with MIT startup Commonwealth Fusion Systems (CFS), which uses a game-changing high-temperature superconducting (HTS) tape to create fusion magnets that minimize tokamak size and maximize performance. Working on a database reference book for SPARC materials, she was finding purpose even in the most repetitive tasks. “Which is how I knew I wanted to stay in fusion,” she laughs.

    Fisher’s latest UROP assignment takes her — literally — deeper into SPARC research. She works in a basement laboratory in building NW13 nicknamed “The Vault,” on a proton accelerator whose name conjures an underworld: DANTE. Supervised by PSFC Director Dennis Whyte and postdoc David Fischer, she is exploring the effects of radiation damage on the thin HTS tape that is key to SPARC’s design, and ultimately to the success of ARC, a prototype working fusion power plant.

    Because repetitive bombardment with neutrons produced during the fusion process can diminish the superconducting properties of the HTS, it is crucial to test the tape repeatedly. Fisher assists in assembling and testing the experimental setups for irradiating the HTS samples. She recalls that her first project was installing a “shutter” that would allow researchers to control exactly how much radiation reached the tape without having to turn off the entire experiment.

    “You could just push the button — block the radiation — then unblock it. It sounds super simple, but it took many trials. Because first I needed the right size solenoid, and then I couldn’t find a piece of metal that was small enough, and then we needed cryogenic glue…. To this day the actual final piece is made partially of paper towels.”

    She shrugs and laughs. “It worked, and it was the cheapest option.”

    Fisher is always ready to find the fun in fusion. Referring to DANTE as “A really cool dude,” she admits, “He’s perhaps a bit fickle. I may or may not have broken him once.” During a recent IAP seminar, she joined other PSFC UROP students to discuss her research, and expanded on how a mishap can become a gateway to understanding.

    “The grad student I work with and I got to repair almost the entire internal circuit when we blew the fuse — which originally was a really bad thing. But it ended up being great because we figured out exactly how it works.”

    Fisher’s upbeat spirit makes her ideal not only for the challenges of fusion research, but for serving the MIT community. As a student representative for NSE’s Diversity, Equity and Inclusion Committee, she meets monthly with the goal of growing and supporting diversity within the department.

    “This opportunity is impactful because I get my voice, and the voices of my peers, taken seriously,” she says. “Currently, we are spending most of our efforts trying to identify and eliminate hurdles based on race, ethnicity, gender, and income that prevent people from pursuing — and applying to — NSE.”

    To break from the lab and committees, she explores the Charles River as part of MIT’s varsity sailing team, refusing to miss a sunset. She also volunteers as an FPOP mentor, seeking to provide incoming first-years with the kind of experience that will make them want to return to the topic, as she did.

    She looks forward to continuing her studies on the HTS tapes she has been irradiating, proposing to send a current pulse above the critical current through the tape, to possibly anneal any defects from radiation, which would make repairs on future fusion power plants much easier.

    Fisher credits her current path to her UROP mentors and their infectious enthusiasm for the carbon-free potential of fusion energy.

    “UROPing around the PSFC showed me what I wanted to do with my life,” she says. “Who doesn’t want to save the world?”