More stories

  • Making hydrogen power a reality

    For decades, government and industry have looked to hydrogen as a potentially game-changing tool in the quest for clean energy. As far back as the early days of the Clinton administration, energy sector observers and public policy experts have extolled the virtues of hydrogen — to the point that some people have joked that hydrogen is the energy of the future, “and always will be.”

    Even as wind and solar power have become commonplace in recent years, hydrogen has been held back by high costs and other challenges. But the fuel may finally be poised to have its moment. At the MIT Energy Initiative Spring Symposium — entitled “Hydrogen’s role in a decarbonized energy system” — experts discussed hydrogen production routes, hydrogen consumption markets, the path to a robust hydrogen infrastructure, and policy changes needed to achieve a “hydrogen future.”

    During one panel, “Options for producing low-carbon hydrogen at scale,” four experts laid out existing and planned efforts to leverage hydrogen for decarbonization. 

    “The race is on”

    Huyen N. Dinh, a senior scientist and group manager at the National Renewable Energy Laboratory (NREL), is the director of HydroGEN, a consortium of several U.S. Department of Energy (DOE) national laboratories that accelerates research and development of innovative and advanced water splitting materials and technologies for clean, sustainable, and low-cost hydrogen production.

    For the past 14 years, Dinh has worked on fuel cells and hydrogen production for NREL. “We think that the 2020s is the decade of hydrogen,” she said. Dinh believes that the energy carrier is poised to come into its own over the next few years, pointing to several domestic and international activities surrounding the fuel and citing a Hydrogen Council report that projected the future impacts of hydrogen — including 30 million jobs and $2.5 trillion in global revenue by 2050.

    “Now is the time for hydrogen, and the global race is on,” she said.

    Dinh also explained the parameters of the Hydrogen Shot — the first of the DOE’s “Energy Earthshots” aimed at accelerating breakthroughs for affordable and reliable clean energy solutions. Hydrogen fuel currently costs around $5 per kilogram to produce, and the Hydrogen Shot’s stated goal is to bring that down by 80 percent to $1 per kilogram within a decade.

    The Hydrogen Shot will be facilitated by $9.5 billion in funding for at least four clean hydrogen hubs located in different parts of the United States, as well as extensive research and development, manufacturing, and recycling from last year’s bipartisan infrastructure law. Still, Dinh noted that it took more than 40 years for solar and wind power to become cost competitive, and now industry, government, national lab, and academic leaders are hoping to achieve similar reductions in hydrogen fuel costs over a much shorter time frame. In the near term, she said, stakeholders will need to improve the efficiency, durability, and affordability of hydrogen production through electrolysis (using electricity to split water) using today’s renewable and nuclear power sources. Over the long term, the focus may shift to splitting water more directly through heat or solar energy, she said.
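
    A rough sense of why that target is ambitious comes from the electricity input alone. The sketch below is illustrative arithmetic only (not from the symposium), and it assumes an electrolyzer consuming on the order of 50 kilowatt-hours of electricity per kilogram of hydrogen and buying power at an assumed 3 cents per kilowatt-hour.

    ```python
    # Illustrative electrolysis cost arithmetic (assumed figures, not from the talk).
    ELECTRICITY_PER_KG_H2_KWH = 50        # rough order of magnitude for today's electrolyzers
    ELECTRICITY_PRICE_USD_PER_KWH = 0.03  # assumed price for low-cost renewable power

    electricity_cost_per_kg = ELECTRICITY_PER_KG_H2_KWH * ELECTRICITY_PRICE_USD_PER_KWH
    print(f"Electricity cost alone: ${electricity_cost_per_kg:.2f}/kg H2")  # -> $1.50/kg
    ```

    Even with cheap power, the electricity bill by itself overshoots the $1-per-kilogram goal under these assumptions, which is why the efficiency, durability, and capital-cost improvements Dinh described all matter.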

    “The time frame is short, the competition is intense, and a coordinated effort is critical for domestic competitiveness,” Dinh said.

    Hydrogen across continents

    Wambui Mutoru, principal engineer for international commercial development, exploration, and production international at the Norwegian global energy company Equinor, said that hydrogen is an important component in the company’s ambitions to be carbon-neutral by 2050. The company, in collaboration with partners, has several hydrogen projects in the works, and Mutoru laid out the company’s Hydrogen to Humber project in Northern England. Currently, the Humber region emits more carbon dioxide than any other industrial cluster in the United Kingdom — 50 percent more, in fact, than the next-largest carbon emitter.   

    “The ambition here is for us to deploy the world’s first at-scale hydrogen value chain to decarbonize the Humber industrial cluster,” Mutoru said.

    The project consists of three components: a clean hydrogen production facility, an onshore hydrogen and carbon dioxide transmission network, and offshore carbon dioxide transportation and storage operations. Mutoru highlighted the importance of carbon capture and storage in hydrogen production. Equinor, she said, has captured and sequestered carbon offshore for more than 25 years, storing more than 25 million tons of carbon dioxide during that time.

    Mutoru also touched on Equinor’s efforts to build a decarbonized energy hub in the Appalachian region of the United States, covering territory in Ohio, West Virginia, and Pennsylvania. By 2040, she said, the company’s ambition is to produce about 1.5 million tons of clean hydrogen per year in the region — roughly equivalent to 6.8 gigawatts of electricity — while also storing 30 million tons of carbon dioxide.
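
    That equivalence can be sanity-checked with back-of-the-envelope arithmetic. The sketch below is my own rough check, assuming hydrogen's higher heating value of about 39.4 kilowatt-hours per kilogram and continuous year-round output; it is not a figure from Equinor.

    ```python
    # Rough check of the "1.5 million tons of H2 per year is roughly 6.8 GW" equivalence.
    # Assumes hydrogen's higher heating value (~39.4 kWh/kg) and continuous output.
    H2_TONS_PER_YEAR = 1.5e6
    KG_PER_TON = 1_000
    HHV_KWH_PER_KG = 39.4
    HOURS_PER_YEAR = 8_760

    energy_kwh_per_year = H2_TONS_PER_YEAR * KG_PER_TON * HHV_KWH_PER_KG
    average_power_gw = energy_kwh_per_year / HOURS_PER_YEAR / 1e6  # average kW, converted to GW
    print(f"Average equivalent power: {average_power_gw:.1f} GW")  # ~6.7 GW, close to the quoted 6.8 GW
    ```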

    Mutoru acknowledged that the biggest challenge facing potential hydrogen producers is the current lack of viable business models. “Resolving that challenge requires cross-industry collaboration, and supportive policy frameworks so that the market for hydrogen can be built and sustained over the long term,” she said.

    Confronting barriers

    Gretchen Baier, executive external strategy and communications leader for Dow, noted that the company already produces hydrogen in multiple ways. For one, Dow operates the world’s largest ethane cracker, in Texas. An ethane cracker heats ethane to break apart molecular bonds to form ethylene, with hydrogen as one of the byproducts of the process. Baier also showed a slide of the 1891 patent for the electrolysis of brine, a process that likewise produces hydrogen. The company still engages in this practice, but it does not have an effective way of utilizing the resulting hydrogen for its own fuel.
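
    For reference, the two hydrogen-producing reactions Baier pointed to are standard chemistry: steam cracking of ethane and the chlor-alkali electrolysis of brine.

    ```latex
    \begin{aligned}
    \text{Ethane cracking:}\quad & \mathrm{C_2H_6 \longrightarrow C_2H_4 + H_2}\\
    \text{Brine electrolysis:}\quad & \mathrm{2\,NaCl + 2\,H_2O \longrightarrow Cl_2 + H_2 + 2\,NaOH}
    \end{aligned}
    ```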

    “Just take a moment to think about that,” Baier said. “We’ve been talking about hydrogen production and the cost of it, and this is basically free hydrogen. And it’s still too much of a barrier to somewhat recycle that and use it for ourselves. The environment is clearly changing, and we do have plans for that, but I think that kind of sets some of the challenges that face industry here.”

    However, Baier said, hydrogen is expected to play a significant role in Dow’s future as the company attempts to decarbonize by 2050. The company, she said, plans to optimize hydrogen allocation and production, retrofit turbines for hydrogen fueling, and purchase clean hydrogen. By 2040, Dow expects more than 60 percent of its sites to be hydrogen-ready.

    Baier noted that hydrogen fuel is not a “panacea,” but rather one among many potential contributors as industry attempts to reduce or eliminate carbon emissions in the coming decades. “Hydrogen has an important role, but it’s not the only answer,” she said.

    “This is real”

    Colleen Wright is vice president of corporate strategy for Constellation, which recently separated from Exelon Corporation. (Exelon now owns the former company’s regulated utilities, such as Commonwealth Edison and Baltimore Gas and Electric, while Constellation owns the competitive generation and supply portions of the business.) Wright stressed the advantages of nuclear power in hydrogen production, which she said include superior economics, low barriers to implementation, and scalability.

    “A quarter of emissions in the world are currently from hard-to-decarbonize sectors — the industrial sector, steel making, heavy-duty transportation, aviation,” she said. “These are really challenging decarbonization sectors, and as we continue to expand and electrify, we’re going to need more supply. We’re also going to need to produce clean hydrogen using emissions-free power.”

    “The scale of nuclear power plants is uniquely suited to be able to scale hydrogen production,” Wright added. She mentioned Constellation’s Nine Mile Point site in the State of New York, which received a DOE grant for a pilot program that will see a proton exchange membrane electrolyzer installed at the site.

    “We’re very excited to see hydrogen go from a [research and development] conversation to a commercial conversation,” she said. “We’ve been calling it a little bit of a ‘middle-school dance.’ Everybody is standing around the circle, waiting to see who’s willing to put something at stake. But this is real. We’re not dancing around the edges. There are a lot of people who are big players, who are willing to put skin in the game today.”

  • Evan Leppink: Seeking a way to better stabilize the fusion environment

    “Fusion energy was always one of those kind-of sci-fi technologies that you read about,” says nuclear science and engineering PhD candidate Evan Leppink. He’s recalling the time before fusion became a part of his daily hands-on experience at MIT’s Plasma Science and Fusion Center, where he is studying a unique way to drive current in a tokamak plasma using radiofrequency (RF) waves. 

    Now, an award from the U.S. Department of Energy’s (DOE) Office of Science Graduate Student Research (SCGSR) Program will support his work with a 12-month residency at the DIII-D National Fusion Facility in San Diego, California.

    Like all tokamaks, DIII-D generates hot plasma inside a doughnut-shaped vacuum chamber wrapped with magnets. Because plasma will follow magnetic field lines, tokamaks are able to contain the turbulent plasma fuel as it gets hotter and denser, keeping it away from the edges of the chamber where it could damage the wall materials. A key part of the tokamak concept is that part of the magnetic field is created by electrical currents in the plasma itself, which helps to confine and stabilize the configuration. Researchers often launch high-power RF waves into tokamaks to drive that current.

    Leppink will be contributing to research, led by his MIT advisor Steve Wukitch, that pursues launching RF waves in DIII-D using a unique compact antenna placed on the tokamak center column. Typically, antennas are placed inside the tokamak on the outer edge of the doughnut, farthest from the central hole (or column), primarily because access and installation are easier there. This is known as the “low-field side,” because the magnetic field is lower there than at the central column, the “high-field side.” This MIT-led experiment, for the first time, will mount an antenna on the high-field side. There is some theoretical evidence that placing the wave launcher there could improve power penetration and current drive efficiency. And because the plasma environment is less harsh on this side, the antenna will survive longer, a factor important for any future power-producing tokamak.

    Leppink’s work on DIII-D focuses specifically on measuring the density of plasmas generated in the tokamak, for which he developed a “reflectometer.” This small antenna launches microwaves into the plasma, which reflect back to the antenna to be measured. The time that it takes for these microwaves to traverse the plasma provides information about the plasma density, allowing researchers to build up detailed density profiles, data critical for injecting RF power into the plasma.

    “Research shows that when we try to inject these waves into the plasma to drive the current, they can lose power as they travel through the edge region of the tokamak, and can even have problems entering the core of the plasma, where we would most like to direct them,” says Leppink. “My diagnostic will measure that edge region on the high-field side near the launcher in great detail, which provides us a way to directly verify calculations or compare actual results with simulation results.”
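
    The physics the reflectometer exploits is a textbook relation rather than anything specific to Leppink's hardware: an ordinary-mode microwave reflects at the layer where its frequency equals the local electron plasma frequency, so each probing frequency corresponds to a particular cutoff density. The sketch below evaluates that relation; the 60 GHz example frequency is illustrative only.

    ```python
    import math

    # Cutoff (reflection) density for an O-mode probing wave: the wave reflects where its
    # frequency equals the local electron plasma frequency. Standard plasma-physics relation.
    EPSILON_0 = 8.854e-12        # vacuum permittivity, F/m
    ELECTRON_MASS = 9.109e-31    # kg
    ELECTRON_CHARGE = 1.602e-19  # C

    def cutoff_density(frequency_hz: float) -> float:
        """Electron density (m^-3) at which an O-mode wave of this frequency reflects."""
        omega = 2 * math.pi * frequency_hz
        return EPSILON_0 * ELECTRON_MASS * omega**2 / ELECTRON_CHARGE**2

    print(f"{cutoff_density(60e9):.2e} m^-3")  # ~4.5e19 m^-3 for an illustrative 60 GHz wave
    ```

    Sweeping the probing frequency moves the reflecting layer through the plasma, and the measured round-trip delays can then be inverted into a density profile.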

    Although focused on his own research, Leppink has excelled at priming other students for success in their studies and research. In 2021 he received the NSE Outstanding Teaching Assistant and Mentorship Award.

    “The highlights of TA’ing for me were the times when I could watch students go from struggling with a difficult topic to fully understanding it, often with just a nudge in the right direction and then allowing them to follow their own intuition the rest of the way,” he says.

    The right direction for Leppink points toward San Diego and RF current drive experiments on DIII-D. He is grateful for the support from the SCGSR, a program created to prepare graduate students like him for science, technology, engineering, or mathematics careers important to the DOE Office of Science mission. It provides graduate thesis research opportunities through extended residency at DOE national laboratories. He has already made several trips to DIII-D, in part to install his reflectometer, and has been impressed with the size of the operation.

    “It takes a little while to kind of compartmentalize everything and say, ‘OK, well, here’s my part of the machine. This is what I’m doing.’ It can definitely be overwhelming at times. But I’m blessed to be able to work on what has been the workhorse tokamak of the United States for the past few decades.”

  • Energy storage important to creating affordable, reliable, deeply decarbonized electricity systems

    In deeply decarbonized energy systems utilizing high penetrations of variable renewable energy (VRE), energy storage is needed to keep the lights on and the electricity flowing when the sun isn’t shining and the wind isn’t blowing — when generation from these VRE resources is low or demand is high. The MIT Energy Initiative’s Future of Energy Storage study makes clear the need for energy storage and explores pathways using VRE resources and storage to reach decarbonized electricity systems efficiently by 2050.

    “The Future of Energy Storage,” a new multidisciplinary report from the MIT Energy Initiative (MITEI), urges government investment in sophisticated analytical tools for planning, operation, and regulation of electricity systems in order to deploy and use storage efficiently. Because storage technologies will have the ability to substitute for or complement essentially all other elements of a power system, including generation, transmission, and demand response, these tools will be critical to electricity system designers, operators, and regulators in the future. The study also recommends additional support for complementary staffing and upskilling programs at regulatory agencies at the state and federal levels. 

    Why is energy storage so important?

    The MITEI report shows that energy storage makes deep decarbonization of reliable electric power systems affordable. “Fossil fuel power plant operators have traditionally responded to demand for electricity — in any given moment — by adjusting the supply of electricity flowing into the grid,” says MITEI Director Robert Armstrong, the Chevron Professor of Chemical Engineering and chair of the Future of Energy Storage study. “But VRE resources such as wind and solar depend on daily and seasonal variations as well as weather fluctuations; they aren’t always available to be dispatched to follow electricity demand. Our study finds that energy storage can help VRE-dominated electricity systems balance electricity supply and demand while maintaining reliability in a cost-effective manner — that in turn can support the electrification of many end-use activities beyond the electricity sector.”

    The three-year study is designed to help government, industry, and academia chart a path to developing and deploying electrical energy storage technologies as a way of encouraging electrification and decarbonization throughout the economy, while avoiding excessive or inequitable burdens.

    Focusing on three distinct regions of the United States, the study shows the need for a varied approach to energy storage and electricity system design in different parts of the country. Using modeling tools to look out to 2050, the study team also focuses beyond the United States, to emerging market and developing economy (EMDE) countries, particularly as represented by India. The findings highlight the powerful role storage can play in EMDE nations. These countries are expected to see massive growth in electricity demand over the next 30 years, due to rapid overall economic expansion and to increasing adoption of electricity-consuming technologies such as air conditioning. In particular, the study calls attention to the pivotal role battery storage can play in decarbonizing grids in EMDE countries that lack access to low-cost gas and currently rely on coal generation.

    The authors find that investment in VRE combined with storage is favored over new coal generation over the medium and long term in India, although existing coal plants may linger unless forced out by policy measures such as carbon pricing. 

    “Developing countries are a crucial part of the global decarbonization challenge,” says Robert Stoner, the deputy director for science and technology at MITEI and one of the report authors. “Our study shows how they can take advantage of the declining costs of renewables and storage in the coming decades to become climate leaders without sacrificing economic development and modernization.”

    The study examines four kinds of storage technologies: electrochemical, thermal, chemical, and mechanical. Some of these technologies, such as lithium-ion batteries, pumped storage hydro, and some thermal storage options, are proven and available for commercial deployment. The report recommends that the government focus R&D efforts on other storage technologies, which will require further development to be available by 2050 or sooner — among them, projects to advance alternative electrochemical storage technologies that rely on earth-abundant materials. It also suggests government incentives and mechanisms that reward success but don’t interfere with project management. The report calls for the federal government to change some of the rules governing technology demonstration projects to enable more projects on storage. Policies that require cost-sharing in exchange for intellectual property rights, the report argues, discourage the dissemination of knowledge. The report advocates for federal requirements for demonstration projects that share information with other U.S. entities.

    The report says many existing power plants that are being shut down can be converted to useful energy storage facilities by replacing their fossil fuel boilers with thermal storage and new steam generators. This retrofit can be done using commercially available technologies and may be attractive to plant owners and communities — using assets that would otherwise be abandoned as electricity systems decarbonize.  

    The study also looks at hydrogen and concludes that its use for storage will likely depend on the extent to which hydrogen is used in the overall economy. That broad use of hydrogen, the report says, will be driven by future costs of hydrogen production, transportation, and storage — and by the pace of innovation in hydrogen end-use applications. 

    The MITEI study predicts that the distribution of hourly wholesale prices (the hourly marginal value of energy) will change in deeply decarbonized power systems, with many more hours of very low prices and more hours of high prices than in today’s wholesale markets. The report therefore recommends that systems adopt retail pricing and retail load management options that reward all consumers for shifting electricity use away from times when high wholesale prices indicate scarcity and toward times when low wholesale prices signal abundance.
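
    A toy example of that pricing logic, with made-up numbers rather than anything from the report: a consumer who can move a flexible load from a scarcity hour to an abundance hour captures the wholesale price spread.

    ```python
    # Toy illustration of shifting a flexible load from a high-price hour to a low-price hour.
    # All numbers are hypothetical.
    flexible_load_kwh = 10   # e.g., an EV charging session that can be rescheduled
    scarcity_price = 0.45    # $/kWh in a high-price (scarce) hour
    abundance_price = 0.03   # $/kWh in a low-price (abundant) hour

    savings = flexible_load_kwh * (scarcity_price - abundance_price)
    print(f"Savings from shifting this one session: ${savings:.2f}")  # $4.20
    ```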

    The Future of Energy Storage study is the ninth in MITEI’s “Future of” series, exploring complex and vital issues involving energy and the environment. Previous studies have focused on nuclear power, solar energy, natural gas, geothermal energy, and coal (with capture and sequestration of carbon dioxide emissions), as well as on systems such as the U.S. electric power grid. The Alfred P. Sloan Foundation and the Heising-Simons Foundation provided core funding for MITEI’s Future of Energy Storage study. MITEI members Equinor and Shell provided additional support.

  • MIT expands research collaboration with Commonwealth Fusion Systems to build net energy fusion machine, SPARC

    MIT’s Plasma Science and Fusion Center (PSFC) will substantially expand its fusion energy research and education activities under a new five-year agreement with Institute spinout Commonwealth Fusion Systems (CFS).

    “This expanded relationship puts MIT and PSFC in a prime position to be an even stronger academic leader that can help deliver the research and education needs of the burgeoning fusion energy industry, in part by utilizing the world’s first burning plasma and net energy fusion machine, SPARC,” says PSFC director Dennis Whyte. “CFS will build SPARC and develop a commercial fusion product, while MIT PSFC will focus on its core mission of cutting-edge research and education.”

    Commercial fusion energy has the potential to play a significant role in combating climate change, and there is a concurrent increase in interest from the energy sector, governments, and foundations. The new agreement, administered by the MIT Energy Initiative (MITEI), where CFS is a startup member, will help PSFC expand its fusion technology efforts with a wider variety of sponsors. The collaboration enables rapid execution at scale and technology transfer into the commercial sector as soon as possible.

    This new agreement doubles CFS’ financial commitment to PSFC, enabling greater recruitment and support of students, staff, and faculty. “We’ll significantly increase the number of graduate students and postdocs, and just as important they will be working on a more diverse set of fusion science and technology topics,” notes Whyte. It extends the collaboration between PSFC and CFS that resulted in numerous advances toward fusion power plants, including last fall’s demonstration of a high-temperature superconducting (HTS) fusion electromagnet with record-setting field strength of 20 tesla.

    The combined magnetic fusion efforts at PSFC will surpass those in place during the operation of the pioneering Alcator C-Mod tokamak, which ran from 1993 to 2016. This increase in activity reflects a moment when multiple fusion energy technologies are seeing rapidly accelerating development worldwide, along with the emergence of a new fusion energy industry that would require thousands of trained people.

    MITEI director Robert Armstrong adds, “Our goal from the beginning was to create a membership model that would allow startups who have specific research challenges to leverage the MITEI ecosystem, including MIT faculty, students, and other MITEI members. The team at the PSFC and MITEI have worked seamlessly to support CFS, and we are excited for this next phase of the relationship.”

    PSFC is supporting CFS’ efforts toward realizing the SPARC fusion platform, which facilitates rapid development and refinement of elements (including HTS magnets) needed to build ARC, a compact, modular, high-field fusion power plant that would set the stage for commercial fusion energy production. The concepts originated in Whyte’s nuclear science and engineering class 22.63 (Principles of Fusion Engineering) and have been carried forward by students and PSFC staff, many of whom helped found CFS; the new activity will expand research into advanced technologies for the envisioned pilot plant.

    “This has been an incredibly effective collaboration that has resulted in a major breakthrough for commercial fusion with the successful demonstration of revolutionary fusion magnet technology that will enable the world’s first commercially relevant net energy fusion device, SPARC, currently under construction,” says Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems. “We look forward to this next phase in the collaboration with MIT as we tackle the critical research challenges ahead for the next steps toward fusion power plant development.”

    In the push for commercial fusion energy, the next five years are critical, requiring intensive work on materials longevity, heat transfer, fuel recycling, maintenance, and other crucial aspects of power plant development. It will need innovation from almost every engineering discipline. “Having great teams working now, it will cut the time needed to move from SPARC to ARC, and really unleash the creativity. And the thing MIT does so well is cut across disciplines,” says Whyte.

    “To address the climate crisis, the world needs to deploy existing clean energy solutions as widely and as quickly as possible, while at the same time developing new technologies — and our goal is that those new technologies will include fusion power,” says Maria T. Zuber, MIT’s vice president for research. “To make new climate solutions a reality, we need focused, sustained collaborations like the one between MIT and Commonwealth Fusion Systems. Delivering fusion power onto the grid is a monumental challenge, and the combined capabilities of these two organizations are what the challenge demands.”

    On a strategic level, climate change and the imperative need for widely implementable carbon-free energy have helped orient the PSFC team toward scalability. “Building one or 10 fusion plants doesn’t make a difference — we have to build thousands,” says Whyte. “The design decisions we make will impact the ability to do that down the road. The real enemy here is time, and we want to remove as many impediments as possible and commit to funding a new generation of scientific leaders. Those are critically important in a field with as much interdisciplinary integration as fusion.”

  • Absent legislative victory, the president can still meet US climate goals

    The most recent United Nations climate change report indicates that without significant action to mitigate global warming, the extent and magnitude of climate impacts — from floods to droughts to the spread of disease — could outpace the world’s ability to adapt to them. The latest effort to introduce meaningful climate legislation in the United States Congress, the Build Back Better bill, has stalled. The climate package in that bill — $555 billion in funding for climate resilience and clean energy — aims to reduce U.S. greenhouse gas emissions by about 50 percent below 2005 levels by 2030, the nation’s current Paris Agreement pledge. With prospects of passing a standalone climate package in the Senate far from assured, is there another pathway to fulfilling that pledge?

    Recent detailed legal analysis shows that there is at least one viable option for the United States to achieve the 2030 target without legislative action. Under Section 115 on International Air Pollution of the Clean Air Act, the U.S. Environmental Protection Agency (EPA) could assign emissions targets to the states that collectively meet the national goal. The president could simply issue an executive order to empower the EPA to do just that. But would that be prudent?

    A new study led by researchers at the MIT Joint Program on the Science and Policy of Global Change explores how, under a federally coordinated carbon dioxide emissions cap-and-trade program aligned with the U.S. Paris Agreement pledge and implemented through Section 115 of the Clean Air Act, the EPA might allocate emissions cuts among states. Recognizing that the Biden or any future administration considering this strategy would need to carefully weigh its benefits against its potential political risks, the study highlights the policy’s net economic benefits to the nation.

    The researchers calculate those net benefits by weighing the estimated total cost of reducing carbon dioxide emissions under the policy against the estimated expenditures the policy would avoid: spending on health care due to particulate air pollution, and costs borne by society at large due to climate impacts.
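
    In schematic form the accounting looks like the sketch below. The numbers are purely hypothetical; the study's actual national figures, produced by detailed modeling, follow.

    ```python
    # Schematic net-benefit accounting (hypothetical numbers, billions of dollars in 2030).
    abatement_cost = 40           # total cost of cutting CO2 under the cap-and-trade policy
    avoided_health_costs = 60     # avoided health-care spending tied to particulate air pollution
    avoided_climate_damages = 80  # avoided climate damages, e.g., tons abated x social cost of carbon

    net_benefit = avoided_health_costs + avoided_climate_damages - abatement_cost
    print(f"Net benefit: ${net_benefit} billion")  # positive when avoided costs exceed abatement cost
    ```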

    Assessing three carbon dioxide emissions allocation strategies for implementing Section 115 (each with legal precedent), and assuming that cap-and-trade program revenue is returned to the states and distributed to their residents on an equal per-capita basis, the study finds that the economic net benefits at the national level are substantial, ranging from $70 to $150 billion in 2030. The results appear in the journal Environmental Research Letters.

    “Our findings not only show significant net gains to the U.S. economy under a national emissions policy implemented through the Clean Air Act’s Section 115,” says Mei Yuan, a research scientist at the MIT Joint Program and lead author of the study. “They also show the policy impact on consumer costs may differ across states depending on the choice of allocation strategy.”

    The national price on carbon needed to achieve the policy’s emissions target, as well as the policy’s ultimate cost to consumers, are substantially lower than those found in studies a decade earlier, although in line with other recent studies. The researchers speculate that this is largely due to ongoing expansion of ambitious state policies in the electricity sector and declining renewable energy costs. The policy is also progressive, consistent with earlier studies, in that equal lump-sum distribution of allowance revenue to state residents generally leads to net benefits to lower-income households. Regional disparities in consumer costs can be moderated by the allocation of allowances among states.

    State-by-state emissions estimates for the study are derived from MIT’s U.S. Regional Energy Policy model, with electricity-sector detail from the Renewable Energy Development System model developed by the U.S. National Renewable Energy Laboratory; air quality benefits are estimated using U.S. EPA and other models; and the climate benefits estimate is based on the social cost of carbon, the U.S. federal government’s assessment of the economic damages that would result from emitting one additional ton of carbon dioxide into the atmosphere (currently $51/ton, adjusted for inflation).

    “In addition to illustrating the economic, health, and climate benefits of a Section 115 implementation, our study underscores the advantages of a policy that imposes a uniform carbon price across all economic sectors,” says John Reilly, former co-director of the MIT Joint Program and a study co-author. “A national carbon price would serve as a major incentive for all sectors to decarbonize.”

  • Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of the science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimize just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was used to explore and optimize a search within the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
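
    Stripped of the plasma physics, the workflow is a surrogate-assisted optimization loop. The sketch below is a generic illustration with a toy objective, not the authors' code: a handful of expensive evaluations seed a cheap fitted model, the cheap model is searched for a promising point, and that point is then verified with one more expensive run.

    ```python
    import numpy as np

    def expensive_simulation(x: float) -> float:
        """Stand-in for a costly first-principles run (e.g., one local turbulence calculation)."""
        return (x - 1.9) ** 2 + 0.05 * np.cos(4.0 * x)  # toy objective to be minimized

    # A few initial expensive evaluations seed the surrogate.
    x_data = list(np.linspace(0.0, 3.0, 4))
    y_data = [expensive_simulation(x) for x in x_data]

    for _ in range(6):                                      # each pass costs one extra expensive run
        coeffs = np.polyfit(x_data, y_data, deg=2)          # cheap quadratic surrogate
        grid = np.linspace(0.0, 3.0, 2001)
        x_next = grid[np.argmin(np.polyval(coeffs, grid))]  # optimize the surrogate, not the simulation
        x_data.append(x_next)
        y_data.append(expensive_simulation(x_next))         # verify with the expensive model

    best = int(np.argmin(y_data))
    print(f"Best point x = {x_data[best]:.3f} found with only {len(x_data)} expensive runs")
    ```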

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.” 

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility.

  • What choices does the world need to make to keep global warming below 2 C?

    When the 2015 Paris Agreement set a long-term goal of keeping global warming “well below 2 degrees Celsius, compared to pre-industrial levels” to avoid the worst impacts of climate change, it did not specify how its nearly 200 signatory nations could collectively achieve that goal. Each nation was left to its own devices to reduce greenhouse gas emissions in alignment with the 2 C target. Now a new modeling strategy developed at the MIT Joint Program on the Science and Policy of Global Change that explores hundreds of potential future development pathways provides new insights on the energy and technology choices needed for the world to meet that target.

    Described in a study appearing in the journal Earth’s Future, the new strategy combines two well-known computer modeling techniques to scope out the energy and technology choices needed over the coming decades to reduce emissions sufficiently to achieve the Paris goal.

    The first technique, Monte Carlo analysis, quantifies uncertainty levels for dozens of energy and economic indicators including fossil fuel availability, advanced energy technology costs, and population and economic growth; feeds that information into a multi-region, multi-economic-sector model of the world economy that captures the cross-sectoral impacts of energy transitions; and runs that model hundreds of times to estimate the likelihood of different outcomes. The MIT study focuses on projections through the year 2100 of economic growth and emissions for different sectors of the global economy, as well as energy and technology use.

    The second technique, scenario discovery, uses machine learning tools to screen databases of model simulations in order to identify outcomes of interest and their conditions for occurring. The MIT study applies these tools in a unique way by combining them with the Monte Carlo analysis to explore how different outcomes are related to one another (e.g., do low-emission outcomes necessarily involve large shares of renewable electricity?). This approach can also identify individual scenarios, out of the hundreds explored, that result in specific combinations of outcomes of interest (e.g., scenarios with low emissions, high GDP growth, and limited impact on electricity prices), and also provide insight into the conditions needed for that combination of outcomes.
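
    A stripped-down sketch of how the two techniques fit together appears below. The toy "model," its inputs, and the outcome screen are invented for illustration; the Joint Program's actual multi-region, multi-sector model is far more detailed.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_runs = 1_000

    # Step 1: Monte Carlo analysis -- sample uncertain drivers and run a toy emissions "model"
    # for each draw (all relationships and ranges here are invented).
    gdp_growth = rng.uniform(0.01, 0.04, n_runs)            # annual GDP growth rate
    renewable_cost_decline = rng.uniform(0.0, 0.6, n_runs)  # fractional cost decline by 2050
    carbon_price = rng.uniform(0.0, 150.0, n_runs)          # $/ton CO2
    emissions_2050 = (40 * (1 + gdp_growth) ** 30
                      * (1 - 0.5 * renewable_cost_decline)
                      * np.exp(-0.005 * carbon_price))      # Gt CO2, toy relationship

    # Step 2: scenario discovery -- screen the database of runs for the conditions under which
    # an outcome of interest occurs (here, the lowest quartile of 2050 emissions).
    low_emission = emissions_2050 < np.percentile(emissions_2050, 25)
    for name, values in [("renewable cost decline", renewable_cost_decline),
                         ("carbon price ($/t)", carbon_price),
                         ("GDP growth", gdp_growth)]:
        print(f"{name}: mean in low-emission runs = {values[low_emission].mean():.2f} "
              f"vs. {values.mean():.2f} overall")
    ```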

    Using this unique approach, the MIT Joint Program researchers find several possible patterns of energy and technology development under a specified long-term climate target or economic outcome.

    “This approach shows that there are many pathways to a successful energy transition that can be a win-win for the environment and economy,” says Jennifer Morris, an MIT Joint Program research scientist and the study’s lead author. “Toward that end, it can be used to guide decision-makers in government and industry to make sound energy and technology choices and avoid biases in perceptions of what ‘needs’ to happen to achieve certain outcomes.”

    For example, while achieving the 2 C goal, the global level of combined wind and solar electricity generation by 2050 could be less than three times or more than 12 times the current level (which is just over 2,000 terawatt hours). These are very different energy pathways, but both can be consistent with the 2 C goal. Similarly, there are many different energy mixes that can be consistent with maintaining high GDP growth in the United States while also achieving the 2 C goal, with different possible roles for renewables, natural gas, carbon capture and storage, and bioenergy. The study finds renewables to be the most robust electricity investment option, with sizable growth projected under each of the long-term temperature targets explored.

    The researchers also find that long-term climate targets have little impact on economic output for most economic sectors through 2050, but do require each sector to significantly accelerate reduction of its greenhouse gas emissions intensity (emissions per unit of economic output) so as to reach near-zero levels by midcentury.

    “Given the range of development pathways that can be consistent with meeting a 2 degrees C goal, policies that target only specific sectors or technologies can unnecessarily narrow the solution space, leading to higher costs,” says former MIT Joint Program Co-Director John Reilly, a co-author of the study. “Our findings suggest that policies designed to encourage a portfolio of technologies and sectoral actions can be a wise strategy that hedges against risks.”

    The research was supported by the U.S. Department of Energy Office of Science.

  • Engineers enlist AI to help scale up advanced solar cell manufacturing

    Perovskites are a family of materials that are currently the leading contender to potentially replace today’s silicon-based solar photovoltaics. They hold the promise of panels that are far thinner and lighter, that could be made with ultra-high throughput at room temperature instead of at hundreds of degrees, and that are cheaper and easier to transport and install. But bringing these materials from controlled laboratory experiments into a product that can be manufactured competitively has been a long struggle.

    Manufacturing perovskite-based solar cells involves optimizing at least a dozen or so variables at once, even within one particular manufacturing approach among many possibilities. But a new system based on a novel approach to machine learning could speed up the development of optimized production methods and help make the next generation of solar power a reality.

    The system, developed by researchers at MIT and Stanford University over the last few years, makes it possible to integrate data from prior experiments, and information based on personal observations by experienced workers, into the machine learning process. This makes the outcomes more accurate and has already led to the manufacturing of perovskite cells with an energy conversion efficiency of 18.5 percent, a competitive level for today’s market.

    The research is reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, Stanford professor of materials science and engineering Reinhold Dauskardt, recent MIT research assistant Zhe Liu, Stanford doctoral graduate Nicholas Rolston, and three others.

    Perovskites are a group of layered crystalline compounds defined by the configuration of the atoms in their crystal lattice. There are thousands of such possible compounds and many different ways of making them. While most lab-scale development of perovskite materials uses a spin-coating technique, that’s not practical for larger-scale manufacturing, so companies and labs around the world have been searching for ways of translating these lab materials into a practical, manufacturable product.

    “There’s always a big challenge when you’re trying to take a lab-scale process and then transfer it to something like a startup or a manufacturing line,” says Rolston, who is now an assistant professor at Arizona State University. The team looked at a process that they felt had the greatest potential, a method called rapid spray plasma processing, or RSPP.

    The manufacturing process would involve a moving roll-to-roll surface, or series of sheets, on which the precursor solutions for the perovskite compound would be sprayed or ink-jetted as the sheet rolled by. The material would then move on to a curing stage, providing a rapid and continuous output “with throughputs that are higher than for any other photovoltaic technology,” Rolston says.

    “The real breakthrough with this platform is that it would allow us to scale in a way that no other material has allowed us to do,” he adds. “Even materials like silicon require a much longer timeframe because of the processing that’s done. Whereas you can think of [this approach as more] like spray painting.”

    Within that process, at least a dozen variables may affect the outcome, some of them more controllable than others. These include the composition of the starting materials, the temperature, the humidity, the speed of the processing path, the distance of the nozzle used to spray the material onto a substrate, and the methods of curing the material. Many of these factors can interact with each other, and if the process is in open air, then humidity, for example, may be uncontrolled. Evaluating all possible combinations of these variables through experimentation is impossible, so machine learning was needed to help guide the experimental process.

    But while most machine-learning systems use raw data such as measurements of the electrical and other properties of test samples, they don’t typically incorporate human experience such as qualitative observations made by the experimenters of the visual and other properties of the test samples, or information from other experiments reported by other researchers. So, the team found a way to incorporate such outside information into the machine learning model, using a probability factor based on a mathematical technique called Bayesian Optimization.
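
    A generic sketch of that kind of loop is shown below. It is illustrative only: the objective function, the single process variable, and the seed data are invented, and the published workflow's handling of qualitative observations is not reproduced. The overall structure, a Gaussian-process surrogate plus an expected-improvement rule for choosing the next experiment, seeded with prior results, is the standard Bayesian optimization recipe.

    ```python
    import numpy as np
    from scipy.stats import norm
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import Matern

    def run_experiment(temperature: float) -> float:
        """Stand-in for fabricating and measuring a device at one process setting (toy model)."""
        return -0.002 * (temperature - 120.0) ** 2 + 18.0 + 0.1 * np.sin(temperature / 7.0)

    # "Prior knowledge": results from earlier experiments seed the model before any new runs.
    X = np.array([[60.0], [90.0], [150.0], [180.0]])
    y = np.array([run_experiment(t[0]) for t in X])

    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-6, normalize_y=True)

    for _ in range(8):                                        # each loop = one new experiment
        gp.fit(X, y)
        candidates = np.linspace(50.0, 200.0, 500).reshape(-1, 1)
        mu, sigma = gp.predict(candidates, return_std=True)
        z = (mu - y.max()) / np.maximum(sigma, 1e-9)
        expected_improvement = (mu - y.max()) * norm.cdf(z) + sigma * norm.pdf(z)
        x_next = candidates[np.argmax(expected_improvement)]  # most promising setting to try next
        X = np.vstack([X, x_next.reshape(1, -1)])
        y = np.append(y, run_experiment(x_next[0]))

    print(f"Best setting found: {X[np.argmax(y)][0]:.1f} -> {y.max():.2f}% efficiency (toy numbers)")
    ```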

    Using the system, he says, “having a model that comes from experimental data, we can find out trends that we weren’t able to see before.” For example, they initially had trouble adjusting for uncontrolled variations in humidity in their ambient setting. But the model showed them “that we could overcome our humidity challenges by changing the temperature, for instance, and by changing some of the other knobs.”

    The system now allows experimenters to much more rapidly guide their process in order to optimize it for a given set of conditions or required outcomes. In their experiments, the team focused on optimizing the power output, but the system could also be used to simultaneously incorporate other criteria, such as cost and durability — something members of the team are continuing to work on, Buonassisi says.

    The researchers were encouraged by the Department of Energy, which sponsored the work, to commercialize the technology, and they’re currently focusing on tech transfer to existing perovskite manufacturers. “We are reaching out to companies now,” Buonassisi says, and the code they developed has been made freely available through an open-source server. “It’s now on GitHub, anyone can download it, anyone can run it,” he says. “We’re happy to help companies get started in using our code.”

    Already, several companies are gearing up to produce perovskite-based solar panels, even though they are still working out the details of how to produce them, says Liu, who is now at the Northwestern Polytechnical University in Xi’an, China. He says companies there are not yet doing large-scale manufacturing, but instead starting with smaller, high-value applications such as building-integrated solar tiles where appearance is important. Three of these companies “are on track or are being pushed by investors to manufacture 1 meter by 2-meter rectangular modules [comparable to today’s most common solar panels], within two years,” he says.

    “The problem is, they don’t have a consensus on what manufacturing technology to use,” Liu says. The RSPP method, developed at Stanford, “still has a good chance” to be competitive, he says. And the machine learning system the team developed could prove to be important in guiding the optimization of whatever process ends up being used.

    “The primary goal was to accelerate the process, so it required less time, less experiments, and less human hours to develop something that is usable right away, for free, for industry,” he says.

    “Existing work on machine-learning-driven perovskite PV fabrication largely focuses on spin-coating, a lab-scale technique,” says Ted Sargent, University Professor at the University of Toronto, who was not associated with this work, which he says demonstrates “a workflow that is readily adapted to the deposition techniques that dominate the thin-film industry. Only a handful of groups have the simultaneous expertise in engineering and computation to drive such advances.” Sargent adds that this approach “could be an exciting advance for the manufacture of a broader family of materials” including LEDs, other PV technologies, and graphene, “in short, any industry that uses some form of vapor or vacuum deposition.” 

    The team also included Austin Flick and Thomas Colburn at Stanford and Zekun Ren at the Singapore-MIT Alliance for Research and Technology (SMART). In addition to the Department of Energy, the work was supported by a fellowship from the MIT Energy Initiative, the Graduate Research Fellowship Program from the National Science Foundation, and the SMART program.