More stories

    Future nuclear power reactors could rely on molten salts — but what about corrosion?

    Most discussions of how to avert climate change focus on solar and wind generation as key to the transition to a future carbon-free power system. But Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering at MIT and associate director of the MIT Plasma Science and Fusion Center (PSFC), is impatient with such talk. “We can say we should have only wind and solar someday. But we don’t have the luxury of ‘someday’ anymore, so we can’t ignore other helpful ways to combat climate change,” he says. “To me, it’s an ‘all-hands-on-deck’ thing. Solar and wind are clearly a big part of the solution. But I think that nuclear power also has a critical role to play.”

    For decades, researchers have been working on designs for both fission and fusion nuclear reactors using molten salts as fuels or coolants. While those designs promise significant safety and performance advantages, there’s a catch: Molten salt and the impurities within it often corrode metals, ultimately causing them to crack, weaken, and fail. Inside a reactor, key metal components will be exposed not only to molten salt but also simultaneously to radiation, which generally has a detrimental effect on materials, making them more brittle and prone to failure. Will irradiation make metal components inside a molten salt-cooled nuclear reactor corrode even more quickly?

    Short and Weiyue Zhou PhD ’21, a postdoc in the PSFC, have been investigating that question for eight years. Their recent experimental findings show that certain alloys will corrode more slowly when they’re irradiated — and identifying them among all the available commercial alloys can be straightforward.

    The first challenge — building a test facility

    When Short and Zhou began investigating the effect of radiation on corrosion, practically no reliable facilities existed to look at the two effects at once. The standard approach was to examine such mechanisms in sequence: first corrode, then irradiate, then examine the impact on the material. That approach greatly simplifies the task for the researchers, but with a major trade-off. “In a reactor, everything is going to be happening at the same time,” says Short. “If you separate the two processes, you’re not simulating a reactor; you’re doing some other experiment that’s not as relevant.”

    So, Short and Zhou took on the challenge of designing and building an experimental setup that could do both at once. Short credits a team at the University of Michigan for paving the way by designing a device that could accomplish that feat in water, rather than molten salts. Even so, Zhou notes, it took them three years to come up with a device that would work with molten salts. Both researchers recall failure after failure, but the persistent Zhou ultimately tried a totally new design, and it worked. Short adds that it also took them three years to precisely replicate the salt mixture used by industry — another factor critical to getting a meaningful result. The hardest part was achieving, and then verifying, the required purity by removing critical impurities such as moisture, oxygen, and certain other metals.

    As they were developing and testing their setup, Short and Zhou obtained initial results showing that proton irradiation did not always accelerate corrosion but sometimes actually decelerated it. They and others had hypothesized that possibility, but even so, they were surprised. “We thought we must be doing something wrong,” recalls Short. “Maybe we mixed up the samples or something.” But they subsequently made similar observations for a variety of conditions, increasing their confidence that their initial observations were not outliers.

    The successful setup

    Central to their approach is the use of accelerated protons to mimic the impact of the neutrons inside a nuclear reactor. Generating neutrons would be both impractical and prohibitively expensive, and the neutrons would make everything highly radioactive, posing health risks and requiring very long times for an irradiated sample to cool down enough to be examined. Using protons would enable Short and Zhou to examine radiation-altered corrosion both rapidly and safely.

    Key to their experimental setup is a test chamber that they attach to a proton accelerator. To prepare the test chamber for an experiment, they place inside it a thin disc of the metal alloy being tested on top of a pellet of salt. During the test, the entire foil disc is exposed to a bath of molten salt. At the same time, a beam of protons bombards the sample from the side opposite the salt pellet, but the proton beam is restricted to a circle in the middle of the foil sample. “No one can argue with our results then,” says Short. “In a single experiment, the whole sample is subjected to corrosion, and only a circle in the center of the sample is simultaneously irradiated by protons. We can see the curvature of the proton beam outline in our results, so we know which region is which.”

    The results with that arrangement matched the initial results, confirming the researchers’ preliminary findings and supporting their controversial hypothesis that rather than accelerating corrosion, radiation would actually decelerate corrosion in some materials under some conditions. Fortunately, those happen to be the same conditions that will be experienced by metals in molten salt-cooled reactors.

    Why is that outcome controversial? A closeup look at the corrosion process will explain. When salt corrodes metal, the salt finds atomic-level openings in the solid, seeps in, and dissolves salt-soluble atoms, pulling them out and leaving a gap in the material — a spot where the material is now weak. “Radiation adds energy to atoms, causing them to be ballistically knocked out of their positions and move very fast,” explains Short. So, it makes sense that irradiating a material would cause atoms to move into the salt more quickly, increasing the rate of corrosion. Yet in some of their tests, the researchers found the opposite to be true.

    Experiments with “model” alloys

    The researchers’ first experiments in their novel setup involved “model” alloys consisting of nickel and chromium, a simple combination that would give them a first look at the corrosion process in action. In addition, they added europium fluoride to the salt, a compound known to speed up corrosion. In our everyday world, we often think of corrosion as taking years or decades, but in the more extreme conditions of a molten salt reactor it can noticeably occur in just hours. The researchers used the europium fluoride to speed up corrosion even more without changing the corrosion process. This allowed for more rapid determination of which materials, under which conditions, experienced more or less corrosion with simultaneous proton irradiation.

    The use of protons to emulate neutron damage to materials meant that the experimental setup had to be carefully designed and the operating conditions carefully selected and controlled. Protons are hydrogen atoms with an electrical charge, and under some conditions the hydrogen could chemically react with atoms in the sample foil, altering the corrosion response, or with ions in the salt, making the salt more corrosive. Therefore, the proton beam had to penetrate the foil sample but then stop in the salt as soon as possible. Under these conditions, the researchers found they could deliver a relatively uniform dose of radiation inside the foil layer while also minimizing chemical reactions in both the foil and the salt.

    Tests showed that a proton beam accelerated to 3 million electron-volts combined with a foil sample between 25 and 30 microns thick would work well for their nickel-chromium alloys. The temperature and duration of the exposure could be adjusted based on the corrosion susceptibility of the specific materials being tested.
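    The beam energy and foil thickness quoted above have to be matched: the proton range must slightly exceed the foil thickness so that the beam exits the metal and stops in the salt just beyond it. A minimal sketch of that consistency check, using the Bragg-Kleeman range rule with an exponent and prefactor that are illustrative assumptions calibrated to the article's figures rather than measured stopping-power data:

```python
# Sketch: Bragg-Kleeman range scaling for protons in a metal foil.
# The exponent P and prefactor ALPHA are illustrative assumptions,
# calibrated so a 3 MeV proton range matches the 25-30 micron
# nickel-chromium foils described in the article; real values come
# from stopping-power tables (e.g., NIST PSTAR).

P = 1.8                      # Bragg-Kleeman exponent (assumed)
ALPHA = 30.0 / 3.0**P        # microns per MeV^P, calibrated to R(3 MeV) = 30 um

def proton_range_um(energy_mev: float) -> float:
    """Approximate projected range in microns for a proton of the given energy."""
    return ALPHA * energy_mev**P

def penetrates_foil(energy_mev: float, foil_um: float) -> bool:
    """True if the beam exits the foil and can stop in the salt beyond it."""
    return proton_range_um(energy_mev) > foil_um

for e in (2.0, 3.0, 4.0):
    print(f"{e:.1f} MeV -> range ~ {proton_range_um(e):.1f} um, "
          f"penetrates 28 um foil: {penetrates_foil(e, 28.0)}")
```

    Under these assumed constants, a 2 MeV beam would stop inside a 28-micron foil, while a 3 MeV beam just clears it, which is the geometry the experiment needs.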

    Optical images of samples examined after tests with the model alloys showed a clear boundary between the area that was exposed only to the molten salt and the area that was also exposed to the proton beam. Electron microscope images focusing on that boundary showed that the area that had been exposed only to the molten salt included dark patches where the molten salt had penetrated all the way through the foil, while the area that had also been exposed to the proton beam showed almost no such dark patches.

    To confirm that the dark patches were due to corrosion, the researchers cut through the foil sample to create cross sections. In them, they could see tunnels that the salt had dug into the sample. “For regions not under radiation, we see that the salt tunnels link the one side of the sample to the other side,” says Zhou. “For regions under radiation, we see that the salt tunnels stop more or less halfway and rarely reach the other side. So we verified that they didn’t penetrate the whole way.”

    The results “exceeded our wildest expectations,” says Short. “In every test we ran, the application of radiation slowed corrosion by a factor of two to three.”

    More experiments, more insights

    In subsequent tests, the researchers more closely replicated commercially available molten salt by omitting the additive (europium fluoride) that they had used to speed up corrosion, and they tweaked the temperature for even more realistic conditions. “In carefully monitored tests, we found that by raising the temperature by 100 degrees Celsius, we could get corrosion to happen about 1,000 times faster than it would in a reactor,” says Short.
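    That kind of speedup is characteristic of thermally activated processes, whose rates follow the Arrhenius relation. A small sketch that backs out the activation energy implied by a roughly 1,000-fold acceleration over a 100 degree Celsius rise; the baseline temperature here is an illustrative assumption, not a value reported by the researchers:

```python
import math

# Sketch: how a 100 degree C temperature bump can multiply a thermally
# activated corrosion rate, via the Arrhenius relation
#   rate proportional to exp(-Ea / (R*T)).
# The baseline temperature below is an illustrative assumption.

R_GAS = 8.314  # gas constant, J/(mol*K)

def acceleration_factor(ea_j_per_mol: float, t1_k: float, t2_k: float) -> float:
    """Ratio rate(T2)/rate(T1) for an Arrhenius-activated process."""
    return math.exp(ea_j_per_mol / R_GAS * (1.0 / t1_k - 1.0 / t2_k))

# Back out the activation energy implied by ~1000x acceleration when
# heating from an assumed 650 C baseline to 750 C.
t1, t2 = 650.0 + 273.15, 750.0 + 273.15
ea = math.log(1000.0) * R_GAS / (1.0 / t1 - 1.0 / t2)

print(f"implied Ea ~ {ea / 1000:.0f} kJ/mol")
print(f"acceleration ~ {acceleration_factor(ea, t1, t2):.0f}x")
```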

    Images from experiments with the nickel-chromium alloy plus the molten salt without the corrosive additive yielded further insights. Electron microscope images of the side of the foil sample facing the molten salt showed that in sections only exposed to the molten salt, the corrosion is clearly focused on the weakest part of the structure — the boundaries between the grains in the metal. In sections that were exposed to both the molten salt and the proton beam, the corrosion isn’t limited to the grain boundaries but is more spread out over the surface. Experimental results showed that the resulting cracks are shallower and less likely to cause a key component to break.

    Short explains the observations. Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are areas — called grain boundaries — where the atoms don’t line up as well. In the corrosion-only images, dark lines track the grain boundaries. Molten salt has seeped into the grain boundaries and pulled out salt-soluble atoms. In the corrosion-plus-irradiation images, the damage is more general. It’s not only the grain boundaries that get attacked but also regions within the grains.

    So, when the material is irradiated, the molten salt also removes material from within the grains. Over time, more material comes out of the grains themselves than from the spaces between them. The removal isn’t focused on the grain boundaries; it’s spread out over the whole surface. As a result, any cracks that form are shallower and more spread out, and the material is less likely to fail.

    Testing commercial alloys

    The experiments described thus far involved model alloys — simple combinations of elements that are good for studying science but would never be used in a reactor. In the next series of experiments, the researchers focused on three commercially available alloys that are composed of nickel, chromium, iron, molybdenum, and other elements in various combinations.

    Results from the experiments with the commercial alloys showed a consistent pattern — one that confirmed an idea that the researchers had going in: the higher the concentration of salt-soluble elements in the alloy, the worse the radiation-induced corrosion damage. Radiation will increase the rate at which salt-soluble atoms such as chromium leave the grain boundaries, hastening the corrosion process. However, if there are more insoluble elements such as nickel present, those atoms will go into the salt more slowly. Over time, they’ll accumulate at the grain boundary and form a protective coating that blocks the grain boundary — a “self-healing mechanism that decelerates the rate of corrosion,” say the researchers.

    Thus, if an alloy consists mostly of atoms that don’t dissolve in molten salt, irradiation will cause them to form a protective coating that slows the corrosion process. But if an alloy consists mostly of atoms that dissolve in molten salt, irradiation will make them dissolve faster, speeding up corrosion. As Short summarizes, “In terms of corrosion, irradiation makes a good alloy better and a bad alloy worse.”

    Real-world relevance plus practical guidelines

    Short and Zhou find their results encouraging. In a nuclear reactor made of “good” alloys, the slowdown in corrosion will probably be even more pronounced than what they observed in their proton-based experiments because the neutrons that inflict the damage won’t chemically react with the salt to make it more corrosive. As a result, reactor designers could push the envelope more in their operating conditions, allowing them to get more power out of the same nuclear plant without compromising on safety.

    However, the researchers stress that there’s much work to be done. Many more projects are needed to explore and understand the exact corrosion mechanism in specific alloys under different irradiation conditions. In addition, their findings need to be replicated by groups at other institutions using their own facilities. “What needs to happen now is for other labs to build their own facilities and start verifying whether they get the same results as we did,” says Short. To that end, Short and Zhou have made the details of their experimental setup and all of their data freely available online. “We’ve also been actively communicating with researchers at other institutions who have contacted us,” adds Zhou. “When they’re planning to visit, we offer to show them demonstration experiments while they’re here.”

    But already their findings provide practical guidance for other researchers and equipment designers. For example, the standard way to quantify corrosion damage is by “mass loss,” a measure of how much weight the material has lost. But Short and Zhou consider mass loss a flawed measure of corrosion in molten salts. “If you’re a nuclear plant operator, you usually care whether your structural components are going to break,” says Short. “Our experiments show that radiation can change how deep the cracks are, when all other things are held constant. The deeper the cracks, the more likely a structural component is to break, leading to a reactor failure.”

    In addition, the researchers offer a simple rule for identifying good metal alloys for structural components in molten salt reactors. Manufacturers provide extensive lists of available alloys with different compositions, microstructures, and additives. Faced with a list of options for critical structures, the designer of a new nuclear fission or fusion reactor can simply examine the composition of each alloy being offered. The one with the highest content of corrosion-resistant elements such as nickel will be the best choice. Inside a nuclear reactor, that alloy should respond to a bombardment of radiation not by corroding more rapidly but by forming a protective layer that helps block the corrosion process. “That may seem like a trivial result, but the exact threshold where radiation decelerates corrosion depends on the salt chemistry, the density of neutrons in the reactor, their energies, and a few other factors,” says Short. “Therefore, the complete guidelines are a bit more complicated. But they’re presented in a straightforward way that users can understand and utilize to make a good choice for the molten salt–based reactor they’re designing.”
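    The simple rule above can be sketched as a screening function that ranks alloys by their salt-soluble content. The candidate compositions and the soluble/insoluble classification below are hypothetical simplifications for illustration; as Short notes, the real guidelines also depend on salt chemistry, neutron flux and energy, and other factors:

```python
# Sketch of the screening rule: rank candidate alloys by how much of
# their composition is salt-soluble. The element classification and
# the compositions are hypothetical, not the researchers' guidelines.

SALT_SOLUBLE = {"Cr", "Fe", "Mn"}   # assumed to dissolve readily in molten salt
# elements such as Ni and Mo are treated as corrosion-resistant

def soluble_fraction(composition: dict[str, float]) -> float:
    """Weight fraction of the alloy made up of salt-soluble elements."""
    return sum(wt for el, wt in composition.items() if el in SALT_SOLUBLE)

def rank_alloys(alloys: dict[str, dict[str, float]]) -> list[str]:
    """Best-first ranking: lowest salt-soluble content ranks highest."""
    return sorted(alloys, key=lambda name: soluble_fraction(alloys[name]))

# Hypothetical candidate compositions (weight fractions):
candidates = {
    "alloy_A": {"Ni": 0.72, "Cr": 0.16, "Fe": 0.08, "Mo": 0.04},
    "alloy_B": {"Ni": 0.47, "Cr": 0.22, "Fe": 0.28, "Mo": 0.03},
    "alloy_C": {"Ni": 0.60, "Cr": 0.23, "Fe": 0.14, "Mo": 0.03},
}

for name in rank_alloys(candidates):
    print(f"{name}: {soluble_fraction(candidates[name]):.0%} salt-soluble")
```

    In this toy ranking, the nickel-rich alloy_A comes out on top, mirroring the "good alloy gets better" behavior described above.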

    This research was funded, in part, by Eni S.p.A. through the MIT Plasma Science and Fusion Center’s Laboratory for Innovative Fusion Technologies. Earlier work was funded, in part, by the Transatomic Power Corporation and by the U.S. Department of Energy Nuclear Energy University Program. Equipment development and testing was supported by the Transatomic Power Corporation.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

    Optimizing nuclear fuels for next-generation reactors

    In 2010, when Ericmoore Jossou was attending college in northern Nigeria, the lights would flicker in and out all day, sometimes lasting only for a couple of hours at a time. The frustrating experience reaffirmed Jossou’s realization that the country’s sporadic energy supply was a problem. It was the beginning of his path toward nuclear engineering.

    Because of the energy crisis, “I told myself I was going to find myself in a career that allows me to develop energy technologies that can easily be scaled to meet the energy needs of the world, including my own country,” says Jossou, an assistant professor in a shared position between the departments of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick (1986) Professor, and of Electrical Engineering and Computer Science.

    Today, Jossou uses computer simulations and AI for rational materials design: the purposeful development of cladding materials and fuels for next-generation nuclear reactors. As one of the shared faculty hires between the MIT Schwarzman College of Computing and departments across MIT, his appointment recognizes his commitment to computing for climate and the environment.

    A well-rounded education in Nigeria

    Growing up in Lagos, Jossou knew education was about more than just bookish knowledge, so he was eager to travel and experience other cultures. He would start in his own backyard by traveling across the Niger River and enrolling in Ahmadu Bello University in northern Nigeria. Moving from the south was a cultural education with a different language and different foods. It was here that Jossou first tried, and came to love, tuwo shinkafa, a northern Nigerian rice-based specialty.

    After his undergraduate studies, armed with a bachelor’s degree in chemistry, Jossou was among a small cohort selected for a specialty master’s training program funded by the World Bank Institute and African Development Bank. The program at the African University of Science and Technology in Abuja, Nigeria, is a pan-African venture dedicated to nurturing homegrown science talent on the continent. Visiting professors from around the world taught intensive three-week courses, an experience which felt like drinking from a fire hose. The program widened Jossou’s views and he set his sights on a doctoral program with an emphasis on clean energy systems.

    A pivot to nuclear science

    While in Nigeria, Jossou learned of Professor Jerzy Szpunar at the University of Saskatchewan in Canada, who was looking for a student researcher to explore fuels and alloys for nuclear reactors. Before then, Jossou was lukewarm on nuclear energy, but the research sounded fascinating. The Fukushima incident in Japan was still a recent memory, and Jossou remembered his early determination to address his own country’s energy crisis. He was sold on the idea and graduated with a doctoral degree from the University of Saskatchewan on an international dean’s scholarship.

    Jossou’s postdoctoral work included a brief stint as a staff scientist at Brookhaven National Laboratory. He leaped at the opportunity to join MIT NSE as a way of realizing his research interest and teaching future engineers. “I would really like to conduct cutting-edge research in nuclear materials design and to pass on my knowledge to the next generation of scientists and engineers and there’s no better place to do that than at MIT,” Jossou says.

    Merging material science and computational modeling

    Jossou’s doctoral work on designing nuclear fuels for next-generation reactors forms the basis of research his lab is pursuing at MIT NSE. Nuclear reactors that were built in the 1950s and ’60s are getting a makeover in terms of improved accident tolerance. Reactors are not confined to one kind, either: We have micro reactors and are now considering ones using metallic nuclear fuels, Jossou points out. The diversity of options is enough to keep researchers busy testing materials fit for cladding, the lining that prevents corrosion of the fuel and release of radioactive fission products into the surrounding reactor coolant.

    The team is also investigating fuels that improve burn-up efficiencies, so they can last longer in the reactor. An intriguing approach has been to immobilize the gas bubbles that arise from the fission process, so they don’t grow and degrade the fuel.

    Since joining MIT in July 2023, Jossou has been setting up a lab that optimizes the composition of accident-tolerant nuclear fuels. He is leaning on his materials science background and bringing computer simulations and artificial intelligence into the mix.

    Computer simulations allow the researchers to narrow down the potential field of candidates, optimized for specific parameters, so they can synthesize only the most promising candidates in the lab. And AI’s predictive capabilities guide researchers on which materials composition to consider next. “We no longer depend on serendipity to choose our materials; our lab is based on rational materials design,” Jossou says. “We can rapidly design advanced nuclear fuels.”
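    The workflow described above can be sketched in miniature: a cheap surrogate model scores every candidate, and only the top-ranked few are passed to an expensive simulation. Everything in this sketch, from the one-parameter composition space to both objective functions, is invented for illustration and stands in for real physics codes and trained AI models:

```python
# Toy sketch of simulation-guided materials screening: a cheap surrogate
# scores all candidates; only the best few get the "expensive" simulation.
# Both functions below are made-up stand-ins for illustration.

def expensive_simulation(x: float) -> float:
    """Stand-in for a costly physics simulation of fuel performance."""
    return -(x - 0.6) ** 2          # assume performance peaks at x = 0.6

def surrogate_score(x: float) -> float:
    """Cheap, slightly biased predictor of the simulation result."""
    return -(x - 0.55) ** 2         # the AI model's guess at the optimum

candidates = [i / 100 for i in range(101)]           # 101 candidate compositions
shortlist = sorted(candidates, key=surrogate_score, reverse=True)[:5]
best = max(shortlist, key=expensive_simulation)      # only 5 expensive runs

print(f"shortlist: {shortlist}")
print(f"best composition parameter: {best}")
```

    The point of the pattern is the budget: 101 candidates are screened, but only five ever reach the costly simulation step.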

    Advancing energy causes in Africa

    Now that he is at MIT, Jossou admits the view from the outside is different, and he has a different perspective on what Africa needs to address some of its challenges. “The starting point to solve our problems is not money; it needs to start with ideas,” he says. “We need to find highly skilled people who can actually solve problems.” That job involves adding economic value to the rich arrays of raw materials that the continent is blessed with. It frustrates Jossou that Niger, a country rich in uranium ore, has no nuclear reactors of its own. It ships most of its ore to France. “The path forward is to find a way to refine these materials in Africa and to be able to power the industries on that continent as well,” Jossou says.

    Jossou is determined to do his part to eliminate these roadblocks.

    Jossou’s solution, anchored in mentorship, is to train talent from Africa in his own lab. He has applied for an MIT Global Experiences MISTI grant to facilitate travel and research studies for Ghanaian scientists. “The goal is to conduct research in our facility and perhaps add value to indigenous materials,” Jossou says.

    Adding value has been a consistent theme of Jossou’s career. He remembers wanting to become a neurosurgeon after reading “Gifted Hands,” moved by the personal story of the author, Ben Carson. As Jossou grew older, however, he realized that becoming a doctor wasn’t necessarily what he wanted. Instead, he was looking to add value. “What I wanted was really to take on a career that allows me to solve a societal problem.” The societal problem of clean and safe energy for all is precisely what Jossou is working on today.

    Making the clean energy transition work for everyone

    The clean energy transition is already underway, but how do we make sure it happens in a manner that is affordable, sustainable, and fair for everyone?

    That was the overarching question at this year’s MIT Energy Conference, which took place March 11 and 12 in Boston and was titled “Short and Long: A Balanced Approach to the Energy Transition.”

    Each year, the student-run conference brings together leaders in the energy sector to discuss the progress and challenges they see in their work toward a greener future. Participants come from research, industry, government, academia, and the investment community to network and exchange ideas over two whirlwind days of keynote talks, fireside chats, and panel discussions.

    Several participants noted that clean energy technologies are already cost-competitive with fossil fuels, but changing the way the world works requires more than just technology.

    “None of this is easy, but I think developing innovative new technologies is really easy compared to the things we’re talking about here, which is how to blend social justice, soft engineering, and systems thinking that puts people first,” Daniel Kammen, a distinguished professor of energy at the University of California at Berkeley, said in a keynote talk. “While clean energy has a long way to go, it is more than ready to transition us from fossil fuels.”

    The event also featured a keynote discussion between MIT President Sally Kornbluth and MIT’s Kyocera Professor of Ceramics Yet-Ming Chiang, in which Kornbluth discussed her first year at MIT as well as a recently announced, campus-wide effort to solve critical climate problems known as the Climate Project at MIT.

    “The reason I wanted to come to MIT was I saw that MIT has the potential to solve the world’s biggest problems, and first among those for me was the climate crisis,” Kornbluth said. “I’m excited about where we are, I’m excited about the enthusiasm of the community, and I think we’ll be able to make really impactful discoveries through this project.”

    Fostering new technologies

    Several panels convened experts in new or emerging technology fields to discuss what it will take for their solutions to contribute to deep decarbonization.

    “The fun thing and challenging thing about first-of-a-kind technologies is they’re all kind of different,” said Jonah Wagner, principal assistant director for industrial innovation and clean energy in the U.S. Office of Science and Technology Policy. “You can map their growth against specific challenges you expect to see, but every single technology is going to face their own challenges, and every single one will have to defy an engineering barrier to get off the ground.”

    Among the emerging technologies discussed was next-generation geothermal energy, which uses new techniques to extract heat from the Earth’s crust in new places.

    A promising aspect of the technology is that it can leverage existing infrastructure and expertise from the oil and gas industry. Many newly developed techniques for geothermal production, for instance, use the same drills and rigs as those used for hydraulic fracturing.

    “The fact that we have a robust ecosystem of oil and gas labor and technology in the U.S. makes innovation in geothermal much more accessible compared to some of the challenges we’re seeing in nuclear or direct-air capture, where some of the supply chains are disaggregated around the world,” said Gabrial Malek, chief of staff at the geothermal company Fervo Energy.

    Another technology generating excitement — if not net energy quite yet — is fusion, the process of combining, or fusing, light atoms together to form heavier ones for a net energy gain, in the same process that powers the sun. MIT spinout Commonwealth Fusion Systems (CFS) has already validated many aspects of its approach for achieving fusion power, and the company’s unique partnership with MIT was discussed in a panel on the industry’s progress.

    “We’re standing on the shoulders of decades of research from the scientific community, and we want to maintain those ties even as we continue developing our technology,” CFS Chief Science Officer Brandon Sorbom PhD ’17 said, noting that CFS is one of the largest company sponsors of research at MIT and collaborates with institutions around the world. “Engaging with the community is a really valuable lever to get new ideas and to sanity check our own ideas.”

    Sorbom said that as CFS advances fusion energy, the company is thinking about how it can replicate its processes to lower costs and maximize the technology’s impact around the planet.

    “For fusion to work, it has to work for everyone,” Sorbom said. “I think the affordability piece is really important. We can’t just build this technological jewel that only one class of nations can afford. It has to be a technology that can be deployed throughout the entire world.”

    The event also gave students — many from MIT — a chance to learn more about careers in energy and featured a startup showcase, in which dozens of companies displayed their energy and sustainability solutions.

    “More than 700 people are here from every corner of the energy industry, so there are so many folks to connect with and help me push my vision into reality,” says GreenLIB CEO Fred Rostami, whose company recycles lithium-ion batteries. “The good thing about the energy transition is that a lot of these technologies and industries overlap, so I think we can enable this transition by working together at events like this.”

    A focused climate strategy

    Kornbluth noted that when she came to MIT, a large percentage of students and faculty were already working on climate-related technologies. With the Climate Project at MIT, she wanted to help ensure the whole of those efforts is greater than the sum of its parts.

    The project is organized around six distinct missions, including decarbonizing energy and industry, empowering frontline communities, and building healthy, resilient cities. Kornbluth says the mission areas will help MIT community members collaborate around multidisciplinary challenges. Her team, which includes a committee of faculty advisors, has begun to search for the leads of each mission area, and Kornbluth said she is planning to appoint a vice president for climate at the Institute.

    “I want someone who has the purview of the whole Institute and will report directly to me to help make sure this project stays on track,” Kornbluth explained.

    In his conversation about the initiative with Kornbluth, Yet-Ming Chiang said projects will be funded based on their potential to reduce emissions and make the planet more sustainable at scale.

    “Projects should be very high risk, with very high impact,” Chiang explained. “They should have a chance to prove themselves, and those efforts should not be limited by resources, only by time.”

    In discussing her vision of the climate project, Kornbluth alluded to the “short and long” theme of the conference.

    “It’s about balancing research and commercialization,” Kornbluth said. “The climate project has a very variable timeframe, and I think universities are the sector that can think about the things that might be 30 years out. We have to think about the incentives across the entire innovation pipeline and how we can keep an eye on the long term while making sure the short-term things get out rapidly.”

    Cutting carbon emissions on the US power grid

    To help curb climate change, the United States is working to reduce carbon emissions from all sectors of the energy economy. Much of the current effort involves electrification — switching to electric cars for transportation, electric heat pumps for home heating, and so on. But in the United States, the electric power sector already generates about a quarter of all carbon emissions. “Unless we decarbonize our electric power grids, we’ll just be shifting carbon emissions from one source to another,” says Amanda Farnsworth, a PhD candidate in chemical engineering and research assistant at the MIT Energy Initiative (MITEI).

    But decarbonizing the nation’s electric power grids will be challenging. The availability of renewable energy resources such as solar and wind varies in different regions of the country. Likewise, patterns of energy demand differ from region to region. As a result, the least-cost pathway to a decarbonized grid will differ from one region to another.

    Over the past two years, Farnsworth and Emre Gençer, a principal research scientist at MITEI, developed a power system model that would allow them to investigate the importance of regional differences — and would enable experts and laypeople alike to explore their own regions and make informed decisions about the best way to decarbonize. “With this modeling capability you can really understand regional resources and patterns of demand, and use them to do a ‘bespoke’ analysis of the least-cost approach to decarbonizing the grid in your particular region,” says Gençer.

    To demonstrate the model’s capabilities, Gençer and Farnsworth performed a series of case studies. Their analyses confirmed that strategies must be designed for specific regions and that all the costs and carbon emissions associated with manufacturing and installing solar and wind generators must be included for accurate accounting. But the analyses also yielded some unexpected insights, including a correlation between a region’s wind energy and the ease of decarbonizing, and the important role of nuclear power in decarbonizing the California grid.

    A novel model

    For many decades, researchers have been developing “capacity expansion models” to help electric utility planners tackle the problem of designing power grids that are efficient, reliable, and low-cost. More recently, many of those models also factor in the goal of reducing or eliminating carbon emissions. While those models can provide interesting insights relating to decarbonization, Gençer and Farnsworth believe they leave some gaps that need to be addressed.

    For example, most focus on conditions and needs in a single U.S. region without highlighting the distinctive characteristics of that region. Hardly any consider the carbon emitted in fabricating and installing such “zero-carbon” technologies as wind turbines and solar panels. And finally, most of the models are challenging to use. Even experts in the field must search out and assemble various complex datasets in order to perform a study of interest.

    Gençer and Farnsworth’s capacity expansion model — called Ideal Grid, or IG — addresses those and other shortcomings. IG is built within the framework of MITEI’s Sustainable Energy System Analysis Modeling Environment (SESAME), an energy system modeling platform that Gençer and his colleagues at MITEI have been developing since 2017. SESAME models the levels of greenhouse gas emissions from multiple, interacting energy sectors in future scenarios.

    Importantly, SESAME includes both techno-economic analyses and life-cycle assessments of various electricity generation and storage technologies. It thus considers costs and emissions incurred at each stage of the life cycle (manufacture, installation, operation, and retirement) for all generators. Most capacity expansion models only account for emissions from operation of fossil fuel-powered generators. As Farnsworth notes, “While this is a good approximation for our current grid, emissions from the full life cycle of all generating technologies become non-negligible as we transition to a highly renewable grid.”
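    The life-cycle accounting described above amounts to summing a generator’s emissions across every stage of its life and dividing by the electricity it delivers over its lifetime. A minimal sketch in Python, with all stage values and the lifetime generation figure invented purely for illustration:

    ```python
    # Hypothetical life-cycle emissions for one generator, in tonnes of CO2.
    # A wind turbine, for example, emits nothing during operation, but its
    # manufacture, installation, and retirement are not emission-free.
    lifecycle_emissions_t = {
        "manufacture": 12_000,
        "installation": 3_000,
        "operation": 0,
        "retirement": 1_000,
    }

    lifetime_generation_kwh = 800_000_000  # total kWh delivered over the unit's life

    # Life-cycle emissions intensity in grams of CO2 per kWh
    grams_per_kwh = sum(lifecycle_emissions_t.values()) * 1_000_000 / lifetime_generation_kwh
    print(round(grams_per_kwh, 1))  # 20.0
    ```

    A model that counted only the "operation" stage would score this generator at zero, which is why full life-cycle assessment matters as grids become highly renewable.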

    Through its connection with SESAME, the IG model has access to data on costs and emissions associated with many technologies critical to power grid operation. To explore regional differences in the cost-optimized decarbonization strategies, the IG model also includes conditions within each region, notably details on demand profiles and resource availability.

    In one recent study, Gençer and Farnsworth selected nine of the standard North American Electric Reliability Corporation (NERC) regions. For each region, they incorporated hourly electricity demand into the IG model. Farnsworth also gathered meteorological data for the nine U.S. regions for seven years — 2007 to 2013 — and calculated hourly power output profiles for the renewable energy sources, including solar and wind, taking into account the geography-limited maximum capacity of each technology.

    The availability of wind and solar resources differs widely from region to region. To permit a quick comparison, the researchers use a measure called “annual capacity factor,” which is the ratio between the electricity produced by a generating unit in a year and the electricity that could have been produced if that unit operated continuously at full power for that year. Values for the capacity factors in the nine U.S. regions vary between 20 percent and 30 percent for solar power and between 25 percent and 45 percent for wind.
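    The annual capacity factor reduces to a simple ratio. A minimal sketch in Python, where the hourly output series and the 100 MW nameplate rating are hypothetical values for illustration:

    ```python
    def annual_capacity_factor(hourly_output_mwh, nameplate_mw, hours=8760):
        """Ratio of actual annual generation to the maximum possible
        if the unit ran at full power for every hour of the year."""
        actual = sum(hourly_output_mwh)   # MWh actually produced
        maximum = nameplate_mw * hours    # MWh at continuous full power
        return actual / maximum

    # Illustrative: a 100 MW wind farm producing 350,400 MWh over a year
    print(annual_capacity_factor([350_400], 100))  # 0.4, i.e., a 40 percent capacity factor
    ```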

    Calculating optimized grids for different regions

    For their first case study, Gençer and Farnsworth used the IG model to calculate cost-optimized regional grids to meet defined caps on carbon dioxide (CO2) emissions. The analyses were based on cost and emissions data for 10 technologies: nuclear, wind, solar, three types of natural gas, three types of coal, and energy storage using lithium-ion batteries. Hydroelectric was not considered in this study because there was no comprehensive study outlining potential expansion sites with their respective costs and expected power output levels.

    To make region-to-region comparisons easy, the researchers used several simplifying assumptions. Their focus was on electricity generation, so the model calculations assume the same transmission and distribution costs and efficiencies for all regions. Also, the calculations did not consider the generator fleet currently in place. The goal was to investigate what happens if each region were to start from scratch and generate an “ideal” grid.

    To begin, Gençer and Farnsworth calculated the most economic combination of technologies for each region if it limits its total carbon emissions to 100, 50, and 25 grams of CO2 per kilowatt-hour (kWh) generated. For context, the current U.S. average emissions intensity is 386 grams of CO2 per kWh.

    Given the wide variation in regional demand, the researchers needed to use a new metric to normalize their results and permit a one-to-one comparison between regions. Accordingly, the model calculates the required generating capacity divided by the average demand for each region. The required capacity accounts for both the variation in demand and the inability of generating systems — particularly solar and wind — to operate at full capacity all of the time.
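    That normalization can be sketched in a few lines. The demand profile and the 35,000 MW installed-capacity figure below are hypothetical, chosen only to show how the metric permits comparison across regions of very different size:

    ```python
    def normalized_capacity(installed_capacity_mw, hourly_demand_mw):
        """Required generating capacity divided by average demand."""
        average_demand = sum(hourly_demand_mw) / len(hourly_demand_mw)
        return installed_capacity_mw / average_demand

    # A region averaging 10,000 MW of demand that needs 35,000 MW installed
    # (oversized to cover demand peaks and solar/wind intermittency)
    demand = [8_000, 12_000, 10_000, 10_000]  # toy hourly demand profile, MW
    print(normalized_capacity(35_000, demand))  # 3.5
    ```

    Because the result is dimensionless, a small region and a large one can be compared directly: a higher value means more capacity must be built per unit of demand served.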

    The analysis was based on regional demand data for 2021 — the most recent data available. And for each region, the model calculated the cost-optimized power grid seven times, using weather data from seven years. This discussion focuses on mean values for cost and total capacity installed and also total values for coal and for natural gas, although the analysis considered three separate technologies for each fuel.

    The results of the analyses confirm that there’s a wide variation in the cost-optimized system from one region to another. Most notable is that some regions require a lot of energy storage while others don’t require any at all. The availability of wind resources turns out to play an important role, while the use of nuclear is limited: the carbon intensity of nuclear (including uranium mining and transportation) is lower than that of either solar or wind, but nuclear is the most expensive technology option, so it’s added only when necessary. Finally, the change in the CO2 emissions cap brings some interesting responses.

    Under the most lenient limit on emissions — 100 grams of CO2 per kWh — there’s no coal in the mix anywhere. It’s the first to go, in general being replaced by the lower-carbon-emitting natural gas. Texas, Central, and North Central — the regions with the most wind — don’t need energy storage, while the other six regions do. The regions with the least wind — California and the Southwest — have the highest energy storage requirements. Unlike the other regions modeled, California begins installing nuclear, even at the most lenient limit.

    As the model plays out, under the moderate cap — 50 grams of CO2 per kWh — most regions bring in nuclear power. California and the Southeast — regions with low wind capacity factors — rely on nuclear the most. In contrast, wind-rich Texas, Central, and North Central don’t incorporate nuclear yet but instead add energy storage — a less-expensive option — to their mix. There’s still a bit of natural gas everywhere, in spite of its CO2 emissions.

    Under the most restrictive cap — 25 grams of CO2 per kWh — nuclear is in the mix everywhere. The highest use of nuclear is again correlated with low wind capacity factor. Central and North Central depend on nuclear the least. All regions continue to rely on a little natural gas to keep prices from skyrocketing due to the necessary but costly nuclear component. With nuclear in the mix, the need for storage declines in most regions.

    Results of the cost analysis are also interesting. Texas, Central, and North Central all have abundant wind resources, and they can delay incorporating the costly nuclear option, so the cost of their optimized system tends to be lower than costs for the other regions. In addition, their total capacity deployment — including all sources — tends to be lower than for the other regions. California and the Southwest both rely heavily on solar, and in both regions, costs and total deployment are relatively high.

    Lessons learned

    One unexpected result is the benefit of combining solar and wind resources. The problem with relying on solar alone is obvious: “Solar energy is available only five or six hours a day, so you need to build a lot of other generating sources and abundant storage capacity,” says Gençer. But an analysis of unit-by-unit operations at an hourly resolution yielded a less-intuitive trend: While solar installations only produce power in the midday hours, wind turbines generate the most power in the nighttime hours. As a result, solar and wind power are complementary. Having both resources available is far more valuable than having either one or the other. And having both impacts the need for storage, says Gençer: “Storage really plays a role either when you’re targeting a very low carbon intensity or where your resources are mostly solar and they’re not complemented by wind.”

    Gençer notes that the target for the U.S. electricity grid is to reach net zero by 2035. But the analysis showed that reaching just 100 grams of CO2 per kWh would require at least 50 percent of system capacity to be wind and solar. “And we’re nowhere near that yet,” he says.

    Indeed, Gençer and Farnsworth’s analysis doesn’t even include a zero emissions case. Why not? As Gençer says, “We cannot reach zero.” Wind and solar are usually considered to be net zero, but that’s not true. Wind, solar, and even storage have embedded carbon emissions due to materials, manufacturing, and so on. “To go to true net zero, you’d need negative emission technologies,” explains Gençer, referring to techniques that remove carbon from the air or ocean. That observation confirms the importance of performing life-cycle assessments.

    Farnsworth voices another concern: Coal quickly disappears in all regions because natural gas is an easy substitute for coal and has lower carbon emissions. “People say they’ve decreased their carbon emissions by a lot, but most have done it by transitioning from coal to natural gas power plants,” says Farnsworth. “But with that pathway for decarbonization, you hit a wall. Once you’ve transitioned from coal to natural gas, you’ve got to do something else. You need a new strategy — a new trajectory to actually reach your decarbonization target, which most likely will involve replacing the newly installed natural gas plants.”

    Gençer makes one final point: The availability of cheap nuclear — whether fission or fusion — would completely change the picture. When the tighter caps require the use of nuclear, the cost of electricity goes up. “The impact is quite significant,” says Gençer. “When we go from 100 grams down to 25 grams of CO2 per kWh, we see a 20 percent to 30 percent increase in the cost of electricity.” If it were available, a less-expensive nuclear option would likely be included in the technology mix under more lenient caps, significantly reducing the cost of decarbonizing power grids in all regions.

    The special case of California

    In another analysis, Gençer and Farnsworth took a closer look at California. In California, about 10 percent of total demand is now met with nuclear power. Yet current power plants are scheduled for retirement very soon, and a 1976 law forbids the construction of new nuclear plants. (The state recently extended the lifetime of one nuclear plant to prevent the grid from becoming unstable.) “California is very motivated to decarbonize their grid,” says Farnsworth. “So how difficult will that be without nuclear power?”

    To find out, the researchers performed a series of analyses to investigate the challenge of decarbonizing in California with nuclear power versus without it. At 200 grams of CO2 per kWh — about a 50 percent reduction — the optimized mix and cost look the same with and without nuclear. Nuclear doesn’t appear due to its high cost. At 100 grams of CO2 per kWh — about a 75 percent reduction — nuclear does appear in the cost-optimized system, reducing the total system capacity while having little impact on the cost.

    But at 50 grams of CO2 per kWh, the ban on nuclear makes a significant difference. “Without nuclear, there’s about a 45 percent increase in total system size, which is really quite substantial,” says Farnsworth. “It’s a vastly different system, and it’s more expensive.” Indeed, the cost of electricity would increase by 7 percent.

    Going one step further, the researchers performed an analysis to determine the most decarbonized system possible in California. Without nuclear, the state could reach 40 grams of CO2 per kWh. “But when you allow for nuclear, you can get all the way down to 16 grams of CO2 per kWh,” says Farnsworth. “We found that California needs nuclear more than any other region due to its poor wind resources.”

    Impacts of a carbon tax

    One more case study examined a policy approach to incentivizing decarbonization. Instead of imposing a ceiling on carbon emissions, this strategy would tax every ton of carbon that’s emitted. Proposed taxes range from zero to $100 per ton.

    To investigate the effectiveness of different levels of carbon tax, Farnsworth and Gençer used the IG model to calculate the minimum-cost system for each region, assuming a certain cost for emitting each ton of carbon. The analyses show that a low carbon tax — just $10 per ton — significantly reduces emissions in all regions by phasing out all coal generation. In the Northwest region, for example, a carbon tax of $10 per ton decreases system emissions by 65 percent while increasing system cost by just 2.8 percent (relative to an untaxed system).
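    Conceptually, a carbon tax simply adds an emissions-proportional term to each technology’s per-MWh cost, which can shift which technology is cheapest. A toy illustration, with all costs and emission rates invented (these are not the IG model’s actual inputs):

    ```python
    def effective_cost(cost_per_mwh, tons_co2_per_mwh, tax_per_ton):
        """Per-MWh cost of a technology once a carbon tax is applied."""
        return cost_per_mwh + tons_co2_per_mwh * tax_per_ton

    # Invented figures: (base cost in $/MWh, emission rate in tons CO2/MWh)
    coal = (30, 1.0)
    gas = (40, 0.4)

    for tax in (0, 10, 20):
        print(tax, effective_cost(*coal, tax), effective_cost(*gas, tax))
    # At $0/ton coal is cheapest; because coal emits far more per MWh, its
    # effective cost rises faster, and by $20/ton gas undercuts it,
    # mirroring the coal phase-out the analysis observes at low tax levels.
    ```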

    After coal has been phased out of all regions, every increase in the carbon tax brings a slow but steady linear decrease in emissions and a linear increase in cost. But the rates of those changes vary from region to region. For example, the rate of decrease in emissions for each added tax dollar is far lower in the Central region than in the Northwest, largely due to the Central region’s already low emissions intensity without a carbon tax. Indeed, the Central region without a carbon tax has a lower emissions intensity than the Northwest region with a tax of $100 per ton.

    As Farnsworth summarizes, “A low carbon tax — just $10 per ton — is very effective in quickly incentivizing the replacement of coal with natural gas. After that, it really just incentivizes the replacement of natural gas technologies with more renewables and more energy storage.” She concludes, “If you’re looking to get rid of coal, I would recommend a carbon tax.”

    Future extensions of IG

    The researchers have already added hydroelectric to the generating options in the IG model, and they are now planning further extensions. For example, they will include additional regions for analysis, add other long-term energy storage options, and make changes that allow analyses to take into account the generating infrastructure that already exists. Also, they will use the model to examine the cost and value of interregional transmission to take advantage of the diversity of available renewable resources.

    Farnsworth emphasizes that the analyses reported here are just samples of what’s possible using the IG model. The model is a web-based tool that includes embedded data covering the whole United States, and the output from an analysis includes an easy-to-understand display of the required installations, hourly operation, and overall techno-economic analysis and life-cycle assessment results. “The user is able to go in and explore a vast number of scenarios with no data collection or pre-processing,” she says. “There’s no barrier to begin using the tool. You can just hop on and start exploring your options so you can make an informed decision about the best path forward.”

    This work was supported by the International Energy Agency Gas and Oil Technology Collaboration Program and the MIT Energy Initiative Low-Carbon Energy Centers.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • in

    Tests show high-temperature superconducting magnets are ready for fusion

    In the predawn hours of Sept. 5, 2021, engineers achieved a major milestone in the labs of MIT’s Plasma Science and Fusion Center (PSFC), when a new type of magnet, made from high-temperature superconducting material, achieved a world-record magnetic field strength of 20 tesla for a large-scale magnet. That’s the intensity needed to build a fusion power plant that is expected to produce a net output of power and potentially usher in an era of virtually limitless power production.

    The test was immediately declared a success, having met all the criteria established for the design of the new fusion device, dubbed SPARC, for which the magnets are the key enabling technology. Champagne corks popped as the weary team of experimenters, who had labored long and hard to make the achievement possible, celebrated their accomplishment.

    But that was far from the end of the process. Over the ensuing months, the team tore apart and inspected the components of the magnet, pored over and analyzed the data from hundreds of instruments that recorded details of the tests, and performed two additional test runs on the same magnet, ultimately pushing it to its breaking point in order to learn the details of any possible failure modes.

    All of this work has now culminated in a detailed report by researchers at PSFC and MIT spinout company Commonwealth Fusion Systems (CFS), published in a collection of six peer-reviewed papers in a special edition of the March issue of IEEE Transactions on Applied Superconductivity. Together, the papers describe the design and fabrication of the magnet and the diagnostic equipment needed to evaluate its performance, as well as the lessons learned from the process. Overall, the team found, the predictions and computer modeling were spot-on, verifying that the magnet’s unique design elements could serve as the foundation for a fusion power plant.

    Enabling practical fusion power

    The successful test of the magnet, says Hitachi America Professor of Engineering Dennis Whyte, who recently stepped down as director of the PSFC, was “the most important thing, in my opinion, in the last 30 years of fusion research.”

    Before the Sept. 5 demonstration, the best-available superconducting magnets were powerful enough to potentially achieve fusion energy — but only at sizes and costs that could never be practical or economically viable. Then, when the tests showed the practicality of such a strong magnet at a greatly reduced size, “overnight, it basically changed the cost per watt of a fusion reactor by a factor of almost 40 in one day,” Whyte says.

    “Now fusion has a chance,” Whyte adds. Tokamaks, the most widely used design for experimental fusion devices, “have a chance, in my opinion, of being economical because you’ve got a quantum change in your ability, with the known confinement physics rules, about being able to greatly reduce the size and the cost of objects that would make fusion possible.”

    The comprehensive data and analysis from the PSFC’s magnet test, as detailed in the six new papers, has demonstrated that plans for a new generation of fusion devices — the one designed by MIT and CFS, as well as similar designs by other commercial fusion companies — are built on a solid foundation in science.

    The superconducting breakthrough

    Fusion, the process of combining light atoms to form heavier ones, powers the sun and stars, but harnessing that process on Earth has proved to be a daunting challenge, with decades of hard work and many billions of dollars spent on experimental devices. The long-sought, but never yet achieved, goal is to build a fusion power plant that produces more energy than it consumes. Such a power plant could produce electricity without emitting greenhouse gases during operation, while generating very little radioactive waste. Fusion’s fuel, a form of hydrogen that can be derived from seawater, is virtually limitless.

    But to make it work requires compressing the fuel at extraordinarily high temperatures and pressures, and since no known material could withstand such temperatures, the fuel must be held in place by extremely powerful magnetic fields. Producing such strong fields requires superconducting magnets, but all previous fusion magnets have been made with a superconducting material that requires frigid temperatures of about 4 degrees above absolute zero (4 kelvins, or -270 degrees Celsius). In the last few years, a newer material nicknamed REBCO, for rare-earth barium copper oxide, was added to fusion magnets, allowing them to operate at 20 kelvins, a temperature that, despite being only 16 kelvins warmer, brings significant advantages in terms of material properties and practical engineering.

    Taking advantage of this new higher-temperature superconducting material was not just a matter of substituting it in existing magnet designs. Instead, “it was a rework from the ground up of almost all the principles that you use to build superconducting magnets,” Whyte says. The new REBCO material is “extraordinarily different than the previous generation of superconductors. You’re not just going to adapt and replace, you’re actually going to innovate from the ground up.” The new papers in Transactions on Applied Superconductivity describe the details of that redesign process, now that patent protection is in place.

    A key innovation: no insulation

    One of the dramatic innovations, which had many others in the field skeptical of its chances of success, was the elimination of insulation around the thin, flat ribbons of superconducting tape that formed the magnet. Like virtually all electrical wires, conventional superconducting magnets are fully protected by insulating material to prevent short-circuits between the wires. But in the new magnet, the tape was left completely bare; the engineers relied on REBCO’s much greater conductivity to keep the current flowing through the material.

    “When we started this project, in let’s say 2018, the technology of using high-temperature superconductors to build large-scale high-field magnets was in its infancy,” says Zach Hartwig, the Robert N. Noyce Career Development Professor in the Department of Nuclear Science and Engineering. Hartwig has a co-appointment at the PSFC and is the head of its engineering group, which led the magnet development project. “The state of the art was small benchtop experiments, not really representative of what it takes to build a full-size thing. Our magnet development project started at benchtop scale and ended up at full scale in a short amount of time,” he adds, noting that the team built a 20,000-pound magnet that produced a steady, even magnetic field of just over 20 tesla — far beyond any such field ever produced at large scale.

    “The standard way to build these magnets is you would wind the conductor and you have insulation between the windings, and you need insulation to deal with the high voltages that are generated during off-normal events such as a shutdown.” Eliminating the layers of insulation, he says, “has the advantage of being a low-voltage system. It greatly simplifies the fabrication processes and schedule.” It also leaves more room for other elements, such as more cooling or more structure for strength.

    The magnet assembly is a slightly smaller-scale version of the ones that will form the donut-shaped chamber of the SPARC fusion device now being built by CFS in Devens, Massachusetts. It consists of 16 plates, called pancakes, each bearing a spiral winding of the superconducting tape on one side and cooling channels for helium gas on the other.

    But the no-insulation design was considered risky, and a lot was riding on the test program. “This was the first magnet at any sufficient scale that really probed what is involved in designing and building and testing a magnet with this so-called no-insulation no-twist technology,” Hartwig says. “It was very much a surprise to the community when we announced that it was a no-insulation coil.”

    Pushing to the limit … and beyond

    The initial test, described in previous papers, proved that the design and manufacturing process not only worked but was highly stable — something that some researchers had doubted. The next two test runs, also performed in late 2021, then pushed the device to the limit by deliberately creating unstable conditions, including a complete shutoff of incoming power that can lead to a catastrophic overheating. Known as quenching, this is considered a worst-case scenario for the operation of such magnets, with the potential to destroy the equipment.

    Part of the mission of the test program, Hartwig says, was “to actually go off and intentionally quench a full-scale magnet, so that we can get the critical data at the right scale and the right conditions to advance the science, to validate the design codes, and then to take the magnet apart and see what went wrong, why did it go wrong, and how do we take the next iteration toward fixing that. … It was a very successful test.”

    That final test, which ended with the melting of one corner of one of the 16 pancakes, produced a wealth of new information, Hartwig says. For one thing, they had been using several different computational models to design and predict the performance of various aspects of the magnet’s performance, and for the most part, the models agreed in their overall predictions and were well-validated by the series of tests and real-world measurements. But in predicting the effect of the quench, the model predictions diverged, so it was necessary to get the experimental data to evaluate the models’ validity.

    “The highest-fidelity models that we had predicted almost exactly how the magnet would warm up, to what degree it would warm up as it started to quench, and where the resulting damage to the magnet would be,” he says. As described in detail in one of the new reports, “That test actually told us exactly the physics that was going on, and it told us which models were useful going forward and which to leave by the wayside because they’re not right.”

    Whyte says, “Basically we did the worst thing possible to a coil, on purpose, after we had tested all other aspects of the coil performance. And we found that most of the coil survived with no damage,” while one isolated area sustained some melting. “It’s like a few percent of the volume of the coil that got damaged.” And that led to revisions in the design that are expected to prevent such damage in the actual fusion device magnets, even under the most extreme conditions.

    Hartwig emphasizes that a major reason the team was able to accomplish such a radical new record-setting magnet design, and get it right the very first time and on a breakneck schedule, was thanks to the deep level of knowledge, expertise, and equipment accumulated over decades of operation of the Alcator C-Mod tokamak, the Francis Bitter Magnet Laboratory, and other work carried out at PSFC. “This goes to the heart of the institutional capabilities of a place like this,” he says. “We had the capability, the infrastructure, and the space and the people to do these things under one roof.”

    The collaboration with CFS was also key, he says, with MIT and CFS combining the most powerful aspects of an academic institution and private company to do things together that neither could have done on their own. “For example, one of the major contributions from CFS was leveraging the power of a private company to establish and scale up a supply chain at an unprecedented level and timeline for the most critical material in the project: 300 kilometers (186 miles) of high-temperature superconductor, which was procured with rigorous quality control in under a year, and integrated on schedule into the magnet.”

    The integration of the two teams, those from MIT and those from CFS, also was crucial to the success, he says. “We thought of ourselves as one team, and that made it possible to do what we did.”

  • in

    Making nuclear energy facilities easier to build and transport

    For the United States to meet its net zero goals, nuclear energy needs to be on the smorgasbord of options. The problem: Its production still suffers from a lack of scale. To increase access rapidly, we need to stand up reactors quickly, says Isabel Naranjo De Candido, a third-year doctoral student advised by Professor Koroush Shirvan.

    One option is to work with microreactors, transportable units that can be wheeled to areas that need clean electricity. Naranjo De Candido’s master’s thesis at MIT, supervised by Professor Jacopo Buongiorno, focused on such reactors.

    Another way to improve access to nuclear energy is to develop reactors that are modular so their component units can be manufactured quickly while still maintaining quality. “The idea is that you apply the industrialization techniques of manufacturing so companies produce more [nuclear] vessels, with a more predictable supply chain,” she says. The assumption is that working with standardized recipes to manufacture just a few designed components over and over again improves speed and reliability and decreases cost.

    As part of her doctoral studies, Naranjo De Candido is working on optimizing the operations and management of these small, modular reactors so they can be efficient in all stages of their lifecycle: building; operations and maintenance; and decommissioning. The motivation for her research is simple: “We need nuclear for climate change because we need a reliable and stable source of energy to fight climate change,” she says.

    A childhood in Italy

    Despite her passion for nuclear energy and engineering today, Naranjo De Candido was unsure what she wanted to pursue after high school in Padua, Italy. The daughter of an Italian physician mother and a Spanish architect father, she enrolled in a science-based high school shortly after middle school, as she knew that was the track she enjoyed best.

    Having earned very high marks in school, she won a full scholarship to study in Pisa, at the special Sant’Anna School of Advanced Studies. Housed in a centuries-old convent, the school granted only master’s and doctoral degrees. “I had to select what to study but I was unsure. I knew I was interested in engineering,” she recalls, “so I selected mechanical engineering because it’s more generic.”

    It turns out Sant’Anna was a perfect fit for Naranjo De Candido to explore her passions. An inspirational nuclear engineering course during her studies set her on the path toward studying the field as part of her master’s studies in Pisa. During her time there, she traveled around the world — to China as part of a student exchange program and to Switzerland and the United States for internships. “I formed a good background and curriculum and that allowed me to [gain admission] to MIT,” she says.

    At an internship at NASA’s Jet Propulsion Lab, she met an MIT mechanical engineering student who encouraged her to apply to the school for doctoral studies. Yet another mentor in the Italian nuclear sector had also suggested she apply to MIT to pursue nuclear engineering, so she decided to take the leap.

    And she is glad she did.

    Improving access to nuclear energy

    At MIT, Naranjo De Candido is working on improving access to nuclear energy by scaling down reactor size and, in the case of microreactors, making them mobile enough to travel to places where they’re needed. “The idea with a microreactor is that when the fuel is exhausted, you replace the entire microreactor onsite with a freshly fueled unit and take the old one back to a central facility where it’s going to be refueled,” she says. One of the early use cases for such microreactors has been remote mining sites, which need reliable power 24/7.

    Modular reactors, about 10 times the size of microreactors, ensure access differently: The components can be manufactured and installed at scale. These reactors don’t just deliver electricity but also cater to the market for industrial heat, she says. “You can locate them close to industrial facilities and use the heat directly to power ammonia or hydrogen production or water desalination, for example,” she adds.

    As more of these modular reactors are installed, the industry is expected to expand to include enterprises that choose to simply build them and hand off operations to other companies. Whereas traditional nuclear energy reactors might have a full suite of staff on board, smaller-scale reactors such as modular ones cannot afford to staff in large numbers, so talent needs to be optimized and staff shared among many units. “Many of these companies are very interested in knowing exactly how many people and how much money to allocate, and how to organize resources to serve more than one reactor at the same time,” she says.

    Naranjo De Candido is working on a complex software program that factors in a large range of variables — from raw materials cost and worker training to reactor size, megawatt output, and more — and leans on historical data to predict what resources newer plants might need. The program also informs operators about the trade-offs they need to accept. For example, she explains, “if you reduce people below the typical level assigned, how does that impact the reliability of the plant, that is, the number of hours that it is able to operate without malfunctions and failures?”
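    The staffing-versus-reliability trade-off described above can be illustrated with a toy calculation. Everything below — the availability model, the failure and repair rates — is a hypothetical sketch for illustration, not part of Naranjo De Candido’s actual software.

```python
# Toy sketch of the staffing-vs-reliability trade-off: a shared
# maintenance staff serves a fleet of small modular reactors, and
# fewer workers per reactor means failed units wait longer for repair.
# The model and all numbers are illustrative assumptions.

def expected_availability(staff, reactors,
                          fail_rate=0.02,          # repair demand per reactor
                          repairs_per_worker=1.5):  # repair effort per worker
    """Estimate the fraction of hours the fleet can operate."""
    if staff <= 0 or reactors <= 0:
        raise ValueError("staff and reactors must be positive")
    workload = reactors * fail_rate                   # total repair demand
    capacity = staff * repairs_per_worker / reactors  # repair effort per unit
    return capacity / (capacity + workload)  # rises with staff, falls with fleet size

# Sharing 10 workers across 4 units beats sharing 5 across the same fleet:
print(expected_availability(10, 4) > expected_availability(5, 4))  # True
```

    A real model would add the variables the article mentions — training, specialized skillsets, and safety limits on worker hours — but the shape of the trade-off is the same: availability saturates as staff grows, which is why operators want to know the minimum staffing that keeps reliability acceptable.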

    Managing and operating a nuclear reactor is particularly complex because safety standards limit how much time workers can spend in certain areas and dictate how safe zones must be handled.

    “There’s a shortage of [qualified talent] in the industry so this is not just about reducing costs but also about making it possible to have plants out there,” Naranjo De Candido says. Different types of talent are needed, from professionals who specialize in mechanical components to electronic controls. The model that she is working on considers the need for such specialized skillsets as well as making room for cross-training talent in multiple fields as needed.

    In keeping with her goal of making nuclear energy more accessible, the optimization software will be open-source, available for all to use. “We want this to be a common ground for utilities and vendors and other players to be able to communicate better,” Naranjo De Candido says. Doing so, she hopes, will accelerate the operation of nuclear energy plants at scale — an achievement that would come not a moment too soon.

    New study shows how universities are critical to emerging fusion industry

    A new study suggests that universities have an essential role to fulfill in the continued growth and success of any modern high-tech industry, and especially the nascent fusion industry; however, the importance of that role is not reflected in the number of fusion-oriented faculty and educational channels currently available. Academia’s responsiveness to the birth of other modern scientific fields, such as aeronautics and nuclear fission, provides a template for the steps universities can take to enable a robust fusion industry.

    Authored by Dennis Whyte, the Hitachi America Professor of Engineering and director of the Plasma Science and Fusion Center at MIT; Carlos Paz-Soldan, associate professor of applied physics and applied mathematics at Columbia University; and Brian D. Wirth, the Governor’s Chair Professor of Computational Nuclear Engineering at the University of Tennessee, the paper was recently published in the journal Physics of Plasmas as part of a special collection titled “Private Fusion Research: Opportunities and Challenges in Plasma Science.”

    With contributions from authors in academia, government, and private industry, the collection outlines a framework for public-private partnerships that will be essential for the success of the fusion industry.

    Now being seen as a potential source of unlimited green energy, fusion is the same process that powers the sun — hydrogen atoms combine to form helium, releasing vast amounts of clean energy in the form of light and heat.

    The excitement surrounding fusion’s arrival has resulted in the proliferation of dozens of for-profit companies positioning themselves at the forefront of the commercial fusion energy industry. In the near future, those companies will require a significant network of fusion-fluent workers to take on varied tasks requiring a range of skills.

    While the authors acknowledge the role of private industry, especially as an increasingly dominant source of research funding, they also show that academia is and will continue to be critical to the industry’s development and cannot be decoupled from private industry’s growth. Despite this burgeoning interest, the field’s academic network at U.S. universities remains sparse.

    According to Whyte, “Diversifying the [fusion] field by adding more tracks for master’s students and undergraduates who can transition into industry more quickly is an important step.”

    An analysis found that while 57 universities in the United States are active in plasma and fusion research, the average number of tenured or tenure-track plasma/fusion faculty at each institution is only two. By comparison, a sampling of U.S. News & World Report’s top 10 programs for nuclear fission and aeronautics/astronautics found an average of nearly 20 faculty devoted to fission and 32 to aero/astro.

    “University programs in fusion and their sponsors need to up their game and hire additional faculty if they want to provide the necessary workforce to support a growing U.S. fusion industry,” adds Paz-Soldan.

    The growth and proliferation of those fields and others, such as computing and biotechnology, were historically in lockstep with the creation of academic programs that helped drive the fields’ progress and widespread acceptance. Creating a similar path for fusion is essential to ensuring its sustainable growth, and as Wirth notes, “that this growth should be pursued in a way that is interdisciplinary across numerous engineering and science disciplines.”

    At MIT, an example of that path is seen at the Plasma Science and Fusion Center.

    The center has deep historical ties to government research programs, and the largest fusion company in the world, Commonwealth Fusion Systems (CFS), was spun out of the PSFC by Whyte’s former students and an MIT postdoc. Whyte also serves as the principal investigator in collaborative research with CFS on SPARC, a proof-of-concept fusion platform for advancing tokamak science that is scheduled for completion in 2025.

    “Public and private roles in the fusion community are rapidly evolving in response to the growth of privately funded commercial product development,” says Michael Segal, head of open innovation at CFS. “The fusion industry will increasingly rely on its university partners to train students, work across diverse disciplines, and execute small and midsize programs at speed.”

    According to the authors, another key reason academia will remain essential to the continued growth and development of fusion is because it is unconflicted. Whyte comments, “Our mandate is sharing information and education, which means we have no competitive conflict and innovation can flow freely.” Furthermore, fusion science is inherently multidisciplinary: “[It] requires physicists, computer scientists, engineers, chemists, etc. and it’s easy to tap into all those disciplines in an academic environment where they’re all naturally rubbing elbows and collaborating.”

    Creating a new energy industry, however, will also require a workforce skilled in disciplines other than STEM, say the authors. As fusion companies continue to grow, they will need expertise in finance, safety, licensing, and market analysis. Any successful fusion enterprise will also have major geopolitical, societal, and economic impacts, all of which must be managed.

    Ultimately, there are several steps the authors identify to help build the connections between academia and industry that will be important going forward: The first is for universities to acknowledge the rapidly changing fusion landscape and begin to adapt. “Universities need to embrace the growth of the private sector in fusion, recognize the opportunities it provides, and seek out mutually beneficial partnerships,” says Paz-Soldan.

    The second step is to reconcile the mission of educational institutions — unconflicted open access — with condensed timelines and proprietary outputs that come with private partnerships. At the same time, the authors note that private fusion companies should embrace the transparency of academia by publishing and sharing the findings they can through peer-reviewed journals, which will be a necessary part of building the industry’s credibility.

    The last step, the authors say, is for universities to become more flexible and creative in their technology licensing strategies to ensure ideas and innovations find their way from the lab into industry.

    “As an industry, we’re in a unique position because everything is brand new,” Whyte says. “But we’re enough students of history that we can see what’s needed to succeed; quantifying the status of the private and academic landscape is an important strategic touchstone. By drawing attention to the current trajectory, hopefully we’ll be in a better position to work with our colleagues in the public and private sector and make better-informed choices about how to proceed.”

    How to decarbonize the world, at scale

    The world in recent years has largely been moving on from debates about the need to curb carbon emissions and focusing more on action — the development, implementation, and deployment of the technological, economic, and policy measures to spur the scale of reductions needed by mid-century. That was the message Robert Stoner, the interim director of the MIT Energy Initiative (MITEI), gave in his opening remarks at the 2023 MITEI Annual Research Conference.

    Attendees at the two-day conference included faculty members, researchers, industry and financial leaders, government officials, and students, as well as more than 50 online participants from around the world.

    “We are at an extraordinary inflection point. We have this narrow window in time to mitigate the worst effects of climate change by transforming our entire energy system and economy,” said Jonah Wagner, the chief strategist of the U.S. Department of Energy’s (DOE) Loan Programs Office, in one of the conference’s keynote speeches.

    Yet the solutions exist, he said. “Most of the technologies that we need to deploy to stay close to the international target of 1.5 degrees Celsius warming are proven and ready to go,” he said. “We have over 80 percent of the technologies we will need through 2030, and at least half of the technologies we will need through 2050.”

    For example, Wagner pointed to the newly commissioned advanced nuclear power plant near Augusta, Georgia — the first new nuclear reactor built in the United States in a generation, partly funded through DOE loans. “It will be the largest source of clean power in America,” he said. Though implementing all the needed technologies in the United States through mid-century will cost an estimated $10 trillion, or about $300 billion a year, most of that money will come from the private sector, he said.

    As the United States faces what he describes as “a tsunami of distributed energy production,” one key example of the strategy that’s needed going forward, he said, is encouraging the development of virtual power plants (VPPs). The U.S. power grid is growing, he said, and will add 200 gigawatts of peak demand by 2030. But rather than building new, large power plants to satisfy that need, much of the increase can be accommodated by VPPs, he said — which are “aggregations of distributed energy resources like rooftop solar with batteries, like electric vehicles (EVs) and chargers, like smart appliances, commercial and industrial loads on the grid that can be used together to help balance supply and demand just like a traditional power plant.” For example, by shifting the time of demand for some applications where the timing is not critical, such as recharging EVs late at night instead of right after getting home from work when demand may be peaking, the need for extra peak power can be alleviated.
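    The peak-shaving logic behind the EV example can be sketched numerically. The 24-hour demand profile and the 8 GW of flexible charging below are made-up numbers for illustration, not figures from Wagner’s talk.

```python
# Toy illustration of the VPP load-shifting idea: moving flexible EV
# charging off the evening peak lowers the maximum demand the grid must
# be built to serve. The demand profile and EV load are invented.

base = [30, 28, 27, 27, 28, 32, 40, 48, 50, 49, 48, 47,
        47, 48, 50, 55, 62, 70, 75, 72, 60, 50, 40, 34]  # GW, hours 0-23

def peak_demand(profile, ev_gw, charging_hours):
    """Peak grid demand when ev_gw of charging is added at the given hours."""
    demand = list(profile)
    for hour in charging_hours:
        demand[hour] += ev_gw
    return max(demand)

evening = peak_demand(base, 8, [18, 19, 20])   # charging right after work
overnight = peak_demand(base, 8, [1, 2, 3])    # shifted overnight by a VPP
print(evening, overnight)  # 83 75 -- shifting avoids 8 GW of peak capacity
```

    The aggregate effect is what lets a VPP substitute for a conventional peaking plant: the same energy is delivered, but the peak the grid must accommodate never grows.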

    Such programs “offer a broad range of benefits,” including affordability, reliability and resilience, decarbonization, and emissions reductions. But implementing such systems on a wide scale requires some up-front help, he explained. Payment for consumers to enroll in programs that allow such time adjustments “is the majority of the cost” of establishing VPPs, he said, “and that means most of the money spent on VPPs goes back into the pockets of American consumers.” But to make that happen, there is a need for standardization of VPP operations “so that we are not recreating the wheel every single time we deploy a pilot or an effort with a utility.”

    The conference’s other keynote speaker, Anne White, the vice provost and associate vice president for research administration at MIT, cited devastating recent floods, wildfires, and many other extreme weather-related crises around the world that have been exacerbated by climate change. “We saw in myriad ways that energy concerns and climate concerns are one and the same,” she said. “So, we must urgently develop and scale low-carbon and zero-carbon solutions to prevent future warming. And we must do this with a practical, systems-based approach that considers efficiency, affordability, equity, and sustainability for how the world will meet its energy needs.”

    White added that at MIT, “we are mobilizing everything.” People at MIT feel a strong sense of responsibility for dealing with these global issues, she said, “and I think it’s because we believe we have tools that can really make a difference.”

    Among the specific promising technologies that have sprung from MIT’s labs, she pointed out, is the rapid development of fusion technology that led to MIT spinoff company Commonwealth Fusion Systems, which aims to build a demonstration unit of a practical fusion power reactor by the decade’s end. That’s an outcome of decades of research, she emphasized — the kinds of early-stage risky work that only academic labs, with help from government grants, can carry out.

    For example, she pointed to the more than 200 projects to which MITEI has provided two-year seed grants of $150,000 each, totaling over $28 million to date. Such early support is “a key part of producing the kind of transformative innovation we know we all need.” In addition, MIT’s The Engine has also helped launch not only Commonwealth Fusion Systems, but also Form Energy, a company building a plant in West Virginia to manufacture advanced iron-air batteries for renewable energy storage, and many others.

    Following that theme of supporting early innovation, the conference featured two panels that served to highlight the work of students and alumni and their energy-related startup companies. First, a startup showcase, moderated by Catarina Madeira, the director of MIT’s Startup Exchange, featured presentations about eight recent spinoff companies that are developing cutting-edge technologies that emerged from MIT research. These included:

    AeroShield, developing a new kind of highly insulated window using a unique aerogel material;
    Sublime, which is developing a low-emissions concrete;
    Found Energy, developing a way to use recycled aluminum as a fuel;
    Veir, developing superconducting power lines;
    Emvolom, developing inexpensive green fuels from waste gases;
    Boston Metal, developing low-emissions production processes for steel and other metals;
    Transaera, with a new kind of efficient air conditioning; and
    Carbon Recycling International, producing cheap hydrogen fuel and syngas.
    Later in the conference, a “student slam competition” featured presentations by 11 students who described the results of energy projects they had worked on over the summer. The projects ranged from analyzing opposition to wind farms in Maine to allocating EV charging stations, optimizing bioenergy production, recycling lithium from batteries, encouraging adoption of heat pumps, and analyzing conflicts over energy project siting. Attendees voted on the quality of the student presentations, and electrical engineering and computer science student Tori Hagenlocker was declared the first-place winner for her talk on heat pump adoption.

    Students were also featured in a first-time addition to the conference: a panel discussion among five current or recent students, giving their perspective on today’s energy issues and priorities, and how they are working to make a difference. Andres Alvarez, a recent graduate in nuclear engineering, described his work with a startup focused on identifying and supporting early-stage ideas that have potential. Graduate student Dyanna Jaye of urban studies and planning spoke about her work helping to launch a group called the Sunrise Movement to try to make climate change a top national priority, and her work helping to develop the Green New Deal.

    Peter Scott, a graduate student in mechanical engineering who is studying green hydrogen production, spoke of the need for a “very drastic and rapid phaseout of current, existing fossil fuels” and a halt on developing new sources. Amar Dayal, an MBA candidate at the MIT Sloan School of Management, talked about the interplay between technology and policy, and the crucial role that legislation like the Inflation Reduction Act can have in enabling new energy technology to make the climb to commercialization. And Shreyaa Raghavan, a doctoral student in the Institute of Data, Systems, and Society, talked about the importance of multidisciplinary approaches to climate issues, including the important role of computer science. She added that MIT does well on this compared to other institutions, and “sustainability and decarbonization is a pillar in a lot of the different departments and programs that exist here.”

    Some recent recipients of MITEI’s Seed Fund grants reported on their progress in a panel discussion moderated by MITEI Executive Director Martha Broad. Seed grant recipient Ariel Furst, a professor of chemical engineering, pointed out that access to electricity is very much concentrated in the global North and that, overall, one in 10 people worldwide lacks access to electricity and some 2.5 billion people “rely on dirty fuels to heat their homes and cook their food,” with impacts on both health and climate. The solution her project is developing involves using DNA molecules combined with catalysts to passively convert captured carbon dioxide into ethylene, a widely used chemical feedstock and fuel. Kerri Cahoy, a professor of aeronautics and astronautics, described her work on a system for monitoring methane emissions and power-line conditions by using satellite-based sensors. She and her team found that power lines often begin emitting detectable broadband radio frequencies long before they actually fail in a way that could spark fires.

    Admir Masic, an associate professor of civil and environmental engineering, described work on mining the ocean for minerals such as magnesium hydroxide to be used for carbon capture. The process can turn carbon dioxide into solid material that is stable over geological times and potentially usable as a construction material. Kripa Varanasi, a professor of mechanical engineering, said that over the years MITEI seed funding helped some of his projects that “went on to become startup companies, and some of them are thriving.” He described ongoing work on a new kind of electrolyzer for green hydrogen production. He developed a system using bubble-attracting surfaces to increase the efficiency of bioreactors that generate hydrogen fuel.

    A series of panel discussions over the two days covered a range of topics related to technologies and policies that could make a difference in combating climate change. On the technological side, one panel led by Randall Field, the executive director of MITEI’s Future Energy Systems Center, looked at large, hard-to-decarbonize industrial processes. Antoine Allanore, a professor of metallurgy, described progress in developing innovative processes for producing iron and steel, among the world’s most used commodities, in a way that drastically reduces greenhouse gas emissions. Greg Wilson of JERA Americas described the potential for ammonia produced from renewable sources to substitute for natural gas in power plants, greatly reducing emissions. Yet-Ming Chiang, a professor in materials science and engineering, described ways to decarbonize cement production using a novel low-temperature process. And Guiyan Zang, a research scientist at MITEI, spoke of efforts to reduce the carbon footprint of producing ethylene, a major industrial chemical, by using an electrochemical process.

    Another panel, led by Jacopo Buongiorno, professor of nuclear science and engineering, explored the brightening future for expansion of nuclear power, including new, small, modular reactors that are finally emerging into commercial demonstration. “There is for the first time truly here in the U.S. in at least a decade-and-a-half, a lot of excitement, a lot of attention towards nuclear,” Buongiorno said. Nuclear power currently produces 45 to 50 percent of the nation’s carbon-free electricity, the panelists said, and with the first new nuclear power plant in decades now in operation, the stage is set for significant growth.

    Carbon capture and sequestration was the subject of a panel led by David Babson, the executive director of MIT’s Climate Grand Challenges program. MIT professors Betar Gallant and Kripa Varanasi and industry representatives Elisabeth Birkeland from Equinor and Luc Huyse from Chevron Technology Ventures described significant progress in various approaches to recovering carbon dioxide from power plant emissions, from the air, and from the ocean, and converting it into fuels, construction materials, or other valuable commodities.

    Some panel discussions also addressed the financial and policy side of the climate issue. A panel on geopolitical implications of the energy transition was moderated by MITEI Deputy Director of Policy Christopher Knittel, who said “energy has always been synonymous with geopolitics.” He said that as concerns shift from where to find the oil and gas to where is the cobalt and nickel and other elements that will be needed, “not only are we worried about where the deposits of natural resources are, but we’re going to be more and more worried about how governments are incentivizing the transition” to developing this new mix of natural resources. Panelist Suzanne Berger, an Institute professor, said “we’re now at a moment of unique openness and opportunity for creating a new American production system,” one that is much more efficient and less carbon-producing.

    One panel dealt with the investor’s perspective on the possibilities and pitfalls of emerging energy technologies. Moderator Jacqueline Pless, an assistant professor at the MIT Sloan School of Management, said “there’s a lot of momentum now in this space. It’s a really ripe time for investing,” but the risks are real. “Tons of investment is needed in some very big and uncertain technologies.”

    The role that large, established companies can play in leading a transition to cleaner energy was addressed by another panel. Moderator J.J. Laukatis, MITEI’s director of member services, said that “the scale of this transformation is massive, and it will also be very different from anything we’ve seen in the past. We’re going to have to scale up complex new technologies and systems across the board, from hydrogen to EVs to the electrical grid, at rates we haven’t done before.” And doing so will require a concerted effort that includes industry as well as government and academia.