More stories

  • Nuno Loureiro named director of MIT’s Plasma Science and Fusion Center

    Nuno Loureiro, professor of nuclear science and engineering and of physics, has been appointed the new director of the MIT Plasma Science and Fusion Center, effective May 1.

    Loureiro is taking the helm of one of MIT’s largest labs: more than 250 full-time researchers, staff members, and students work and study in seven buildings with 250,000 square feet of lab space. A theoretical physicist and fusion scientist, Loureiro joined MIT as a faculty member in 2016 and was appointed deputy director of the Plasma Science and Fusion Center (PSFC) in 2022. He succeeds Dennis Whyte, who stepped down at the end of 2023 to return to teaching and research.

    Stepping into his new role as director, Loureiro says, “The PSFC has an impressive tradition of discovery and leadership in plasma and fusion science and engineering. Becoming director of the PSFC is an incredible opportunity to shape the future of these fields. We have a world-class team, and it’s an honor to be chosen as its leader.”

    Loureiro’s own research ranges widely. He is recognized for advancing the understanding of multiple aspects of plasma behavior, particularly turbulence and the physics underpinning solar flares and other astronomical phenomena. In the fusion domain, his work enables the design of fusion devices that can more efficiently control and harness the energy of fusing plasmas, bringing the dream of clean, near-limitless fusion power that much closer.

    Plasma physics is foundational to advancing fusion science, a fact Loureiro has embraced and one that is relevant as he considers the direction of the PSFC’s multidisciplinary research. “But plasma physics is only one aspect of our focus. Building a scientific agenda that continues and expands on the PSFC’s history of innovation in all aspects of fusion science and engineering is vital, and a key facet of that work is facilitating our researchers’ efforts to produce the breakthroughs that are necessary for the realization of fusion energy.”

    As the climate crisis accelerates, fusion power continues to grow in appeal: It produces no carbon emissions, its fuel is plentiful, and dangerous “meltdowns” are impossible. The sooner fusion power is commercially available, the greater its impact can be on reducing greenhouse gas emissions and meeting global climate goals. While technical challenges remain, “the PSFC is well poised to meet them, and continue to show leadership. We are a mission-driven lab, and our students and staff are incredibly motivated,” Loureiro comments.

    “As MIT continues to lead the way toward the delivery of clean fusion power onto the grid, I have no doubt that Nuno is the right person to step into this key position at this critical time,” says Maria T. Zuber, MIT’s presidential advisor for science and technology policy. “I look forward to the steady advance of plasma physics and fusion science at MIT under Nuno’s leadership.”

    Over the last decade, there have been massive leaps forward in the field of fusion energy, driven in part by innovations like the high-temperature superconducting magnets developed at the PSFC. Loureiro believes the momentum will continue: “The next few years are certain to be an exciting time for us, and for fusion as a whole. It’s the dawn of a new era with burning plasma experiments” — a reference to the collaboration between the PSFC and Commonwealth Fusion Systems, a startup company spun out of the PSFC, to build SPARC, a fusion device that is slated to turn on in 2026 and produce a burning plasma that yields more energy than it consumes. “It’s going to be a watershed moment,” says Loureiro.

    He continues, “In addition, we have strong connections to inertial confinement fusion experiments, including those at Lawrence Livermore National Lab, and we’re looking forward to expanding our research into stellarators, which are another kind of magnetic fusion device.”

    Over recent years, the PSFC has significantly increased its collaboration with industrial partners such as Eni, IBM, and others. Loureiro sees great value in this: “These collaborations are mutually beneficial: They allow us to grow our research portfolio while advancing companies’ R&D efforts. It’s very dynamic and exciting.”

    Loureiro’s directorship begins as the PSFC is launching key tech development projects like LIBRA, a “blanket” of molten salt that can be wrapped around fusion vessels and perform double duty as a neutron energy absorber and a breeder for tritium (the fuel for fusion). Researchers at the PSFC have also developed a way to rapidly test the durability of materials being considered for use in a fusion power plant environment, and are now creating an experiment that will use a powerful microwave source called a gyrotron to irradiate candidate materials.

    Interest in fusion is at an all-time high; the demand for researchers and engineers, particularly in the nascent commercial fusion industry, is reflected in the record number of graduate students studying at the PSFC — more than 90 across seven affiliated MIT departments. The PSFC’s classrooms are full, and Loureiro notes a palpable sense of excitement. “Students are our greatest strength,” says Loureiro. “They come here to do world-class research but also to grow as individuals, and I want to give them a great place to do that. Supporting those experiences, making sure they can be as successful as possible, is one of my top priorities.” Loureiro plans to continue teaching and advising students after his appointment begins.

    MIT President Sally Kornbluth’s recently announced Climate Project is a clarion call for Loureiro: “It’s not hyperbole to say MIT is where you go to find solutions to humanity’s biggest problems,” he says. “Fusion is a hard problem, but it can be solved with resolve and ingenuity — characteristics that define MIT. Fusion energy will change the course of human history. It’s both humbling and exciting to be leading a research center that will play a key role in enabling that change.”

  • Offering clean energy around the clock

    As remarkable as the rise of solar and wind farms has been over the last 20 years, achieving complete decarbonization is going to require a host of complementary technologies. That’s because renewables offer only intermittent power. They also can’t directly provide the high temperatures necessary for many industrial processes.

    Now, 247Solar is building high-temperature concentrated solar power systems that use overnight thermal energy storage to provide round-the-clock power and industrial-grade heat.

    The company’s modular systems can be used as standalone microgrids for communities or to provide power in remote places like mines and farms. They can also be used in conjunction with wind and conventional solar farms, giving customers 24/7 power from renewables and allowing them to offset use of the grid.

    “One of my motivations for working on this system was trying to solve the problem of intermittency,” 247Solar CEO Bruce Anderson ’69, SM ’73, says. “I just couldn’t see how we could get to zero emissions with solar photovoltaics (PV) and wind. Even with PV, wind, and batteries, we can’t get there, because there’s always bad weather, and current batteries aren’t economical over long periods. You have to have a solution that operates 24 hours a day.”

    The company’s system is inspired by the design of a high-temperature heat exchanger by the late MIT Professor Emeritus David Gordon Wilson, who co-founded the company with Anderson. The company integrates that heat exchanger into what Anderson describes as a conventional, jet-engine-like turbine, enabling the turbine to produce power by circulating ambient pressure hot air with no combustion or emissions — what the company calls a first in the industry.

    Here’s how the system works: Each 247Solar system uses a field of sun-tracking mirrors called heliostats to reflect sunlight to the top of a central tower. The tower features a proprietary solar receiver that heats air to around 1,000 Celsius at atmospheric pressure. The air is then used to drive 247Solar’s turbines and generate 400 kilowatts of electricity and 600 kilowatts of heat. Some of the hot air is also routed through a long-duration thermal energy storage system, where it heats solid materials that retain the heat. The stored heat is then used to drive the turbines when the sun stops shining.
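    As a rough illustration of the paragraph above, here is a back-of-the-envelope sketch of how much overnight thermal storage such a module would need. The 400 kW electric and 600 kW heat figures come from the article; the turbine efficiency, the length of the night, and the assumption that the heat stream draws directly from storage are hypothetical.

```python
# Back-of-the-envelope sizing sketch for a 247Solar-style module, using only
# figures quoted in the article (400 kW electric + 600 kW heat output).
# The turbine efficiency and night length are illustrative assumptions,
# not company specifications.

def overnight_storage_mwh_th(elec_kw=400.0, heat_kw=600.0,
                             dark_hours=12.0, turbine_eff=0.40):
    """Thermal storage (MWh_th) needed to keep both outputs running overnight.

    Electricity requires elec_kw / turbine_eff of thermal input; the process
    heat stream is assumed to be drawn from storage directly.
    """
    thermal_draw_kw = elec_kw / turbine_eff + heat_kw
    return thermal_draw_kw * dark_hours / 1000.0

print(overnight_storage_mwh_th())  # ~19.2 MWh_th for a 12-hour night
```

    Under these assumptions the turbine draws 1,000 kW of thermal power for its 400 kW electric output, so a 12-hour night calls for roughly 19 MWh of stored heat.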

    “We offer round-the-clock electricity, but we also offer a combined heat and power option, with the ability to take heat up to 970 Celsius for use in industrial processes,” Anderson says. “It’s a very flexible system.”

    The company’s first deployment will be with a large utility in India. If that goes well, 247Solar hopes to scale up rapidly with other utilities, corporations, and communities around the globe.

    A new approach to concentrated solar

    Anderson kept in touch with his MIT network after graduating in 1973. He served as the director of MIT’s Industrial Liaison Program (ILP) between 1996 and 2000 and was elected as an alumni member of the MIT Corporation in 2013. The ILP connects companies with MIT’s network of students, faculty, and alumni to facilitate innovation, and the experience changed the course of Anderson’s career.

    “That was an extremely fascinating job, and from it two things happened,” Anderson says. “One is that I realized I was really an entrepreneur and was not well-suited to the university environment, and the other is that I was reminded of the countless amazing innovations coming out of MIT.”

    After leaving as director, Anderson began a startup incubator where he worked with MIT professors to start companies. Eventually, one of those professors was Wilson, who had invented the new heat exchanger and a ceramic turbine. Anderson and Wilson ended up putting together a small team to commercialize the technology in the early 2000s.

    Anderson had done his MIT master’s thesis on solar energy in the 1970s, and the team realized the heat exchanger made possible a novel approach to concentrated solar power. In 2010, they received a $6 million development grant from the U.S. Department of Energy. But their first solar receiver was damaged during shipping to a national laboratory for testing, and the company ran out of money.

    It wasn’t until 2015 that Anderson was able to raise money to get the company back off the ground. By that time, a new high-temperature metal alloy had been developed that Anderson swapped out for Wilson’s ceramic heat exchanger.

    The Covid-19 pandemic further slowed 247’s plans to build a demonstration facility at its test site in Arizona, but strong customer interest has kept the company busy. Concentrated solar power doesn’t work everywhere — Arizona’s clear sunshine is a better fit than Florida’s hazy skies, for example — but Anderson is currently in talks with communities in parts of the U.S., India, Africa, and Australia where the technology would be a good fit.

    These days, the company is increasingly proposing combining its systems with traditional solar PV, which lets customers reap the benefits of low-cost solar electricity during the day while using 247’s energy at night.

    “That way we can get at least 24 hours of energy, if not more, from a sunny day,” Anderson says. “We’re really moving toward these hybrid systems, which work like a Prius: Sometimes you’re using one source of energy, sometimes you’re using the other.”

    The company also sells its HeatStorE thermal batteries as standalone systems. Instead of being heated by the solar system, the thermal storage is heated by circulating air through an electric coil that’s been heated by electricity, either from the grid, standalone PV, or wind. The heat can be stored for nine hours or more on a single charge and then dispatched as electricity plus industrial process heat at 250 Celsius, or as heat only, up to 970 Celsius.
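    The charge-and-dispatch behavior described above can be sketched as a simple state-of-charge model. The capacity sizing, the resistive-heating efficiency, and the class interface are illustrative assumptions; only the 9-hour duration and the heat figures come from the article.

```python
# Minimal state-of-charge sketch of a standalone thermal battery along the
# lines of the HeatStorE units described above. Efficiency and capacity
# values are illustrative assumptions, not product specifications.

class ThermalBattery:
    def __init__(self, capacity_kwh_th, charge_eff=0.95):
        self.capacity = capacity_kwh_th  # kWh of storable heat
        self.charge_eff = charge_eff     # fraction of input electricity retained as heat
        self.stored = 0.0

    def charge(self, elec_kwh):
        """Resistively heat the storage medium from grid, PV, or wind electricity."""
        self.stored = min(self.capacity, self.stored + elec_kwh * self.charge_eff)

    def dispatch_heat(self, kwh_th):
        """Draw process heat; returns what was actually delivered."""
        delivered = min(self.stored, kwh_th)
        self.stored -= delivered
        return delivered

# Size for 9 hours of a hypothetical 600 kW_th process-heat load:
batt = ThermalBattery(capacity_kwh_th=9 * 600)
batt.charge(6000)                 # fills to the 5,400 kWh_th cap after losses
print(batt.dispatch_heat(600.0))  # one hour of heat -> 600.0
```

    A real dispatch controller would also track the delivery temperature (250 C with cogenerated electricity, or up to 970 C heat-only, per the article), which this sketch leaves out.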

    Anderson says 247’s thermal battery is about one-seventh the cost of lithium-ion batteries per kilowatt hour produced.

    Scaling a new model

    The company is keeping its system flexible for whatever path customers want to take to complete decarbonization.

    In addition to 247’s India project, the company is in advanced talks with off-grid communities in the United States and Egypt, mining operators around the world, and the government of a small country in Africa. Anderson says the company’s next customer will likely be an off-grid community in the U.S. that currently relies on diesel generators for power.

    The company has also partnered with a financial company that will allow it to access capital to fund its own projects and sell clean energy directly to customers, which Anderson says will help 247 grow faster than relying solely on selling entire systems to each customer.

    As it works to scale up its deployments, Anderson believes 247 offers a solution to help customers respond to increasing pressure from governments as well as community members.

    “Emerging economies in places like Africa don’t have any alternative to fossil fuels if they want 24/7 electricity,” Anderson says. “Our owning and operating costs are less than half that of diesel gen-sets. Customers today really want to stop producing emissions if they can, so you’ve got villages, mines, industries, and entire countries where the people inside are saying, ‘We can’t burn diesel anymore.’”

  • New major crosses disciplines to address climate change

    Lauren Aguilar knew she wanted to study energy systems at MIT, but before Course 1-12 (Climate System Science and Engineering) became a new undergraduate major, she didn’t see an obvious path to study the systems aspects of energy, policy, and climate associated with the energy transition.

    Aguilar was drawn to the new major, jointly launched by the departments of Civil and Environmental Engineering (CEE) and Earth, Atmospheric and Planetary Sciences (EAPS) in 2023, because she could take engineering systems classes while gaining knowledge in climate science.

    “Having climate knowledge enriches my understanding of how to build reliable and resilient energy systems for climate change mitigation. Understanding upon what scale we can forecast and predict climate change is crucial to build the appropriate level of energy infrastructure,” says Aguilar.

    The interdisciplinary structure of the 1-12 major has students engaging with and learning from professors in different disciplines across the Institute. The blended major was designed to provide a foundational understanding of the Earth system and engineering principles — as well as an understanding of human and institutional behavior as it relates to the climate challenge. Students learn the fundamental sciences through subjects like an atmospheric chemistry class focused on the global carbon cycle or a physics class on low-carbon energy systems. The major also covers topics in data science and machine learning as they relate to forecasting climate risks and building resilience, in addition to policy, economics, and environmental justice studies.

    Junior Ananda Figueiredo was one of the first students to declare the 1-12 major. Her decision to change majors stemmed from a motivation to improve people’s lives, especially when it comes to equality. “I like to look at things from a systems perspective, and climate change is such a complicated issue connected to many different pieces of our society,” says Figueiredo.

    A multifaceted field of study

    The 1-12 major prepares students with the necessary foundational expertise across disciplines to confront climate change. Andrew Babbin, an academic advisor in the new degree program and the Cecil and Ida Green Career Development Associate Professor in EAPS, says the new major harnesses rigorous training encompassing science, engineering, and policy to design and execute a way forward for society.

    Within its first year, Course 1-12 has attracted students with a diverse set of interests, ranging from machine learning for sustainability to nature-based solutions for carbon management to developing the next renewable energy technology and integrating it into the power system.

    Academic advisor Michael Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering, says the best part of this degree is the students, and the enthusiasm and optimism they bring to the climate challenge.

    “We have students seeking to impact policy and students double-majoring in computer science. For this generation, climate change is a challenge for today, not for the future. Their actions inside and outside the classroom speak to the urgency of the challenge and the promise that we can solve it,” Howland says.

    The degree program also leaves plenty of space for students to develop and follow their interests. Sophomore Katherine Kempff began this spring semester as a 1-12 major interested in sustainability and renewable energy. Kempff was worried she wouldn’t be able to finish 1-12 once she made the switch to a different set of classes, but Howland assured her there would be no problems, based on the structure of 1-12.

    “I really like how flexible 1-12 is. There’s a lot of classes that satisfy the requirements, and you are not pigeonholed. I feel like I’m going to be able to do what I’m interested in, rather than just following a set path of a major,” says Kempff.

    Kempff is leveraging the skills she developed this semester and exploring different career interests. She is interviewing for sustainability and energy-sector internships in Boston and at MIT this summer, and is particularly interested in assisting MIT in meeting its new sustainability goals.

    Engineering a sustainable future

    The new major dovetails with MIT’s commitment to addressing climate change and its steps to prioritize and enhance climate education. As the Institute continues making strides to accelerate solutions, students can play a leading role in changing the future.

    “Climate awareness is critical to all MIT students, most of whom will face the consequences of the projection models for the end of the century,” says Babbin. “One-12 will be a focal point of the climate education mission to train the brightest and most creative students to engineer a better world and understand the complex science necessary to design and verify any solutions they invent.”

    Justin Cole, who transferred to MIT in January from the University of Colorado, served in the U.S. Air Force for nine years. Over the course of his service, he had a front row seat to the changing climate. From helping with the wildfire cleanup in Black Forest, Colorado — after the state’s most destructive fire at the time — to witnessing two category 5 typhoons in Japan in 2018, Cole’s experiences of these natural disasters impressed upon him that climate security was a prerequisite to international security. 

    Cole was recently accepted into the MIT Energy and Climate Club Launchpad initiative where he will work to solve real-world climate and energy problems with professionals in industry.

    “All of the dots are connecting so far in my classes, and all the hopes that I have for studying the climate crisis and the solutions to it at MIT are coming true,” says Cole.

    As the field grows, so does the demand for scientists and engineers who have both deep knowledge of environmental and climate systems and expertise in methods for climate change mitigation.

    “Climate science must be coupled with climate solutions. As we experience worsening climate change, the environmental system will increasingly behave in new ways that we haven’t seen in the past,” says Howland. “Solutions to climate change must go beyond good engineering of small-scale components. We need to ensure that our system-scale solutions are maximally effective in reducing climate change, but are also resilient to climate change. And there is no time to waste,” he says.

  • Extracting hydrogen from rocks

    It’s commonly thought that the most abundant element in the universe, hydrogen, exists mainly alongside other elements — with oxygen in water, for example, and with carbon in methane. But naturally occurring underground pockets of pure hydrogen are punching holes in that notion — and generating attention as a potentially unlimited source of carbon-free power.

    One interested party is the U.S. Department of Energy, which last month awarded $20 million in research grants to 18 teams from laboratories, universities, and private companies to develop technologies that can lead to cheap, clean fuel from the subsurface. Geologic hydrogen, as it’s known, is produced when water reacts with iron-rich rocks, causing the iron to oxidize.

    One of the grant recipients, MIT Assistant Professor Iwnetim Abate’s research group, will use its $1.3 million grant to determine the ideal conditions for producing hydrogen underground — considering factors such as catalysts to initiate the chemical reaction, temperature, pressure, and pH levels. The goal is to improve efficiency for large-scale production, meeting global energy needs at a competitive cost.

    The U.S. Geological Survey estimates there are potentially billions of tons of geologic hydrogen buried in the Earth’s crust. Accumulations have been discovered worldwide, and a slew of startups are searching for extractable deposits. Abate is looking to jump-start the natural hydrogen production process, implementing “proactive” approaches that involve stimulating production and harvesting the gas.

    “We aim to optimize the reaction parameters to make the reaction faster and produce hydrogen in an economically feasible manner,” says Abate, the Chipman Development Professor in the Department of Materials Science and Engineering (DMSE). Abate’s research centers on designing materials and technologies for the renewable energy transition, including next-generation batteries and novel chemical methods for energy storage.

    Sparking innovation

    Interest in geologic hydrogen is growing at a time when governments worldwide are seeking carbon-free energy alternatives to oil and gas. In December, French President Emmanuel Macron said his government would provide funding to explore natural hydrogen. And in February, government and private-sector witnesses briefed U.S. lawmakers on opportunities to extract hydrogen from the ground.

    Today, commercial hydrogen is manufactured at $2 a kilogram, mostly for fertilizer and chemical and steel production, but most methods involve burning fossil fuels, which release Earth-heating carbon. “Green hydrogen,” produced with renewable energy, is promising, but at $7 per kilogram, it’s expensive. “If you get hydrogen at a dollar a kilo, it’s competitive with natural gas on an energy-price basis,” says Douglas Wicks, a program director at the Advanced Research Projects Agency – Energy (ARPA-E), the Department of Energy organization leading the geologic hydrogen grant program.

    Recipients of the ARPA-E grants include Colorado School of Mines, Texas Tech University, and Los Alamos National Laboratory, plus private companies including Koloma, a hydrogen production startup that has received funding from Amazon and Bill Gates. The projects themselves are diverse, ranging from applying industrial oil and gas methods for hydrogen production and extraction to developing models to understand hydrogen formation in rocks. The purpose: to address questions in what Wicks calls a “total white space.”

    “In geologic hydrogen, we don’t know how we can accelerate the production of it, because it’s a chemical reaction, nor do we really understand how to engineer the subsurface so that we can safely extract it,” Wicks says. “We’re trying to bring in the best skills of each of the different groups to work on this under the idea that the ensemble should be able to give us good answers in a fairly rapid timeframe.”

    Geochemist Viacheslav Zgonnik, one of the foremost experts in the natural hydrogen field, agrees that the list of unknowns is long, as is the road to the first commercial projects. But he says efforts to stimulate hydrogen production — to harness the natural reaction between water and rock — present “tremendous potential.” “The idea is to find ways we can accelerate that reaction and control it so we can produce hydrogen on demand in specific places,” says Zgonnik, CEO and founder of Natural Hydrogen Energy, a Denver-based startup that has mineral leases for exploratory drilling in the United States. “If we can achieve that goal, it means that we can potentially replace fossil fuels with stimulated hydrogen.”
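    A quick way to see the price comparison quoted above is to put each hydrogen price on a per-energy basis. The conversion below uses hydrogen’s lower heating value of roughly 120 MJ/kg, a standard figure that is not stated in the article.

```python
# Energy-price view of the hydrogen prices quoted above ($1, $2, and $7 per
# kilogram). The ~120 MJ/kg lower heating value is a standard reference
# figure, not a number from the article.

H2_LHV_MJ_PER_KG = 120.0  # lower heating value of hydrogen

def usd_per_gj(usd_per_kg):
    """Convert a $/kg hydrogen price to $/GJ of delivered energy."""
    return usd_per_kg / (H2_LHV_MJ_PER_KG / 1000.0)

for label, price in [("geologic target", 1.0),
                     ("fossil-derived", 2.0),
                     ("green hydrogen", 7.0)]:
    print(f"{label}: ${usd_per_gj(price):.2f}/GJ")
```

    At $1/kg, hydrogen works out to roughly $8/GJ, which is the sense in which Wicks calls it competitive with natural gas on an energy-price basis; at $7/kg the same energy costs about seven times as much.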

    “A full-circle moment”

    For Abate, the connection to the project is personal. When he was a child in his hometown in Ethiopia, power outages were a usual occurrence — the lights would be out three, maybe four days a week. Flickering candles or pollutant-emitting kerosene lamps were often the only source of light for doing homework at night. “And for the household, we had to use wood and charcoal for chores such as cooking,” says Abate. “That was my story all the way until the end of high school and before I came to the U.S. for college.”

    In 1987, well-diggers drilling for water in Mali in West Africa uncovered a natural hydrogen deposit, causing an explosion. Decades later, Malian entrepreneur Aliou Diallo and his Canadian oil and gas company tapped the well and used an engine to burn hydrogen and power electricity in the nearby village. Ditching oil and gas, Diallo launched Hydroma, the world’s first hydrogen exploration enterprise. The company is drilling wells near the original site that have yielded high concentrations of the gas.

    “So, what used to be known as an energy-poor continent now is generating hope for the future of the world,” Abate says. “Learning about that was a full-circle moment for me. Of course, the problem is global; the solution is global. But then the connection with my personal journey, plus the solution coming from my home continent, makes me personally connected to the problem and to the solution.”

    Experiments that scale

    Abate and researchers in his lab are formulating a recipe for a fluid that will induce the chemical reaction that triggers hydrogen production in rocks. The main ingredient is water, and the team is testing “simple” materials for catalysts that will speed up the reaction and in turn increase the amount of hydrogen produced, says postdoc Yifan Gao.

    “Some catalysts are very costly and hard to produce, requiring complex production or preparation,” Gao says. “A catalyst that’s inexpensive and abundant will allow us to enhance the production rate — that way, we produce it at an economically feasible rate, but also with an economically feasible yield.”

    The iron-rich rocks in which the chemical reaction happens can be found across the United States and the world. To optimize the reaction across a diversity of geological compositions and environments, Abate and Gao are developing what they call a high-throughput system, consisting of artificial intelligence software and robotics, to test different catalyst mixtures and simulate what would happen when applied to rocks from various regions, under different external conditions like temperature and pressure. “And from that we measure how much hydrogen we are producing for each possible combination,” Abate says. “Then the AI will learn from the experiments and suggest to us, ‘Based on what I’ve learned and based on the literature, I suggest you test this composition of catalyst material for this rock.’” The team is writing a paper on its project and aims to publish its findings in the coming months.

    The next milestone for the project, after developing the catalyst recipe, is designing a reactor that will serve two purposes. First, fitted with technologies such as Raman spectroscopy, it will allow researchers to identify and optimize the chemical conditions that lead to improved rates and yields of hydrogen production. The lab-scale device will also inform the design of a real-world reactor that can accelerate hydrogen production in the field. “That would be a plant-scale reactor that would be implanted into the subsurface,” Abate says.

    The cross-disciplinary project is also tapping the expertise of Yang Shao-Horn, of MIT’s Department of Mechanical Engineering and DMSE, for computational analysis of the catalyst, and Esteban Gazel, a Cornell University scientist who will lend his expertise in geology and geochemistry. He’ll focus on understanding the iron-rich ultramafic rock formations across the United States and the globe and how they react with water.

    For Wicks at ARPA-E, the questions Abate and the other grant recipients are asking are just the first, critical steps in uncharted energy territory. “If we can understand how to stimulate these rocks into generating hydrogen, safely getting it up, it really unleashes the potential energy source,” he says. Then the emerging industry will look to oil and gas for the drilling, piping, and gas extraction know-how. “As I like to say, this is enabling technology that we hope to, in a very short term, enable us to say, ‘Is there really something there?’”
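    The measure-and-suggest loop described above can be caricatured in a few lines. Everything here — the candidate list, the yield model, and the greedy selection — is a hypothetical stand-in for the lab’s robotic experiments and AI model, included only to show the shape of a high-throughput screen.

```python
# Toy sketch of a high-throughput catalyst screen: propose a condition,
# measure hydrogen yield, keep the best performer. The candidates and the
# yield function are entirely hypothetical stand-ins for the robotic
# reactor runs and AI model described in the article.

import random

random.seed(0)  # make the simulated noise reproducible

candidates = [{"catalyst": c, "temp_C": t}
              for c in ("Fe", "Ni", "none")
              for t in (60, 90, 120)]

def measure_yield(cond):
    """Hypothetical stand-in for one robotic reactor run (arbitrary units)."""
    base = {"Fe": 2.0, "Ni": 3.0, "none": 1.0}[cond["catalyst"]]
    return base * cond["temp_C"] / 100 + random.gauss(0, 0.05)

# Greedy search: run every candidate once, keep the best-performing condition.
results = [(measure_yield(c), c) for c in candidates]
best_yield, best_cond = max(results, key=lambda r: r[0])
print(best_cond)
```

    A real system would replace the greedy pass with a model that proposes new conditions from past results (the “AI will learn from the experiments” step), but the measure-then-select structure is the same.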

  • Future nuclear power reactors could rely on molten salts — but what about corrosion?

    Most discussions of how to avert climate change focus on solar and wind generation as key to the transition to a future carbon-free power system. But Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering at MIT and associate director of the MIT Plasma Science and Fusion Center (PSFC), is impatient with such talk. “We can say we should have only wind and solar someday. But we don’t have the luxury of ‘someday’ anymore, so we can’t ignore other helpful ways to combat climate change,” he says. “To me, it’s an ‘all-hands-on-deck’ thing. Solar and wind are clearly a big part of the solution. But I think that nuclear power also has a critical role to play.”

    For decades, researchers have been working on designs for both fission and fusion nuclear reactors using molten salts as fuels or coolants. While those designs promise significant safety and performance advantages, there’s a catch: Molten salt and the impurities within it often corrode metals, ultimately causing them to crack, weaken, and fail. Inside a reactor, key metal components will be exposed not only to molten salt but also simultaneously to radiation, which generally has a detrimental effect on materials, making them more brittle and prone to failure. Will irradiation make metal components inside a molten salt-cooled nuclear reactor corrode even more quickly?

    Short and Weiyue Zhou PhD ’21, a postdoc in the PSFC, have been investigating that question for eight years. Their recent experimental findings show that certain alloys will corrode more slowly when they’re irradiated — and identifying them among all the available commercial alloys can be straightforward.

    The first challenge — building a test facility

    When Short and Zhou began investigating the effect of radiation on corrosion, practically no reliable facilities existed to look at the two effects at once. The standard approach was to examine such mechanisms in sequence: first corrode, then irradiate, then examine the impact on the material. That approach greatly simplifies the task for the researchers, but with a major trade-off. “In a reactor, everything is going to be happening at the same time,” says Short. “If you separate the two processes, you’re not simulating a reactor; you’re doing some other experiment that’s not as relevant.”

    So, Short and Zhou took on the challenge of designing and building an experimental setup that could do both at once. Short credits a team at the University of Michigan for paving the way by designing a device that could accomplish that feat in water, rather than molten salts. Even so, Zhou notes, it took them three years to come up with a device that would work with molten salts. Both researchers recall failure after failure, but the persistent Zhou ultimately tried a totally new design, and it worked. Short adds that it also took them three years to precisely replicate the salt mixture used by industry — another factor critical to getting a meaningful result. The hardest part was achieving and maintaining the correct purity, which required removing critical impurities such as moisture, oxygen, and certain other metals.

    As they were developing and testing their setup, Short and Zhou obtained initial results showing that proton irradiation did not always accelerate corrosion but sometimes actually decelerated it. They and others had hypothesized that possibility, but even so, they were surprised. “We thought we must be doing something wrong,” recalls Short. “Maybe we mixed up the samples or something.” But they subsequently made similar observations for a variety of conditions, increasing their confidence that their initial observations were not outliers.

    The successful setup

    Central to their approach is the use of accelerated protons to mimic the impact of the neutrons inside a nuclear reactor. Generating neutrons would be both impractical and prohibitively expensive, and the neutrons would make everything highly radioactive, posing health risks and requiring very long times for an irradiated sample to cool down enough to be examined. Using protons would enable Short and Zhou to examine radiation-altered corrosion both rapidly and safely.

    Key to their experimental setup is a test chamber that they attach to a proton accelerator. To prepare the test chamber for an experiment, they place inside it a thin disc of the metal alloy being tested on top of a pellet of salt. During the test, the entire foil disc is exposed to a bath of molten salt. At the same time, a beam of protons bombards the sample from the side opposite the salt pellet, but the proton beam is restricted to a circle in the middle of the foil sample. “No one can argue with our results then,” says Short. “In a single experiment, the whole sample is subjected to corrosion, and only a circle in the center of the sample is simultaneously irradiated by protons. We can see the curvature of the proton beam outline in our results, so we know which region is which.”

    The results with that arrangement confirmed the researchers’ preliminary findings, supporting their controversial hypothesis that rather than accelerating corrosion, radiation would actually decelerate corrosion in some materials under some conditions. Fortunately, those just happen to be the same conditions that will be experienced by metals in molten salt-cooled reactors.

    Why is that outcome controversial? A closeup look at the corrosion process will explain. When salt corrodes metal, the salt finds atomic-level openings in the solid, seeps in, and dissolves salt-soluble atoms, pulling them out and leaving a gap in the material — a spot where the material is now weak. “Radiation adds energy to atoms, causing them to be ballistically knocked out of their positions and move very fast,” explains Short. So, it makes sense that irradiating a material would cause atoms to move into the salt more quickly, increasing the rate of corrosion. Yet in some of their tests, the researchers found the opposite to be true.

    Experiments with “model” alloys

    The researchers’ first experiments in their novel setup involved “model” alloys consisting of nickel and chromium, a simple combination that would give them a first look at the corrosion process in action. In addition, they added europium fluoride to the salt, a compound known to speed up corrosion. In our everyday world, we often think of corrosion as taking years or decades, but in the more extreme conditions of a molten salt reactor it can become noticeable in just hours. The researchers used the europium fluoride to speed up corrosion even more without changing the corrosion process. This allowed for more rapid determination of which materials, under which conditions, experienced more or less corrosion with simultaneous proton irradiation.

    The use of protons to emulate neutron damage to materials meant that the experimental setup had to be carefully designed and the operating conditions carefully selected and controlled. Protons are hydrogen nuclei, that is, hydrogen atoms stripped of their electrons, and under some conditions the hydrogen could chemically react with atoms in the sample foil, altering the corrosion response, or with ions in the salt, making the salt more corrosive. Therefore, the proton beam had to penetrate the foil sample but then stop in the salt as soon as possible. Under these conditions, the researchers found they could deliver a relatively uniform dose of radiation inside the foil layer while also minimizing chemical reactions in both the foil and the salt.

    Tests showed that a proton beam accelerated to 3 million electron-volts combined with a foil sample between 25 and 30 microns thick would work well for their nickel-chromium alloys. The temperature and duration of the exposure could be adjusted based on the corrosion susceptibility of the specific materials being tested.

    Optical images of samples examined after tests with the model alloys showed a clear boundary between the area that was exposed only to the molten salt and the area that was also exposed to the proton beam. Electron microscope images focusing on that boundary showed that the area that had been exposed only to the molten salt included dark patches where the molten salt had penetrated all the way through the foil, while the area that had also been exposed to the proton beam showed almost no such dark patches.

    To confirm that the dark patches were due to corrosion, the researchers cut through the foil sample to create cross sections. In them, they could see tunnels that the salt had dug into the sample. “For regions not under radiation, we see that the salt tunnels link the one side of the sample to the other side,” says Zhou. “For regions under radiation, we see that the salt tunnels stop more or less halfway and rarely reach the other side. So we verified that they didn’t penetrate the whole way.”

    The results “exceeded our wildest expectations,” says Short. “In every test we ran, the application of radiation slowed corrosion by a factor of two to three.”

    More experiments, more insights

    In subsequent tests, the researchers more closely replicated commercially available molten salt by omitting the additive (europium fluoride) that they had used to speed up corrosion, and they tweaked the temperature for even more realistic conditions. “In carefully monitored tests, we found that by raising the temperature by 100 degrees Celsius, we could get corrosion to happen about 1,000 times faster than it would in a reactor,” says Short.
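A thousand-fold speed-up from a 100-degree rise implies strongly thermally activated (Arrhenius-type) kinetics. A minimal sketch of the arithmetic, using illustrative temperatures since the article does not state the baseline, shows the activation energy such an acceleration would imply:

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def implied_activation_energy(factor: float, t_low_k: float, t_high_k: float) -> float:
    """Activation energy (J/mol) for which an Arrhenius-type rate speeds up
    by `factor` when the temperature rises from t_low_k to t_high_k."""
    return R * math.log(factor) / (1.0 / t_low_k - 1.0 / t_high_k)

# Illustrative only: a 1,000x speed-up between 500 C (773 K) and 600 C (873 K)
# corresponds to an activation energy of roughly 390 kJ/mol.
ea = implied_activation_energy(1000.0, 773.0, 873.0)
print(f"{ea / 1e3:.0f} kJ/mol")
```

The same function inverted (solving for the rate ratio given an activation energy) is how such temperature-accelerated test results are scaled back to reactor operating conditions.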

    Images from experiments with the nickel-chromium alloy plus the molten salt without the corrosive additive yielded further insights. Electron microscope images of the side of the foil sample facing the molten salt showed that in sections only exposed to the molten salt, the corrosion is clearly focused on the weakest part of the structure — the boundaries between the grains in the metal. In sections that were exposed to both the molten salt and the proton beam, the corrosion isn’t limited to the grain boundaries but is more spread out over the surface. Experimental results showed that these cracks are shallower and less likely to cause a key component to break.

    Short explains the observations. Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are areas — called grain boundaries — where the atoms don’t line up as well. In the corrosion-only images, dark lines track the grain boundaries. Molten salt has seeped into the grain boundaries and pulled out salt-soluble atoms. In the corrosion-plus-irradiation images, the damage is more general. It’s not only the grain boundaries that get attacked but also regions within the grains.

    So, when the material is irradiated, the molten salt also removes material from within the grains. Over time, more material comes out of the grains themselves than from the spaces between them. The removal isn’t focused on the grain boundaries; it’s spread out over the whole surface. As a result, any cracks that form are shallower and more spread out, and the material is less likely to fail.

    Testing commercial alloys

    The experiments described thus far involved model alloys — simple combinations of elements that are good for studying science but would never be used in a reactor. In the next series of experiments, the researchers focused on three commercially available alloys that are composed of nickel, chromium, iron, molybdenum, and other elements in various combinations.

    Results from the experiments with the commercial alloys showed a consistent pattern — one that confirmed an idea that the researchers had going in: the higher the concentration of salt-soluble elements in the alloy, the worse the radiation-induced corrosion damage. Radiation will increase the rate at which salt-soluble atoms such as chromium leave the grain boundaries, hastening the corrosion process. However, if there are more insoluble elements such as nickel present, those atoms will go into the salt more slowly. Over time, they’ll accumulate at the grain boundary and form a protective coating that blocks the grain boundary — a “self-healing mechanism that decelerates the rate of corrosion,” say the researchers.

    Thus, if an alloy consists mostly of atoms that don’t dissolve in molten salt, irradiation will cause them to form a protective coating that slows the corrosion process. But if an alloy consists mostly of atoms that dissolve in molten salt, irradiation will make them dissolve faster, speeding up corrosion. As Short summarizes, “In terms of corrosion, irradiation makes a good alloy better and a bad alloy worse.”

    Real-world relevance plus practical guidelines

    Short and Zhou find their results encouraging. In a nuclear reactor made of “good” alloys, the slowdown in corrosion will probably be even more pronounced than what they observed in their proton-based experiments because the neutrons that inflict the damage won’t chemically react with the salt to make it more corrosive. As a result, reactor designers could push the envelope more in their operating conditions, allowing them to get more power out of the same nuclear plant without compromising on safety.

    However, the researchers stress that there’s much work to be done. Many more projects are needed to explore and understand the exact corrosion mechanism in specific alloys under different irradiation conditions. In addition, their findings need to be replicated by groups at other institutions using their own facilities. “What needs to happen now is for other labs to build their own facilities and start verifying whether they get the same results as we did,” says Short. To that end, Short and Zhou have made the details of their experimental setup and all of their data freely available online. “We’ve also been actively communicating with researchers at other institutions who have contacted us,” adds Zhou. “When they’re planning to visit, we offer to show them demonstration experiments while they’re here.”

    But already their findings provide practical guidance for other researchers and equipment designers. For example, the standard way to quantify corrosion damage is by “mass loss,” a measure of how much weight the material has lost. But Short and Zhou consider mass loss a flawed measure of corrosion in molten salts. “If you’re a nuclear plant operator, you usually care whether your structural components are going to break,” says Short. “Our experiments show that radiation can change how deep the cracks are, when all other things are held constant. The deeper the cracks, the more likely a structural component is to break, leading to a reactor failure.”

    In addition, the researchers offer a simple rule for identifying good metal alloys for structural components in molten salt reactors. Manufacturers provide extensive lists of available alloys with different compositions, microstructures, and additives. Faced with a list of options for critical structures, the designer of a new nuclear fission or fusion reactor can simply examine the composition of each alloy being offered. The one with the highest content of corrosion-resistant elements such as nickel will be the best choice. Inside a nuclear reactor, that alloy should respond to a bombardment of radiation not by corroding more rapidly but by forming a protective layer that helps block the corrosion process. “That may seem like a trivial result, but the exact threshold where radiation decelerates corrosion depends on the salt chemistry, the density of neutrons in the reactor, their energies, and a few other factors,” says Short. “Therefore, the complete guidelines are a bit more complicated. But they’re presented in a straightforward way that users can understand and utilize to make a good choice for the molten salt–based reactor they’re designing.”
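That composition-based rule of thumb lends itself to a simple screening sketch. The element classifications and alloy compositions below are illustrative assumptions for the sketch, not the researchers’ published guidelines:

```python
# Toy screen: rank candidate alloys by their content of elements assumed to
# stay insoluble in molten salt (hypothetical classification for illustration).
SALT_SOLUBLE = {"Cr", "Fe", "Mn"}    # assumed: readily dissolved by the salt
SALT_INSOLUBLE = {"Ni", "Mo", "W"}   # assumed: accumulate and passivate

def insoluble_fraction(composition: dict) -> float:
    """Weight fraction of an alloy made up of salt-insoluble elements."""
    total = sum(composition.values())
    return sum(wt for el, wt in composition.items() if el in SALT_INSOLUBLE) / total

# Hypothetical compositions (weight percent), loosely patterned on Ni-based alloys.
candidates = {
    "alloy_A": {"Ni": 72, "Cr": 16, "Fe": 8, "Mo": 4},
    "alloy_B": {"Ni": 58, "Cr": 22, "Fe": 18, "Mo": 2},
}
best = max(candidates, key=lambda name: insoluble_fraction(candidates[name]))
print(best)  # the candidate with the most corrosion-resistant content
```

As Short notes, the real threshold where radiation decelerates corrosion also depends on salt chemistry and neutron flux and energy, so a production version of such a screen would need far more inputs than composition alone.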

    This research was funded, in part, by Eni S.p.A. through the MIT Plasma Science and Fusion Center’s Laboratory for Innovative Fusion Technologies. Earlier work was funded, in part, by the Transatomic Power Corporation and by the U.S. Department of Energy Nuclear Energy University Program. Equipment development and testing was supported by the Transatomic Power Corporation.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

    Optimizing nuclear fuels for next-generation reactors

    In 2010, when Ericmoore Jossou was attending college in northern Nigeria, the lights would flicker in and out all day, sometimes lasting only for a couple of hours at a time. The frustrating experience reaffirmed Jossou’s realization that the country’s sporadic energy supply was a problem. It was the beginning of his path toward nuclear engineering.

    Because of the energy crisis, “I told myself I was going to find myself in a career that allows me to develop energy technologies that can easily be scaled to meet the energy needs of the world, including my own country,” says Jossou, an assistant professor in a shared position between the departments of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick (1986) Professor, and of Electrical Engineering and Computer Science.

    Today, Jossou uses computer simulations and artificial intelligence for rational materials design: the purposeful development of cladding materials and fuels for next-generation nuclear reactors. His appointment, one of the shared faculty hires between the MIT Schwarzman College of Computing and departments across MIT, recognizes his commitment to computing for climate and the environment.

    A well-rounded education in Nigeria

    Growing up in Lagos, Jossou knew education was about more than just bookish knowledge, so he was eager to travel and experience other cultures. He would start in his own backyard by traveling across the Niger River and enrolling in Ahmadu Bello University in northern Nigeria. Moving from the south was a cultural education with a different language and different foods. It was here that Jossou got to try and love tuwo shinkafa, a northern Nigerian rice-based specialty, for the first time.

    After his undergraduate studies, armed with a bachelor’s degree in chemistry, Jossou was among a small cohort selected for a specialty master’s training program funded by the World Bank Institute and African Development Bank. The program at the African University of Science and Technology in Abuja, Nigeria, is a pan-African venture dedicated to nurturing homegrown science talent on the continent. Visiting professors from around the world taught intensive three-week courses, an experience which felt like drinking from a fire hose. The program widened Jossou’s views and he set his sights on a doctoral program with an emphasis on clean energy systems.

    A pivot to nuclear science

    While in Nigeria, Jossou learned of Professor Jerzy Szpunar at the University of Saskatchewan in Canada, who was looking for a student researcher to explore fuels and alloys for nuclear reactors. Before then, Jossou was lukewarm on nuclear energy, but the research sounded fascinating. The Fukushima, Japan, incident was recently in the rearview mirror and Jossou remembered his early determination to address his own country’s energy crisis. He was sold on the idea and graduated with a doctoral degree from the University of Saskatchewan on an international dean’s scholarship.

    Jossou’s postdoctoral work included a brief stint at Brookhaven National Laboratory as a staff scientist. He leaped at the opportunity to join MIT NSE as a way of realizing his research interest and teaching future engineers. “I would really like to conduct cutting-edge research in nuclear materials design and to pass on my knowledge to the next generation of scientists and engineers and there’s no better place to do that than at MIT,” Jossou says.

    Merging material science and computational modeling

    Jossou’s doctoral work on designing nuclear fuels for next-generation reactors forms the basis of research his lab is pursuing at MIT NSE. Nuclear reactors that were built in the 1950s and ’60s are getting a makeover in terms of improved accident tolerance. Reactors are not confined to one kind, either: We have micro reactors and are now considering ones using metallic nuclear fuels, Jossou points out. The diversity of options is enough to keep researchers busy testing materials fit for cladding, the lining that prevents corrosion of the fuel and release of radioactive fission products into the surrounding reactor coolant.

    The team is also investigating fuels that improve burn-up efficiencies, so they can last longer in the reactor. An intriguing approach has been to immobilize the gas bubbles that arise from the fission process, so they don’t grow and degrade the fuel.

    Since joining MIT in July 2023, Jossou has been setting up a lab that optimizes the composition of accident-tolerant nuclear fuels. He is leaning on his materials science background and looping computer simulations and artificial intelligence into the mix.

    Computer simulations allow the researchers to narrow down the potential field of candidates, optimized for specific parameters, so they can synthesize only the most promising candidates in the lab. And AI’s predictive capabilities guide researchers on which materials composition to consider next. “We no longer depend on serendipity to choose our materials; our lab is based on rational materials design,” Jossou says. “We can rapidly design advanced nuclear fuels.”
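The screen-then-synthesize loop described above can be sketched in a few lines. The scoring function here is a hypothetical placeholder standing in for a trained model or physics simulation, not an actual materials model:

```python
def surrogate_score(cr_pct: float, ni_pct: float) -> float:
    """Placeholder stand-in for a trained ML model or physics simulation
    that predicts a fuel/cladding figure of merit from composition."""
    return ni_pct - 0.5 * cr_pct  # toy objective, not a real materials model

# 1) Enumerate candidate compositions (Cr content, Ni as balance; toy grid).
candidates = [(cr, 100 - cr) for cr in range(5, 40, 5)]
# 2) Score every candidate in silico, which is cheap compared to synthesis.
ranked = sorted(candidates, key=lambda c: surrogate_score(*c), reverse=True)
# 3) Send only the most promising few to the lab for synthesis and testing.
shortlist = ranked[:3]
print(shortlist)
```

The design choice is the division of labor: exhaustive, cheap evaluation happens in software, and the expensive lab work is reserved for the shortlist.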

    Advancing energy causes in Africa

    Now that he is at MIT, Jossou admits the view from the outside is different. He now harbors a different perspective on what Africa needs to address some of its challenges. “The starting point to solve our problems is not money; it needs to start with ideas,” he says, “we need to find highly skilled people who can actually solve problems.” That job involves adding economic value to the rich arrays of raw materials that the continent is blessed with. It frustrates Jossou that Niger, a country rich in raw material for uranium, has no nuclear reactors of its own. It ships most of its ore to France. “The path forward is to find a way to refine these materials in Africa and to be able to power the industries on that continent as well,” Jossou says.

    Jossou is determined to do his part to eliminate these roadblocks.

    Anchored in mentorship, Jossou’s solution aims to train talent from Africa in his own lab. He has applied for an MIT Global Experiences MISTI grant to facilitate travel and research studies for Ghanaian scientists. “The goal is to conduct research in our facility and perhaps add value to indigenous materials,” Jossou says.

    Adding value has been a consistent theme of Jossou’s career. He remembers wanting to become a neurosurgeon after reading “Gifted Hands,” moved by the personal story of the author, Ben Carson. As Jossou grew older, however, he realized that becoming a doctor wasn’t necessarily what he wanted. Instead, he was looking to add value. “What I wanted was really to take on a career that allows me to solve a societal problem.” The societal problem of clean and safe energy for all is precisely what Jossou is working on today.

    Making the clean energy transition work for everyone

    The clean energy transition is already underway, but how do we make sure it happens in a manner that is affordable, sustainable, and fair for everyone?

    That was the overarching question at this year’s MIT Energy Conference, which took place March 11 and 12 in Boston and was titled “Short and Long: A Balanced Approach to the Energy Transition.”

    Each year, the student-run conference brings together leaders in the energy sector to discuss the progress and challenges they see in their work toward a greener future. Participants come from research, industry, government, academia, and the investment community to network and exchange ideas over two whirlwind days of keynote talks, fireside chats, and panel discussions.

    Several participants noted that clean energy technologies are already cost-competitive with fossil fuels, but changing the way the world works requires more than just technology.

    “None of this is easy, but I think developing innovative new technologies is really easy compared to the things we’re talking about here, which is how to blend social justice, soft engineering, and systems thinking that puts people first,” Daniel Kammen, a distinguished professor of energy at the University of California at Berkeley, said in a keynote talk. “While clean energy has a long way to go, it is more than ready to transition us from fossil fuels.”

    The event also featured a keynote discussion between MIT President Sally Kornbluth and MIT’s Kyocera Professor of Ceramics Yet-Ming Chiang, in which Kornbluth discussed her first year at MIT as well as a recently announced, campus-wide effort to solve critical climate problems known as the Climate Project at MIT.

    “The reason I wanted to come to MIT was I saw that MIT has the potential to solve the world’s biggest problems, and first among those for me was the climate crisis,” Kornbluth said. “I’m excited about where we are, I’m excited about the enthusiasm of the community, and I think we’ll be able to make really impactful discoveries through this project.”

    Fostering new technologies

    Several panels convened experts in new or emerging technology fields to discuss what it will take for their solutions to contribute to deep decarbonization.

    “The fun thing and challenging thing about first-of-a-kind technologies is they’re all kind of different,” said Jonah Wagner, principal assistant director for industrial innovation and clean energy in the U.S. Office of Science and Technology Policy. “You can map their growth against specific challenges you expect to see, but every single technology is going to face their own challenges, and every single one will have to defy an engineering barrier to get off the ground.”

    Among the emerging technologies discussed was next-generation geothermal energy, which uses new techniques to extract heat from the Earth’s crust in new places.

    A promising aspect of the technology is that it can leverage existing infrastructure and expertise from the oil and gas industry. Many newly developed techniques for geothermal production, for instance, use the same drills and rigs as those used for hydraulic fracturing.

    “The fact that we have a robust ecosystem of oil and gas labor and technology in the U.S. makes innovation in geothermal much more accessible compared to some of the challenges we’re seeing in nuclear or direct-air capture, where some of the supply chains are disaggregated around the world,” said Gabrial Malek, chief of staff at the geothermal company Fervo Energy.

    Another technology generating excitement — if not net energy quite yet — is fusion, the process of combining, or fusing, light atoms together to form heavier ones for a net energy gain, in the same process that powers the sun. MIT spinout Commonwealth Fusion Systems (CFS) has already validated many aspects of its approach for achieving fusion power, and the company’s unique partnership with MIT was discussed in a panel on the industry’s progress.

    “We’re standing on the shoulders of decades of research from the scientific community, and we want to maintain those ties even as we continue developing our technology,” CFS Chief Science Officer Brandon Sorbom PhD ’17 said, noting that CFS is one of the largest company sponsors of research at MIT and collaborates with institutions around the world. “Engaging with the community is a really valuable lever to get new ideas and to sanity check our own ideas.”

    Sorbom said that as CFS advances fusion energy, the company is thinking about how it can replicate its processes to lower costs and maximize the technology’s impact around the planet.

    “For fusion to work, it has to work for everyone,” Sorbom said. “I think the affordability piece is really important. We can’t just build this technological jewel that only one class of nations can afford. It has to be a technology that can be deployed throughout the entire world.”

    The event also gave students — many from MIT — a chance to learn more about careers in energy and featured a startup showcase, in which dozens of companies displayed their energy and sustainability solutions.

    “More than 700 people are here from every corner of the energy industry, so there are so many folks to connect with and help me push my vision into reality,” says GreenLIB CEO Fred Rostami, whose company recycles lithium-ion batteries. “The good thing about the energy transition is that a lot of these technologies and industries overlap, so I think we can enable this transition by working together at events like this.”

    A focused climate strategy

    Kornbluth noted that when she came to MIT, a large percentage of students and faculty were already working on climate-related technologies. With the Climate Project at MIT, she wanted to help ensure the whole of those efforts is greater than the sum of its parts.

    The project is organized around six distinct missions, including decarbonizing energy and industry, empowering frontline communities, and building healthy, resilient cities. Kornbluth says the mission areas will help MIT community members collaborate around multidisciplinary challenges. Her team, which includes a committee of faculty advisors, has begun to search for the leads of each mission area, and Kornbluth said she is planning to appoint a vice president for climate at the Institute.

    “I want someone who has the purview of the whole Institute and will report directly to me to help make sure this project stays on track,” Kornbluth explained.

    In his conversation about the initiative with Kornbluth, Yet-Ming Chiang said projects will be funded based on their potential to reduce emissions and make the planet more sustainable at scale.

    “Projects should be very high risk, with very high impact,” Chiang explained. “They should have a chance to prove themselves, and those efforts should not be limited by resources, only by time.”

    In discussing her vision of the climate project, Kornbluth alluded to the “short and long” theme of the conference.

    “It’s about balancing research and commercialization,” Kornbluth said. “The climate project has a very variable timeframe, and I think universities are the sector that can think about the things that might be 30 years out. We have to think about the incentives across the entire innovation pipeline and how we can keep an eye on the long term while making sure the short-term things get out rapidly.”

    Cutting carbon emissions on the US power grid

    To help curb climate change, the United States is working to reduce carbon emissions from all sectors of the energy economy. Much of the current effort involves electrification — switching to electric cars for transportation, electric heat pumps for home heating, and so on. But in the United States, the electric power sector already generates about a quarter of all carbon emissions. “Unless we decarbonize our electric power grids, we’ll just be shifting carbon emissions from one source to another,” says Amanda Farnsworth, a PhD candidate in chemical engineering and research assistant at the MIT Energy Initiative (MITEI).

    But decarbonizing the nation’s electric power grids will be challenging. The availability of renewable energy resources such as solar and wind varies in different regions of the country. Likewise, patterns of energy demand differ from region to region. As a result, the least-cost pathway to a decarbonized grid will differ from one region to another.

    Over the past two years, Farnsworth and Emre Gençer, a principal research scientist at MITEI, developed a power system model that would allow them to investigate the importance of regional differences — and would enable experts and laypeople alike to explore their own regions and make informed decisions about the best way to decarbonize. “With this modeling capability you can really understand regional resources and patterns of demand, and use them to do a ‘bespoke’ analysis of the least-cost approach to decarbonizing the grid in your particular region,” says Gençer.

    To demonstrate the model’s capabilities, Gençer and Farnsworth performed a series of case studies. Their analyses confirmed that strategies must be designed for specific regions and that all the costs and carbon emissions associated with manufacturing and installing solar and wind generators must be included for accurate accounting. But the analyses also yielded some unexpected insights, including a correlation between a region’s wind energy and the ease of decarbonizing, and the important role of nuclear power in decarbonizing the California grid.

    A novel model

    For many decades, researchers have been developing “capacity expansion models” to help electric utility planners tackle the problem of designing power grids that are efficient, reliable, and low-cost. More recently, many of those models also factor in the goal of reducing or eliminating carbon emissions. While those models can provide interesting insights relating to decarbonization, Gençer and Farnsworth believe they leave some gaps that need to be addressed.

    For example, most focus on conditions and needs in a single U.S. region without highlighting the unique peculiarities of their chosen area of focus. Hardly any consider the carbon emitted in fabricating and installing such “zero-carbon” technologies as wind turbines and solar panels. And finally, most of the models are challenging to use. Even experts in the field must search out and assemble various complex datasets in order to perform a study of interest.

    Gençer and Farnsworth’s capacity expansion model — called Ideal Grid, or IG — addresses those and other shortcomings. IG is built within the framework of MITEI’s Sustainable Energy System Analysis Modeling Environment (SESAME), an energy system modeling platform that Gençer and his colleagues at MITEI have been developing since 2017. SESAME models the levels of greenhouse gas emissions from multiple, interacting energy sectors in future scenarios.

    Importantly, SESAME includes both techno-economic analyses and life-cycle assessments of various electricity generation and storage technologies. It thus considers costs and emissions incurred at each stage of the life cycle (manufacture, installation, operation, and retirement) for all generators. Most capacity expansion models only account for emissions from operation of fossil fuel-powered generators. As Farnsworth notes, “While this is a good approximation for our current grid, emissions from the full life cycle of all generating technologies become non-negligible as we transition to a highly renewable grid.”
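    The life-cycle accounting described here can be sketched in a few lines. The stage names follow the article, but every number below is a made-up placeholder for illustration, not a value from SESAME or the IG model.

```python
# Illustrative sketch of life-cycle emissions accounting: fixed emissions from
# each life-cycle stage are spread over the generator's lifetime output.
# All figures are hypothetical placeholders, not SESAME or IG model data.

def lifecycle_emissions_per_kwh(stage_emissions_g, lifetime_output_kwh):
    """Total life-cycle emissions (g CO2) divided by lifetime output (kWh)."""
    return sum(stage_emissions_g.values()) / lifetime_output_kwh

# Hypothetical wind turbine: most emissions occur before operation begins.
wind_stages = {
    "manufacture":  9.0e8,   # g CO2
    "installation": 1.0e8,
    "operation":    0.5e8,   # maintenance over the turbine's life
    "retirement":   0.5e8,
}

# Hypothetical lifetime output: 3 MW x 30% capacity factor x 20 years
lifetime_kwh = 3_000 * 0.30 * 20 * 8_760

print(round(lifecycle_emissions_per_kwh(wind_stages, lifetime_kwh), 1))
```

    With these placeholder figures, the turbine's life-cycle intensity works out to roughly 7 grams of CO2 per kWh, small next to fossil generation but, as Farnsworth notes, non-negligible in a highly renewable grid.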

    Through its connection with SESAME, the IG model has access to data on costs and emissions associated with many technologies critical to power grid operation. To explore regional differences in the cost-optimized decarbonization strategies, the IG model also includes conditions within each region, notably details on demand profiles and resource availability.

    In one recent study, Gençer and Farnsworth selected nine of the standard North American Electric Reliability Corporation (NERC) regions. For each region, they incorporated hourly electricity demand into the IG model. Farnsworth also gathered meteorological data for the nine U.S. regions for seven years — 2007 to 2013 — and calculated hourly power output profiles for the renewable energy sources, including solar and wind, taking into account the geography-limited maximum capacity of each technology.

    The availability of wind and solar resources differs widely from region to region. To permit a quick comparison, the researchers use a measure called “annual capacity factor,” which is the ratio between the electricity produced by a generating unit in a year and the electricity that could have been produced if that unit operated continuously at full power for that year. Values for the capacity factors in the nine U.S. regions vary between 20 percent and 30 percent for solar power and between 25 percent and 45 percent for wind.
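    The capacity factor definition above translates directly into code. The hourly profile here is a crude synthetic stand-in for the meteorology-derived profiles used in the study.

```python
# Annual capacity factor as defined above: energy actually produced in a year
# divided by the energy the unit would produce at full power all year.
# The hourly profile is synthetic, invented for this sketch.

def annual_capacity_factor(hourly_output_mw, nameplate_mw):
    hours = len(hourly_output_mw)          # 8,760 for a full year
    return sum(hourly_output_mw) / (nameplate_mw * hours)

# Toy solar profile: a 100 MW installation with a midday-peaked daily shape.
nameplate = 100.0
profile = []
for day in range(365):
    for hour in range(24):
        # crude daylight ramp: zero output outside roughly 7:00-19:00
        profile.append(max(0.0, nameplate * (1 - abs(hour - 13) / 6)))

cf = annual_capacity_factor(profile, nameplate)
print(f"{cf:.0%}")
```

    This toy midday-peaked profile yields a capacity factor of 25 percent, inside the 20 to 30 percent range cited for solar.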

    Calculating optimized grids for different regions

    For their first case study, Gençer and Farnsworth used the IG model to calculate cost-optimized regional grids to meet defined caps on carbon dioxide (CO2) emissions. The analyses were based on cost and emissions data for 10 technologies: nuclear, wind, solar, three types of natural gas, three types of coal, and energy storage using lithium-ion batteries. Hydroelectric was not considered because no comprehensive survey of potential expansion sites, with their respective costs and expected power output levels, was available.

    To make region-to-region comparisons easy, the researchers used several simplifying assumptions. Their focus was on electricity generation, so the model calculations assume the same transmission and distribution costs and efficiencies for all regions. Also, the calculations did not consider the generator fleet currently in place. The goal was to investigate what happens if each region were to start from scratch and generate an “ideal” grid.

    To begin, Gençer and Farnsworth calculated the most economic combination of technologies for each region if it limits its total carbon emissions to 100, 50, and 25 grams of CO2 per kilowatt-hour (kWh) generated. For context, the current U.S. average emissions intensity is 386 grams of CO2 emissions per kWh.

    Given the wide variation in regional demand, the researchers needed to use a new metric to normalize their results and permit a one-to-one comparison between regions. Accordingly, the model calculates the required generating capacity divided by the average demand for each region. The required capacity accounts for both the variation in demand and the inability of generating systems — particularly solar and wind — to operate at full capacity all of the time.
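    That normalization can be sketched with invented demand profiles: dividing required installed capacity by average demand makes regions of very different absolute sizes directly comparable.

```python
# The normalization described above: required installed capacity divided by
# average demand. All capacities and demand profiles here are invented for
# illustration, not IG model outputs.

def capacity_per_avg_demand(installed_capacity_mw, hourly_demand_mw):
    avg_demand = sum(hourly_demand_mw) / len(hourly_demand_mw)
    return installed_capacity_mw / avg_demand

# Two hypothetical regions of very different sizes, each with demand that
# steps up during waking hours (hour 8 onward in each 24-hour cycle).
big_region_demand   = [50_000 + 10_000 * (h % 24 >= 8) for h in range(8_760)]
small_region_demand = [5_000 + 1_000 * (h % 24 >= 8) for h in range(8_760)]

print(round(capacity_per_avg_demand(150_000, big_region_demand), 2))
print(round(capacity_per_avg_demand(15_000, small_region_demand), 2))
```

    Both hypothetical regions come out at about 2.65 times average demand, even though one is ten times the size of the other, which is exactly what makes the metric useful for one-to-one comparison.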

    The analysis was based on regional demand data for 2021 — the most recent data available. And for each region, the model calculated the cost-optimized power grid seven times, using weather data from seven years. This discussion focuses on mean values for cost and total capacity installed and also total values for coal and for natural gas, although the analysis considered three separate technologies for each fuel.

    The results of the analyses confirm that there’s a wide variation in the cost-optimized system from one region to another. Most notable is that some regions require a lot of energy storage while others don’t require any at all. The availability of wind resources turns out to play an important role, while the use of nuclear is limited: the carbon intensity of nuclear (including uranium mining and transportation) is lower than that of either solar or wind, but nuclear is the most expensive technology option, so it’s added only when necessary. Finally, the change in the CO2 emissions cap brings some interesting responses.

    Under the most lenient limit on emissions — 100 grams of CO2 per kWh — there’s no coal in the mix anywhere. It’s the first to go, in general being replaced by the lower-carbon-emitting natural gas. Texas, Central, and North Central — the regions with the most wind — don’t need energy storage, while the other six regions do. The regions with the least wind — California and the Southwest — have the highest energy storage requirements. Unlike the other regions modeled, California begins installing nuclear, even at the most lenient limit.

    Under the moderate cap — 50 grams of CO2 per kWh — most regions bring in nuclear power. California and the Southwest — regions with low wind capacity factors — rely on nuclear the most. In contrast, wind-rich Texas, Central, and North Central don't incorporate nuclear yet but instead add energy storage — a less-expensive option — to their mix. There's still a bit of natural gas everywhere, in spite of its CO2 emissions.

    Under the most restrictive cap — 25 grams of CO2 per kWh — nuclear is in the mix everywhere. The highest use of nuclear is again correlated with low wind capacity factor. Central and North Central depend on nuclear the least. All regions continue to rely on a little natural gas to keep prices from skyrocketing due to the necessary but costly nuclear component. With nuclear in the mix, the need for storage declines in most regions.

    Results of the cost analysis are also interesting. Texas, Central, and North Central all have abundant wind resources, and they can delay incorporating the costly nuclear option, so the cost of their optimized system tends to be lower than costs for the other regions. In addition, their total capacity deployment — including all sources — tends to be lower than for the other regions. California and the Southwest both rely heavily on solar, and in both regions, costs and total deployment are relatively high.

    Lessons learned

    One unexpected result is the benefit of combining solar and wind resources. The problem with relying on solar alone is obvious: “Solar energy is available only five or six hours a day, so you need to build a lot of other generating sources and abundant storage capacity,” says Gençer. But an analysis of unit-by-unit operations at an hourly resolution yielded a less-intuitive trend: While solar installations only produce power in the midday hours, wind turbines generate the most power in the nighttime hours. As a result, solar and wind power are complementary. Having both resources available is far more valuable than having either one or the other. And having both impacts the need for storage, says Gençer: “Storage really plays a role either when you’re targeting a very low carbon intensity or where your resources are mostly solar and they’re not complemented by wind.”
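    The complementarity effect can be illustrated with toy profiles: a midday-peaked solar curve and a night-peaked wind curve, both invented for this sketch, serving a flat demand. The metric is the peak residual demand left for storage or dispatchable plants to cover.

```python
# Toy illustration of solar/wind complementarity: solar peaks midday, wind
# (in this synthetic profile) peaks at night, so a mix leaves a flatter
# residual. All profiles are invented; none come from the IG model.
import math

hours = range(24)
demand = [100.0] * 24                                              # flat, MW
solar = [max(0.0, 80 * math.sin(math.pi * (h - 6) / 12)) for h in hours]
wind  = [40 + 40 * math.cos(math.pi * h / 12) for h in hours]      # night-peaked

def peak_residual(generation):
    """Largest hourly shortfall that storage or dispatchable plants must cover."""
    return max(d - g for d, g in zip(demand, generation))

print(round(peak_residual(solar), 1))                              # solar alone
print(round(peak_residual(wind), 1))                               # wind alone
print(round(peak_residual([(s + w) / 2 for s, w in zip(solar, wind)]), 1))
```

    Either resource alone leaves the full 100 MW to cover at some hour, while an even split of the same total nameplate capacity cuts the peak residual to 80 MW, echoing the finding that having both resources is worth more than having either one.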

    Gençer notes that the target for the U.S. electricity grid is to reach net zero by 2035. But the analysis showed that reaching just 100 grams of CO2 per kWh would require at least 50 percent of system capacity to be wind and solar. “And we’re nowhere near that yet,” he says.

    Indeed, Gençer and Farnsworth’s analysis doesn’t even include a zero emissions case. Why not? As Gençer says, “We cannot reach zero.” Wind and solar are usually considered to be net zero, but that’s not true. Wind, solar, and even storage have embedded carbon emissions due to materials, manufacturing, and so on. “To go to true net zero, you’d need negative emission technologies,” explains Gençer, referring to techniques that remove carbon from the air or ocean. That observation confirms the importance of performing life-cycle assessments.

    Farnsworth voices another concern: Coal quickly disappears in all regions because natural gas is an easy substitute for coal and has lower carbon emissions. “People say they’ve decreased their carbon emissions by a lot, but most have done it by transitioning from coal to natural gas power plants,” says Farnsworth. “But with that pathway for decarbonization, you hit a wall. Once you’ve transitioned from coal to natural gas, you’ve got to do something else. You need a new strategy — a new trajectory to actually reach your decarbonization target, which most likely will involve replacing the newly installed natural gas plants.”

    Gençer makes one final point: The availability of cheap nuclear — whether fission or fusion — would completely change the picture. When the tighter caps require the use of nuclear, the cost of electricity goes up. “The impact is quite significant,” says Gençer. “When we go from 100 grams down to 25 grams of CO2 per kWh, we see a 20 percent to 30 percent increase in the cost of electricity.” If it were available, a less-expensive nuclear option would likely be included in the technology mix under more lenient caps, significantly reducing the cost of decarbonizing power grids in all regions.

    The special case of California

    In another analysis, Gençer and Farnsworth took a closer look at California, where about 10 percent of total demand is now met with nuclear power. Yet the state's nuclear plants are scheduled for retirement very soon, and a 1976 law forbids the construction of new nuclear plants. (The state recently extended the lifetime of one nuclear plant to prevent the grid from becoming unstable.) “California is very motivated to decarbonize their grid,” says Farnsworth. “So how difficult will that be without nuclear power?”

    To find out, the researchers performed a series of analyses to investigate the challenge of decarbonizing in California with nuclear power versus without it. At 200 grams of CO2 per kWh — about a 50 percent reduction — the optimized mix and cost look the same with and without nuclear. Nuclear doesn’t appear due to its high cost. At 100 grams of CO2 per kWh — about a 75 percent reduction — nuclear does appear in the cost-optimized system, reducing the total system capacity while having little impact on the cost.

    But at 50 grams of CO2 per kWh, the ban on nuclear makes a significant difference. “Without nuclear, there’s about a 45 percent increase in total system size, which is really quite substantial,” says Farnsworth. “It’s a vastly different system, and it’s more expensive.” Indeed, the cost of electricity would increase by 7 percent.

    Going one step further, the researchers performed an analysis to determine the most decarbonized system possible in California. Without nuclear, the state could reach 40 grams of CO2 per kWh. “But when you allow for nuclear, you can get all the way down to 16 grams of CO2 per kWh,” says Farnsworth. “We found that California needs nuclear more than any other region due to its poor wind resources.”

    Impacts of a carbon tax

    One more case study examined a policy approach to incentivizing decarbonization. Instead of imposing a ceiling on carbon emissions, this strategy would tax every ton of carbon that’s emitted. Proposed taxes range from zero to $100 per ton.

    To investigate the effectiveness of different levels of carbon tax, Farnsworth and Gençer used the IG model to calculate the minimum-cost system for each region, assuming a certain cost for emitting each ton of carbon. The analyses show that a low carbon tax — just $10 per ton — significantly reduces emissions in all regions by phasing out all coal generation. In the Northwest region, for example, a carbon tax of $10 per ton decreases system emissions by 65 percent while increasing system cost by just 2.8 percent (relative to an untaxed system).
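    The mechanics behind that result can be sketched simply: a carbon tax adds the product of the tax rate and a generator's emissions intensity to its cost, re-ranking the options. The cost and emissions figures below are illustrative placeholders, not IG model inputs.

```python
# How a carbon tax re-ranks generators: the tax adds (tax $/ton) times
# (emissions tons/MWh) to each technology's per-MWh cost. The base costs and
# emissions intensities below are hypothetical, chosen only for illustration.

def taxed_cost(base_cost_per_mwh, tons_co2_per_mwh, tax_per_ton):
    return base_cost_per_mwh + tons_co2_per_mwh * tax_per_ton

# Hypothetical generators: (base cost $/MWh, emissions tons CO2/MWh)
coal = (40.0, 1.0)
gas  = (38.0, 0.4)

for tax in (0, 10, 50):
    print(tax, taxed_cost(*coal, tax), taxed_cost(*gas, tax))
```

    With these placeholder numbers, coal and gas are nearly tied untaxed, but at $10 per ton coal costs $50 per MWh versus $42 for gas, and the gap only widens as the tax rises, which is why even a modest tax phases coal out quickly.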

    After coal has been phased out of all regions, every increase in the carbon tax brings a slow but steady linear decrease in emissions and a linear increase in cost. But the rates of those changes vary from region to region. For example, the rate of decrease in emissions for each added tax dollar is far lower in the Central region than in the Northwest, largely due to the Central region’s already low emissions intensity without a carbon tax. Indeed, the Central region without a carbon tax has a lower emissions intensity than the Northwest region with a tax of $100 per ton.

    As Farnsworth summarizes, “A low carbon tax — just $10 per ton — is very effective in quickly incentivizing the replacement of coal with natural gas. After that, it really just incentivizes the replacement of natural gas technologies with more renewables and more energy storage.” She concludes, “If you’re looking to get rid of coal, I would recommend a carbon tax.”

    Future extensions of IG

    The researchers have already added hydroelectric to the generating options in the IG model, and they are now planning further extensions. For example, they will include additional regions for analysis, add other long-term energy storage options, and make changes that allow analyses to take into account the generating infrastructure that already exists. Also, they will use the model to examine the cost and value of interregional transmission to take advantage of the diversity of available renewable resources.

    Farnsworth emphasizes that the analyses reported here are just samples of what’s possible using the IG model. The model is a web-based tool that includes embedded data covering the whole United States, and the output from an analysis includes an easy-to-understand display of the required installations, hourly operation, and overall techno-economic analysis and life-cycle assessment results. “The user is able to go in and explore a vast number of scenarios with no data collection or pre-processing,” she says. “There’s no barrier to begin using the tool. You can just hop on and start exploring your options so you can make an informed decision about the best path forward.”

    This work was supported by the International Energy Agency Gas and Oil Technology Collaboration Program and the MIT Energy Initiative Low-Carbon Energy Centers.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.