More stories

  • Q&A: Claire Walsh on how J-PAL’s King Climate Action Initiative tackles the twin climate and poverty crises

    The King Climate Action Initiative (K-CAI) is the flagship climate change program of the Abdul Latif Jameel Poverty Action Lab (J-PAL), which innovates, tests, and scales solutions at the nexus of climate change and poverty alleviation, together with policy partners worldwide.

    Claire Walsh is the associate director of policy at J-PAL Global at MIT. She is also the project director of K-CAI. Here, Walsh talks about the work of K-CAI since its launch in 2020, and describes the ways its projects are making a difference. This is part of an ongoing series exploring how the MIT School of Humanities, Arts, and Social Sciences is addressing the climate crisis.

    Q: According to the King Climate Action Initiative (K-CAI), any attempt to address poverty effectively must also simultaneously address climate change. Why is that?

    A: Climate change will disproportionately harm people in poverty, particularly in low- and middle-income countries, because they tend to live in places that are more exposed to climate risk. These are nations in sub-Saharan Africa and South and Southeast Asia where low-income communities rely heavily on agriculture for their livelihoods, so extreme weather — heat, droughts, and flooding — can be devastating for people’s jobs and food security. In fact, the World Bank estimates that up to 130 million more people may be pushed into poverty by climate change by 2030.

    This is unjust because these countries have historically emitted the least; their people didn’t cause the climate crisis. At the same time, they are trying to improve their economies and improve people’s welfare, so their energy demands are increasing, and they are emitting more. But they don’t have the same resources as wealthy nations for mitigation or adaptation, and many developing countries understandably don’t feel eager to put solving a problem they didn’t create at the top of their priority list. This makes finding paths forward to cutting emissions on a global scale politically challenging.

    For these reasons, the problems of enhancing the well-being of people experiencing poverty, addressing inequality, and reducing pollution and greenhouse gases are inextricably linked.

    Q: So how does K-CAI tackle this hybrid challenge?

    A: Our initiative is pretty unique. We are a competitive, policy-based research and development fund that focuses on innovating, testing, and scaling solutions. We support researchers from MIT and other universities, and their collaborators, who are actually implementing programs, whether NGOs [nongovernmental organizations], government, or the private sector. We fund pilots of small-scale ideas in a real-world setting to determine if they hold promise, followed by larger randomized, controlled trials of promising solutions in climate change mitigation, adaptation, pollution reduction, and energy access. Our goal is to determine, through rigorous research, if these solutions are actually working — for example, in cutting emissions or protecting forests or helping vulnerable communities adapt to climate change. And finally, we offer path-to-scale grants which enable governments and NGOs to expand access to programs that have been tested and have strong evidence of impact.
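
    The funnel Walsh describes rests on randomized evaluation: compare average outcomes between randomly assigned treatment and control groups, and the difference estimates the program’s effect. Below is a minimal sketch of that difference-in-means analysis; the simulated outcomes and sample sizes are purely illustrative, not data from any J-PAL study.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Simulated household outcomes (e.g., annual revenue, arbitrary units).
    # Randomization means the two groups differ only by program assignment.
    control = rng.normal(loc=100.0, scale=20.0, size=500)  # no program
    treated = rng.normal(loc=108.0, scale=20.0, size=500)  # received program

    # Difference in means estimates the average treatment effect.
    effect = treated.mean() - control.mean()

    # Welch's t-test asks whether the effect is distinguishable from zero.
    t_stat, p_value = stats.ttest_ind(treated, control, equal_var=False)

    # Normal-approximation 95% confidence interval for the effect.
    se = np.sqrt(treated.var(ddof=1) / len(treated)
                 + control.var(ddof=1) / len(control))
    print(f"effect = {effect:.2f} +/- {1.96 * se:.2f} (95% CI), p = {p_value:.4f}")
    ```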

    We think this model is really powerful. Since we launched in 2020, we have built a portfolio of over 30 randomized evaluations and 13 scaling projects in more than 35 countries. And to date, these projects have informed the scale-ups of evidence-based climate policies that have reached over 15 million people.

    Q: It seems like K-CAI is advancing a kind of policy science, demanding proof of a program’s capacity to deliver results at each stage. 

    A: This is one of the factors that drew me to J-PAL back in 2012. I majored in anthropology and studied abroad in Uganda. From those experiences I became very passionate about pursuing a career focused on poverty reduction. To me, it is unfair that in a world full of so much wealth and so much opportunity there exists so much extreme poverty. I wanted to dedicate my career to that, but I’m also a very detail-oriented nerd who really cares about whether a program that claims to be doing something for people is accomplishing what it claims.

    It’s been really rewarding to see demand from governments and NGOs for evidence-informed policymaking grow over my 12 years at J-PAL. This policy science approach holds exciting promise to help transform public policy and climate policy in the coming decades.  

    Q: Can you point to K-CAI-funded projects that meet this high bar and are now making a significant impact?

    A: Several examples jump to mind. In the state of Gujarat, India, pollution regulators are trying to cut particulate matter air pollution, which is devastating to human health. The region is home to many major industries whose emissions negatively affect most of the state’s 70 million residents.

    We partnered with state pollution regulators — kind of a regional EPA [Environmental Protection Agency] — to test an emissions trading scheme that is used widely in the U.S. and Europe but not in low- and middle-income countries. The government monitors pollution levels using technology installed at factories that sends data in real time, so the regulator knows exactly what their emissions look like. The regulator sets a cap on the overall level of pollution, allocates permits to pollute, and industries can trade emissions permits.
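
    To make the trading mechanics concrete, here is a toy sketch of how a cap on total emissions induces a permit price: firms whose abatement costs fall below the price abate, and the rest buy permits. The firm names, costs, and the all-or-nothing abatement rule are invented simplifications, not a model of the Gujarat scheme itself.

    ```python
    # Toy cap-and-trade market with invented numbers and a uniform permit price.
    # Each firm emits 100 units unabated and has a constant per-unit abatement cost.
    firms = {"steel": 12.0, "textiles": 5.0, "chemicals": 20.0, "bricks": 8.0}
    UNABATED = 100.0  # units of emissions per firm if it does nothing
    CAP = 280.0       # regulator's cap, below the 400 units emitted unabated

    def clearing_price(costs, cap):
        """Lowest candidate price at which total emissions fall within the cap.

        At price p, a firm abates if abating is cheaper than buying permits
        (cost < p); otherwise it keeps emitting and buys permits.
        """
        for price in sorted(costs.values()):
            emissions = sum(UNABATED for c in costs.values() if c >= price)
            if emissions <= cap:
                return price
        raise ValueError("cap unreachable even at the highest abatement cost")

    p = clearing_price(firms, CAP)
    for name, cost in firms.items():
        action = "abates" if cost < p else "buys permits"
        print(f"{name}: abatement cost {cost:.0f}, {action} at permit price {p:.0f}")
    ```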

    In 2019, researchers in the J-PAL network conducted the world’s first randomized, controlled trial of this emissions trading scheme and found that it cut pollution by 20 to 30 percent — a surprising reduction. It also reduced firms’ costs, on average, because the costs of compliance went down. The state government was eager to scale up the pilot, and in the past two years, two other cities, including Ahmedabad, the biggest city in the state, have adopted the concept.

    We are also supporting a project in Niger, whose economy is hugely dependent on rain-fed agriculture but which, with climate change, is experiencing rapid desertification. Researchers in the J-PAL network have been testing a program that trains farmers in a simple, inexpensive rainwater harvesting technique, in which farmers dig a half-moon-shaped hole called a demi-lune right before the rainy season. This demi-lune feeds crops that are grown directly on top of it, and helps return land that resembled flat desert to arable production.

    Researchers found that training farmers in this simple technology increased adoption from 4 percent to 94 percent and that demi-lunes increased agricultural output and revenue for farmers from the first year. K-CAI is funding a path-to-scale grant so local implementers can teach this technique to over 8,000 farmers and build a more cost-effective program model. If this takes hold, the team will work with local partners to scale the training to other relevant regions of the country and potentially other countries in the Sahel.

    One final example that we are really proud of, because we first funded it as a pilot and now it’s in the path-to-scale phase: We supported a team of researchers working with partners in Bangladesh trying to reduce carbon emissions and other pollution from brick manufacturing, an industry that generates 17 percent of the country’s carbon emissions. The scale of manufacturing is so great that at some times of year, Dhaka (the capital of Bangladesh) looks like Mordor.

    Workers form these bricks and stack hundreds of thousands of them, which they then fire by burning coal. A team of local researchers and collaborators from our J-PAL network found that you can reduce the amount of coal needed for the kilns by making some low-cost changes to the manufacturing process, including stacking the bricks in a way that increases airflow in the kiln and feeding the coal fires more frequently in smaller rather than larger batches.

    In the randomized, controlled trial K-CAI supported, researchers found that this cut carbon and pollution emissions significantly, and now the government has invited the team to train 1,000 brick manufacturers in Dhaka in these techniques.

    Q: These are all fascinating and powerful instances of implementing ideas that address a range of problems in different parts of the world. But can K-CAI go big enough and fast enough to take a real bite out of the twin poverty and climate crises?

    A: We’re not trying to find silver bullets. We are trying to build a large playbook of real solutions that work to solve specific problems in specific contexts. As you build those up in the hundreds, you have a deep bench of effective approaches to solve problems that can add up in a meaningful way. And because J-PAL works with governments and NGOs that have the capacity to take the research into action, since 2003, over 600 million people around the world have been reached by policies and programs that are informed by evidence that J-PAL-affiliated researchers produced. While global challenges seem daunting, J-PAL has shown that in 20 years we can achieve a great deal, and there is huge potential for future impact.

    But unfortunately, globally, there is an underinvestment in the kind of policy innovation to combat climate change that could generate quicker, lower-cost returns at a large scale — especially in policies that determine which technologies get adopted or commercialized. For example, much of the huge fall in the price of renewable energy was enabled by early European government investments in solar and wind, and then by continuing support for innovation in renewable energy.

    That’s why I think social sciences have so much to offer in the fight against climate change and poverty; we are working where technology meets policy and where technology meets real people, which often determines whether those technologies succeed or fail. The world should be investing in policy, economic, and social innovation just as much as it is investing in technological innovation.

    Q: Do you need to be an optimist in your job?

    A: I am half-optimist, half-pragmatist. I have no control over the climate change outcome for the world. And regardless of whether we can successfully avoid most of the potential damages of climate change, when I look back, I’m going to ask myself, “Did I fight or not?” The only choice I have is whether or not I fought, and I want to be a fighter.

  • Extracting hydrogen from rocks

    It’s commonly thought that the most abundant element in the universe, hydrogen, exists mainly alongside other elements — with oxygen in water, for example, and with carbon in methane. But naturally occurring underground pockets of pure hydrogen are punching holes in that notion — and generating attention as a potentially unlimited source of carbon-free power. One interested party is the U.S. Department of Energy, which last month awarded $20 million in research grants to 18 teams from laboratories, universities, and private companies to develop technologies that can lead to cheap, clean fuel from the subsurface.

    Geologic hydrogen, as it’s known, is produced when water reacts with iron-rich rocks, causing the iron to oxidize. One of the grant recipients, MIT Assistant Professor Iwnetim Abate’s research group, will use its $1.3 million grant to determine the ideal conditions for producing hydrogen underground — considering factors such as catalysts to initiate the chemical reaction, temperature, pressure, and pH levels. The goal is to improve efficiency for large-scale production, meeting global energy needs at a competitive cost.

    The U.S. Geological Survey estimates there are potentially billions of tons of geologic hydrogen buried in the Earth’s crust. Accumulations have been discovered worldwide, and a slew of startups are searching for extractable deposits. Abate is looking to jump-start the natural hydrogen production process, implementing “proactive” approaches that involve stimulating production and harvesting the gas.

    “We aim to optimize the reaction parameters to make the reaction faster and produce hydrogen in an economically feasible manner,” says Abate, the Chipman Development Professor in the Department of Materials Science and Engineering (DMSE). Abate’s research centers on designing materials and technologies for the renewable energy transition, including next-generation batteries and novel chemical methods for energy storage.
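
    The rock-water chemistry behind this is serpentinization-type iron oxidation. As a representative example (the idealized reaction for fayalite, the iron end-member of olivine, rather than the specific chemistry Abate’s group is optimizing):

    ```latex
    % Idealized serpentinization of fayalite: Fe(II) in the rock is oxidized
    % to magnetite while water is reduced to hydrogen gas.
    3\,\mathrm{Fe_2SiO_4} + 2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{Fe_3O_4} + 3\,\mathrm{SiO_2} + 2\,\mathrm{H_2}
    ```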

    Sparking innovation

    Interest in geologic hydrogen is growing at a time when governments worldwide are seeking carbon-free energy alternatives to oil and gas. In December, French President Emmanuel Macron said his government would provide funding to explore natural hydrogen. And in February, government and private sector witnesses briefed U.S. lawmakers on opportunities to extract hydrogen from the ground.

    Today commercial hydrogen is manufactured at $2 a kilogram, mostly for fertilizer and chemical and steel production, but most methods involve burning fossil fuels, which release Earth-heating carbon. “Green hydrogen,” produced with renewable energy, is promising, but at $7 per kilogram, it’s expensive. “If you get hydrogen at a dollar a kilo, it’s competitive with natural gas on an energy-price basis,” says Douglas Wicks, a program director at Advanced Research Projects Agency – Energy (ARPA-E), the Department of Energy organization leading the geologic hydrogen grant program.

    Recipients of the ARPA-E grants include Colorado School of Mines, Texas Tech University, and Los Alamos National Laboratory, plus private companies including Koloma, a hydrogen production startup that has received funding from Amazon and Bill Gates. The projects themselves are diverse, ranging from applying industrial oil and gas methods for hydrogen production and extraction to developing models to understand hydrogen formation in rocks. The purpose: to address questions in what Wicks calls a “total white space.”

    “In geologic hydrogen, we don’t know how we can accelerate the production of it, because it’s a chemical reaction, nor do we really understand how to engineer the subsurface so that we can safely extract it,” Wicks says. “We’re trying to bring in the best skills of each of the different groups to work on this under the idea that the ensemble should be able to give us good answers in a fairly rapid timeframe.”

    Geochemist Viacheslav Zgonnik, one of the foremost experts in the natural hydrogen field, agrees that the list of unknowns is long, as is the road to the first commercial projects. But he says efforts to stimulate hydrogen production — to harness the natural reaction between water and rock — present “tremendous potential.”

    “The idea is to find ways we can accelerate that reaction and control it so we can produce hydrogen on demand in specific places,” says Zgonnik, CEO and founder of Natural Hydrogen Energy, a Denver-based startup that has mineral leases for exploratory drilling in the United States. “If we can achieve that goal, it means that we can potentially replace fossil fuels with stimulated hydrogen.”

    “A full-circle moment”

    For Abate, the connection to the project is personal. When he was a child in his hometown in Ethiopia, power outages were a usual occurrence — the lights would be out three, maybe four days a week. Flickering candles or pollutant-emitting kerosene lamps were often the only source of light for doing homework at night. “And for the household, we had to use wood and charcoal for chores such as cooking,” says Abate. “That was my story all the way until the end of high school and before I came to the U.S. for college.”

    In 1987, well-diggers drilling for water in Mali in West Africa uncovered a natural hydrogen deposit, causing an explosion. Decades later, Malian entrepreneur Aliou Diallo and his Canadian oil and gas company tapped the well and used an engine to burn hydrogen and power electricity in the nearby village. Ditching oil and gas, Diallo launched Hydroma, the world’s first hydrogen exploration enterprise. The company is drilling wells near the original site that have yielded high concentrations of the gas.

    “So, what used to be known as an energy-poor continent now is generating hope for the future of the world,” Abate says. “Learning about that was a full-circle moment for me. Of course, the problem is global; the solution is global. But then the connection with my personal journey, plus the solution coming from my home continent, makes me personally connected to the problem and to the solution.”

    Experiments that scale

    Abate and researchers in his lab are formulating a recipe for a fluid that will induce the chemical reaction that triggers hydrogen production in rocks. The main ingredient is water, and the team is testing “simple” materials for catalysts that will speed up the reaction and in turn increase the amount of hydrogen produced, says postdoc Yifan Gao.

    “Some catalysts are very costly and hard to produce, requiring complex production or preparation,” Gao says. “A catalyst that’s inexpensive and abundant will allow us to enhance the production rate — that way, we produce it at an economically feasible rate, but also with an economically feasible yield.”

    The iron-rich rocks in which the chemical reaction happens can be found across the United States and the world. To optimize the reaction across a diversity of geological compositions and environments, Abate and Gao are developing what they call a high-throughput system, consisting of artificial intelligence software and robotics, to test different catalyst mixtures and simulate what would happen when applied to rocks from various regions, with different external conditions like temperature and pressure.

    “And from that we measure how much hydrogen we are producing for each possible combination,” Abate says. “Then the AI will learn from the experiments and suggest to us, ‘Based on what I’ve learned and based on the literature, I suggest you test this composition of catalyst material for this rock.’”

    The team is writing a paper on its project and aims to publish its findings in the coming months.

    The next milestone for the project, after developing the catalyst recipe, is designing a reactor that will serve two purposes. First, fitted with technologies such as Raman spectroscopy, it will allow researchers to identify and optimize the chemical conditions that lead to improved rates and yield of hydrogen production. The lab-scale device will also inform the design of a real-world reactor that can accelerate hydrogen production in the field. “That would be a plant-scale reactor that would be implanted into the subsurface,” Abate says.

    The cross-disciplinary project is also tapping the expertise of Yang Shao-Horn, of MIT’s Department of Mechanical Engineering and DMSE, for computational analysis of the catalyst, and Esteban Gazel, a Cornell University scientist who will lend his expertise in geology and geochemistry. He’ll focus on understanding the iron-rich ultramafic rock formations across the United States and the globe and how they react with water.

    For Wicks at ARPA-E, the questions Abate and the other grant recipients are asking are just the first, critical steps in uncharted energy territory. “If we can understand how to stimulate these rocks into generating hydrogen, safely getting it up, it really unleashes the potential energy source,” he says. Then the emerging industry will look to oil and gas for the drilling, piping, and gas extraction know-how. “As I like to say, this is enabling technology that we hope will, in a very short term, enable us to say, ‘Is there really something there?’”
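
    Returning to the high-throughput system Abate and Gao describe above, the core idea is an active-learning loop: run a batch of catalyst experiments, fit a model to the measured hydrogen yields, and let the model nominate the next composition to test. Here is a minimal sketch of such a loop; the candidate compositions, the stand-in yield function, and the choice of a random-forest surrogate are all illustrative assumptions, not the team’s actual software.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # Candidate catalyst mixtures: fractions of three hypothetical components.
    candidates = rng.dirichlet(alpha=[1, 1, 1], size=200)

    def run_experiment(x):
        """Stand-in for a robotic measurement of hydrogen yield (arbitrary units)."""
        return 5 * x[0] + 2 * x[1] - 3 * (x[0] - 0.5) ** 2 + rng.normal(scale=0.1)

    # Seed the loop with a few randomly chosen experiments.
    tried_idx = list(rng.choice(len(candidates), size=5, replace=False))
    yields = [run_experiment(candidates[i]) for i in tried_idx]

    model = RandomForestRegressor(n_estimators=200, random_state=0)
    for _ in range(10):
        model.fit(candidates[tried_idx], yields)
        remaining = [i for i in range(len(candidates)) if i not in tried_idx]
        preds = model.predict(candidates[remaining])
        best = remaining[int(np.argmax(preds))]  # greedy pick: highest predicted yield
        tried_idx.append(best)
        yields.append(run_experiment(candidates[best]))

    print("best composition found:", candidates[tried_idx[int(np.argmax(yields))]])
    ```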

  • Propelling atomically layered magnets toward green computers

    Globally, computation is booming at an unprecedented rate, fueled by the rise of artificial intelligence. With this, the staggering energy demand of the world’s computing infrastructure has become a major concern, and the development of computing devices that are far more energy-efficient is a leading challenge for the scientific community.

    Use of magnetic materials to build computing devices like memories and processors has emerged as a promising avenue for creating “beyond-CMOS” computers, which would use far less energy compared to traditional computers. Magnetization switching in magnets can be used in computation the same way that a transistor switches between open and closed to represent the 0s and 1s of binary code.

    While much of the research along this direction has focused on using bulk magnetic materials, a new class of magnetic materials — called two-dimensional van der Waals magnets — provides superior properties that can improve the scalability and energy efficiency of magnetic devices to make them commercially viable. 

    Although the benefits of shifting to 2D magnetic materials are evident, their practical adoption in computers has been hindered by some fundamental challenges. Until recently, 2D magnetic materials could operate only at very low temperatures, much like superconductors. So bringing their operating temperatures above room temperature has remained a primary goal. Additionally, for use in computers, it is important that they can be controlled electrically, without the need for magnetic fields. Bridging this fundamental gap, where 2D magnetic materials can be electrically switched above room temperature without any magnetic fields, could potentially catapult the translation of 2D magnets into the next generation of “green” computers.

    A team of MIT researchers has now achieved this critical milestone by designing a “van der Waals atomically layered heterostructure” device where a 2D van der Waals magnet, iron gallium telluride, is interfaced with another 2D material, tungsten ditelluride. In an open-access paper published March 15 in Science Advances, the team shows that the magnet can be toggled between the 0 and 1 states simply by applying pulses of electrical current across their two-layer device. 

    Video: “The Future of Spintronics: Manipulating Spins in Atomic Layers without External Magnetic Fields” (Deblina Sarkar)

    “Our device enables robust magnetization switching without the need for an external magnetic field, opening up unprecedented opportunities for ultra-low power and environmentally sustainable computing technology for big data and AI,” says lead author Deblina Sarkar, the AT&T Career Development Assistant Professor at the MIT Media Lab and Center for Neurobiological Engineering, and head of the Nano-Cybernetic Biotrek research group. “Moreover, the atomically layered structure of our device provides unique capabilities including improved interface and possibilities of gate voltage tunability, as well as flexible and transparent spintronic technologies.”

    Sarkar is joined on the paper by first author Shivam Kajale, a graduate student in Sarkar’s research group at the Media Lab; Thanh Nguyen, a graduate student in the Department of Nuclear Science and Engineering (NSE); Nguyen Tuan Hung, an MIT visiting scholar in NSE and an assistant professor at Tohoku University in Japan; and Mingda Li, associate professor of NSE.

    Breaking the mirror symmetries 

    When electric current flows through heavy metals like platinum or tantalum, the electrons get segregated in the materials based on their spin component, a phenomenon called the spin Hall effect, says Kajale. The way this segregation happens depends on the material, and particularly its symmetries.

    “The conversion of electric current to spin currents in heavy metals lies at the heart of controlling magnets electrically,” Kajale notes. “The microscopic structure of conventionally used materials, like platinum, has a kind of mirror symmetry, which restricts the spin currents only to in-plane spin polarization.”

    Kajale explains that two mirror symmetries must be broken to produce an “out-of-plane” spin component that can be transferred to a magnetic layer to induce field-free switching. “Electrical current can ‘break’ the mirror symmetry along one plane in platinum, but its crystal structure prevents the mirror symmetry from being broken in a second plane.”

    In their earlier experiments, the researchers used a small magnetic field to break the second mirror plane. To get rid of the need for a magnetic nudge, Kajale, Sarkar, and colleagues looked instead for a material with a structure that could break the second mirror plane without outside help. This led them to another 2D material, tungsten ditelluride. The tungsten ditelluride that the researchers used has an orthorhombic crystal structure. The material itself has one broken mirror plane. Thus, by applying current along its low-symmetry axis (parallel to the broken mirror plane), the resulting spin current has an out-of-plane spin component that can directly induce switching in the ultra-thin magnet interfaced with the tungsten ditelluride.
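
    For readers who want the geometry spelled out: in a conventional heavy metal, the spin Hall effect is often summarized by the relation below (a standard textbook expression, included for context rather than taken from the team’s paper), in which the spin polarization is locked in the film plane by the mirror symmetries.

    ```latex
    % Conventional spin Hall effect: a charge current J_c generates a spin
    % current J_s whose polarization sigma is fixed in-plane by symmetry.
    \mathbf{J}_s \;\propto\; \theta_{\mathrm{SH}}\,\hat{\boldsymbol{\sigma}} \times \mathbf{J}_c,
    \qquad \hat{\boldsymbol{\sigma}} \parallel \text{film plane}
    ```

    With one mirror plane already broken by the orthorhombic crystal and current applied along the low-symmetry axis, that lock is lifted and a nonzero out-of-plane polarization component becomes allowed, which is what switches the magnet without an external field.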

    “Because it’s also a 2D van der Waals material, it can ensure that when we stack the two materials together, we get pristine interfaces and a good flow of electron spins between the materials,” says Kajale.

    Becoming more energy-efficient 

    Computer memory and processors built from magnetic materials use less energy than traditional silicon-based devices. And the van der Waals magnets can offer higher energy efficiency and better scalability compared to bulk magnetic material, the researchers note. 

    The electrical current density used for switching the magnet translates to how much energy is dissipated during switching. A lower density means a much more energy-efficient material. “The new design has one of the lowest current densities in van der Waals magnetic materials,” Kajale says. “The switching current is an order of magnitude lower than what bulk materials require. This translates to something like two orders of magnitude improvement in energy efficiency.”
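
    The link between Kajale’s two numbers is a back-of-the-envelope scaling argument, not a calculation from the paper: for a fixed device geometry, resistivity, and pulse length, Joule heating scales as the square of the current density, so one order of magnitude in current density is roughly two orders of magnitude in energy.

    ```latex
    % Dissipated power in a volume V of resistivity rho at current density J:
    P = \rho\, J^{2} V
    \quad\Longrightarrow\quad
    \frac{E_{\text{2D}}}{E_{\text{bulk}}} \approx \left(\frac{J_{\text{2D}}}{J_{\text{bulk}}}\right)^{2} \approx \left(10^{-1}\right)^{2} = 10^{-2}
    ```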

    The research team is now looking at similar low-symmetry van der Waals materials to see if they can reduce current density even further. They are also hoping to collaborate with other researchers to find ways to manufacture the 2D magnetic switch devices at commercial scale. 

    This work was carried out, in part, using the facilities at MIT.nano. It was funded by the Media Lab, the U.S. National Science Foundation, and the U.S. Department of Energy.

  • Shining a light on oil fields to make them more sustainable

    Operating an oil field is complex, and there is a staggeringly long list of things that can go wrong.

    One of the most common problems is spills of the salty brine that’s a toxic byproduct of pumping oil. Another is over- or under-pumping that can lead to machine failure and methane leaks. (The oil and gas industry is the largest industrial emitter of methane in the U.S.) Then there are extreme weather events, which range from winter frosts to blazing heat, that can put equipment out of commission for months. One of the wildest problems Sebastien Mannai SM ’14, PhD ’18 has encountered is hogs that pop open oil tanks with their snouts to enjoy on-demand oil baths.

    Mannai helps oil field owners detect and respond to these problems while optimizing the operation of their machinery to prevent the issues from occurring in the first place. He is the founder and CEO of Amplified Industries, a company selling oil field monitoring and control tools that help make the industry more efficient and sustainable.

    Amplified Industries’ sensors and analytics give oil well operators real-time alerts when things go wrong, allowing them to respond to issues before they become disasters.

    “We’re able to find 99 percent of the issues affecting these machines, from mechanical failures to human errors, including issues happening thousands of feet underground,” Mannai explains. “With our AI solution, operators can put the wells on autopilot, and the system automatically adjusts or shuts the well down as soon as there’s an issue.”

    Amplified currently works with private companies in states from Texas to Wyoming that own and operate as many as 3,000 wells. Such companies make up the majority of oil well operators in the U.S. and operate both new and older, more failure-prone equipment that has been in the field for decades.

    Such operators also have a harder time responding to environmental regulations like the Environmental Protection Agency’s new methane guidelines, which seek to dramatically reduce emissions of the potent greenhouse gas in the industry over the next few years.

    “These operators don’t want to be releasing methane,” Mannai explains. “Additionally, when gas gets into the pumping equipment, it leads to premature failures. We can detect gas and slow the pump down to prevent it. It’s the best of both worlds: The operators benefit because their machines are working better, saving them money while also giving them a smaller environmental footprint with fewer spills and methane leaks.”

    Leveraging “every MIT resource I possibly could”

    Mannai learned about the cutting-edge technology used in the space and aviation industries as he pursued his master’s degree at the Gas Turbine Laboratory in MIT’s Department of Aeronautics and Astronautics. Then, during his PhD at MIT, he worked with an oil services company and discovered the oil and gas industry was still relying on decades-old technologies and equipment.

    “When I first traveled to the field, I could not believe how old-school the actual operations were,” says Mannai, who has previously worked in rocket engine and turbine factories. “A lot of oil wells have to be adjusted by feel and rules of thumb. The operators have been let down by industrial automation and data companies.”

    Monitoring oil wells for problems typically requires someone in a pickup truck to drive hundreds of miles between wells looking for obvious issues, Mannai says. The sensors that are deployed are expensive and difficult to replace. Over time, they’re also often damaged in the field to the point of being unusable, forcing technicians to make educated guesses about the status of each well.

    “We often see that equipment unplugged or programmed incorrectly because it is incredibly over-complicated and ill-designed for the reality of the field,” Mannai says. “Workers on the ground often have to rip it out and bypass the control system to pump by hand. That’s how you end up with so many spills and wells pumping at suboptimal levels.”

    To build a better oil field monitoring system, Mannai received support from the MIT Sandbox Innovation Fund and the Venture Mentoring Service (VMS). He also participated in the delta V summer accelerator at the Martin Trust Center for MIT Entrepreneurship, the fuse program during IAP, and the MIT I-Corps program, and took a number of classes at the MIT Sloan School of Management. In 2019, Amplified Industries — which operated under the name Acoustic Wells until recently — won the MIT $100K Entrepreneurship competition.

    “My approach was to sign up to every possible entrepreneurship related program and to leverage every MIT resource I possibly could,” Mannai says. “MIT was amazing for us.”

    Mannai officially launched the company after his postdoc at MIT, and Amplified raised its first round of funding in early 2020. That year, Amplified’s small team moved into the Greentown Labs startup incubator in Somerville.

    Mannai says building the company’s battery-powered, low-cost sensors was a huge challenge. The sensors run machine-learning inference models and their batteries last for 10 years. They also had to be able to handle extreme conditions, from the scorching hot New Mexico desert to the swamps of Louisiana and the freezing cold winters in North Dakota.

    “We build very rugged, resilient hardware; it’s a must in those environments,” Mannai says. “But it’s also very simple to deploy, so if a device does break, it’s like changing a lightbulb: We ship them a new one and it takes them a couple of minutes to swap it out.”

    Customers equip each well with four or five of Amplified’s sensors, which attach to the well’s cables and pipes to measure variables like tension, pressure, and amps. Vast amounts of data are then sent to Amplified’s cloud and processed by its analytics engine. Signal processing methods and AI models are used to diagnose problems and control the equipment in real time, while generating notifications for the operators when something goes wrong. Operators can then remotely adjust the well or shut it down.
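
    As a rough illustration of the pipeline just described, the sketch below checks a well’s readings against a safe operating envelope and decides whether to keep pumping or shut down. The variable names, thresholds, and the simple range-check policy are hypothetical stand-ins, not Amplified’s actual models or API.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Reading:
        well_id: str
        tension: float   # e.g., polished-rod load, arbitrary units
        pressure: float  # line pressure, psi
        amps: float      # motor current draw

    # Hypothetical safe operating envelope for one well.
    LIMITS = {"tension": (10.0, 40.0), "pressure": (50.0, 300.0), "amps": (5.0, 60.0)}

    def diagnose(r: Reading) -> list[str]:
        """Return a list of out-of-range variables for this reading."""
        faults = []
        for name, (lo, hi) in LIMITS.items():
            value = getattr(r, name)
            if not lo <= value <= hi:
                faults.append(f"{name}={value:.1f} outside [{lo}, {hi}]")
        return faults

    def control(r: Reading) -> str:
        """Shut down on any fault; otherwise keep pumping (placeholder policy)."""
        faults = diagnose(r)
        if faults:
            # A real system would notify the operator and command the drive here.
            return f"SHUTDOWN {r.well_id}: " + "; ".join(faults)
        return f"OK {r.well_id}"

    print(control(Reading("TX-0042", tension=22.0, pressure=120.0, amps=75.0)))
    # -> SHUTDOWN TX-0042: amps=75.0 outside [5.0, 60.0]
    ```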

    “That’s where AI is important, because if you just record everything and put it in a giant dashboard, you create way more work for people,” Mannai says. “The critical part is the ability to process and understand this newly recorded data and make it readily usable in the real world.”

    Amplified’s dashboard is customized for different people in the company, so field technicians can quickly respond to problems and managers or owners can get a high-level view of how everything is running.

    Mannai says often when Amplified’s sensors are installed, they’ll immediately start detecting problems that were unknown to engineers and technicians in the field. To date, Amplified has prevented hundreds of thousands of gallons of brine spills, which are particularly damaging to surrounding vegetation because of their high salt and sulfur content.

    Preventing those spills is only part of Amplified’s positive environmental impact; the company is now turning its attention toward the detection of methane leaks.

    Helping a changing industry

    The EPA’s proposed new Waste Emissions Charge for oil and gas companies would start at $900 per metric ton of reported methane emissions in 2024 and increase to $1,500 per metric ton in 2026 and beyond.
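
    The proposed fee schedule is straightforward to turn into numbers. The helper below applies the rates named above, with one assumption flagged in the comments: a $1,200-per-ton rate for 2025, the year the article’s figures skip over, taken from the statute’s published schedule.

    ```python
    def waste_emissions_charge(methane_tons: float, year: int) -> float:
        """Charge in dollars for reported methane emissions in a given year.

        Rates: $900/metric ton (2024), $1,200 (2025, assumed from the
        statute's schedule), $1,500 (2026 and beyond).
        """
        if year < 2024:
            return 0.0
        rate = {2024: 900.0, 2025: 1200.0}.get(year, 1500.0)
        return methane_tons * rate

    # Example: 100 metric tons of reported methane.
    for y in (2024, 2025, 2026):
        print(y, waste_emissions_charge(100, y))  # 90,000 / 120,000 / 150,000
    ```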

    Mannai says Amplified is well-positioned to help companies comply with the new rules. Its equipment has already shown it can detect various kinds of leaks across the field, purely based on analytics of existing data.

    “Detecting methane leaks typically requires someone to walk around every valve and piece of piping with a thermal camera or sniffer, but these operators often have thousands of valves and hundreds of miles of pipes,” Mannai says. “What we see in the field is that a lot of times people don’t know where the pipes are because oil wells change owners so frequently, or they will miss an intermittent leak.”

    Ultimately, Mannai believes a strong data backend and modernized sensing equipment will become the backbone of the industry, a necessary prerequisite to both improving efficiency and cleaning up the industry.

    “We’re selling a service that ensures your equipment is working optimally all the time,” Mannai says. “That means a lot fewer fines from the EPA, but it also means better-performing equipment. There’s a mindset change happening across the industry, and we’re helping make that transition as easy and affordable as possible.”

  • A delicate dance

    In early 2022, economist Catherine Wolfram was at her desk in the U.S. Treasury building. She could see the east wing of the White House, just steps away.

    Russia had just invaded Ukraine, and Wolfram was thinking about Russia, oil, and sanctions. She and her colleagues had been tasked with figuring out how to restrict the revenues that Russia was using to fuel its brutal war while keeping Russian oil available and affordable to the countries that depended on it.

    Now the William F. Pounds Professor of Energy Economics at MIT, Wolfram was on leave from academia to serve as deputy assistant secretary for climate and energy economics.

    Working for Treasury Secretary Janet L. Yellen, Wolfram and her colleagues developed dozens of models, forecasts, and projections. It struck her, she said later, that “huge decisions [affecting the global economy] would be made on the basis of spreadsheets that I was helping create.” Wolfram composed a memo to the Biden administration and hoped her projections would pan out the way she believed they would.

    Tackling conundrums that weigh competing, sometimes contradictory, interests has defined much of Wolfram’s career.

    Wolfram specializes in the economics of energy markets. She looks at ways to decarbonize global energy systems while recognizing that energy drives economic development, especially in the developing world.

    “The way we’re currently making energy is contributing to climate change. There’s a delicate dance we have to do to make sure that we treat this important industry carefully, but also transform it rapidly to a cleaner, decarbonized system,” she says.

    Economists as influencers

    While Wolfram was growing up in a suburb of St. Paul, Minnesota, her father was a law professor and her mother taught English as a second language. Her mother helped spawn Wolfram’s interest in other cultures and her love of travel, but it was an experience closer to home that sparked her awareness of the effect of human activities on the state of the planet.

    Minnesota’s nickname is “Land of 10,000 Lakes.” Wolfram remembers swimming in a nearby lake sometimes covered by a thick sludge of algae. “Thinking back on it, it must’ve had to do with fertilizer runoff,” she says. “That was probably the first thing that made me think about the environment and policy.”

    In high school, Wolfram liked “the fact that you could use math to understand the world. I also was interested in the types of questions about human behavior that economists were thinking about.

    “I definitely think economics is good at sussing out how different actors are likely to react to a particular policy and then designing policies with that in mind.”

    After receiving a bachelor’s degree in economics from Harvard University in 1989, Wolfram worked with a Massachusetts agency that governed rate hikes for utilities. Seeing its reliance on research, she says, illuminated the role academics could play in policy setting. It made her think she could make a difference from within academia.

    While pursuing a PhD in economics from MIT, Wolfram counted Paul L. Joskow, the Elizabeth and James Killian Professor of Economics and former director of the MIT Center for Energy and Environmental Policy Research, and Nancy L. Rose, the Charles P. Kindleberger Professor of Applied Economics, among her mentors and influencers.

    After spending 1996 to 2000 as an assistant professor of economics at Harvard, she joined the faculty at the Haas School of Business at the University of California at Berkeley.

    At Berkeley, it struck Wolfram that while she labored over ways to marginally boost the energy efficiency of U.S. power plants, the economies of China and India were growing rapidly, with a corresponding growth in energy use and carbon dioxide emissions. “It hit home that to understand the climate issue, I needed to understand energy demand in the developing world,” she says.

    The problem was that the developing world didn’t always offer up the kind of neatly packaged, comprehensive data economists relied on. She wondered if, by relying on readily accessible data, the field was looking under the lamppost — while losing sight of what the rest of the street looked like.

    To make up for a lack of available data on the state of electrification in sub-Saharan Africa, for instance, Wolfram developed and administered surveys to individual, remote rural households using on-the-ground field teams.

    Her results suggested that in the world’s poorest countries, the challenges involved in expanding the grid in rural areas should be weighed against potentially greater economic and social returns on investments in the transportation, education, or health sectors.

    Taking the lead

    Within months of Wolfram’s memo to the Biden administration, leaders of the intergovernmental political forum Group of Seven (G7) agreed to the price cap. Tankers from coalition countries would only transport Russian crude sold at or below the price cap level, initially set at $60 per barrel.

    “A price cap was not something that had ever been done before,” Wolfram says. “In some ways, we were making it up out of whole cloth. It was exciting to see that I wrote one of the original memos about it, and then literally three-and-a-half months later, the G7 was making an announcement.

    “As economists and as policymakers, we must set the parameters and get the incentives right. The price cap was basically asking developing countries to buy cheap oil, which was consistent with their incentives.”

    In May 2023, the U.S. Department of the Treasury reported that despite widespread initial skepticism about the price cap, market participants and geopolitical analysts believe it is accomplishing its goals of restricting Russia’s oil revenues while maintaining the supply of Russian oil and keeping energy costs in check for consumers and businesses around the world.

    Wolfram held the U.S. Treasury post from March 2021 to October 2022 while on leave from UC Berkeley. In July 2023, she joined MIT Sloan School of Management partly to be geographically closer to the policymakers of the nation’s capital. She’s also excited about the work taking place elsewhere at the Institute to stay ahead of climate change.

    Her time in D.C. was eye-opening, particularly in terms of the leadership power of the United States. She worries that the United States is falling prey to “lost opportunities” in terms of addressing climate change. “We were showing real leadership on the price cap, and if we could only do that on climate, I think we could make faster inroads on a global agreement,” she says.

    Now focused on structuring global agreements in energy policy among developed and developing countries, she’s considering how the United States can take advantage of its position as a world leader. “We need to be thinking about how what we do in the U.S. affects the rest of the world from a climate perspective. We can’t go it alone.

    “The U.S. needs to be more aligned with the European Union, Canada, and Japan to try to find areas where we’re taking a common approach to addressing climate change,” she says. She will touch on some of those areas in the class she will teach in spring 2024 titled “Climate and Energy in the Global Economy,” offered through MIT Sloan.

    Looking ahead, she says, “I’m a techno optimist. I believe in human innovation. I’m optimistic that we’ll find ways to live with climate change and, hopefully, ways to minimize it.”

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Engineers find a new way to convert carbon dioxide into useful products

    MIT chemical engineers have devised an efficient way to convert carbon dioxide to carbon monoxide, a chemical precursor that can be used to generate useful compounds such as ethanol and other fuels.

    If scaled up for industrial use, this process could help to remove carbon dioxide from power plants and other sources, reducing the amount of greenhouse gases that are released into the atmosphere.

    “This would allow you to take carbon dioxide from emissions or dissolved in the ocean, and convert it into profitable chemicals. It’s really a path forward for decarbonization because we can take CO2, which is a greenhouse gas, and turn it into things that are useful for chemical manufacture,” says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering and the senior author of the study.

    The new approach uses electricity to perform the chemical conversion, with help from a catalyst that is tethered to the electrode surface by strands of DNA. This DNA acts like Velcro to keep all the reaction components in close proximity, making the reaction much more efficient than if all the components were floating in solution.

    Furst has started a company called Helix Carbon to further develop the technology. Former MIT postdoc Gang Fan is the lead author of the paper, which appears in the Journal of the American Chemical Society Au. Other authors include Nathan Corbin PhD ’21, Minju Chung PhD ’23, former MIT postdocs Thomas Gill and Amruta Karbelkar, and Evan Moore ’23.

    Breaking down CO2

    Converting carbon dioxide into useful products requires first turning it into carbon monoxide. One way to do this is with electricity, but the amount of energy required makes that type of electrochemical conversion prohibitively expensive.

    To try to bring down those costs, researchers have tried using electrocatalysts, which can speed up the reaction and reduce the amount of energy that needs to be added to the system. One type of catalyst used for this reaction is a class of molecules known as porphyrins, which contain metals such as iron or cobalt and are similar in structure to the heme molecules that carry oxygen in blood. 

    During this type of electrochemical reaction, carbon dioxide is dissolved in water within an electrochemical device, which contains an electrode that drives the reaction. The catalysts are also suspended in the solution. However, this setup isn’t very efficient because the carbon dioxide and the catalysts need to encounter each other at the electrode surface, which doesn’t happen very often.

    To make the reaction occur more frequently, which would boost the efficiency of the electrochemical conversion, Furst began working on ways to attach the catalysts to the surface of the electrode. DNA seemed to be the ideal choice for this application.

    “DNA is relatively inexpensive, you can modify it chemically, and you can control the interaction between two strands by changing the sequences,” she says. “It’s like a sequence-specific Velcro that has very strong but reversible interactions that you can control.”

    To attach single strands of DNA to a carbon electrode, the researchers used two “chemical handles,” one on the DNA and one on the electrode. These handles can be snapped together, forming a permanent bond. A complementary DNA sequence is then attached to the porphyrin catalyst, so that when the catalyst is added to the solution, it will bind reversibly to the DNA that’s already attached to the electrode — just like Velcro.

    Once this system is set up, the researchers apply a potential (or bias) to the electrode, and the catalyst uses this energy to convert carbon dioxide in the solution into carbon monoxide. The reaction also generates a small amount of hydrogen gas, from the water. After the catalysts wear out, they can be released from the surface by heating the system to break the reversible bonds between the two DNA strands, and replaced with new ones.

    An efficient reaction

    Using this approach, the researchers were able to boost the Faradaic efficiency of the reaction to 100 percent, meaning that essentially all of the electrons passed through the system go into the desired chemical reaction, with none wasted on side reactions. When the catalysts are not tethered by DNA, the Faradaic efficiency is only about 40 percent.
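
    For reference, Faradaic efficiency has a standard definition. The two-electron count below comes from the CO2-to-CO half-reaction (CO2 + 2H+ + 2e− → CO + H2O); this is general electrochemistry, not a detail specific to the paper.

    ```latex
    % Faradaic efficiency: the share of all charge passed that formed the product.
    \mathrm{FE}_{\mathrm{CO}} \;=\; \frac{z\,F\,n_{\mathrm{CO}}}{Q_{\mathrm{total}}},
    \qquad z = 2,\quad F = 96485\ \mathrm{C\,mol^{-1}}
    ```

    Here n_CO is the moles of CO produced and Q_total is the total charge passed through the cell.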

    This technology could be scaled up for industrial use fairly easily, Furst says, because the carbon electrodes the researchers used are much less expensive than conventional metal electrodes. The catalysts are also inexpensive, as they don’t contain any precious metals, and only a small concentration of the catalyst is needed on the electrode surface.

    By swapping in different catalysts, the researchers plan to try making other products such as methanol and ethanol using this approach. Helix Carbon, the company started by Furst, is also working on further developing the technology for potential commercial use.

    The research was funded by the U.S. Army Research Office, the CIFAR Azrieli Global Scholars Program, the MIT Energy Initiative, and the MIT Deshpande Center.

  • Lessons from Fukushima: Prepare for the unlikely

    When a devastating earthquake and tsunami overwhelmed the protective systems at the Fukushima Dai’ichi nuclear power plant complex in Japan in March 2011, it triggered a sequence of events leading to one of the worst releases of radioactive materials in the world to date. Although nuclear energy is having a revival as a low-emissions energy source to mitigate climate change, the Fukushima accident is still cited as a reason for hesitancy in adopting it.

    A new study synthesizes information from multidisciplinary sources to understand how the Fukushima Dai’ichi disaster unfolded, and points to the importance of mitigation measures and last lines of defense — even against accidents considered highly unlikely. These procedures have received relatively little attention, but they are critical in determining how severe the consequences of a reactor failure will be, the researchers say.

    The researchers note that their synthesis is one of the few attempts to look at data across disciplinary boundaries, including: the physics and engineering of what took place within the plant’s systems, the plant operators’ actions throughout the emergency, actions by emergency responders, the meteorology of radionuclide releases and transport, and the environmental and health consequences documented since the event.

    The study appears in the journal iScience, in an open-access paper by postdoc Ali Ayoub and Professor Haruko Wainwright at MIT, along with others in Switzerland, Japan, and New Mexico.

    Since 2013, Wainwright has been leading the research to integrate all the radiation monitoring data in the Fukushima region into integrated maps. “I was staring at the contamination map for nearly 10 years, wondering what created the main plume extending in the northwest direction, but I could not find exact information,” Wainwright says. “Our study is unique because we started from the consequence, the contamination map, and tried to identify the key factors for the consequence. Other people study the Fukushima accident from the root cause, the tsunami.”

    One thing they found was that while all the operating reactors, units 1, 2, and 3, suffered core meltdowns as a result of the failure of emergency cooling systems, units 1 and 3 — although they did experience hydrogen explosions — did not release as much radiation to the environment because their venting systems essentially worked to relieve pressure inside the containment vessels as intended. But the same system in unit 2 failed badly.

    “People think that the hydrogen explosion or the core meltdown were the worst things, or the major driver of the radiological consequences of the accident,” Wainwright says, “but our analysis found that’s not the case.” Much more significant in terms of the radiological release was the failure of the one venting mechanism.

    “There is a pressure-release mechanism that goes through water where a lot of the radionuclides get filtered out,” she explains. That system was effective in units 1 and 3, filtering out more than 90 percent of the radioactive elements before the gas was vented. However, “in unit 2, that pressure release mechanism got stuck, and the operators could not manually open it.” A hydrogen explosion in unit 1 had damaged the pressure relief mechanism of unit 2. This led to a breach of the containment structure and direct, unfiltered venting to the atmosphere, which, according to the new study, was what produced the greatest amount of contamination from the whole weeks-long event.
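
    The quantitative weight of that filtering failure follows from simple arithmetic (a back-of-the-envelope reading of the more-than-90-percent figure above, not a result computed in the study): if the wet-well scrubbing retains a fraction f of the radionuclides, the vented release scales as 1 − f.

    ```latex
    % Filtered vent versus unfiltered breach, per unit of activity vented:
    \frac{\text{release}_{\text{filtered}}}{\text{release}_{\text{unfiltered}}} \;=\; 1 - f \;\le\; 1 - 0.9 \;=\; 0.1
    ```

    So, per unit of activity vented, units 1 and 3 released at most roughly a tenth of what unit 2’s unfiltered breach did.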

    Another factor was the timing of the attempt to vent the pressure buildup in the reactor. Guidelines at the time, and to this day in many reactors, specified that no venting should take place until the pressure inside the reactor containment vessel reached a specified threshold, with no regard to the wind directions at the time. In the case of Fukushima, an earlier venting could have dramatically reduced the impact: Much of the release happened when winds were blowing directly inland, but earlier the wind had been blowing offshore.

    “That pressure-release mechanism has not been a major focus of the engineering community,” she says. While there is appropriate attention to measures that prevent a core meltdown in the first place, “this sort of last line of defense has not been the main focus and should get more attention.”

    Wainwright says the study also underlines several successes in the management of the Fukushima accident. Many of the safety systems did work as they were designed. For example, even though the oldest reactor, unit 1, suffered the greatest internal damage, it released little radioactive material. Most people were able to evacuate from the 20-kilometer (12-mile) zone before the largest release happened. The mitigation measures were “somewhat successful,” Wainwright says. But there was tremendous confusion and anger during and after the accident because there were no preparations in place for such an event.

    Much work has focused on ways to prevent the kind of accidents that happened at Fukushima — for example, in the U.S. reactor operators can deploy portable backup power supplies to maintain proper reactor cooling at any reactor site. But the ongoing situation at the Zaporizhzhia nuclear complex in Ukraine, where nuclear safety is challenged by acts of war, demonstrates that despite engineers’ and operators’ best efforts to prevent it, “the totally unexpected could still happen,” Wainwright says.

    “The big-picture message is that we should have equal attention to both prevention and mitigation of accidents,” she says. “This is the essence of resilience, and it applies beyond nuclear power plants to all essential infrastructure of a functioning society, for example, the electric grid, the food and water supply, the transportation sector, etc.”

    One thing the researchers recommend is that in designing evacuation protocols, planners should make more effort to learn from much more frequent disasters such as wildfires and hurricanes. “We think getting more interdisciplinary, transdisciplinary knowledge from other kinds of disasters would be essential,” she says. Most of the emergency response strategies presently in place, she says, were designed in the 1980s and ’90s, and need to be modernized. “Consequences can be mitigated. A nuclear accident does not have to be a catastrophe, as is often portrayed in popular culture,” Wainwright says.

    The research team included Giovanni Sansavini at ETH Zurich in Switzerland; Randall Gauntt at Sandia National Laboratories in New Mexico; and Kimiaki Saito at the Japan Atomic Energy Agency.

  • Future nuclear power reactors could rely on molten salts — but what about corrosion?

    Most discussions of how to avert climate change focus on solar and wind generation as key to the transition to a future carbon-free power system. But Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering at MIT and associate director of the MIT Plasma Science and Fusion Center (PSFC), is impatient with such talk. “We can say we should have only wind and solar someday. But we don’t have the luxury of ‘someday’ anymore, so we can’t ignore other helpful ways to combat climate change,” he says. “To me, it’s an ‘all-hands-on-deck’ thing. Solar and wind are clearly a big part of the solution. But I think that nuclear power also has a critical role to play.”

    For decades, researchers have been working on designs for both fission and fusion nuclear reactors using molten salts as fuels or coolants. While those designs promise significant safety and performance advantages, there’s a catch: Molten salt and the impurities within it often corrode metals, ultimately causing them to crack, weaken, and fail. Inside a reactor, key metal components will be exposed not only to molten salt but also simultaneously to radiation, which generally has a detrimental effect on materials, making them more brittle and prone to failure. Will irradiation make metal components inside a molten salt-cooled nuclear reactor corrode even more quickly?

    Short and Weiyue Zhou PhD ’21, a postdoc in the PSFC, have been investigating that question for eight years. Their recent experimental findings show that certain alloys will corrode more slowly when they’re irradiated — and identifying them among all the available commercial alloys can be straightforward.

    The first challenge — building a test facility

    When Short and Zhou began investigating the effect of radiation on corrosion, practically no reliable facilities existed to look at the two effects at once. The standard approach was to examine such mechanisms in sequence: first corrode, then irradiate, then examine the impact on the material. That approach greatly simplifies the task for the researchers, but with a major trade-off. “In a reactor, everything is going to be happening at the same time,” says Short. “If you separate the two processes, you’re not simulating a reactor; you’re doing some other experiment that’s not as relevant.”

    So, Short and Zhou took on the challenge of designing and building an experimental setup that could do both at once. Short credits a team at the University of Michigan for paving the way by designing a device that could accomplish that feat in water, rather than molten salts. Even so, Zhou notes, it took them three years to come up with a device that would work with molten salts. Both researchers recall failure after failure, but the persistent Zhou ultimately tried a totally new design, and it worked. Short adds that it also took them three years to precisely replicate the salt mixture used by industry — another factor critical to getting a meaningful result. The hardest part was achieving and verifying the required purity, which meant removing critical impurities such as moisture, oxygen, and certain other metals.

    As they were developing and testing their setup, Short and Zhou obtained initial results showing that proton irradiation did not always accelerate corrosion but sometimes actually decelerated it. They and others had hypothesized that possibility, but even so, they were surprised. “We thought we must be doing something wrong,” recalls Short. “Maybe we mixed up the samples or something.” But they subsequently made similar observations for a variety of conditions, increasing their confidence that their initial observations were not outliers.

    The successful setup

    Central to their approach is the use of accelerated protons to mimic the impact of the neutrons inside a nuclear reactor. Generating neutrons would be both impractical and prohibitively expensive, and the neutrons would make everything highly radioactive, posing health risks and requiring very long times for an irradiated sample to cool down enough to be examined. Using protons would enable Short and Zhou to examine radiation-altered corrosion both rapidly and safely.

    Key to their experimental setup is a test chamber that they attach to a proton accelerator. To prepare the test chamber for an experiment, they place inside it a thin foil disc of the metal alloy being tested on top of a pellet of salt. During the test, the salt melts, exposing the entire foil disc to a bath of molten salt. At the same time, a beam of protons bombards the sample from the side opposite the salt pellet, but the proton beam is restricted to a circle in the middle of the foil sample. “No one can argue with our results then,” says Short. “In a single experiment, the whole sample is subjected to corrosion, and only a circle in the center of the sample is simultaneously irradiated by protons. We can see the curvature of the proton beam outline in our results, so we know which region is which.”

    The results from that arrangement confirmed the researchers’ preliminary findings, supporting their controversial hypothesis that rather than accelerating corrosion, radiation would actually decelerate corrosion in some materials under some conditions. Fortunately, those happen to be the same conditions that metals will experience in molten salt-cooled reactors.

    Why is that outcome controversial? A closeup look at the corrosion process will explain. When salt corrodes metal, the salt finds atomic-level openings in the solid, seeps in, and dissolves salt-soluble atoms, pulling them out and leaving a gap in the material — a spot where the material is now weak. “Radiation adds energy to atoms, causing them to be ballistically knocked out of their positions and move very fast,” explains Short. So, it makes sense that irradiating a material would cause atoms to move into the salt more quickly, increasing the rate of corrosion. Yet in some of their tests, the researchers found the opposite to be true.

    Experiments with “model” alloys

    The researchers’ first experiments in their novel setup involved “model” alloys consisting of nickel and chromium, a simple combination that would give them a first look at the corrosion process in action. In addition, they added europium fluoride to the salt, a compound known to speed up corrosion. In our everyday world, we often think of corrosion as taking years or decades, but in the more extreme conditions of a molten salt reactor it can noticeably occur in just hours. The researchers used the europium fluoride to speed up corrosion even more without changing the corrosion process. This allowed for more rapid determination of which materials, under which conditions, experienced more or less corrosion with simultaneous proton irradiation.

    The use of protons to emulate neutron damage to materials meant that the experimental setup had to be carefully designed and the operating conditions carefully selected and controlled. A proton is simply an ionized hydrogen atom, and under some conditions the hydrogen could chemically react with atoms in the sample foil, altering the corrosion response, or with ions in the salt, making the salt more corrosive. Therefore, the proton beam had to penetrate the foil sample but then stop in the salt as soon as possible. Under these conditions, the researchers found they could deliver a relatively uniform dose of radiation inside the foil layer while also minimizing chemical reactions in both the foil and the salt.

    Tests showed that a proton beam accelerated to 3 million electron-volts combined with a foil sample between 25 and 30 microns thick would work well for their nickel-chromium alloys. The temperature and duration of the exposure could be adjusted based on the corrosion susceptibility of the specific materials being tested.
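    To see why those numbers hang together, a back-of-the-envelope range calculation helps. The sketch below is a minimal illustration that assumes a placeholder stopping range rather than tabulated data; in practice, ranges would come from stopping-power tables such as NIST PSTAR or SRIM.

    ```python
    # Back-of-the-envelope check that a 3 MeV proton beam crosses the foil
    # and stops just beyond it in the salt. The CSDA range below is an
    # assumed, illustrative number, not measured data.

    RANGE_MG_CM2 = 25.0   # assumed range of 3 MeV protons in a Ni-Cr alloy, mg/cm^2
    DENSITY_G_CM3 = 8.5   # approximate density of a nickel-chromium alloy, g/cm^3
    FOIL_UM = 27.5        # mid-range foil thickness from the article, microns

    # Convert mass range to linear depth: (g/cm^2) / (g/cm^3) = cm, then to microns.
    range_um = (RANGE_MG_CM2 * 1e-3 / DENSITY_G_CM3) * 1e4

    print(f"Estimated proton range: {range_um:.1f} um")
    print(f"Residual range past the foil: {range_um - FOIL_UM:.1f} um")
    # A small positive residual means the full foil depth is irradiated while
    # the protons deposit their remaining energy in only a thin layer of salt,
    # limiting hydrogen-driven chemistry on both sides of the interface.
    ```

    With these assumed inputs, the estimated range is roughly 29 microns, a couple of microns past the foil, which is the behavior the researchers were after: full irradiation of the foil with minimal energy dumped into the salt.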

    Optical images of samples examined after tests with the model alloys showed a clear boundary between the area that was exposed only to the molten salt and the area that was also exposed to the proton beam. Electron microscope images focusing on that boundary showed that the area that had been exposed only to the molten salt included dark patches where the molten salt had penetrated all the way through the foil, while the area that had also been exposed to the proton beam showed almost no such dark patches.

    To confirm that the dark patches were due to corrosion, the researchers cut through the foil sample to create cross sections. In them, they could see tunnels that the salt had dug into the sample. “For regions not under radiation, we see that the salt tunnels link the one side of the sample to the other side,” says Zhou. “For regions under radiation, we see that the salt tunnels stop more or less halfway and rarely reach the other side. So we verified that they didn’t penetrate the whole way.”

    The results “exceeded our wildest expectations,” says Short. “In every test we ran, the application of radiation slowed corrosion by a factor of two to three.”

    More experiments, more insights

    In subsequent tests, the researchers more closely replicated commercially available molten salt by omitting the additive (europium fluoride) that they had used to speed up corrosion, and they tweaked the temperature for even more realistic conditions. “In carefully monitored tests, we found that by raising the temperature by 100 degrees Celsius, we could get corrosion to happen about 1,000 times faster than it would in a reactor,” says Short.
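    That factor-of-1,000 speedup is what an Arrhenius temperature dependence would predict for a sufficiently high activation energy. The short sketch below back-solves for the activation energy implied by the quoted numbers; the two absolute temperatures are assumed for illustration, since the article does not state the actual operating points.

    ```python
    import math

    # Corrosion rates are often modeled with an Arrhenius law:
    #   rate ~ exp(-Ea / (R * T))
    # Given the reported ~1,000x speedup from a 100 C temperature increase,
    # back-solve for the implied activation energy Ea. Both temperatures
    # below are assumptions for illustration.

    R = 8.314                   # gas constant, J/(mol*K)
    T_reactor = 650 + 273.15    # assumed reactor-relevant salt temperature, K
    T_test = T_reactor + 100    # test temperature raised by 100 C

    speedup = 1000.0            # acceleration factor quoted in the article

    # rate(T_test) / rate(T_reactor) = exp(Ea/R * (1/T_reactor - 1/T_test))
    Ea = R * math.log(speedup) / (1 / T_reactor - 1 / T_test)
    print(f"Implied activation energy: {Ea / 1000:.0f} kJ/mol")

    # With Ea fixed, the same formula predicts the speedup for any other
    # temperature offset, which is how accelerated tests are scaled back
    # to reactor conditions.
    ```

    Under these assumed temperatures the implied activation energy works out to roughly 540 kJ/mol; the same relation lets the researchers translate hours of accelerated testing into the years a reactor component would actually see.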

    Images from experiments with the nickel-chromium alloy in molten salt without the corrosive additive yielded further insights. Electron microscope images of the side of the foil sample facing the molten salt showed that in sections exposed only to the molten salt, the corrosion is clearly focused on the weakest part of the structure — the boundaries between the grains in the metal. In sections that were exposed to both the molten salt and the proton beam, the corrosion isn’t limited to the grain boundaries but is more spread out over the surface. Experimental results showed that the cracks formed under irradiation are shallower and less likely to cause a key component to break.

    Short explains the observations. Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are areas — called grain boundaries — where the atoms don’t line up as well. In the corrosion-only images, dark lines track the grain boundaries. Molten salt has seeped into the grain boundaries and pulled out salt-soluble atoms. In the corrosion-plus-irradiation images, the damage is more general. It’s not only the grain boundaries that get attacked but also regions within the grains.

    So, when the material is irradiated, the molten salt also removes material from within the grains. Over time, more material comes out of the grains themselves than from the spaces between them. The removal isn’t focused on the grain boundaries; it’s spread out over the whole surface. As a result, any cracks that form are shallower and more spread out, and the material is less likely to fail.

    Testing commercial alloys

    The experiments described thus far involved model alloys — simple combinations of elements that are good for studying science but would never be used in a reactor. In the next series of experiments, the researchers focused on three commercially available alloys that are composed of nickel, chromium, iron, molybdenum, and other elements in various combinations.

    Results from the experiments with the commercial alloys showed a consistent pattern — one that confirmed an idea the researchers had going in: the higher the concentration of salt-soluble elements in the alloy, the worse the radiation-induced corrosion damage. Radiation will increase the rate at which salt-soluble atoms such as chromium leave the grain boundaries, hastening the corrosion process. However, if more insoluble elements such as nickel are present, those atoms will go into the salt more slowly. Over time, they’ll accumulate at the grain boundary and form a protective coating that blocks the grain boundary — a “self-healing mechanism that decelerates the rate of corrosion,” say the researchers.

    Thus, if an alloy consists mostly of atoms that don’t dissolve in molten salt, irradiation will cause them to form a protective coating that slows the corrosion process. But if an alloy consists mostly of atoms that dissolve in molten salt, irradiation will make them dissolve faster, speeding up corrosion. As Short summarizes, “In terms of corrosion, irradiation makes a good alloy better and a bad alloy worse.”
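    That rule of thumb lends itself to a simple screening calculation. The sketch below ranks hypothetical alloy compositions by their salt-soluble content; the compositions and the two-way soluble-versus-resistant classification are illustrative assumptions, and, as Short cautions in the guidelines discussed below, the real thresholds depend on salt chemistry and irradiation conditions.

    ```python
    # A minimal sketch of the screening heuristic described above: rank
    # candidate alloys by their content of salt-soluble elements. The
    # compositions are hypothetical, and treating Cr and Fe as soluble
    # while Ni and Mo resist dissolution is a deliberate simplification.

    SALT_SOLUBLE = {"Cr", "Fe"}   # elements readily dissolved by the molten salt

    candidate_alloys = {          # hypothetical weight-percent compositions
        "Alloy A": {"Ni": 72, "Cr": 16, "Fe": 8, "Mo": 4},
        "Alloy B": {"Ni": 58, "Cr": 22, "Fe": 18, "Mo": 2},
        "Alloy C": {"Ni": 65, "Cr": 7, "Fe": 5, "Mo": 16, "Other": 7},
    }

    def soluble_fraction(composition: dict) -> float:
        """Weight fraction of the alloy made up of salt-soluble elements."""
        total = sum(composition.values())
        return sum(wt for el, wt in composition.items() if el in SALT_SOLUBLE) / total

    # Lower soluble fraction means more insoluble atoms are available to form
    # the protective, "self-healing" layer under irradiation.
    for name, comp in sorted(candidate_alloys.items(),
                             key=lambda kv: soluble_fraction(kv[1])):
        print(f"{name}: {soluble_fraction(comp):.0%} salt-soluble")
    ```

    In this toy ranking, the alloy with the smallest salt-soluble fraction would be the first candidate to test, mirroring the designers’ rule of favoring the highest content of corrosion-resistant elements.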

    Real-world relevance plus practical guidelines

    Short and Zhou find their results encouraging. In a nuclear reactor made of “good” alloys, the slowdown in corrosion will probably be even more pronounced than what they observed in their proton-based experiments because the neutrons that inflict the damage won’t chemically react with the salt to make it more corrosive. As a result, reactor designers could push the envelope more in their operating conditions, allowing them to get more power out of the same nuclear plant without compromising on safety.

    However, the researchers stress that there’s much work to be done. Many more projects are needed to explore and understand the exact corrosion mechanism in specific alloys under different irradiation conditions. In addition, their findings need to be replicated by groups at other institutions using their own facilities. “What needs to happen now is for other labs to build their own facilities and start verifying whether they get the same results as we did,” says Short. To that end, Short and Zhou have made the details of their experimental setup and all of their data freely available online. “We’ve also been actively communicating with researchers at other institutions who have contacted us,” adds Zhou. “When they’re planning to visit, we offer to show them demonstration experiments while they’re here.”

    But already their findings provide practical guidance for other researchers and equipment designers. For example, the standard way to quantify corrosion damage is by “mass loss,” a measure of how much weight the material has lost. But Short and Zhou consider mass loss a flawed measure of corrosion in molten salts. “If you’re a nuclear plant operator, you usually care whether your structural components are going to break,” says Short. “Our experiments show that radiation can change how deep the cracks are, when all other things are held constant. The deeper the cracks, the more likely a structural component is to break, leading to a reactor failure.”

    In addition, the researchers offer a simple rule for identifying good metal alloys for structural components in molten salt reactors. Manufacturers provide extensive lists of available alloys with different compositions, microstructures, and additives. Faced with a list of options for critical structures, the designer of a new nuclear fission or fusion reactor can simply examine the composition of each alloy being offered. The one with the highest content of corrosion-resistant elements such as nickel will be the best choice. Inside a nuclear reactor, that alloy should respond to a bombardment of radiation not by corroding more rapidly but by forming a protective layer that helps block the corrosion process. “That may seem like a trivial result, but the exact threshold where radiation decelerates corrosion depends on the salt chemistry, the density of neutrons in the reactor, their energies, and a few other factors,” says Short. “Therefore, the complete guidelines are a bit more complicated. But they’re presented in a straightforward way that users can understand and utilize to make a good choice for the molten salt–based reactor they’re designing.”

    This research was funded, in part, by Eni S.p.A. through the MIT Plasma Science and Fusion Center’s Laboratory for Innovative Fusion Technologies. Earlier work was funded, in part, by the Transatomic Power Corporation and by the U.S. Department of Energy Nuclear Energy University Program. Equipment development and testing were supported by the Transatomic Power Corporation.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.