More stories

  • Getting to systemic sustainability

Add up the commitments from the Paris Agreement, the Glasgow Climate Pact, and various commitments made by cities, countries, and businesses, and the world would be able to hold the global average temperature increase to 1.9 degrees Celsius above preindustrial levels, says Ani Dasgupta, the president and chief executive officer of the World Resources Institute (WRI). While that is well above the 1.5-degree threshold that many scientists agree would limit the most severe impacts of climate change, it is below the 2.0-degree threshold beyond which even more catastrophic impacts could occur, such as the collapse of ice sheets and a 30-foot rise in sea levels.

However, Dasgupta notes, actions have so far not matched commitments. “There’s a huge gap between commitment and outcomes,” Dasgupta said during his talk, “Energizing the global transition,” at the 2024 Earth Day Colloquium co-hosted by the MIT Energy Initiative and the MIT Department of Earth, Atmospheric and Planetary Sciences, and sponsored by the Climate Nucleus.

Dasgupta noted that oil companies did $6 trillion worth of business across the world last year — $1 trillion more than they were planning. About 7 percent of the world’s remaining tropical forests were destroyed during that same time, he added, and global inequality grew even worse than before.

“None of these things were illegal, because the system we have today produces these outcomes,” he said. “My point is that it’s not one thing that needs to change. The whole system needs to change.”

People, climate, and nature

Dasgupta, who previously held positions in nonprofits in India and at the World Bank, is a recognized leader in sustainable cities, poverty alleviation, and building cultures of inclusion.
Under his leadership, WRI, a global research nonprofit that studies sustainable practices with the goal of fundamentally transforming the world’s food, land and water, energy, and cities, adopted a new five-year strategy called “Getting the Transition Right for People, Nature, and Climate 2023-2027.” It focuses on creating new economic opportunities to meet people’s essential needs, restore nature, and rapidly lower emissions, while building resilient communities. In fact, during his talk, Dasgupta said that his organization has moved away from talking about initiatives in terms of their impact on greenhouse gas emissions — instead taking a more holistic view of sustainability.

“There is no net zero without nature,” Dasgupta said. He showed a slide with a graphic illustrating potential progress toward net-zero goals. “If nature gets diminished, that chart becomes even steeper. It’s very steep right now, but natural systems absorb carbon dioxide. So, if the natural systems keep getting destroyed, that curve becomes harder and harder.”

A focus on people is necessary, Dasgupta said, in part because of the unequal climate impacts that the rich and the poor are likely to face in the coming years. “If you made it to this room, you will not be impacted by climate change,” he said. “You have resources to figure out what to do about it. The people who get impacted are people who don’t have resources. It is immensely unfair. Our belief is, if we don’t do climate policy that helps people directly, we won’t be able to make progress.”

Where to start?

Although Dasgupta stressed that systemic change is needed to bring carbon emissions in line with long-term climate goals, he made the case that it is unrealistic to implement this change around the globe all at once. “This transition will not happen in 196 countries at the same time,” he said. “The question is, how do we get to the tipping point so that it happens at scale?
We’ve worked the past few years to ask the question, what is it you need to do to create this tipping point for change?”

Analysts at WRI looked for countries that are large producers of carbon, those with substantial tropical forest cover, and those with large numbers of people living in poverty. “We basically tried to draw a map of, where are the biggest challenges for climate change?” Dasgupta said.

That map features a relative handful of countries, including the United States, Mexico, China, Brazil, South Africa, India, and Indonesia. Dasgupta said, “Our argument is that, if we could figure out and focus all our efforts to help these countries transition, that will create a ripple effect — of understanding technology, understanding the market, understanding capacity, and understanding the politics of change that will unleash how the rest of these regions will bring change.”

Spotlight on the subcontinent

Dasgupta used one of these countries, his native India, to illustrate the nuanced challenges and opportunities presented by various markets around the globe. In India, he noted, there are around 3 million projected jobs tied to the country’s transition to renewable energy. However, that number is dwarfed by the 10 to 12 million jobs per year the Indian economy needs to create simply to keep up with population growth.

“Every developing country faces this question — how to keep growing in a way that reduces their carbon footprint,” Dasgupta said.

Five states in India worked with WRI to pool their buying power and procure 5,000 electric buses, saving 60 percent of the cost as a result. Over the next two decades, Dasgupta said, the fleet of electric buses in those five states is expected to increase to 800,000.

In the Indian state of Rajasthan, Dasgupta said, 59 percent of power already comes from solar energy. At times, Rajasthan produces more solar power than it can use, and officials are exploring ways to either store the excess energy or sell it to other states.
But in another state, Jharkhand, where much of the country’s coal is sourced, only 5 percent of power comes from solar. Officials in Jharkhand have reached out to WRI to discuss how to transition their energy economy, as they recognize that coal will fall out of favor in the future, Dasgupta said.

“The complexities of the transition are enormous in a country this big,” Dasgupta said. “This is true in most large countries.”

The road ahead

Despite the challenges ahead, the colloquium was also marked by notes of optimism. In his opening remarks, Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, pointed out how much progress has been made on environmental cleanup since the first Earth Day in 1970. “The world was a very different, much dirtier, place in many ways,” Stoner said. “Our air was a mess, our waterways were a mess, and it was beginning to be noticeable. Since then, Earth Day has become an important part of the fabric of American and global society.”

While Dasgupta said that the world presently lacks the “orchestration” among various stakeholders needed to bring climate change under control, he expressed hope that collaboration in key countries could accelerate progress.

“I strongly believe that what we need is a very different way of collaborating radically — across organizations like yours, organizations like ours, businesses, and governments,” Dasgupta said. “Otherwise, this transition will not happen at the scale and speed we need.”

  • H2 underground

In 1987 in a village in Mali, workers were digging a water well when they felt a rush of air. One of the workers was smoking a cigarette, and the air caught fire, burning with a clear blue flame. The well was capped at the time, but in 2012 it was tapped to provide energy for the village, powering a generator for nine years.

The fuel source: geologic hydrogen.

For decades, hydrogen has been discussed as a potentially revolutionary fuel. But efforts to produce “green” hydrogen (splitting water into hydrogen and oxygen using renewable electricity), “grey” hydrogen (making hydrogen from methane and releasing the byproduct carbon dioxide, or CO2, into the atmosphere), “brown” hydrogen (produced through the gasification of coal), and “blue” hydrogen (making hydrogen from methane but capturing the CO2) have thus far proven expensive, energy-intensive, or both.

Enter geologic hydrogen. Also known as “orange,” “gold,” “white,” “natural,” and even “clear” hydrogen, geologic hydrogen is generated by natural geochemical processes in the Earth’s crust. While there is still much to learn, a growing number of researchers and industry leaders are hopeful that it may turn out to be an abundant and affordable resource lying right beneath our feet.

“There’s a tremendous amount of uncertainty about this,” noted Robert Stoner, the founding director of the MIT Tata Center for Technology and Design, in his opening remarks at the MIT Energy Initiative (MITEI) Spring Symposium. “But the prospect of readily producible clean hydrogen showing up all over the world is a potential near-term game changer.”

A new hope for hydrogen

This April, MITEI gathered researchers, industry leaders, and academic experts from around MIT and the world to discuss the challenges and opportunities posed by geologic hydrogen in a daylong symposium entitled “Geologic hydrogen: Are orange and gold the new green?” The field is so new that, until a year ago, the U.S.
Department of Energy (DOE)’s website incorrectly claimed that hydrogen occurs naturally on Earth only in compound forms, chemically bonded to other elements.

“There’s a common misconception that hydrogen doesn’t occur naturally on Earth,” said Geoffrey Ellis, a research geologist with the U.S. Geological Survey. He noted that natural hydrogen production tends to occur in different locations from where oil and natural gas are likely to be discovered, which explains why geologic hydrogen discoveries have been relatively rare, at least until recently.

“Petroleum exploration is not targeting hydrogen,” Ellis said. “Companies are simply not really looking for it, they’re not interested in it, and oftentimes they don’t measure for it. The energy industry spends billions of dollars every year on exploration with very sophisticated technology, and still they drill dry holes all the time. So I think it’s naive to think that we would suddenly be finding hydrogen all the time when we’re not looking for it.”

In fact, the number of researchers and startup energy companies with targeted efforts to characterize geologic hydrogen has increased over the past several years — and these searches have uncovered new prospects, said Mary Haas, a venture partner at Breakthrough Energy Ventures. “We’ve seen a dramatic uptick in exploratory activity, now that there is a focused effort by a small community worldwide. At Breakthrough Energy, we are excited about the potential of this space, as well as our role in accelerating its progress,” she said. Haas noted that if geologic hydrogen could be produced at $1 per kilogram, this would be consistent with the DOE’s targeted “liftoff” point for the energy source. “If that happens,” she said, “it would be transformative.”

She added that only a small portion of identified hydrogen sites are currently under commercial exploration, and she cautioned that it’s not yet clear how large a role the resource might play in the transition to green energy.
But, she said, “It’s worthwhile and important to find out.”

Inventing a new energy subsector

Geologic hydrogen is produced when water reacts with iron-rich minerals in rock. Researchers and industry are exploring how to stimulate this natural production by pumping water into promising deposits.

In any new exploration area, teams must ask a series of questions to qualify the site, said Avon McIntyre, the executive director of HyTerra Ltd., an Australian company focused on the exploration and production of geologic hydrogen. These questions include: Is the geology favorable? Does local legislation allow for exploration and production? Does the site offer a clear path to value? And what are the carbon implications of producing hydrogen at the site?

“We have to be humble,” McIntyre said. “We can’t be too prescriptive and think that we’ll leap straight into success. We have a unique opportunity to stop and think about what this industry will look like, how it will work, and how we can bring together various disciplines.” This was a theme that arose multiple times over the course of the symposium: the idea that many different stakeholders — including those from academia, industry, and government — will need to work together to explore the viability of geologic hydrogen and bring it to market at scale.

In addition to the potential for hydrogen production to give rise to greenhouse gas emissions (in cases, for instance, where hydrogen deposits are contaminated with natural gas), researchers and industry must also consider landscape deformation and even potential seismic implications, said Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the MIT Department of Earth, Atmospheric and Planetary Sciences.

The surface impacts of hydrogen exploration and production will likely be similar to those caused by the hydraulic fracturing (“fracking”) process used in oil and natural gas extraction, Hager said.

“There will be unavoidable surface deformation.
In most places, you don’t want this if there’s infrastructure around,” Hager said. “Seismicity in the stimulated zone itself should not be a problem, because the areas are tested first. But we need to avoid stressing surrounding brittle rocks.”

McIntyre noted that the commercial case for hydrogen remains a challenge to quantify, without even a “spot” price that companies can use to make economic calculations. Early on, he said, capturing helium at hydrogen exploration sites could be a path to early cash flow, but that may ultimately serve as a “distraction” as teams attempt to scale up to the primary goal of hydrogen production. He also noted that it is not yet clear whether hard rock, soft rock, or underwater environments hold the most potential for geologic hydrogen, though all show promise.

“If you stack all of these things together,” McIntyre said, “what we end up doing may look very different from what we think we’re going to do right now.”

The path ahead

While the long-term prospects for geologic hydrogen are shrouded in uncertainty, most speakers at the symposium struck a tone of optimism. Ellis noted that the DOE has dedicated $20 million in funding to a stimulated hydrogen program. Paris Smalls, the co-founder and CEO of Eden GeoPower Inc., said “we think there is a path” to producing geologic hydrogen below the $1 per kilogram threshold. And Iwnetim Abate, an assistant professor in the MIT Department of Materials Science and Engineering, said that geologic hydrogen opens up the idea of Earth as a “factory to produce clean fuels,” utilizing subsurface heat and pressure instead of relying on burning fossil fuels for the same purpose.

“Earth has had 4.6 billion years to do these experiments,” said Oliver Jagoutz, a professor of geology in the MIT Department of Earth, Atmospheric and Planetary Sciences.
“So there is probably a very good solution out there.”

Alexis Templeton, a professor of geological sciences at the University of Colorado at Boulder, made the case for moving quickly. “Let’s go to pilot, faster than you might think,” she said. “Why? Because we do have some systems that we understand. We could test the engineering approaches and make sure that we are doing the right tool development, the right technology development, the right experiments in the lab. To do that, we desperately need data from the field.”

“This is growing so fast,” Templeton added. “The momentum and the development of geologic hydrogen is really quite substantial. We need to start getting data at scale. And then, I think, more people will jump off the sidelines very quickly.”

  • Elaine Liu: Charging ahead

MIT senior Elaine Siyu Liu doesn’t own an electric car, or any car. But she sees the impact of electric vehicles (EVs) and renewables on the grid as two pieces of an energy puzzle she wants to solve.

The U.S. Department of Energy reports that the number of public and private EV charging ports nearly doubled in the past three years, and many more are in the works. Users expect to plug in at their convenience, charge up, and drive away. But what if the grid can’t handle it?

Electricity demand, long stagnant in the United States, has spiked due to EVs, data centers that drive artificial intelligence, and industry. Grid planners forecast an increase of 2.6 percent to 4.7 percent in electricity demand over the next five years, according to data reported to federal regulators. Everyone from EV charging-station operators to utility-system operators needs help navigating a system in flux.

That’s where Liu’s work comes in.

Liu, who is studying mathematics and electrical engineering and computer science (EECS), is interested in distribution — how to get electricity from a centralized location to consumers. “I see power systems as a good venue for theoretical research as an application tool,” she says. “I’m interested in it because I’m familiar with the optimization and probability techniques used to map this level of problem.”

Liu grew up in Beijing, then after middle school moved with her parents to Canada and enrolled in a prep school in Oakville, Ontario, 30 miles outside Toronto.

Liu stumbled upon an opportunity to take part in a regional math competition and eventually started a math club, but at the time, the school’s culture surrounding math surprised her. Being exposed to what seemed to be some students’ aversion to math, she says, “I don’t think my feelings about math changed. I think my feelings about how people feel about math changed.”

Liu brought her passion for math to MIT.
The summer after her sophomore year, she took on the first of the two Undergraduate Research Opportunity Program projects she completed with electric power system expert Marija Ilić, a joint adjunct professor in EECS and a senior research scientist at the MIT Laboratory for Information and Decision Systems.

Predicting the grid

Since 2022, with the help of funding from the MIT Energy Initiative (MITEI), Liu has been working with Ilić on identifying ways in which the grid is challenged.

One factor is the addition of renewables to the energy pipeline. A gap in wind or sun might cause a lag in power generation. If this lag occurs during peak demand, it could mean trouble for a grid already taxed by extreme weather and other unforeseen events.

If you think of the grid as a network of dozens of interconnected parts, once an element in the network fails — say, a tree downs a transmission line — the electricity that used to go through that line needs to be rerouted. This may overload other lines, creating what’s known as a cascade failure.

“This all happens really quickly and has very large downstream effects,” Liu says. “Millions of people will have instant blackouts.”

Even if the system can handle a single downed line, Liu notes that “the nuance is that there are now a lot of renewables, and renewables are less predictable. You can’t predict a gap in wind or sun. When such things happen, there’s suddenly not enough generation and too much demand. So the same kind of failure would happen, but on a larger and more uncontrollable scale.”

Renewables’ varying output has the added complication of causing voltage fluctuations. “We plug in our devices expecting a voltage of 110, but because of oscillations, you will never get exactly 110,” Liu says. “So even when you can deliver enough electricity, if you can’t deliver it at the specific voltage level that is required, that’s a problem.”

Liu and Ilić are building a model to predict how and when the grid might fail.
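The cascade mechanism described above can be sketched as a toy simulation. The code below is a hypothetical illustration, not Liu’s actual model: it represents each transmission line by its current flow and capacity, and when a line fails, it naively redistributes that line’s flow equally across the surviving lines, failing any line pushed past its capacity and repeating until the cascade stops.

```python
def simulate_cascade(lines, first_failure):
    """Toy cascade model (illustrative only, not Liu's tool).

    lines: {name: (flow, capacity)} for each transmission line.
    first_failure: name of the line that fails initially.
    Returns the set of lines that have failed once the cascade stops.
    """
    lines = dict(lines)              # don't mutate the caller's data
    failed = set()
    pending = [first_failure]
    while pending:
        name = pending.pop()
        if name in failed:
            continue
        failed.add(name)
        flow, _ = lines[name]
        survivors = [n for n in lines if n not in failed]
        if not survivors:
            break                    # total blackout: nothing left to carry flow
        share = flow / len(survivors)    # naive equal redistribution
        for n in survivors:
            f, cap = lines[n]
            lines[n] = (f + share, cap)
            if f + share > cap:      # overloaded, so this line fails next
                pending.append(n)
    return failed
```

On a three-line toy grid where the other lines are already heavily loaded, losing one line takes down the whole network, while the same outage on a lightly loaded grid stops after the single failure. Real grid models compute power flows over the actual network topology rather than redistributing equally.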
Lacking access to privatized data, Liu runs her models with European industry data and test cases made available to universities. “I have a fake power grid that I run my experiments on,” she says. “You can take the same tool and run it on the real power grid.”

Liu’s model predicts cascade failures as they evolve. Supply from a wind generator, for example, might drop precipitously over the course of an hour. The model analyzes which substations and which households will be affected. “After we know we need to do something, this prediction tool can enable system operators to strategically intervene ahead of time,” Liu says.

Dictating price and power

Last year, Liu turned her attention to EVs, which present a different kind of challenge than renewables.

In 2022, S&P Global reported that lawmakers argued that the U.S. Federal Energy Regulatory Commission’s (FERC) wholesale power rate structure was unfair to EV charging station operators.

In addition to paying by the kilowatt-hour, some operators also pay more for electricity during peak demand hours. Just a few EVs charging during those hours can drive up costs for the operator even if overall energy use is low.

Anticipating how much power EVs will need is more complex than predicting the energy needed for, say, heating and cooling. Unlike buildings, EVs move around, making it difficult to predict energy consumption at any given time. “If users don’t like the price at one charging station or how long the line is, they’ll go somewhere else,” Liu says. “Where to allocate EV chargers is a problem that a lot of people are dealing with right now.”

One approach would be for FERC to dictate to EV users when and where to charge and what price they’ll pay. To Liu, this isn’t an attractive option.
“No one likes to be told what to do,” she says.

Liu is looking at optimizing a market-based solution that would be acceptable to top-level energy producers — wind and solar farms and nuclear plants — all the way down to the municipal aggregators that secure electricity at competitive rates and oversee distribution to the consumer.

Analyzing the location, movement, and behavior patterns of all the EVs driven daily in Boston and other major energy hubs, she notes, could help demand aggregators determine where to place EV chargers and how much to charge consumers, akin to Walmart deciding how much to mark up wholesale eggs in different markets.

Last year, Liu presented the work at MITEI’s annual research conference. This spring, Liu and Ilić are submitting a paper on the market optimization analysis to a journal of the Institute of Electrical and Electronics Engineers.

Liu has come to terms with her early introduction to attitudes toward STEM that struck her as markedly different from those in China. She says, “I think the (prep) school had a very strong ‘math is for nerds’ vibe, especially for girls. There was a ‘why are you giving yourself more work?’ kind of mentality. But over time, I just learned to disregard that.”

After graduation, Liu, the only undergraduate researcher in Ilić’s MIT Electric Energy Systems Group, plans to apply to fellowships and graduate programs in EECS, applied math, and operations research.

Based on her analysis, Liu says that the market could effectively determine the price and availability of charging stations. Offering incentives for EV owners to charge during the day instead of at night when demand is high could help avoid grid overload and prevent extra costs to operators. “People would still retain the ability to go to a different charging station if they chose to,” she says. “I’m arguing that this works.”
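The rate-structure problem described in this story can be made concrete with a toy bill calculation. All rates and the load profile below are made up for illustration and do not reflect any real FERC tariff: the operator pays an energy charge per kilowatt-hour plus a demand charge on the single highest metered interval of the month, so one brief burst of simultaneous charging can dominate the bill.

```python
def monthly_bill(interval_kw, hours_per_interval, energy_rate, demand_rate):
    """Toy demand-charge tariff (hypothetical rates, illustration only).

    interval_kw: average kW drawn in each metering interval of the month.
    energy_rate: dollars per kWh consumed.
    demand_rate: dollars per kW of the month's single highest interval.
    """
    energy_kwh = sum(kw * hours_per_interval for kw in interval_kw)
    peak_kw = max(interval_kw)
    return energy_kwh * energy_rate + peak_kw * demand_rate

# A station idling at 5 kW for a 31-day month of 15-minute intervals,
# except for one interval in which several EVs charge at once (200 kW):
draw = [5.0] * 2975 + [200.0]
bill = monthly_bill(draw, hours_per_interval=0.25,
                    energy_rate=0.12, demand_rate=18.0)
```

With these made-up rates, the demand charge ($3,600) makes up nearly 89 percent of the roughly $4,052 bill even though the 200 kW burst lasted only 15 minutes, which is the effect operators argued makes the rate structure unfair.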

  • William Green named director of MIT Energy Initiative

MIT professor William H. Green has been named director of the MIT Energy Initiative (MITEI).

In appointing Green, then-MIT Vice President for Research Maria Zuber highlighted his expertise in chemical kinetics — the understanding of the rates of chemical reactions — and the work of his research team in reaction kinetics, quantum chemistry, numerical methods, and fuel chemistry, as well as his work performing techno-economic assessments of proposed fuel and vehicle changes and biofuel production options.

“Bill has been an active participant in MITEI; his broad view of energy science and technology will be a major asset and will position him well to contribute to the success of MIT’s exciting new Climate Project,” Zuber wrote in a letter announcing the appointment, which went into effect April 1.

Green is the Hoyt C. Hottel Professor of Chemical Engineering and previously served as the executive officer of the MIT Department of Chemical Engineering from 2012 to 2015. He sees MITEI’s role today as bringing together the voices of engineering, science, industry, and policy to quickly drive the global energy transition.

“MITEI has a very important role in fostering the energy and climate innovations happening at MIT and in building broader consensus, first in the engineering community and then ultimately to start the conversations that will lead to public acceptance and societal consensus,” says Green.

Achieving consensus much more quickly is essential, says Green, who noted that the world recognized the problem of greenhouse gas emissions at the 1992 Rio Summit, yet almost a quarter-century passed before the Paris Agreement came into force. Eight years after the Paris Agreement, there is still disagreement over how to address this challenge in most sectors of the economy, and much work remains to translate the Paris pledges into reality.

“Many people feel we’re collectively too slow in dealing with the climate problem,” he says.
“It’s very important to keep helping the research community be more effective and faster to provide the solutions that society needs, but we also need to work on being faster at reaching consensus around the good solutions we do have, and supporting them so they’ll actually be economically attractive so that investors can feel safe to invest in them, and to change regulations to make them feasible, when needed.”

With experience in industry, policy, and academia, Green is well positioned to facilitate this acceleration. “I can see the situation from the point of view of a scientist, from the point of view of an engineer, from the point of view of the big companies, from the point of view of a startup company, and from the point of view of a parent concerned about the effects of climate change on the world my children are inheriting,” he says.

Green also intends to extend MITEI’s engagement with a broader range of countries, industries, and economic sectors as MITEI focuses on decarbonization and accelerating the much-needed energy transition worldwide.

Green received a PhD in physical chemistry from the University of California at Berkeley and a BA in chemistry from Swarthmore College. He joined MIT in 1997. He is the recipient of the AIChE’s R.H. Wilhelm Award in Chemical Reaction Engineering and an inaugural Fellow of the Combustion Institute.

He succeeds Robert Stoner, who served as interim director of MITEI beginning in July 2023, when longtime director Robert C. Armstrong retired after a decade in the role.

  • Seizing solar’s bright future

Consider the dizzying ascent of solar energy in the United States: In the past decade, solar capacity increased nearly 900 percent, with electricity production eight times greater in 2023 than in 2014. The jump from 2022 to 2023 alone was 51 percent, with a record 32 gigawatts (GW) of solar installations coming online. In the past four years, more solar has been added to the grid than any other form of generation. Installed solar now tops 179 GW, enough to power nearly 33 million homes. The U.S. Department of Energy (DOE) is so bullish on the sun that its decarbonization plans envision solar satisfying 45 percent of the nation’s electricity demand by 2050.

But the continued rapid expansion of solar requires advances in technology, notably to improve the efficiency and durability of solar photovoltaic (PV) materials and manufacturing. That’s where Optigon, a three-year-old MIT spinout company, comes in.

“Our goal is to build tools for research and industry that can accelerate the energy transition,” says Dane deQuilettes, the company’s co-founder and chief science officer. “The technology we have developed for solar will enable measurements and analysis of materials as they are being made, both in the lab and on the manufacturing line, dramatically speeding up the optimization of PV.”

With roots in MIT’s vibrant solar research community, Optigon is poised for a 2024 rollout of technology it believes will drastically pick up the pace of solar power and other clean energy projects.

Beyond silicon

Silicon, the material mainstay of most PV, is limited by the laws of physics in the efficiencies it can achieve converting photons from the sun into electrical energy. Silicon-based solar cells can theoretically reach power conversion levels of just 30 percent, and real-world efficiency levels hover in the low 20s.
But beyond the physical limitations of silicon, there is another issue at play for many researchers and the solar industry in the United States and elsewhere: China dominates the silicon PV market, from supply chains to manufacturing.

Scientists are eagerly pursuing alternative materials, either to enhance silicon’s solar conversion capacity or to replace silicon altogether.

In the past decade, a family of crystal-structured semiconductors known as perovskites has risen to the fore as a next-generation PV material candidate. Perovskite devices lend themselves to a novel manufacturing process using printing technology that could circumvent the supply chain juggernaut China has built for silicon. Perovskite solar cells can be stacked on each other or layered atop silicon PV to achieve higher conversion efficiencies. Because perovskite technology is flexible and lightweight, modules can be used on roofs and other structures that cannot support heavier silicon PV, lowering costs and enabling a wider range of building-integrated solar devices.

But these new materials require testing, both during R&D and then on assembly lines, where missing or defective optical, electrical, or dimensional properties in the nano-sized crystal structures can negatively impact the end product.

“The actual measurement and data analysis processes have been really, really slow, because you have to use a bunch of separate tools that are all very manual,” says Optigon co-founder and chief executive officer Anthony Troupe ’21.
“We wanted to come up with tools for automating detection of a material’s properties, for determining whether it could make a good or bad solar cell, and then for optimizing it.”

“Our approach packed several non-contact, optical measurements using different types of light sources and detectors into a single system, which together provide a holistic, cross-sectional view of the material,” says Brandon Motes ’21, ME ’22, co-founder and chief technical officer.

“This breakthrough in achieving millisecond timescales for data collection and analysis means we can take research-quality tools and actually put them on a full production system, getting extremely detailed information about products being built at massive, gigawatt scale in real time,” says Troupe.

This streamlined system takes measurements “in the snap of the fingers, unlike the traditional tools,” says Joseph Berry, director of the US Manufacturing of Advanced Perovskites Consortium and a senior research scientist at the National Renewable Energy Laboratory. “Optigon’s techniques are high precision and allow high throughput, which means they can be used in a lot of contexts where you want rapid feedback and the ability to develop materials very, very quickly.”

According to Berry, Optigon’s technology may give the solar industry not just better materials, but the ability to pump out high-quality PV products at a brisker clip than is currently possible. “If Optigon is successful in deploying their technology, then we can more rapidly develop the materials that we need, manufacturing with the requisite precision again and again,” he says. “This could lead to the next generation of PV modules at a much, much lower cost.”

Measuring makes the difference

With Small Business Innovation Research funding from the DOE to commercialize its products and a grant from the Massachusetts Clean Energy Center, Optigon has settled into a space at the climate technology incubator Greentown Labs in Somerville, Massachusetts.
Here, the team is preparing for this spring’s launch of its first commercial product, whose genesis lies in MIT’s GridEdge Solar Research Program.

Led by Vladimir Bulović, a professor of electrical engineering and the director of MIT.nano, the GridEdge program was established with funding from the Tata Trusts to develop lightweight, flexible, and inexpensive solar cells for distribution to rural communities around the globe. When deQuilettes joined the group in 2017 as a postdoc, he was tasked with directing the program and building the infrastructure to study and make perovskite solar modules.

“We were trying to understand once we made the material whether or not it was good,” he recalls. “There were no good commercial metrology [the science of measurements] tools for materials beyond silicon, so we started to build our own.” Recognizing the group’s need for greater expertise on the problem, especially in the areas of electrical, software, and mechanical engineering, deQuilettes put a call out for undergraduate researchers to help build metrology tools for new solar materials.

“Forty people inquired, but when I met Brandon and Anthony, something clicked; it was clear we had a complementary skill set,” says deQuilettes. “We started working together, with Anthony coming up with beautiful designs to integrate multiple measurements, and Brandon creating boards to control all of the hardware, including different types of lasers. We started filing multiple patents and that was when we saw it all coming together.”

“We knew from the start that metrology could vastly improve not just materials, but production yields,” says Troupe.
Adds deQuilettes, “Our goal was getting to the highest performance orders of magnitude faster than it would ordinarily take, so we developed tools that would not just be useful for research labs but for manufacturing lines to give live feedback on quality.”

The device Optigon designed for industry is the size of a football, “with sensor packages crammed into a tiny form factor, taking measurements as material flows directly underneath,” says Motes. “We have also thought carefully about ways to make interaction with this tool as seamless and, dare I say, as enjoyable as possible, streaming data to both a dashboard an operator can watch and to a custom database.”

Photovoltaics is just the start

The company may have already found its market niche. “A research group paid us to use our in-house prototype because they have such a burning need to get these sorts of measurements,” says Troupe, and according to Motes, “Potential customers ask us if they can buy the system now.” deQuilettes says, “Our hope is that we become the de facto company for doing any sort of characterization metrology in the United States and beyond.”

Challenges lie ahead for Optigon: product launches, full-scale manufacturing, technical assistance, and sales. Greentown Labs offers support, as does MIT’s own rich community of solar researchers and entrepreneurs. But the founders are already thinking about next phases.

“We are not limiting ourselves to the photovoltaics area,” says deQuilettes. “We’re planning on working in other clean energy materials such as batteries and fuel cells.”

That’s because the team wants to make the maximum impact on the climate challenge. “We’ve thought a lot about the potential our tools will have on reducing carbon emissions, and we’ve done a really in-depth analysis looking at how our system can increase production yields of solar panels and other energy technologies, reducing materials and energy wasted in conventional optimization,” deQuilettes says.
“If we look across all these sectors, we can expect to offset about 1,000 million metric tons of CO2 [carbon dioxide] per year in the not-too-distant future.”

The team has written scale into its business plan. “We want to be the key enabler for bringing these new energy technologies to market,” says Motes. “We envision being deployed on every manufacturing line making these types of materials. It’s our goal to walk around and know that if we see a solar panel deployed, there’s a pretty high likelihood that it will be one we measured at some point.”

  •

    A delicate dance

    In early 2022, economist Catherine Wolfram was at her desk in the U.S. Treasury building. She could see the east wing of the White House, just steps away.

    Russia had just invaded Ukraine, and Wolfram was thinking about Russia, oil, and sanctions. She and her colleagues had been tasked with figuring out how to restrict the revenues that Russia was using to fuel its brutal war while keeping Russian oil available and affordable to the countries that depended on it.

    Now the William F. Pounds Professor of Energy Economics at MIT, Wolfram was on leave from academia to serve as deputy assistant secretary for climate and energy economics.

    Working for Treasury Secretary Janet L. Yellen, Wolfram and her colleagues developed dozens of models, forecasts, and projections. It struck her, she said later, that “huge decisions [affecting the global economy] would be made on the basis of spreadsheets that I was helping create.” Wolfram composed a memo to the Biden administration and hoped her projections would pan out the way she believed they would.

    Tackling conundrums that weigh competing, sometimes contradictory, interests has defined much of Wolfram’s career.

    Wolfram specializes in the economics of energy markets. She looks at ways to decarbonize global energy systems while recognizing that energy drives economic development, especially in the developing world.

    “The way we’re currently making energy is contributing to climate change. There’s a delicate dance we have to do to make sure that we treat this important industry carefully, but also transform it rapidly to a cleaner, decarbonized system,” she says.

    Economists as influencers

    While Wolfram was growing up in a suburb of St. Paul, Minnesota, her father was a law professor and her mother taught English as a second language. Her mother helped spawn Wolfram’s interest in other cultures and her love of travel, but it was an experience closer to home that sparked her awareness of the effect of human activities on the state of the planet.

    Minnesota’s nickname is “Land of 10,000 Lakes.” Wolfram remembers swimming in a nearby lake sometimes covered by a thick sludge of algae. “Thinking back on it, it must’ve had to do with fertilizer runoff,” she says. “That was probably the first thing that made me think about the environment and policy.”

    In high school, Wolfram liked “the fact that you could use math to understand the world. I also was interested in the types of questions about human behavior that economists were thinking about.

    “I definitely think economics is good at sussing out how different actors are likely to react to a particular policy and then designing policies with that in mind.”

    After receiving a bachelor’s degree in economics from Harvard University in 1989, Wolfram worked with a Massachusetts agency that governed rate hikes for utilities. Seeing its reliance on research, she says, illuminated the role academics could play in policy setting. It made her think she could make a difference from within academia.

    While pursuing a PhD in economics from MIT, Wolfram counted Paul L. Joskow, the Elizabeth and James Killian Professor of Economics and former director of the MIT Center for Energy and Environmental Policy Research, and Nancy L. Rose, the Charles P. Kindleberger Professor of Applied Economics, among her mentors and influencers.

    After spending 1996 to 2000 as an assistant professor of economics at Harvard, she joined the faculty at the Haas School of Business at the University of California at Berkeley.

    At Berkeley, it struck Wolfram that while she labored over ways to marginally boost the energy efficiency of U.S. power plants, the economies of China and India were growing rapidly, with a corresponding growth in energy use and carbon dioxide emissions. “It hit home that to understand the climate issue, I needed to understand energy demand in the developing world,” she says.

    The problem was that the developing world didn’t always offer up the kind of neatly packaged, comprehensive data economists relied on. She wondered if, by relying on readily accessible data, the field was looking under the lamppost — while losing sight of what the rest of the street looked like.

    To make up for a lack of available data on the state of electrification in sub-Saharan Africa, for instance, Wolfram developed and administered surveys to individual, remote rural households using on-the-ground field teams.

    Her results suggested that in the world’s poorest countries, the challenges involved in expanding the grid in rural areas should be weighed against potentially greater economic and social returns on investments in the transportation, education, or health sectors.

    Taking the lead

    Within months of Wolfram’s memo to the Biden administration, leaders of the intergovernmental political forum Group of Seven (G7) agreed to a price cap on Russian oil. Tankers from coalition countries would only transport Russian crude sold at or below the price cap level, initially set at $60 per barrel.

    “A price cap was not something that had ever been done before,” Wolfram says. “In some ways, we were making it up out of whole cloth. It was exciting to see that I wrote one of the original memos about it, and then literally three-and-a-half months later, the G7 was making an announcement.

    “As economists and as policymakers, we must set the parameters and get the incentives right. The price cap was basically asking developing countries to buy cheap oil, which was consistent with their incentives.”

    In May 2023, the U.S. Department of the Treasury reported that despite widespread initial skepticism about the price cap, market participants and geopolitical analysts believe it is accomplishing its goals of restricting Russia’s oil revenues while maintaining the supply of Russian oil and keeping energy costs in check for consumers and businesses around the world.

    Wolfram held the U.S. Treasury post from March 2021 to October 2022 while on leave from UC Berkeley. In July 2023, she joined MIT Sloan School of Management partly to be geographically closer to the policymakers of the nation’s capital. She’s also excited about the work taking place elsewhere at the Institute to stay ahead of climate change.

    Her time in D.C. was eye-opening, particularly in terms of the leadership power of the United States. She worries that the United States is falling prey to “lost opportunities” in terms of addressing climate change. “We were showing real leadership on the price cap, and if we could only do that on climate, I think we could make faster inroads on a global agreement,” she says.

    Now focused on structuring global agreements in energy policy among developed and developing countries, she’s considering how the United States can take advantage of its position as a world leader. “We need to be thinking about how what we do in the U.S. affects the rest of the world from a climate perspective. We can’t go it alone.

    “The U.S. needs to be more aligned with the European Union, Canada, and Japan to try to find areas where we’re taking a common approach to addressing climate change,” she says. She will touch on some of those areas in the class she will teach in spring 2024 titled “Climate and Energy in the Global Economy,” offered through MIT Sloan.

    Looking ahead, she says, “I’m a techno optimist. I believe in human innovation. I’m optimistic that we’ll find ways to live with climate change and, hopefully, ways to minimize it.”

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  •

    Engineers find a new way to convert carbon dioxide into useful products

    MIT chemical engineers have devised an efficient way to convert carbon dioxide to carbon monoxide, a chemical precursor that can be used to generate useful compounds such as ethanol and other fuels.

    If scaled up for industrial use, this process could help to remove carbon dioxide from power plants and other sources, reducing the amount of greenhouse gases that are released into the atmosphere.

    “This would allow you to take carbon dioxide from emissions or dissolved in the ocean, and convert it into profitable chemicals. It’s really a path forward for decarbonization because we can take CO2, which is a greenhouse gas, and turn it into things that are useful for chemical manufacture,” says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering and the senior author of the study.

    The new approach uses electricity to perform the chemical conversion, with help from a catalyst that is tethered to the electrode surface by strands of DNA. This DNA acts like Velcro to keep all the reaction components in close proximity, making the reaction much more efficient than if all the components were floating in solution.

    Furst has started a company called Helix Carbon to further develop the technology. Former MIT postdoc Gang Fan is the lead author of the paper, which appears in the Journal of the American Chemical Society Au. Other authors include Nathan Corbin PhD ’21, Minju Chung PhD ’23, former MIT postdocs Thomas Gill and Amruta Karbelkar, and Evan Moore ’23.

    Breaking down CO2

    Converting carbon dioxide into useful products requires first turning it into carbon monoxide. One way to do this is with electricity, but the amount of energy required for that type of electrolysis makes the process prohibitively expensive.

    To try to bring down those costs, researchers have tried using electrocatalysts, which can speed up the reaction and reduce the amount of energy that needs to be added to the system. One type of catalyst used for this reaction is a class of molecules known as porphyrins, which contain metals such as iron or cobalt and are similar in structure to the heme molecules that carry oxygen in blood. 

    During this type of electrochemical reaction, carbon dioxide is dissolved in water within an electrochemical device, which contains an electrode that drives the reaction. The catalysts are also suspended in the solution. However, this setup isn’t very efficient because the carbon dioxide and the catalysts need to encounter each other at the electrode surface, which doesn’t happen very often.

    To make the reaction occur more frequently, which would boost the efficiency of the electrochemical conversion, Furst began working on ways to attach the catalysts to the surface of the electrode. DNA seemed to be the ideal choice for this application.

    “DNA is relatively inexpensive, you can modify it chemically, and you can control the interaction between two strands by changing the sequences,” she says. “It’s like a sequence-specific Velcro that has very strong but reversible interactions that you can control.”

    To attach single strands of DNA to a carbon electrode, the researchers used two “chemical handles,” one on the DNA and one on the electrode. These handles can be snapped together, forming a permanent bond. A complementary DNA sequence is then attached to the porphyrin catalyst, so that when the catalyst is added to the solution, it will bind reversibly to the DNA that’s already attached to the electrode — just like Velcro.

    Once this system is set up, the researchers apply a potential (or bias) to the electrode, and the catalyst uses this energy to convert carbon dioxide in the solution into carbon monoxide. The reaction also generates a small amount of hydrogen gas, from the water. After the catalysts wear out, they can be released from the surface by heating the system to break the reversible bonds between the two DNA strands, and replaced with new ones.

    An efficient reaction

    Using this approach, the researchers were able to boost the Faradaic efficiency of the reaction to 100 percent, meaning that all of the electrical charge that flows into the system drives the desired chemical reaction, with none wasted on side reactions. When the catalysts are not tethered by DNA, the Faradaic efficiency is only about 40 percent.
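    Faradaic efficiency is the fraction of the total electric charge passed through a cell that ends up stored in the desired product. A minimal sketch of that bookkeeping, with purely hypothetical numbers (the function and values below are illustrative, not taken from the study):

```python
# Faradaic efficiency: fraction of total charge that formed the desired product.
# All values below are hypothetical, chosen only to illustrate the arithmetic.

FARADAY = 96485.0  # Faraday constant, coulombs per mole of electrons

def faradaic_efficiency(moles_product, electrons_per_molecule, total_charge):
    """Charge stored in the product divided by the total charge passed."""
    charge_in_product = moles_product * electrons_per_molecule * FARADAY
    return charge_in_product / total_charge

# CO2 -> CO transfers 2 electrons per molecule. Suppose 0.005 mol of CO was
# made while 964.85 C of charge was passed: every electron went into making CO.
fe = faradaic_efficiency(0.005, 2, 964.85)
print(f"{fe:.0%}")  # prints "100%"
```

    At 40 percent efficiency, the same 0.005 mol of CO would have required passing roughly 2.5 times as much charge, with the remainder lost to side reactions such as hydrogen evolution.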

    This technology could be scaled up for industrial use fairly easily, Furst says, because the carbon electrodes the researchers used are much less expensive than conventional metal electrodes. The catalysts are also inexpensive, as they don’t contain any precious metals, and only a small concentration of the catalyst is needed on the electrode surface.

    By swapping in different catalysts, the researchers plan to try making other products such as methanol and ethanol using this approach. Helix Carbon, the company started by Furst, is also working on further developing the technology for potential commercial use.

    The research was funded by the U.S. Army Research Office, the CIFAR Azrieli Global Scholars Program, the MIT Energy Initiative, and the MIT Deshpande Center.

  •

    Future nuclear power reactors could rely on molten salts — but what about corrosion?

    Most discussions of how to avert climate change focus on solar and wind generation as key to the transition to a future carbon-free power system. But Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering at MIT and associate director of the MIT Plasma Science and Fusion Center (PSFC), is impatient with such talk. “We can say we should have only wind and solar someday. But we don’t have the luxury of ‘someday’ anymore, so we can’t ignore other helpful ways to combat climate change,” he says. “To me, it’s an ‘all-hands-on-deck’ thing. Solar and wind are clearly a big part of the solution. But I think that nuclear power also has a critical role to play.”

    For decades, researchers have been working on designs for both fission and fusion nuclear reactors using molten salts as fuels or coolants. While those designs promise significant safety and performance advantages, there’s a catch: Molten salt and the impurities within it often corrode metals, ultimately causing them to crack, weaken, and fail. Inside a reactor, key metal components will be exposed not only to molten salt but also simultaneously to radiation, which generally has a detrimental effect on materials, making them more brittle and prone to failure. Will irradiation make metal components inside a molten salt-cooled nuclear reactor corrode even more quickly?

    Short and Weiyue Zhou PhD ’21, a postdoc in the PSFC, have been investigating that question for eight years. Their recent experimental findings show that certain alloys will corrode more slowly when they’re irradiated — and identifying them among all the available commercial alloys can be straightforward.

    The first challenge — building a test facility

    When Short and Zhou began investigating the effect of radiation on corrosion, practically no reliable facilities existed to look at the two effects at once. The standard approach was to examine such mechanisms in sequence: first corrode, then irradiate, then examine the impact on the material. That approach greatly simplifies the task for the researchers, but with a major trade-off. “In a reactor, everything is going to be happening at the same time,” says Short. “If you separate the two processes, you’re not simulating a reactor; you’re doing some other experiment that’s not as relevant.”

    So, Short and Zhou took on the challenge of designing and building an experimental setup that could do both at once. Short credits a team at the University of Michigan for paving the way by designing a device that could accomplish that feat in water, rather than molten salts. Even so, Zhou notes, it took them three years to come up with a device that would work with molten salts. Both researchers recall failure after failure, but the persistent Zhou ultimately tried a totally new design, and it worked. Short adds that it also took them three years to precisely replicate the salt mixture used by industry — another factor critical to getting a meaningful result. The hardest part was achieving the correct purity, which required removing critical impurities such as moisture, oxygen, and certain other metals.

    As they were developing and testing their setup, Short and Zhou obtained initial results showing that proton irradiation did not always accelerate corrosion but sometimes actually decelerated it. They and others had hypothesized that possibility, but even so, they were surprised. “We thought we must be doing something wrong,” recalls Short. “Maybe we mixed up the samples or something.” But they subsequently made similar observations for a variety of conditions, increasing their confidence that their initial observations were not outliers.

    The successful setup

    Central to their approach is the use of accelerated protons to mimic the impact of the neutrons inside a nuclear reactor. Generating neutrons would be both impractical and prohibitively expensive, and the neutrons would make everything highly radioactive, posing health risks and requiring very long times for an irradiated sample to cool down enough to be examined. Using protons would enable Short and Zhou to examine radiation-altered corrosion both rapidly and safely.

    Key to their experimental setup is a test chamber that they attach to a proton accelerator. To prepare the test chamber for an experiment, they place inside it a thin disc of the metal alloy being tested on top of a pellet of salt. During the test, the entire foil disc is exposed to a bath of molten salt. At the same time, a beam of protons bombards the sample from the side opposite the salt pellet, but the proton beam is restricted to a circle in the middle of the foil sample. “No one can argue with our results then,” says Short. “In a single experiment, the whole sample is subjected to corrosion, and only a circle in the center of the sample is simultaneously irradiated by protons. We can see the curvature of the proton beam outline in our results, so we know which region is which.”

    The results with that arrangement were unchanged from the initial results. They confirmed the researchers’ preliminary findings, supporting their controversial hypothesis that rather than accelerating corrosion, radiation would actually decelerate corrosion in some materials under some conditions. Fortunately, they just happen to be the same conditions that will be experienced by metals in molten salt-cooled reactors.

    Why is that outcome controversial? A closeup look at the corrosion process will explain. When salt corrodes metal, the salt finds atomic-level openings in the solid, seeps in, and dissolves salt-soluble atoms, pulling them out and leaving a gap in the material — a spot where the material is now weak. “Radiation adds energy to atoms, causing them to be ballistically knocked out of their positions and move very fast,” explains Short. So, it makes sense that irradiating a material would cause atoms to move into the salt more quickly, increasing the rate of corrosion. Yet in some of their tests, the researchers found the opposite to be true.

    Experiments with “model” alloys

    The researchers’ first experiments in their novel setup involved “model” alloys consisting of nickel and chromium, a simple combination that would give them a first look at the corrosion process in action. In addition, they added europium fluoride to the salt, a compound known to speed up corrosion. In our everyday world, we often think of corrosion as taking years or decades, but in the more extreme conditions of a molten salt reactor it can noticeably occur in just hours. The researchers used the europium fluoride to speed up corrosion even more without changing the corrosion process. This allowed for more rapid determination of which materials, under which conditions, experienced more or less corrosion with simultaneous proton irradiation.

    The use of protons to emulate neutron damage to materials meant that the experimental setup had to be carefully designed and the operating conditions carefully selected and controlled. Protons are hydrogen atoms with an electrical charge, and under some conditions the hydrogen could chemically react with atoms in the sample foil, altering the corrosion response, or with ions in the salt, making the salt more corrosive. Therefore, the proton beam had to penetrate the foil sample but then stop in the salt as soon as possible. Under these conditions, the researchers found they could deliver a relatively uniform dose of radiation inside the foil layer while also minimizing chemical reactions in both the foil and the salt.

    Tests showed that a proton beam accelerated to 3 million electron-volts combined with a foil sample between 25 and 30 microns thick would work well for their nickel-chromium alloys. The temperature and duration of the exposure could be adjusted based on the corrosion susceptibility of the specific materials being tested.

    Optical images of samples examined after tests with the model alloys showed a clear boundary between the area that was exposed only to the molten salt and the area that was also exposed to the proton beam. Electron microscope images focusing on that boundary showed that the area that had been exposed only to the molten salt included dark patches where the molten salt had penetrated all the way through the foil, while the area that had also been exposed to the proton beam showed almost no such dark patches.

    To confirm that the dark patches were due to corrosion, the researchers cut through the foil sample to create cross sections. In them, they could see tunnels that the salt had dug into the sample. “For regions not under radiation, we see that the salt tunnels link the one side of the sample to the other side,” says Zhou. “For regions under radiation, we see that the salt tunnels stop more or less halfway and rarely reach the other side. So we verified that they didn’t penetrate the whole way.”

    The results “exceeded our wildest expectations,” says Short. “In every test we ran, the application of radiation slowed corrosion by a factor of two to three times.”

    More experiments, more insights

    In subsequent tests, the researchers more closely replicated commercially available molten salt by omitting the additive (europium fluoride) that they had used to speed up corrosion, and they tweaked the temperature for even more realistic conditions. “In carefully monitored tests, we found that by raising the temperature by 100 degrees Celsius, we could get corrosion to happen about 1,000 times faster than it would in a reactor,” says Short.
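    A large speedup from a modest temperature increase is what Arrhenius kinetics predicts for a strongly thermally activated process like corrosion. A back-of-the-envelope sketch of that relation (the activation energy and temperatures below are assumed for illustration, not values reported by the researchers):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def acceleration_factor(activation_energy, t1, t2):
    """Rate ratio k(t2)/k(t1) for an Arrhenius-type rate k = A*exp(-Ea/(R*T))."""
    return math.exp(-activation_energy / R * (1.0 / t2 - 1.0 / t1))

# Assumed values: Ea = 480 kJ/mol, heating from 873 K (600 C) to 973 K (700 C).
factor = acceleration_factor(480e3, 873.0, 973.0)
print(f"corrosion proceeds roughly {factor:.0f}x faster at the higher temperature")
```

    The exponential dependence on temperature is why a 100-degree increase can compress what would be months of reactor-relevant corrosion into hours of testing.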

    Images from experiments with the nickel-chromium alloy plus the molten salt without the corrosive additive yielded further insights. Electron microscope images of the side of the foil sample facing the molten salt showed that in sections only exposed to the molten salt, the corrosion is clearly focused on the weakest part of the structure — the boundaries between the grains in the metal. In sections that were exposed to both the molten salt and the proton beam, the corrosion isn’t limited to the grain boundaries but is more spread out over the surface. Experimental results showed that these cracks are shallower and less likely to cause a key component to break.

    Short explains the observations. Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are areas — called grain boundaries — where the atoms don’t line up as well. In the corrosion-only images, dark lines track the grain boundaries. Molten salt has seeped into the grain boundaries and pulled out salt-soluble atoms. In the corrosion-plus-irradiation images, the damage is more general. It’s not only the grain boundaries that get attacked but also regions within the grains.

    So, when the material is irradiated, the molten salt also removes material from within the grains. Over time, more material comes out of the grains themselves than from the spaces between them. The removal isn’t focused on the grain boundaries; it’s spread out over the whole surface. As a result, any cracks that form are shallower and more spread out, and the material is less likely to fail.

    Testing commercial alloys

    The experiments described thus far involved model alloys — simple combinations of elements that are good for studying science but would never be used in a reactor. In the next series of experiments, the researchers focused on three commercially available alloys that are composed of nickel, chromium, iron, molybdenum, and other elements in various combinations.

    Results from the experiments with the commercial alloys showed a consistent pattern — one that confirmed an idea that the researchers had going in: the higher the concentration of salt-soluble elements in the alloy, the worse the radiation-induced corrosion damage. Radiation will increase the rate at which salt-soluble atoms such as chromium leave the grain boundaries, hastening the corrosion process. However, if more insoluble elements such as nickel are present, those atoms will go into the salt more slowly. Over time, they’ll accumulate at the grain boundary and form a protective coating that blocks the grain boundary — a “self-healing mechanism that decelerates the rate of corrosion,” say the researchers.

    Thus, if an alloy consists mostly of atoms that don’t dissolve in molten salt, irradiation will cause them to form a protective coating that slows the corrosion process. But if an alloy consists mostly of atoms that dissolve in molten salt, irradiation will make them dissolve faster, speeding up corrosion. As Short summarizes, “In terms of corrosion, irradiation makes a good alloy better and a bad alloy worse.”

    Real-world relevance plus practical guidelines

    Short and Zhou find their results encouraging. In a nuclear reactor made of “good” alloys, the slowdown in corrosion will probably be even more pronounced than what they observed in their proton-based experiments because the neutrons that inflict the damage won’t chemically react with the salt to make it more corrosive. As a result, reactor designers could push the envelope more in their operating conditions, allowing them to get more power out of the same nuclear plant without compromising on safety.

    However, the researchers stress that there’s much work to be done. Many more projects are needed to explore and understand the exact corrosion mechanism in specific alloys under different irradiation conditions. In addition, their findings need to be replicated by groups at other institutions using their own facilities. “What needs to happen now is for other labs to build their own facilities and start verifying whether they get the same results as we did,” says Short. To that end, Short and Zhou have made the details of their experimental setup and all of their data freely available online. “We’ve also been actively communicating with researchers at other institutions who have contacted us,” adds Zhou. “When they’re planning to visit, we offer to show them demonstration experiments while they’re here.”

    But already their findings provide practical guidance for other researchers and equipment designers. For example, the standard way to quantify corrosion damage is by “mass loss,” a measure of how much weight the material has lost. But Short and Zhou consider mass loss a flawed measure of corrosion in molten salts. “If you’re a nuclear plant operator, you usually care whether your structural components are going to break,” says Short. “Our experiments show that radiation can change how deep the cracks are, when all other things are held constant. The deeper the cracks, the more likely a structural component is to break, leading to a reactor failure.”

    In addition, the researchers offer a simple rule for identifying good metal alloys for structural components in molten salt reactors. Manufacturers provide extensive lists of available alloys with different compositions, microstructures, and additives. Faced with a list of options for critical structures, the designer of a new nuclear fission or fusion reactor can simply examine the composition of each alloy being offered. The one with the highest content of corrosion-resistant elements such as nickel will be the best choice. Inside a nuclear reactor, that alloy should respond to a bombardment of radiation not by corroding more rapidly but by forming a protective layer that helps block the corrosion process. “That may seem like a trivial result, but the exact threshold where radiation decelerates corrosion depends on the salt chemistry, the density of neutrons in the reactor, their energies, and a few other factors,” says Short. “Therefore, the complete guidelines are a bit more complicated. But they’re presented in a straightforward way that users can understand and utilize to make a good choice for the molten salt–based reactor they’re designing.”
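    The selection rule can be caricatured in a few lines of code: given candidate compositions, prefer the alloy richest in elements that resist dissolving into molten salt. Everything in this sketch (the element set, alloy names, and weight fractions) is hypothetical, and as noted above the real thresholds depend on salt chemistry, neutron flux, and other factors:

```python
# Toy screening of candidate alloys by their content of salt-insoluble,
# corrosion-resistant elements. All names and compositions are hypothetical.

CORROSION_RESISTANT = {"Ni", "Mo"}  # assumed element set for illustration

def resistant_fraction(composition):
    """Sum the weight fractions of elements treated as corrosion-resistant."""
    return sum(f for elem, f in composition.items() if elem in CORROSION_RESISTANT)

candidates = {
    "Alloy A": {"Ni": 0.60, "Cr": 0.20, "Fe": 0.15, "Mo": 0.05},
    "Alloy B": {"Ni": 0.45, "Cr": 0.30, "Fe": 0.25},
}

best = max(candidates, key=lambda name: resistant_fraction(candidates[name]))
print(best)  # prints "Alloy A" (resistant fraction 0.65 vs. 0.45)
```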

    This research was funded, in part, by Eni S.p.A. through the MIT Plasma Science and Fusion Center’s Laboratory for Innovative Fusion Technologies. Earlier work was funded, in part, by the Transatomic Power Corporation and by the U.S. Department of Energy Nuclear Energy University Program. Equipment development and testing was supported by the Transatomic Power Corporation.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.