More stories

  • Lessons from Fukushima: Prepare for the unlikely

    When a devastating earthquake and tsunami overwhelmed the protective systems at the Fukushima Dai’ichi nuclear power plant complex in Japan in March 2011, it triggered a sequence of events leading to one of the worst releases of radioactive materials in the world to date. Although nuclear energy is having a revival as a low-emissions energy source to mitigate climate change, the Fukushima accident is still cited as a reason for hesitancy in adopting it.

    A new study synthesizes information from multidisciplinary sources to understand how the Fukushima Dai’ichi disaster unfolded, and points to the importance of mitigation measures and last lines of defense — even against accidents considered highly unlikely. These procedures have received relatively little attention, but they are critical in determining how severe the consequences of a reactor failure will be, the researchers say.

    The researchers note that their synthesis is one of the few attempts to look at data across disciplinary boundaries, including: the physics and engineering of what took place within the plant’s systems, the plant operators’ actions throughout the emergency, actions by emergency responders, the meteorology of radionuclide releases and transport, and the environmental and health consequences documented since the event.

    The study appears in the journal iScience, in an open-access paper by postdoc Ali Ayoub and Professor Haruko Wainwright at MIT, along with others in Switzerland, Japan, and New Mexico.

    Since 2013, Wainwright has been leading the research to integrate all the radiation monitoring data in the Fukushima region into unified maps. “I was staring at the contamination map for nearly 10 years, wondering what created the main plume extending in the northwest direction, but I could not find exact information,” Wainwright says. “Our study is unique because we started from the consequence, the contamination map, and tried to identify the key factors for the consequence. Other people study the Fukushima accident from the root cause, the tsunami.”

    One thing they found was that while all the operating reactors, units 1, 2, and 3, suffered core meltdowns as a result of the failure of emergency cooling systems, units 1 and 3 — although they did experience hydrogen explosions — did not release as much radiation to the environment because their venting systems essentially worked to relieve pressure inside the containment vessels as intended. But the same system in unit 2 failed badly.

    “People think that the hydrogen explosion or the core meltdown were the worst things, or the major driver of the radiological consequences of the accident,” Wainwright says, “but our analysis found that’s not the case.” Much more significant in terms of the radiological release was the failure of that one venting mechanism.

    “There is a pressure-release mechanism that goes through water where a lot of the radionuclides get filtered out,” she explains. That system was effective in units 1 and 3, filtering out more than 90 percent of the radioactive elements before the gas was vented. However, “in unit 2, that pressure release mechanism got stuck, and the operators could not manually open it.” A hydrogen explosion in unit 1 had damaged the pressure relief mechanism of unit 2. This led to a breach of the containment structure and direct, unfiltered venting to the atmosphere, which, according to the new study, was what produced the greatest amount of contamination from the whole weeks-long event.

    Another factor was the timing of the attempt to vent the pressure buildup in the reactor. Guidelines at the time, and to this day in many reactors, specified that no venting should take place until the pressure inside the reactor containment vessel reached a certain threshold, regardless of the wind direction at the time. In the case of Fukushima, an earlier venting could have dramatically reduced the impact: Much of the release happened when winds were blowing directly inland, but earlier the wind had been blowing offshore.

    “That pressure-release mechanism has not been a major focus of the engineering community,” she says. While there is appropriate attention to measures that prevent a core meltdown in the first place, “this sort of last line of defense has not been the main focus and should get more attention.”

    Wainwright says the study also underlines several successes in the management of the Fukushima accident. Many of the safety systems did work as they were designed. For example, even though the oldest reactor, unit 1, suffered the greatest internal damage, it released little radioactive material. Most people were able to evacuate from the 20-kilometer (12-mile) zone before the largest release happened. The mitigation measures were “somewhat successful,” Wainwright says. But there was tremendous confusion and anger during and after the accident because there were no preparations in place for such an event.

    Much work has focused on ways to prevent the kind of accidents that happened at Fukushima — for example, in the U.S., reactor operators can deploy portable backup power supplies to maintain proper reactor cooling at any reactor site. But the ongoing situation at the Zaporizhzhia nuclear complex in Ukraine, where nuclear safety is challenged by acts of war, demonstrates that despite engineers’ and operators’ best efforts to prevent it, “the totally unexpected could still happen,” Wainwright says.

    “The big-picture message is that we should have equal attention to both prevention and mitigation of accidents,” she says. “This is the essence of resilience, and it applies beyond nuclear power plants to all essential infrastructure of a functioning society, for example, the electric grid, the food and water supply, the transportation sector, etc.”

    One thing the researchers recommend is that in designing evacuation protocols, planners should make more effort to learn from much more frequent disasters such as wildfires and hurricanes. “We think getting more interdisciplinary, transdisciplinary knowledge from other kinds of disasters would be essential,” she says. Most of the emergency response strategies presently in place, she says, were designed in the 1980s and ’90s, and need to be modernized. “Consequences can be mitigated. A nuclear accident does not have to be a catastrophe, as is often portrayed in popular culture,” Wainwright says.

    The research team included Giovanni Sansavini at ETH Zurich in Switzerland; Randall Gauntt at Sandia National Laboratories in New Mexico; and Kimiaki Saito at the Japan Atomic Energy Agency.

  • Future nuclear power reactors could rely on molten salts — but what about corrosion?

    Most discussions of how to avert climate change focus on solar and wind generation as key to the transition to a future carbon-free power system. But Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering at MIT and associate director of the MIT Plasma Science and Fusion Center (PSFC), is impatient with such talk. “We can say we should have only wind and solar someday. But we don’t have the luxury of ‘someday’ anymore, so we can’t ignore other helpful ways to combat climate change,” he says. “To me, it’s an ‘all-hands-on-deck’ thing. Solar and wind are clearly a big part of the solution. But I think that nuclear power also has a critical role to play.”

    For decades, researchers have been working on designs for both fission and fusion nuclear reactors using molten salts as fuels or coolants. While those designs promise significant safety and performance advantages, there’s a catch: Molten salt and the impurities within it often corrode metals, ultimately causing them to crack, weaken, and fail. Inside a reactor, key metal components will be exposed not only to molten salt but also simultaneously to radiation, which generally has a detrimental effect on materials, making them more brittle and prone to failure. Will irradiation make metal components inside a molten salt-cooled nuclear reactor corrode even more quickly?

    Short and Weiyue Zhou PhD ’21, a postdoc in the PSFC, have been investigating that question for eight years. Their recent experimental findings show that certain alloys will corrode more slowly when they’re irradiated — and identifying them among all the available commercial alloys can be straightforward.

    The first challenge — building a test facility

    When Short and Zhou began investigating the effect of radiation on corrosion, practically no reliable facilities existed to look at the two effects at once. The standard approach was to examine such mechanisms in sequence: first corrode, then irradiate, then examine the impact on the material. That approach greatly simplifies the task for the researchers, but with a major trade-off. “In a reactor, everything is going to be happening at the same time,” says Short. “If you separate the two processes, you’re not simulating a reactor; you’re doing some other experiment that’s not as relevant.”

    So, Short and Zhou took on the challenge of designing and building an experimental setup that could do both at once. Short credits a team at the University of Michigan for paving the way by designing a device that could accomplish that feat in water, rather than molten salts. Even so, Zhou notes, it took them three years to come up with a device that would work with molten salts. Both researchers recall failure after failure, but the persistent Zhou ultimately tried a totally new design, and it worked. Short adds that it also took them three years to precisely replicate the salt mixture used by industry — another factor critical to getting a meaningful result. The hardest part was achieving, and verifying, the correct purity, which meant removing critical impurities such as moisture, oxygen, and certain other metals.

    As they were developing and testing their setup, Short and Zhou obtained initial results showing that proton irradiation did not always accelerate corrosion but sometimes actually decelerated it. They and others had hypothesized that possibility, but even so, they were surprised. “We thought we must be doing something wrong,” recalls Short. “Maybe we mixed up the samples or something.” But they subsequently made similar observations for a variety of conditions, increasing their confidence that their initial observations were not outliers.

    The successful setup

    Central to their approach is the use of accelerated protons to mimic the impact of the neutrons inside a nuclear reactor. Generating neutrons would be both impractical and prohibitively expensive, and the neutrons would make everything highly radioactive, posing health risks and requiring very long times for an irradiated sample to cool down enough to be examined. Using protons would enable Short and Zhou to examine radiation-altered corrosion both rapidly and safely.

    Key to their experimental setup is a test chamber that they attach to a proton accelerator. To prepare the test chamber for an experiment, they place inside it a thin disc of the metal alloy being tested on top of a pellet of salt. During the test, the entire foil disc is exposed to a bath of molten salt. At the same time, a beam of protons bombards the sample from the side opposite the salt pellet, but the proton beam is restricted to a circle in the middle of the foil sample. “No one can argue with our results then,” says Short. “In a single experiment, the whole sample is subjected to corrosion, and only a circle in the center of the sample is simultaneously irradiated by protons. We can see the curvature of the proton beam outline in our results, so we know which region is which.”

    The results with that arrangement confirmed the researchers’ preliminary findings, supporting their controversial hypothesis that, rather than accelerating corrosion, radiation would actually decelerate corrosion in some materials under some conditions. Fortunately, those happen to be the same conditions that will be experienced by metals in molten salt-cooled reactors.

    Why is that outcome controversial? A closeup look at the corrosion process will explain. When salt corrodes metal, the salt finds atomic-level openings in the solid, seeps in, and dissolves salt-soluble atoms, pulling them out and leaving a gap in the material — a spot where the material is now weak. “Radiation adds energy to atoms, causing them to be ballistically knocked out of their positions and move very fast,” explains Short. So, it makes sense that irradiating a material would cause atoms to move into the salt more quickly, increasing the rate of corrosion. Yet in some of their tests, the researchers found the opposite to be true.

    Experiments with “model” alloys

    The researchers’ first experiments in their novel setup involved “model” alloys consisting of nickel and chromium, a simple combination that would give them a first look at the corrosion process in action. In addition, they added europium fluoride to the salt, a compound known to speed up corrosion. In our everyday world, we often think of corrosion as taking years or decades, but in the more extreme conditions of a molten salt reactor it can noticeably occur in just hours. The researchers used the europium fluoride to speed up corrosion even more without changing the corrosion process. This allowed for more rapid determination of which materials, under which conditions, experienced more or less corrosion with simultaneous proton irradiation.

    The use of protons to emulate neutron damage to materials meant that the experimental setup had to be carefully designed and the operating conditions carefully selected and controlled. Protons are hydrogen ions (hydrogen atoms stripped of their electron), and under some conditions the hydrogen could chemically react with atoms in the sample foil, altering the corrosion response, or with ions in the salt, making the salt more corrosive. Therefore, the proton beam had to penetrate the foil sample but then stop in the salt as soon as possible. Under these conditions, the researchers found they could deliver a relatively uniform dose of radiation inside the foil layer while also minimizing chemical reactions in both the foil and the salt.

    Tests showed that a proton beam accelerated to 3 million electron-volts combined with a foil sample between 25 and 30 microns thick would work well for their nickel-chromium alloys. The temperature and duration of the exposure could be adjusted based on the corrosion susceptibility of the specific materials being tested.

    Optical images of samples examined after tests with the model alloys showed a clear boundary between the area that was exposed only to the molten salt and the area that was also exposed to the proton beam. Electron microscope images focusing on that boundary showed that the area that had been exposed only to the molten salt included dark patches where the molten salt had penetrated all the way through the foil, while the area that had also been exposed to the proton beam showed almost no such dark patches.

    To confirm that the dark patches were due to corrosion, the researchers cut through the foil sample to create cross sections. In them, they could see tunnels that the salt had dug into the sample. “For regions not under radiation, we see that the salt tunnels link the one side of the sample to the other side,” says Zhou. “For regions under radiation, we see that the salt tunnels stop more or less halfway and rarely reach the other side. So we verified that they didn’t penetrate the whole way.”

    The results “exceeded our wildest expectations,” says Short. “In every test we ran, the application of radiation slowed corrosion by a factor of two to three times.”

    More experiments, more insights

    In subsequent tests, the researchers more closely replicated commercially available molten salt by omitting the additive (europium fluoride) that they had used to speed up corrosion, and they tweaked the temperature for even more realistic conditions. “In carefully monitored tests, we found that by raising the temperature by 100 degrees Celsius, we could get corrosion to happen about 1,000 times faster than it would in a reactor,” says Short.

    Images from experiments with the nickel-chromium alloy plus the molten salt without the corrosive additive yielded further insights. Electron microscope images of the side of the foil sample facing the molten salt showed that in sections only exposed to the molten salt, the corrosion is clearly focused on the weakest part of the structure — the boundaries between the grains in the metal. In sections that were exposed to both the molten salt and the proton beam, the corrosion isn’t limited to the grain boundaries but is more spread out over the surface. Experimental results showed that these cracks are shallower and less likely to cause a key component to break.

    Short explains the observations. Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are areas — called grain boundaries — where the atoms don’t line up as well. In the corrosion-only images, dark lines track the grain boundaries. Molten salt has seeped into the grain boundaries and pulled out salt-soluble atoms. In the corrosion-plus-irradiation images, the damage is more general. It’s not only the grain boundaries that get attacked but also regions within the grains.

    So, when the material is irradiated, the molten salt also removes material from within the grains. Over time, more material comes out of the grains themselves than from the spaces between them. The removal isn’t focused on the grain boundaries; it’s spread out over the whole surface. As a result, any cracks that form are shallower and more spread out, and the material is less likely to fail.

    Testing commercial alloys

    The experiments described thus far involved model alloys — simple combinations of elements that are good for studying science but would never be used in a reactor. In the next series of experiments, the researchers focused on three commercially available alloys that are composed of nickel, chromium, iron, molybdenum, and other elements in various combinations.

    Results from the experiments with the commercial alloys showed a consistent pattern — one that confirmed an idea that the researchers had going in: the higher the concentration of salt-soluble elements in the alloy, the worse the radiation-induced corrosion damage. Radiation will increase the rate at which salt-soluble atoms such as chromium leave the grain boundaries, hastening the corrosion process. However, if there are more insoluble elements such as nickel present, those atoms will go into the salt more slowly. Over time, they’ll accumulate at the grain boundary and form a protective coating that blocks the grain boundary — a “self-healing mechanism that decelerates the rate of corrosion,” say the researchers.

    Thus, if an alloy consists mostly of atoms that don’t dissolve in molten salt, irradiation will cause them to form a protective coating that slows the corrosion process. But if an alloy consists mostly of atoms that dissolve in molten salt, irradiation will make them dissolve faster, speeding up corrosion. As Short summarizes, “In terms of corrosion, irradiation makes a good alloy better and a bad alloy worse.”

    Real-world relevance plus practical guidelines

    Short and Zhou find their results encouraging. In a nuclear reactor made of “good” alloys, the slowdown in corrosion will probably be even more pronounced than what they observed in their proton-based experiments because the neutrons that inflict the damage won’t chemically react with the salt to make it more corrosive. As a result, reactor designers could push the envelope more in their operating conditions, allowing them to get more power out of the same nuclear plant without compromising on safety.

    However, the researchers stress that there’s much work to be done. Many more projects are needed to explore and understand the exact corrosion mechanism in specific alloys under different irradiation conditions. In addition, their findings need to be replicated by groups at other institutions using their own facilities. “What needs to happen now is for other labs to build their own facilities and start verifying whether they get the same results as we did,” says Short. To that end, Short and Zhou have made the details of their experimental setup and all of their data freely available online. “We’ve also been actively communicating with researchers at other institutions who have contacted us,” adds Zhou. “When they’re planning to visit, we offer to show them demonstration experiments while they’re here.”

    But already their findings provide practical guidance for other researchers and equipment designers. For example, the standard way to quantify corrosion damage is by “mass loss,” a measure of how much weight the material has lost. But Short and Zhou consider mass loss a flawed measure of corrosion in molten salts. “If you’re a nuclear plant operator, you usually care whether your structural components are going to break,” says Short. “Our experiments show that radiation can change how deep the cracks are, when all other things are held constant. The deeper the cracks, the more likely a structural component is to break, leading to a reactor failure.”

    In addition, the researchers offer a simple rule for identifying good metal alloys for structural components in molten salt reactors. Manufacturers provide extensive lists of available alloys with different compositions, microstructures, and additives. Faced with a list of options for critical structures, the designer of a new nuclear fission or fusion reactor can simply examine the composition of each alloy being offered. The one with the highest content of corrosion-resistant elements such as nickel will be the best choice. Inside a nuclear reactor, that alloy should respond to a bombardment of radiation not by corroding more rapidly but by forming a protective layer that helps block the corrosion process. “That may seem like a trivial result, but the exact threshold where radiation decelerates corrosion depends on the salt chemistry, the density of neutrons in the reactor, their energies, and a few other factors,” says Short. “Therefore, the complete guidelines are a bit more complicated. But they’re presented in a straightforward way that users can understand and utilize to make a good choice for the molten salt–based reactor they’re designing.”
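
    To make that selection rule concrete, here is a minimal sketch of the screening step described above: ranking candidate alloys by their content of salt-insoluble, corrosion-resistant elements such as nickel. The alloy names and compositions below are hypothetical, not data from the study, and the sketch ignores the salt-chemistry and neutron-spectrum factors that the complete guidelines account for.

    ```python
    # Toy screening step only: rank candidate alloys by the fraction of
    # corrosion-resistant, salt-insoluble elements they contain.
    # Alloy names and compositions here are hypothetical, not from the study.

    # Nickel is the resistant element named in the article; which elements count
    # as "resistant" in practice depends on the particular salt chemistry.
    RESISTANT_ELEMENTS = {"Ni"}

    alloys = {
        "Alloy A": {"Ni": 0.70, "Cr": 0.16, "Fe": 0.05, "Mo": 0.09},
        "Alloy B": {"Ni": 0.58, "Cr": 0.22, "Fe": 0.20},
        "Alloy C": {"Fe": 0.70, "Cr": 0.18, "Ni": 0.12},
    }

    def resistant_fraction(composition):
        """Weight fraction of elements assumed not to dissolve readily in molten salt."""
        return sum(frac for element, frac in composition.items()
                   if element in RESISTANT_ELEMENTS)

    # Per the simple rule quoted above, higher resistant content is the better choice.
    for name in sorted(alloys, key=lambda n: resistant_fraction(alloys[n]), reverse=True):
        print(f"{name}: {resistant_fraction(alloys[name]):.0%} corrosion-resistant content")
    ```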

    This research was funded, in part, by Eni S.p.A. through the MIT Plasma Science and Fusion Center’s Laboratory for Innovative Fusion Technologies. Earlier work was funded, in part, by the Transatomic Power Corporation and by the U.S. Department of Energy Nuclear Energy University Program. Equipment development and testing was supported by the Transatomic Power Corporation.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Optimizing nuclear fuels for next-generation reactors

    In 2010, when Ericmoore Jossou was attending college in northern Nigeria, the power would flicker in and out all day, sometimes staying on for only a couple of hours at a time. The frustrating experience reaffirmed Jossou’s realization that the country’s sporadic energy supply was a problem. It was the beginning of his path toward nuclear engineering.

    Because of the energy crisis, “I told myself I was going to find myself in a career that allows me to develop energy technologies that can easily be scaled to meet the energy needs of the world, including my own country,” says Jossou, an assistant professor in a shared position between the departments of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick (1986) Professor, and of Electrical Engineering and Computer Science.

    Today, Jossou uses computer simulations for rational materials design: the AI-aided, purposeful development of cladding materials and fuels for next-generation nuclear reactors. As one of the shared faculty hires between the MIT Schwarzman College of Computing and departments across MIT, his appointment recognizes his commitment to computing for climate and the environment.

    A well-rounded education in Nigeria

    Growing up in Lagos, Jossou knew education was about more than just bookish knowledge, so he was eager to travel and experience other cultures. He would start in his own backyard by traveling across the Niger River and enrolling in Ahmadu Bello University in northern Nigeria. Moving from the south was a cultural education with a different language and different foods. It was here that Jossou got to try and love tuwo shinkafa, a northern Nigerian rice-based specialty, for the first time.

    After his undergraduate studies, armed with a bachelor’s degree in chemistry, Jossou was among a small cohort selected for a specialty master’s training program funded by the World Bank Institute and African Development Bank. The program at the African University of Science and Technology in Abuja, Nigeria, is a pan-African venture dedicated to nurturing homegrown science talent on the continent. Visiting professors from around the world taught intensive three-week courses, an experience which felt like drinking from a fire hose. The program widened Jossou’s views and he set his sights on a doctoral program with an emphasis on clean energy systems.

    A pivot to nuclear science

    While in Nigeria, Jossou learned of Professor Jerzy Szpunar at the University of Saskatchewan in Canada, who was looking for a student researcher to explore fuels and alloys for nuclear reactors. Before then, Jossou was lukewarm on nuclear energy, but the research sounded fascinating. The accident at Fukushima, Japan, was still fresh in recent memory, and Jossou remembered his early determination to address his own country’s energy crisis. He was sold on the idea and graduated with a doctoral degree from the University of Saskatchewan on an international dean’s scholarship.

    Jossou’s postdoctoral work included a brief stint at Brookhaven National Laboratory as a staff scientist. He leaped at the opportunity to join MIT NSE as a way of realizing his research interest and teaching future engineers. “I would really like to conduct cutting-edge research in nuclear materials design and to pass on my knowledge to the next generation of scientists and engineers, and there’s no better place to do that than at MIT,” Jossou says.

    Merging material science and computational modeling

    Jossou’s doctoral work on designing nuclear fuels for next-generation reactors forms the basis of research his lab is pursuing at MIT NSE. Nuclear reactors that were built in the 1950s and ’60s are getting a makeover in terms of improved accident tolerance. Reactors are not confined to one kind, either: We have micro reactors and are now considering ones using metallic nuclear fuels, Jossou points out. The diversity of options is enough to keep researchers busy testing materials fit for cladding, the lining that prevents corrosion of the fuel and release of radioactive fission products into the surrounding reactor coolant.

    The team is also investigating fuels that improve burn-up efficiencies, so they can last longer in the reactor. An intriguing approach has been to immobilize the gas bubbles that arise from the fission process, so they don’t grow and degrade the fuel.

    Since joining MIT in July 2023, Jossou has been setting up a lab that optimizes the composition of accident-tolerant nuclear fuels. He is leaning on his materials science background and bringing computer simulations and artificial intelligence into the mix.

    Computer simulations allow the researchers to narrow down the potential field of candidates, optimized for specific parameters, so they can synthesize only the most promising candidates in the lab. And AI’s predictive capabilities guide researchers on which materials composition to consider next. “We no longer depend on serendipity to choose our materials; our lab is based on rational materials design,” Jossou says. “We can rapidly design advanced nuclear fuels.”

    Advancing energy causes in Africa

    Now that he is at MIT, Jossou admits the view from the outside is different. He now harbors a different perspective on what Africa needs to address some of its challenges. “The starting point to solve our problems is not money; it needs to start with ideas,” he says. “We need to find highly skilled people who can actually solve problems.” That job involves adding economic value to the rich arrays of raw materials that the continent is blessed with. It frustrates Jossou that Niger, a country rich in uranium ore, has no nuclear reactors of its own and ships most of its ore to France. “The path forward is to find a way to refine these materials in Africa and to be able to power the industries on that continent as well,” Jossou says.

    Jossou is determined to do his part to eliminate these roadblocks.

    Anchored in mentorship, Jossou’s solution aims to train talent from Africa in his own lab. He has applied for an MIT Global Experiences MISTI grant to facilitate travel and research studies for Ghanaian scientists. “The goal is to conduct research in our facility and perhaps add value to indigenous materials,” Jossou says.

    Adding value has been a consistent theme of Jossou’s career. He remembers wanting to become a neurosurgeon after reading “Gifted Hands,” moved by the personal story of the author, Ben Carson. As Jossou grew older, however, he realized that becoming a doctor wasn’t necessarily what he wanted. Instead, he was looking to add value. “What I wanted was really to take on a career that allows me to solve a societal problem.” The societal problem of clean and safe energy for all is precisely what Jossou is working on today.

  • Study finds lands used for grazing can worsen or help climate change

    When it comes to global climate change, livestock grazing can be either a blessing or a curse, according to a new study, which offers clues on how to tell the difference.

    If managed properly, the study shows, grazing can actually increase the amount of carbon from the air that gets stored in the ground and sequestered for the long run. But if there is too much grazing, soil erosion can result, and the net effect is to cause more carbon losses, so that the land becomes a net carbon source, instead of a carbon sink. And the study found that the latter is far more common around the world today.

    The new work, published today in the journal Nature Climate Change, provides ways to determine the tipping point between the two, for grazing lands in a given climate zone and soil type. It also provides an estimate of the total amount of carbon that has been lost over past decades due to livestock grazing, and how much could be removed from the atmosphere if optimized grazing management were implemented. The study was carried out by Cesar Terrer, an assistant professor of civil and environmental engineering at MIT; Shuai Ren, a PhD student at the Chinese Academy of Sciences whose thesis is co-supervised by Terrer; and four others.

    “This has been a matter of debate in the scientific literature for a long time,” Terrer says. “In general, experiments show grazing decreases soil carbon stocks, but surprisingly, sometimes grazing increases soil carbon stocks, which is why it’s been puzzling.”

    What happens, he explains, is that “grazing could stimulate vegetation growth through easing resource constraints such as light and nutrients, thereby increasing root carbon inputs to soils, where carbon can stay for centuries or millennia.”

    But that only works up to a certain point, the team found after a careful analysis of 1,473 soil carbon observations from different grazing studies from many locations around the world. “When you cross a threshold in grazing intensity, or the amount of animals grazing there, that is when you start to see sort of a tipping point — a strong decrease in the amount of carbon in the soil,” Terrer explains.

    That loss is thought to be primarily from increased soil erosion on the denuded land. And with that erosion, Terrer says, “basically you lose a lot of the carbon that you have been locking in for centuries.”

    The various studies the team compiled, although they differed somewhat, essentially used similar methodology, which is to fence off a portion of land so that livestock can’t access it, and then after some time take soil samples from within the enclosure area, and from comparable nearby areas that have been grazed, and compare the content of carbon compounds.

    “Along with the data on soil carbon for the control and grazed plots,” he says, “we also collected a bunch of other information, such as the mean annual temperature of the site, mean annual precipitation, plant biomass, and properties of the soil, like pH and nitrogen content. And then, of course, we estimate the grazing intensity — aboveground biomass consumed, because that turns out to be the key parameter.”  

    With artificial intelligence models, the authors quantified the importance of each of these parameters — grazing intensity, temperature, precipitation, and soil properties — in modulating the sign (positive or negative) and magnitude of the impact of grazing on soil carbon stocks. “Interestingly, we found soil carbon stocks increase and then decrease with grazing intensity, rather than the expected linear response,” says Ren.
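
    To make the shape of that response concrete, here is a minimal illustration of a soil carbon response that first rises and then falls with grazing intensity. It is not the authors’ fitted model, and the numbers are entirely hypothetical; it simply shows how an optimal intensity and a tipping point emerge from such a curve. In the actual study, the analogous relationship is estimated from the 1,473 field observations and shifts with local climate and soil conditions, which is what makes the high-resolution maps possible.

    ```python
    # Toy illustration only: a hypothetical concave response of soil carbon to
    # grazing intensity, NOT the model fitted in the Nature Climate Change study.
    import numpy as np

    def soil_carbon_change(g, a=2.0, b=4.0):
        """Hypothetical change in soil carbon stocks (arbitrary units) as a function
        of grazing intensity g (fraction of aboveground biomass consumed)."""
        return a * g - b * g ** 2  # rises at low intensity, falls at high intensity

    intensity = np.linspace(0.0, 1.0, 1001)
    response = soil_carbon_change(intensity)

    optimum = intensity[np.argmax(response)]              # intensity that maximizes carbon gain
    past_peak = (intensity > optimum) & (response < 0.0)  # where the land becomes a net carbon source
    threshold = intensity[past_peak][0]

    print(f"optimal grazing intensity (toy curve): {optimum:.2f}")
    print(f"tipping-point intensity (toy curve):   {threshold:.2f}")
    ```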

    Having developed the model through AI methods and validated it, including by comparing its predictions with those based on underlying physical principles, they can then apply the model to estimating both past and future effects. “In this case,” Terrer says, “we use the model to quantify the historical losses in soil carbon stocks from grazing. And we found that 46 petagrams [billion metric tons] of soil carbon, down to a depth of one meter, have been lost in the last few decades due to grazing.”

    By way of comparison, the total amount of carbon emitted per year from all fossil fuels is about 10 petagrams, so the loss from grazing equals more than four years’ worth of all the world’s fossil emissions combined.
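
    The arithmetic behind that comparison, using the round figures quoted above, is straightforward:

    ```python
    # Quick check of the comparison above, using the figures quoted in the article.
    soil_carbon_lost_pg = 46        # petagrams of soil carbon lost to grazing (top 1 meter)
    fossil_carbon_pg_per_year = 10  # approximate annual carbon emissions from fossil fuels

    years_equivalent = soil_carbon_lost_pg / fossil_carbon_pg_per_year
    print(f"Grazing-driven soil carbon loss ~ {years_equivalent:.1f} years of fossil emissions")
    # Prints about 4.6 years, i.e. "more than four years' worth."
    ```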

    What they found was “an overall decline in soil carbon stocks, but with a lot of variability,” Terrer says. The analysis showed that the interplay between grazing intensity and environmental conditions such as temperature could explain the variability, with higher grazing intensity and hotter climates resulting in greater carbon loss. “This means that policy-makers should take into account local abiotic and biotic factors to manage rangelands efficiently,” Ren notes. “By ignoring such complex interactions, we found that using IPCC [Intergovernmental Panel on Climate Change] guidelines would underestimate grazing-induced soil carbon loss by a factor of three globally.”

    Using an approach that incorporates local environmental conditions, the team produced global, high-resolution maps of optimal grazing intensity and the threshold of intensity at which carbon starts to decrease very rapidly. These maps are expected to serve as important benchmarks for evaluating existing grazing practices and provide guidance to local farmers on how to effectively manage their grazing lands.

    Then, using that map, the team estimated how much carbon could be captured if all grazing lands were limited to their optimum grazing intensity. Currently, the authors found, about 20 percent of all pasturelands have crossed the thresholds, leading to severe carbon losses. However, they found that under the optimal levels, global grazing lands would sequester 63 petagrams of carbon. “It is amazing,” Ren says. “This value is roughly equivalent to a 30-year carbon accumulation from global natural forest regrowth.”

    That would be no simple task, of course. To achieve optimal levels, the team found that approximately 75 percent of all grazing areas need to reduce grazing intensity. Overall, if the world seriously reduces the amount of grazing, “you have to reduce the amount of meat that’s available for people,” Terrer says.

    “Another option is to move cattle around,” he says, “from areas that are more severely affected by grazing intensity, to areas that are less affected. Those rotations have been suggested as an opportunity to avoid the more drastic declines in carbon stocks without necessarily reducing the availability of meat.”

    This study didn’t delve into these social and economic implications, Terrer says. “Our role is to just point out what would be the opportunity here. It shows that shifts in diets can be a powerful way to mitigate climate change.”

    “This is a rigorous and careful analysis that provides our best look to date at soil carbon changes due to livestock grazing practiced worldwide,” says Ben Bond-Lamberty, a terrestrial ecosystem research scientist at Pacific Northwest National Laboratory, who was not associated with this work. “The authors’ analysis gives us a unique estimate of soil carbon losses due to grazing and, intriguingly, where and how the process might be reversed.”

    He adds: “One intriguing aspect to this work is the discrepancies between its results and the guidelines currently used by the IPCC — guidelines that affect countries’ commitments, carbon-market pricing, and policies.” However, he says, “As the authors note, the amount of carbon historically grazed soils might be able to take up is small relative to ongoing human emissions. But every little bit helps!”

    “Improved management of working lands can be a powerful tool to combat climate change,” says Jonathan Sanderman, carbon program director of the Woodwell Climate Research Center in Falmouth, Massachusetts, who was not associated with this work. He adds, “This work demonstrates that while, historically, grazing has been a large contributor to climate change, there is significant potential to decrease the climate impact of livestock by optimizing grazing intensity to rebuild lost soil carbon.”

    Terrer states that for now, “we have started a new study, to evaluate the consequences of shifts in diets for carbon stocks. I think that’s the million-dollar question: How much carbon could you sequester, compared to business as usual, if diets shift to more vegan or vegetarian?” The answers will not be simple, because a shift to more vegetable-based diets would require more cropland, which can also have different environmental impacts. Pastures take more land than crops, but produce different kinds of emissions. “What’s the overall impact for climate change? That is the question we’re interested in,” he says.

    The research team included Juan Li, Yingfao Cao, Sheshan Yang, and Dan Liu, all with the Chinese Academy of Sciences. The work was supported by the Second Tibetan Plateau Scientific Expedition and Research Program, and the Science and Technology Major Project of Tibetan Autonomous Region of China.

  • Reducing pesticide use while increasing effectiveness

    Farming can be a low-margin, high-risk business, subject to weather and climate patterns, insect population cycles, and other unpredictable factors. Farmers need to be savvy managers of the many resources they deal with, and chemical fertilizers and pesticides are among their major recurring expenses.

    Despite the importance of these chemicals, a lack of technology for monitoring and optimizing sprays has forced farmers to rely on personal experience and rules of thumb to decide how to apply them. As a result, the chemicals tend to be over-sprayed, leading to runoff into waterways and buildup in the soil.

    That could change, thanks to a new approach of feedback-optimized spraying, invented by AgZen, an MIT spinout founded in 2020 by Professor Kripa Varanasi and Vishnu Jayaprakash SM ’19, PhD ’22.

    AgZen has developed a system for farming that can monitor exactly how much of the sprayed chemicals adheres to plants, in real time, as the sprayer drives through a field. Built-in software running on a tablet shows the operator exactly how much of each leaf has been covered by the spray.

    Over the past decade, AgZen’s founders have developed products and technologies to control the interactions of droplets and sprays with plant surfaces. The Boston-based venture-backed company launched a new commercial product in 2024 and is currently piloting another related product. Field tests of both have shown the products can help farmers spray more efficiently and effectively, using fewer chemicals overall.

    “Worldwide, farms spend approximately $60 billion a year on pesticides. Our objective is to reduce the number of pesticides sprayed and lighten the financial burden on farms without sacrificing effective pest management,” Varanasi says.

    Getting droplets to stick

    While the world pesticide market is growing rapidly, a lot of the pesticides sprayed don’t reach their target. A significant portion bounces off the plant surfaces, lands on the ground, and becomes part of the runoff that flows to streams and rivers, often causing serious pollution. Some of these pesticides can be carried away by wind over very long distances.

    “Drift, runoff, and poor application efficiency are well-known, longstanding problems in agriculture, but we can fix this by controlling and monitoring how sprayed droplets interact with leaves,” Varanasi says.

    With support from the MIT Tata Center and the Abdul Latif Jameel Water and Food Systems Lab, Varanasi and his team analyzed how droplets strike plant surfaces, and explored ways to increase application efficiency. This research led them to develop a novel system of nozzles that cloak droplets with compounds that enhance their retention on the leaves, a product they call EnhanceCoverage.

    Field studies across regions — from Massachusetts to California to Italy and France — showed that this droplet-optimization system could allow farmers to cut the amount of chemicals needed by more than half because more of the sprayed substances would stick to the leaves.

    Measuring coverage

    However, in trying to bring this technology to market, the researchers faced a sticky problem: Nobody knew how well pesticide sprays were adhering to the plants in the first place, so how could AgZen say that the coverage was better with its new EnhanceCoverage system?

    “I had grown up spraying with a backpack on a small farm in India, so I knew this was an issue,” Jayaprakash says. “When we spoke to growers, they told me how complicated spraying is when you’re on a large machine. Whenever you spray, there are so many things that can influence how effective your spray is. How fast do you drive the sprayer? What flow rate are you using for the chemicals? What chemical are you using? What’s the age of the plants, what’s the nozzle you’re using, what is the weather at the time? All these things influence agrochemical efficiency.”

    Agricultural spraying essentially comes down to dissolving a chemical in water and then spraying droplets onto the plants. “But the interaction between a droplet and the leaf is complex,” Varanasi says. “We were coming in with ways to optimize that, but what the growers told us is, hey, we’ve never even really looked at that in the first place.”

    Although farmers have been spraying agricultural chemicals on a large scale for about 80 years, they’ve “been forced to rely on general rules of thumb and pick all these interlinked parameters, based on what’s worked for them in the past. You pick a set of these parameters, you go spray, and you’re basically praying for outcomes in terms of how effective your pest control is,” Varanasi says.

    Before AgZen could sell farmers on the new system to improve droplet coverage, the company had to invent a way to measure precisely how much spray was adhering to plants in real time.

    Comparing before and after

    The system they came up with, which they tested extensively on farms across the country last year, involves a unit that can be bolted onto the spraying arm of virtually any sprayer. It carries two sensor stacks, one just ahead of the sprayer nozzles and one behind. Then, built-in software running on a tablet shows the operator exactly how much of each leaf has been covered by the spray. It also computes how much those droplets will spread out or evaporate, leading to a precise estimate of the final coverage.

    “There’s a lot of physics that governs how droplets spread and evaporate, and this has been incorporated into software that a farmer can use,” Varanasi says. “We bring a lot of our expertise into understanding droplets on leaves. All these factors, like how temperature and humidity influence coverage, have always been nebulous in the spraying world. But now you have something that can be exact in determining how well your sprays are doing.”

    “We’re not only measuring coverage, but then we recommend how to act,” says Jayaprakash, who is AgZen’s CEO. “With the information we collect in real-time and by using AI, RealCoverage tells operators how to optimize everything on their sprayer, from which nozzle to use, to how fast to drive, to how many gallons of spray is best for a particular chemical mix on a particular acre of a crop.”

    The tool was developed to prove how much AgZen’s EnhanceCoverage nozzle system (which will be launched in 2025) improves coverage. But it turns out that monitoring and optimizing droplet coverage on leaves in real-time with this system can itself yield major improvements.

    “We worked with large commercial farms last year in specialty and row crops,” Jayaprakash says. “When we saved our pilot customers up to 50 percent of their chemical cost at a large scale, they were very surprised.” He says the tool has reduced chemical costs and volume in fallow field burndowns, weed control in soybeans, defoliation in cotton, and fungicide and insecticide sprays in vegetables and fruits. Along with data from commercial farms, field trials conducted by three leading agricultural universities have also validated these results.

    “Across the board, we were able to save between 30 and 50 percent on chemical costs and increase crop yields by enabling better pest control,” Jayaprakash says. “By focusing on the droplet-leaf interface, our product can help any foliage spray throughout the year, whereas most technological advancements in this space recently have been focused on reducing herbicide use alone.” The company now intends to lease the system across thousands of acres this year.

    And these efficiency gains can lead to significant returns at scale, he emphasizes: In the U.S., farmers currently spend $16 billion a year on chemicals, to protect about $200 billion of crop yields.

    The company launched its first product, the coverage optimization system called RealCoverage, this year, reaching a wide variety of farms with different crops and in different climates. “We’re going from proof-of-concept with pilots in large farms to a truly massive scale on a commercial basis with our lease-to-own program,” Jayaprakash says.

    “We’ve also been tapped by the USDA to help them evaluate practices to minimize pesticides in watersheds,” Varanasi says, noting that RealCoverage can also be useful for regulators, chemical companies, and agricultural equipment manufacturers.

    Once AgZen has proven the effectiveness of using coverage as a decision metric, and after the RealCoverage optimization system is widely in practice, the company will next roll out its second product, EnhanceCoverage, designed to maximize droplet adhesion. Because that system will require replacing all the nozzles on a sprayer, the researchers are doing pilots this year but will wait for a full rollout in 2025, after farmers have gained experience and confidence with their initial product.

    “There is so much wastage,” Varanasi says. “Yet farmers must spray to protect crops, and there is a lot of environmental impact from this. So, after all this work over the years, learning about how droplets stick to surfaces and so on, now the culmination of it in all these products for me is amazing, to see all this come alive, to see that we’ll finally be able to solve the problem we set out to solve and help farmers.”

  • Tests show high-temperature superconducting magnets are ready for fusion

    In the predawn hours of Sept. 5, 2021, engineers achieved a major milestone in the labs of MIT’s Plasma Science and Fusion Center (PSFC), when a new type of magnet, made from high-temperature superconducting material, achieved a world-record magnetic field strength of 20 tesla for a large-scale magnet. That’s the intensity needed to build a fusion power plant that is expected to produce a net output of power and potentially usher in an era of virtually limitless power production.

    The test was immediately declared a success, having met all the criteria established for the design of the new fusion device, dubbed SPARC, for which the magnets are the key enabling technology. Champagne corks popped as the weary team of experimenters, who had labored long and hard to make the achievement possible, celebrated their accomplishment.

    But that was far from the end of the process. Over the ensuing months, the team tore apart and inspected the components of the magnet, pored over and analyzed the data from hundreds of instruments that recorded details of the tests, and performed two additional test runs on the same magnet, ultimately pushing it to its breaking point in order to learn the details of any possible failure modes.

    All of this work has now culminated in a detailed report by researchers at PSFC and MIT spinout company Commonwealth Fusion Systems (CFS), published in a collection of six peer-reviewed papers in a special edition of the March issue of IEEE Transactions on Applied Superconductivity. Together, the papers describe the design and fabrication of the magnet and the diagnostic equipment needed to evaluate its performance, as well as the lessons learned from the process. Overall, the team found, the predictions and computer modeling were spot-on, verifying that the magnet’s unique design elements could serve as the foundation for a fusion power plant.

    Enabling practical fusion power

    The successful test of the magnet, says Hitachi America Professor of Engineering Dennis Whyte, who recently stepped down as director of the PSFC, was “the most important thing, in my opinion, in the last 30 years of fusion research.”

    Before the Sept. 5 demonstration, the best-available superconducting magnets were powerful enough to potentially achieve fusion energy — but only at sizes and costs that could never be practical or economically viable. Then, when the tests showed the practicality of such a strong magnet at a greatly reduced size, “overnight, it basically changed the cost per watt of a fusion reactor by a factor of almost 40 in one day,” Whyte says.

    “Now fusion has a chance,” Whyte adds. Tokamaks, the most widely used design for experimental fusion devices, “have a chance, in my opinion, of being economical because you’ve got a quantum change in your ability, with the known confinement physics rules, about being able to greatly reduce the size and the cost of objects that would make fusion possible.”

    The comprehensive data and analysis from the PSFC’s magnet test, as detailed in the six new papers, have demonstrated that plans for a new generation of fusion devices — the one designed by MIT and CFS, as well as similar designs by other commercial fusion companies — are built on a solid foundation in science.

    The superconducting breakthrough

    Fusion, the process of combining light atoms to form heavier ones, powers the sun and stars, but harnessing that process on Earth has proved to be a daunting challenge, with decades of hard work and many billions of dollars spent on experimental devices. The long-sought, but never yet achieved, goal is to build a fusion power plant that produces more energy than it consumes. Such a power plant could produce electricity without emitting greenhouse gases during operation, while generating very little radioactive waste. Fusion’s fuel, a form of hydrogen that can be derived from seawater, is virtually limitless.

    But to make it work requires compressing the fuel at extraordinarily high temperatures and pressures, and since no known material could withstand such temperatures, the fuel must be held in place by extremely powerful magnetic fields. Producing such strong fields requires superconducting magnets, but all previous fusion magnets have been made with a superconducting material that requires frigid temperatures of about 4 degrees above absolute zero (4 kelvins, or -270 degrees Celsius). In the last few years, a newer material nicknamed REBCO, for rare-earth barium copper oxide, was added to fusion magnets, and allows them to operate at 20 kelvins, a temperature that, despite being only 16 kelvins warmer, brings significant advantages in terms of material properties and practical engineering.

    Taking advantage of this new higher-temperature superconducting material was not just a matter of substituting it in existing magnet designs. Instead, “it was a rework from the ground up of almost all the principles that you use to build superconducting magnets,” Whyte says. The new REBCO material is “extraordinarily different than the previous generation of superconductors. You’re not just going to adapt and replace, you’re actually going to innovate from the ground up.” The new papers in Transactions on Applied Superconductivity describe the details of that redesign process, now that patent protection is in place.

    A key innovation: no insulation

    One of the dramatic innovations, which had many others in the field skeptical of its chances of success, was the elimination of insulation around the thin, flat ribbons of superconducting tape that formed the magnet. Like virtually all electrical wires, conventional superconducting magnets are fully protected by insulating material to prevent short-circuits between the wires. But in the new magnet, the tape was left completely bare; the engineers relied on REBCO’s much greater conductivity to keep the current flowing through the material.

    “When we started this project, in let’s say 2018, the technology of using high-temperature superconductors to build large-scale high-field magnets was in its infancy,” says Zach Hartwig, the Robert N. Noyce Career Development Professor in the Department of Nuclear Science and Engineering. Hartwig has a co-appointment at the PSFC and is the head of its engineering group, which led the magnet development project. “The state of the art was small benchtop experiments, not really representative of what it takes to build a full-size thing. Our magnet development project started at benchtop scale and ended up at full scale in a short amount of time,” he adds, noting that the team built a 20,000-pound magnet that produced a steady, even magnetic field of just over 20 tesla — far beyond any such field ever produced at large scale.

    “The standard way to build these magnets is you would wind the conductor and you have insulation between the windings, and you need insulation to deal with the high voltages that are generated during off-normal events such as a shutdown.” Eliminating the layers of insulation, he says, “has the advantage of being a low-voltage system. It greatly simplifies the fabrication processes and schedule.” It also leaves more room for other elements, such as more cooling or more structure for strength.

    The magnet assembly is a slightly smaller-scale version of the ones that will form the donut-shaped chamber of the SPARC fusion device now being built by CFS in Devens, Massachusetts. It consists of 16 plates, called pancakes, each bearing a spiral winding of the superconducting tape on one side and cooling channels for helium gas on the other.

    But the no-insulation design was considered risky, and a lot was riding on the test program. “This was the first magnet at any sufficient scale that really probed what is involved in designing and building and testing a magnet with this so-called no-insulation no-twist technology,” Hartwig says. “It was very much a surprise to the community when we announced that it was a no-insulation coil.”

    Pushing to the limit … and beyond

    The initial test, described in previous papers, proved that the design and manufacturing process not only worked but was highly stable — something that some researchers had doubted. The next two test runs, also performed in late 2021, then pushed the device to the limit by deliberately creating unstable conditions, including a complete shutoff of incoming power, which can lead to catastrophic overheating. Known as quenching, this is considered a worst-case scenario for the operation of such magnets, with the potential to destroy the equipment.
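
    To see why a quench is treated as a worst-case event, it helps to look at the energy stored in the magnetic field itself: if the superconductor suddenly turns resistive, that energy has nowhere to go but into heat in the coil. The short sketch below is only a rough illustration; the field volume and the heat capacity are assumed round numbers, not parameters of the actual test magnet, and a real quench concentrates the heating in a small hot spot rather than spreading it uniformly.

        # Rough, illustrative estimate of the energy stored in a high-field magnet
        # and the temperature rise if that energy were dumped uniformly into the
        # coil as heat. Field volume and specific heat are assumptions, not
        # specifications of the MIT/CFS test magnet.
        import math

        MU_0 = 4 * math.pi * 1e-7     # vacuum permeability, T*m/A

        B = 20.0                      # peak field, tesla (the ~20 T figure above)
        volume_m3 = 1.0               # assumed field volume, m^3 (illustrative)
        coil_mass_kg = 9000.0         # roughly the 20,000-pound magnet described above
        specific_heat = 400.0         # J/(kg*K), rough value for copper/steel near room temperature

        energy_density = B**2 / (2 * MU_0)          # J/m^3
        stored_energy = energy_density * volume_m3  # J

        # If the energy heated the whole coil uniformly (a real quench does not --
        # it concentrates heat locally, which is what causes damage):
        uniform_dT = stored_energy / (coil_mass_kg * specific_heat)

        print(f"Energy density at {B} T: {energy_density / 1e6:.0f} MJ/m^3")
        print(f"Stored energy in {volume_m3} m^3: {stored_energy / 1e6:.0f} MJ")
        print(f"Uniform temperature rise: ~{uniform_dT:.0f} K (local heating is far more severe)")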

    Part of the mission of the test program, Hartwig says, was “to actually go off and intentionally quench a full-scale magnet, so that we can get the critical data at the right scale and the right conditions to advance the science, to validate the design codes, and then to take the magnet apart and see what went wrong, why did it go wrong, and how do we take the next iteration toward fixing that. … It was a very successful test.”

    That final test, which ended with the melting of one corner of one of the 16 pancakes, produced a wealth of new information, Hartwig says. For one thing, the team had been using several different computational models to design and predict various aspects of the magnet’s performance, and for the most part, the models agreed in their overall predictions and were well validated by the series of tests and real-world measurements. But in predicting the effect of the quench, the models’ predictions diverged, so it was necessary to get the experimental data to evaluate their validity.

    “The highest-fidelity models that we had predicted almost exactly how the magnet would warm up, to what degree it would warm up as it started to quench, and where the resulting damage to the magnet would be,” he says. As described in detail in one of the new reports, “That test actually told us exactly the physics that was going on, and it told us which models were useful going forward and which to leave by the wayside because they’re not right.”

    Whyte says, “Basically we did the worst thing possible to a coil, on purpose, after we had tested all other aspects of the coil performance. And we found that most of the coil survived with no damage,” while one isolated area sustained some melting. “It’s like a few percent of the volume of the coil that got damaged.” And that led to revisions in the design that are expected to prevent such damage in the actual fusion device magnets, even under the most extreme conditions.

    Hartwig emphasizes that a major reason the team was able to accomplish such a radical new record-setting magnet design, and get it right the very first time and on a breakneck schedule, was thanks to the deep level of knowledge, expertise, and equipment accumulated over decades of operation of the Alcator C-Mod tokamak, the Francis Bitter Magnet Laboratory, and other work carried out at PSFC. “This goes to the heart of the institutional capabilities of a place like this,” he says. “We had the capability, the infrastructure, and the space and the people to do these things under one roof.”

    The collaboration with CFS was also key, he says, with MIT and CFS combining the most powerful aspects of an academic institution and private company to do things together that neither could have done on their own. “For example, one of the major contributions from CFS was leveraging the power of a private company to establish and scale up a supply chain at an unprecedented level and timeline for the most critical material in the project: 300 kilometers (186 miles) of high-temperature superconductor, which was procured with rigorous quality control in under a year, and integrated on schedule into the magnet.”

    The integration of the two teams, those from MIT and those from CFS, also was crucial to the success, he says. “We thought of ourselves as one team, and that made it possible to do what we did.”

  • in

    Power when the sun doesn’t shine

    In 2016, at the huge Houston energy conference CERAWeek, MIT materials scientist Yet-Ming Chiang found himself talking to a Tesla executive about a thorny problem: how to store the output of solar panels and wind turbines for long durations.        

    Chiang, the Kyocera Professor of Materials Science and Engineering, and Mateo Jaramillo, a vice president at Tesla, knew that utilities lacked a cost-effective way to store renewable energy to cover peak levels of demand and to bridge the gaps during windless and cloudy days. They also knew that the scarcity of raw materials used in conventional energy storage devices needed to be addressed if renewables were ever going to displace fossil fuels on the grid at scale.

    Energy storage technologies can facilitate access to renewable energy sources, boost the stability and reliability of power grids, and ultimately accelerate grid decarbonization. The global market for these systems — essentially large batteries — is expected to grow tremendously in the coming years. A study by the nonprofit LDES (Long Duration Energy Storage) Council pegs the long-duration energy storage market at between 80 and 140 terawatt-hours by 2040. “That’s a really big number,” Chiang notes. “Every 10 people on the planet will need access to the equivalent of one EV [electric vehicle] battery to support their energy needs.”
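
    Chiang’s comparison can be checked with rough arithmetic. The sketch below assumes a world population of about 8 billion and an EV pack of roughly 100 kilowatt-hours; both are round figures chosen for illustration, not numbers from the LDES Council study.

        # Back-of-envelope check of the "one EV battery per 10 people" comparison.
        # Population and EV pack size are assumed round numbers.

        ldes_market_twh = (80, 140)   # LDES Council projection for 2040, cited above
        population = 8e9              # assumed world population
        ev_pack_kwh = 100.0           # assumed capacity of one EV battery pack

        for twh in ldes_market_twh:
            kwh_total = twh * 1e9     # 1 TWh = 1e9 kWh
            kwh_per_10_people = kwh_total / (population / 10)
            packs = kwh_per_10_people / ev_pack_kwh
            print(f"{twh} TWh -> ~{kwh_per_10_people:.0f} kWh per 10 people "
                  f"(about {packs:.1f} EV packs)")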

    In 2017, one year after they met in Houston, Chiang and Jaramillo joined forces to co-found Form Energy in Somerville, Massachusetts, with MIT graduates Marco Ferrara SM ’06, PhD ’08 and William Woodford PhD ’13, and energy storage veteran Ted Wiley.

    “There is a burgeoning market for electrical energy storage because we want to achieve decarbonization as fast and as cost-effectively as possible,” says Ferrara, Form’s senior vice president in charge of software and analytics.

    Investors agreed. Over the next six years, Form Energy would raise more than $800 million in venture capital.

    Bridging gaps

    The simplest battery consists of an anode, a cathode, and an electrolyte. During discharge, electrons flow through an external circuit from the negative anode to the positive cathode, while ions move through the electrolyte inside the cell to complete the circuit. During charge, an external voltage drives the reactions in reverse, and the electrons return to where they started. The materials used for the anode, cathode, and electrolyte determine the battery’s weight, power, and cost “entitlement,” which is the total cost at the component level.

    During the 1980s and 1990s, the use of lithium revolutionized batteries, making them smaller, lighter, and able to hold a charge for longer. The storage devices Form Energy has devised are rechargeable batteries based on iron, which has several advantages over lithium. A big one is cost.

    Chiang once declared to the MIT Club of Northern California, “I love lithium-ion.” Two of the four MIT spinoffs Chiang founded center on innovative lithium-ion batteries. But at hundreds of dollars a kilowatt-hour (kWh) and with a storage capacity typically measured in hours, lithium-ion was ill-suited for the use he now had in mind.

    The approach Chiang envisioned had to be cost-effective enough to boost the attractiveness of renewables. Making solar and wind energy reliable enough for millions of customers meant storing it long enough to fill the gaps created by extreme weather, grid outages, lulls in the wind, and stretches of cloudy days.

    To be competitive with legacy power plants, Chiang’s method had to come in at around $20 per kilowatt-hour of stored energy — one-tenth the cost of lithium-ion battery storage.

    But how to transition from expensive batteries that store and discharge over a couple of hours to some as-yet-undefined, cheap, longer-duration technology?

    “One big ball of iron”

    That’s where Ferrara comes in. Ferrara has a PhD in nuclear engineering from MIT and a PhD in electrical engineering and computer science from the University of L’Aquila in his native Italy. In 2017, as a research affiliate at the MIT Department of Materials Science and Engineering, he worked with Chiang to model the grid’s need to manage renewables’ intermittency.

    How intermittent depends on where you are. In the United States, for instance, there’s the windy Great Plains; the sun-drenched, relatively low-wind deserts of Arizona, New Mexico, and Nevada; and the often-cloudy Pacific Northwest.

    Ferrara, in collaboration with Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society and her MIT team, modeled four representative locations in the United States and concluded that energy storage with capacity costs below roughly $20/kWh and discharge durations of multiple days would allow a wind-solar mix to provide cost-competitive, firm electricity in resource-abundant locations.
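
    A stripped-down version of that sizing logic is sketched below: given an average load and a multi-day lull in wind and sun, it computes the energy capacity the storage would need and compares the capital cost at a lithium-ion-like price versus the roughly $20/kWh target. The load, lull length, and price points are assumed for illustration; this is not the model Ferrara and Trancik used.

        # Toy storage-sizing sketch: energy needed to ride through a multi-day
        # renewable lull, and the implied capital cost at two energy-capacity
        # prices. All inputs are illustrative assumptions.

        avg_load_mw = 100.0     # assumed average load to be served
        lull_days = 3.0         # assumed windless / cloudy stretch to cover

        energy_needed_mwh = avg_load_mw * 24 * lull_days

        price_points = [("lithium-ion-like", 200.0), ("long-duration target", 20.0)]  # $/kWh
        for name, cost_per_kwh in price_points:
            capital_cost = energy_needed_mwh * 1000 * cost_per_kwh   # MWh -> kWh
            print(f"{name:>22}: {energy_needed_mwh:,.0f} MWh of storage "
                  f"-> ${capital_cost / 1e6:,.0f}M in energy-capacity cost")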

    Now that they had cost and duration targets, they turned their attention to materials. At the price point Form Energy was aiming for, lithium was out of the question. Chiang looked at plentiful and cheap sulfur. But a sulfur, sodium, water, and air battery had technical challenges.

    Thomas Edison once used iron as an electrode, and iron-air batteries were first studied in the 1960s. They were too heavy to make good transportation batteries. But this time, Chiang and team were looking at a battery that sat on the ground, so weight didn’t matter. Their priorities were cost and availability.

    “Iron is produced, mined, and processed on every continent,” Chiang says. “The Earth is one big ball of iron. We wouldn’t ever have to worry about even the most ambitious projections of how much storage the world might use by mid-century.” If Form ever moves into the residential market, “it’ll be the safest battery you’ve ever parked at your house,” Chiang laughs. “Just iron, air, and water.”

    Scientists call it reversible rusting. While discharging, the battery takes in oxygen and converts iron to rust. Applying an electrical current converts the rusty pellets back to iron, and the battery “breathes out” oxygen as it charges. “In chemical terms, you have iron, and it becomes iron hydroxide,” Chiang says. “That means electrons were extracted. You get those electrons to go through the external circuit, and now you have a battery.”
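
    In electrochemical terms, the discharge chemistry Chiang describes is often written in the simplified form below, showing only the first oxidation step to iron(II) hydroxide (the article does not specify Form Energy’s exact cell reactions):

        \begin{aligned}
        \text{anode (discharge):}\quad & \mathrm{Fe} + 2\,\mathrm{OH^-} \;\rightarrow\; \mathrm{Fe(OH)_2} + 2e^- \\
        \text{cathode (discharge):}\quad & \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{H_2O} + 2e^- \;\rightarrow\; 2\,\mathrm{OH^-} \\
        \text{overall:}\quad & \mathrm{Fe} + \tfrac{1}{2}\,\mathrm{O_2} + \mathrm{H_2O} \;\rightarrow\; \mathrm{Fe(OH)_2}
        \end{aligned}

    Charging drives the overall reaction in reverse, converting the hydroxide back to metallic iron and releasing oxygen.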

    Form Energy’s battery modules are approximately the size of a washer-and-dryer unit. They are stacked in 40-foot containers, and several containers are electrically connected with power conversion systems to build storage plants that can cover several acres.

    The right place at the right time

    The modules don’t look or act like anything utilities have contracted for before.

    That’s one of Form’s key challenges. “There is not widespread knowledge of needing these new tools for decarbonized grids,” Ferrara says. “That’s not the way utilities have typically planned. They’re looking at all the tools in the toolkit that exist today, which may not contemplate a multi-day energy storage asset.”

    Form Energy’s customers are largely traditional power companies seeking to expand their portfolios of renewable electricity. Some are in the process of decommissioning coal plants and shifting to renewables.

    Ferrara’s research pinpointing the need for very low-cost multi-day storage provides key data for power suppliers seeking to determine the most cost-effective way to integrate more renewable energy.

    Using the same modeling techniques, Ferrara and team show potential customers how the technology fits in with their existing system, how it competes with other technologies, and how, in some cases, it can operate synergistically with other storage technologies.

    “They may need a portfolio of storage technologies to fully balance renewables on different timescales of intermittency,” he says. But other than the technology developed at Form, “there isn’t much out there, certainly not within the cost entitlement of what we’re bringing to market.”  Thanks to Chiang and Jaramillo’s chance encounter in Houston, Form has a several-year lead on other companies working to address this challenge. 

    In June 2023, Form Energy closed its biggest deal to date for a single project: Georgia Power’s order for a 15-megawatt/1,500-megawatt-hour system. That order brings Form’s total amount of energy storage under contracts with utility customers to 40 megawatts/4 gigawatt-hours. To meet the demand, Form is building a new commercial-scale battery manufacturing facility in West Virginia.
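
    A quick division of those figures shows the kind of discharge duration involved (both numbers come directly from the contracts described above):

        # Discharge duration implied by the contract figures quoted above.
        orders = {
            "Georgia Power project": (15.0, 1500.0),   # MW, MWh
            "Total under contract":  (40.0, 4000.0),   # MW, MWh (4 GWh)
        }
        for name, (power_mw, energy_mwh) in orders.items():
            hours = energy_mwh / power_mw
            print(f"{name}: {hours:.0f} hours (about {hours / 24:.1f} days) at rated power")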

    The fact that Form Energy is creating jobs in an area that lost more than 10,000 steel jobs over the past decade is not lost on Chiang. “And these new jobs are in clean tech. It’s super exciting to me personally to be doing something that benefits communities outside of our traditional technology centers.

    “This is the right time for so many reasons,” Chiang says. He says he and his Form Energy co-founders feel “tremendous urgency to get these batteries out into the world.”

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • in

    Moving past the Iron Age

    MIT graduate student Sydney Rose Johnson has never seen the steel mills in central India. She’s never toured the American Midwest’s hulking steel plants or the mini mills dotting the Mississippi River. But in the past year, she’s become more familiar with steel production than she ever imagined.

    A fourth-year dual-degree MBA and PhD candidate in chemical engineering, a graduate research assistant with the MIT Energy Initiative (MITEI), and a 2022-23 Shell Energy Fellow, Johnson looks at ways to reduce carbon dioxide (CO2) emissions generated by industrial processes in hard-to-abate industries. Those include steel.

    Almost every aspect of infrastructure and transportation — buildings, bridges, cars, trains, mass transit — contains steel. The manufacture of steel hasn’t changed much since the Iron Age, with some steel plants in the United States and India operating almost continually for more than a century, their massive blast furnaces re-lined periodically with carbon and graphite to keep them going.

    According to the World Economic Forum, steel demand is projected to increase 30 percent by 2050, spurred in part by population growth and economic development in China, India, Africa, and Southeast Asia.

    The steel industry is among the three biggest producers of CO2 worldwide. Every ton of steel produced in 2020 emitted, on average, 1.89 tons of CO2 into the atmosphere — around 8 percent of global CO2 emissions, according to the World Steel Association.

    A combination of technical strategies and financial investments, Johnson notes, will be needed to wrestle that 8 percent figure down to something more planet-friendly.

    Johnson’s thesis focuses on modeling and analyzing ways to decarbonize steel. Using data mined from academic and industry sources, she builds models to calculate emissions, costs, and energy consumption for plant-level production.

    “I optimize steel production pathways using emission goals, industry commitments, and cost,” she says. Based on the projected growth of India’s steel industry, she applies this approach to case studies that predict outcomes for some of the country’s thousand-plus factories, which together have a production capacity of 154 million metric tons of steel. For the United States, she looks at the effect of Inflation Reduction Act (IRA) credits. The 2022 IRA provides incentives that could accelerate the steel industry’s efforts to minimize its carbon emissions.

    Johnson compares emissions and costs across different production pathways, asking questions such as: “If we start today, what would a cost-optimal production scenario look like years from now? How would it change if we added in credits? What would have to happen to cut 2005 levels of emissions in half by 2030?”
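
    The flavor of that kind of analysis can be illustrated with a toy linear program: choose how much steel to produce via each pathway so that total cost is minimized while demand is met and an emissions cap is respected. The pathway names, costs, and emission factors below are hypothetical placeholders, not values from Johnson’s models.

        # Toy pathway-choice optimization: minimize production cost subject to an
        # emissions cap. Pathways, costs, and emission factors are hypothetical
        # placeholders, not data from Johnson's SESAME-based analysis.
        from scipy.optimize import linprog

        pathways = ["BF-BOF", "BF-BOF + CCS", "H2 direct reduction + EAF"]
        cost = [400.0, 480.0, 560.0]   # assumed $/t of steel
        co2 = [2.0, 0.8, 0.3]          # assumed t CO2 per t of steel

        demand_mt = 100.0              # required output, Mt of steel
        emissions_cap_mt = 80.0        # allowed emissions, Mt of CO2

        # Decision variables: Mt of steel produced by each pathway (x >= 0).
        # With x in Mt and cost in $/t, the objective value is in $ millions.
        result = linprog(
            c=cost,
            A_ub=[co2], b_ub=[emissions_cap_mt],          # emissions cap
            A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand_mt],     # meet demand
            bounds=[(0, None)] * 3,
        )

        for name, x in zip(pathways, result.x):
            print(f"{name}: {x:.1f} Mt")
        total_co2 = sum(e * x for e, x in zip(co2, result.x))
        print(f"Total cost: ${result.fun:,.0f}M, emissions: {total_co2:.1f} Mt CO2")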

    “My goal is to gain an understanding of how current and emerging decarbonization strategies will be integrated into the industry,” Johnson says.

    Grappling with industrial problems

    Johnson grew up in Marietta, Georgia, outside Atlanta. The closest she came to a plant of any kind was through her father, a chemical engineer working in logistics and procuring steel for an aerospace company, and a semester during high school spent working alongside chemical engineers tweaking the pH of an anti-foaming agent.

    At Kennesaw Mountain High School, a STEM magnet program in Cobb County, students devote an entire semester of their senior year to an internship and research project.

    Johnson chose to work at Kemira Chemicals, which develops chemical solutions for water-intensive industries with a focus on pulp and paper, water treatment, and energy systems.

    “My goal was to understand why a polymer product was falling out of suspension — essentially, why it was less stable,” she recalls. She learned how to formulate a lab-scale version of the product and conduct tests to measure its viscosity and acidity. Comparing the lab-scale and regular product results revealed that acidity was an important factor. “Through conversations with my mentor, I learned this was connected with the holding conditions, which led to the product being oxidized,” she says. With the anti-foaming agent’s problem identified, steps could be taken to fix it.

    “I learned how to apply problem-solving. I got to learn more about working in an industrial environment by connecting with the team in quality control as well as with R&D and chemical engineers at the plant site,” Johnson says. “This experience confirmed I wanted to pursue engineering in college.”

    As an undergraduate at Stanford University, she learned about the different fields — biotechnology, environmental science, electrochemistry, and energy, among others — open to chemical engineers. “It seemed like a very diverse field and application range,” she says. “I was just so intrigued by the different things I saw people doing and all these different sets of issues.”

    Turning up the heat

    At MIT, she turned her attention to how certain industries can offset their detrimental effects on climate.

    “I’m interested in the impact of technology on global communities, the environment, and policy. Energy applications affect every field. My goal as a chemical engineer is to have a broad perspective on problem-solving and to find solutions that benefit as many people, especially those under-resourced, as possible,” says Johnson, who has served on the MIT Chemical Engineering Graduate Student Advisory Board and the MIT Energy and Climate Club and is involved with diversity and inclusion initiatives.

    The steel industry, Johnson acknowledges, is not what she first imagined when she saw herself working toward mitigating climate change.

    “But now, understanding the role the material has in infrastructure development, combined with its heavy use of coal, has illuminated how the sector, along with other hard-to-abate industries, is important in the climate change conversation,” Johnson says.

    Despite the advanced age of many steel mills, some are quite energy-efficient, she notes. Yet these operations, which run at temperatures upwards of 3,000 degrees Fahrenheit, are still emission-intensive.

    Steel is made from iron ore, a mixture of iron, oxygen, and other minerals found on virtually every continent, with Brazil and Australia alone exporting millions of metric tons per year. In a process that commonly dates back to the 19th century, iron is extracted from the ore through smelting: heating the ore in blast furnaces until the metal becomes spongy and its chemical components begin to break down.

    A reducing agent is needed to release the oxygen trapped in the ore, transforming it from its raw form to pure iron. That’s where most emissions come from, Johnson notes.

    “We want to reduce emissions, and we want to make a cleaner and safer environment for everyone,” she says. “It’s not just the CO2 emissions. It’s also sometimes NOx and SOx [nitrogen oxides and sulfur oxides] and air pollution particulate matter at some of these production facilities that can affect people as well.”

    In 2020, the International Energy Agency released a roadmap exploring potential technologies and strategies that would make the iron and steel sector more compatible with the agency’s vision of increased sustainability. Emission reductions can be accomplished with more modern technology, the agency suggests, or by substituting the fuels producing the immense heat needed to process ore. Traditionally, the fuels used for iron reduction have been coal and natural gas. Alternative fuels include clean hydrogen, electricity, and biomass.
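
    The chemistry shows why the choice of reducing agent matters so much: in a conventional blast furnace, carbon monoxide derived from coke does the reduction and the carbon leaves as CO2, whereas hydrogen direct reduction produces water instead. In simplified overall form:

        \begin{aligned}
        \text{coke-based (blast furnace):}\quad & \mathrm{Fe_2O_3} + 3\,\mathrm{CO} \;\rightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{CO_2} \\
        \text{hydrogen direct reduction:}\quad & \mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \;\rightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{H_2O}
        \end{aligned}

    How much CO2 the hydrogen route actually saves depends on how the hydrogen itself is produced, which is exactly the kind of upstream effect Johnson’s analysis has to account for.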

    Using the MITEI Sustainable Energy System Analysis Modeling Environment (SESAME), Johnson analyzes various decarbonization strategies. She considers options such as switching fuel for furnaces to hydrogen with a little bit of natural gas or adding carbon-capture devices. The models demonstrate how effective these tactics are likely to be. The answers aren’t always encouraging.

    “Upstream emissions can determine how effective the strategies are,” Johnson says. Charcoal derived from forestry biomass seemed to be a promising alternative fuel, but her models showed that processing the charcoal for use in the blast furnace limited its effectiveness in negating emissions.

    Despite the challenges, “there are definitely ways of moving forward,” Johnson says. “It’s been an intriguing journey in terms of understanding where the industry is at. There’s still a long way to go, but it’s doable.”

    Johnson is heartened by the steel industry’s efforts to recycle scrap into new steel products and incorporate more emission-friendly technologies and practices, some of which result in significantly lower CO2 emissions than conventional production.

    A major issue is that low-carbon steel can be more than 50 percent more costly than conventionally produced steel. “There are costs associated with making the transition, but in the context of the environmental implications, I think it’s well worth it to adopt these technologies,” she says.

    After graduation, Johnson plans to continue to work in the energy field. “I definitely want to use a combination of engineering knowledge and business knowledge to work toward mitigating climate change, potentially in the startup space with clean technology or even in a policy context,” she says. “I’m interested in connecting the private and public sectors to implement measures for improving our environment and benefiting as many people as possible.”