More stories

  • Study: Shutting down nuclear power could increase air pollution

    Nearly 20 percent of today’s electricity in the United States comes from nuclear power. The U.S. has the largest nuclear fleet in the world, with 92 reactors scattered around the country. Many of these power plants have run for more than half a century and are approaching the end of their expected lifetimes.

    Policymakers are debating whether to retire the aging reactors or reinforce their structures to continue producing nuclear energy, which many consider a low-carbon alternative to climate-warming coal, oil, and natural gas.

    Now, MIT researchers say there’s another factor to consider in weighing the future of nuclear power: air quality. In addition to being a low carbon-emitting source, nuclear power is relatively clean in terms of the air pollution it generates. Without nuclear power, how would the pattern of air pollution shift, and who would feel its effects?

    The MIT team took on these questions in a new study appearing today in Nature Energy. They lay out a scenario in which every nuclear power plant in the country has shut down, and consider how other sources such as coal, natural gas, and renewable energy would fill the resulting energy needs throughout an entire year.

    Their analysis reveals that indeed, air pollution would increase, as coal, gas, and oil sources ramp up to compensate for nuclear power’s absence. This in itself may not be surprising, but the team has put numbers to the prediction, estimating that the increase in air pollution would have serious health effects, resulting in an additional 5,200 pollution-related deaths over a single year.

    If, however, more renewable energy sources become available to supply the energy grid, as they are expected to by the year 2030, air pollution would be curtailed, though not entirely. The team found that even under this heartier renewable scenario, there is still a slight increase in air pollution in some parts of the country, resulting in a total of 260 pollution-related deaths over one year.

    When they looked at the populations directly affected by the increased pollution, they found that Black or African American communities — a disproportionate number of whom live near fossil-fuel plants — experienced the greatest exposure.

    “This adds one more layer to the environmental health and social impacts equation when you’re thinking about nuclear shutdowns, where the conversation often focuses on local risks due to accidents and mining or long-term climate impacts,” says lead author Lyssa Freese, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    “In the debate over keeping nuclear power plants open, air quality has not been a focus of that discussion,” adds study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society (IDSS) and EAPS. “What we found was that air pollution from fossil fuel plants is so damaging, that anything that increases it, such as a nuclear shutdown, is going to have substantial impacts, and for some people more than others.”

    The study’s MIT-affiliated co-authors also include Principal Research Scientist Sebastian Eastham and Guillaume Chossière SM ’17, PhD ’20, along with Alan Jenn of the University of California at Davis.

    Future phase-outs

    When nuclear power plants have closed in the past, fossil fuel use increased in response. In 1985, the closure of reactors in the Tennessee Valley prompted a spike in coal use, while the 2012 shutdown of a plant in California led to an increase in natural gas. In Germany, where nuclear power has almost completely been phased out, coal-fired power increased initially to fill the gap.

    Noting these trends, the MIT team wondered how the U.S. energy grid would respond if nuclear power were completely phased out.

    “We wanted to think about what future changes were expected in the energy grid,” Freese says. “We knew that coal use was declining, and there was a lot of work already looking at the impact of what that would have on air quality. But no one had looked at air quality and nuclear power, which we also noticed was on the decline.”

    In the new study, the team used an energy grid dispatch model developed by Jenn to assess how the U.S. energy system would respond to a shutdown of nuclear power. The model simulates the production of every power plant in the country and runs continuously to estimate, hour by hour, the energy demands in 64 regions across the country.

    Much like the way the actual energy market operates, the model chooses to turn a plant’s production up or down based on cost: Plants producing the cheapest energy at any given time are given priority to supply the grid over more costly energy sources.
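
    At its core, this is a merit-order rule: the cheapest available generation is committed first until demand is met. The sketch below illustrates that idea with invented plants, capacities, and costs; the actual model developed by Jenn is far more detailed, resolving hourly demand across 64 regions and many additional constraints.

    ```python
    # Minimal merit-order dispatch sketch (illustrative only; plant data are
    # hypothetical, not from the study's model).

    def dispatch(plants, demand_mw):
        """Meet demand by committing the cheapest plants first."""
        output = {}
        remaining = demand_mw
        # Sort by marginal cost: the cheapest energy supplies the grid first.
        for plant in sorted(plants, key=lambda p: p["cost_per_mwh"]):
            generation = min(plant["capacity_mw"], remaining)
            output[plant["name"]] = generation
            remaining -= generation
            if remaining <= 0:
                break
        return output

    plants = [
        {"name": "nuclear", "capacity_mw": 1000, "cost_per_mwh": 10},
        {"name": "coal",    "capacity_mw": 800,  "cost_per_mwh": 35},
        {"name": "gas",     "capacity_mw": 600,  "cost_per_mwh": 45},
    ]

    # With nuclear available, coal and gas run less; remove it and they ramp up.
    print(dispatch(plants, demand_mw=1500))
    print(dispatch([p for p in plants if p["name"] != "nuclear"], demand_mw=1500))
    ```

    Removing the nuclear entry forces the coal and gas plants to cover the gap, which is the mechanism behind the pollution increase the study quantifies.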

    The team fed the model available data on each plant’s changing emissions and energy costs throughout an entire year. They then ran the model under different scenarios, including: an energy grid with no nuclear power, a baseline grid similar to today’s that includes nuclear power, and a grid with no nuclear power that also incorporates the additional renewable sources that are expected to be added by 2030.

    They combined each simulation with an atmospheric chemistry model to simulate how each plant’s various emissions travel around the country and to overlay these tracks onto maps of population density. For populations in the path of pollution, they calculated the risk of premature death based on their degree of exposure.
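
    The article does not spell out the health-impact calculation, but exposure-to-mortality estimates of this kind commonly use a log-linear concentration-response function. The sketch below shows that general form; the population, baseline mortality rate, coefficient, and exposure change are placeholders, not the study's values.

    ```python
    import math

    # Hedged sketch of a standard log-linear concentration-response calculation,
    # as used in many air-quality health impact assessments. The study's exact
    # method and coefficients may differ; all numbers below are placeholders.

    def excess_deaths(population, baseline_mortality_rate, beta, delta_pm25):
        """Estimate premature deaths attributable to an increase in PM2.5 exposure."""
        return population * baseline_mortality_rate * (1 - math.exp(-beta * delta_pm25))

    # Example: 1 million people, 0.8% annual baseline mortality,
    # beta of 0.006 per ug/m3, and a 2 ug/m3 increase in exposure.
    print(round(excess_deaths(1_000_000, 0.008, 0.006, 2.0), 1))
    ```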

    System response

    Video: Courtesy of the researchers, edited by MIT News

    Their analysis showed a clear pattern: Without nuclear power, air pollution worsened in general, mainly affecting regions on the East Coast, where nuclear power plants are mostly concentrated. Without those plants, the team observed an uptick in production from coal and gas plants, resulting in 5,200 pollution-related deaths across the country, compared to the baseline scenario.

    They also calculated that more people are likely to die prematurely due to climate impacts from the increase in carbon dioxide emissions, as the grid compensates for nuclear power’s absence. The climate-related effects from this additional influx of carbon dioxide could lead to 160,000 additional deaths over the next century.

    “We need to be thoughtful about how we’re retiring nuclear power plants if we are trying to think about them as part of an energy system,” Freese says. “Shutting down something that doesn’t have direct emissions itself can still lead to increases in emissions, because the grid system will respond.”

    “This might mean that we need to deploy even more renewables, in order to fill the hole left by nuclear, which is essentially a zero-emissions energy source,” Selin adds. “Otherwise we will have a reduction in air quality that we weren’t necessarily counting on.”

    This study was supported, in part, by the U.S. Environmental Protection Agency.

  • Flow batteries for grid-scale energy storage

    In the coming decades, renewable energy sources such as solar and wind will increasingly dominate the conventional power grid. Because those sources only generate electricity when it’s sunny or windy, ensuring a reliable grid — one that can deliver power 24/7 — requires some means of storing electricity when supplies are abundant and delivering it later when they’re not. And because there can be hours and even days with no wind, for example, some energy storage devices must be able to store a large amount of electricity for a long time.

    A promising technology for performing that task is the flow battery, an electrochemical device that can store hundreds of megawatt-hours of energy — enough to keep thousands of homes running for many hours on a single charge. Flow batteries have the potential for long lifetimes and low costs in part due to their unusual design. In the everyday batteries used in phones and electric vehicles, the materials that store the electric charge are solid coatings on the electrodes. “A flow battery takes those solid-state charge-storage materials, dissolves them in electrolyte solutions, and then pumps the solutions through the electrodes,” says Fikile Brushett, an associate professor of chemical engineering at MIT. That design offers many benefits and poses a few challenges.

    Flow batteries: Design and operation

    A flow battery contains two substances that undergo electrochemical reactions in which electrons are transferred from one to the other. When the battery is being charged, the transfer of electrons forces the two substances into a state that’s “less energetically favorable” as it stores extra energy. (Think of a ball being pushed up to the top of a hill.) When the battery is being discharged, the transfer of electrons shifts the substances into a more energetically favorable state as the stored energy is released. (The ball is set free and allowed to roll down the hill.)

    At the core of a flow battery are two large tanks that hold liquid electrolytes, one positive and the other negative. Each electrolyte contains dissolved “active species” — atoms or molecules that will electrochemically react to release or store electrons. During charging, one species is “oxidized” (releases electrons), and the other is “reduced” (gains electrons); during discharging, they swap roles. Pumps are used to circulate the two electrolytes through separate electrodes, each made of a porous material that provides abundant surfaces on which the active species can react. A thin membrane between the adjacent electrodes keeps the two electrolytes from coming into direct contact and possibly reacting, which would release heat and waste energy that could otherwise be used on the grid.

    When the battery is being discharged, active species on the negative side oxidize, releasing electrons that flow through an external circuit to the positive side, causing the species there to be reduced. The flow of those electrons through the external circuit can power the grid. In addition to the movement of the electrons, “supporting” ions — other charged species in the electrolyte — pass through the membrane to help complete the reaction and keep the system electrically neutral.

    Once all the species have reacted and the battery is fully discharged, the system can be recharged. In that process, electricity from wind turbines, solar farms, and other generating sources drives the reverse reactions. The active species on the positive side oxidize to release electrons back through the wires to the negative side, where they rejoin their original active species. The battery is now reset and ready to send out more electricity when it’s needed. Brushett adds, “The battery can be cycled in this way over and over again for years on end.”

    Benefits and challenges

    A major advantage of this system design is that where the energy is stored (the tanks) is separated from where the electrochemical reactions occur (the so-called reactor, which includes the porous electrodes and membrane). As a result, the capacity of the battery — how much energy it can store — and its power — the rate at which it can be charged and discharged — can be adjusted separately. “If I want to have more capacity, I can just make the tanks bigger,” explains Kara Rodby PhD ’22, a former member of Brushett’s lab and now a technical analyst at Volta Energy Technologies. “And if I want to increase its power, I can increase the size of the reactor.” That flexibility makes it possible to design a flow battery to suit a particular application and to modify it if needs change in the future.

    However, the electrolyte in a flow battery can degrade with time and use. While all batteries experience electrolyte degradation, flow batteries in particular suffer from a relatively faster form of degradation called “crossover.” The membrane is designed to allow small supporting ions to pass through and block the larger active species, but in reality, it isn’t perfectly selective. Some of the active species in one tank can sneak through (or “cross over”) and mix with the electrolyte in the other tank. The two active species may then chemically react, effectively discharging the battery. Even if they don’t, some of the active species is no longer in the first tank where it belongs, so the overall capacity of the battery is lower.

    Recovering capacity lost to crossover requires some sort of remediation — for example, replacing the electrolyte in one or both tanks or finding a way to reestablish the “oxidation states” of the active species in the two tanks. (Oxidation state is a number assigned to an atom or compound to tell if it has more or fewer electrons than it has when it’s in its neutral state.) Such remediation is more easily — and therefore more cost-effectively — executed in a flow battery because all the components are more easily accessed than they are in a conventional battery.
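
    As a rough illustration of why such remediation matters, the toy model below shows capacity drifting downward cycle by cycle as a small fraction of active species crosses over, then being restored by periodic rebalancing. It is not a validated battery model; the crossover rate and rebalancing interval are arbitrary assumptions.

    ```python
    # Toy capacity-fade sketch: each cycle a small fraction of active species
    # leaks across the membrane, and a periodic rebalancing step restores the
    # lost capacity. Parameter values are arbitrary placeholders.

    def simulate_capacity(cycles, crossover_per_cycle=0.001, rebalance_every=500):
        capacity = 1.0          # normalized capacity
        history = []
        for cycle in range(1, cycles + 1):
            capacity *= (1 - crossover_per_cycle)   # loss to crossover
            if cycle % rebalance_every == 0:
                capacity = 1.0                      # remediation restores capacity
            history.append(capacity)
        return history

    trace = simulate_capacity(2000)
    print(f"capacity after 499 cycles: {trace[498]:.3f}")   # just before rebalancing
    print(f"capacity after 500 cycles: {trace[499]:.3f}")   # restored by remediation
    ```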

    The state of the art: Vanadium

    A critical factor in designing flow batteries is the selected chemistry. The two electrolytes can contain different chemicals, but today the most widely used setup has vanadium in different oxidation states on the two sides. That arrangement addresses the two major challenges with flow batteries.

    First, vanadium doesn’t degrade. “If you put 100 grams of vanadium into your battery and you come back in 100 years, you should be able to recover 100 grams of that vanadium — as long as the battery doesn’t have some sort of a physical leak,” says Brushett.

    And second, if some of the vanadium in one tank flows through the membrane to the other side, there is no permanent cross-contamination of the electrolytes, only a shift in the oxidation states, which is easily remediated by re-balancing the electrolyte volumes and restoring the oxidation state via a minor charge step. Most of today’s commercial systems include a pipe connecting the two vanadium tanks that automatically transfers a certain amount of electrolyte from one tank to the other when the two get out of balance.

    However, as the grid becomes increasingly dominated by renewables, more and more flow batteries will be needed to provide long-duration storage. Demand for vanadium will grow, and that will be a problem. “Vanadium is found around the world but in dilute amounts, and extracting it is difficult,” says Rodby. “So there are limited places — mostly in Russia, China, and South Africa — where it’s produced, and the supply chain isn’t reliable.” As a result, vanadium prices are both high and extremely volatile — an impediment to the broad deployment of the vanadium flow battery.

    Beyond vanadium

    The question then becomes: If not vanadium, then what? Researchers worldwide are trying to answer that question, and many are focusing on promising chemistries using materials that are more abundant and less expensive than vanadium. But it’s not that easy, notes Rodby. While other chemistries may offer lower initial capital costs, they may be more expensive to operate over time. They may require periodic servicing to rejuvenate one or both of their electrolytes. “You may even need to replace them, so you’re essentially incurring that initial (low) capital cost again and again,” says Rodby.

    Indeed, comparing the economics of different options is difficult because “there are so many dependent variables,” says Brushett. “A flow battery is an electrochemical system, which means that there are multiple components working together in order for the device to function. Because of that, if you are trying to improve a system — performance, cost, whatever — it’s very difficult because when you touch one thing, five other things change.”

    So how can we compare these new and emerging chemistries — in a meaningful way — with today’s vanadium systems? And how do we compare them with one another, so we know which ones are more promising and what the potential pitfalls are with each one? “Addressing those questions can help us decide where to focus our research and where to invest our research and development dollars now,” says Brushett.

    Techno-economic modeling as a guide

    A good way to understand and assess the economic viability of new and emerging energy technologies is techno-economic modeling. With certain models, one can account for the capital cost of a defined system and — based on the system’s projected performance — the operating costs over time, generating a total cost discounted over the system’s lifetime. That result allows a potential purchaser to compare options on a “levelized cost of storage” basis.

    Using that approach, Rodby developed a framework for estimating the levelized cost for flow batteries. The framework includes a dynamic physical model of the battery that tracks its performance over time, including any changes in storage capacity. The calculated operating costs therefore cover all services required over decades of operation, including the remediation steps taken in response to species degradation and crossover.
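
    In its simplest form, a levelized cost of storage divides the total discounted cost (capital plus operating) by the total discounted energy delivered. The sketch below shows that basic calculation with placeholder numbers and a simple discounting scheme; Rodby's framework goes further by coupling the costs to the dynamic physical model of the battery's changing capacity.

    ```python
    # Minimal levelized-cost-of-storage sketch (simple annual discounting assumed;
    # the actual framework couples costs to a dynamic model of battery performance).

    def levelized_cost_of_storage(capital_cost, annual_opex, annual_energy_mwh,
                                  lifetime_years, discount_rate):
        """Total discounted cost divided by total discounted energy delivered ($/MWh)."""
        discounted_cost = capital_cost
        discounted_energy = 0.0
        for year in range(1, lifetime_years + 1):
            factor = (1 + discount_rate) ** year
            discounted_cost += annual_opex / factor
            discounted_energy += annual_energy_mwh / factor
        return discounted_cost / discounted_energy

    # Placeholder numbers: $20M capital, $200k/yr operations (including electrolyte
    # remediation), 10,000 MWh delivered per year, 20-year life, 8% discount rate.
    print(round(levelized_cost_of_storage(20e6, 2e5, 10_000, 20, 0.08), 1))
    ```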

    Analyzing all possible chemistries would be impossible, so the researchers focused on certain classes. First, they narrowed the options down to those in which the active species are dissolved in water. “Aqueous systems are furthest along and are most likely to be successful commercially,” says Rodby. Next, they limited their analyses to “asymmetric” chemistries; that is, setups that use different materials in the two tanks. (As Brushett explains, vanadium is unusual in that using the same “parent” material in both tanks is rarely feasible.) Finally, they divided the possibilities into two classes: species that have a finite lifetime and species that have an infinite lifetime; that is, ones that degrade over time and ones that don’t.

    Results from their analyses aren’t clear-cut; there isn’t a particular chemistry that leads the pack. But they do provide general guidelines for choosing and pursuing the different options.

    Finite-lifetime materials

    While vanadium is a single element, the finite-lifetime materials are typically organic molecules made up of multiple elements, among them carbon. One advantage of organic molecules is that they can be synthesized in a lab and at an industrial scale, and the structure can be altered to suit a specific function. For example, the molecule can be made more soluble, so more will be present in the electrolyte and the energy density of the system will be greater; or it can be made bigger so it won’t fit through the membrane and cross to the other side. Finally, organic molecules can be made from simple, abundant, low-cost elements, potentially even waste streams from other industries.

    Despite those attractive features, there are two concerns. First, organic molecules would probably need to be made in a chemical plant, and upgrading the low-cost precursors as needed may prove to be more expensive than desired. Second, these molecules are large chemical structures that aren’t always very stable, so they’re prone to degradation. “So along with crossover, you now have a new degradation mechanism that occurs over time,” says Rodby. “Moreover, you may figure out the degradation process and how to reverse it in one type of organic molecule, but the process may be totally different in the next molecule you work on, making the discovery and development of each new chemistry require significant effort.”

    Research is ongoing, but at present, Rodby and Brushett find it challenging to make the case for the finite-lifetime chemistries, mostly based on their capital costs. Citing studies that have estimated the manufacturing costs of these materials, Rodby believes that current options cannot be made at low enough costs to be economically viable. “They’re cheaper than vanadium, but not cheap enough,” says Rodby.

    The results send an important message to researchers designing new chemistries using organic molecules: Be sure to consider operating challenges early on. Rodby and Brushett note that it’s often not until way down the “innovation pipeline” that researchers start to address practical questions concerning the long-term operation of a promising-looking system. The MIT team recommends that understanding the potential decay mechanisms and how they might be cost-effectively reversed or remediated should be an upfront design criterion.

    Infinite-lifetime species

    The infinite-lifetime species include materials that — like vanadium — are not going to decay. The most likely candidates are other metals; for example, iron or manganese. “These are commodity-scale chemicals that will certainly be low cost,” says Rodby.

    Here, the researchers found that there’s a wider “design space” of feasible options that could compete with vanadium. But there are still challenges to be addressed. While these species don’t degrade, they may trigger side reactions when used in a battery. For example, many metals catalyze the formation of hydrogen, which reduces efficiency and adds another form of capacity loss. While there are ways to deal with the hydrogen-evolution problem, a sufficiently low-cost and effective solution for high rates of this side reaction is still needed.

    In addition, crossover is still a problem requiring remediation steps. The researchers evaluated two methods of dealing with crossover in systems combining two types of infinite-lifetime species.

    The first is the “spectator strategy.” Here, both of the tanks contain both active species. Explains Brushett, “You have the same electrolyte mixture on both sides of the battery, but only one of the species is ever working and the other is a spectator.” As a result, crossover can be remediated in similar ways to those used in the vanadium flow battery. The drawback is that half of the active material in each tank is unavailable for storing charge, so it’s wasted. “You’ve essentially doubled your electrolyte cost on a per-unit energy basis,” says Rodby.

    The second method calls for making a membrane that is perfectly selective: It must let through only the supporting ion needed to maintain the electrical balance between the two sides. However, that approach increases cell resistance, hurting system efficiency. In addition, the membrane would need to be made of a special material — say, a ceramic composite — that would be extremely expensive based on current production methods and scales. Rodby notes that work on such membranes is under way, but the cost and performance metrics are “far off from where they’d need to be to make sense.”

    Time is of the essence

    The researchers stress the urgency of the climate change threat and the need to have grid-scale, long-duration storage systems at the ready. “There are many chemistries now being looked at,” says Rodby, “but we need to home in on some solutions that will actually be able to compete with vanadium and can be deployed soon and operated over the long term.”

    The techno-economic framework is intended to help guide that process. It can calculate the levelized cost of storage for specific designs for comparison with vanadium systems and with one another. It can identify critical gaps in knowledge related to long-term operation or remediation, thereby identifying technology development or experimental investigations that should be prioritized. And it can help determine whether the trade-off between lower upfront costs and greater operating costs makes sense in these next-generation chemistries.

    The good news, notes Rodby, is that advances achieved in research on one type of flow battery chemistry can often be applied to others. “A lot of the principles learned with vanadium can be translated to other systems,” she says. She believes that the field has advanced not only in understanding but also in the ability to design experiments that address problems common to all flow batteries, thereby helping to prepare the technology for its important role of grid-scale storage in the future.

    This research was supported by the MIT Energy Initiative. Kara Rodby PhD ’22 was supported by an ExxonMobil-MIT Energy Fellowship in 2021-22.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • An interdisciplinary approach to fighting climate change through clean energy solutions

    In early 2021, the U.S. government set an ambitious goal: to decarbonize its power grid, the system that generates and transmits electricity throughout the country, by 2035. It’s an important goal in the fight against climate change, and will require a switch from current, greenhouse-gas producing energy sources (such as coal and natural gas), to predominantly renewable ones (such as wind and solar).

    Getting the power grid to zero carbon will be a challenging undertaking, as Audun Botterud, a principal research scientist at the MIT Laboratory for Information and Decision Systems (LIDS) who has long been interested in the problem, knows well. It will require building lots of renewable energy generators and new infrastructure; designing better technology to capture, store, and carry electricity; creating the right regulatory and economic incentives; and more. Decarbonizing the grid also presents many computational challenges, which is where Botterud’s focus lies. Botterud has modeled different aspects of the grid — the mechanics of energy supply, demand, and storage, and electricity markets — where economic factors can have a huge effect on how quickly renewable solutions get adopted.

    On again, off again

    A major challenge of decarbonization is that the grid must be designed and operated to reliably meet demand. Using renewable energy sources complicates this, as wind and solar power depend on an infamously volatile system: the weather. A sunny day becomes gray and blustery, and wind turbines get a boost but solar farms go idle. This will make the grid’s energy supply variable and hard to predict. Additional resources, including batteries and backup power generators, will need to be incorporated to regulate supply. Extreme weather events, which are becoming more common with climate change, can further strain both supply and demand. Managing a renewables-driven grid will require algorithms that can minimize uncertainty in the face of constant, sometimes random fluctuations to make better predictions of supply and demand, guide how resources are added to the grid, and inform how those resources are committed and dispatched across the entire United States.

    “The problem of managing supply and demand in the grid has to happen every second throughout the year, and given how much we rely on electricity in society, we need to get this right,” Botterud says. “You cannot let the reliability drop as you increase the amount of renewables, especially because I think that will lead to resistance towards adopting renewables.”

    That is why Botterud feels fortunate to be working on the decarbonization problem at LIDS — even though a career here is not something he had originally planned. Botterud’s first experience with MIT came during his time as a graduate student in his home country of Norway, when he spent a year as a visiting student with what is now called the MIT Energy Initiative. He might never have returned, except that while at MIT, Botterud met his future wife, Bilge Yildiz. The pair both ended up working at the Argonne National Laboratory outside of Chicago, with Botterud focusing on challenges related to power systems and electricity markets. Then Yildiz got a faculty position at MIT, where she is a professor of nuclear and materials science and engineering. Botterud moved back to the Cambridge area with her and continued to work for Argonne remotely, but he also kept an eye on local opportunities. Eventually, a position at LIDS became available, and Botterud took it, while maintaining his connections to Argonne.

    “At first glance, it may not be an obvious fit,” Botterud says. “My work is very focused on a specific application, power system challenges, and LIDS tends to be more focused on fundamental methods to use across many different application areas. However, being at LIDS, my lab [the Energy Analytics Group] has access to the most recent advances in these fundamental methods, and we can apply them to power and energy problems. Other people at LIDS are working on energy too, so there is growing momentum to address these important problems.”

    Weather, space, and time

    Much of Botterud’s research involves optimization, using mathematical programming to compare alternatives and find the best solution. Common computational challenges include dealing with large geographical areas that contain regions with different weather, different types and quantities of renewable energy available, and different infrastructure and consumer needs — such as the entire United States. Another challenge is the need for granular time resolution, sometimes even down to the sub-second level, to account for changes in energy supply and demand.

    Often, Botterud’s group will use decomposition to solve such large problems piecemeal and then stitch together solutions. However, it’s also important to consider systems as a whole. For example, in a recent paper, Botterud’s lab looked at the effect of building new transmission lines as part of national decarbonization. They modeled solutions assuming coordination at the state, regional, or national level, and found that the more regions coordinate to build transmission infrastructure and distribute electricity, the less they will need to spend to reach zero carbon.

    In other projects, Botterud uses game theory approaches to study strategic interactions in electricity markets. For example, he has designed agent-based models to analyze electricity markets. These assume each actor will make strategic decisions in their own best interest and then simulate interactions between them. Interested parties can use the models to see what would happen under different conditions and market rules, which may lead companies to make different investment decisions, or governing bodies to issue different regulations and incentives. These choices can shape how quickly the grid gets decarbonized.

    Botterud is also collaborating with researchers in MIT’s chemical engineering department who are working on improving battery storage technologies. Batteries will help manage variable renewable energy supply by capturing surplus energy during periods of high generation to release during periods of insufficient generation. Botterud’s group models the sort of charge cycles that batteries are likely to experience in the power grid, so that chemical engineers in the lab can test their batteries’ abilities in more realistic scenarios. In turn, this also leads to a more realistic representation of batteries in power system optimization models.

    These are only some of the problems that Botterud works on. He enjoys the challenge of tackling a spectrum of different projects, collaborating with everyone from engineers to architects to economists. He also believes that such collaboration leads to better solutions. The problems created by climate change are myriad and complex, and solving them will require researchers to cooperate and explore.

    “In order to have a real impact on interdisciplinary problems like energy and climate,” Botterud says, “you need to get outside of your research sweet spot and broaden your approach.”

  • Michael Howland gives wind energy a lift

    Michael Howland was in his office at MIT, watching real-time data from a wind farm 7,000 miles away in northwest India, when he noticed something odd: Some of the turbines weren’t producing the expected amount of electricity.

    Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering, studies the physics of the Earth’s atmosphere and how that information can optimize renewable energy systems. To accomplish this, he and his team develop and use predictive models, supercomputer simulations, and real-life data from wind farms, such as the one in India.

    The global wind power market is one of the most cost-competitive and resilient power sources across the world, the Global Wind Energy Council reported last year. The year 2020 saw record growth in wind power capacity, thanks to a surge of installations in China and the United States. Yet wind power needs to grow three times faster in the coming decade to address the worst impacts of climate change and achieve federal and state climate goals, the report says.

    “Optimal wind farm design and the resulting cost of energy are dependent on the wind,” Howland says. “But wind farms are often sited and designed based on short-term historical climate records.”

    In October 2021, Howland received a Seed Fund grant from the MIT Energy Initiative (MITEI) to account for how climate change might affect the wind of the future. “Our initial results suggest that considering the uncertainty in the winds in the design and operation of wind farms can lead to more reliable energy production,” he says.

    Most recently, Howland and his team came up with a model that predicts the power produced by each individual turbine based on the physics of the wind farm as a whole. The model can inform decisions that may boost a farm’s overall output.

    The state of the planet

    Growing up in a suburb of Philadelphia, the son of neuroscientists, Howland’s childhood wasn’t especially outdoorsy. Later, he’d become an avid hiker with a deep appreciation for nature, but a ninth-grade class assignment made him think about the state of the planet, perhaps for the first time.

    A history teacher had asked the class to write a report on climate change. “I remember arguing with my high school classmates about whether humans were the leading cause of climate change, but the teacher didn’t want to get into that debate,” Howland recalls. “He said climate change was happening, whether or not you accept that it’s anthropogenic, and he wanted us to think about the impacts of global warming, and solutions. I was one of his vigorous defenders.”

    As part of a research internship after his first year of college, Howland visited a wind farm in Iowa, where wind produces more than half of the state’s electricity. “The turbines look tall from the highway, but when you’re underneath them, you’re really struck by their scale,” he says. “That’s where you get a sense of how colossal they really are.” (Not a fan of heights, Howland opted not to climb the turbine’s internal ladder to snap a photo from the top.)

    After receiving an undergraduate degree from Johns Hopkins University and master’s and PhD degrees in mechanical engineering from Stanford University, he joined MIT’s Department of Civil and Environmental Engineering to focus on the intersection of fluid mechanics, weather, climate, and energy modeling. His goal is to enhance renewable energy systems.

    An added bonus to being at MIT is the opportunity to inspire the next generation, much like his ninth-grade history teacher did for him. Howland’s graduate-level introduction to the atmospheric boundary layer is geared primarily to engineers and physicists, but as he sees it, climate change is such a multidisciplinary and complex challenge that “every skill set that exists in human society can be relevant to mitigating it.”

    “There are the physics and engineering questions that our lab primarily works on, but there are also questions related to social sciences, public acceptance, policymaking, and implementation,” he says. “Careers in renewable energy are rapidly growing. There are far more job openings than we can hire for right now. In many areas, we don’t yet have enough people to address the challenges in renewable energy and climate change mitigation that need to be solved.

    “I encourage my students — really, everyone I interact with — to find a way to impact the climate change problem,” he says.

    Unusual conditions

    In fall 2021, Howland was trying to explain the odd data coming in from India.

    Based on sensor feedback, wind turbines’ software-driven control systems constantly tweak the speed and the angle of the blades, and what’s known as yaw — the orientation of the giant blades in relation to the wind direction.

    Existing utility-scale turbines are controlled “greedily,” which means that every turbine in the farm automatically turns into the wind to maximize its own power production.

    Though the turbines in the front row of the Indian wind farm were reacting appropriately to the wind direction, their power output was all over the place. “Not what we would expect based on the existing models,” Howland says.

    These massive turbine towers stood at 100 meters, about the length of a football field, with blades the length of an Olympic swimming pool. At their highest point, the blade tips lunged almost 200 meters into the sky.

    Then there’s the speed of the blades themselves: The tips move many times faster than the wind, around 80 to 100 meters per second — up to a quarter or a third of the speed of sound.

    Using a state-of-the-art sensor that measures the speed of incoming wind before it interacts with the massive rotors, Howland’s team saw an unexpectedly complex airflow effect. He covers the phenomenon in his class. The data coming in from India, he says, displayed “quite remarkable wind conditions stemming from the effects of Earth’s rotation and the physics of buoyancy that you don’t always see.”

    Traditionally, wind turbines operate in the lowest 10 percent of the atmospheric boundary layer — the so-called surface layer — which is affected primarily by ground conditions. The Indian turbines, Howland realized, were operating in regions of the atmosphere that turbines haven’t historically accessed.

    Trending taller

    Howland knew that airflow interactions can persist for kilometers. The interaction of high winds with the front-row turbines was generating wakes in the air similar to the way boats generate wakes in the water.

    To address this, Howland’s model trades off the efficiency of upwind turbines to benefit downwind ones. By misaligning some of the upwind turbines in certain conditions, the downwind units experience less wake turbulence, increasing the overall energy output of the wind farm by as much as 1 percent to 3 percent, without requiring additional costs. If a 1.2 percent energy increase were applied to the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines — enough to power about 3 million homes.
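
    A toy two-turbine calculation conveys the trade-off: yawing the upwind turbine costs it some power (roughly as the cube of the cosine of the misalignment angle, a common rule of thumb) but deflects its wake away from the downwind turbine. The wake numbers below are invented for illustration and are not from Howland's physics-based model.

    ```python
    import math

    # Toy two-turbine wake-steering illustration with hypothetical wake numbers.
    # An upwind turbine yawed by angle g loses power roughly as cos(g)**3, while
    # deflecting its wake reduces the power deficit at the downwind turbine.

    def farm_power(yaw_deg, wake_deficit=0.30, wake_recovery_per_deg=0.01):
        yaw = math.radians(yaw_deg)
        upwind = math.cos(yaw) ** 3                       # upwind turbine, normalized
        deficit = max(0.0, wake_deficit - wake_recovery_per_deg * yaw_deg)
        downwind = 1.0 - deficit                          # downwind turbine, normalized
        return upwind + downwind

    baseline = farm_power(0)                              # greedy: both face the wind
    best = max(range(0, 31), key=farm_power)              # search yaw offsets 0-30 deg
    gain = 100 * (farm_power(best) / baseline - 1)
    print(f"best yaw offset: {best} deg, farm gain: {gain:.1f}%")
    ```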

    Even a modest boost could mean fewer turbines generating the same output, or the ability to place more units into a smaller space, because negative interactions between the turbines can be diminished.

    Howland says the model can predict potential benefits in a variety of scenarios at different types of wind farms. “The part that’s important and exciting is that it’s not just particular to this wind farm. We can apply the collective control method across the wind farm fleet,” he says, which is growing taller and wider.

    By 2035, the average hub height for offshore turbines in the United States is projected to grow from 100 meters to around 150 meters — the height of the Washington Monument.

    “As we continue to build larger wind turbines and larger wind farms, we need to revisit the existing practice for their design and control,” Howland says. “We can use our predictive models to ensure that we build and operate the most efficient renewable generators possible.”

    Looking to the future

    Howland and other climate watchers have reason for optimism with the passage in August 2022 of the Inflation Reduction Act, which calls for a significant investment in domestic energy production and for reducing carbon emissions by roughly 40 percent by 2030.

    But Howland says the act itself isn’t sufficient. “We need to continue pushing the envelope in research and development as well as deployment,” he says. The model he created with his team can help, especially for offshore wind farms experiencing low wind turbulence and larger wake interactions.

    Offshore wind can face challenges of public acceptance. Howland believes that researchers, policymakers, and the energy industry need to do more to get the public on board by addressing concerns through open public dialogue, outreach, and education.

    Howland once wrote and illustrated a children’s book, inspired by Dr. Seuss’s “The Lorax,” that focused on renewable energy. Howland recalls his “really terrible illustrations,” but he believes he was onto something. “I was having some fun helping people interact with alternative energy in a more natural way at an earlier age,” he says, “and recognize that these are not nefarious technologies, but remarkable feats of human ingenuity.”

  • Helping the cause of environmental resilience

    Haruko Wainwright, the Norman C. Rasmussen Career Development Professor in Nuclear Science and Engineering (NSE) and assistant professor in civil and environmental engineering at MIT, grew up in rural Japan, where many nuclear facilities are located. She remembers worrying about the facilities as a child. Wainwright was only 6 at the time of the Chernobyl accident in 1986, but still recollects it vividly.

    Those early memories have contributed to Wainwright’s determination to research how technologies can mold environmental resilience — the capability of mitigating the consequences of accidents and recovering from contamination.

    Wainwright believes that environmental monitoring can help improve resilience. She co-leads the U.S. Department of Energy (DOE)’s Advanced Long-term Environmental Monitoring Systems (ALTEMIS) project, which integrates technologies such as in situ sensors, geophysics, remote sensing, simulations, and artificial intelligence to establish new paradigms for monitoring. The project focuses on soil and groundwater contamination at more than 100 U.S. sites that were used for nuclear weapons production.

    As part of this research, which was featured last year in Environmental Science & Technology Journal, Wainwright is working on a machine learning framework for improving environmental monitoring strategies. She hopes the ALTEMIS project will enable the rapid detection of anomalies while ensuring the stability of residual contamination and waste disposal facilities.
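
    As a minimal illustration of the kind of anomaly flagging such a monitoring framework could automate, the sketch below compares each new sensor reading to a rolling baseline. The window, threshold, and data are placeholders, and the ALTEMIS methods are far more sophisticated than this.

    ```python
    import statistics

    # Minimal anomaly-flagging sketch for a sensor stream (illustrative only).
    # A reading is flagged when it deviates strongly from the recent rolling mean.

    def flag_anomalies(readings, window=5, threshold=3.0):
        flags = []
        for i in range(window, len(readings)):
            recent = readings[i - window:i]
            mean = statistics.mean(recent)
            stdev = statistics.stdev(recent) or 1e-9   # guard against zero spread
            if abs(readings[i] - mean) / stdev > threshold:
                flags.append(i)
        return flags

    # Simulated contaminant concentrations with a sudden spike at index 8.
    data = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 5.0, 1.0]
    print(flag_anomalies(data))   # -> [8]
    ```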

    Childhood in rural Japan

    Even as a child, Wainwright was interested in physics, history, and a variety of other subjects.

    But growing up in a rural area was not ideal for someone interested in STEM. There were no engineers or scientists in the community and no science museums, either. “It was not so cool to be interested in science, and I never talked about my interest with anyone,” Wainwright recalls.

    Television and books were the only door to the world of science. “I did not study English until middle school and I had never been on a plane until college. I sometimes find it miraculous that I am now working in the U.S. and teaching at MIT,” she says.

    As she grew a little older, Wainwright heard a lot of discussions about nuclear facilities in the region and many stories about Hiroshima and Nagasaki.

    At the same time, giants like Marie Curie inspired her to pursue science. Nuclear physics was particularly fascinating. “At some point during high school, I started wondering ‘what are radiations, what is radioactivity, what is light,’” she recalls. Reading Richard Feynman’s books and trying to understand quantum mechanics made her want to study physics in college.

    Pursuing research in the United States

    Wainwright pursued an undergraduate degree in engineering physics at Kyoto University. After two research internships in the United States, Wainwright was impressed by the dynamic and fast-paced research environment in the country.

    And compared to Japan, there were “more women in science and engineering,” Wainwright says. She enrolled at the University of California at Berkeley in 2005, where she completed her doctorate in nuclear engineering with minors in statistics and civil and environmental engineering.

    Before moving to MIT NSE in 2022, Wainwright was a staff scientist in the Earth and Environmental Area at Lawrence Berkeley National Laboratory (LBNL). She worked on a variety of topics, including radioactive contamination, climate science, CO2 sequestration, precision agriculture, and watershed science. Her time at LBNL helped Wainwright build a solid foundation about a variety of environmental sensors and monitoring and simulation methods across different earth science disciplines.   

    Empowering communities through monitoring

    One of the most compelling takeaways from Wainwright’s early research: People trust actual measurements and data as facts, even though they are skeptical about models and predictions. “I talked with many people living in Fukushima prefecture. Many of them have dosimeters and measure radiation levels on their own. They might not trust the government, but they trust their own data and are then convinced that it is safe to live there and to eat local food,” Wainwright says.

    She has been impressed that area citizens have gained significant knowledge about radiation and radioactivity through these efforts. “But they are often frustrated that people living far away, in cities like Tokyo, still avoid agricultural products from Fukushima,” Wainwright says.

    Wainwright thinks that data derived from environmental monitoring — through proper visualization and communication — can address misconceptions and fake news that often hurt people near contaminated sites.

    Wainwright is now interested in how these technologies — tested with real data at contaminated sites — can be proactively used for existing and future nuclear facilities “before contamination happens,” as she explored for Nuclear News. “I don’t think it is a good idea to simply dismiss someone’s concern as irrational. Showing credible data has been much more effective to provide assurance. Or a proper monitoring network would enable us to minimize contamination or support emergency responses when accidents happen,” she says.

    Educating communities and students

    Part of empowering communities involves improving their ability to process science-based information. “Potentially hazardous facilities always end up in rural regions; minorities’ concerns are often ignored. The problem is that these regions don’t produce so many scientists or policymakers; they don’t have a voice,” Wainwright says. “I am determined to dedicate my time to improve STEM education in rural regions and to increase the voice in these regions.”

    In a project funded by DOE, she collaborates with a team of researchers at the University of Alaska — the Alaska Center for Energy and Power and the Teaching Through Technology program — aiming to improve STEM education for rural and indigenous communities. “Alaska is an important place for energy transition and environmental justice,” Wainwright says. Micro-nuclear reactors can potentially improve the lives of rural communities who bear the brunt of the high cost of fuel and transportation. However, there is a distrust of nuclear technologies, stemming from past nuclear weapon testing. At the same time, Alaska has vast metal mining resources for renewable energy and batteries. And there are concerns about environmental contamination from mining and various sources. The team’s vision is much broader, she points out. “The focus is on broader environmental monitoring technologies and relevant STEM education, addressing general water and air qualities,” Wainwright says.

    The issues also weave into the courses Wainwright teaches at MIT. “I think it is important for engineering students to be aware of environmental justice related to energy waste and mining as well as past contamination events and their recovery,” she says. “It is not OK just to send waste to, or develop mines in, rural regions, which could be a special place for some people. We need to make sure that these developments will not harm the environment and health of local communities.” Wainwright also hopes that this knowledge will ultimately encourage students to think creatively about engineering designs that minimize waste or recycle material.

    The last question of the final quiz of one of her recent courses was: Assume that you store high-level radioactive waste in your “backyard.” What technical strategies would make you and your family feel safe? “All students thought about this question seriously and many suggested excellent points, including those addressing environmental monitoring,” Wainwright says. “That made me hopeful about the future.”

  • Minimizing electric vehicles’ impact on the grid

    National and global plans to combat climate change include increasing the electrification of vehicles and the percentage of electricity generated from renewable sources. But some projections show that these trends might require costly new power plants to meet peak loads in the evening when cars are plugged in after the workday. What’s more, overproduction of power from solar farms during the daytime can waste valuable electricity-generation capacity.

    In a new study, MIT researchers have found that it’s possible to mitigate or eliminate both these problems without the need for advanced technological systems of connected devices and real-time communications, which could add to costs and energy consumption. Instead, encouraging the placing of charging stations for electric vehicles (EVs) in strategic ways, rather than letting them spring up anywhere, and setting up systems to initiate car charging at delayed times could potentially make all the difference.

    The study, published today in the journal Cell Reports Physical Science, is by Zachary Needell PhD ’22, postdoc Wei Wei, and Professor Jessika Trancik of MIT’s Institute for Data, Systems, and Society.

    In their analysis, the researchers used data collected in two sample cities: New York and Dallas. The data were gathered from, among other sources, anonymized records collected via onboard devices in vehicles, and surveys that carefully sampled populations to cover variable travel behaviors. They showed the times of day cars are used and for how long, and how much time the vehicles spend at different kinds of locations — residential, workplace, shopping, entertainment, and so on.

    The findings, Trancik says, “round out the picture on the question of where to strategically locate chargers to support EV adoption and also support the power grid.”

    Better availability of charging stations at workplaces, for example, could help to soak up peak power being produced at midday from solar power installations, which might otherwise go to waste because it is not economical to build enough battery or other storage capacity to save all of it for later in the day. Thus, workplace chargers can provide a double benefit, helping to reduce the evening peak load from EV charging and also making use of the solar electricity output.

    These effects on the electric power system are considerable, especially if the system must meet charging demands for a fully electrified personal vehicle fleet alongside the peaks in other demand for electricity, for example on the hottest days of the year. If unmitigated, the evening peaks in EV charging demand could require installing upwards of 20 percent more power-generation capacity, the researchers say.

    “Slow workplace charging can be more preferable than faster charging technologies for enabling a higher utilization of midday solar resources,” Wei says.

    Meanwhile, with delayed home charging, each EV charger could be accompanied by a simple app to estimate the time to begin its charging cycle so that it charges just before it is needed the next day. Unlike other proposals that require a centralized control of the charging cycle, such a system needs no interdevice communication of information and can be preprogrammed — and can accomplish a major shift in the demand on the grid caused by increasing EV penetration. The reason it works so well, Trancik says, is because of the natural variability in driving behaviors across individuals in a population.
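
    The scheduling rule itself is simple enough to sketch: given a plug-in time, a next departure time, and the energy needed, start charging just late enough to finish before departure. The charger power and times below are assumptions for illustration, not parameters from the study.

    ```python
    from datetime import datetime, timedelta

    # Hedged sketch of the delayed-charging idea: instead of charging as soon as
    # the car is plugged in, start just in time to finish before the next
    # departure. No communication between chargers is needed; each charger can
    # compute its own start time. All parameter values are illustrative.

    def charging_start(plug_in, departure, energy_needed_kwh, charger_kw=7.2):
        hours_needed = energy_needed_kwh / charger_kw
        start = departure - timedelta(hours=hours_needed)
        return max(start, plug_in)   # never start before the car is plugged in

    plug_in = datetime(2023, 1, 10, 18, 30)      # arrive home 6:30 p.m.
    departure = datetime(2023, 1, 11, 7, 30)     # leave at 7:30 a.m.
    print(charging_start(plug_in, departure, energy_needed_kwh=30))
    ```

    Because departure times and energy needs differ across drivers, the resulting start times spread out on their own, which is the variability that smooths the evening peak.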

    By “home charging,” the researchers aren’t only referring to charging equipment in individual garages or parking areas. They say it’s essential to make charging stations available in on-street parking locations and in apartment building parking areas as well.

    Trancik says the findings highlight the value of combining the two measures — workplace charging and delayed home charging — to reduce peak electricity demand, store solar energy, and conveniently meet drivers’ charging needs on all days. As the team showed in earlier research, home charging can be a particularly effective component of a strategic package of charging locations; workplace charging, they have found, is not a good substitute for home charging for meeting drivers’ needs on all days.

    “Given that there’s a lot of public money going into expanding charging infrastructure,” Trancik says, “how do you incentivize the location such that this is going to be efficiently and effectively integrated into the power grid without requiring a lot of additional capacity expansion?” This research offers some guidance to policymakers on where to focus rules and incentives.

    “I think one of the fascinating things about these findings is that by being strategic you can avoid a lot of physical infrastructure that you would otherwise need,” she adds. “Your electric vehicles can displace some of the need for stationary energy storage, and you can also avoid the need to expand the capacity of power plants, by thinking about the location of chargers as a tool for managing demands — where they occur and when they occur.”

    Delayed home charging could make a surprising amount of difference, the team found. “It’s basically incentivizing people to begin charging later. This can be something that is preprogrammed into your chargers. You incentivize people to delay the onset of charging by a bit, so that not everyone is charging at the same time, and that smooths out the peak.”

    Such a program would require some advance commitment on the part of participants. “You would need to have enough people committing to this program in advance to avoid the investment in physical infrastructure,” Trancik says. “So, if you have enough people signing up, then you essentially don’t have to build those extra power plants.”

    It’s not a given that all of this would line up just right, and putting in place the right mix of incentives would be crucial. “If you want electric vehicles to act as an effective storage technology for solar energy, then the [EV] market needs to grow fast enough in order to be able to do that,” Trancik says.

    To best use public funds to help make that happen, she says, “you can incentivize charging installations, which would go through ideally a competitive process — in the private sector, you would have companies bidding for different projects, but you can incentivize installing charging at workplaces, for example, to tap into both of these benefits.” Chargers people can access when they are parked near their residences are also important, Trancik adds, but for other reasons. Home charging is one of the ways to meet charging needs while avoiding inconvenient disruptions to people’s travel activities.

    The study was supported by the European Regional Development Fund Operational Program for Competitiveness and Internationalization, the Lisbon Portugal Regional Operation Program, and the Portuguese Foundation for Science and Technology.

  • Shrinky Dinks, nail polish, and smelly bacteria

    In a lab on the fourth floor of MIT’s Building 56, a group of Massachusetts high school students gathered around a device that measures conductivity.

    Vincent Nguyen, 15, from Saugus, thought of the times the material on their sample electrode flaked off the moment they took it out of the oven. Or how the electrode would fold weirdly onto itself. The big fails were kind of funny, but discouraging. The students had worked for a month, experimenting with different materials, and 17-year-old Brianna Tong of Malden wondered if they’d finally gotten it right: Would their electrode work well enough to power a microbial fuel cell?

    The students secured their electrode with alligator clips, someone hit start, and the teens watched anxiously as the device searched for even the faintest electrical current.

    Capturing electrons from bacteria

    Last July, Tong, Nguyen, and six other students from Malden Catholic High School commuted between the lab of MIT chemical engineer Ariel L. Furst and their school’s chemistry lab. Their goal was to fashion electrodes for low-cost microbial fuel cells — miniature bioreactors that generate small amounts of electricity by capturing electrons transferred from living microbes. These devices can double as electrochemical sensors.

    Furst, the Paul M. Cook Career Development Professor of Chemical Engineering, uses a mix of electrochemistry, microbial engineering, and materials science to address challenges in human health and clean energy. “The goal of all of our projects is to increase sustainability, clean energy, and health equity globally,” she says.

    Electrochemical sensors are powerful, sensitive detection and measurement tools. Typically, their electrodes need to be built in precisely engineered environments. “Thinking about ways of making devices without needing a cleanroom is important for coming up with inexpensive devices that can be deployed in low-resource settings under non-ideal conditions,” Furst says.

    For 17-year-old Angelina Ang of Everett, the project illuminated the significance of “coming together to problem-solve for a healthier and more sustainable earth,” she says. “It made me realize that we hold the answers to fix our dying planet.”

    With the help of a children’s toy called Shrinky Dinks, carbon-based materials, nail polish, and a certain smelly bacterium, the students got — literally — a trial-by-fire introduction to the scientific method. At one point, one of their experimental electrodes burst into flames. Other results were more promising.

    The students took advantage of the electrical properties of a bacterium — Shewanella oneidensis — that’s been called nature’s microscopic power plant. As part of their metabolism, Shewanella oneidensis generate electricity by oxidizing organic matter. In essence, they spit out electrons. Put enough together, and you get a few milliamps.
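
    The link between “spitting out electrons” and a measurable current is simple bookkeeping: current is charge per unit time, and each electron carries about 1.6 x 10^-19 coulombs. Here is a quick back-of-envelope sketch; the 2-milliamp figure is an arbitrary example, not a measurement from this project:

```python
ELEMENTARY_CHARGE_C = 1.602e-19  # charge carried by one electron, in coulombs

def electrons_per_second(current_amps: float) -> float:
    """Electrons that must reach the electrode each second to sustain a given current."""
    return current_amps / ELEMENTARY_CHARGE_C

# An arbitrary example in the "few milliamps" range mentioned above.
current_a = 2e-3
print(f"{electrons_per_second(current_a):.2e} electrons per second")  # ~1.2e16
```

    Even a couple of milliamps corresponds to on the order of 10^16 electron transfers per second, which is why a conductive, bacteria-friendly electrode surface matters so much.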

    To build bacteria-friendly electrodes, one of the first things the students did was culture Shewanella. They learned how to pour a growth medium into petri dishes where the reddish, normally lake-living bacteria could multiply. The microbes, Furst notes, are a little stinky, like cabbage. “But we think they’re really cool,” she says.

    With the right engineering, Shewanella can produce electric current when they detect toxins in water or soil. They could be used for bioremediation of wastewater. Low-cost versions could be useful for areas with limited or no access to reliable electricity and clean water.

    Next-generation chemists

    The Malden Catholic-MIT program resulted from a fluke encounter between Furst and a Malden Catholic parent.

    Mary-Margaret O’Donnell-Zablocki, then a medicinal chemist at a Kendall Square biotech startup, met Furst through a mutual friend. She asked Furst if she’d consider hosting high school chemistry students in her lab for the summer.

    Furst was intrigued. She traces her own passion for science to a program she’d happened upon between her junior and senior years in high school in St. Louis. The daughter of a software engineer and a businesswoman, Furst was casting around for potential career interests when she came across a summer program that enlisted scientists in academia and private research to introduce high school students and teachers to aspects of the scientific enterprise.

    “That’s when I realized that research is not like a lab class where there’s an expected outcome,” Furst recalls. “It’s so much cooler than that.”

    Using startup funding from an MIT Energy Initiative seed grant, Furst developed a curriculum with Malden Catholic chemistry teacher Seamus McGuire, and students were invited to apply. In addition to Tong, Ang, and Nguyen, participants included Chengxiang Lou, 18, from China; Christian Ogata, 14, of Wakefield; Kenneth Ramirez, 17, of Everett; Isaac Toscano, 17, of Medford; and MaryKatherine Zablocki, 15, of Revere and Wakefield. O’Donnell-Zablocki was surprised — and pleased — when her daughter applied to the program and was accepted.

    Furst notes that women are still underrepresented in chemical engineering. She was particularly excited to mentor young women through the program.

    A conductive ink

    The students were charged with identifying materials that offered high conductivity and low resistance, were a bit soluble, and, with the help of a compatible “glue,” could stick to a substrate.

    Furst showed the Malden Catholic crew Shrinky Dinks, a common polymer popularized in the 1970s as a craft material that, when heated in a toaster oven, shrinks to a third of its size and becomes thicker and more rigid. Electrodes based on Shrinky Dinks would cost pennies, making the polymer an ideal, inexpensive material for microbial fuel cells that could monitor, for instance, soil health in low- and middle-income countries.

    “Right now, monitoring soil health is problematic,” Furst says. “You have to collect a sample and bring it back to the lab to analyze in expensive equipment. But if we have these little devices that cost a couple of bucks each, we can monitor soil health remotely.”

    After a crash course in conductive carbon-based inks and solvent glues, the students went off to Malden Catholic to figure out what materials they wanted to try.

    Tong rattled them off: carbon nanotubes, carbon nanofibers, graphite powder, activated carbon. Potential solvents to help glue the carbon to the Shrinky Dinks included nail polish, corn syrup, and embossing ink, to name a few. They tested and retested. When they hit a dead end, they revised their hypotheses.

    They tried using a 3D printed stencil to daub the ink-glue mixture onto the Shrinky Dinks. They hand-painted them. They tried printing stickers. They worked with little squeegees. They tried scooping and dragging the material. Some of their electro-materials either flaked off or wouldn’t stick in the heating process.

    “Embossing ink never dried after baking the Shrinky Dink,” Ogata recalls. “In fact, it’s probably still liquid! And corn syrup had a tendency to boil. Seeing activated carbon ignite or corn syrup boiling in the convection oven was quite the spectacle.”

    “After the electrode was out of the oven and cooled down, we would check the conductivity,” says Tong, who plans to pursue a career in science. “If we saw there was a high conductivity, we got excited and thought those materials worked.”
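
    For readers wondering what that conductivity check amounts to: for a uniform strip of electrode material, conductivity follows from a resistance reading and the strip’s geometry, sigma = L / (R x A). Below is a minimal sketch with made-up dimensions and readings, not the students’ actual electrodes:

```python
def conductivity_s_per_m(resistance_ohm: float, length_m: float, area_m2: float) -> float:
    """Bulk conductivity of a uniform strip: sigma = L / (R * A)."""
    return length_m / (resistance_ohm * area_m2)

# Hypothetical painted trace: 2 cm long, 5 mm wide, about 50 microns thick.
resistance = 120.0               # ohms, an example two-point reading
length = 0.02                    # meters
cross_section = 0.005 * 50e-6    # square meters (width x film thickness)

print(f"{conductivity_s_per_m(resistance, length, cross_section):.0f} S/m")  # ~670 S/m
```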

    The moment of truth came in Furst’s MIT lab, where the students had access to more sophisticated testing equipment. Would their electrodes conduct electricity?

    Many of them didn’t. Tong says, “At first, we were sad, but then Dr. Furst told us that this is what science is, testing repeatedly and sometimes not getting the results we wanted.” Lou agrees. “If we just copy the data left by other scholars and don’t collect and figure it out by ourselves, then it is difficult to be a qualified researcher,” he says.

    Some of the students plan to continue the project one afternoon a week at MIT and as an independent study at Malden Catholic. The long-term goal is to create a field-based soil sensor that employs a bacterium like Shewanella.

    By chance, the students’ very first electrode — made of graphite powder ink and nail polish glue — generated the most current. One of the team’s biggest surprises was how much better black nail polish worked than clear nail polish. It turns out black nail polish contains iron-based pigment — a conductor. The unexpected win took some of the sting out of the failures.

    “They learned a very hard lesson: Your results might be awesome, and things are exciting, but then nothing else might work. And that’s totally fine,” Furst says.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Working to make nuclear energy more competitive

    Assil Halimi has loved science since he was a child, but it was a singular experience at a college internship that stoked his interest in nuclear engineering. As part of work on a conceptual design for an aircraft electric propulsion system, Halimi had to read a chart that compared the energy density of various fuel sources. He was floored to see that the value for uranium was orders of magnitude higher than the rest. “Just a fuel pellet the size of my fingertip can generate as much energy as a ton of coal or 150 gallons of oil,” Halimi points out.
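
    That chart is easy to sanity-check with a back-of-envelope calculation. The sketch below uses rounded, typical figures; the pellet mass, discharge burnup, and heating values are approximations, not numbers from Halimi’s work:

```python
# Back-of-envelope check of the fuel-pellet comparison, using rounded public figures.
pellet_uranium_kg = 0.006              # ~6 g of uranium in a typical fuel pellet
burnup_gwd_per_tonne = 50              # typical light-water reactor discharge burnup
coal_j_per_kg = 24e6                   # ~24 MJ/kg for bituminous coal
oil_j_per_gallon = 146e6               # ~146 MJ per gallon of crude oil

joules_per_tonne_u = burnup_gwd_per_tonne * 1e9 * 86_400   # GW*day -> joules
pellet_energy_j = joules_per_tonne_u * pellet_uranium_kg / 1000.0

print(f"Pellet thermal energy: {pellet_energy_j / 1e9:.0f} GJ")
print(f"Coal equivalent: {pellet_energy_j / coal_j_per_kg / 1000:.1f} tonnes")
print(f"Oil equivalent:  {pellet_energy_j / oil_j_per_gallon:.0f} gallons")
```

    The result, on the order of one tonne of coal and a couple hundred gallons of oil per pellet, lands in the same range as the comparison that caught Halimi’s attention.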

    Having grown up in Algeria, in an economy dominated by oil and gas, Halimi was always aware of energy’s role in fueling growth. But here was a source that showed enormous potential. “The more I read about nuclear, the more I saw its direct relationship with climate change and how nuclear energy can potentially replace the carbonized economy,” Halimi says. “The problem we’re dealing with right now is that the source of energy is not clean. Nuclear [presented itself] as an answer, or at least as a promise that you can dig into,” he says. “I was also seeing the electrification of systems and the economy evolving.”

    A tectonic shift was brewing, and Halimi wanted in.

    Then an electrical engineering major at the Institut National des Sciences Appliquées de Lyon (INSA Lyon), Halimi added nuclear engineering as a second major. Today, the second-year doctoral student in MIT’s Department of Nuclear Science and Engineering (NSE) has expanded on his early curiosity in the field and researches methods of improving the design of small modular reactors. Advised by Professor Koroush Shirvan, Halimi also studies high-burnup fuel, which extracts more energy from the same amount of material.

    A foot in two worlds

    The son of a computer engineer father and a mother who works as a judge, Halimi was born in Algiers and grew up in Cherchell, a small town near the capital. His interest in science grew sharper in middle school; Halimi remembers being a member of the astronomy club. As a middle and high schooler, Halimi traveled to areas with low light pollution to observe the night skies.

    As a teenager, Halimi set his goals high, enrolling in high school in both Algeria and France. Taking classes in Arabic and French, he found a fair amount of overlap between the two curricula, while the differences in the nonscientific classes gave him a better understanding of each country’s cultural perspective. After studying the French curriculum remotely, Halimi graduated with two diplomas. He remembers having to take two baccalaureate exams, which didn’t bother him much, though he did have to miss watching parts of the 2014 World Cup soccer tournament.

    A multidisciplinary approach to engineering

    After high school, Halimi moved to France to study engineering at INSA Lyon. He chose to major in electrical engineering and, ever the pragmatist, also signed up for a bachelor’s degree in math and economics. “You can build a lot of amazing things, but you have to take costs into account to make sure you’re proposing something feasible that can make it in the real world,” Halimi says, explaining his motivation to study economics.

    Wrapping up his bachelor’s in math and economics in two short years, Halimi decided to pursue a double curriculum in electrical and nuclear engineering during his final year of engineering studies. Since his school in Lyon did not offer the double curriculum, Halimi had to move to Paris to study at the French Alternative Energies and Atomic Energy Commission (CEA), part of the University of Paris-Saclay. The summer before he started, he traveled to Japan and toured the Fukushima nuclear power plant.

    Halimi first conducted research at MIT NSE as part of an internship in nuclear engineering when he was still a student in France. He remembers wanting to explore work on reactor design, when an advisor at CEA recommended interning with Shirvan.

    Pragmatism in nuclear energy adoption

    Halimi’s work at MIT NSE focuses on high burnup fuel assessment and small modular reactor (SMR) design.

    Existing nuclear plants have faced stiff competition during the last decade. Running fuel to higher burnup, that is, extracting more energy from each kilogram of fuel before it is discharged, is one way to improve the economic competitiveness of the existing reactor fleet. The challenge is that materials degrade the longer they stay in the reactor. Halimi uses advanced computer simulation tools to evaluate the fuel performance and safety implications of this more efficient mode of operation. At the 2022 TopFuel Light Water Reactor Fuel Performance Conference, he presented a paper describing strategies to achieve higher burnups, and he is now working on a journal paper about this work.
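
    A rough sketch of why burnup matters economically (illustrative numbers only, not Halimi’s results): burnup measures the thermal energy extracted per tonne of uranium, so the fuel a plant must buy, fabricate, and eventually dispose of per unit of electricity falls as burnup rises.

```python
def uranium_kg_per_twh_e(burnup_gwd_per_tonne: float, thermal_efficiency: float = 0.33) -> float:
    """Initial uranium mass needed per TWh of electricity (illustrative approximation)."""
    twh_thermal_per_tonne = burnup_gwd_per_tonne * 24 / 1000   # GW*d -> TWh (thermal)
    twh_electric_per_tonne = twh_thermal_per_tonne * thermal_efficiency
    return 1000.0 / twh_electric_per_tonne                     # kg of uranium per TWh(e)

for burnup in (45, 60, 75):   # hypothetical burnup levels, GWd/tU
    print(f"{burnup} GWd/tU -> {uranium_kg_per_twh_e(burnup):,.0f} kg U per TWh(e)")
```

    In this toy calculation, going from 45 to 60 GWd/tU trims the uranium requirement per unit of electricity by about a quarter, which is the kind of gain Halimi’s fuel-performance and safety assessments are meant to support.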

    Halimi’s research on SMR design is motivated by the industry’s move to smaller plants that take less time to construct. The challenge, he says, is that simply shrinking a reactor sacrifices economies of scale and can leave you with a more expensive product per unit of power. Halimi’s goal is to analyze how smaller reactors can compensate for that loss through improved technical design. Smaller reactors also have other advantages: they can be constructed faster and in series.
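
    The economies-of-scale penalty is often summarized with a power-law scaling rule for capital cost. The sketch below uses an illustrative exponent and reference cost, not figures from Halimi’s analysis, simply to show why a naively shrunk reactor costs more per kilowatt:

```python
def specific_cost_usd_per_kw(power_mw: float,
                             ref_power_mw: float = 1_100,
                             ref_cost_usd_per_kw: float = 6_000,
                             scaling_exponent: float = 0.6) -> float:
    """Overnight capital cost per kW under a classic power-law scaling rule.

    Total cost scales as (P / P_ref)**n with n < 1, so smaller units pay more per
    kW unless design changes or series production compensate. All numbers here
    are illustrative placeholders.
    """
    ref_total_usd = ref_cost_usd_per_kw * ref_power_mw * 1_000
    total_usd = ref_total_usd * (power_mw / ref_power_mw) ** scaling_exponent
    return total_usd / (power_mw * 1_000)

for power in (1_100, 300, 77):   # hypothetical unit sizes, MW(e)
    print(f"{power} MW(e): ~${specific_cost_usd_per_kw(power):,.0f} per kW")
```

    Under this kind of scaling, simplified designs, higher power density, and factory series production are the levers a small reactor needs to claw back competitiveness, which is the trade-off Halimi’s design work examines.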

    Halimi analyzes the fuel performance, core design, thermal hydraulics, and safety of these small reactors. “One efficient way that I particularly assess to improve their economics is high power density operation,” he says. In late 2021, Halimi published a paper on the relationship between cost and reactor power density in the journal Nuclear Engineering and Design, and the research has since been featured in other conference papers.

    When he’s not working, Halimi makes time to play soccer and hopes to get back into astronomy. “I sold all my gear when I moved from Europe so I need to buy new ones at some point,” he says.

    Halimi is convinced that nuclear power will be a serious contender in the energy landscape. “You have to propose something that will make everyone happy,” he says with a laugh, describing work in nuclear science and engineering.

    The work ahead is daunting. “Nuclear power is safe, sustainable, and reliable; now we need to be on time and on budget [to achieve] climate goals,” he says. But Halimi is ready. By addressing both the competitiveness of existing reactors through high-burnup fuels and the design of the next generation of nuclear plants, he is taking a dual-pronged approach to making nuclear energy an economical and viable alternative to carbon-based fuels.