More stories

  • Flow batteries for grid-scale energy storage

    In the coming decades, renewable energy sources such as solar and wind will increasingly dominate the conventional power grid. Because those sources only generate electricity when it’s sunny or windy, ensuring a reliable grid — one that can deliver power 24/7 — requires some means of storing electricity when supplies are abundant and delivering it later when they’re not. And because there can be hours and even days with no wind, for example, some energy storage devices must be able to store a large amount of electricity for a long time.

    A promising technology for performing that task is the flow battery, an electrochemical device that can store hundreds of megawatt-hours of energy — enough to keep thousands of homes running for many hours on a single charge. Flow batteries have the potential for long lifetimes and low costs in part due to their unusual design. In the everyday batteries used in phones and electric vehicles, the materials that store the electric charge are solid coatings on the electrodes. “A flow battery takes those solid-state charge-storage materials, dissolves them in electrolyte solutions, and then pumps the solutions through the electrodes,” says Fikile Brushett, an associate professor of chemical engineering at MIT. That design offers many benefits and poses a few challenges.

    Flow batteries: Design and operation

    A flow battery contains two substances that undergo electrochemical reactions in which electrons are transferred from one to the other. When the battery is being charged, the transfer of electrons forces the two substances into a state that’s “less energetically favorable” as it stores extra energy. (Think of a ball being pushed up to the top of a hill.) When the battery is being discharged, the transfer of electrons shifts the substances into a more energetically favorable state as the stored energy is released. (The ball is set free and allowed to roll down the hill.)

    At the core of a flow battery are two large tanks that hold liquid electrolytes, one positive and the other negative. Each electrolyte contains dissolved “active species” — atoms or molecules that will electrochemically react to release or store electrons. During charging, one species is “oxidized” (releases electrons), and the other is “reduced” (gains electrons); during discharging, they swap roles. Pumps are used to circulate the two electrolytes through separate electrodes, each made of a porous material that provides abundant surfaces on which the active species can react. A thin membrane between the adjacent electrodes keeps the two electrolytes from coming into direct contact and possibly reacting, which would release heat and waste energy that could otherwise be used on the grid.

    When the battery is being discharged, active species on the negative side oxidize, releasing electrons that flow through an external circuit to the positive side, causing the species there to be reduced. The flow of those electrons through the external circuit can power the grid. In addition to the movement of the electrons, “supporting” ions — other charged species in the electrolyte — pass through the membrane to help complete the reaction and keep the system electrically neutral.

    Once all the species have reacted and the battery is fully discharged, the system can be recharged. In that process, electricity from wind turbines, solar farms, and other generating sources drives the reverse reactions. The active species on the positive side oxidize to release electrons back through the wires to the negative side, where they rejoin their original active species. The battery is now reset and ready to send out more electricity when it’s needed. Brushett adds, “The battery can be cycled in this way over and over again for years on end.”

    Benefits and challenges

    A major advantage of this system design is that where the energy is stored (the tanks) is separated from where the electrochemical reactions occur (the so-called reactor, which includes the porous electrodes and membrane). As a result, the capacity of the battery — how much energy it can store — and its power — the rate at which it can be charged and discharged — can be adjusted separately. “If I want to have more capacity, I can just make the tanks bigger,” explains Kara Rodby PhD ’22, a former member of Brushett’s lab and now a technical analyst at Volta Energy Technologies. “And if I want to increase its power, I can increase the size of the reactor.” That flexibility makes it possible to design a flow battery to suit a particular application and to modify it if needs change in the future.
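
    To make that decoupling concrete, here is a minimal back-of-the-envelope sketch in Python. The concentration, cell voltage, and current density are illustrative assumptions, not values from any particular system; the point is simply that stored energy scales with tank volume while power scales with reactor size.

    ```python
    # Illustrative only: capacity scales with the tanks, power with the reactor.
    F = 96485  # Faraday constant, coulombs per mole of electrons

    def tank_energy_kwh(tank_volume_m3, conc_mol_per_liter=1.5,
                        electrons_per_molecule=1, cell_voltage=1.4):
        """Ideal energy stored in a tank of electrolyte, ignoring losses."""
        moles = conc_mol_per_liter * 1000 * tank_volume_m3    # 1000 liters per m^3
        charge = moles * electrons_per_molecule * F           # coulombs
        return charge * cell_voltage / 3.6e6                  # joules -> kWh

    def reactor_power_kw(electrode_area_m2, current_density_a_per_m2=1000,
                         cell_voltage=1.4):
        """Ideal stack power, set by electrode area rather than tank size."""
        return electrode_area_m2 * current_density_a_per_m2 * cell_voltage / 1000

    print(tank_energy_kwh(10), tank_energy_kwh(20))  # ~563 kWh vs ~1,126 kWh
    print(reactor_power_kw(50))                      # ~70 kW in both cases
    ```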

    However, the electrolyte in a flow battery can degrade with time and use. While all batteries experience electrolyte degradation, flow batteries in particular suffer from a relatively faster form of degradation called “crossover.” The membrane is designed to allow small supporting ions to pass through and block the larger active species, but in reality, it isn’t perfectly selective. Some of the active species in one tank can sneak through (or “cross over”) and mix with the electrolyte in the other tank. The two active species may then chemically react, effectively discharging the battery. Even if they don’t, some of the active species is no longer in the first tank where it belongs, so the overall capacity of the battery is lower.

    Recovering capacity lost to crossover requires some sort of remediation — for example, replacing the electrolyte in one or both tanks or finding a way to reestablish the “oxidation states” of the active species in the two tanks. (Oxidation state is a number assigned to an atom or compound to tell if it has more or fewer electrons than it has when it’s in its neutral state.) Such remediation is more easily — and therefore more cost-effectively — executed in a flow battery because all the components are more easily accessed than they are in a conventional battery.

    The state of the art: Vanadium

    A critical factor in designing flow batteries is the selected chemistry. The two electrolytes can contain different chemicals, but today the most widely used setup has vanadium in different oxidation states on the two sides. That arrangement addresses the two major challenges with flow batteries.

    First, vanadium doesn’t degrade. “If you put 100 grams of vanadium into your battery and you come back in 100 years, you should be able to recover 100 grams of that vanadium — as long as the battery doesn’t have some sort of a physical leak,” says Brushett.

    And second, if some of the vanadium in one tank flows through the membrane to the other side, there is no permanent cross-contamination of the electrolytes, only a shift in the oxidation states, which is easily remediated by re-balancing the electrolyte volumes and restoring the oxidation state via a minor charge step. Most of today’s commercial systems include a pipe connecting the two vanadium tanks that automatically transfers a certain amount of electrolyte from one tank to the other when the two get out of balance.

    However, as the grid becomes increasingly dominated by renewables, more and more flow batteries will be needed to provide long-duration storage. Demand for vanadium will grow, and that will be a problem. “Vanadium is found around the world but in dilute amounts, and extracting it is difficult,” says Rodby. “So there are limited places — mostly in Russia, China, and South Africa — where it’s produced, and the supply chain isn’t reliable.” As a result, vanadium prices are both high and extremely volatile — an impediment to the broad deployment of the vanadium flow battery.

    Beyond vanadium

    The question then becomes: If not vanadium, then what? Researchers worldwide are trying to answer that question, and many are focusing on promising chemistries using materials that are more abundant and less expensive than vanadium. But it’s not that easy, notes Rodby. While other chemistries may offer lower initial capital costs, they may be more expensive to operate over time. They may require periodic servicing to rejuvenate one or both of their electrolytes. “You may even need to replace them, so you’re essentially incurring that initial (low) capital cost again and again,” says Rodby.

    Indeed, comparing the economics of different options is difficult because “there are so many dependent variables,” says Brushett. “A flow battery is an electrochemical system, which means that there are multiple components working together in order for the device to function. Because of that, if you are trying to improve a system — performance, cost, whatever — it’s very difficult because when you touch one thing, five other things change.”

    So how can we compare these new and emerging chemistries — in a meaningful way — with today’s vanadium systems? And how do we compare them with one another, so we know which ones are more promising and what the potential pitfalls are with each one? “Addressing those questions can help us decide where to focus our research and where to invest our research and development dollars now,” says Brushett.

    Techno-economic modeling as a guide

    A good way to understand and assess the economic viability of new and emerging energy technologies is techno-economic modeling. With certain models, one can account for the capital cost of a defined system and — based on the system’s projected performance — the operating costs over time, generating a total cost discounted over the system’s lifetime. That result allows a potential purchaser to compare options on a “levelized cost of storage” basis.
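
    As a rough illustration of the levelized-cost idea, the sketch below discounts capital and operating costs over a project lifetime and divides by the discounted energy delivered. The capital cost, operating cost, lifetime, and discount rate are placeholder values chosen for the example, not figures from the MIT study.

    ```python
    # Minimal levelized-cost-of-storage (LCOS) sketch: ratio of discounted
    # lifetime costs to discounted lifetime energy delivered. Inputs are
    # placeholders for illustration.
    def levelized_cost_of_storage(capital_cost, annual_operating_cost,
                                  annual_energy_kwh, lifetime_years=20,
                                  discount_rate=0.07):
        costs = float(capital_cost)
        energy = 0.0
        for year in range(1, lifetime_years + 1):
            factor = (1 + discount_rate) ** year
            costs += annual_operating_cost / factor
            energy += annual_energy_kwh / factor
        return costs / energy  # dollars per kWh delivered

    # $2M up front, $50k/yr operations (including electrolyte remediation),
    # 1 GWh delivered per year over 20 years:
    print(round(levelized_cost_of_storage(2_000_000, 50_000, 1_000_000), 3))  # ~0.239
    ```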

    Using that approach, Rodby developed a framework for estimating the levelized cost for flow batteries. The framework includes a dynamic physical model of the battery that tracks its performance over time, including any changes in storage capacity. The calculated operating costs therefore cover all services required over decades of operation, including the remediation steps taken in response to species degradation and crossover.

    Analyzing all possible chemistries would be impossible, so the researchers focused on certain classes. First, they narrowed the options down to those in which the active species are dissolved in water. “Aqueous systems are furthest along and are most likely to be successful commercially,” says Rodby. Next, they limited their analyses to “asymmetric” chemistries; that is, setups that use different materials in the two tanks. (As Brushett explains, vanadium is unusual in that it can serve as the “parent” material in both tanks; for most other chemistries, that isn’t feasible.) Finally, they divided the possibilities into two classes: species that have a finite lifetime and species that have an infinite lifetime; that is, ones that degrade over time and ones that don’t.

    Results from their analyses aren’t clear-cut; there isn’t a particular chemistry that leads the pack. But they do provide general guidelines for choosing and pursuing the different options.

    Finite-lifetime materials

    While vanadium is a single element, the finite-lifetime materials are typically organic molecules made up of multiple elements, among them carbon. One advantage of organic molecules is that they can be synthesized in a lab and at an industrial scale, and the structure can be altered to suit a specific function. For example, the molecule can be made more soluble, so more will be present in the electrolyte and the energy density of the system will be greater; or it can be made bigger so it won’t fit through the membrane and cross to the other side. Finally, organic molecules can be made from simple, abundant, low-cost elements, potentially even waste streams from other industries.

    Despite those attractive features, there are two concerns. First, organic molecules would probably need to be made in a chemical plant, and upgrading the low-cost precursors as needed may prove to be more expensive than desired. Second, these molecules are large chemical structures that aren’t always very stable, so they’re prone to degradation. “So along with crossover, you now have a new degradation mechanism that occurs over time,” says Rodby. “Moreover, you may figure out the degradation process and how to reverse it in one type of organic molecule, but the process may be totally different in the next molecule you work on, making the discovery and development of each new chemistry require significant effort.”

    Research is ongoing, but at present, Rodby and Brushett find it challenging to make the case for the finite-lifetime chemistries, mostly based on their capital costs. Citing studies that have estimated the manufacturing costs of these materials, Rodby believes that current options cannot be made at low enough costs to be economically viable. “They’re cheaper than vanadium, but not cheap enough,” says Rodby.

    The results send an important message to researchers designing new chemistries using organic molecules: Be sure to consider operating challenges early on. Rodby and Brushett note that it’s often not until way down the “innovation pipeline” that researchers start to address practical questions concerning the long-term operation of a promising-looking system. The MIT team recommends that understanding the potential decay mechanisms and how they might be cost-effectively reversed or remediated should be an upfront design criterion.

    Infinite-lifetime species

    The infinite-lifetime species include materials that — like vanadium — are not going to decay. The most likely candidates are other metals; for example, iron or manganese. “These are commodity-scale chemicals that will certainly be low cost,” says Rodby.

    Here, the researchers found that there’s a wider “design space” of feasible options that could compete with vanadium. But there are still challenges to be addressed. While these species don’t degrade, they may trigger side reactions when used in a battery. For example, many metals catalyze the formation of hydrogen, which reduces efficiency and adds another form of capacity loss. While there are ways to deal with the hydrogen-evolution problem, a sufficiently low-cost and effective solution for high rates of this side reaction is still needed.

    In addition, crossover is still a problem requiring remediation steps. The researchers evaluated two methods of dealing with crossover in systems combining two types of infinite-lifetime species.

    The first is the “spectator strategy.” Here, both of the tanks contain both active species. Explains Brushett, “You have the same electrolyte mixture on both sides of the battery, but only one of the species is ever working and the other is a spectator.” As a result, crossover can be remediated in similar ways to those used in the vanadium flow battery. The drawback is that half of the active material in each tank is unavailable for storing charge, so it’s wasted. “You’ve essentially doubled your electrolyte cost on a per-unit energy basis,” says Rodby.

    The second method calls for making a membrane that is perfectly selective: It must let through only the supporting ion needed to maintain the electrical balance between the two sides. However, that approach increases cell resistance, hurting system efficiency. In addition, the membrane would need to be made of a special material — say, a ceramic composite — that would be extremely expensive based on current production methods and scales. Rodby notes that work on such membranes is under way, but the cost and performance metrics are “far off from where they’d need to be to make sense.”

    Time is of the essence

    The researchers stress the urgency of the climate change threat and the need to have grid-scale, long-duration storage systems at the ready. “There are many chemistries now being looked at,” says Rodby, “but we need to hone in on some solutions that will actually be able to compete with vanadium and can be deployed soon and operated over the long term.”

    The techno-economic framework is intended to help guide that process. It can calculate the levelized cost of storage for specific designs for comparison with vanadium systems and with one another. It can identify critical gaps in knowledge related to long-term operation or remediation, thereby identifying technology development or experimental investigations that should be prioritized. And it can help determine whether the trade-off between lower upfront costs and greater operating costs makes sense in these next-generation chemistries.

    The good news, notes Rodby, is that advances achieved in research on one type of flow battery chemistry can often be applied to others. “A lot of the principles learned with vanadium can be translated to other systems,” she says. She believes that the field has advanced not only in understanding but also in the ability to design experiments that address problems common to all flow batteries, thereby helping to prepare the technology for its important role in grid-scale storage in the future.

    This research was supported by the MIT Energy Initiative. Kara Rodby PhD ’22 was supported by an ExxonMobil-MIT Energy Fellowship in 2021-22.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • An interdisciplinary approach to fighting climate change through clean energy solutions

    In early 2021, the U.S. government set an ambitious goal: to decarbonize its power grid, the system that generates and transmits electricity throughout the country, by 2035. It’s an important goal in the fight against climate change, and will require a switch from current, greenhouse-gas producing energy sources (such as coal and natural gas), to predominantly renewable ones (such as wind and solar).

    Getting the power grid to zero carbon will be a challenging undertaking, as Audun Botterud, a principal research scientist at the MIT Laboratory for Information and Decision Systems (LIDS) who has long been interested in the problem, knows well. It will require building lots of renewable energy generators and new infrastructure; designing better technology to capture, store, and carry electricity; creating the right regulatory and economic incentives; and more. Decarbonizing the grid also presents many computational challenges, which is where Botterud’s focus lies. Botterud has modeled different aspects of the grid — the mechanics of energy supply, demand, and storage, and electricity markets — where economic factors can have a huge effect on how quickly renewable solutions get adopted.

    On again, off again

    A major challenge of decarbonization is that the grid must be designed and operated to reliably meet demand. Using renewable energy sources complicates this, as wind and solar power depend on an infamously volatile system: the weather. A sunny day becomes gray and blustery, and wind turbines get a boost but solar farms go idle. This will make the grid’s energy supply variable and hard to predict. Additional resources, including batteries and backup power generators, will need to be incorporated to regulate supply. Extreme weather events, which are becoming more common with climate change, can further strain both supply and demand. Managing a renewables-driven grid will require algorithms that can minimize uncertainty in the face of constant, sometimes random fluctuations to make better predictions of supply and demand, guide how resources are added to the grid, and inform how those resources are committed and dispatched across the entire United States.

    “The problem of managing supply and demand in the grid has to happen every second throughout the year, and given how much we rely on electricity in society, we need to get this right,” Botterud says. “You cannot let the reliability drop as you increase the amount of renewables, especially because I think that will lead to resistance towards adopting renewables.”

    That is why Botterud feels fortunate to be working on the decarbonization problem at LIDS — even though a career here is not something he had originally planned. Botterud’s first experience with MIT came during his time as a graduate student in his home country of Norway, when he spent a year as a visiting student with what is now called the MIT Energy Initiative. He might never have returned, except that while at MIT, Botterud met his future wife, Bilge Yildiz. The pair both ended up working at the Argonne National Laboratory outside of Chicago, with Botterud focusing on challenges related to power systems and electricity markets. Then Yildiz got a faculty position at MIT, where she is a professor of nuclear and materials science and engineering. Botterud moved back to the Cambridge area with her and continued to work for Argonne remotely, but he also kept an eye on local opportunities. Eventually, a position at LIDS became available, and Botterud took it, while maintaining his connections to Argonne.

    “At first glance, it may not be an obvious fit,” Botterud says. “My work is very focused on a specific application, power system challenges, and LIDS tends to be more focused on fundamental methods to use across many different application areas. However, being at LIDS, my lab [the Energy Analytics Group] has access to the most recent advances in these fundamental methods, and we can apply them to power and energy problems. Other people at LIDS are working on energy too, so there is growing momentum to address these important problems.”

    Weather, space, and time

    Much of Botterud’s research involves optimization, using mathematical programming to compare alternatives and find the best solution. Common computational challenges include dealing with large geographical areas that contain regions with different weather, different types and quantities of renewable energy available, and different infrastructure and consumer needs — such as the entire United States. Another challenge is the need for granular time resolution, sometimes even down to the sub-second level, to account for changes in energy supply and demand.
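
    As a toy example of the kind of mathematical programming involved, the sketch below dispatches three generators to meet one hour of demand at minimum cost. A real planning model adds transmission networks, storage, uncertainty, and thousands of time steps and regions; the costs, capacities, and demand here are invented for illustration.

    ```python
    # Toy single-hour economic dispatch: meet demand at minimum cost,
    # respecting each generator's capacity. All numbers are invented.
    from scipy.optimize import linprog

    costs      = [0.0, 25.0, 60.0]   # $/MWh: wind, combined-cycle gas, gas peaker
    capacities = [300, 400, 200]     # MW available this hour
    demand     = 550                 # MW

    result = linprog(
        c=costs,                            # minimize total generation cost
        A_eq=[[1, 1, 1]], b_eq=[demand],    # generation must equal demand
        bounds=list(zip([0, 0, 0], capacities)),
    )
    print(result.x)    # [300. 250.   0.] -> use wind first, then the cheapest gas
    print(result.fun)  # 6250.0 dollars for the hour
    ```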

    Often, Botterud’s group will use decomposition to solve such large problems piecemeal and then stitch together solutions. However, it’s also important to consider systems as a whole. For example, in a recent paper, Botterud’s lab looked at the effect of building new transmission lines as part of national decarbonization. They modeled solutions assuming coordination at the state, regional, or national level, and found that the more regions coordinate to build transmission infrastructure and distribute electricity, the less they will need to spend to reach zero carbon.

    In other projects, Botterud uses game theory approaches to study strategic interactions in electricity markets. For example, he has designed agent-based models to analyze electricity markets. These assume each actor will make strategic decisions in their own best interest and then simulate interactions between them. Interested parties can use the models to see what would happen under different conditions and market rules, which may lead companies to make different investment decisions, or governing bodies to issue different regulations and incentives. These choices can shape how quickly the grid gets decarbonized.
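
    To give a flavor of the agent-based approach, here is a deliberately simple sketch: each generator bids its marginal cost plus a markup, the market clears at the marginal unit's bid, and each agent nudges its markup up or down depending on whether it was dispatched. The strategy rule, costs, and capacities are invented; real market models are far richer.

    ```python
    # Minimal agent-based electricity market sketch (all numbers invented).
    class Generator:
        def __init__(self, name, marginal_cost, capacity_mw):
            self.name, self.cost, self.capacity = name, marginal_cost, capacity_mw
            self.markup = 0.0

        def bid(self):
            return self.cost + self.markup

        def learn(self, dispatched):
            # Naive strategy: raise the markup when dispatched, trim it when not.
            self.markup = max(0.0, self.markup + (1.0 if dispatched else -2.0))

    def clear_market(generators, demand_mw):
        price, remaining = 0.0, demand_mw
        for g in sorted(generators, key=lambda g: g.bid()):
            if remaining <= 0:
                g.learn(False)
                continue
            remaining -= min(g.capacity, remaining)
            price = g.bid()            # last accepted bid sets the clearing price
            g.learn(True)
        return price

    gens = [Generator("wind", 0, 300), Generator("ccgt", 25, 400),
            Generator("peaker", 60, 200)]
    for day in range(5):
        print(day, clear_market(gens, demand_mw=550))  # clearing price creeps up
    ```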

    Botterud is also collaborating with researchers in MIT’s chemical engineering department who are working on improving battery storage technologies. Batteries will help manage variable renewable energy supply by capturing surplus energy during periods of high generation to release during periods of insufficient generation. Botterud’s group models the sort of charge cycles that batteries are likely to experience in the power grid, so that chemical engineers in the lab can test their batteries’ abilities in more realistic scenarios. In turn, this also leads to a more realistic representation of batteries in power system optimization models.

    These are only some of the problems that Botterud works on. He enjoys the challenge of tackling a spectrum of different projects, collaborating with everyone from engineers to architects to economists. He also believes that such collaboration leads to better solutions. The problems created by climate change are myriad and complex, and solving them will require researchers to cooperate and explore.

    “In order to have a real impact on interdisciplinary problems like energy and climate,” Botterud says, “you need to get outside of your research sweet spot and broaden your approach.”

  • Michael Howland gives wind energy a lift

    Michael Howland was in his office at MIT, watching real-time data from a wind farm 7,000 miles away in northwest India, when he noticed something odd: Some of the turbines weren’t producing the expected amount of electricity.

    Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering, studies the physics of the Earth’s atmosphere and how that information can optimize renewable energy systems. To accomplish this, he and his team develop and use predictive models, supercomputer simulations, and real-life data from wind farms, such as the one in India.

    Wind power is one of the most cost-competitive and resilient power sources in the world, the Global Wind Energy Council reported last year. The year 2020 saw record growth in wind power capacity, thanks to a surge of installations in China and the United States. Yet wind power needs to grow three times faster in the coming decade to address the worst impacts of climate change and achieve federal and state climate goals, the report says.

    “Optimal wind farm design and the resulting cost of energy are dependent on the wind,” Howland says. “But wind farms are often sited and designed based on short-term historical climate records.”

    In October 2021, Howland received a Seed Fund grant from the MIT Energy Initiative (MITEI) to account for how climate change might affect the wind of the future. “Our initial results suggest that considering the uncertainty in the winds in the design and operation of wind farms can lead to more reliable energy production,” he says.

    Most recently, Howland and his team came up with a model that predicts the power produced by each individual turbine based on the physics of the wind farm as a whole. The model can inform decisions that may boost a farm’s overall output.

    The state of the planet

    Growing up in a suburb of Philadelphia as the son of neuroscientists, Howland didn’t have an especially outdoorsy childhood. Later, he’d become an avid hiker with a deep appreciation for nature, but a ninth-grade class assignment made him think about the state of the planet, perhaps for the first time.

    A history teacher had asked the class to write a report on climate change. “I remember arguing with my high school classmates about whether humans were the leading cause of climate change, but the teacher didn’t want to get into that debate,” Howland recalls. “He said climate change was happening, whether or not you accept that it’s anthropogenic, and he wanted us to think about the impacts of global warming, and solutions. I was one of his vigorous defenders.”

    As part of a research internship after his first year of college, Howland visited a wind farm in Iowa, where wind produces more than half of the state’s electricity. “The turbines look tall from the highway, but when you’re underneath them, you’re really struck by their scale,” he says. “That’s where you get a sense of how colossal they really are.” (Not a fan of heights, Howland opted not to climb the turbine’s internal ladder to snap a photo from the top.)

    After receiving an undergraduate degree from Johns Hopkins University and master’s and PhD degrees in mechanical engineering from Stanford University, he joined MIT’s Department of Civil and Environmental Engineering to focus on the intersection of fluid mechanics, weather, climate, and energy modeling. His goal is to enhance renewable energy systems.

    An added bonus to being at MIT is the opportunity to inspire the next generation, much like his ninth-grade history teacher did for him. Howland’s graduate-level introduction to the atmospheric boundary layer is geared primarily to engineers and physicists, but as he sees it, climate change is such a multidisciplinary and complex challenge that “every skill set that exists in human society can be relevant to mitigating it.”

    “There are the physics and engineering questions that our lab primarily works on, but there are also questions related to social sciences, public acceptance, policymaking, and implementation,” he says. “Careers in renewable energy are rapidly growing. There are far more job openings than we can hire for right now. In many areas, we don’t yet have enough people to address the challenges in renewable energy and climate change mitigation that need to be solved.

    “I encourage my students — really, everyone I interact with — to find a way to impact the climate change problem,” he says.

    Unusual conditions

    In fall 2021, Howland was trying to explain the odd data coming in from India.

    Based on sensor feedback, wind turbines’ software-driven control systems constantly tweak the speed and the angle of the blades, and what’s known as yaw — the orientation of the giant blades in relation to the wind direction.

    Existing utility-scale turbines are controlled “greedily,” which means that every turbine in the farm automatically turns into the wind to maximize its own power production.

    Though the turbines in the front row of the Indian wind farm were reacting appropriately to the wind direction, their power output was all over the place. “Not what we would expect based on the existing models,” Howland says.

    These massive turbine towers stood at 100 meters, about the length of a football field, with blades the length of an Olympic swimming pool. At their highest point, the blade tips lunged almost 200 meters into the sky.

    Then there’s the speed of the blades themselves: The tips move many times faster than the wind, around 80 to 100 meters per second — up to a quarter or a third of the speed of sound.

    Using a state-of-the-art sensor that measures the speed of incoming wind before it interacts with the massive rotors, Howland’s team saw an unexpectedly complex airflow effect. He covers the phenomenon in his class. The data coming in from India, he says, displayed “quite remarkable wind conditions stemming from the effects of Earth’s rotation and the physics of buoyancy that you don’t always see.”

    Traditionally, wind turbines operate in the lowest 10 percent of the atmospheric boundary layer — the so-called surface layer — which is affected primarily by ground conditions. The Indian turbines, Howland realized, were operating in regions of the atmosphere that turbines haven’t historically accessed.

    Trending taller

    Howland knew that airflow interactions can persist for kilometers. The interaction of high winds with the front-row turbines was generating wakes in the air similar to the way boats generate wakes in the water.

    To address this, Howland’s model trades off the efficiency of upwind turbines to benefit downwind ones. By misaligning some of the upwind turbines in certain conditions, the downwind units experience less wake turbulence, increasing the overall energy output of the wind farm by as much as 1 percent to 3 percent, without requiring additional costs. If a 1.2 percent energy increase were applied to the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines — enough to power about 3 million homes.
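
    The fleet-wide equivalence can be reproduced with rough numbers. In the sketch below, the global installed capacity, capacity factor, average turbine rating, and household consumption are round-number assumptions chosen for illustration, not figures from the study.

    ```python
    # Rough arithmetic behind "a 1.2 percent gain = thousands of turbines".
    # All inputs are assumed round numbers.
    global_wind_capacity_mw = 840_000   # approximate installed capacity worldwide
    capacity_factor = 0.35              # assumed fleet-average utilization
    hours_per_year = 8760

    annual_generation_mwh = global_wind_capacity_mw * capacity_factor * hours_per_year
    extra_energy_mwh = 0.012 * annual_generation_mwh        # a 1.2% fleet-wide gain

    avg_turbine_mw = 2.8                # assumed average turbine rating
    per_turbine_mwh = avg_turbine_mw * capacity_factor * hours_per_year
    household_mwh = 10                  # assumed annual use per home

    print(round(extra_energy_mwh / per_turbine_mwh))         # ~3,600 turbines
    print(round(extra_energy_mwh / household_mwh / 1e6, 1))  # ~3.1 million homes
    ```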

    Even a modest boost could mean fewer turbines generating the same output, or the ability to place more units into a smaller space, because negative interactions between the turbines can be diminished.

    Howland says the model can predict potential benefits in a variety of scenarios at different types of wind farms. “The part that’s important and exciting is that it’s not just particular to this wind farm. We can apply the collective control method across the wind farm fleet,” he says, which is growing taller and wider.

    By 2035, the average hub height for offshore turbines in the United States is projected to grow from 100 meters to around 150 meters — approaching the height of the Washington Monument.

    “As we continue to build larger wind turbines and larger wind farms, we need to revisit the existing practice for their design and control,” Howland says. “We can use our predictive models to ensure that we build and operate the most efficient renewable generators possible.”

    Looking to the future

    Howland and other climate watchers have reason for optimism with the passage in August 2022 of the Inflation Reduction Act, which calls for a significant investment in domestic energy production and for reducing carbon emissions by roughly 40 percent by 2030.

    But Howland says the act itself isn’t sufficient. “We need to continue pushing the envelope in research and development as well as deployment,” he says. The model he created with his team can help, especially for offshore wind farms experiencing low wind turbulence and larger wake interactions.

    Offshore wind can face challenges of public acceptance. Howland believes that researchers, policymakers, and the energy industry need to do more to get the public on board by addressing concerns through open public dialogue, outreach, and education.

    Howland once wrote and illustrated a children’s book, inspired by Dr. Seuss’s “The Lorax,” that focused on renewable energy. Howland recalls his “really terrible illustrations,” but he believes he was onto something. “I was having some fun helping people interact with alternative energy in a more natural way at an earlier age,” he says, “and recognize that these are not nefarious technologies, but remarkable feats of human ingenuity.”

  • An education in climate change

    Several years ago, Christopher Knittel’s father, then a math teacher, shared a mailing he had received at his high school. When he opened the packet, alarm bells went off for Knittel, who is the George P. Shultz Professor of Energy Economics at the MIT Sloan School of Management and the deputy director for policy at the MIT Energy Initiative (MITEI). “It was a slickly produced package of materials purporting to show how to teach climate change,” he says. “In reality, it was a thinly veiled attempt to kindle climate change denial.”

    Knittel was especially concerned to learn that this package had been distributed to schools nationwide. “Many teachers in search of information on climate change might use this material because they are not in a position to judge its scientific validity,” says Knittel, who is also the faculty director of the MIT Center for Energy and Environmental Policy Research (CEEPR). “I decided that MIT, which is committed to true science, was in the perfect position to develop its own climate change curriculum.”

    Today, Knittel is spearheading the Climate Action Through Education (CATE) program, a curriculum rolling out in pilot form this year in more than a dozen Massachusetts high schools, and eventually in high schools across the United States. To spur its broad adoption, says Knittel, the CATE curriculum features a unique suite of attributes: the creation of climate-based lessons for a range of disciplines beyond science, adherence to state-based education standards to facilitate integration into established curricula, material connecting climate change impacts to specific regions, and opportunities for students to explore climate solutions.

    CATE aims to engage both students and teachers in a subject that can be overwhelming. “We will be honest about the threats posed by climate change but also give students a sense of agency that they can do something about this,” says Knittel. “And for the many teachers — especially non-science teachers — starved for knowledge and background material, CATE offers resources to give them confidence to implement our curriculum.”

    Partnering with teachers

    From the outset, CATE sought guidance and hands-on development help from educators. Project manager Aisling O’Grady surveyed teachers to learn about their experiences teaching about climate and to identify the kinds of resources they lacked. She networked with MIT’s K-12 education experts and with Antje Danielson, MITEI director of education, “bouncing ideas off of them to shape the direction of our effort,” she says.

    O’Grady gained two critical insights from this process: “I realized that we needed practicing high school teachers as curriculum developers and that they had to represent different subject areas, because climate change is inherently interdisciplinary,” she says. This echoes the philosophy behind MITEI’s Energy Studies minor, she remarks, which includes classes from MIT’s different schools. “While science helps us understand and find solutions for climate change, it touches so many other areas, from economics, policy, environmental justice and politics, to history and literature.”

    In line with this thinking, CATE recruited Massachusetts teachers representing key subject areas in the high school curriculum: Amy Block, a full-time math teacher, and Lisa Borgatti, a full-time science teacher, both at the Governor’s Academy in Byfield; and Kathryn Teissier du Cros, a full-time language arts teacher at Newton North High School.

    The fourth member of this cohort, Michael Kozuch, is a full-time history teacher at Newton South High School, where he has worked for 24 years. Kozuch became engaged with environmental issues 15 years ago, introducing an elective in sustainability at Newton South. He serves on the coordinating committee for the Climate Action Network at the Massachusetts Teachers Association. He also is president of Earth Day Boston and organized Boston’s 50th anniversary celebration of Earth Day. When he learned that MIT was seeking teachers to help develop a climate education curriculum, he immediately applied.

    “I’ve heard time and again from teachers across the state that they want to incorporate climate change into the curriculum but don’t know how to make it work, given lesson plans and schedules geared toward preparing students for specific tests,” says Kozuch. “I knew that for a climate curriculum to succeed, it had to be part of an integrated approach.”

    Using climate as a lens

    Over the course of a year, Kozuch and fellow educators created units that fit into their pre-existing syllabi but were woven through with relevant climate change themes. Kozuch already had some experience in this vein, describing the role of the Industrial Revolution in triggering the use of fossil fuels and the greenhouse gas emissions that resulted. For CATE, Kozuch explored additional ways of shifting focus in covering U.S. history. There are, for instance, lessons looking at westward expansion in terms of land use, expulsion of Indigenous people, and environmental justice, and at the Baby Boom period and the emergence of the environmental movement.

    In English/language arts, there are units dedicated to explaining terms used by scientists and policymakers, such as “anthropogenic,” as well as lessons devoted to climate change fiction and to student-originated sustainability projects.

    The science and math classes work independently but also dovetail. For instance, there are science lessons that demystify the greenhouse effect, utilizing experiments to track fossil fuel emissions, which link to math lessons that calculate and graph the average rate of change of global carbon emissions. To make these classes even more relevant, there are labs where students compare carbon emissions in Massachusetts to those of a neighboring state, and where they determine the environmental and economic costs of plugging in electric devices in their own homes.

    Throughout this curriculum-shaping process, O’Grady and the teachers sought feedback from MIT faculty from a range of disciplines, including David McGee, associate professor in the Department of Earth, Atmospheric and Planetary Sciences. With the help of CATE undergraduate researcher Heidi Li ’22, the team held a focus group with the Sustainable Energy Alliance, an undergraduate student club. In spring 2022, CATE convened a professional development workshop in collaboration with the Massachusetts Teachers Association Climate Action Network, Earth Day Boston, and MIT’s Office of Government and Community Relations, sponsored by the Beker Foundation, to evaluate 15 discrete CATE lessons. One of the workshop participants, Gary Smith, a teacher from St. John’s Preparatory School in Danvers, Massachusetts, signed on as a volunteer science curriculum developer.

    “We had a diverse pool of teachers who thought the lessons were fantastic, but among their suggestions noted that their student cohorts included new English speakers, who needed simpler language and more pictures,” says O’Grady. “This was extremely useful to us, and we revised the curriculum because we want to reach students at every level of learning.”

    Reaching all the schools

    Now, the CATE curriculum is in the hands of a cohort of Massachusetts teachers. Each of these educators will test one or more of the lessons and lab activities over the next year, checking in regularly with MIT partners to report on their classroom experiences. The CATE team is building a Climate Education Resource Network of MIT graduate students, postdocs, and research staff who can answer teachers’ specific climate questions and help them find additional resources or datasets. Additionally, teachers will have the opportunity to attend two in-person cohort meetings and be paired with graduate student “climate advisors.”

    In spring 2023, in honor of Earth Day, O’Grady and Knittel want to bring CATE first adopters — high school teachers, students, and their families — to campus. “We envision professors giving mini lectures, youth climate groups discussing how to get involved in local actions, and our team members handing out climate change packets to students to spark conversations with their families at home,” says O’Grady.

    By creating a positive experience around their curriculum in these pilot schools, the CATE team hopes to promote its dissemination to many more Massachusetts schools in 2023. The team plans on enhancing lessons, offering more paths to integration in high school studies, and creating a companion resource website for teachers. Knittel wants to establish footholds in school after school, in Massachusetts and beyond.

    “I plan to spend a lot of my time convincing districts and states to adopt,” he says. “If one teacher tells another that the curriculum is useful, with touchpoints in different disciplines, that’s how we get a foot in the door.”

    Knittel is not shying away from places where “climate change is a politicized topic.” He hopes to develop the curriculum in partnership with universities in states where there might be resistance to including such lessons in schools. Although his day job involves computing household-level carbon footprints, determining the relationship between driving behavior and the price of gasoline, and promoting wise climate policy, Knittel plans to push CATE as far as he can. “I want this curriculum to be adopted by everybody — that’s my goal,” he says.

    “In one sense, I’m not the natural person for this job,” he admits. “But I share the mission and passion of MITEI and CEEPR for decarbonizing our economy in ways that are socially equitable and efficient, and part of doing that is educating Americans about the actual costs and consequences of climate change.”

    The CATE program is sponsored by MITEI, CEEPR, and the MIT Vice President for Research.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Shrinky Dinks, nail polish, and smelly bacteria

    In a lab on the fourth floor of MIT’s Building 56, a group of Massachusetts high school students gathered around a device that measures conductivity.

    Vincent Nguyen, 15, from Saugus, thought of the times the material on their sample electrode flaked off the moment they took it out of the oven. Or how the electrode would fold weirdly onto itself. The big fails were kind of funny, but discouraging. The students had worked for a month, experimenting with different materials, and 17-year-old Brianna Tong of Malden wondered if they’d finally gotten it right: Would their electrode work well enough to power a microbial fuel cell?

    The students secured their electrode with alligator clips, someone hit start, and the teens watched anxiously as the device searched for even the faintest electrical current.

    Capturing electrons from bacteria

    Last July, Tong, Nguyen, and six other students from Malden Catholic High School commuted between the lab of MIT chemical engineer Ariel L. Furst and their school’s chemistry lab. Their goal was to fashion electrodes for low-cost microbial fuel cells — miniature bioreactors that generate small amounts of electricity by capturing electrons transferred from living microbes. These devices can double as electrochemical sensors.

    Furst, the Paul M. Cook Career Development Professor of Chemical Engineering, uses a mix of electrochemistry, microbial engineering, and materials science to address challenges in human health and clean energy. “The goal of all of our projects is to increase sustainability, clean energy, and health equity globally,” she says.

    Electrochemical sensors are powerful, sensitive detection and measurement tools. Typically, their electrodes need to be built in precisely engineered environments. “Thinking about ways of making devices without needing a cleanroom is important for coming up with inexpensive devices that can be deployed in low-resource settings under non-ideal conditions,” Furst says.

    For 17-year-old Angelina Ang of Everett, the project illuminated the significance of “coming together to problem-solve for a healthier and more sustainable earth,” she says. “It made me realize that we hold the answers to fix our dying planet.”

    With the help of a children’s toy called Shrinky Dinks, carbon-based materials, nail polish, and a certain smelly bacterium, the students got — literally — a trial-by-fire introduction to the scientific method. At one point, one of their experimental electrodes burst into flames. Other results were more promising.

    The students took advantage of the electrical properties of a bacterium — Shewanella oneidensis — that’s been called nature’s microscopic power plant. As part of their metabolism, Shewanella oneidensis generate electricity by oxidizing organic matter. In essence, they spit out electrons. Put enough together, and you get a few milliamps.

    To build bacteria-friendly electrodes, one of the first things the students did was culture Shewanella. They learned how to pour a growth medium into petri dishes where the reddish, normally lake-living bacteria could multiply. The microbes, Furst notes, are a little stinky, like cabbage. “But we think they’re really cool,” she says.

    With the right engineering, Shewanella can produce electric current when they detect toxins in water or soil. They could be used for bioremediation of wastewater. Low-cost versions could be useful for areas with limited or no access to reliable electricity and clean water.

    Next-generation chemists

    The Malden Catholic-MIT program resulted from a fluke encounter between Furst and a Malden Catholic parent.

    Mary-Margaret O’Donnell-Zablocki, then a medicinal chemist at a Kendall Square biotech startup, met Furst through a mutual friend. She asked Furst if she’d consider hosting high school chemistry students in her lab for the summer.

    Furst was intrigued. She traces her own passion for science to a program she’d happened upon between her junior and senior years in high school in St. Louis. The daughter of a software engineer and a businesswoman, Furst was casting around for potential career interests when she came across a summer program that enlisted scientists in academia and private research to introduce high school students and teachers to aspects of the scientific enterprise.

    “That’s when I realized that research is not like a lab class where there’s an expected outcome,” Furst recalls. “It’s so much cooler than that.”

    Using startup funding from an MIT Energy Initiative seed grant, Furst developed a curriculum with Malden Catholic chemistry teacher Seamus McGuire, and students were invited to apply. In addition to Tong, Ang, and Nguyen, participants included Chengxiang Lou, 18, from China; Christian Ogata, 14, of Wakefield; Kenneth Ramirez, 17, of Everett; Isaac Toscano, 17, of Medford; and MaryKatherine Zablocki, 15, of Revere and Wakefield. O’Donnell-Zablocki was surprised — and pleased — when her daughter applied to the program and was accepted.

    Furst notes that women are still underrepresented in chemical engineering. She was particularly excited to mentor young women through the program.

    A conductive ink

    The students were charged with identifying materials that had high conductivity and low resistance, were a bit soluble, and — with the help of a compatible “glue” — were able to stick to a substrate.

    Furst showed the Malden Catholic crew Shrinky Dinks — a common polymer popularized in the 1970s as a craft material that, when heated in a toaster oven, shrinks to a third of its size and becomes thicker and more rigid. Electrodes based on Shrinky Dinks would cost pennies, making it an ideal, inexpensive material for microbial fuel cells that could monitor, for instance, soil health in low- and middle-income countries.

    “Right now, monitoring soil health is problematic,” Furst says. “You have to collect a sample and bring it back to the lab to analyze in expensive equipment. But if we have these little devices that cost a couple of bucks each, we can monitor soil health remotely.”

    After a crash course in conductive carbon-based inks and solvent glues, the students went off to Malden Catholic to figure out what materials they wanted to try.

    Tong rattled them off: carbon nanotubes, carbon nanofibers, graphite powder, activated carbon. Potential solvents to help glue the carbon to the Shrinky Dinks included nail polish, corn syrup, and embossing ink, to name a few. They tested and retested. When they hit a dead end, they revised their hypotheses.

    They tried using a 3D printed stencil to daub the ink-glue mixture onto the Shrinky Dinks. They hand-painted them. They tried printing stickers. They worked with little squeegees. They tried scooping and dragging the material. Some of their electro-materials either flaked off or wouldn’t stick in the heating process.

    “Embossing ink never dried after baking the Shrinky Dink,” Ogata recalls. “In fact, it’s probably still liquid! And corn syrup had a tendency to boil. Seeing activated carbon ignite or corn syrup boiling in the convection oven was quite the spectacle.”

    “After the electrode was out of the oven and cooled down, we would check the conductivity,” says Tong, who plans to pursue a career in science. “If we saw there was a high conductivity, we got excited and thought those materials worked.”
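
    For readers curious how a resistance reading becomes a conductivity value, the sketch below applies the standard relation sigma = L / (R * A) to a rectangular trace of ink. The trace dimensions and resistance are invented for illustration.

    ```python
    # Convert a measured resistance into conductivity for a rectangular trace:
    # sigma = length / (resistance * cross-sectional area). Numbers are invented.
    def conductivity_s_per_m(resistance_ohm, length_m, width_m, thickness_m):
        cross_section_m2 = width_m * thickness_m
        return length_m / (resistance_ohm * cross_section_m2)

    # A 2 cm long, 5 mm wide, 100-micron-thick carbon-ink trace reading 150 ohms:
    print(conductivity_s_per_m(150, 0.02, 0.005, 100e-6))  # ~267 S/m
    ```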

    The moment of truth came in Furst’s MIT lab, where the students had access to more sophisticated testing equipment. Would their electrodes conduct electricity?

    Many of them didn’t. Tong says, “At first, we were sad, but then Dr. Furst told us that this is what science is, testing repeatedly and sometimes not getting the results we wanted.” Lou agrees. “If we just copy the data left by other scholars and don’t collect and figure it out by ourselves, then it is difficult to be a qualified researcher,” he says.

    Some of the students plan to continue the project one afternoon a week at MIT and as an independent study at Malden Catholic. The long-term goal is to create a field-based soil sensor that employs a bacterium like Shewanella.

    By chance, the students’ very first electrode — made of graphite powder ink and nail polish glue — generated the most current. One of the team’s biggest surprises was how much better black nail polish worked than clear nail polish. It turns out black nail polish contains iron-based pigment — a conductor. The unexpected win took some of the sting out of the failures.

    “They learned a very hard lesson: Your results might be awesome, and things are exciting, but then nothing else might work. And that’s totally fine,” Furst says.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • 3 Questions: Antje Danielson on energy education and its role in climate action

    The MIT Energy Initiative (MITEI) leads energy education at MIT, developing and implementing a robust educational toolkit for MIT graduate and undergraduate students, online learners around the world, and high school students who want to contribute to the energy transition. As MITEI’s director of education, Antje Danielson manages a team devoted to training the next generation of energy innovators, entrepreneurs, and policymakers. Here, she discusses new initiatives in MITEI’s education program and how they are preparing students to take an active role in climate action.

    Q: What role are MITEI’s education efforts playing in climate action initiatives at MIT, and what more could we be doing?

    A: This is a big question. The carbon emissions from energy are such an important factor in climate mitigation; therefore, what we do in energy education is practically synonymous with climate education. This is well illustrated in a 2018 Nature Energy paper by Fuso Nerini and colleagues, which outlines that affordable, clean energy is related to many of the United Nations Sustainable Development Goals (SDGs) — not just SDG 7, which specifically calls for “affordable, reliable, sustainable, and modern energy for all” by 2030. There are 17 SDGs containing 169 targets, of which 113 (roughly two-thirds) require actions to be taken concerning energy systems.

    Now, can we equate education with action? The answer is yes, but only if it is done correctly. From the behavioral change literature, we know that knowledge alone is not enough to change behavior. So, one important part of our education program is practice and experience through research, internships, stakeholder engagement, and other avenues. At a minimum, education must give the learner the knowledge, skills, and courage to be ready to jump into action, but ideally, practice is a part of the offering. We also want our learners to go out into the world and share what they know and do. If done right, education is an energy transition accelerator.

    At MITEI, our learners are not just MIT students. We are creating online offerings based on residential MIT courses to train global professionals, policymakers, and students in research methods and tools to support and accelerate the energy transition. These are free and open to learners worldwide. We have five courses available now, with more to come.

    Our latest program is a collaboration with MIT’s Center for Energy and Environmental Policy Research (CEEPR): Climate Action through Education, or CATE. This is a teach-the-teacher program for high school curriculum and is a part of the MIT Climate Action Plan. The aim is to develop interdisciplinary, solutions-focused climate change curricula for U.S. high school teachers with components in history/social science, English/language arts, math, science, and computer science.

    We are rapidly expanding our programming. In the online space, for our global learners, we are bundling courses for professional development certificates; for our undergraduates, we are redesigning the energy studies minor to reflect what we have learned over the past 12 years; and for our graduate students, we are adding a new program that allows them to garner industry experience related to the energy transition. Meanwhile, CATE is creating a support network for the teachers who adopt the curriculum. We are also working on creating an energy and climate alliance with other universities around the world.

    On the Institute level, I am a member of the Climate Education Working Group, a subgroup of the Climate Nucleus, where we discuss and will soon recommend further climate action the Institute can take. Stay tuned for that.

    Q: You mentioned that you are leading an effort to create a consortium of energy and climate education programs at universities around the world. How does this effort fit into MITEI’s educational mission?

    A: Yes, we are currently calling it the “Energy and Climate Education Alliance.” The background to this is that the problem we are facing — transitioning the entire global energy system from high carbon emissions to low, no, and negative carbon emissions — is global, huge, and urgent. In the spirit of the proverb “many hands make light work,” we believe this very complex task will be accomplished more quickly with more participants. There is, of course, more to this as well. The complexity of the problem is such that (1) MIT doesn’t have all the expertise needed to meet the educational needs of the climate and energy crisis, (2) there is a definite local and regional component to capacity building, and (3) collaborations with universities around the world will make our mission-driven work more efficient. Finally, these collaborations will be advantageous for our students, as they will be able to learn from real-world case studies that are not U.S.-based and maybe even visit other universities abroad, do internships, and engage in collaborative research projects. Also, students from those universities will be able to come here and experience MIT’s unique intellectual environment.

    Right now, we are very much in the beginning stages of creating the alliance. We have signed a collaboration agreement with the Technical University of Berlin, Germany, and are engaged in talks with other European and Southeast Asian universities. Some of the collaborations we are envisioning relate to course development, student exchange, collaborative research, and course promotion. We are very excited about this collaboration. It fits well into MIT’s ambition to take climate action outside of the university, while still staying within our educational mission.

    Q: It is clear to me from this conversation that MITEI’s education program is undertaking a number of initiatives to prepare MIT students and interested learners outside of the Institute to take an active role in climate action. But, the reality is that despite our rapidly changing climate and the immediate need to decarbonize our global economy, climate denialism and a lack of climate and energy understanding persist in the greater global population. What do you think must be done, and what can MITEI do, to increase climate and energy literacy broadly?

    A: I think the basic problem is not necessarily a lack of understanding but an abundance of competing issues that people are dealing with every day. Poverty, personal health, unemployment, inflation, pandemics, housing, wars — all are very immediate problems people have. And climate change is perceived to be in the future.

    The United States is a very bottom-up country, where corporations offer what people buy, and politicians advocate for what voters want and what money buys. Of course, this is overly simplified, but as long as we don’t come up with mechanisms to achieve a monumental shift in consumer and voter behavior, we are up against these immediate pressures. However, we are seeing some movement in this area due to rising gas and heating oil prices and the many natural disasters we are encountering now. People are starting to understand that climate change will hit their pocketbook, whether or not we have a carbon tax. The recent Florida hurricane damage, wildfires in the west, extreme summer temperatures, frequent droughts, increasing numbers of poisonous and disease-carrying insects — they all illustrate the relationship between climate change, health, and financial damage. Fewer and fewer people will be able to deny the existence of climate change because they will either be directly affected or know someone who is.

    The question is one of speed and scale. The more we can help to make the connections even more visible and understood, the faster we get to the general acceptance that this is real. Research projects like CEEPR’s Roosevelt Project, which develops action plans to help communities deal with industrial upheaval in the context of the energy transition, are contributing to this effect, as are studies related to climate change and national security. This is a fast-moving world, and our research findings need to be translated as we speak. A real problem in education is that we have the tendency to teach the tried and true. Our education programs have to become much nimbler, which means curricula have to be updated frequently, and that is expensive. And of course, the speed and magnitude of our efforts are dependent on the funding we can attract, and fundraising for education is more difficult than fundraising for research.

    However, let me pivot: You alluded to the fact that this is a global problem. The immediate pressures of poverty and hunger are a matter of survival in many parts of the world, and when it comes to surviving another day, who cares if climate change will render your fields unproductive in 20 years? Or if the weather turns your homeland into a lake, will you think about lobbying your government to reduce carbon emissions, or will you ask for help to rebuild your existence? On the flip side, politicians and government authorities in those areas have to deal with extremely complex situations, balancing local needs with global demands. We should learn from them. What we need is to listen. What do these areas of the world need most, and how can climate action be included in the calculations? The Global Commission to End Energy Poverty, a collaboration between MITEI and the Rockefeller Foundation to bring electricity to the billion people across the globe who currently live without it, is a good example of what we are already doing. Both our online education program and the Energy and Climate Education Alliance aim to go in this direction.

    The struggle and challenge to solve climate change can be pretty depressing, and there are many days when I feel despondent about the speed and progress we are making in saving the future of humanity. But, the prospect of contributing to such a large mission, even if the education team can only nudge us a tiny bit away from the business-as-usual scenario, is exciting. In particular, working on an issue like this at MIT is amazing. So much is happening here, and there don’t seem to be intellectual limits; in fact, thinking big is encouraged. It is very refreshing when one has encountered the old “you can’t do this” too often in the past. I want our students to take this attitude with them and go out there and think big.

  • in

    Improving health outcomes by targeting climate and air pollution simultaneously

    Climate policies are typically designed to reduce greenhouse gas emissions that result from human activities and drive climate change. The largest source of these emissions is the combustion of fossil fuels, which increases atmospheric concentrations of ozone, fine particulate matter (PM2.5), and other air pollutants that pose public health risks. While climate policies may result in lower concentrations of health-damaging air pollutants as a “co-benefit” of reducing greenhouse gas emissions-intensive activities, they are most effective at improving health outcomes when deployed in tandem with geographically targeted air-quality regulations.

    Yet the computer models typically used to assess the likely air quality/health impacts of proposed climate/air-quality policy combinations come with drawbacks for decision-makers. Atmospheric chemistry/climate models can produce high-resolution results, but they are expensive and time-consuming to run. Integrated assessment models are far cheaper and faster to run, but they produce results only at global and regional scales, rendering them insufficiently precise for accurate assessments of air quality/health impacts at the subnational level.

    To overcome these drawbacks, a team of researchers at MIT and the University of California at Davis has developed a climate/air-quality policy assessment tool that is both computationally efficient and location-specific. Described in a new study in the journal ACS Environmental Au, the tool could enable users to obtain rapid estimates of combined policy impacts on air quality/health at more than 1,500 locations around the globe — estimates precise enough to reveal the equity implications of proposed policy combinations within a particular region.
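
    The story does not spell out the tool’s internal formulation, so here is a purely illustrative sketch of the general idea it describes: fast, location-specific estimates built from precomputed sensitivities rather than a fresh chemistry/climate simulation. The sensitivity values, baseline data, response coefficient, and function below are hypothetical placeholders, not figures or code from the study.

    ```python
    import math

    # Illustrative sketch only: a reduced-order, location-specific estimate of
    # PM2.5-related health impacts from an emissions change. The per-city
    # sensitivities, baseline data, and response coefficient are hypothetical
    # placeholders, not the formulation or numbers used in the study.

    # Precomputed sensitivity: change in PM2.5 (ug/m3) per kiloton of precursor
    # emissions, derived offline (e.g., from a detailed chemistry-model run).
    SENSITIVITY = {"city_a": 0.012, "city_b": 0.004}  # hypothetical values

    BASELINE = {  # hypothetical population and annual baseline mortality rate
        "city_a": {"pop": 8_000_000, "mort_rate": 0.007},
        "city_b": {"pop": 2_500_000, "mort_rate": 0.006},
    }

    BETA = 0.006  # hypothetical log-linear concentration-response coefficient (per ug/m3)

    def avoided_deaths(city: str, emissions_change_kt: float) -> float:
        """Estimate avoided premature deaths for an emissions change (negative = cut)."""
        d_pm = SENSITIVITY[city] * emissions_change_kt   # change in PM2.5 concentration
        reduction = max(-d_pm, 0.0)                      # only credit improvements
        attributable_fraction = 1.0 - math.exp(-BETA * reduction)
        base = BASELINE[city]
        return base["pop"] * base["mort_rate"] * attributable_fraction

    # Example: a 50-kiloton cut in precursor emissions at each location.
    for name in SENSITIVITY:
        print(name, round(avoided_deaths(name, emissions_change_kt=-50.0), 1))
    ```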

    “The modeling approach described in this study may ultimately allow decision-makers to assess the efficacy of multiple combinations of climate and air-quality policies in reducing the health impacts of air pollution, and to design more effective policies,” says Sebastian Eastham, the study’s lead author and a principal research scientist at the MIT Joint Program on the Science and Policy of Global Change. “It may also be used to determine if a given policy combination would result in equitable health outcomes across a geographical area of interest.”

    To demonstrate the efficiency and accuracy of their policy assessment tool, the researchers showed that outcomes projected by the tool within seconds were consistent with region-specific results from detailed chemistry/climate models that took days or even months to run. While continuing to refine and develop their approaches, they are now working to embed the new tool into integrated assessment models for direct use by policymakers.

    “As decision-makers implement climate policies in the context of other sustainability challenges like air pollution, efficient modeling tools are important for assessment — and new computational techniques allow us to build faster and more accurate tools to provide credible, relevant information to a broader range of users,” says Noelle Selin, a professor at MIT’s Institute for Data, Systems, and Society and Department of Earth, Atmospheric and Planetary Sciences, and supervising author of the study. “We are looking forward to further developing such approaches, and to working with stakeholders to ensure that they provide timely, targeted and useful assessments.”

    The study was funded, in part, by the U.S. Environmental Protection Agency and the Biogen Foundation.

  • in

    Using combustion to make better batteries

    For more than a century, much of the world has run on the combustion of fossil fuels. Now, to avert the threat of climate change, the energy system is changing. Notably, solar and wind systems are replacing fossil fuel combustion for generating electricity and heat, and batteries are replacing the internal combustion engine for powering vehicles. As the energy transition progresses, researchers worldwide are tackling the many challenges that arise.

    Sili Deng has spent her career thinking about combustion. Now an assistant professor in the MIT Department of Mechanical Engineering and the Class of 1954 Career Development Professor, Deng leads a group that, among other things, develops theoretical models to help understand and control combustion systems to make them more efficient and to control the formation of emissions, including particles of soot.

    “So we thought, given our background in combustion, what’s the best way we can contribute to the energy transition?” says Deng. In considering the possibilities, she notes that combustion refers only to the process — not to what’s burning. “While we generally think of fossil fuels when we think of combustion, the term ‘combustion’ encompasses many high-temperature chemical reactions that involve oxygen and typically emit light and large amounts of heat,” she says.

    Given that definition, she saw another role for the expertise she and her team have developed: They could explore the use of combustion to make materials for the energy transition. Under carefully controlled conditions, flames can be used to produce not polluting soot, but rather valuable materials, including some that are critical in the manufacture of lithium-ion batteries.

    Improving the lithium-ion battery by lowering costs

    The demand for lithium-ion batteries is projected to skyrocket in the coming decades. Batteries will be needed to power the growing fleet of electric cars and to store the electricity produced by solar and wind systems so it can be delivered later when those sources aren’t generating. Some experts project that the global demand for lithium-ion batteries may increase tenfold or more in the next decade.

    Given such projections, many researchers are looking for ways to improve lithium-ion battery technology. Deng and her group aren’t materials scientists, so they don’t focus on making new and better battery chemistries. Instead, their goal is to find a way to lower the high cost of making all of those batteries. And much of the cost of making a lithium-ion battery can be traced to the manufacture of materials used to make one of its two electrodes — the cathode.

    The MIT researchers began their search for cost savings by considering the methods now used to produce cathode materials. The raw materials are typically salts of several metals, including lithium, which provides ions — the electrically charged particles that move when the battery is charged and discharged. The processing technology aims to produce tiny particles, each one made up of a mixture of those ingredients, with the atoms arranged in the specific crystalline structure that will deliver the best performance in the finished battery.

    For the past several decades, companies have manufactured those cathode materials using a two-stage process called coprecipitation. In the first stage, the metal salts — excluding the lithium — are dissolved in water and thoroughly mixed inside a chemical reactor. Chemicals are added to change the acidity (the pH) of the mixture, and particles made up of the combined salts precipitate out of the solution. The particles are then removed, dried, ground up, and put through a sieve.

    A change in pH won’t cause lithium to precipitate, so it is added in the second stage. Solid lithium is ground together with the particles from the first stage until lithium atoms permeate the particles. The resulting material is then heated, or “annealed,” to ensure complete mixing and to achieve the targeted crystalline structure. Finally, the particles go through a “deagglomerator” that separates any particles that have joined together, and the cathode material emerges.

    Coprecipitation produces the needed materials, but the process is time-consuming. The first stage takes about 10 hours, and the second stage requires about 13 hours of annealing at a relatively low temperature (750 degrees Celsius). In addition, to prevent cracking during annealing, the temperature is gradually “ramped” up and down, which takes another 11 hours. The process is thus not only time-consuming but also energy-intensive and costly.
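
    To see where that time goes, a back-of-envelope tally of the figures quoted above is enough; this is a rough illustration using the article’s approximate numbers, not a detailed process model.

    ```python
    # Rough tally of coprecipitation process time, using only the approximate
    # durations quoted in this article.
    stage1_mixing_h = 10   # dissolving and coprecipitating the non-lithium salts
    stage2_anneal_h = 13   # annealing with lithium at about 750 degrees Celsius
    ramp_h = 11            # gradual heat-up and cool-down to avoid cracking

    total_h = stage1_mixing_h + stage2_anneal_h + ramp_h
    print(f"Approximate coprecipitation time: {total_h} hours")  # about 34 hours
    ```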

    For the past two years, Deng and her group have been exploring better ways to make the cathode material. “Combustion is very effective at oxidizing things, and the materials for lithium-ion batteries are generally mixtures of metal oxides,” says Deng. That being the case, they thought this could be an opportunity to use a combustion-based process called flame synthesis.

    A new way of making a high-performance cathode material

    The first task for Deng and her team — mechanical engineering postdoc Jianan Zhang, Valerie L. Muldoon ’20, SM ’22, and current graduate students Maanasa Bhat and Chuwei Zhang — was to choose a target material for their study. They decided to focus on a mixture of metal oxides consisting of nickel, cobalt, and manganese plus lithium. Known as “NCM811,” this material is widely used and has been shown to produce cathodes for batteries that deliver high performance; in an electric vehicle, that means a long driving range, rapid discharge and recharge, and a long lifetime. To better define their target, the researchers examined the literature to determine the composition and crystalline structure of NCM811 that has been shown to deliver the best performance as a cathode material.

    They then considered three possible approaches to improving on the coprecipitation process for synthesizing NCM811: They could simplify the system (to cut capital costs), speed up the process, or cut the energy required.

    “Our first thought was, what if we can mix together all of the substances — including the lithium — at the beginning?” says Deng. “Then we would not need to have the two stages” — a clear simplification over coprecipitation.

    Introducing FASP

    One process widely used in the chemical and other industries to fabricate nanoparticles is a type of flame synthesis called flame-assisted spray pyrolysis, or FASP. Deng’s concept for using FASP to make their targeted cathode powders proceeds as follows.

    The precursor materials — the metal salts (including the lithium) — are mixed with water, and the resulting solution is sprayed as fine droplets by an atomizer into a combustion chamber. There, a flame of burning methane heats up the mixture. The water evaporates, leaving the precursor materials to decompose, oxidize, and solidify to form the powder product. A cyclone then separates the particles by size, and a baghouse filter removes those that aren’t useful. The collected particles would then be annealed and deagglomerated.

    To investigate and optimize this concept, the researchers developed a lab-scale FASP setup consisting of a homemade ultrasonic nebulizer, a preheating section, a burner, a filter, and a vacuum pump that withdraws the powders that form. Using that system, they could control the details of the heating process: The preheating section replicates conditions as the material first enters the combustion chamber, and the burner replicates conditions as it passes the flame. That setup allowed the team to explore operating conditions that would give the best results.

    Their experiments showed marked benefits over coprecipitation. The nebulizer breaks up the liquid solution into fine droplets, ensuring atomic-level mixing. The water simply evaporates, so there’s no need to change the pH or to separate the solids from a liquid. As Deng notes, “You just let the gas go, and you’re left with the particles, which is what you want.” With lithium included at the outset, there’s no need for mixing solids with solids, which is neither efficient nor effective.

    They could even control the structure, or “morphology,” of the particles that formed. In one series of experiments, they tried exposing the incoming spray to different rates of temperature change over time. They found that the temperature “history” has a direct impact on morphology. With no preheating, the particles burst apart; with rapid preheating, the particles were hollow. The best outcomes came when they used preheating temperatures between 175 and 225 degrees Celsius. Experiments with coin-cell batteries (laboratory devices used for testing battery materials) confirmed that by adjusting the preheating temperature, they could achieve a particle morphology that would optimize the performance of their materials.

    Best of all, the particles formed in seconds. Even allowing for the time needed for conventional annealing and deagglomeration, the new setup could synthesize the finished cathode material in half the total time needed for coprecipitation. Moreover, the first stage of the coprecipitation system is replaced by a far simpler setup — a savings in capital costs.

    “We were very happy,” says Deng. “But then we thought, if we’ve changed the precursor side so the lithium is mixed well with the salts, do we need to have the same process for the second stage? Maybe not!”

    Improving the second stage

    The key time- and energy-consuming step in the second stage is the annealing. In today’s coprecipitation process, the strategy is to anneal at a low temperature for a long time, giving the operator time to manipulate and control the process. But running a furnace for some 20 hours — even at a low temperature — consumes a lot of energy.

    Based on their studies thus far, Deng thought, “What if we slightly increase the temperature but reduce the annealing time by orders of magnitude? Then we could cut energy consumption, and we might still achieve the desired crystal structure.”

    However, experiments at slightly elevated temperatures and short treatment times didn’t bring the results they had hoped for. In transmission electron microscope (TEM) images, the particles that formed had clouds of light-looking nanoscale particles attached to their surfaces. When the researchers performed the same experiments without adding the lithium, those nanoparticles didn’t appear. Based on that and other tests, they concluded that the nanoparticles were pure lithium. So, it seemed like long-duration annealing would be needed to ensure that the lithium made its way inside the particles.

    But they then came up with a different solution to the lithium-distribution problem. They added a small amount — just 1 percent by weight — of an inexpensive compound called urea to their mixture. In TEM images of the particles formed, the “undesirable nanoparticles were largely gone,” says Deng.

    Experiments in laboratory coin cells showed that the addition of urea significantly altered the response to changes in the annealing temperature. When the urea was absent, raising the annealing temperature led to a dramatic decline in the performance of the cathode material that formed. But with the urea present, the performance of the material that formed was unaffected by changes in the annealing temperature.

    That result meant that — as long as the urea was added with the other precursors — they could push up the temperature, shrink the annealing time, and omit the gradual ramp-up and cool-down process. Further imaging studies confirmed that their approach yields the desired crystal structure and the homogeneous elemental distribution of the cobalt, nickel, manganese, and lithium within the particles. Moreover, in tests of various performance measures, their materials did as well as materials produced by coprecipitation or by other methods using long-time heat treatment. Indeed, the performance was comparable to that of commercial batteries with cathodes made of NCM811.

    So now the long and expensive second stage required in standard coprecipitation could be replaced by just 20 minutes of annealing at about 870 C plus 20 minutes of cooling down at room temperature.
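
    Taking the article’s figures at face value, the sketch below compares the stated second-stage durations. It is a time comparison only; actual energy savings also depend on furnace power at 750 C versus 870 C, which is not reported here.

    ```python
    # Compare the second-stage heat-treatment times quoted in the article.
    old_second_stage_h = 13 + 11         # ~13 h at 750 C plus ~11 h of ramping
    new_second_stage_h = (20 + 20) / 60  # 20 min at ~870 C plus 20 min of cooling

    print(f"Old second stage: ~{old_second_stage_h} hours")
    print(f"New second stage: ~{new_second_stage_h:.2f} hours")
    print(f"Reduction factor: ~{old_second_stage_h / new_second_stage_h:.0f}x")  # ~36x
    ```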

    Theory, continuing work, and planning for scale-up

    While experimental evidence supports their approach, Deng and her group are now working to understand why it works. “Getting the underlying physics right will help us design the process to control the morphology and to scale up the process,” says Deng. And they have a hypothesis for why the lithium nanoparticles in their flame synthesis process end up on the surfaces of the larger particles — and why the presence of urea solves that problem.

    According to their theory, without the added urea, the metal and lithium atoms are initially well-mixed within the droplet. But as heating progresses, the lithium diffuses to the surface and ends up as nanoparticles attached to the solidified particle. As a result, a long annealing process is needed to move the lithium in among the other atoms.

    When the urea is present, it starts out mixed with the lithium and other atoms inside the droplet. As temperatures rise, the urea decomposes, forming bubbles. As heating progresses, the bubbles burst, increasing circulation, which keeps the lithium from diffusing to the surface. The lithium ends up uniformly distributed, so the final heat treatment can be very short.

    The researchers are now designing a system to suspend a droplet of their mixture so they can observe the circulation inside it, with and without the urea present. They’re also developing experiments to examine how droplets vaporize, employing tools and methods they have used in the past to study how hydrocarbons vaporize inside internal combustion engines.

    They also have ideas about how to streamline and scale up their process. In coprecipitation, the first stage takes 10 to 20 hours, so one batch at a time moves on to the second stage to be annealed. In contrast, the novel FASP process generates particles in 20 minutes or less — a rate that’s consistent with continuous processing. In their design for an “integrated synthesis system,” the particles coming out of the baghouse are deposited on a belt that carries them for 10 or 20 minutes through a furnace. A deagglomerator then breaks any attached particles apart, and the cathode powder emerges, ready to be fabricated into a high-performance cathode for a lithium-ion battery. The cathode powders for high-performance lithium-ion batteries would thus be manufactured at unprecedented speed, low cost, and low energy use.
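
    To put that throughput claim in perspective, here is a minimal tally of the end-to-end residence time for the envisioned continuous line, again using the rough figures given above; the deagglomeration time is not stated in the article, so a few minutes is assumed for illustration.

    ```python
    # Rough end-to-end residence time for the envisioned continuous FASP line.
    flame_synthesis_min = 1     # particles form in seconds to minutes
    belt_furnace_min = 20       # 10 to 20 minutes through the furnace; use 20
    deagglomeration_min = 5     # assumption: a short mechanical step

    continuous_total_min = flame_synthesis_min + belt_furnace_min + deagglomeration_min
    batch_coprecipitation_h = 34  # approximate total tallied earlier in the article

    print(f"Continuous line: ~{continuous_total_min} minutes end to end")
    print(f"Batch coprecipitation: ~{batch_coprecipitation_h} hours per batch")
    ```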

    Deng notes that every component in their integrated system is already used in industry, generally at a large scale and high flow-through rate. “That’s why we see great potential for our technology to be commercialized and scaled up,” she says. “Where our expertise comes into play is in designing the combustion chamber to control the temperature and heating rate so as to produce particles with the desired morphology.” And while a detailed economic analysis has yet to be performed, it seems clear that their technique will be faster, the equipment simpler, and the energy use lower than other methods of manufacturing cathode materials for lithium-ion batteries — potentially a major contribution to the ongoing energy transition.

    This research was supported by the MIT Department of Mechanical Engineering.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.