More stories

  • Pricing carbon, valuing people

    In November, inflation hit a 39-year high in the United States. The consumer price index was up 6.8 percent from the previous year due to major increases in the cost of rent, food, motor vehicles, gasoline, and other common household expenses. While inflation impacts the entire country, its effects are not felt equally. At greatest risk are low- and middle-income Americans who may lack sufficient financial reserves to absorb such economic shocks.

    Meanwhile, scientists, economists, and activists across the political spectrum continue to advocate for another potential systemic economic change that many fear will also put lower-income Americans at risk: the imposition of a national carbon price, fee, or tax. Framed by proponents as the most efficient and cost-effective way to reduce greenhouse gas emissions and meet climate targets, a carbon penalty would incentivize producers and consumers to shift expenditures away from carbon-intensive products and services (e.g., coal or natural gas-generated electricity) and toward low-carbon alternatives (e.g., 100 percent renewable electricity). But if not implemented in a way that takes differences in household income into account, this policy strategy, like inflation, could place an unequal and untenable economic burden on low- and middle-income Americans.         

    To garner support from policymakers, carbon-penalty proponents have advocated for policies that recycle revenues from carbon penalties to all or lower-income taxpayers in the form of payroll tax reductions or lump-sum payments. And yet some of these proposed policies run the risk of reducing the overall efficiency of the U.S. economy, which would lower the nation’s GDP and impede its economic growth.

    This raises the question: Is there a sweet spot at which a national carbon-penalty revenue-recycling policy can avoid both inflicting economic harm on lower-income Americans at the household level and degrading economic efficiency at the national level?

    In search of that sweet spot, researchers at the MIT Joint Program on the Science and Policy of Global Change assess the economic impacts of four different carbon-penalty revenue-recycling policies: direct rebates from revenues to households via lump-sum transfers; indirect refunding of revenues to households via a proportional reduction in payroll taxes; direct rebates from revenues to households, but only for low- and middle-income groups, with remaining revenues recycled via a proportional reduction in payroll taxes; and direct, higher rebates for poor households, with remaining revenues recycled via a proportional reduction in payroll taxes.

    To perform the assessment, the Joint Program researchers integrate a U.S. economic model (MIT U.S. Regional Energy Policy) with a dataset (Bureau of Labor Statistics’ Consumer Expenditure Survey) providing consumption patterns and other socioeconomic characteristics for 15,000 U.S. households. Using the combined model, they evaluate the distributional impacts and potential trade-offs between economic equity and efficiency of all four carbon-penalty revenue-recycling policies.
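
    The flavor of that evaluation can be illustrated with a toy calculation (a minimal sketch with made-up numbers, not the MIT USREP model or the survey data): charge a carbon fee on each household's consumption-related emissions, return the revenue either as equal lump-sum rebates or in proportion to earnings, and compare the net effect as a share of income.

    ```python
    # Toy comparison of two revenue-recycling options using hypothetical households.
    CARBON_PRICE = 50.0  # assumed carbon fee, $ per ton of CO2

    # hypothetical households: (label, annual income in $, tons of CO2 embodied in consumption)
    households = [("low", 25_000, 12.0), ("middle", 60_000, 20.0), ("high", 150_000, 35.0)]

    total_revenue = sum(CARBON_PRICE * tons for _, _, tons in households)
    total_income = sum(income for _, income, _ in households)

    for label, income, tons in households:
        fee = CARBON_PRICE * tons                                      # carbon cost borne by the household
        lump_sum_rebate = total_revenue / len(households)              # equal per-household rebate
        payroll_style_rebate = total_revenue * income / total_income   # rebate proportional to earnings
        net_lump = (lump_sum_rebate - fee) / income * 100
        net_payroll = (payroll_style_rebate - fee) / income * 100
        print(f"{label:<7} lump-sum: {net_lump:+.2f}% of income   payroll-style: {net_payroll:+.2f}% of income")
    ```

    Even in this stripped-down form, the lump-sum rebate is progressive (the lowest-income household comes out ahead) while the earnings-proportional option is mildly regressive, mirroring the trade-off the study quantifies with real household data.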

    The researchers find that household rebates have progressive impacts on consumers’ financial well-being, with the greatest benefits going to the lowest-income households, while policies centered on improving the efficiency of the economy (e.g., payroll tax reductions) have slightly regressive household-level financial impacts. In a nutshell, the trade-off is between rebates that provide more equity and less economic efficiency versus tax cuts that deliver the opposite result. The latter two policy options, which combine rebates to lower-income households with payroll tax reductions, result in an optimal blend of sufficiently progressive financial results at the household level and economic efficiency at the national level. Results of the study are published in the journal Energy Economics.

    “We have determined that only a portion of carbon-tax revenues is needed to compensate low-income households and thus reduce inequality, while the rest can be used to improve the economy by reducing payroll or other distortionary taxes,” says Xaquin García-Muros, lead author of the study, a postdoc at the MIT Joint Program who is affiliated with the Basque Centre for Climate Change in Spain. “Therefore, we can eliminate potential trade-offs between efficiency and equity, and promote a just and efficient energy transition.”

    “If climate policies increase the gap between rich and poor households or reduce the affordability of energy services, then these policies might be rejected by the public and, as a result, attempts to decarbonize the economy will be less efficient,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “Our findings provide guidance to decision-makers to advance more well-designed policies that deliver economic benefits to the nation as a whole.” 

    The study’s novel integration of a national economic model with household microdata creates a new and powerful platform to further investigate key differences among households that can help inform policies aimed at a just transition to a low-carbon economy.

  • Seeing the plasma edge of fusion experiments in new ways with artificial intelligence

    To make fusion energy a viable resource for the world’s energy grid, researchers need to understand the turbulent motion of plasmas: a mix of ions and electrons swirling around in reactor vessels. The plasma particles, following magnetic field lines in toroidal chambers known as tokamaks, must be confined long enough for fusion devices to produce significant gains in net energy, a challenge when the hot edge of the plasma (over 1 million degrees Celsius) is just centimeters away from the much cooler solid walls of the vessel.

    Abhilash Mathews, a PhD candidate in the Department of Nuclear Science and Engineering working at MIT’s Plasma Science and Fusion Center (PSFC), believes this plasma edge to be a particularly rich source of unanswered questions. A turbulent boundary, it is central to understanding plasma confinement, fueling, and the potentially damaging heat fluxes that can strike material surfaces — factors that impact fusion reactor designs.

    To better understand edge conditions, scientists focus on modeling turbulence at this boundary using numerical simulations that will help predict the plasma’s behavior. However, “first principles” simulations of this region are among the most challenging and time-consuming computations in fusion research. Progress could be accelerated if researchers could develop “reduced” computer models that run much faster, but with quantified levels of accuracy.

    For decades, tokamak physicists have regularly used a reduced “two-fluid theory” rather than higher-fidelity models to simulate boundary plasmas in experiments, despite uncertainty about its accuracy. In a pair of recent publications, Mathews begins directly testing the accuracy of this reduced plasma turbulence model in a new way: he combines physics with machine learning.

    “A successful theory is supposed to predict what you’re going to observe,” explains Mathews, “for example, the temperature, the density, the electric potential, the flows. And it’s the relationships between these variables that fundamentally define a turbulence theory. What our work essentially examines is the dynamic relationship between two of these variables: the turbulent electric field and the electron pressure.”

    In the first paper, published in Physical Review E, Mathews employs a novel deep-learning technique that uses artificial neural networks to build representations of the equations governing the reduced fluid theory. With this framework, he demonstrates a way to compute the turbulent electric field from electron pressure fluctuations in the plasma, in a manner consistent with the reduced fluid theory. Models commonly used to relate the electric field to pressure break down when applied to turbulent plasmas, but this one is robust even to noisy pressure measurements.
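
    The general shape of such an approach can be sketched as follows. This is a generic physics-informed neural-network illustration with synthetic data and a placeholder constraint, not the equations or code from the paper: two small networks represent the electron pressure and the electric potential, and the training loss combines a fit to the pressure data with the residual of an assumed relation coupling the two fields.

    ```python
    # Generic physics-informed neural network (PINN) sketch with synthetic data and a
    # placeholder constraint; not the equations or code from the paper.
    import torch
    import torch.nn as nn

    def mlp():
        return nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                             nn.Linear(64, 64), nn.Tanh(),
                             nn.Linear(64, 1))

    p_net, phi_net = mlp(), mlp()   # networks for electron pressure p(x, t) and potential phi(x, t)

    def grad(u, x):
        # derivative of each scalar network output with respect to its (x, t) inputs
        return torch.autograd.grad(u, x, grad_outputs=torch.ones_like(u), create_graph=True)[0]

    # synthetic "observations" of pressure fluctuations at random (x, t) points
    xt = torch.rand(256, 2, requires_grad=True)
    p_obs = (torch.sin(3.0 * xt[:, :1]) * torch.cos(2.0 * xt[:, 1:])).detach()

    opt = torch.optim.Adam(list(p_net.parameters()) + list(phi_net.parameters()), lr=1e-3)
    for _ in range(2000):
        opt.zero_grad()
        p, phi = p_net(xt), phi_net(xt)
        dp, dphi = grad(p, xt), grad(phi, xt)
        loss_data = ((p - p_obs) ** 2).mean()        # fit the observed pressure fluctuations
        # residual of an assumed toy relation d(phi)/dx + d(p)/dt = 0, standing in for the
        # reduced two-fluid equations that couple potential and pressure
        residual = dphi[:, :1] + dp[:, 1:]
        loss_physics = (residual ** 2).mean()
        (loss_data + loss_physics).backward()
        opt.step()
    # after training, phi_net provides an estimate of the potential (and hence the field)
    # consistent with both the pressure data and the assumed relation
    ```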

    In the second paper, published in Physics of Plasmas, Mathews further investigates this connection, contrasting it against higher-fidelity turbulence simulations. Such a precise comparison across turbulence models had previously been difficult, if not impossible, to carry out; this one is the first of its kind. Mathews finds that in plasmas relevant to existing fusion devices, the reduced fluid model’s predicted turbulent fields are consistent with high-fidelity calculations. In this sense, the reduced turbulence theory works. But to fully validate it, “one should check every connection between every variable,” says Mathews.

    Mathews’ advisor, Principal Research Scientist Jerry Hughes, notes that plasma turbulence is notoriously difficult to simulate, more so than the familiar turbulence seen in air and water. “This work shows that, under the right set of conditions, physics-informed machine-learning techniques can paint a very full picture of the rapidly fluctuating edge plasma, beginning from a limited set of observations. I’m excited to see how we can apply this to new experiments, in which we essentially never observe every quantity we want.”

    These physics-informed deep-learning methods pave new ways in testing old theories and expanding what can be observed from new experiments. David Hatch, a research scientist at the Institute for Fusion Studies at the University of Texas at Austin, believes these applications are the start of a promising new technique.

    “Abhi’s work is a major achievement with the potential for broad application,” he says. “For example, given limited diagnostic measurements of a specific plasma quantity, physics-informed machine learning could infer additional plasma quantities in a nearby domain, thereby augmenting the information provided by a given diagnostic. The technique also opens new strategies for model validation.”

    Mathews sees exciting research ahead.

    “Translating these techniques into fusion experiments for real edge plasmas is one goal we have in sight, and work is currently underway,” he says. “But this is just the beginning.”

    Mathews was supported in this work by the Manson Benedict Fellowship, Natural Sciences and Engineering Research Council of Canada, and U.S. Department of Energy Office of Science under the Fusion Energy Sciences program.

  • A tool to speed development of new solar cells

    In the ongoing race to develop ever-better materials and configurations for solar cells, there are many variables that can be adjusted to try to improve performance, including material type, thickness, and geometric arrangement. Developing new solar cells has generally been a tedious process of making small changes to one of these parameters at a time. While computational simulators have made it possible to evaluate such changes without having to actually build each new variation for testing, the process remains slow.

    Now, researchers at MIT and Google Brain have developed a system that makes it possible not just to evaluate one proposed design at a time, but also to indicate which changes will provide the desired improvements. This could greatly increase the rate of discovery of new, improved configurations.

    The new system, called a differentiable solar cell simulator, is described in a paper published today in the journal Computer Physics Communications, written by MIT junior Sean Mann, research scientist Giuseppe Romano of MIT’s Institute for Soldier Nanotechnologies, and four others at MIT and at Google Brain.

    Traditional solar cell simulators, Romano explains, take the details of a solar cell configuration and produce as their output a predicted efficiency — that is, what percentage of the energy of incoming sunlight actually gets converted to an electric current. But this new simulator both predicts the efficiency and shows how much that output is affected by any one of the input parameters. “It tells you directly what happens to the efficiency if we make this layer a little bit thicker, or what happens to the efficiency if we for example change the property of the material,” he says.

    In short, he says, “we didn’t discover a new device, but we developed a tool that will enable others to discover more quickly other higher performance devices.” Using this system, “we are decreasing the number of times that we need to run a simulator to give quicker access to a wider space of optimized structures.” In addition, he says, “our tool can identify a unique set of material parameters that has been hidden so far because it’s very complex to run those simulations.”

    While traditional approaches use essentially a random search of possible variations, Mann says, with his tool “we can follow a trajectory of change because the simulator tells you what direction you want to be changing your device. That makes the process much faster because instead of exploring the entire space of opportunities, you can just follow a single path” that leads directly to improved performance.
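
    The difference between random search and gradient-guided search is easy to see in a minimal sketch. The “efficiency” function below is an invented, differentiable stand-in for a solar cell model, not the simulator described in the paper; the point is that automatic differentiation supplies the direction of improvement at every step.

    ```python
    # Gradient-guided design search on an invented, differentiable "efficiency" surrogate;
    # not the simulator from the paper.
    import jax
    import jax.numpy as jnp

    def efficiency(params):
        # placeholder smooth function that peaks at a particular thickness and doping level
        thickness, doping = params
        return 0.25 * jnp.exp(-((thickness - 2.0) ** 2) / 1.5 - ((doping - 0.5) ** 2) / 0.2)

    grad_efficiency = jax.grad(efficiency)   # sensitivity of efficiency to each design parameter

    params = jnp.array([1.0, 0.2])           # initial guess: thickness, doping (arbitrary units)
    step_size = 0.5
    for _ in range(200):
        params = params + step_size * grad_efficiency(params)   # gradient ascent on efficiency

    print("optimized parameters:", params)
    print("predicted efficiency:", float(efficiency(params)))
    ```

    With a differentiable simulator, the same loop applies with the real device model in place of the toy function, and it can be combined with more sophisticated optimizers or machine-learning methods, as the researchers suggest.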

    Since advanced solar cells often are composed of multiple layers interlaced with conductive materials to carry electric charge from one to the other, this computational tool reveals how changing the relative thicknesses of these different layers will affect the device’s output. “This is very important because the thickness is critical. There is a strong interplay between light propagation and the thickness of each layer and the absorption of each layer,” Mann explains.

    Other variables that can be evaluated include the amount of doping (the introduction of atoms of another element) that each layer receives, or the dielectric constant of insulating layers, or the bandgap, a measure of the energy levels of photons of light that can be captured by different materials used in the layers.

    This simulator is now available as an open-source tool that can be used immediately to help guide research in this field, Romano says. “It is ready, and can be taken up by industry experts.” To make use of it, researchers would couple this device’s computations with an optimization algorithm, or even a machine learning system, to rapidly assess a wide variety of possible changes and home in quickly on the most promising alternatives.

    At this point, the simulator is based on just a one-dimensional version of the solar cell, so the next step will be to expand its capabilities to include two- and three-dimensional configurations. But even this 1D version “can cover the majority of cells that are currently under production,” Romano says. Certain variations, such as so-called tandem cells using different materials, cannot yet be simulated directly by this tool, but “there are ways to approximate a tandem solar cell by simulating each of the individual cells,” Mann says.

    The simulator is “end-to-end,” Romano says, meaning it computes the sensitivity of the efficiency, also taking into account light absorption. He adds: “An appealing future direction is composing our simulator with advanced existing differentiable light-propagation simulators, to achieve enhanced accuracy.”

    Moving forward, Romano says, because this is an open-source code, “that means that once it’s up there, the community can contribute to it. And that’s why we are really excited.” Although this research group is “just a handful of people,” he says, now anyone working in the field can make their own enhancements and improvements to the code and introduce new capabilities.

    “Differentiable physics is going to provide new capabilities for the simulations of engineered systems,” says Venkat Viswanathan, an associate professor of mechanical engineering at Carnegie Mellon University, who was not associated with this work. “The differentiable solar cell simulator is an incredible example of differentiable physics, that can now provide new capabilities to optimize solar cell device performance,” he says, calling the study “an exciting step forward.”

    In addition to Mann and Romano, the team included Eric Fadel and Steven Johnson at MIT, and Samuel Schoenholz and Ekin Cubuk at Google Brain. The work was supported in part by Eni S.p.A. and the MIT Energy Initiative, and the MIT Quest for Intelligence.

  • “Vigilant inclusion” central to combating climate change

    “To turbocharge work on saving the planet, we need effective, innovative, localized solutions, and diverse perspectives and experience at the table,” said U.S. Secretary of Energy Jennifer M. Granholm, the keynote speaker at the 10th annual U.S. Clean Energy Education and Empowerment (C3E) Women in Clean Energy Symposium and Awards.

    This event, convened virtually over Nov. 3-4 and engaging more than 1,000 participants, was devoted to the themes of justice and equity in clean energy. In panels and presentations, speakers hammered home the idea that the benefits of a zero-carbon future must be shared equitably, especially among groups historically neglected or marginalized. To ensure this outcome, the speakers concluded, these same groups must help drive the clean-energy transition, and women, who stand to bear enormous burdens as the world warms, should be central to the effort. This means “practicing vigilant inclusion,” said Granholm.

    The C3E symposium, which is dedicated to celebrating the leadership of women in the field of clean energy and inspiring the next generation of women leaders, featured professionals from government, industry, research, and other sectors. Some of them spoke from experience, and from the heart, on issues of environmental justice.

    “I grew up in a trailer park in northern Utah, where it was so cold at night a sheet of ice formed on the inside of the door,” said Melanie Santiago-Mosier, the deputy director of the Clean Energy Group and Clean Energy States Alliance. Santiago-Mosier, who won a 2018 C3E award for advocacy, has devoted her career “to bringing the benefits of clean energy to families like mine, and to preventing mistakes of the past that result in a deeply unjust energy system.”

    Tracey A. LeBeau, a member of the Cheyenne River Sioux Tribe who grew up in South Dakota, described the flooding of her community’s land to create a hydroelectric dam, forcing the dislocation of many people. Today, as administrator and CEO of the Western Area Power Administration, LeBeau manages distribution of hydropower across 15 states, and has built an organization in which the needs of disadvantaged communities are top of mind. “I stay true to my indigenous point of view,” she said.

    The C3E Symposium was launched in 2012 to increase gender diversity in the energy sector and provide awards to outstanding women in the field. It is part of the C3E Initiative, a collaboration between the U.S. Department of Energy (DOE), the MIT Energy Initiative (MITEI), Texas A&M Energy Institute, and Stanford Precourt Institute for Energy, which hosted the event this year.

    Connecting global rich and poor

    As the COP26 climate summit unfolded in Glasgow, highlighting the sharp divide between rich and poor nations, C3E panelists pursued a related agenda. One panel focused on paths for collaboration between industrialized nations and nations with developing economies to build a sustainable, carbon-neutral global economy.

    Radhika Thakkar, the vice president of corporate affairs at solar home energy provider Greenlight Planet and a 2019 C3E international award winner, believes that small partnerships with women at the community level can lead to large impacts. When her company introduced solar lamp home systems to Rwanda, “Women abandoned selling bananas to sell our lamps, making enough money to purchase land, cows, even putting their families through school,” she said.

    Sudeshna Banerjee, the practice manager for Europe and Central Asia in the energy and extractives global practice at the World Bank, talked about the impacts of a bank-supported electrification program in Nairobi slums where gang warfare kept girls confined at home. “Once the lights came on, girls felt more empowered to go around in dark hours,” she said. “This is what development is: creating opportunities for young women to do something with their lives, giving them educational opportunities and creating instances for them to generate income.”

    In another session, panelists focused on ways to enable disadvantaged communities in the United States to take full advantage of clean energy opportunities.

    Amy Glasmeier, a professor of economic geography and regional planning at MIT, believes remote, rural communities require broadband and other information channels in order to chart their own clean-energy journeys. “We must provide access to more than energy, so people can educate themselves and imagine how the energy transition can work for them.”

    Santiago-Mosier described the absence of rooftop solar in underprivileged neighborhoods of the nation’s cities and towns as the result of a kind of clean-energy redlining. “Clean energy and the solar industry are falling into 400-year-old traps of systemic racism,” she said. “This is no accident: senior executives in solar are white and male.” The answer is “making sure that providers and companies are elevating people of color and women in industries,” otherwise “solar is leaving potential growth on the table.”

    Data for equitable outcomes

    Jessica Granderson, the director of building technology at the White House Council on Environmental Quality and the 2015 C3E research award winner, is measuring and remediating greenhouse gas emissions from the nation’s hundred-million-plus homes and commercial structures. In a panel exploring data-driven solutions for advancing equitable energy outcomes, Granderson described using new building performance standards that improve the energy efficiency and material performance of construction in a way that does not burden building owners with modest resources. “We are emphasizing engagements at the community level, bringing in a local workforce, and addressing the needs of local programs, in a way that hasn’t necessarily been present in the past,” she said.

    To facilitate her studies on how people in these communities use and experience public transportation systems, Tierra Bills, an assistant professor in civil and environmental engineering at Wayne State University, is developing a community-based approach for collecting data. “Not everyone who is eager to contribute to a study can participate in an online survey and upload data, so we need to find ways of overcoming these barriers,” she said.

    Corporate efforts to advance social and environmental justice turn on community engagement as well. Paula Gold-Williams, a C3E ambassador and the president and CEO of CPS Energy, with 1 million customers in San Antonio, Texas, described a weatherization campaign to better insulate homes that involved “looking for as many places to go as possible in parts of town where people wouldn’t normally raise their hands.”

    Carla Peterman, the executive vice president for corporate affairs and chief of sustainability at Pacific Gas & Electric, and the 2015 C3E government award winner, was deliberating about raising rates some years ago. “My ‘aha’ moment was in a community workshop where I realized that a $5 increase is too much,” she said. “It may be the cost of a latte, but these folks aren’t buying lattes, and it’s a choice between electricity and food or shelter.”

    A call to arms

    Humanity cannot win the all-out race to achieve a zero-carbon future without a vast new cohort of participants, symposium speakers agreed. A number of the 2021 C3E award winners who have committed their careers to clean energy invoked the moral imperative of the moment and issued a call to arms.

    “Seven-hundred-and-fifty million people around the world live without reliable energy, and 70 percent of schools lack power,” said Rhonda Jordan-Antoine PhD ’12, a senior energy specialist at the World Bank who received this year’s international award. By laboring to bring smart grids, battery technologies, and regional integration to even the most remote communities, she said, we open up opportunities for education and jobs. “Energy access is not just about energy, but development,” said Jordan-Antoine, “and I hope you are encouraged to advance clean energy efforts around the globe.”

    Faith Corneille, who won the government award, works in the U.S. Department of State’s Bureau of Energy Resources. “We need innovators and scientists to design solutions; energy efficiency experts and engineers to build; lawyers to review, and bankers to invest, and insurance agents to protect against risk; and we need problem-solvers to thread these together,” she said. “Whatever your path, there’s a role for you: energy and climate intersect with whatever you do.”

    “We know the cause of climate change and how to reverse it, but to make that happen we need passionate and brilliant minds, all pulling in the same direction,” said Megan Nutting, the executive vice president of government and regulatory affairs at Sunnova Energy Corporation, and winner of the business award. “The clean-energy transition needs women,” she said. “If you are not working in clean energy, then why not?”

  • An energy-storage solution that flows like soft-serve ice cream

    Batteries made from an electrically conductive mixture the consistency of molasses could help solve a critical piece of the decarbonization puzzle. An interdisciplinary team from MIT has found that an electrochemical technology called a semisolid flow battery can be a cost-competitive form of energy storage and backup for variable renewable energy (VRE) sources such as wind and solar. The group’s research is described in a paper published in Joule.

    “The transition to clean energy requires energy storage systems of different durations for when the sun isn’t shining and the wind isn’t blowing,” says Emre Gençer, a research scientist with the MIT Energy Initiative (MITEI) and a member of the team. “Our work demonstrates that a semisolid flow battery could be a lifesaving as well as economical option when these VRE sources can’t generate power for a day or longer — in the case of natural disasters, for instance.”

    The rechargeable zinc-manganese dioxide (Zn-MnO2) battery the researchers created beat out other long-duration energy storage contenders. “We performed a comprehensive, bottom-up analysis to understand how the battery’s composition affects performance and cost, looking at all the trade-offs,” says Thaneer Malai Narayanan SM ’18, PhD ’21. “We showed that our system can be cheaper than others, and can be scaled up.”

    Narayanan, who conducted this work at MIT as part of his doctorate in mechanical engineering, is the lead author of the paper. Additional authors include Gençer, Yunguang Zhu, a postdoc in the MIT Electrochemical Energy Lab; Gareth McKinley, the School of Engineering Professor of Teaching Innovation and professor of mechanical engineering at MIT; and Yang Shao-Horn, the JR East Professor of Engineering, a professor of mechanical engineering and of materials science and engineering, and a member of the Research Laboratory of Electronics (RLE), who directs the MIT Electrochemical Energy Lab.

    Going with the flow

    In 2016, Narayanan began his graduate studies, joining the Electrochemical Energy Lab, a hotbed of research into solutions for mitigating climate change, centered on innovative battery chemistry and on decarbonizing fuels and chemicals. One exciting opportunity for the lab: developing low- and no-carbon backup energy systems suitable for grid-scale needs when VRE generation flags.

    While the lab cast a wide net, investigating energy conversion and storage using solid oxide fuel cells, lithium-ion batteries, and metal-air batteries, among others, Narayanan took a particular interest in flow batteries. In these systems, two different chemical (electrolyte) solutions with either negative or positive ions are pumped from separate tanks, meeting across a membrane (called the stack). Here, the ion streams react, converting electrical energy to chemical energy — in effect, charging the battery. When there is demand for this stored energy, the solution gets pumped back to the stack to convert chemical energy into electrical energy again.

    The duration of time that flow batteries can discharge, releasing the stored electricity, is determined by the volume of positively and negatively charged electrolyte solutions streaming through the stack. In theory, as long as these solutions keep flowing, reacting, and converting the chemical energy to electrical energy, the battery systems can provide electricity.
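
    A rough, hypothetical calculation makes that scaling concrete: the stack sets the power, the tanks set the energy, and their ratio sets how long the battery can discharge.

    ```python
    # Order-of-magnitude sketch with hypothetical numbers: tanks set the energy,
    # the stack sets the power, and their ratio sets the discharge duration.
    tank_volume_liters = 20_000           # hypothetical electrolyte volume
    energy_density_wh_per_liter = 25      # hypothetical usable energy density
    stack_power_kw = 20                   # hypothetical discharge power rating

    stored_energy_kwh = tank_volume_liters * energy_density_wh_per_liter / 1000
    duration_hours = stored_energy_kwh / stack_power_kw
    print(f"stored energy: {stored_energy_kwh:.0f} kWh, discharge duration: {duration_hours:.0f} hours")
    ```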

    “For backup lasting more than a day, the architecture of flow batteries suggests they can be a cheap option,” says Narayanan. “You recharge the solution in the tanks from sun and wind power sources.” This renders the entire system carbon free.

    But while the promise of flow battery technologies has beckoned for at least a decade, the uneven performance and expense of materials required for these battery systems has slowed their implementation. So, Narayanan set out on an ambitious journey: to design and build a flow battery that could back up VRE systems for a day or more, storing and discharging energy with the same or greater efficiency than backup rivals; and to determine, through rigorous cost analysis, whether such a system could prove economically viable as a long-duration energy option.

    Multidisciplinary collaborators

    To attack this multipronged challenge, Narayanan’s project brought together, in his words, “three giants, scientists all well-known in their fields”: Shao-Horn, who specializes in chemical physics, electrochemical science, and the design of materials; Gençer, who creates detailed economic models of emergent energy systems at MITEI; and McKinley, an expert in rheology, the physics of flow. These three also served as his thesis advisors.

    “I was excited to work in such an interdisciplinary team, which offered a unique opportunity to create a novel battery architecture by designing charge transfer and ion transport within flowable semi-solid electrodes, and to guide battery engineering using techno-economics of such flowable batteries,” says Shao-Horn.

    While other flow battery systems in contention, such as the vanadium redox flow battery, offer the storage capacity and energy density to back up megawatt and larger power systems, they depend on expensive chemical ingredients that make them bad bets for long duration purposes. Narayanan was on the hunt for less-pricey chemical components that also feature rich energy potential.

    Through a series of bench experiments, the researchers came up with a novel electrode (electrical conductor) for the battery system: a mixture containing dispersed manganese dioxide (MnO2) particles, shot through with an electrically conductive additive, carbon black. This compound reacts with a conductive zinc solution or zinc plate at the stack, enabling efficient electrochemical energy conversion. The fluid properties of this battery are far removed from the watery solutions used by other flow batteries.

    “It’s a semisolid — a slurry,” says Narayanan. “Like thick, black paint, or perhaps a soft-serve ice cream,” suggests McKinley. The carbon black adds the pigment and the electric punch. To arrive at the optimal electrochemical mix, the researchers tweaked their formula many times.

    “These systems have to be able to flow under reasonable pressures, but also have a weak yield stress so that the active MnO2 particles don’t sink to the bottom of the flow tanks when the system isn’t being used, as well as not separate into a battery/oily clear fluid phase and a dense paste of carbon particles and MnO2,” says McKinley.

    This series of experiments informed the technoeconomic analysis. By “connecting the dots between composition, performance, and cost,” says Narayanan, he and Gençer were able to make system-level cost and efficiency calculations for the Zn-MnO2 battery.

    “Assessing the cost and performance of early technologies is very difficult, and this was an example of how to develop a standard method to help researchers at MIT and elsewhere,” says Gençer. “One message here is that when you include the cost analysis at the development stage of your experimental work, you get an important early understanding of your project’s cost implications.”

    In their final round of studies, Gençer and Narayanan compared the Zn-MnO2 battery to a set of equivalent electrochemical battery and hydrogen backup systems, looking at the capital costs of running them at durations of eight, 24, and 72 hours. Their findings surprised them: For battery discharges longer than a day, their semisolid flow battery beat out lithium-ion batteries and vanadium redox flow batteries. This was true even when factoring in the heavy expense of pumping the MnO2 slurry from tank to stack. “I was skeptical, and not expecting this battery would be competitive, but once I did the cost calculation, it was plausible,” says Gençer.
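
    The structure of that comparison can be sketched with a back-of-the-envelope model in which total capital cost is a power-related term plus an energy-related term multiplied by duration. The figures below are invented for illustration and are not the paper's results; they simply show why a chemistry with cheap energy capacity pulls ahead as the required discharge duration grows.

    ```python
    # Back-of-the-envelope capital-cost comparison with invented numbers (not the paper's
    # figures): cost per kW of delivered power = power-related cost + energy-related cost * hours.
    systems = {
        # hypothetical (power-related cost $/kW, energy-related cost $/kWh)
        "lithium-ion":             (100, 150),
        "vanadium redox flow":     (800, 120),
        "semisolid Zn-MnO2 flow":  (1200, 50),
    }

    for hours in (8, 24, 72):
        print(f"\nduration: {hours} hours")
        for name, (power_cost, energy_cost) in systems.items():
            total = power_cost + energy_cost * hours
            print(f"  {name:<24} ${total:>6,.0f} per kW of discharge power")
    ```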

    But carbon-free battery backup is a very Goldilocks-like business: Different situations require different-duration solutions, whether an anticipated overnight loss of solar power, or a longer-term, climate-based disruption in the grid. “Lithium-ion is great for backup of eight hours and under, but the materials are too expensive for longer periods,” says Gençer. “Hydrogen is super expensive for very short durations, and good for very long durations, and we will need all of them.” This means it makes sense to continue working on the Zn-MnO2 system to see where it might fit in.

    “The next step is to take our battery system and build it up,” says Narayanan, who is working now as a battery engineer. “Our research also points the way to other chemistries that could be developed under the semi-solid flow battery platform, so we could be seeing this kind of technology used for energy storage in our lifetimes.”

    This research was supported by Eni S.p.A. through MITEI. Thaneer Malai Narayanan received an Eni-sponsored MIT Energy Fellowship during his work on the project.

  • Coupling power and hydrogen sector pathways to benefit decarbonization

    Governments and companies worldwide are increasing their investments in hydrogen research and development, indicating a growing recognition that hydrogen could play a significant role in meeting global energy system decarbonization goals. Since hydrogen is light, energy-dense, storable, and produces no direct carbon dioxide emissions at the point of use, this versatile energy carrier has the potential to be harnessed in a variety of ways in a future clean energy system.

    Often considered in the context of grid-scale energy storage, hydrogen has garnered renewed interest, in part due to expectations that our future electric grid will be dominated by variable renewable energy (VRE) sources such as wind and solar, as well as decreasing costs for water electrolyzers — both of which could make clean, “green” hydrogen more cost-competitive with fossil-fuel-based production. But hydrogen’s versatility as a clean energy fuel also makes it an attractive option to meet energy demand and to open pathways for decarbonization in hard-to-abate sectors where direct electrification is difficult, such as transportation, buildings, and industry.

    “We’ve seen a lot of progress and analysis around pathways to decarbonize electricity, but we may not be able to electrify all end uses. This means that just decarbonizing electricity supply is not sufficient, and we must develop other decarbonization strategies as well,” says Dharik Mallapragada, a research scientist at the MIT Energy Initiative (MITEI). “Hydrogen is an interesting energy carrier to explore, but understanding the role for hydrogen requires us to study the interactions between the electricity system and a future hydrogen supply chain.”

    In a recent paper, researchers from MIT and Shell present a framework to systematically study the role and impact of hydrogen-based technology pathways in a future low-carbon, integrated energy system, taking into account interactions with the electric grid and the spatio-temporal variations in energy demand and supply. The developed framework co-optimizes infrastructure investment and operation across the electricity and hydrogen supply chain under various emissions price scenarios. When applied to a Northeast U.S. case study, the researchers find this approach results in substantial benefits — in terms of costs and emissions reduction — as it takes advantage of hydrogen’s potential to provide the electricity system with a large flexible load when produced through electrolysis, while also enabling decarbonization of difficult-to-electrify, end-use sectors.

    The research team includes Mallapragada; Guannan He, a postdoc at MITEI; Abhishek Bose, a graduate research assistant at MITEI; Clara Heuberger-Austin, a researcher at Shell; and Emre Gençer, a research scientist at MITEI. Their findings are published in the journal Energy & Environmental Science.

    Cross-sector modeling

    “We need a cross-sector framework to analyze each energy carrier’s economics and role across multiple systems if we are to really understand the cost/benefits of direct electrification or other decarbonization strategies,” says He.

    To do that analysis, the team developed the Decision Optimization of Low-carbon Power-HYdrogen Network (DOLPHYN) model, which allows the user to study the role of hydrogen in low-carbon energy systems, the effects of coupling the power and hydrogen sectors, and the trade-offs between various technology options across both supply chains — spanning production, transport, storage, and end use, and their impact on decarbonization goals.

    “We are seeing great interest from industry and government, because they are all asking questions about where to invest their money and how to prioritize their decarbonization strategies,” says Gençer. Heuberger-Austin adds, “Being able to assess the system-level interactions between electricity and the emerging hydrogen economy is of paramount importance to drive technology development and support strategic value chain decisions. The DOLPHYN model can be instrumental in tackling those kinds of questions.”

    For a predefined set of electricity and hydrogen demand scenarios, the model determines the least-cost technology mix across the power and hydrogen sectors while adhering to a variety of operation and policy constraints. The model can incorporate a range of technology options — from VRE generation to carbon capture and storage (CCS) used with both power and hydrogen generation to trucks and pipelines used for hydrogen transport. With its flexible structure, the model can be readily adapted to represent emerging technology options and evaluate their long-term value to the energy system.
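
    A heavily simplified, single-period version of that least-cost problem can be written as a small linear program. The sketch below is a toy in the spirit of the sector-coupling idea, not the DOLPHYN model itself, and every cost, efficiency, and emissions number in it is hypothetical.

    ```python
    # Toy single-period least-cost co-optimization of power and hydrogen supply under a
    # carbon price; a sketch in the spirit of sector coupling, not the DOLPHYN model.
    import numpy as np
    from scipy.optimize import linprog

    ELEC_DEMAND = 100.0   # MWh of electricity demand
    H2_DEMAND = 1000.0    # kg of hydrogen demand
    ELZ_ELEC = 0.05       # MWh of electricity per kg of electrolytic hydrogen

    def solve(carbon_price):
        # decision variables: [wind MWh, gas MWh, electrolytic H2 kg, reformer H2 kg]
        cost = np.array([
            40.0,                          # wind, $/MWh
            60.0 + 0.40 * carbon_price,    # gas, $/MWh including ~0.4 tCO2/MWh penalty
            0.5,                           # electrolyzer non-energy cost, $/kg
            1.5 + 0.009 * carbon_price,    # reformer, $/kg including ~9 kgCO2/kg penalty
        ])
        # electricity balance: wind + gas >= ELEC_DEMAND + ELZ_ELEC * electrolytic H2
        # hydrogen balance:   electrolytic H2 + reformer H2 >= H2_DEMAND
        A_ub = np.array([[-1.0, -1.0, ELZ_ELEC, 0.0],
                         [ 0.0,  0.0, -1.0,    -1.0]])
        b_ub = np.array([-ELEC_DEMAND, -H2_DEMAND])
        return linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4).x

    for price in (0, 100, 300):
        wind, gas, h2_electrolysis, h2_reformer = solve(price)
        print(f"carbon price ${price:>3}/tCO2: wind={wind:5.0f} MWh, gas={gas:4.0f} MWh, "
              f"electrolytic H2={h2_electrolysis:5.0f} kg, reformer H2={h2_reformer:5.0f} kg")
    ```

    In this toy setting, raising the carbon price flips hydrogen supply from natural gas reforming to wind-powered electrolysis, the kind of cross-sector shift the full model resolves with far more spatial and temporal detail.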

    As an important addition, the model takes into account process-level carbon emissions by allowing the user to add a cost penalty on emissions in both sectors. “If you have a limited emissions budget, we are able to explore the question of where to prioritize the limited emissions to get the best bang for your buck in terms of decarbonization,” says Mallapragada.

    Insights from a case study

    To test their model, the researchers investigated the Northeast U.S. energy system under a variety of demand, technology, and carbon price scenarios. While their major conclusions can be generalized for other regions, the Northeast proved to be a particularly interesting case study. This region has current legislation and regulatory support for renewable generation, as well as increasing emission-reduction targets, a number of which are quite stringent. It also has a high demand for energy for heating — a sector that is difficult to electrify and could particularly benefit from hydrogen and from coupling the power and hydrogen systems.

    The researchers find that when combining the power and hydrogen sectors through electrolysis or hydrogen-based power generation, there is more operational flexibility to support VRE integration in the power sector and a reduced need for alternative grid-balancing supply-side resources such as battery storage or dispatchable gas generation, which in turn reduces the overall system cost. This increased VRE penetration also leads to a reduction in emissions compared to scenarios without sector-coupling. “The flexibility that electricity-based hydrogen production provides in terms of balancing the grid is as important as the hydrogen it is going to produce for decarbonizing other end uses,” says Mallapragada. They found this type of grid interaction to be more favorable than conventional hydrogen-based electricity storage, which can incur additional capital costs and efficiency losses when converting hydrogen back to power. This suggests that the role of hydrogen in the grid could be more beneficial as a source of flexible demand than as storage.

    The researchers’ multi-sector modeling approach also highlighted that CCS is more cost-effective when utilized in the hydrogen supply chain, versus the power sector. They note that counter to this observation, by the end of the decade, six times more CCS projects will be deployed in the power sector than for use in hydrogen production — a fact that emphasizes the need for more cross-sectoral modeling when planning future energy systems.

    In this study, the researchers tested the robustness of their conclusions against a number of factors, such as how the inclusion of non-combustion greenhouse gas emissions (including methane emissions) from natural gas used in power and hydrogen production impacts the model outcomes. They find that including the upstream emissions footprint of natural gas within the model boundary does not impact the value of sector coupling in regards to VRE integration and cost savings for decarbonization; in fact, the value actually grows because of the increased emphasis on electricity-based hydrogen production over natural gas-based pathways.

    “You cannot achieve climate targets unless you take a holistic approach,” says Gençer. “This is a systems problem. There are sectors that you cannot decarbonize with electrification, and there are other sectors that you cannot decarbonize without carbon capture, and if you think about everything together, there is a synergistic solution that significantly minimizes the infrastructure costs.”

    This research was supported, in part, by Shell Global Solutions International B.V. in Amsterdam, the Netherlands, and MITEI’s Low-Carbon Energy Centers for Electric Power Systems and Carbon Capture, Utilization, and Storage.

  • New “risk triage” platform pinpoints compounding threats to US infrastructure

    Over a 36-hour period in August, Hurricane Henri delivered record rainfall in New York City, where an aging storm-sewer system was not built to handle the deluge, resulting in street flooding. Meanwhile, an ongoing drought in California continued to overburden aquifers and extend statewide water restrictions. As climate change amplifies the frequency and intensity of extreme events in the United States and around the world, and the populations and economies they threaten grow and change, there is a critical need to make infrastructure more resilient. But how can this be done in a timely, cost-effective way?

    An emerging discipline called multi-sector dynamics (MSD) offers a promising solution. MSD homes in on compounding risks and potential tipping points across interconnected natural and human systems. Tipping points occur when these systems can no longer sustain multiple, co-evolving stresses, such as extreme events, population growth, land degradation, drinkable water shortages, air pollution, aging infrastructure, and increased human demands. MSD researchers use observations and computer models to identify key precursory indicators of such tipping points, providing decision-makers with critical information that can be applied to mitigate risks and boost resilience in infrastructure and managed resources.

    At MIT, the Joint Program on the Science and Policy of Global Change has since 2018 been developing MSD expertise and modeling tools and using them to explore compounding risks and potential tipping points in selected regions of the United States. In a two-hour webinar on Sept. 15, MIT Joint Program researchers presented an overview of the program’s MSD research tool set and its applications.  

    MSD and the risk triage platform

    “Multi-sector dynamics explores interactions and interdependencies among human and natural systems, and how these systems may adapt, interact, and co-evolve in response to short-term shocks and long-term influences and stresses,” says MIT Joint Program Deputy Director C. Adam Schlosser, noting that such analysis can reveal and quantify potential risks that would likely evade detection in siloed investigations. “These systems can experience cascading effects or failures after crossing tipping points. The real question is not just where these tipping points are in each system, but how they manifest and interact across all systems.”

    To address that question, the program’s MSD researchers have developed the MIT Socio-Environmental Triage (MST) platform, now publicly available for the first time. Focused on the continental United States, the first version of the platform analyzes present-day risks related to water, land, climate, the economy, energy, demographics, health, and infrastructure, and where these compound to create risk hot spots. It’s essentially a screening-level visualization tool that allows users to examine risks, identify hot spots when combining risks, and make decisions about how to deploy more in-depth analysis to solve complex problems at regional and local levels. For example, MST can identify hot spots for combined flood and poverty risks in the lower Mississippi River basin, and thereby alert decision-makers as to where more concentrated flood-control resources are needed.
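
    The screening logic itself is simple to sketch. The example below uses invented county-level numbers and an equal-weight score, not the MST platform's data or methodology; it only illustrates how normalized risk layers can be combined and hot spots flagged.

    ```python
    # Screening-level sketch of combining risk layers into hot spots, using invented
    # county-level numbers and equal weights; not the MST platform's data or method.
    import pandas as pd

    counties = pd.DataFrame({
        "county":       ["A", "B", "C", "D"],
        "flood_index":  [0.80, 0.20, 0.70, 0.10],   # hypothetical 0-1 flood exposure
        "poverty_rate": [0.25, 0.10, 0.30, 0.05],   # hypothetical share of residents in poverty
    }).set_index("county")

    # min-max normalize each indicator so the layers are comparable on a 0-1 scale
    normalized = (counties - counties.min()) / (counties.max() - counties.min())

    # simple compound-risk score: the mean of the normalized indicators
    counties["compound_risk"] = normalized.mean(axis=1)

    # flag hot spots where the combined score falls in the top quartile
    threshold = counties["compound_risk"].quantile(0.75)
    counties["hot_spot"] = counties["compound_risk"] >= threshold
    print(counties.sort_values("compound_risk", ascending=False))
    ```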

    Successive versions of the platform will incorporate projections based on the MIT Joint Program’s Integrated Global System Modeling (IGSM) framework of how different systems and stressors may co-evolve into the future and thereby change the risk landscape. This enhanced capability could help uncover cost-effective pathways for mitigating and adapting to a wide range of environmental and economic risks.  

    MSD applications

    Five webinar presentations explored how MIT Joint Program researchers are applying the program’s risk triage platform and other MSD modeling tools to identify potential tipping points and risks in five key domains: water quality, land use, economics and energy, health, and infrastructure. 

    Joint Program Principal Research Scientist Xiang Gao described her efforts to apply a high-resolution U.S. water-quality model to calculate a location-specific, water-quality index over more than 2,000 river basins in the country. By accounting for interactions among climate, agriculture, and socioeconomic systems, various water-quality measures can be obtained ranging from nitrate and phosphate levels to phytoplankton concentrations. This modeling approach advances a unique capability to identify potential water-quality risk hot spots for freshwater resources.

    Joint Program Research Scientist Angelo Gurgel discussed his MSD-based analysis of how climate change, population growth, changing diets, crop-yield improvements and other forces that drive land-use change at the global level may ultimately impact how land is used in the United States. Drawing upon national observational data and the IGSM framework, the analysis shows that while current U.S. land-use trends are projected to persist or intensify between now and 2050, there is no evidence of any concerning tipping points arising throughout this period.  

    MIT Joint Program Research Scientist Jennifer Morris presented several examples of how the risk triage platform can be used to combine existing U.S. datasets and the IGSM framework to assess energy and economic risks at the regional level. For example, by aggregating separate data streams on fossil-fuel employment and poverty, one can target selected counties for clean energy job training programs as the nation moves toward a low-carbon future. 

    “Our modeling and risk triage frameworks can provide pictures of current and projected future economic and energy landscapes,” says Morris. “They can also highlight interactions among different human, built, and natural systems, including compounding risks that occur in the same location.”  

    MIT Joint Program research affiliate Sebastian Eastham, a research scientist at the MIT Laboratory for Aviation and the Environment, described an MSD approach to the study of air pollution and public health. Linking the IGSM with an atmospheric chemistry model, Eastham ultimately aims to better understand where the greatest health risks are in the United States and how they may compound throughout this century under different policy scenarios. Using the risk triage tool to combine current risk metrics for air quality and poverty in a selected county based on current population and air-quality data, he showed how one can rapidly identify cardiovascular and other air-pollution-induced disease risk hot spots.

    Finally, MIT Joint Program research affiliate Alyssa McCluskey, a lecturer at the University of Colorado at Boulder, showed how the risk triage tool can be used to pinpoint potential risks to roadways, waterways, and power distribution lines from flooding, extreme temperatures, population growth, and other stressors. In addition, McCluskey described how transportation and energy infrastructure development and expansion can threaten critical wildlife habitats.

    Enabling comprehensive, location-specific analyses of risks and hot spots within and among multiple domains, the Joint Program’s MSD modeling tools can be used to inform policymaking and investment from the municipal to the global level.

    “MSD takes on the challenge of linking human, natural, and infrastructure systems in order to inform risk analysis and decision-making,” says Schlosser. “Through our risk triage platform and other MSD models, we plan to assess important interactions and tipping points, and to provide foresight that supports action toward a sustainable, resilient, and prosperous world.”

    This research is funded by the U.S. Department of Energy’s Office of Science as an ongoing project.

  • 3 Questions: Daniel Cohn on the benefits of high-efficiency, flexible-fuel engines for heavy-duty trucking

    The California Air Resources Board has adopted a regulation that requires truck and engine manufacturers to reduce the nitrogen oxide (NOx) emissions from new heavy-duty trucks by 90 percent starting in 2027. NOx from heavy-duty trucks is one of the main sources of air pollution, creating smog and threatening respiratory health. This regulation requires the largest air pollution cuts in California in more than a decade. How can manufacturers achieve this aggressive goal efficiently and affordably?

    Daniel Cohn, a research scientist at the MIT Energy Initiative, and Leslie Bromberg, a principal research scientist at the MIT Plasma Science and Fusion Center, have been working on a high-efficiency, gasoline-ethanol engine that is cleaner and more cost-effective than existing diesel engine technologies. Here, Cohn explains the flexible-fuel engine approach and why it may be the most realistic solution — in the near term — to help California meet its stringent vehicle emission reduction goals. The research was sponsored by the Arthur Samberg MIT Energy Innovation fund.

    Q. How does your high-efficiency, flexible-fuel gasoline engine technology work?

    A. Our goal is to provide an affordable solution for heavy-duty vehicle (HDV) engines to emit levels of nitrogen oxide (NOx) low enough to meet California’s NOx regulations, while also quick-starting gasoline-consumption reductions in a substantial fraction of the HDV fleet.

    Presently, large trucks and other HDVs generally use diesel engines. The main reason is their high efficiency, which reduces fuel cost, a key factor for commercial trucks (especially long-haul trucks) given the large number of miles driven. However, the NOx emissions from these diesel-powered vehicles are around 10 times greater than those from spark-ignition engines powered by gasoline or ethanol.

    Spark-ignition gasoline engines are primarily used in cars and light trucks (light-duty vehicles), which employ a three-way catalyst exhaust treatment system (generally referred to as a catalytic converter) that reduces vehicle NOx emissions by at least 98 percent and at a modest cost. The use of this highly effective exhaust treatment system is enabled by the capability of spark-ignition engines to be operated at a stoichiometric air/fuel ratio (where the amount of air matches what is needed for complete combustion of the fuel).
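
    For reference, the stoichiometric ratio mentioned above can be worked out from standard combustion chemistry; the short sketch below does so for an octane surrogate of gasoline and for ethanol, using textbook values rather than data from this research.

    ```python
    # Worked example of the stoichiometric air/fuel ratio from standard combustion
    # chemistry (textbook values, not data from this research).
    O2_MASS_FRACTION_IN_AIR = 0.232

    fuels = {
        # fuel: (molar mass in g/mol, moles of O2 per mole of fuel for complete combustion)
        "octane (gasoline surrogate, C8H18)": (114.2, 12.5),
        "ethanol (C2H5OH)":                   (46.1, 3.0),
    }

    for name, (molar_mass, o2_moles) in fuels.items():
        o2_mass_per_kg_fuel = o2_moles * 32.0 / molar_mass            # kg of O2 per kg of fuel
        air_fuel_ratio = o2_mass_per_kg_fuel / O2_MASS_FRACTION_IN_AIR
        print(f"{name:<36} stoichiometric air/fuel ratio ~ {air_fuel_ratio:.1f}")
    ```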

    Diesel engines do not operate with stoichiometric air/fuel ratios, making it much more difficult to reduce NOx emissions. Their state-of-the-art exhaust treatment system is much more complex and expensive than catalytic converters, and even with it, vehicles produce NOx emissions around 10 times higher than spark-ignition engine vehicles. Consequently, it is very challenging for diesel engines to further reduce their NOx emissions to meet the new California regulations.

    Our approach uses spark-ignition engines that can be powered by gasoline, ethanol, or mixtures of gasoline and ethanol as a substitute for diesel engines in HDVs. Gasoline has the attractive feature of being widely available and having a comparable or lower cost than diesel fuel. In addition, presently available ethanol in the U.S. produces up to 40 percent less greenhouse gas (GHG) emissions than diesel fuel or gasoline and has a widely available distribution system.

    To make gasoline- and/or ethanol-powered spark-ignition engine HDVs attractive for widespread HDV applications, we developed ways to make spark-ignition engines more efficient, so their fuel costs are more palatable to owners of heavy-duty trucks. Our approach provides diesel-like high efficiency and high power in gasoline-powered engines by using various methods to prevent engine knock (unwanted self-ignition that can damage the engine) in spark-ignition gasoline engines. This enables greater levels of turbocharging and use of higher engine compression ratios. These features provide high efficiency, comparable to that provided by diesel engines. Plus, when the engine is powered by ethanol, the required knock resistance is provided by the intrinsic high knock resistance of the fuel itself. 
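
    The link between compression ratio and efficiency can be illustrated with the textbook air-standard Otto-cycle formula. This is a generic thermodynamic illustration, not engine data from this work, and the effective ratio of specific heats is an assumed value.

    ```python
    # Textbook air-standard Otto-cycle efficiency as a function of compression ratio;
    # a generic thermodynamic illustration with an assumed effective gamma, not engine data.
    GAMMA = 1.35   # assumed effective ratio of specific heats for the working gas

    def otto_efficiency(compression_ratio):
        return 1.0 - compression_ratio ** (1.0 - GAMMA)

    for ratio in (9, 11, 13, 15):
        print(f"compression ratio {ratio:>2}: ideal efficiency {otto_efficiency(ratio):.1%}")
    ```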

    Q. What are the major challenges to implementing your technology in California?

    A. California has always been the pioneer in air pollutant control, with states such as Washington, Oregon, and New York often following suit. As the most populous state, California has a lot of sway — it’s a trendsetter. What happens in California has an impact on the rest of the United States.

    The main challenge to implementation of our technology is the argument that a better internal combustion engine technology is not needed because battery-powered HDVs — particularly long-haul trucks — can play the required role in reducing NOx and GHG emissions by 2035. We think that substantial market penetration of battery electric vehicles (BEV) in this vehicle sector will take a considerably longer time. In contrast to light-duty vehicles, there has been very little penetration of battery power into the HDV fleet, especially in long-haul trucks, which are the largest users of diesel fuel. One reason for this is that long-haul trucks using battery power face the challenge of reduced cargo capability due to substantial battery weight. Another challenge is the substantially longer charging time for BEVs compared to that of most present HDVs.

    Hydrogen-powered trucks using fuel cells have also been proposed as an alternative to BEV trucks, which might limit interest in adopting improved internal combustion engines. However, hydrogen-powered trucks face the formidable challenges of producing zero GHG hydrogen at affordable cost, as well as the cost of storage and transportation of hydrogen. At present the high purity hydrogen needed for fuel cells is generally very expensive.

    Q. How does your idea compare overall to battery-powered and hydrogen-powered HDVs? And how will you persuade people that it is an attractive pathway to follow?

    A. Our design uses existing propulsion systems and can operate on existing liquid fuels, and for these reasons, in the near term, it will be economically attractive to the operators of long-haul trucks. In fact, it can even be a lower-cost option than diesel power because of the significantly less-expensive exhaust treatment and smaller-size engines for the same power and torque. This economic attractiveness could enable the large-scale market penetration that is needed to have a substantial impact on reducing air pollution. By contrast, we think it could take at least 20 years longer for BEVs or hydrogen-powered vehicles to reach the same level of market penetration.

    Our approach also uses existing corn-based ethanol, which can provide a greater near-term GHG reduction benefit than battery- or hydrogen-powered long-haul trucks. While the GHG reduction from using existing ethanol would initially be in the 20 percent to 40 percent range, the scale at which the market is penetrated in the near-term could be much greater than for BEV or hydrogen-powered vehicle technology. The overall impact in reducing GHGs could be considerably greater.
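
    The arithmetic behind that argument is straightforward. The shares below are purely hypothetical; they only illustrate why fleet-wide impact depends on market penetration as much as on the per-vehicle reduction.

    ```python
    # Hypothetical illustration: fleet-wide GHG impact = fraction of the fleet reached
    # multiplied by the per-vehicle reduction. The shares below are assumptions, not forecasts.
    scenarios = {
        # scenario: (near-term fraction of the heavy-duty fleet reached, per-vehicle GHG reduction)
        "flex-fuel ethanol engines": (0.40, 0.30),
        "battery-electric trucks":   (0.05, 1.00),
    }

    for name, (penetration, per_vehicle_cut) in scenarios.items():
        fleet_cut = penetration * per_vehicle_cut
        print(f"{name:<28} near-term fleet-wide GHG reduction: {fleet_cut:.0%}")
    ```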

    Moreover, we see a migration path beyond 2030 where further reductions in GHG emissions from corn ethanol can be possible through carbon capture and sequestration of the carbon dioxide (CO2) that is produced during ethanol production. In this case, overall CO2 reductions could potentially be 80 percent or more. Technologies for producing ethanol (and methanol, another alcohol fuel) from waste at attractive costs are emerging, and can provide fuel with zero or negative GHG emissions. One pathway for providing a negative GHG impact is through finding alternatives to landfilling for waste disposal, as this method leads to potent methane GHG emissions. A negative GHG impact could also be obtained by converting biomass waste into clean fuel, since the biomass waste can be carbon neutral and CO2 from the production of the clean fuel can be captured and sequestered.

    In addition, our flex-fuel engine technology may be synergistically used as range extenders in plug-in hybrid HDVs, which use limited battery capacity and avoid the cargo-capability reduction and fueling disadvantages of long-haul trucks powered by batteries alone.

    With the growing threats from air pollution and global warming, our HDV solution is an increasingly important option for near-term reduction of air pollution and offers a faster start in reducing heavy-duty fleet GHG emissions. It also provides an attractive migration path for longer-term, larger GHG reductions from the HDV sector.