More stories

  • Computing our climate future

    On Monday, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the first in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    With improvements to computer processing power and an increased understanding of the physical equations governing the Earth’s climate, scientists are continually working to refine climate models and improve their predictive power. But the tools they’re refining were originally conceived decades ago with only scientists in mind. When it comes to developing tangible climate action plans, these models remain inscrutable to the policymakers, public safety officials, civil engineers, and community organizers who need their predictive insight most.

    “What you end up having is a gap between what’s typically used in practice, and the real cutting-edge science,” says Noelle Selin, a professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and co-lead with Professor Raffaele Ferrari on the MIT Climate Grand Challenges flagship project “Bringing Computation to the Climate Challenge.” “How can we use new computational techniques, new understandings, new ways of thinking about modeling, to really bridge that gap between state-of-the-art scientific advances and modeling, and people who are actually needing to use these models?”

    Using this as a driving question, the team won’t just be refining current climate models; it is building a new one from the ground up.

    This kind of game-changing advancement is exactly what the MIT Climate Grand Challenges initiative is looking for, which is why the proposal has been named one of the five flagship projects in the ambitious Institute-wide program aimed at tackling the climate crisis. The proposal, which was selected from nearly 100 submissions and was among 27 finalists, will receive additional funding and support to further the team’s goal of reimagining the climate modeling system. It also brings together contributors from across the Institute, including the MIT Schwarzman College of Computing, the School of Engineering, and the Sloan School of Management.

    When it comes to pursuing high-impact climate solutions that communities around the world can use, “it’s great to do it at MIT,” says Ferrari, EAPS Cecil and Ida Green Professor of Oceanography. “You’re not going to find many places in the world where you have the cutting-edge climate science, the cutting-edge computer science, and the cutting-edge policy science experts that we need to work together.”

    The climate model of the future

    The proposal builds on work that Ferrari began three years ago as part of a joint project with Caltech, the Naval Postgraduate School, and NASA’s Jet Propulsion Lab. Called the Climate Modeling Alliance (CliMA), the consortium of scientists, engineers, and applied mathematicians is constructing a climate model capable of more accurately projecting future changes in critical variables, such as clouds in the atmosphere and turbulence in the ocean, with uncertainties at least half the size of those in existing models.

    To do this, however, requires a new approach. For one thing, current models are too coarse in resolution — at the 100-to-200-kilometer scale — to resolve small-scale processes like cloud cover, rainfall, and sea ice extent. But also, explains Ferrari, part of this limitation in resolution is due to the fundamental architecture of the models themselves. The languages most global climate models are coded in were first created back in the 1960s and ’70s, largely by scientists for scientists. Since then, advances in computing driven by the corporate world and computer gaming have given rise to dynamic new computer languages, powerful graphics processing units, and machine learning.

    For climate models to take full advantage of these advancements, there’s only one option: starting over with a modern, more flexible language. Written in Julia, a part of the MIT Julia Lab’s Scientific Machine Learning technology, and spearheaded by Alan Edelman, a professor of applied mathematics in MIT’s Department of Mathematics, CliMA will be able to harness far more data than the current models can handle.

    “It’s been real fun finally working with people in computer science here at MIT,” Ferrari says. “Before it was impossible, because traditional climate models are in a language their students can’t even read.”

    The result is what’s being called the “Earth digital twin,” a climate model that can simulate global conditions on a large scale. This on its own is an impressive feat, but the team wants to take this a step further with their proposal.

    “We want to take this large-scale model and create what we call an ‘emulator’ that is only predicting a set of variables of interest, but it’s been trained on the large-scale model,” Ferrari explains. Emulators are not new technology, but what is new is that these emulators, being referred to as the “Earth digital cousins,” will take advantage of machine learning.

    “Now we know how to train a model if we have enough data to train them on,” says Ferrari. Machine learning for projects like this has only become possible in recent years as more observational data become available, along with improved computer processing power. The goal is to create smaller, more localized models by training them using the Earth digital twin. Doing so will save time and money, which is key if the digital cousins are going to be usable for stakeholders, like local governments and private-sector developers.
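
    To make the division of labor concrete, here is a minimal sketch of the emulator idea in Python: sample inputs, run the expensive model to generate training pairs, fit a cheap statistical model to those pairs, and then query the cheap model interactively. The function run_large_scale_model below is a hypothetical stand-in for the Earth digital twin, and the whole snippet is illustrative only (CliMA itself is written in Julia, and the team’s emulators would be trained on full climate fields rather than a toy scalar).

    ```python
    # Illustrative emulator (surrogate model) trained on outputs of a large
    # simulation. run_large_scale_model is a hypothetical stand-in for the
    # expensive Earth digital twin; this is not CliMA code.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    def run_large_scale_model(params: np.ndarray) -> float:
        """One expensive simulation: maps inputs (e.g., an emissions scenario
        and a regional setting) to a variable of interest (e.g., local summer
        rainfall). Here it is just a cheap synthetic function."""
        return float(np.sin(params[0]) + 0.1 * params[1] ** 2)

    # 1. Build a training set by sampling inputs and running the big model.
    rng = np.random.default_rng(0)
    X_train = rng.uniform(-2.0, 2.0, size=(500, 2))
    y_train = np.array([run_large_scale_model(x) for x in X_train])

    # 2. Train the cheap emulator on those input/output pairs.
    emulator = RandomForestRegressor(n_estimators=200, random_state=0)
    emulator.fit(X_train, y_train)

    # 3. Query the emulator: fast enough for interactive, localized what-if runs.
    print("Emulated prediction:", emulator.predict(np.array([[0.5, 1.0]]))[0])
    ```

    The expensive model only has to be run to generate training data; after that, the emulator can answer stakeholder queries in a fraction of a second.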

    Adaptable predictions for average stakeholders

    When it comes to setting climate-informed policy, stakeholders need to understand the probability of an outcome within their own regions — in the same way that you would prepare for a hike differently if there’s a 10 percent chance of rain versus a 90 percent chance. The smaller Earth digital cousin models will be able to do things the larger model can’t do, like simulate local regions in real time and provide a wider range of probabilistic scenarios.

    “Right now, if you wanted to use output from a global climate model, you usually would have to use output that’s designed for general use,” says Selin, who is also the director of the MIT Technology and Policy Program. With the project, the team can take end-user needs into account from the very beginning while also incorporating their feedback and suggestions into the models, helping to “democratize the idea of running these climate models,” as she puts it. Doing so means building an interactive interface that eventually will give users the ability to change input values and run the new simulations in real time. The team hopes that, eventually, the Earth digital cousins could run on something as ubiquitous as a smartphone, although developments like that are currently beyond the scope of the project.

    The next thing the team will work on is building connections with stakeholders. Through participation of other MIT groups, such as the Joint Program on the Science and Policy of Global Change and the Climate and Sustainability Consortium, they hope to work closely with policymakers, public safety officials, and urban planners to give them predictive tools tailored to their needs that can provide actionable outputs important for planning. Faced with rising sea levels, for example, coastal cities could better visualize the threat and make informed decisions about infrastructure development and disaster preparedness; communities in drought-prone regions could develop long-term civil planning with an emphasis on water conservation and wildfire resistance.

    “We want to make the modeling and analysis process faster so people can get more direct and useful feedback for near-term decisions,” she says.

    The final piece of the challenge is to incentivize students now so that they can join the project and make a difference. Ferrari has already had luck garnering student interest after co-teaching a class with Edelman and seeing the enthusiasm students have about computer science and climate solutions.

    “We’re intending in this project to build a climate model of the future,” says Selin. “So it seems really appropriate that we would also train the builders of that climate model.”

  • MIT announces five flagship projects in first-ever Climate Grand Challenges competition

    MIT today announced the five flagship projects selected in its first-ever Climate Grand Challenges competition. These multiyear projects will define a dynamic research agenda focused on unraveling some of the toughest unsolved climate problems and bringing high-impact, science-based solutions to the world on an accelerated basis.

    Representing the most promising concepts to emerge from the two-year competition, the five flagship projects will receive additional funding and resources from MIT and others to develop their ideas and swiftly transform them into practical solutions at scale.

    “Climate Grand Challenges represents a whole-of-MIT drive to develop game-changing advances to confront the escalating climate crisis, in time to make a difference,” says MIT President L. Rafael Reif. “We are inspired by the creativity and boldness of the flagship ideas and by their potential to make a significant contribution to the global climate response. But given the planet-wide scale of the challenge, success depends on partnership. We are eager to work with visionary leaders in every sector to accelerate this impact-oriented research, implement serious solutions at scale, and inspire others to join us in confronting this urgent challenge for humankind.”

    Brief descriptions of the five Climate Grand Challenges flagship projects are provided below.

    Bringing Computation to the Climate Challenge

    This project leverages advances in artificial intelligence, machine learning, and data sciences to improve the accuracy of climate models and make them more useful to a variety of stakeholders — from communities to industry. The team is developing a digital twin of the Earth that harnesses more data than ever before to reduce and quantify uncertainties in climate projections.

    Research leads: Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in the Department of Earth, Atmospheric and Planetary Sciences, and director of the Program in Atmospheres, Oceans, and Climate; and Noelle Eckley Selin, director of the Technology and Policy Program and professor with a joint appointment in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences

    Center for Electrification and Decarbonization of Industry

    This project seeks to reinvent and electrify the processes and materials behind hard-to-decarbonize industries like steel, cement, ammonia, and ethylene production. A new innovation hub will perform targeted fundamental research and engineering with urgency, pushing the technological envelope on electricity-driven chemical transformations.

    Research leads: Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering, and Bilge Yıldız, the Breene M. Kerr Professor in the Department of Nuclear Science and Engineering and professor in the Department of Materials Science and Engineering

    Preparing for a new world of weather and climate extremes

    This project addresses key gaps in knowledge about intensifying extreme events such as floods, hurricanes, and heat waves, and quantifies their long-term risk in a changing climate. The team is developing a scalable climate-change adaptation toolkit to help vulnerable communities and low-carbon energy providers prepare for these extreme weather events.

    Research leads: Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in the Department of Earth, Atmospheric and Planetary Sciences and co-director of the MIT Lorenz Center; Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab; and Paul O’Gorman, professor in the Program in Atmospheres, Oceans, and Climate in the Department of Earth, Atmospheric and Planetary Sciences

    The Climate Resilience Early Warning System

    The CREWSnet project seeks to reinvent climate change adaptation with a novel forecasting system that empowers underserved communities to interpret local climate risk, proactively plan for their futures incorporating resilience strategies, and minimize losses. CREWSnet will initially be demonstrated in southwestern Bangladesh, serving as a model for similarly threatened regions around the world.

    Research leads: John Aldridge, assistant leader of the Humanitarian Assistance and Disaster Relief Systems Group at MIT Lincoln Laboratory, and Elfatih Eltahir, the H.M. King Bhumibol Professor of Hydrology and Climate in the Department of Civil and Environmental Engineering

    Revolutionizing agriculture with low-emissions, resilient crops

    This project works to revolutionize the agricultural sector with climate-resilient crops and fertilizers that have the ability to dramatically reduce greenhouse gas emissions from food production.

    Research lead: Christopher Voigt, the Daniel I.C. Wang Professor in the Department of Biological Engineering

    “As one of the world’s leading institutions of research and innovation, it is incumbent upon MIT to draw on our depth of knowledge, ingenuity, and ambition to tackle the hard climate problems now confronting the world,” says Richard Lester, MIT associate provost for international activities. “Together with collaborators across industry, finance, community, and government, the Climate Grand Challenges teams are looking to develop and implement high-impact, path-breaking climate solutions rapidly and at a grand scale.”

    The initial call for ideas in 2020 yielded nearly 100 letters of interest from almost 400 faculty members and senior researchers, representing 90 percent of MIT departments. After an extensive evaluation, 27 finalist teams received a total of $2.7 million to develop comprehensive research and innovation plans. The projects address four broad research themes: using data and science to forecast climate-related risk; building equity and fairness into climate solutions; removing, managing, and storing greenhouse gases; and decarbonizing complex industries and processes.

    To select the winning projects, research plans were reviewed by panels of international experts representing relevant scientific and technical domains as well as experts in processes and policies for innovation and scalability.

    “In response to climate change, the world really needs to do two things quickly: deploy the solutions we already have much more widely, and develop new solutions that are urgently needed to tackle this intensifying threat,” says Maria Zuber, MIT vice president for research. “These five flagship projects exemplify MIT’s strong determination to bring its knowledge and expertise to bear in generating new ideas and solutions that will help solve the climate problem.”

    “The Climate Grand Challenges flagship projects set a new standard for inclusive climate solutions that can be adapted and implemented across the globe,” says MIT Chancellor Melissa Nobles. “This competition propels the entire MIT research community — faculty, students, postdocs, and staff — to act with urgency around a worsening climate crisis, and I look forward to seeing the difference these projects can make.”

    “MIT’s efforts on climate research amid the climate crisis was a primary reason that I chose to attend MIT, and remains a reason that I view the Institute favorably. MIT has a clear opportunity to be a thought leader in the climate space in our own MIT way, which is why CGC fits in so well,” says senior Megan Xu, who served on the Climate Grand Challenges student committee and is studying ways to make the food system more sustainable.

    The Climate Grand Challenges competition is a key initiative of “Fast Forward: MIT’s Climate Action Plan for the Decade,” which the Institute published in May 2021. Fast Forward outlines MIT’s comprehensive plan for helping the world address the climate crisis. It consists of five broad areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.

  • New England renewables + Canadian hydropower

    The urgent need to cut carbon emissions has prompted a growing number of U.S. states to commit to achieving 100 percent clean electricity by 2040 or 2050. But figuring out how to meet those commitments and still have a reliable and affordable power system is a challenge. Wind and solar installations will form the backbone of a carbon-free power system, but what technologies can meet electricity demand when those intermittent renewable sources are not adequate?

    In general, the options being discussed include nuclear power, natural gas with carbon capture and storage (CCS), and energy storage technologies such as new and improved batteries and chemical storage in the form of hydrogen. But in the northeastern United States, there is one more possibility being proposed: electricity imported from hydropower plants in the neighboring Canadian province of Quebec.

    The proposition makes sense. Those plants can produce as much electricity as about 40 large nuclear power plants, and some power generated in Quebec already comes to the Northeast. So, there could be abundant additional supply to fill any shortfall when New England’s intermittent renewables underproduce. However, U.S. wind and solar investors view Canadian hydropower as a competitor and argue that reliance on foreign supply discourages further U.S. investment.

    Two years ago, three researchers affiliated with the MIT Center for Energy and Environmental Policy Research (CEEPR) — Emil Dimanchev SM ’18, now a PhD candidate at the Norwegian University of Science and Technology; Joshua Hodge, CEEPR’s executive director; and John Parsons, a senior lecturer in the MIT Sloan School of Management — began wondering whether viewing Canadian hydro as another source of electricity might be too narrow. “Hydropower is a more-than-hundred-year-old technology, and plants are already built up north,” says Dimanchev. “We might not need to build something new. We might just need to use those plants differently or to a greater extent.”

    So the researchers decided to examine the potential role and economic value of Quebec’s hydropower resource in a future low-carbon system in New England. Their goal was to help inform policymakers, utility decision-makers, and others about how best to incorporate Canadian hydropower into their plans and to determine how much time and money New England should spend to integrate more hydropower into its system. What they found out was surprising, even to them.

    The analytical methods

    To explore possible roles for Canadian hydropower to play in New England’s power system, the MIT researchers first needed to predict how the regional power system might look in 2050 — both the resources in place and how they would be operated, given any policy constraints. To perform that analysis, they used GenX, a modeling tool originally developed by Jesse Jenkins SM ’14, PhD ’18 and Nestor Sepulveda SM ’16, PhD ’20 while they were researchers at the MIT Energy Initiative (MITEI).

    The GenX model is designed to support decision-making related to power system investment and real-time operation and to examine the impacts of possible policy initiatives on those decisions. Given information on current and future technologies — different kinds of power plants, energy storage technologies, and so on — GenX calculates the combination of equipment and operating conditions that can meet a defined future demand at the lowest cost. The GenX modeling tool can also incorporate specified policy constraints, such as limits on carbon emissions.
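
    As a rough illustration of the kind of problem GenX solves (this is not GenX itself, which is far richer, and all numbers below are invented), the toy model here chooses how much gas and wind capacity to build, and how to dispatch it hour by hour, so that demand is met at minimum cost under a carbon cap:

    ```python
    # Toy least-cost capacity-expansion problem in the spirit of a tool like
    # GenX, shrunk to two technologies and three hours so the structure is
    # visible. Costs, demands, and the emissions cap are made-up numbers.
    import numpy as np
    from scipy.optimize import linprog

    hours = 3
    demand = np.array([100.0, 120.0, 80.0])   # MW demand in each hour
    wind_cf = np.array([0.9, 0.2, 0.6])       # wind availability per hour
    fix_gas, fix_wind = 50.0, 70.0            # $/MW of installed capacity
    var_gas = 30.0                            # $/MWh fuel cost (wind is free)
    emis_rate = 0.4                           # tCO2 per MWh of gas generation
    co2_cap = 40.0                            # total tCO2 allowed (policy constraint)

    # Decision variables: [cap_gas, cap_wind, gas_gen (3 hours), wind_gen (3 hours)]
    n = 2 + 2 * hours
    c = np.concatenate(([fix_gas, fix_wind], np.full(hours, var_gas), np.zeros(hours)))

    # Equality constraints: generation meets demand in every hour.
    A_eq = np.zeros((hours, n))
    for t in range(hours):
        A_eq[t, 2 + t] = 1.0          # gas generation in hour t
        A_eq[t, 2 + hours + t] = 1.0  # wind generation in hour t
    b_eq = demand

    # Inequality constraints: generation limited by capacity, plus the carbon cap.
    A_ub, b_ub = [], []
    for t in range(hours):
        row = np.zeros(n); row[0] = -1.0; row[2 + t] = 1.0                 # gas <= cap_gas
        A_ub.append(row); b_ub.append(0.0)
        row = np.zeros(n); row[1] = -wind_cf[t]; row[2 + hours + t] = 1.0  # wind <= cf * cap_wind
        A_ub.append(row); b_ub.append(0.0)
    row = np.zeros(n); row[2:2 + hours] = emis_rate                        # emissions cap
    A_ub.append(row); b_ub.append(co2_cap)

    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * n)
    cap_gas, cap_wind = res.x[0], res.x[1]
    print(f"Build {cap_gas:.1f} MW gas and {cap_wind:.1f} MW wind; total cost ${res.fun:,.0f}")
    ```

    Tightening co2_cap in this toy pushes the solution toward more wind capacity, which is the same qualitative behavior the researchers probe with their decarbonization scenarios.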

    For their study, Dimanchev, Hodge, and Parsons set parameters in the GenX model using data and assumptions derived from a variety of sources to build a representation of the interconnected power systems in New England, New York, and Quebec. (They included New York to account for that state’s existing demand on the Canadian hydro resources.) For data on the available hydropower, they turned to Hydro-Québec, the public utility that owns and operates most of the hydropower plants in Quebec.

    It’s standard in such analyses to include real-world engineering constraints on equipment, such as how quickly certain power plants can be ramped up and down. With help from Hydro-Québec, the researchers also put hour-to-hour operating constraints on the hydropower resource.

    Most of Hydro-Québec’s plants are “reservoir hydropower” systems. In them, when power isn’t needed, the flow on a river is restrained by a dam downstream of a reservoir, and the reservoir fills up. When power is needed, the dam is opened, and the water in the reservoir runs through downstream pipes, turning turbines and generating electricity. Proper management of such a system requires adhering to certain operating constraints. For example, to prevent flooding, reservoirs must not be allowed to overfill — especially prior to spring snowmelt. And generation can’t be increased too quickly because a sudden flood of water could erode the river edges or disrupt fishing or water quality.
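
    In a dispatch or capacity-expansion model, operating rules like these typically appear as hour-by-hour constraints. A simplified, generic statement of their form (an illustration, not Hydro-Québec’s actual constraint set) is:

    ```latex
    % Generic reservoir hydropower constraints for hour t
    \begin{aligned}
    S_{t+1} &= S_t + I_t - R_t                   && \text{water balance: storage, natural inflow, release}\\
    0 \le S_t &\le S^{\max}                      && \text{reservoir can neither run dry nor overfill}\\
    0 \le g_t &\le \min(\eta R_t,\; G^{\max})    && \text{generation limited by release and turbine capacity}\\
    |R_t - R_{t-1}| &\le \Delta R^{\max}         && \text{ramp limit: no sudden surge of water downstream}
    \end{aligned}
    ```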

    Based on projections from the National Renewable Energy Laboratory and elsewhere, the researchers specified electricity demand for every hour of the year 2050, and the model calculated the cost-optimal mix of technologies and system operating regime that would satisfy that hourly demand, including the dispatch of the Hydro-Québec hydropower system. In addition, the model determined how electricity would be traded among New England, New York, and Quebec.

    Effects of decarbonization limits on technology mix and electricity trading

    To examine the impact of the emissions-reduction mandates in the New England states, the researchers ran the model assuming reductions in carbon emissions between 80 percent and 100 percent relative to 1990 levels. The results of those runs show that, as emissions limits get more stringent, New England uses more wind and solar and extends the lifetime of its existing nuclear plants. To balance the intermittency of the renewables, the region uses natural gas plants, demand-side management, battery storage (modeled as lithium-ion batteries), and trading with Quebec’s hydropower-based system. Meanwhile, the optimal mix in Quebec is mostly composed of existing hydro generation. Some solar is added, but new reservoirs are built only if renewable costs are assumed to be very high.

    The most significant — and perhaps surprising — outcome is that in all the scenarios, the hydropower-based system of Quebec is not only an exporter but also an importer of electricity, with the direction of flow on the Quebec-New England transmission lines changing over time.

    Historically, energy has always flowed from Quebec to New England. The model results for 2018 show electricity flowing from north to south, with the quantity capped by the current transmission capacity limit of 2,225 megawatts (MW).

    An analysis for 2050, assuming that New England decarbonizes 90 percent and the capacity of the transmission lines remains the same, finds electricity flows going both ways. Flows from north to south still dominate. But for nearly 3,500 of the 8,760 hours of the year, electricity flows in the opposite direction — from New England to Quebec. And for more than 2,200 of those hours, the flow going north is at the maximum the transmission lines can carry.

    The direction of flow is motivated by economics. When renewable generation is abundant in New England, prices are low, and it’s cheaper for Quebec to import electricity from New England and conserve water in its reservoirs. Conversely, when New England’s renewables are scarce and prices are high, New England imports hydro-generated electricity from Quebec.
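
    The economic rule driving the flow direction can be written in a few lines: in each hour, power moves toward the higher-priced region, up to the transmission limit. The snippet below is a schematic of that rule with invented hourly prices; the actual study determines flows inside the full GenX optimization rather than with this simple threshold.

    ```python
    # Schematic of price-driven trading on the Quebec-New England interties.
    # Positive flow = Quebec exports south; negative = New England exports north
    # (Quebec imports and conserves water in its reservoirs). Prices are invented.
    import numpy as np

    line_capacity_mw = 2225.0  # current transmission limit cited in the article

    rng = np.random.default_rng(1)
    price_ne = rng.uniform(5.0, 80.0, size=24)  # $/MWh in New England; low when renewables are abundant
    price_qc = 35.0                             # $/MWh "water value" of Quebec's stored hydro

    # Flow toward the higher-priced side, capped by the line (a bang-bang
    # simplification; the real model also respects demand and reservoir limits).
    flow = np.where(price_ne > price_qc, line_capacity_mw, -line_capacity_mw)

    print(f"Hours flowing south: {(flow > 0).sum()}, hours flowing north: {(flow < 0).sum()}")
    ```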

    So rather than delivering electricity, Canadian hydro provides a means of storing the electricity generated by the intermittent renewables in New England.

    “We see this in our modeling because when we tell the model to meet electricity demand using these resources, the model decides that it is cost-optimal to use the reservoirs to store energy rather than anything else,” says Dimanchev. “We should be sending the energy back and forth, so the reservoirs in Quebec are in essence a battery that we use to store some of the electricity produced by our intermittent renewables and discharge it when we need it.”

    Given that outcome, the researchers decided to explore the impact of expanding the transmission capacity between New England and Quebec. Building transmission lines is always contentious, but what would be the impact if it could be done?

    Their model results show that when transmission capacity is increased from 2,225 MW to 6,225 MW, flows in both directions are greater, and in both cases the flow is at the new maximum for more than 1,000 hours.

    Results of the analysis thus confirm that the economic response to expanded transmission capacity is more two-way trading. To continue the battery analogy, more transmission capacity to and from Quebec effectively increases the rate at which the battery can be charged and discharged.

    Effects of two-way trading on the energy mix

    What impact would the advent of two-way trading have on the mix of energy-generating sources in New England and Quebec in 2050?

    Assuming current transmission capacity, in New England, the change from one-way to two-way trading increases both wind and solar power generation and to a lesser extent nuclear; it also decreases the use of natural gas with CCS. The hydro reservoirs in Canada can provide long-duration storage — over weeks, months, and even seasons — so there is less need for natural gas with CCS to cover any gaps in supply. The level of imports is slightly lower, but now there are also exports. Meanwhile, in Quebec, two-way trading reduces solar power generation, and the use of wind disappears. Exports are roughly the same, but now there are imports as well. Thus, two-way trading reallocates renewables from Quebec to New England, where it’s more economical to install and operate solar and wind systems.

    Another analysis examined the impact on the energy mix of assuming two-way trading plus expanded transmission capacity. For New England, greater transmission capacity allows wind, solar, and nuclear to expand further; natural gas with CCS all but disappears; and both imports and exports increase significantly. In Quebec, solar decreases still further, and both exports and imports of electricity increase.

    Those results assume that the New England power system decarbonizes by 99 percent in 2050 relative to 1990 levels. But at 90 percent and even 80 percent decarbonization levels, the model concludes that natural gas capacity decreases with the addition of new transmission relative to the current transmission scenario. Existing plants are retired, and new plants are not built as they are no longer economically justified. Since natural gas plants are the only source of carbon emissions in the 2050 energy system, the researchers conclude that the greater access to hydro reservoirs made possible by expanded transmission would accelerate the decarbonization of the electricity system.

    Effects of transmission changes on costs

    The researchers also explored how two-way trading with expanded transmission capacity would affect costs in New England and Quebec, assuming 99 percent decarbonization in New England. New England’s savings on fixed costs (investments in new equipment) are largely due to a decreased need to invest in more natural gas with CCS, and its savings on variable costs (operating costs) are due to a reduced need to run those plants. Quebec’s savings on fixed costs come from a reduced need to invest in solar generation. The increase in cost — borne by New England — reflects the construction and operation of the increased transmission capacity. The net benefit for the region is substantial.

    Thus, the analysis shows that everyone wins as transmission capacity increases — and the benefit grows as the decarbonization target tightens. At 99 percent decarbonization, the overall New England-Quebec region pays about $21 per megawatt-hour (MWh) of electricity with today’s transmission capacity but only $18/MWh with expanded transmission. Assuming 100 percent reduction in carbon emissions, the region pays $29/MWh with current transmission capacity and only $22/MWh with expanded transmission.

    Addressing misconceptions

    These results shed light on several misconceptions that policymakers, supporters of renewable energy, and others tend to have.

    The first misconception is that New England renewables and Canadian hydropower are competitors. The modeling results instead show that they’re complementary. When the power systems in New England and Quebec work together as an integrated system, the Canadian reservoirs are used part of the time to store the renewable electricity. And with more access to hydropower storage in Quebec, there’s generally more renewable investment in New England.

    The second misconception arises when policymakers refer to Canadian hydro as a “baseload resource,” which implies a dependable source of electricity — particularly one that supplies power all the time. “Our study shows that by viewing Canadian hydropower as a baseload source of electricity — or indeed a source of electricity at all — you’re not taking full advantage of what that resource can provide,” says Dimanchev. “What we show is that Quebec’s reservoir hydro can provide storage, specifically for wind and solar. It’s a solution to the intermittency problem that we foresee in carbon-free power systems for 2050.”

    While the MIT analysis focuses on New England and Quebec, the researchers believe that their results may have wider implications. As power systems in many regions expand production of renewables, the value of storage grows. Some hydropower systems have storage capacity that has not yet been fully utilized and could be a good complement to renewable generation. Taking advantage of that capacity can lower the cost of deep decarbonization and help move some regions toward a decarbonized supply of electricity.

    This research was funded by the MIT Center for Energy and Environmental Policy Research, which is supported in part by a consortium of industry and government associates.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Finding the questions that guide MIT fusion research

    “One of the things I learned was, doing good science isn’t so much about finding the answers as figuring out what the important questions are.”

    As Martin Greenwald retires from the responsibilities of senior scientist and deputy director of the MIT Plasma Science and Fusion Center (PSFC), he reflects on his almost 50 years of science study, 43 of them as a researcher at MIT, pursuing the question of how to make the carbon-free energy of fusion a reality.

    Most of Greenwald’s important questions about fusion began after graduating from MIT with a BS in both physics and chemistry. Beginning graduate work at the University of California at Berkeley, he felt compelled to learn more about fusion as an energy source that could have “a real societal impact.” At the time, researchers were exploring new ideas for devices that could create and confine fusion plasmas. Greenwald worked on Berkeley’s “alternate concept” TORMAC, a Toroidal Magnetic Cusp. “It didn’t work out very well,” he laughs. “The first thing I was known for was making the measurements that shut down the program.”

    Believing the temperature of the plasma generated by the device would not be as high as his group leader expected, Greenwald developed hardware that could measure the low temperatures predicted by his own “back of the envelope calculations.” As he anticipated, his measurements showed that “this was not a fusion plasma; this was hardly a confined plasma at all.”

    With a PhD from Berkeley, Greenwald returned to MIT for a research position at the PSFC, attracted by the center’s “esprit de corps.”

    He arrived in time to participate in the final experiments on Alcator A, the first in a series of tokamaks built at MIT, all characterized by compact size and featuring high-field magnets. The tokamak design was then becoming favored as the most effective route to fusion: its doughnut-shaped vacuum chamber, surrounded by electromagnets, could confine the turbulent plasma long enough, while increasing its heat and density, to make fusion occur.

    Alcator A showed that the energy confinement time improves in relation to increasing plasma density. MIT’s succeeding device, Alcator C, was designed to use higher magnetic fields, boosting expectations that it would reach higher densities and better confinement. To attain these goals, however, Greenwald had to pursue a new technique that increased density by injecting pellets of frozen fuel into the plasma, a method he likens to throwing “snowballs in hell.” This work was notable for the creation of a new regime of enhanced plasma confinement on Alcator C. In those experiments, a confined plasma surpassed for the first time one of the two Lawson criteria — the minimum required value for the product of the plasma density and confinement time — for making net power from fusion. The criteria had served as a benchmark for fusion research since their publication by John Lawson in 1957.
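
    For reference, the density-and-confinement form of the Lawson condition mentioned here is usually quoted, for deuterium-tritium fuel near its optimal temperature, as roughly

    ```latex
    n \, \tau_E \;\gtrsim\; 10^{20}\ \mathrm{s\,m^{-3}},
    ```

    where n is the plasma density and τ_E the energy confinement time; the precise threshold depends on the plasma temperature and on the assumed efficiency of converting fusion heat back into power.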

    Greenwald continued to make a name for himself as part of a larger study into the physics of the Compact Ignition Tokamak — a high-field burning plasma experiment that the U.S. program was proposing to build in the late 1980s. The result, unexpectedly, was a new scaling law, later known as the “Greenwald Density Limit,” and a new theory for the mechanism of the limit. It has been used to accurately predict performance on much larger machines built since.
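
    The scaling itself is usually written in the compact empirical form

    ```latex
    n_G \;=\; \frac{I_p}{\pi a^2},
    ```

    with the limiting density n_G in units of 10^20 m^-3, the plasma current I_p in megaamperes, and the minor radius a in meters. Because the achievable plasma current rises with magnetic field strength while the minor radius sits in the denominator, this is the scaling behind Whyte’s later remark that fuel density increases with field and with shrinking machine size.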

    The center’s next tokamak, Alcator C-Mod, started operation in 1993 and ran for more than 20 years, with Greenwald as the chair of its Experimental Program Committee. Larger than Alcator C, the new device supported a highly shaped plasma, strong radiofrequency heating, and an all-metal plasma-facing first wall. All of these would eventually be required in a fusion power system.

    C-Mod proved to be MIT’s most enduring fusion experiment to date, producing important results for 20 years. During that time Greenwald contributed not only to the experiments, but to mentoring the next generation. Research scientist Ryan Sweeney notes that “Martin quickly gained my trust as a mentor, in part due to his often casual dress and slightly untamed hair, which are embodiments of his transparency and his focus on what matters. He can quiet a room of PhDs and demand attention not by intimidation, but rather by his calmness and his ability to bring clarity to complicated problems, be they scientific or human in nature.”

    Greenwald worked closely with the group of students who, in PSFC Director Dennis Whyte’s class, came up with the tokamak concept that evolved into SPARC. MIT is now pursuing this compact, high-field tokamak with Commonwealth Fusion Systems, a startup that grew out of the collective enthusiasm for this concept, and the growing realization it could work. Greenwald now heads the Physics Group for the SPARC project at MIT. He has helped confirm the device’s physics basis in order to predict performance and guide engineering decisions.

    “Martin’s multifaceted talents are thoroughly embodied by, and imprinted on, SPARC,” says Whyte. “First, his leadership in its plasma confinement physics validation and publication place SPARC on a firm scientific footing. Secondly, the impact of the density limit he discovered, which shows that fuel density increases with magnetic field and decreasing the size of the tokamak, is critical in obtaining high fusion power density not just in SPARC, but in future power plants. Third, and perhaps most impressive, is Martin’s mentorship of the SPARC generation of leadership.”

    Greenwald’s expertise and easygoing personality have made him an asset as head of the PSFC Office for Computer Services and group leader for data acquisition and computing, and sought for many professional committees. He has been an APS Fellow since 2000, and was an APS Distinguished Lecturer in Plasma Physics (2001-02). He was also presented in 2014 with a Leadership Award from Fusion Power Associates. He is currently an associate editor for Physics of Plasmas and a member of the Lawrence Livermore National Laboratory Physical Sciences Directorate External Review Committee.

    Although leaving his full-time responsibilities, Greenwald will remain at MIT as a visiting scientist, a role he says will allow him to “stick my nose into everything without being responsible for anything.”

    “At some point in the race you have to hand off the baton,” he says. “And it doesn’t mean you’re not interested in the outcome; and it doesn’t mean you’re just going to walk away into the stands. I want to be there at the end when we succeed.”

  • Leveraging science and technology against the world’s top problems

    Looking back on nearly a half-century at MIT, Richard K. Lester, associate provost and Japan Steel Industry Professor, sees a “somewhat eccentric professional trajectory.”

    But while his path has been irregular, there has been a clearly defined through line, Lester says: the emergence of new science and new technologies, the potential of these developments to shake up the status quo and address some of society’s most consequential problems, and what the outcomes might mean for America’s place in the world.

    Perhaps no assignment in Lester’s portfolio better captures this theme than the new MIT Climate Grand Challenges competition. Spearheaded by Lester and Maria Zuber, MIT vice president for research, and launched at the height of the pandemic in summer 2020, this initiative is designed to mobilize the entire MIT research community around tackling “the really hard, challenging problems currently standing in the way of an effective global response to the climate emergency,” says Lester. “The focus is on those problems where progress requires developing and applying frontier knowledge in the natural and social sciences and cutting-edge technologies. This is the MIT community swinging for the fences in areas where we have a comparative advantage.”

    This is a passion project for him, not least because it has engaged colleagues from nearly all of MIT’s departments. After nearly 100 initial ideas were submitted by more than 300 faculty, 27 teams were named finalists and received funding to develop comprehensive research and innovation plans in such areas as decarbonizing complex industries; risk forecasting and adaptation; advancing climate equity; and carbon removal, management, and storage. In April, a small subset of this group will become multiyear flagship projects, augmenting the work of existing MIT units that are pursuing climate research. Lester is sunny in the face of these extraordinarily complex problems. “This is a bottom-up effort with exciting proposals, and where the Institute is collectively committed — it’s MIT at its best.”

    Nuclear to the core

    This initiative carries a particular resonance for Lester, who remains deeply engaged in nuclear engineering. “The role of nuclear energy is central and will need to become even more central if we’re to succeed in addressing the climate challenge,” he says. He also acknowledges that for nuclear energy technologies — both fission and fusion — to play a vital role in decarbonizing the economy, they must not just win “in the court of public opinion, but in the marketplace,” he says. “Over the years, my research has sought to elucidate what needs to be done to overcome these obstacles.”

    In fact, Lester has been campaigning for much of his career for a U.S. nuclear innovation agenda, a commitment that takes on increased urgency as the contours of the climate crisis sharpen. He argues for the rapid development and testing of nuclear technologies that can complement the renewable but intermittent energy sources of sun and wind. Whether powerful, large-scale, molten-salt-cooled reactors or small, modular, light water reactors, nuclear batteries or promising new fusion projects, U.S. energy policy must embrace nuclear innovation, says Lester, or risk losing the high-stakes race for a sustainable future.

    Chancing into a discipline

    Lester’s introduction to nuclear science was pure happenstance.

    Born in the English industrial city of Leeds, he grew up in a musical family and played piano, violin, and then viola. “It was a big part of my life,” he says, and for a time, music beckoned as a career. He tumbled into a chemical engineering concentration at Imperial College, London, after taking a job in a chemical factory following high school. “There’s a certain randomness to life, and in my case, it’s reflected in my choice of major, which had a very large impact on my ultimate career.”

    In his second year, Lester talked his way into running a small experiment in the university’s research reactor, on radiation effects in materials. “I got hooked, and began thinking of studying nuclear engineering.” But there were few graduate programs in British universities at the time. Then serendipity struck again. The instructor of Lester’s single humanities course at Imperial had previously taught at MIT, and suggested Lester take a look at the nuclear program there. “I will always be grateful to him (and, indirectly, to MIT’s Humanities program) for opening my eyes to the existence of this institution where I’ve spent my whole adult life,” says Lester.

    He arrived at MIT with the notion of mitigating the harms of nuclear weapons. It was a time when the nuclear arms race “was an existential threat in everyone’s life,” he recalls. He targeted his graduate studies on nuclear proliferation. But he also encountered an electrifying study by MIT meteorologist Jule Charney. “Professor Charney produced one of the first scientific assessments of the effects on climate of increasing CO2 concentrations in the atmosphere, with quantitative estimates that have not fundamentally changed in 40 years.”

    Lester shifted directions. “I came to MIT to work on nuclear security, but stayed in the nuclear field because of the contributions that it can and must make in addressing climate change,” he says.

    Research and policy

    His path forward, Lester believed, would involve applying his science and technology expertise to critical policy problems, grounded in immediate, real-world concerns, and aiming for broad policy impacts. Even as a member of the Department of Nuclear Science and Engineering (NSE), he joined with colleagues from many MIT departments to study American industrial practices and what was required to make them globally competitive, and then founded MIT’s Industrial Performance Center (IPC). Working at the IPC with interdisciplinary teams of faculty and students on the sources of productivity and innovation, his research took him to many countries at different stages of industrialization, including China, Taiwan, Japan, and Brazil.

    Lester’s wide-ranging work yielded books (including the MIT Press bestseller “Made in America”), advisory positions with governments, corporations, and foundations, and unexpected collaborations. “My interests were always fairly broad, and being at MIT made it possible to team up with world-leading scholars and extraordinary students not just in nuclear engineering, but in many other fields such as political science, economics, and management,” he says.

    Forging cross-disciplinary ties and bringing creative people together around a common goal proved a valuable skill as Lester stepped into positions of ever-greater responsibility at the Institute. He didn’t exactly relish the prospect of a desk job, though. “I religiously avoided administrative roles until I felt I couldn’t keep avoiding them,” he says.

    Today, as associate provost, he tends to MIT’s international activities — a daunting task given increasing scrutiny of research universities’ globe-spanning research partnerships and education of foreign students. But even in the midst of these consuming chores, Lester remains devoted to his home department. “Being a nuclear engineer is a central part of my identity,” he says.

    To students entering the nuclear field nearly 50 years after he did, who are understandably “eager to fix everything that seems wrong immediately,” he has a message: “Be patient. The hard things, the ones that are really worth doing, will take a long time to do.” Putting the climate crisis behind us will take two generations, Lester believes. Current students will start the job, but it will also take the efforts of their children’s generation before it is done. “So we need you to be energetic and creative, of course, but whatever you do we also need you to be patient and to have ‘stick-to-itiveness’ — and maybe also a moral compass that our generation has lacked.”

  • Q&A: Climate Grand Challenges finalists on using data and science to forecast climate-related risk

    Note: This is the final article in a four-part interview series featuring the work of the 27 MIT Climate Grand Challenges finalist teams, which received a total of $2.7 million in startup funding to advance their projects. This month, the Institute will name a subset of the finalists as multiyear flagship projects.

    Advances in computation, artificial intelligence, robotics, and data science are enabling a new generation of observational tools and scientific modeling with the potential to produce timely, reliable, and quantitative analysis of future climate risks at a local scale. These projections can increase the accuracy and efficacy of early warning systems, improve emergency planning, and provide actionable information for climate mitigation and adaptation efforts, as human actions continue to change planetary conditions.

    In conversations prepared for MIT News, faculty from four Climate Grand Challenges teams with projects in the competition’s “Using data and science to forecast climate-related risk” category describe the promising new technologies that can help scientists understand the Earth’s climate system on a finer scale than ever before. (The other Climate Grand Challenges research themes include building equity and fairness into climate solutions; removing, managing, and storing greenhouse gases; and decarbonizing complex industries and processes.) The following responses have been edited for length and clarity.

    An observational system that can initiate a climate risk forecasting revolution

    Despite recent technological advances and massive volumes of data, climate forecasts remain highly uncertain. Gaps in observational capabilities create substantial challenges to predicting extreme weather events and establishing effective mitigation and adaptation strategies. R. John Hansman, the T. Wilson Professor of Aeronautics and Astronautics and director of the MIT International Center for Air Transportation, discusses the Stratospheric Airborne Climate Observatory System (SACOS) being developed together with Brent Minchew, the Cecil and Ida Green Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and a team that includes researchers from MIT Lincoln Laboratory and Harvard University.

    Q: How does SACOS reduce uncertainty in climate risk forecasting?

    A: There is a critical need for higher spatial and temporal resolution observations of the climate system than are currently available through remote (satellite or airborne) and surface (in-situ) sensing. We are developing an ensemble of high-endurance, solar-powered aircraft with instrument systems capable of performing months-long climate observing missions that satellites or aircraft alone cannot fulfill. Summer months are ideal for SACOS operations, as many key climate phenomena are active and short night periods reduce the battery mass, vehicle size, and technical risks. These observations hold the potential to inform and predict, allowing emergency planners, policymakers, and the rest of society to better prepare for the changes to come.

    Q: Describe the types of observing missions where SACOS could provide critical improvements.

    A: The demise of the Antarctic Ice Sheet, which is leading to rising sea levels around the world and threatening the displacement of millions of people, is one example. Current sea level forecasts struggle to account for giant fissures that create massive icebergs and cause the Antarctic Ice Sheet to flow more rapidly into the ocean. SACOS can track these fissures to accurately forecast ice slippage and give impacted populations enough time to prepare or evacuate. Elsewhere, widespread droughts cause rampant wildfires and water shortages. SACOS has the ability to monitor soil moisture and humidity in critically dry regions to identify where and when wildfires and droughts are imminent. SACOS also offers the most effective method to measure, track, and predict local ozone depletion over North America, which has resulted from increasingly severe summer thunderstorms.

    Quantifying and managing the risks of sea-level rise

    Prevailing estimates of sea-level rise range from approximately 20 centimeters to 2 meters by the end of the century, with the associated costs on the order of trillions of dollars. The instability of certain portions of the world’s ice sheets creates vast uncertainties, complicating how the world prepares for and responds to these potential changes. EAPS Professor Brent Minchew is leading another Climate Grand Challenges finalist team working on an integrated, multidisciplinary effort to improve the scientific understanding of sea-level rise and provide actionable information and tools to manage the risks it poses.

    Q: What have been the most significant challenges to understanding the potential rates of sea-level rise?

    A: West Antarctica is one of the most remote, inaccessible, and hostile places on Earth — to people and equipment. Thus, opportunities to observe the collapse of the West Antarctic Ice Sheet, which contains enough ice to raise global sea levels by about 3 meters, are limited and current observations crudely resolved. It is essential that we understand how the floating edge of the ice sheets, often called ice shelves, fracture and collapse because they provide critical forces that govern the rate of ice mass loss and can stabilize the West Antarctic Ice Sheet.

    Q: How will your project advance what is currently known about sea-level rise?

    A: We aim to advance global-scale projections of sea-level rise through novel observational technologies and computational models of ice sheet change and to link those predictions to region- to neighborhood-scale estimates of costs and adaptation strategies. To do this, we propose two novel instruments: a first-of-its-kind drone that can fly for months at a time over Antarctica making continuous observations of critical areas and an airdropped seismometer and GPS bundle that can be deployed to vulnerable and hard-to-reach areas of the ice sheet. This technology will provide greater data quality and density and will observe the ice sheet at frequencies that are currently inaccessible — elements that are essential for understanding the physics governing the evolution of the ice sheet and sea-level rise.

    Changing flood risk for coastal communities in the developing world

    Globally, more than 600 million people live in low-elevation coastal areas that face an increasing risk of flooding from sea-level rise. This includes two-thirds of cities with populations of more than 5 million and regions that conduct the vast majority of global trade. Dara Entekhabi, the Bacardi and Stockholm Water Foundations Professor in the Department of Civil and Environmental Engineering and professor in the Department of Earth, Atmospheric, and Planetary Sciences, outlines an interdisciplinary partnership that leverages data and technology to guide short-term and chart long-term adaptation pathways with Miho Mazereeuw, associate professor of architecture and urbanism and director of the Urban Risk Lab in the School of Architecture and Planning, and Danielle Wood, assistant professor in the Program in Media Arts and Sciences and the Department of Aeronautics and Astronautics.

    Q: What is the key problem this program seeks to address?

    A: The accumulated heating of the Earth system due to fossil fuel burning is largely absorbed by the oceans, and the stored heat expands the ocean volume, leading to an increased base height for tides. When the high tides inundate a city, the condition is referred to as “sunny day” flooding, but the saline waters corrode infrastructure and wreak havoc on daily routines. The danger ahead for many coastal cities in the developing world is the combination of increasing high tide intrusions, coupled with heavy precipitation storm events.

    Q: How will your proposed solutions impact flood risk management?

    A: We are producing detailed risk maps for coastal cities in developing countries using newly available, very high-resolution remote-sensing data from space-borne instruments, as well as historical tides records and regional storm characteristics. Using these datasets, we aim to produce street-by-street risk maps that provide local decision-makers and stakeholders with a way to estimate present and future flood risks. With the model of future tides and probabilistic precipitation events, we can forecast future inundation by a flooding event, decadal changes with various climate-change and sea-level rise projections, and an increase in the likelihood of sunny-day flooding. Working closely with local partners, we will develop toolkits to explore short-term emergency response, as well as long-term mitigation and adaptation techniques in six pilot locations in South and Southeast Asia, Africa, and South America.
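
    A stripped-down version of the underlying calculation for a single street segment might look like the sketch below: take an hourly tide record, add a sea-level-rise offset, and count how often the water level exceeds the street elevation. Everything here (the synthetic tide record, the elevations, the scenarios) is invented for illustration; the team’s actual maps are built from high-resolution satellite data, historical tide gauge records, and regional storm statistics.

    ```python
    # Illustrative "sunny day" flood-risk estimate for one street segment:
    # combine a tide record with a sea-level-rise offset and count exceedances.
    # The tide record and all numbers below are synthetic, for illustration only.
    import numpy as np

    rng = np.random.default_rng(2)
    hours_per_year = 24 * 365

    # Synthetic hourly tide record (meters above a local datum): semidiurnal
    # tide plus weather-driven noise, standing in for a historical gauge record.
    t = np.arange(hours_per_year)
    tide = 1.0 * np.sin(2 * np.pi * t / 12.42) + rng.normal(0.0, 0.15, hours_per_year)

    street_elevation_m = 1.2             # elevation of a particular street segment
    sea_level_rise_m = [0.0, 0.3, 0.6]   # present day and two future scenarios

    for slr in sea_level_rise_m:
        flooded_hours = int(np.sum(tide + slr > street_elevation_m))
        print(f"SLR {slr:.1f} m: street inundated roughly {flooded_hours} hours/year")
    ```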

    Ocean vital signs

    On average, every person on Earth generates fossil fuel emissions equivalent to an 8-pound bag of carbon, every day. Much of this is absorbed by the ocean, but there is wide variability in the estimates of oceanic absorption, which translates into differences of trillions of dollars in the required cost of mitigation. In the Department of Earth, Atmospheric and Planetary Sciences, Christopher Hill, a principal research engineer specializing in Earth and planetary computational science, works with Ryan Woosley, a principal research scientist focusing on the carbon cycle and ocean acidification. Hill explains that they hope to use artificial intelligence and machine learning to help resolve this uncertainty.

    Q: What is the current state of knowledge on air-sea interactions?

    A: Obtaining specific, accurate field measurements of critical physical, chemical, and biological exchanges between the ocean and the planet has historically entailed expensive science missions with large ship-based infrastructure that leave gaps in real-time data about significant ocean climate processes. Recent advances in highly scalable in-situ autonomous observing and navigation combined with airborne, remote sensing, and machine learning innovations have the potential to transform data gathering, provide more accurate information, and address fundamental scientific questions around air-sea interaction.

    Q: How will your approach accelerate real-time, autonomous surface ocean observing from an experimental research endeavor to a permanent and impactful solution?

    A: Our project seeks to demonstrate how a scalable surface ocean observing network can be launched and operated, and to illustrate how this can reduce uncertainties in estimates of air-sea carbon dioxide exchange. With an initial high-impact goal of substantially eliminating the vast uncertainties that plague our understanding of ocean uptake of carbon dioxide, we will gather critical measurements for improving extended weather and climate forecast models and reducing climate impact uncertainty. The results have the potential to more accurately identify trillions of dollars worth of economic activity.

  • in

    Ocean vital signs

    Without the ocean, the climate crisis would be even worse than it is. Each year, the ocean absorbs billions of tons of carbon from the atmosphere, preventing warming that greenhouse gas would otherwise cause. Scientists estimate about 25 to 30 percent of all carbon released into the atmosphere by both human and natural sources is absorbed by the ocean.

    “But there’s a lot of uncertainty in that number,” says Ryan Woosley, a marine chemist and a principal research scientist in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT. Different parts of the ocean take in different amounts of carbon depending on many factors, such as the season and the amount of mixing from storms. Current models of the carbon cycle don’t adequately capture this variation.

    To close the gap, Woosley and a team of other MIT scientists developed a research proposal for the MIT Climate Grand Challenges competition — an Institute-wide campaign to catalyze and fund innovative research addressing the climate crisis. The team’s proposal, “Ocean Vital Signs,” involves sending a fleet of sailing drones to cruise the oceans taking detailed measurements of how much carbon the ocean is really absorbing. Those data would be used to improve the precision of global carbon cycle models and improve researchers’ ability to verify emissions reductions claimed by countries.

    “If we start to enact mitigation strategies — either through removing CO2 from the atmosphere or reducing emissions — we need to know where CO2 is going in order to know how effective they are,” says Woosley. Without more precise models there’s no way to confirm whether observed carbon reductions were thanks to policy and people, or thanks to the ocean.

    “So that’s the trillion-dollar question,” says Woosley. “If countries are spending all this money to reduce emissions, is it enough to matter?”

    In February, the team’s Climate Grand Challenges proposal was named one of 27 finalists out of the almost 100 entries submitted. From among this list of finalists, MIT will announce in April the selection of five flagship projects to receive further funding and support.

    Woosley is leading the team along with Christopher Hill, a principal research engineer in EAPS. The team includes physical and chemical oceanographers, marine microbiologists, biogeochemists, and experts in computational modeling from across the department, in addition to collaborators from the Media Lab and the departments of Mathematics, Aeronautics and Astronautics, and Electrical Engineering and Computer Science.

    Today, data on the flux of carbon dioxide between the air and the oceans are collected in a piecemeal way. Research ships intermittently cruise out to gather data. Some commercial ships are also fitted with sensors. But these provide a limited view of the entire ocean, and they introduce biases. For instance, commercial ships usually avoid storms, which can increase the turnover of water exposed to the atmosphere and cause a substantial increase in the amount of carbon absorbed by the ocean.

    “It’s very difficult for us to get to it and measure that,” says Woosley. “But these drones can.”

    If funded, the team’s project would begin by deploying a few drones in a small area to test the technology. The wind-powered drones — made by a California-based company called Saildrone — would autonomously navigate through an area, collecting data on air-sea carbon dioxide flux continuously with solar-powered sensors. This would then scale up to more than 5,000 drone-days’ worth of observations, spread over five years and across all five ocean basins.

    Those data would be used to feed neural networks to create more precise maps of how much carbon is absorbed by the oceans, shrinking the uncertainties involved in the models. These models would continue to be verified and improved by new data. “The better the models are, the more we can rely on them,” says Woosley. “But we will always need measurements to verify the models.”
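
    As a rough illustration of that idea, the sketch below (assuming scikit-learn is available) trains a small neural network on synthetic point measurements and uses it to predict flux at unsampled locations. The features, data, and model architecture are placeholders, not the team's actual approach.

        # Illustrative sketch: fit a small neural network to sparse point measurements of
        # air-sea CO2 flux and use it to predict flux at unsampled locations.
        # The features and the synthetic "observations" are placeholders, not real data.
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(42)

        # Fake drone observations: [latitude, longitude, sea-surface temp (C), wind (m/s)]
        X_obs = rng.uniform([-60, -180, 0, 0], [60, 180, 30, 15], size=(500, 4))
        flux_obs = (-0.05 * X_obs[:, 2]              # warmer water takes up less CO2 (toy rule)
                    + 0.10 * X_obs[:, 3]             # wind-driven mixing enhances uptake (toy rule)
                    + rng.normal(0, 0.2, size=500))  # measurement noise

        model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
        model.fit(X_obs, flux_obs)

        # Fill in the map: predict flux at locations the drones never visited.
        X_new = rng.uniform([-60, -180, 0, 0], [60, 180, 30, 15], size=(5, 4))
        print(model.predict(X_new))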

    Improved carbon cycle models are relevant beyond climate warming as well. “CO2 is involved in so much of how the world works,” says Woosley. “We’re made of carbon, and all the other organisms and ecosystems are as well. What does the perturbation to the carbon cycle do to these ecosystems?”

    One of the best understood impacts is ocean acidification. Carbon dioxide absorbed by the ocean reacts with seawater to form carbonic acid. A more acidic ocean can have dire impacts on marine organisms like coral and oysters, whose calcium carbonate shells and skeletons can dissolve at lower pH. Since the Industrial Revolution, the ocean has become about 30 percent more acidic on average.
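
    The "30 percent" figure follows from the logarithmic pH scale: a drop of roughly 0.1 pH units, from a commonly cited pre-industrial surface value near 8.2 to about 8.1 today, corresponds to roughly a 26 to 30 percent increase in hydrogen-ion concentration. The short check below uses those approximate values, which are not taken from this article.

        # The ~30 percent figure follows from the logarithmic pH scale.
        # Surface-ocean pH values are commonly cited approximations, not from this article.
        ph_preindustrial = 8.2
        ph_today = 8.1

        h_ion_ratio = 10 ** (ph_preindustrial - ph_today)      # [H+] today vs. pre-industrial
        print(f"Hydrogen-ion increase: {h_ion_ratio - 1:.0%}")  # ~26%, i.e. roughly 30% more acidic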

    “So while it’s great for us that the oceans have been taking up the CO2, it’s not great for the oceans,” says Woosley. “Knowing how this uptake affects the health of the ocean is important as well.”

  • in

    Chemical reactions for the energy transition

    One challenge in decarbonizing the energy system is knowing how to deal with new types of fuels. Traditional fuels such as natural gas and oil can be combined with other materials and then heated to high temperatures so they chemically react to produce other useful fuels or substances, or even energy to do work. But new materials such as biofuels can’t take as much heat without breaking down.

    A key ingredient in such chemical reactions is a specially designed solid catalyst that is added to encourage the reaction to happen but isn’t itself consumed in the process. With traditional materials, the solid catalyst typically interacts with a gas; but with fuels derived from biomass, for example, the catalyst must work with a liquid — a special challenge for those who design catalysts.

    For nearly a decade, Yogesh Surendranath, an associate professor of chemistry at MIT, has been focusing on chemical reactions between solid catalysts and liquids, but in a different situation: rather than using heat to drive reactions, he and his team input electricity from a battery or a renewable source such as wind or solar to give chemically inactive molecules more energy so they react. And key to their research is designing and fabricating solid catalysts that work well for reactions involving liquids.

    Recognizing the need to use biomass to develop sustainable liquid fuels, Surendranath wondered whether he and his team could take the principles they have learned about designing catalysts to drive liquid-solid reactions with electricity and apply them to reactions that occur at liquid-solid interfaces without any input of electricity.

    To their surprise, they found that their knowledge is directly relevant. Why? “What we found — amazingly — is that even when you don’t hook up wires to your catalyst, there are tiny internal ‘wires’ that do the reaction,” says Surendranath. “So, reactions that people generally think operate without any flow of current actually do involve electrons shuttling from one place to another.” And that means that Surendranath and his team can bring the powerful techniques of electrochemistry to bear on the problem of designing catalysts for sustainable fuels.

    A novel hypothesis

    Their work has focused on a class of chemical reactions important in the energy transition that involve adding oxygen to small organic (carbon-containing) molecules such as ethanol, methanol, and formic acid. The conventional assumption is that the reactant and oxygen chemically react to form the product plus water. And a solid catalyst — often a combination of metals — is present to provide sites on which the reactant and oxygen can interact.

    But Surendranath proposed a different view of what’s going on. In the usual setup, two catalysts, each one composed of many nanoparticles, are mounted on a conductive carbon substrate and submerged in water. In that arrangement, negatively charged electrons can flow easily through the carbon, while positively charged protons can flow easily through water.

    Surendranath’s hypothesis was that the conversion of reactant to product progresses by means of two separate “half-reactions” on the two catalysts. On one catalyst, the reactant turns into a product, in the process sending electrons into the carbon substrate and protons into the water. Those electrons and protons are picked up by the other catalyst, where they drive the oxygen-to-water conversion. So, instead of a single reaction, two separate but coordinated half-reactions together achieve the net conversion of reactant to product.

    As a result, the overall reaction doesn’t actually involve any net electron production or consumption. It is a standard “thermal” reaction resulting from the energy in the molecules and maybe some added heat. The conventional approach to designing a catalyst for such a reaction would focus on increasing the rate of that reactant-to-product conversion. And the best catalyst for that kind of reaction could turn out to be, say, gold or palladium or some other expensive precious metal.

    However, if that reaction actually involves two half-reactions, as Surendranath proposed, there is a flow of electrical charge (the electrons and protons) between them. So Surendranath and others in the field could instead use techniques of electrochemistry to design not a single catalyst for the overall reaction but rather two separate catalysts — one to speed up one half-reaction and one to speed up the other half-reaction. “That means we don’t have to design one catalyst to do all the heavy lifting of speeding up the entire reaction,” says Surendranath. “We might be able to pair up two low-cost, earth-abundant catalysts, each of which does half of the reaction well, and together they carry out the overall transformation quickly and efficiently.”

    But there’s one more consideration: Electrons can flow through the entire catalyst composite, which encompasses the catalyst particle(s) and the carbon substrate. For the chemical conversion to happen as quickly as possible, the rate at which electrons are put into the catalyst composite must exactly match the rate at which they are taken out. Focusing on just the electrons, if the reactant-to-product conversion on the first catalyst sends the same number of electrons per second into the “bath of electrons” in the catalyst composite as the oxygen-to-water conversion on the second catalyst takes out, the two half-reactions will be balanced, and the electron flow — and the rate of the combined reaction — will be fast. The trick is to find good catalysts for each of the half-reactions that are perfectly matched in terms of electrons in and electrons out.

    “A good catalyst or pair of catalysts can maintain an electrical potential — essentially a voltage — at which both half-reactions are fast and are balanced,” says Jaeyune Ryu PhD ’21, a former member of the Surendranath lab and lead author of the study; Ryu is now a postdoc at Harvard University. “The rates of the reactions are equal, and the voltage in the catalyst composite won’t change during the overall thermal reaction.”
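
    One way to picture that balancing act is with textbook Tafel-type kinetics: the electron-producing half-reaction speeds up as the potential rises, the electron-consuming one speeds up as the potential falls, and with no external wires the composite settles at the "mixed" potential where the two currents cancel. The sketch below is a generic illustration of that idea with arbitrary rate parameters; it is not the team's model or data.

        # Toy "mixed potential" picture of two coupled half-reactions on one catalyst composite.
        # Exchange currents, Tafel slopes, and equilibrium potentials are arbitrary placeholders.
        from scipy.optimize import brentq

        def i_oxidation(E, i0=1e-6, E_eq=0.10, slope=0.12):
            """Electron-producing half-reaction current (A/cm^2); grows as potential rises."""
            return i0 * 10 ** ((E - E_eq) / slope)

        def i_reduction(E, i0=1e-7, E_eq=0.90, slope=0.12):
            """Electron-consuming half-reaction current (A/cm^2); grows as potential falls."""
            return i0 * 10 ** ((E_eq - E) / slope)

        # With no external wires, the composite settles where electrons in = electrons out.
        E_mixed = brentq(lambda E: i_oxidation(E) - i_reduction(E), 0.0, 1.0)
        rate = i_oxidation(E_mixed)   # the net chemical conversion rate, expressed as a current

        print(f"Mixed potential = {E_mixed:.2f} V; reaction rate = {rate:.1e} A/cm^2")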

    Drawing on electrochemistry

    Based on their new understanding, Surendranath, Ryu, and their colleagues turned to electrochemistry techniques to identify a good catalyst for each half-reaction that would also pair up to work well together. Their analytical framework for guiding catalyst development for systems that combine two half-reactions is based on a theory that has been used to understand corrosion for almost 100 years, but has rarely been applied to understand or design catalysts for reactions involving small molecules important for the energy transition.

    Key to their work is a potentiostat, a type of voltmeter that can either passively measure the voltage of a system or actively change the voltage to cause a reaction to occur. In their experiments, Surendranath and his team use the potentiostat to measure the voltage of the catalyst in real time, monitoring how it changes millisecond to millisecond. They then correlate those voltage measurements with simultaneous but separate measurements of the overall rate of catalysis to understand the reaction pathway.

    For their study of the conversion of small, energy-related molecules, they first tested a series of catalysts to find good ones for each half-reaction — one to convert the reactant to product, producing electrons and protons, and another to convert the oxygen to water, consuming electrons and protons. In each case, a promising candidate would yield a rapid reaction — that is, a fast flow of electrons and protons out or in.

    To help identify an effective catalyst for performing the first half-reaction, the researchers used their potentiostat to input carefully controlled voltages and measured the resulting current that flowed through the catalyst. A good catalyst will generate lots of current for little applied voltage; a poor catalyst will require high applied voltage to get the same amount of current. The team then followed the same procedure to identify a good catalyst for the second half-reaction.

    To expedite the overall reaction, the researchers needed to find two catalysts that matched well — where the amount of current at a given applied voltage was high for each of them, ensuring that as one produced a rapid flow of electrons and protons, the other one consumed them at the same rate.
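
    In code, that matching step might look like the sketch below: for each candidate pair, take the two measured current-voltage curves, locate the potential where the electron-producing and electron-consuming currents are closest to equal, and rank pairs by the current at that crossing. The curves here are invented stand-ins for potentiostat data.

        # Sketch of screening catalyst pairs from potentiostat-style current-voltage data.
        # The "measured" curves below are invented placeholders, not real measurements.
        import numpy as np

        E = np.linspace(0.0, 1.0, 201)   # applied potential (V)

        def operating_point(i_produce, i_consume):
            """Potential and current where the two half-reaction currents are (nearly) equal."""
            idx = np.argmin(np.abs(np.log10(i_produce) - np.log10(i_consume)))
            return E[idx], i_produce[idx]

        # Two hypothetical catalyst pairs, each described by a Tafel-like electron-producing
        # curve and an electron-consuming curve.
        pair_a = (1e-6 * 10 ** ((E - 0.10) / 0.12), 1e-7 * 10 ** ((0.90 - E) / 0.12))
        pair_b = (1e-7 * 10 ** ((E - 0.05) / 0.18), 1e-6 * 10 ** ((0.85 - E) / 0.10))

        for name, (i_p, i_c) in {"pair A": pair_a, "pair B": pair_b}.items():
            E_op, rate = operating_point(i_p, i_c)
            print(f"{name}: settles near {E_op:.2f} V with rate = {rate:.1e} A/cm^2")

    The better-matched pair is the one whose crossing sits at the higher current, since that crossing sets the rate of the combined reaction.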

    To test promising pairs, the researchers used the potentiostat to measure the voltage of the catalyst composite during net catalysis — not changing the voltage as before, but now just measuring it from tiny samples. In each test, the voltage will naturally settle at a certain level, and the goal is for that to happen when the rate of both reactions is high.

    Validating their hypothesis and looking ahead

    By testing the two half-reactions, the researchers could measure how the reaction rate for each one varied with changes in the applied voltage. From those measurements, they could predict the voltage at which the full reaction would proceed fastest. Measurements of the full reaction matched their predictions, supporting their hypothesis.

    The team’s novel approach of using electrochemistry techniques to examine reactions thought to be strictly thermal in nature provides new insights into the detailed steps by which those reactions occur and therefore into how to design catalysts to speed them up. “We can now use a divide-and-conquer strategy,” says Ryu. “We know that the net thermal reaction in our study happens through two ‘hidden’ but coupled half-reactions, so we can aim to optimize one half-reaction at a time” — possibly using low-cost catalyst materials for one or both.

    Adds Surendranath, “One of the things that we’re excited about in this study is that the result is not final in and of itself. It has really seeded a brand-new thrust area in our research program, including new ways to design catalysts for the production and transformation of renewable fuels and chemicals.”

    This research was supported primarily by the Air Force Office of Scientific Research. Jaeyune Ryu PhD ’21 was supported by a Samsung Scholarship. Additional support was provided by a National Science Foundation Graduate Research Fellowship.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.