More stories

  • Chemical reactions for the energy transition

    One challenge in decarbonizing the energy system is knowing how to deal with new types of fuels. Traditional fuels such as natural gas and oil can be combined with other materials and then heated to high temperatures so they chemically react to produce other useful fuels or substances, or even energy to do work. But new materials such as biofuels can’t take as much heat without breaking down.

    A key ingredient in such chemical reactions is a specially designed solid catalyst that is added to encourage the reaction to happen but isn’t itself consumed in the process. With traditional materials, the solid catalyst typically interacts with a gas; but with fuels derived from biomass, for example, the catalyst must work with a liquid — a special challenge for those who design catalysts.

    For nearly a decade, Yogesh Surendranath, an associate professor of chemistry at MIT, has been focusing on chemical reactions between solid catalysts and liquids, but in a different situation: rather than using heat to drive reactions, he and his team input electricity from a battery or a renewable source such as wind or solar to give chemically inactive molecules more energy so they react. And key to their research is designing and fabricating solid catalysts that work well for reactions involving liquids.

    Recognizing the need to use biomass to develop sustainable liquid fuels, Surendranath wondered whether he and his team could take the principles they have learned about designing catalysts to drive liquid-solid reactions with electricity and apply them to reactions that occur at liquid-solid interfaces without any input of electricity.

    To their surprise, they found that their knowledge is directly relevant. Why? “What we found — amazingly — is that even when you don’t hook up wires to your catalyst, there are tiny internal ‘wires’ that do the reaction,” says Surendranath. “So, reactions that people generally think operate without any flow of current actually do involve electrons shuttling from one place to another.” And that means that Surendranath and his team can bring the powerful techniques of electrochemistry to bear on the problem of designing catalysts for sustainable fuels.

    A novel hypothesis

    Their work has focused on a class of chemical reactions important in the energy transition that involve adding oxygen to small organic (carbon-containing) molecules such as ethanol, methanol, and formic acid. The conventional assumption is that the reactant and oxygen chemically react to form the product plus water. And a solid catalyst — often a combination of metals — is present to provide sites on which the reactant and oxygen can interact.

    But Surendranath proposed a different view of what’s going on. In the usual setup, two catalysts, each one composed of many nanoparticles, are mounted on a conductive carbon substrate and submerged in water. In that arrangement, negatively charged electrons can flow easily through the carbon, while positively charged protons can flow easily through water.

    Surendranath’s hypothesis was that the conversion of reactant to product progresses by means of two separate “half-reactions” on the two catalysts. On one catalyst, the reactant turns into a product, in the process sending electrons into the carbon substrate and protons into the water. Those electrons and protons are picked up by the other catalyst, where they drive the oxygen-to-water conversion. So, instead of a single reaction, two separate but coordinated half-reactions together achieve the net conversion of reactant to product.

    As a result, the overall reaction doesn’t actually involve any net electron production or consumption. It is a standard “thermal” reaction resulting from the energy in the molecules and maybe some added heat. The conventional approach to designing a catalyst for such a reaction would focus on increasing the rate of that reactant-to-product conversion. And the best catalyst for that kind of reaction could turn out to be, say, gold or palladium or some other expensive precious metal.

    However, if that reaction actually involves two half-reactions, as Surendranath proposed, there is a flow of electrical charge (the electrons and protons) between them. So Surendranath and others in the field could instead use techniques of electrochemistry to design not a single catalyst for the overall reaction but rather two separate catalysts — one to speed up one half-reaction and one to speed up the other half-reaction. “That means we don’t have to design one catalyst to do all the heavy lifting of speeding up the entire reaction,” says Surendranath. “We might be able to pair up two low-cost, earth-abundant catalysts, each of which does half of the reaction well, and together they carry out the overall transformation quickly and efficiently.”

    But there’s one more consideration: Electrons can flow through the entire catalyst composite, which encompasses the catalyst particle(s) and the carbon substrate. For the chemical conversion to happen as quickly as possible, the rate at which electrons are put into the catalyst composite must exactly match the rate at which they are taken out. Focusing on just the electrons, if the reactant-to-product conversion on the first catalyst sends the same number of electrons per second into the “bath of electrons” in the catalyst composite as the oxygen-to-water conversion on the second catalyst takes out, the two half-reactions will be balanced, and the electron flow — and the rate of the combined reaction — will be fast. The trick is to find good catalysts for each of the half-reactions that are perfectly matched in terms of electrons in and electrons out.

    “A good catalyst or pair of catalysts can maintain an electrical potential — essentially a voltage — at which both half-reactions are fast and are balanced,” says Jaeyune Ryu PhD ’21, a former member of the Surendranath lab and lead author of the study; Ryu is now a postdoc at Harvard University. “The rates of the reactions are equal, and the voltage in the catalyst composite won’t change during the overall thermal reaction.”
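    The balance Ryu describes mirrors the mixed-potential picture long used in corrosion science, and it can be sketched numerically. In the toy model below, each half-reaction follows simple Tafel-type kinetics — every parameter (exchange current, Tafel slope, onset potential) is invented for illustration, not a value from the study — and a bisection search finds the potential at which electron production matches electron consumption:

```python
import math

# Toy Tafel-type kinetics; all numbers here are illustrative, not measured.
def i_anodic(E, i0=1e-6, b=0.06, E0=0.05):
    # Reactant -> product half-reaction: releases electrons, speeds up as E rises
    return i0 * math.exp((E - E0) / b)

def i_cathodic(E, i0=1e-7, b=0.06, E0=1.0):
    # Oxygen -> water half-reaction: consumes electrons, speeds up as E falls
    return i0 * math.exp((E0 - E) / b)

def mixed_potential(lo=0.0, hi=1.0, tol=1e-9):
    # Bisection on the balance condition i_anodic(E) == i_cathodic(E)
    f = lambda E: i_anodic(E) - i_cathodic(E)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    return 0.5 * (lo + hi)

E_mix = mixed_potential()
rate = i_anodic(E_mix)  # at balance, this equals the rate of the overall reaction
print(f"mixed potential ~ {E_mix:.3f} V, balanced current ~ {rate:.1e} A")
```

    At that settled potential the two electron flows cancel, so no net current leaves the composite — the “hidden” internal current the team describes. A better catalyst pair moves the crossing point to a higher balanced current.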

    Drawing on electrochemistry

    Based on their new understanding, Surendranath, Ryu, and their colleagues turned to electrochemistry techniques to identify a good catalyst for each half-reaction that would also pair up to work well together. Their analytical framework for guiding catalyst development for systems that combine two half-reactions is based on a theory that has been used to understand corrosion for almost 100 years, but has rarely been applied to understand or design catalysts for reactions involving small molecules important for the energy transition.

    Key to their work is a potentiostat, a type of voltmeter that can either passively measure the voltage of a system or actively change the voltage to cause a reaction to occur. In their experiments, Surendranath and his team use the potentiostat to measure the voltage of the catalyst in real time, monitoring how it changes millisecond to millisecond. They then correlate those voltage measurements with simultaneous but separate measurements of the overall rate of catalysis to understand the reaction pathway.

    For their study of the conversion of small, energy-related molecules, they first tested a series of catalysts to find good ones for each half-reaction — one to convert the reactant to product, producing electrons and protons, and another to convert the oxygen to water, consuming electrons and protons. In each case, a promising candidate would yield a rapid reaction — that is, a fast flow of electrons and protons out or in.

    To help identify an effective catalyst for performing the first half-reaction, the researchers used their potentiostat to input carefully controlled voltages and measured the resulting current that flowed through the catalyst. A good catalyst will generate lots of current for little applied voltage; a poor catalyst will require high applied voltage to get the same amount of current. The team then followed the same procedure to identify a good catalyst for the second half-reaction.

    To expedite the overall reaction, the researchers needed to find two catalysts that matched well — where the amount of current at a given applied voltage was high for each of them, ensuring that as one produced a rapid flow of electrons and protons, the other one consumed them at the same rate.
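    That matching step can be mimicked in a few lines: for each candidate pairing, find the potential where one catalyst’s electron output equals the other’s intake, then rank pairs by the current flowing at that point. The catalyst names and Tafel parameters below are made up purely to show the screening logic:

```python
import math

# Hypothetical catalysts with invented Tafel parameters (i0: exchange current,
# b: Tafel slope, E0: onset potential). None of these are real measurements.
oxidizers = {  # reactant -> product (electron source)
    "ox-A": dict(i0=1e-6, b=0.06, E0=0.05),
    "ox-B": dict(i0=1e-8, b=0.04, E0=0.10),
}
reducers = {   # oxygen -> water (electron sink)
    "red-X": dict(i0=1e-7, b=0.06, E0=1.00),
    "red-Y": dict(i0=1e-9, b=0.05, E0=0.90),
}

def i_out(E, i0, b, E0):   # electrons pushed into the composite
    return i0 * math.exp((E - E0) / b)

def i_in(E, i0, b, E0):    # electrons pulled out of the composite
    return i0 * math.exp((E0 - E) / b)

def balanced_current(ox, red, lo=0.0, hi=1.5, tol=1e-9):
    f = lambda E: i_out(E, **ox) - i_in(E, **red)
    while hi - lo > tol:              # bisection for the matching potential
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(lo) * f(mid) <= 0 else (mid, hi)
    E = 0.5 * (lo + hi)
    return E, i_out(E, **ox)

best = max(((o, r, *balanced_current(ox, red))
            for o, ox in oxidizers.items()
            for r, red in reducers.items()),
           key=lambda t: t[3])
print(f"best pair: {best[0]} + {best[1]} at ~{best[2]:.2f} V, ~{best[3]:.1e} A")
```

    The winning pair is simply the one whose current-voltage curves cross at the highest current — the code analog of catalysts “perfectly matched in terms of electrons in and electrons out.”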

    To test promising pairs, the researchers used the potentiostat to measure the voltage of the catalyst composite during net catalysis — this time not setting the voltage, but simply measuring it on tiny samples as the reaction ran. In each test, the voltage naturally settles at a certain level, and the goal is for that to happen when the rates of both half-reactions are high.

    Validating their hypothesis and looking ahead

    By testing the two half-reactions, the researchers could measure how the reaction rate for each one varied with changes in the applied voltage. From those measurements, they could predict the voltage at which the full reaction would proceed fastest. Measurements of the full reaction matched their predictions, supporting their hypothesis.

    The team’s novel approach of using electrochemistry techniques to examine reactions thought to be strictly thermal in nature provides new insights into the detailed steps by which those reactions occur and therefore into how to design catalysts to speed them up. “We can now use a divide-and-conquer strategy,” says Ryu. “We know that the net thermal reaction in our study happens through two ‘hidden’ but coupled half-reactions, so we can aim to optimize one half-reaction at a time” — possibly using low-cost catalyst materials for one or both.

    Adds Surendranath, “One of the things that we’re excited about in this study is that the result is not final in and of itself. It has really seeded a brand-new thrust area in our research program, including new ways to design catalysts for the production and transformation of renewable fuels and chemicals.”

    This research was supported primarily by the Air Force Office of Scientific Research. Jaeyune Ryu PhD ’21 was supported by a Samsung Scholarship. Additional support was provided by a National Science Foundation Graduate Research Fellowship.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Setting carbon management in stone

    Keeping global temperatures within limits deemed safe by the Intergovernmental Panel on Climate Change means doing more than slashing carbon emissions. It means reversing them.

    “If we want to be anywhere near those limits [of 1.5 or 2 C], then we have to be carbon neutral by 2050, and then carbon negative after that,” says Matěj Peč, a geoscientist and the Victor P. Starr Career Development Assistant Professor in the Department of Earth, Atmospheric, and Planetary Sciences (EAPS).

    Going negative will require finding ways to radically increase the world’s capacity to capture carbon from the atmosphere and put it somewhere where it will not leak back out. Carbon capture and storage projects already suck in tens of millions of metric tons of carbon each year. But putting a dent in emissions will mean capturing many billions of metric tons more. Today, people emit around 40 billion tons of carbon each year globally, mainly by burning fossil fuels.
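    The gap between those two figures is worth making explicit. A quick sketch, reading “tens of millions of metric tons” as roughly 0.04 gigatons per year (an assumed round number, not a figure from the article):

```python
# Back-of-the-envelope comparison using the round numbers quoted above.
annual_emissions_gt = 40.0   # ~40 billion metric tons emitted per year
current_ccs_gt = 0.04        # "tens of millions of tons" captured, assumed ~0.04 Gt/yr

fraction_covered = current_ccs_gt / annual_emissions_gt
scaleup = annual_emissions_gt / current_ccs_gt
print(f"CCS today offsets ~{fraction_covered:.1%} of annual emissions")
print(f"matching emissions would take a ~{scaleup:,.0f}x scale-up")
```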

    Because of the need for new ideas when it comes to carbon storage, Peč has created a proposal for the MIT Climate Grand Challenges competition — a bold and sweeping effort by the Institute to support paradigm-shifting research and innovation to address the climate crisis. Called the Advanced Carbon Mineralization Initiative, his team’s proposal aims to bring geologists, chemists, and biologists together to make permanently storing carbon underground workable under different geological conditions. That means finding ways to speed up the process by which carbon pumped underground is turned into rock, or mineralized.

    “That’s what the geology has to offer,” says Peč, who is a lead on the project, along with Ed Boyden, professor of biological engineering, brain and cognitive sciences, and media arts and sciences, and Yogesh Surendranath, professor of chemistry. “You look for the places where you can safely and permanently store these huge volumes of CO2.”

    Peč’s proposal is one of 27 finalists selected from a pool of almost 100 Climate Grand Challenge proposals submitted by collaborators from across the Institute. Each finalist team received $100,000 to further develop their research proposals. A subset of finalists will be announced in April, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

    Building industries capable of going carbon negative presents huge technological, economic, environmental, and political challenges. For one, it’s expensive and energy-intensive to capture carbon from the air with existing technologies, which are “hellishly complicated,” says Peč. Much of the carbon capture underway today focuses on more concentrated sources like coal- or gas-burning power plants.

    It’s also difficult to find geologically suitable sites for storage. To keep it in the ground after it has been captured, carbon must either be trapped in airtight reservoirs or turned to stone.

    One of the best places for carbon capture and storage (CCS) is Iceland, where a number of CCS projects are up and running. The island’s volcanic geology helps speed up the mineralization process, as carbon pumped underground interacts with basalt rock at high temperatures. In that ideal setting, says Peč, 95 percent of carbon injected underground is mineralized after just two years — a geological flash.

    But Iceland’s geology is unusual. Elsewhere, reaching suitable rocks at suitable temperatures requires deeper drilling, which adds cost to already expensive projects. Further, says Peč, there’s not a complete understanding of how different factors influence the speed of mineralization.

    Peč’s Climate Grand Challenge proposal would study how carbon mineralizes under different conditions, as well as explore ways to make mineralization happen more rapidly by mixing the carbon dioxide with different fluids before injecting it underground. Another idea — and the reason why there are biologists on the team — is to learn from various organisms adept at turning carbon into calcite shells, the same stuff that makes up limestone.

    Two other carbon management proposals, led by EAPS Cecil and Ida Green Professor Bradford Hager, were also selected as Climate Grand Challenge finalists. They focus on both the technologies necessary for capturing and storing gigatons of carbon as well as the logistical challenges involved in such an enormous undertaking.

    That involves everything from choosing suitable sites for storage, to regulatory and environmental issues, as well as how to bring disparate technologies together to improve the whole pipeline. The proposals emphasize CCS systems that can be powered by renewable sources, and can respond dynamically to the needs of different hard-to-decarbonize industries, like concrete and steel production.

    “We need to have an industry that is on the scale of the current oil industry that will not be doing anything but pumping CO2 into storage reservoirs,” says Peč.

    For a problem that involves capturing enormous amounts of gases from the atmosphere and storing it underground, it’s no surprise EAPS researchers are so involved. The Earth sciences have “everything” to offer, says Peč, including the good news that the Earth has more than enough places where carbon might be stored.

    “Basically, the Earth is really, really large,” says Peč. “The reasonably accessible places, which are close to the continents, store somewhere on the order of tens of thousands to hundreds of thousands of gigatons of carbon. That’s orders of magnitude more than we need to put back in.”

  • Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

    This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

    Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

    In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

    Directed evolution of biological carbon fixation

    Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, requiring a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob the livelihoods of hundreds of millions of subsistence farmers, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan Professor of Neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research. Their group aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process, using the most sophisticated synthetic biology available to science.

    Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

    A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

    Q: What partners will you need to accelerate the development of your solutions?

    A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

    Strategies to reduce atmospheric methane

    One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

    Q: What is the problem you are trying to solve and why is it a “grand challenge”?

    A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

    Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

    A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

    Deploying versatile carbon capture technologies and storage at scale

    There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

    Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

    A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

    New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector towards hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as human impacts of storage, including how public engagement and education can alter attitudes toward greater acceptance of carbon dioxide geologic storage.

    Q: What are the expected impacts of your proposed solution, both positive and negative?

    A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows for developing nations, which may need to adopt renewables over longer timescales, to see equitable economic development while avoiding the most harmful climate impacts. And, CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

    The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Despite CCS providing a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help.

  • Microbes and minerals may have set off Earth’s oxygenation

    For the first 2 billion years of Earth’s history, there was barely any oxygen in the air. While some microbes were photosynthesizing by the latter part of this period, oxygen had not yet accumulated at levels that would impact the global biosphere.

    But somewhere around 2.3 billion years ago, this stable, low-oxygen equilibrium shifted, and oxygen began building up in the atmosphere, eventually reaching the life-sustaining levels we breathe today. This rapid infusion is known as the Great Oxygenation Event, or GOE. What triggered the event and pulled the planet out of its low-oxygen funk is one of the great mysteries of science.

    A new hypothesis, proposed by MIT scientists, suggests that oxygen finally started accumulating in the atmosphere thanks to interactions between certain marine microbes and minerals in ocean sediments. These interactions helped prevent oxygen from being consumed, setting off a self-amplifying process where more and more oxygen was made available to accumulate in the atmosphere.

    The scientists have laid out their hypothesis using mathematical and evolutionary analyses, showing that there were indeed microbes that existed before the GOE and evolved the ability to interact with sediment in the way that the researchers have proposed.

    Their study, appearing today in Nature Communications, is the first to connect the co-evolution of microbes and minerals to Earth’s oxygenation.

    “Probably the most important biogeochemical change in the history of the planet was oxygenation of the atmosphere,” says study author Daniel Rothman, professor of geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS). “We show how the interactions of microbes, minerals, and the geochemical environment acted in concert to increase oxygen in the atmosphere.”

    The study’s co-authors include lead author Haitao Shang, a former MIT graduate student, and Gregory Fournier, associate professor of geobiology in EAPS.

    A step up

    Today’s oxygen levels in the atmosphere are a stable balance between processes that produce oxygen and those that consume it. Prior to the GOE, the atmosphere maintained a different kind of equilibrium, with producers and consumers of oxygen in balance, but in a way that didn’t leave much extra oxygen for the atmosphere.

    What could have pushed the planet out of one stable, oxygen-deficient state to another stable, oxygen-rich state?

    “If you look at Earth’s history, it appears there were two jumps, where you went from a steady state of low oxygen to a steady state of much higher oxygen, once in the Paleoproterozoic, once in the Neoproterozoic,” Fournier notes. “These jumps couldn’t have been because of a gradual increase in excess oxygen. There had to have been some feedback loop that caused this step-change in stability.”

    He and his colleagues wondered whether such a positive feedback loop could have come from a process in the ocean that made some organic carbon unavailable to its consumers. Organic carbon is mainly consumed through oxidation, usually accompanied by the consumption of oxygen — a process by which microbes in the ocean use oxygen to break down organic matter, such as detritus that has settled in sediment. The team wondered: Could there have been some process by which the presence of oxygen stimulated its further accumulation?

    Shang and Rothman worked out a mathematical model that made the following prediction: If microbes possessed the ability to only partially oxidize organic matter, the partially-oxidized matter, or “POOM,” would effectively become “sticky,” and chemically bind to minerals in sediment in a way that would protect the material from further oxidation. The oxygen that would otherwise have been consumed to fully degrade the material would instead be free to build up in the atmosphere. This process, they found, could serve as a positive feedback, providing a natural pump to push the atmosphere into a new, high-oxygen equilibrium.
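    The flavor of such a feedback — two stable oxygen levels separated by a runaway threshold — can be illustrated with a one-variable toy model. This is not the paper’s model, and every parameter is invented: oxygen has a constant source, and its sink weakens at high oxygen levels as a stand-in for POOM-mineral protection.

```python
# Toy bistable model (illustrative only). O is a dimensionless oxygen level.
def dO_dt(O, source=1.0, c=10.0, K=1.0, c2=0.2, n=4):
    protected_sink = c * O / (1 + (O / K) ** n)  # consumption fades at high O
    background_sink = c2 * O                     # weak sink that always remains
    return source - protected_sink - background_sink

def settle(O0, dt=1e-3, steps=200_000):
    # Simple forward-Euler integration until the system reaches a steady state
    O = O0
    for _ in range(steps):
        O += dt * dO_dt(O)
    return O

low = settle(0.05)   # starting low, oxygen stays trapped in the low state
high = settle(3.00)  # a push past the threshold runs away to the high state
print(f"low-oxygen steady state ~ {low:.2f}, high-oxygen steady state ~ {high:.2f}")
```

    Both runs use the same equations; only the starting point differs, which is the signature of a bistable system with the kind of step change in stability the researchers invoke.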

    “That led us to ask, is there a microbial metabolism out there that produced POOM?” Fournier says.

    In the genes

    To answer this, the team searched through the scientific literature and identified a group of microbes that partially oxidizes organic matter in the deep ocean today. These microbes belong to the bacterial group SAR202, and their partial oxidation is carried out through an enzyme, Baeyer-Villiger monooxygenase, or BVMO.

    The team carried out a phylogenetic analysis to see how far back the microbe, and the gene for the enzyme, could be traced. They found that the bacteria did indeed have ancestors dating back before the GOE, and that the gene for the enzyme could be traced across various microbial species, as far back as pre-GOE times.

    What’s more, they found that the gene’s diversification, or the number of species that acquired the gene, increased significantly during times when the atmosphere experienced spikes in oxygenation, including once during the GOE in the Paleoproterozoic, and again in the Neoproterozoic.

    “We found some temporal correlations between diversification of POOM-producing genes, and the oxygen levels in the atmosphere,” Shang says. “That supports our overall theory.”

    To confirm this hypothesis will require far more follow-up, from experiments in the lab to surveys in the field, and everything in between. With their new study, the team has introduced a new suspect in the age-old case of what oxygenated Earth’s atmosphere.

    “Proposing a novel method, and showing evidence for its plausibility, is the first but important step,” Fournier says. “We’ve identified this as a theory worthy of study.”

    This work was supported in part by the mTerra Catalyst Fund and the National Science Foundation.


    Study reveals chemical link between wildfire smoke and ozone depletion

    The Australian wildfires in 2019 and 2020 were historic for how far and fast they spread, and for how long and powerfully they burned. All told, the devastating “Black Summer” fires blazed across more than 43 million acres of land, and killed or displaced nearly 3 billion animals. The fires also injected over 1 million tons of smoke particles into the atmosphere, reaching up to 35 kilometers above Earth’s surface — a mass and reach comparable to that of an erupting volcano.

    Now, atmospheric chemists at MIT have found that the smoke from those fires set off chemical reactions in the stratosphere that contributed to the destruction of ozone, which shields the Earth from incoming ultraviolet radiation. The team’s study, appearing this week in the Proceedings of the National Academy of Sciences, is the first to establish a chemical link between wildfire smoke and ozone depletion.

    In March 2020, shortly after the fires subsided, the team observed a sharp drop in nitrogen dioxide in the stratosphere, which is the first step in a chemical cascade that is known to end in ozone depletion. The researchers found that this drop in nitrogen dioxide directly correlates with the amount of smoke that the fires released into the stratosphere. They estimate that this smoke-induced chemistry depleted the column of ozone by 1 percent.

    To put this in context, they note that the phaseout of ozone-depleting gases under a worldwide agreement to stop their production has led to about a 1 percent ozone recovery from earlier ozone decreases over the past 10 years — meaning that the wildfires canceled those hard-won diplomatic gains for a short period. If future wildfires grow stronger and more frequent, as they are predicted to do with climate change, ozone’s projected recovery could be delayed by years. 

    “The Australian fires look like the biggest event so far, but as the world continues to warm, there is every reason to think these fires will become more frequent and more intense,” says lead author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT. “It’s another wakeup call, just as the Antarctic ozone hole was, in the sense of showing how bad things could actually be.”

    The study’s co-authors include Kane Stone, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, along with collaborators at multiple institutions including the University of Saskatchewan, Jinan University, the National Center for Atmospheric Research, and the University of Colorado at Boulder.

    Chemical trace

    Massive wildfires are known to generate pyrocumulonimbus — towering clouds of smoke that can reach into the stratosphere, the layer of the atmosphere that lies between about 15 and 50 kilometers above the Earth’s surface. The smoke from Australia’s wildfires reached well into the stratosphere, as high as 35 kilometers.

    In 2021, Solomon’s co-author, Pengfei Yu at Jinan University, carried out a separate study of the fires’ impacts and found that the accumulated smoke warmed parts of the stratosphere by as much as 2 degrees Celsius — a warming that persisted for six months. The study also found hints of ozone destruction in the Southern Hemisphere following the fires.

    Solomon wondered whether smoke from the fires could have depleted ozone through a chemistry similar to that of volcanic aerosols. Major volcanic eruptions can also reach into the stratosphere, and in 1989, Solomon discovered that the particles in these eruptions can destroy ozone through a series of chemical reactions. As the particles form in the atmosphere, they gather moisture on their surfaces. Once wet, the particles can react with circulating chemicals in the stratosphere, including dinitrogen pentoxide, which reacts with the particles to form nitric acid.

    Normally, sunlight breaks dinitrogen pentoxide apart into various nitrogen species, including nitrogen dioxide, a compound that binds with chlorine-containing chemicals in the stratosphere. When volcanic aerosols instead convert dinitrogen pentoxide into nitric acid, nitrogen dioxide drops, and the chlorine compounds take another path, morphing into chlorine monoxide, the main human-made agent that destroys ozone.

    “This chemistry, once you get past that point, is well-established,” Solomon says. “Once you have less nitrogen dioxide, you have to have more chlorine monoxide, and that will deplete ozone.”

    Cloud injection

    In the new study, Solomon and her colleagues looked at how concentrations of nitrogen dioxide in the stratosphere changed following the Australian fires. If these concentrations dropped significantly, it would signal that wildfire smoke depletes ozone through the same chemical reactions as some volcanic eruptions.

    The team looked to observations of nitrogen dioxide taken by three independent satellites that have surveyed the Southern Hemisphere for varying lengths of time. They compared each satellite’s record in the months and years leading up to and following the Australian fires. All three records showed a significant drop in nitrogen dioxide in March 2020. For one satellite’s record, the drop represented a record low among observations spanning the last 20 years.
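The record-low check described here amounts to asking whether one month's value sits below every earlier value in a satellite's record. A minimal sketch of that comparison, using made-up NO2 anomaly numbers rather than the actual satellite data:

```python
def is_record_low(series, index):
    """True if series[index] is strictly below every earlier value."""
    return all(series[index] < v for v in series[:index])

# Hypothetical stratospheric NO2 values (arbitrary units), one per epoch;
# the last entry stands in for March 2020, after the fires.
no2_record = [9.8, 10.1, 9.6, 9.9, 10.0, 9.5, 9.7, 8.1]
march_2020 = len(no2_record) - 1
```

Running the same check independently on each satellite's record, as the team did across three instruments, guards against any single sensor's drift or noise producing a spurious signal.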

    To check that the nitrogen dioxide decrease was a direct chemical effect of the fires’ smoke, the researchers carried out atmospheric simulations using a global, three-dimensional model that simulates hundreds of chemical reactions in the atmosphere, from the surface on up through the stratosphere.

    The team injected a cloud of smoke particles into the model, simulating what was observed from the Australian wildfires. They assumed that the particles, like volcanic aerosols, gathered moisture. They then ran the model multiple times and compared the results to simulations without the smoke cloud.

    In every simulation incorporating wildfire smoke, the team found that as the amount of smoke particles increased in the stratosphere, concentrations of nitrogen dioxide decreased, matching the observations of the three satellites.

    “The behavior we saw, of more and more aerosols, and less and less nitrogen dioxide, in both the model and the data, is a fantastic fingerprint,” Solomon says. “It’s the first time that science has established a chemical mechanism linking wildfire smoke to ozone depletion. It may only be one chemical mechanism among several, but it’s clearly there. It tells us these particles are wet and they had to have caused some ozone depletion.”

    She and her collaborators are looking into other reactions triggered by wildfire smoke that might further contribute to stripping ozone. For the time being, the major driver of ozone depletion remains chlorofluorocarbons, or CFCs — chemicals such as old refrigerants that have been banned under the Montreal Protocol, though they continue to linger in the stratosphere. But as global warming leads to stronger, more frequent wildfires, their smoke could have a serious, lasting impact on ozone.

    “Wildfire smoke is a toxic brew of organic compounds that are complex beasts,” Solomon says. “And I’m afraid ozone is getting pummeled by a whole series of reactions that we are now furiously working to unravel.”

    This research was supported in part by the National Science Foundation and NASA.


    MIT Energy Initiative awards seven Seed Fund grants for early-stage energy research

    The MIT Energy Initiative (MITEI) has awarded seven Seed Fund grants to support novel, early-stage energy research by faculty and researchers at MIT. The awardees hail from a range of disciplines, but all strive to bring their backgrounds and expertise to address the global climate crisis by improving the efficiency, scalability, and adoption of clean energy technologies.

    “Solving climate change is truly an interdisciplinary challenge,” says MITEI Director Robert C. Armstrong. “The Seed Fund grants foster collaboration and innovation from across all five of MIT’s schools and one college, encouraging an ‘all hands on deck’ approach to developing the energy solutions that will prove critical in combating this global crisis.”

    This year, MITEI’s Seed Fund grant program received 70 proposals from 86 different principal investigators (PIs) across 25 departments, labs, and centers. Of these proposals, 31 involved collaborations between two or more PIs, including 24 that involved multiple departments.

    The winning projects reflect this collaborative nature with topics addressing the optimization of low-energy thermal cooling in buildings; the design of safe, robust, and resilient distributed power systems; and how to design and site wind farms with consideration of wind resource uncertainty due to climate change.

    Increasing public support for low-carbon technologies

    One winning team aims to leverage work done in the behavioral sciences to motivate sustainable behaviors and promote the adoption of clean energy technologies.

    “Objections to scalable low-carbon technologies such as nuclear energy and carbon sequestration have made it difficult to adopt these technologies and reduce greenhouse gas emissions,” says Howard Herzog, a senior research scientist at MITEI and co-PI. “These objections tend to neglect the sheer scale of energy generation required and the inability to meet this demand solely with other renewable energy technologies.”

    This interdisciplinary team — which includes researchers from MITEI, the Department of Nuclear Science and Engineering, and the MIT Sloan School of Management — plans to convene industry professionals and academics, as well as behavioral scientists, to identify common objections, design messaging to overcome them, and prove that these messaging campaigns have long-lasting impacts on attitudes toward scalable low-carbon technologies.

    “Our aim is to provide a foundation for shifting the public and policymakers’ views about these low-carbon technologies from something they, at best, tolerate, to something they actually welcome,” says co-PI David Rand, the Erwin H. Schell Professor and professor of management science and brain and cognitive sciences at MIT Sloan School of Management.

    Siting and designing wind farms

    Michael Howland, an assistant professor of civil and environmental engineering, will use his Seed Fund grant to develop a foundational methodology for wind farm siting and design that accounts for the uncertainty of wind resources resulting from climate change.

    “The optimal wind farm design and its resulting cost of energy is inherently dependent on the wind resource at the location of the farm,” says Howland. “But wind farms are currently sited and designed based on short-term climate records that do not account for the future effects of climate change on wind patterns.”

    Wind farms are capital-intensive installations that cannot be relocated and often have lifespans exceeding 20 years — all of which makes it especially important that developers choose the right locations and designs based not only on wind patterns in the historical climate record, but also on future predictions. The new siting and design methodology has the potential to replace current industry standards, enabling a more accurate risk analysis of wind farm development and energy grid expansion under climate change-driven energy resource uncertainty.
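One way to picture siting under resource uncertainty is an expected-value comparison across climate scenarios. The sites, capacity factors, and scenario weights below are entirely hypothetical and stand in only for the general idea, not Howland's methodology:

```python
# Capacity factor (fraction of rated output achieved) per candidate site,
# under the historical record and two invented climate projections.
sites = {
    "A": {"historical": 0.42, "warm_dry": 0.30, "warm_wet": 0.33},
    "B": {"historical": 0.38, "warm_dry": 0.37, "warm_wet": 0.36},
}
# Assumed probability weights over scenarios spanning the farm's lifetime.
scenario_weights = {"historical": 0.2, "warm_dry": 0.4, "warm_wet": 0.4}

def expected_capacity_factor(site):
    """Lifetime-weighted capacity factor across climate scenarios."""
    return sum(scenario_weights[s] * cf for s, cf in sites[site].items())

best_historical = max(sites, key=lambda s: sites[s]["historical"])
best_expected = max(sites, key=expected_capacity_factor)
```

With these numbers, the site that looks best on the historical record alone loses to the site whose output is more robust across projected future climates — the kind of reversal a siting method that ignores climate change would miss.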

    Membraneless electrolyzers for hydrogen production

    Producing hydrogen from renewable energy-powered water electrolyzers is central to realizing a sustainable and low-carbon hydrogen economy, says Kripa Varanasi, a professor of mechanical engineering and a Seed Fund award recipient. The idea of using hydrogen as a fuel has existed for decades, but it has yet to be widely realized at a considerable scale. Varanasi hopes to change that with his Seed Fund grant.

    “The critical economic hurdle for successful electrolyzers to overcome is the minimization of the capital costs associated with their deployment,” says Varanasi. “So, an immediate task at hand to enable electrochemical hydrogen production at scale will be to maximize the effectiveness of the most mature, least complex, and least expensive water electrolyzer technologies.”

    To do this, he aims to combine the advantages of existing low-temperature alkaline electrolyzer designs with a novel membraneless electrolyzer technology that harnesses a gas management system architecture to minimize complexity and costs, while also improving efficiency. Varanasi hopes his project will demonstrate scalable concepts for cost-effective electrolyzer technology design to help realize a decarbonized hydrogen economy.

    Since its establishment in 2008, the MITEI Seed Fund Program has supported 194 energy-focused seed projects through grants totaling more than $26 million. This funding comes primarily from MITEI’s founding and sustaining members, supplemented by gifts from generous donors.

    Recipients of the 2021 MITEI Seed Fund grants are:

    “Design automation of safe, robust, and resilient distributed power systems” — Chuchu Fan of the Department of Aeronautics and Astronautics
    “Advanced MHD topping cycles: For fission, fusion, solar power plants” — Jeffrey Freidberg of the Department of Nuclear Science and Engineering and Dennis Whyte of the Plasma Science and Fusion Center
    “Robust wind farm siting and design under climate-change‐driven wind resource uncertainty” — Michael Howland of the Department of Civil and Environmental Engineering
    “Low-energy thermal comfort for buildings in the Global South: Optimal design of integrated structural-thermal systems” — Leslie Norford of the Department of Architecture and Caitlin Mueller of the departments of Architecture and Civil and Environmental Engineering
    “New low-cost, high energy-density boron-based redox electrolytes for nonaqueous flow batteries” — Alexander Radosevich of the Department of Chemistry
    “Increasing public support for scalable low-carbon energy technologies using behavioral science insights” — David Rand of the MIT Sloan School of Management, Koroush Shirvan of the Department of Nuclear Science and Engineering, Howard Herzog of the MIT Energy Initiative, and Jacopo Buongiorno of the Department of Nuclear Science and Engineering
    “Membraneless electrolyzers for efficient hydrogen production using nanoengineered 3D gas capture electrode architectures” — Kripa Varanasi of the Department of Mechanical Engineering


    Study: Global cancer risk from burning organic matter comes from unregulated chemicals

    Whenever organic matter is burned, such as in a wildfire, a power plant, a car’s exhaust, or in daily cooking, the combustion releases polycyclic aromatic hydrocarbons (PAHs) — a class of pollutants that is known to cause lung cancer.

    There are more than 100 known types of PAH compounds emitted daily into the atmosphere. Regulators, however, have historically relied on measurements of a single compound, benzo(a)pyrene, to gauge a community’s risk of developing cancer from PAH exposure. Now MIT scientists have found that benzo(a)pyrene may be a poor indicator of this type of cancer risk.

    In a modeling study appearing today in the journal GeoHealth, the team reports that benzo(a)pyrene plays a small part — about 11 percent — in the global risk of developing PAH-associated cancer. Instead, 89 percent of that cancer risk comes from other PAH compounds, many of which are not directly regulated.

    Interestingly, about 17 percent of PAH-associated cancer risk comes from “degradation products” — chemicals that are formed when emitted PAHs react in the atmosphere. Many of these degradation products can in fact be more toxic than the emitted PAH from which they formed.

    The team hopes the results will encourage scientists and regulators to look beyond benzo(a)pyrene, to consider a broader class of PAHs when assessing a community’s cancer risk.

    “Most of the regulatory science and standards for PAHs are based on benzo(a)pyrene levels. But that is a big blind spot that could lead you down a very wrong path in terms of assessing whether cancer risk is improving or not, and whether it’s relatively worse in one place than another,” says study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society, and the Department of Earth, Atmospheric, and Planetary Sciences.

    Selin’s MIT co-authors include Jesse Kroll, Amy Hrdina, Ishwar Kohale, Forest White, and Bevin Engelward, and Jamie Kelly (who is now at University College London). Peter Ivatt and Mathew Evans at the University of York are also co-authors.

    Chemical pixels

    Benzo(a)pyrene has historically been the poster chemical for PAH exposure. The compound’s indicator status is largely based on early toxicology studies. But recent research suggests the chemical may not be the PAH representative that regulators have long relied upon.   

    “There has been a bit of evidence suggesting benzo(a)pyrene may not be very important, but this was from just a few field studies,” says Kelly, a former postdoc in Selin’s group and the study’s lead author.

    Kelly and his colleagues instead took a systematic approach to evaluate benzo(a)pyrene’s suitability as a PAH indicator. The team began by using GEOS-Chem, a global, three-dimensional chemical transport model that breaks the world into individual grid boxes and simulates within each box the reactions and concentrations of chemicals in the atmosphere.

    They extended this model to include chemical descriptions of how various PAH compounds, including benzo(a)pyrene, would react in the atmosphere. The team then plugged in recent data from emissions inventories and meteorological observations, and ran the model forward to simulate the concentrations of various PAH chemicals around the world over time.
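The grid-box bookkeeping behind such a model can be caricatured in a few lines. GEOS-Chem itself solves hundreds of coupled reactions in three dimensions; this illustrative one-dimensional sketch tracks a single species with invented emission, decay, and transport rates:

```python
def step(conc, emissions, decay=0.1, wind_frac=0.3):
    """Advance a 1-D row of grid boxes by one time step.

    Each box gains local emissions, loses a first-order 'decay' fraction
    (standing in for chemical loss), and hands a 'wind_frac' fraction of
    its contents to the next box downwind.
    """
    moved = [wind_frac * c for c in conc]
    new = []
    for i, c in enumerate(conc):
        stays = c - moved[i]
        inflow = moved[i - 1] if i > 0 else 0.0
        new.append((stays + inflow + emissions[i]) * (1.0 - decay))
    return new

conc = [0.0, 0.0, 0.0, 0.0]       # four grid boxes, initially clean
emissions = [1.0, 0.0, 0.0, 0.0]  # a single source in the first box
for _ in range(50):               # iterate toward a steady state
    conc = step(conc, emissions)
```

After enough steps the concentrations settle into a plume that decays downwind of the source — a miniature version of the steady spatial patterns the real model produces from emissions inventories and meteorology.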

    Risky reactions

    In their simulations, the researchers started with 16 relatively well-studied PAH chemicals, including benzo(a)pyrene, and traced the concentrations of these chemicals, plus the concentration of their degradation products over two generations, or chemical transformations. In total, the team evaluated 48 PAH species.

    They then compared these concentrations with actual concentrations of the same chemicals, recorded by monitoring stations around the world. The two agreed closely enough to show that the model’s concentration predictions were realistic.

    Then, within each of the model’s grid boxes, the researchers related the concentration of each PAH chemical to its associated cancer risk; to do this, they developed a new method, based on previous studies in the literature, to avoid double-counting risk from the different chemicals. Finally, they overlaid population density maps to predict the number of cancer cases globally, based on the concentration and toxicity of a specific PAH chemical in each location.

    Dividing the cancer cases by population produced the cancer risk associated with that chemical. In this way, the team calculated the cancer risk for each of the 48 compounds, then determined each chemical’s individual contribution to the total risk.
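That per-chemical accounting can be sketched as follows, with invented concentrations and potencies (the study's actual method also corrects for double-counting across its 48 species, which this simple additive sketch omits):

```python
# Each grid cell carries a population and a per-chemical concentration.
# All numbers here are hypothetical, for illustration only.
cells = [
    {"pop": 1_000_000, "conc": {"BaP": 2.0, "other_PAH": 5.0}},
    {"pop": 500_000, "conc": {"BaP": 1.0, "other_PAH": 8.0}},
]
# Hypothetical lifetime cancer risk per person per unit concentration.
potency = {"BaP": 0.010, "other_PAH": 0.008}

def cases_by_chemical(cells, potency):
    """Predicted cancer cases attributable to each chemical, summed over cells."""
    cases = {chem: 0.0 for chem in potency}
    for cell in cells:
        for chem, conc in cell["conc"].items():
            cases[chem] += potency[chem] * conc * cell["pop"]
    return cases

cases = cases_by_chemical(cells, potency)
total = sum(cases.values())
share = {chem: c / total for chem, c in cases.items()}  # each chemical's contribution
```

Even with the higher per-unit potency assigned to benzo(a)pyrene here, the more abundant "other" PAHs dominate the total — the same qualitative pattern the study reports at global scale.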

    This analysis revealed that benzo(a)pyrene had a surprisingly small contribution, of about 11 percent, to the overall risk of developing cancer from PAH exposure globally. Eighty-nine percent of cancer risk came from other chemicals. And 17 percent of this risk arose from degradation products.

    “We see places where you can find concentrations of benzo(a)pyrene are lower, but the risk is higher because of these degradation products,” Selin says. “These products can be orders of magnitude more toxic, so the fact that they’re at tiny concentrations doesn’t mean you can write them off.”

    When the researchers compared calculated PAH-associated cancer risks around the world, they found significant differences depending on whether that risk calculation was based solely on concentrations of benzo(a)pyrene or on a region’s broader mix of PAH compounds.

    “If you use the old method, you would find the lifetime cancer risk is 3.5 times higher in Hong Kong versus southern India, but taking into account the differences in PAH mixtures, you get a difference of 12 times,” Kelly says. “So, there’s a big difference in the relative cancer risk between the two places. And we think it’s important to expand the group of compounds that regulators are thinking about, beyond just a single chemical.”

    The team’s study “provides an excellent contribution to better understanding these ubiquitous pollutants,” says Elisabeth Galarneau, an air quality expert and PhD research scientist in Canada’s Department of the Environment. “It will be interesting to see how these results compare to work being done elsewhere … to pin down which (compounds) need to be tracked and considered for the protection of human and environmental health.”

    This research was conducted in MIT’s Superfund Research Center and is supported in part by the National Institute of Environmental Health Sciences Superfund Basic Research Program, and the National Institutes of Health.


    Making catalytic surfaces more active to help decarbonize fuels and chemicals

    Electrochemical reactions that are accelerated using catalysts lie at the heart of many processes for making and using fuels, chemicals, and materials — including storing electricity from renewable energy sources in chemical bonds, an important capability for decarbonizing transportation fuels. Now, research at MIT could open the door to ways of making certain catalysts more active, and thus enhancing the efficiency of such processes.

    A new production process yielded catalysts that increased the efficiency of the chemical reactions by fivefold, potentially enabling useful new processes in biochemistry, organic chemistry, environmental chemistry, and electrochemistry. The findings are described today in the journal Nature Catalysis, in a paper by Yang Shao-Horn, an MIT professor of mechanical engineering and of materials science and engineering, and a member of the Research Laboratory of Electronics (RLE); Tao Wang, a postdoc in RLE; Yirui Zhang, a graduate student in the Department of Mechanical Engineering; and five others.

    The process involves adding a layer of what’s called an ionic liquid in between a gold or platinum catalyst and a chemical feedstock. Catalysts produced with this method could potentially enable much more efficient conversion of hydrogen fuel to power devices such as fuel cells, or more efficient conversion of carbon dioxide into fuels.

    “There is an urgent need to decarbonize how we power transportation beyond light-duty vehicles, how we make fuels, and how we make materials and chemicals,” says Shao-Horn, emphasizing the pressing call to reduce carbon emissions highlighted in the latest IPCC report on climate change. This new approach to enhancing catalytic activity could provide an important step in that direction, she says.

    Using hydrogen in electrochemical devices such as fuel cells is one promising approach to decarbonizing fields such as aviation and heavy-duty vehicles, and the new process may help to make such uses practical. At present, the oxygen reduction reaction that powers such fuel cells is limited by its inefficiency. Previous attempts to improve that efficiency have focused on choosing different catalyst materials or modifying their surface compositions and structure.

    In this research, however, instead of modifying the solid surfaces, the team added a thin layer in between the catalyst and the electrolyte, the active material that participates in the chemical reaction. The ionic liquid layer, they found, regulates the activity of protons that help to increase the rate of the chemical reactions taking place on the interface.

    Because there is a great variety of such ionic liquids to choose from, it’s possible to “tune” proton activity and the reaction rates to match the energetics needed for processes involving proton transfer, which can be used to make fuels and chemicals through reactions with oxygen.

    “The proton activity and the barrier for proton transfer is governed by the ionic liquid layer, and so there’s a great tuneability in terms of catalytic activity for reactions involving proton and electron transfer,” Shao-Horn says. And the effect is produced by a vanishingly thin layer of the liquid, just a few nanometers thick, above which is a much thicker layer of the liquid that is to undergo the reaction.

    “I think this concept is novel and important,” says Wang, the paper’s first author, “because people know the proton activity is important in many electrochemistry reactions, but it’s very challenging to study.” That’s because in a water environment, there are so many interactions between neighboring water molecules involved that it’s very difficult to separate out which reactions are taking place. By using an ionic liquid, whose ions can each only form a single bond with the intermediate material, it became possible to study the reactions in detail, using infrared spectroscopy.

    As a result, Wang says, “Our finding highlights the critical role that interfacial electrolytes, in particular the intermolecular hydrogen bonding, can play in enhancing the activity of the electro-catalytic process. It also provides fundamental insights into proton transfer mechanisms at a quantum mechanical level, which can push the frontiers of knowing how protons and electrons interact at catalytic interfaces.”

    “The work is also exciting because it gives people a design principle for how they can tune the catalysts,” says Zhang. “We need some species right at a ‘sweet spot’ — not too active or too inert — to enhance the reaction rate.”

    With some of these techniques, says Reshma Rao, a recent doctoral graduate from MIT and now a postdoc at Imperial College, London, who is also a co-author of the paper, “we see up to a five-times increase in activity. I think the most exciting part of this research is the way it opens up a whole new dimension in the way we think about catalysis.” The field had hit “a kind of roadblock,” she says, in finding ways to design better materials. By focusing on the liquid layer rather than the surface of the material, “that’s kind of a whole different way of looking at this problem, and opens up a whole new dimension, a whole new axis along which we can change things and optimize some of these reaction rates.”

    The team also included Botao Huang, Bin Cai, and Livia Giordano in MIT’s Research Laboratory of Electronics, and Shi-Gang Sun at Xiamen University in China. The work was supported by the Toyota Research Institute, and used the National Science Foundation’s Extreme Science and Engineering Discovery Environment (XSEDE).