More stories

  •

    Ocean microbes get their diet through a surprising mix of sources, study finds

    One of the smallest and mightiest organisms on the planet is a plant-like bacterium known to marine biologists as Prochlorococcus. The green-tinted microbe measures less than a micron across, and its populations suffuse the upper layers of the ocean, where a single teaspoon of seawater can hold millions of the tiny organisms.

    Prochlorococcus grows through photosynthesis, using sunlight to convert the atmosphere’s carbon dioxide into organic carbon molecules. The microbe is responsible for 5 percent of the world’s photosynthesizing activity, and scientists have assumed that photosynthesis is the microbe’s go-to strategy for acquiring the carbon it needs to grow.

    But a new MIT study published today in Nature Microbiology has found that Prochlorococcus relies on another carbon-feeding strategy more than previously thought.

    Organisms that use a mix of strategies to acquire carbon are known as mixotrophs. Most marine plankton are mixotrophs. And while Prochlorococcus is known to occasionally dabble in mixotrophy, scientists have assumed the microbe primarily lives a phototrophic lifestyle.

    The new MIT study shows that in fact, Prochlorococcus may be more of a mixotroph than it lets on. The microbe may get as much as one-third of its carbon through a second strategy: consuming the dissolved remains of other dead microbes.

    The new estimate may have implications for climate models, as the microbe is a significant force in capturing and “fixing” carbon in the Earth’s atmosphere and ocean.

    “If we wish to predict what will happen to carbon fixation in a different climate, or predict where Prochlorococcus will or will not live in the future, we probably won’t get it right if we’re missing a process that accounts for one-third of the population’s carbon supply,” says Mick Follows, a professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), and its Department of Civil and Environmental Engineering.

    The study’s co-authors include first author and MIT postdoc Zhen Wu, along with collaborators from the University of Haifa, the Leibniz-Institute for Baltic Sea Research, the Leibniz-Institute of Freshwater Ecology and Inland Fisheries, and Potsdam University.

    Persistent plankton

    Since Prochlorococcus was first discovered in the Sargasso Sea in 1986, by MIT Institute Professor Sallie “Penny” Chisholm and others, the microbe has been observed throughout the world’s oceans, inhabiting the upper sunlit layers ranging from the surface down to about 160 meters. Within this range, light levels vary, and the microbe has evolved a number of ways to photosynthesize carbon in even low-lit regions.

    The organism has also evolved ways to consume organic compounds including glucose and certain amino acids, which could help the microbe survive for limited periods of time in dark ocean regions. But surviving on organic compounds alone is a bit like only eating junk food, and there is evidence that Prochlorococcus will die after a week in regions where photosynthesis is not an option.

    And yet, researchers including Daniel Sher of the University of Haifa, who is a co-author of the new study, have observed healthy populations of Prochlorococcus that persist deep in the sunlit zone, where the light intensity should be too low to maintain a population. This suggests that the microbes must be switching to a non-photosynthesizing, mixotrophic lifestyle in order to consume other organic sources of carbon.

    “It seems that at least some Prochlorococcus are using existing organic carbon in a mixotrophic way,” Follows says. “That stimulated the question: How much?”

    What light cannot explain

    In their new paper, Follows, Wu, Sher, and their colleagues looked to quantify the amount of carbon that Prochlorococcus is consuming through processes other than photosynthesis.

    The team looked first to measurements taken by Sher’s team, which previously took ocean samples at various depths in the Mediterranean Sea and measured the concentration of phytoplankton, including Prochlorococcus, along with the associated intensity of light and the concentration of nitrogen — an essential nutrient that is richly available in deeper layers of the ocean and that plankton can assimilate to make proteins.

    Wu and Follows used this data, and similar information from the Pacific Ocean, along with previous work from Chisholm’s lab, which established the rate of photosynthesis that Prochlorococcus could carry out in a given intensity of light.

    “We converted that light intensity profile into a potential growth rate — how fast the population of Prochlorococcus could grow if it was acquiring all its carbon by photosynthesis, and light is the limiting factor,” Follows explains.

    The team then compared this calculated rate to growth rates that were previously observed in the Pacific Ocean by several other research teams.

    “This data showed that, below a certain depth, there’s a lot of growth happening that photosynthesis simply cannot explain,” Follows says. “Some other process must be at work to make up the difference in carbon supply.”

    The researchers inferred that, in deeper, darker regions of the ocean, Prochlorococcus populations are able to survive and thrive by resorting to mixotrophy, including consuming organic carbon from detritus. Specifically, the microbe may be carrying out osmotrophy — a process by which an organism passively absorbs organic carbon molecules via osmosis.

    Judging by how fast the microbe is estimated to be growing below the sunlit zone, the team calculates that Prochlorococcus obtains up to one-third of its carbon diet through mixotrophic strategies.
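
    The arithmetic behind that inference can be sketched in a few lines of code. The snippet below is a toy illustration of the logic described above, not the authors' model: it assumes a generic saturating light-response curve and placeholder values for surface light, attenuation, and observed growth rates, then reports the fraction of growth that light-limited photosynthesis cannot account for.

    ```python
    # Toy sketch of the comparison described above (not the study's code).
    # All parameter values are illustrative placeholders, not measurements.
    import numpy as np

    def photo_growth_rate(irradiance, mu_max=0.6, i_k=30.0):
        """Light-limited growth rate [1/day] from an assumed saturating curve.

        mu_max [1/day] and i_k [umol photons/m^2/s] are placeholder values.
        """
        return mu_max * (1.0 - np.exp(-irradiance / i_k))

    depths = np.array([20, 60, 100, 140])            # meters (hypothetical profile)
    surface_light = 1500.0                           # umol photons/m^2/s (assumed)
    attenuation = 0.06                               # 1/m (assumed)
    light = surface_light * np.exp(-attenuation * depths)

    mu_photo = photo_growth_rate(light)              # growth light alone can support
    mu_obs = np.array([0.55, 0.45, 0.30, 0.20])      # illustrative observed rates [1/day]

    # Fraction of the carbon supply that photosynthesis cannot explain:
    mixo_fraction = np.clip(1.0 - mu_photo / mu_obs, 0.0, 1.0)
    for z, f in zip(depths, mixo_fraction):
        print(f"{z:4d} m: ~{100 * f:.0f}% of growth unexplained by light alone")
    ```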

    “It’s kind of like going from a specialist to a generalist lifestyle,” Follows says. “If I only eat pizza, then if I’m 20 miles from a pizza place, I’m in trouble, whereas if I eat burgers as well, I could go to the nearby McDonald’s. People had thought of Prochlorococcus as a specialist, where they do this one thing (photosynthesis) really well. But it turns out they may have more of a generalist lifestyle than we previously thought.”

    Chisholm, who has both literally and figuratively written the book on Prochlorococcus, says the group’s findings “expand the range of conditions under which their populations can not only survive, but also thrive. This study changes the way we think about the role of Prochlorococcus in the microbial food web.”

    This research was supported, in part, by the Israel Science Foundation, the U.S. National Science Foundation, and the Simons Foundation.

  •

    Methane research takes on new urgency at MIT

    One of the most notable climate change provisions in the 2022 Inflation Reduction Act is the first U.S. federal tax on a greenhouse gas (GHG). That the fee targets methane (CH4), rather than carbon dioxide (CO2), emissions is indicative of the urgency the scientific community has placed on reducing this short-lived but powerful gas. Methane persists in the air about 12 years — compared to more than 1,000 years for CO2 — yet it immediately causes about 120 times more warming upon release. The gas is responsible for at least a quarter of today’s gross warming. 

    “Methane has a disproportionate effect on near-term warming,” says Desiree Plata, the director of the MIT Methane Network. “CH4 does more damage than CO2 no matter how long you run the clock. By removing methane, we could potentially avoid critical climate tipping points.”

    Because GHGs have a runaway effect on climate, reductions made now will have a far greater impact than the same reductions made in the future. Cutting methane emissions will slow the thawing of permafrost, which could otherwise lead to massive methane releases, as well as reduce increasing emissions from wetlands.  

    “The goal of MIT Methane Network is to reduce methane emissions by 45 percent by 2030, which would save up to 0.5 degree C of warming by 2100,” says Plata, an associate professor of civil and environmental engineering at MIT and director of the Plata Lab. “When you consider that governments are trying for a 1.5-degree reduction of all GHGs by 2100, this is a big deal.” 

    At normal concentrations, methane, like CO2, poses no health risks. Yet methane assists in the creation of high levels of ozone. In the lower atmosphere, ozone is a key component of air pollution, which leads to “higher rates of asthma and increased emergency room visits,” says Plata.

    Methane-related projects at the Plata Lab include a filter made of zeolite — the same clay-like material used in cat litter — designed to convert methane into CO2 at dairy farms and coal mines. At first glance, the technology would appear to be a bit of a hard sell, since it converts one GHG into another. Yet the zeolite filter’s low carbon and dollar costs, combined with the disproportionate warming impact of methane, make it a potential game-changer.

    The sense of urgency about methane has been amplified by recent studies that show humans are generating far more methane emissions than previously estimated, and that the rates are rising rapidly. Exactly how much methane is in the air is uncertain. Current methods for measuring atmospheric methane, such as ground, drone, and satellite sensors, “are not readily abundant and do not always agree with each other,” says Plata.  

    The Plata Lab is collaborating with Tim Swager in the MIT Department of Chemistry to develop low-cost methane sensors. “We are developing chemiresistive sensors that cost about a dollar that you could place near energy infrastructure to back-calculate where leaks are coming from,” says Plata.

    The researchers are working on improving the accuracy of the sensors using machine learning techniques and are planning to integrate internet-of-things technology to transmit alerts. Plata and Swager are not alone in focusing on data collection: the Inflation Reduction Act adds significant funding for methane sensor research. 

    Other research at the Plata Lab includes the development of nanomaterials and heterogeneous catalysis techniques for environmental applications. The lab also explores mitigation solutions for industrial waste, particularly those related to the energy transition. Plata is the co-founder of a lithium-ion battery recycling startup called Nth Cycle.

    On a more fundamental level, the Plata Lab is exploring how to develop products with environmental and social sustainability in mind. “Our overarching mission is to change the way that we invent materials and processes so that environmental objectives are incorporated along with traditional performance and cost metrics,” says Plata. “It is important to do that rigorous assessment early in the design process.”

    Video: “MIT amps up methane research”

    The MIT Methane Network brings together 26 researchers from MIT along with representatives of other institutions “that are dedicated to the idea that we can reduce methane levels in our lifetime,” says Plata. The organization supports research such as Plata’s zeolite and sensor projects, as well as designing pipeline-fixing robots, developing methane-based fuels for clean hydrogen, and researching the capture and conversion of methane into liquid chemical precursors for pharmaceuticals and plastics. Other members are researching policies to encourage more sustainable agriculture and land use, as well as methane-related social justice initiatives. 

    “Methane is an especially difficult problem because it comes from all over the place,” says Plata. A recent Global Carbon Project study estimated that half of methane emissions are caused by humans. These human-caused emissions are led by waste and agriculture (28 percent of the global total), including cow and sheep belching, rice paddies, and landfills.

    Fossil fuels represent 18 percent of the total budget. Of this, about 63 percent is derived from oil and gas production and pipelines, 33 percent from coal mining activities, and 5 percent from industry and transportation. Human-caused biomass burning, primarily from slash-and-burn agriculture, emits about 4 percent of the global total.  

    The other half of the methane budget includes natural methane emissions from wetlands (20 percent) and other natural sources (30 percent). The latter includes permafrost melting and natural biomass burning, such as forest fires started by lightning.  

    With increases in global warming and population, the line between anthropogenic and natural causes is getting fuzzier. “Human activities are accelerating natural emissions,” says Plata. “Climate change increases the release of methane from wetlands and permafrost and leads to larger forest and peat fires.”  

    The calculations can get complicated. For example, wetlands provide benefits such as CO2 capture, biological diversity, and sea level rise resiliency that more than compensate for their methane releases. Meanwhile, draining swamps for development increases emissions.

    Over 100 nations have signed onto the U.N.’s Global Methane Pledge to cut anthropogenic methane emissions by at least 30 percent within the next 10 years. The U.N. report estimates that this goal can be achieved using proven technologies and that about 60 percent of these reductions can be accomplished at low cost.

    Much of the savings would come from greater efficiencies in fossil fuel extraction, processing, and delivery. The methane fees in the Inflation Reduction Act are primarily focused on encouraging fossil fuel companies to accelerate ongoing efforts to cap old wells, flare off excess emissions, and tighten pipeline connections.  

    Fossil fuel companies have already made far greater pledges to reduce methane than they have for CO2, which is central to their business. This is due, in part, to the potential savings, as well as to preparation for methane regulations expected from the Environmental Protection Agency in late 2022. The regulations build upon existing EPA oversight of drilling operations, and will likely be exempt from the U.S. Supreme Court’s ruling that limits the federal government’s ability to regulate GHGs.

    Zeolite filter targets methane in dairy and coal 

    The “low-hanging fruit” of gas stream mitigation addresses most of the 20 percent of total methane emissions in which the gas is released in sufficiently high concentrations for flaring. Plata’s zeolite filter aims to address the thornier challenge of reducing the 80 percent of non-flammable dilute emissions. 

    Plata found inspiration in decades-old catalysis research for turning methane into methanol. One strategy has been to use an abundant, low-cost aluminosilicate clay called zeolite.  

    “The methanol creation process is challenging because you need to separate a liquid, and it has very low efficiency,” says Plata. “Yet zeolite can be very efficient at converting methane into CO2, and it is much easier because it does not require liquid separation. Converting methane to CO2 sounds like a bad thing, but there is a major anti-warming benefit. And because methane is much more dilute than CO2, the relative CO2 contribution is minuscule.”  

    Using zeolite to create methanol requires highly concentrated methane, high temperatures and pressures, and industrial processing conditions. Yet Plata’s process, which dopes the zeolite with copper, operates in the presence of oxygen at much lower temperatures under typical pressures. “We let the methane proceed the way it wants from a thermodynamic perspective from methane to methanol down to CO2,” says Plata. 

    Researchers around the world are working on other dilute methane removal technologies. Projects include spraying iron salt aerosols into sea air, where they react with natural chlorine or bromine radicals, thereby removing methane from the air. Most of these geoengineering solutions, however, are difficult to measure and would require massive scale to make a difference.

    Plata is focusing her zeolite filters on environments where concentrations are high, but not so high as to be flammable. “We are trying to scale zeolite into filters that you could snap onto the side of a cross-ventilation fan in a dairy barn or in a ventilation air shaft in a coal mine,” says Plata. “For every packet of air we bring in, we take a lot of methane out, so we get more bang for our buck.”  

    The major challenge is creating a filter that can handle high flow rates without getting clogged or falling apart. Dairy barn air handlers can push air at up to 5,000 cubic feet per minute and coal mine handlers can approach 500,000 CFM. 

    Plata is exploring engineering options including fluidized bed reactors with floating catalyst particles. Another filter solution, based in part on catalytic converters, features “higher-order geometric structures where you have a porous material with a long path length where the gas can interact with the catalyst,” says Plata. “This avoids the challenge with fluidized beds of containing catalyst particles in the reactor. Instead, they are fixed within a structured material.”  

    Competing technologies for removing methane from mine shafts “operate at temperatures of 1,000 to 1,200 degrees C, requiring a lot of energy and risking explosion,” says Plata. “Our technology avoids safety concerns by operating at 300 to 400 degrees C. It reduces energy use and provides more tractable deployment costs.” 

    Potentially, energy and dollar costs could be further reduced in coal mines by capturing the heat generated by the conversion process. “In coal mines, you have enrichments above a half-percent methane, but below the 4 percent flammability threshold,” says Plata. “The excess heat from the process could be used to generate electricity using off-the-shelf converters.” 
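
    For a rough sense of the scale those flow rates and concentrations imply, here is a back-of-the-envelope sketch; every number in it is an assumed, illustrative value rather than a project specification.

    ```python
    # Rough arithmetic only; the flow rate and methane fraction are illustrative.
    CFM_TO_M3_PER_S = 0.000471947        # 1 cubic foot per minute in m^3/s
    CH4_DENSITY = 0.657                  # kg/m^3 near 25 C and 1 atm (approximate)
    CH4_LHV = 50e6                       # J/kg, lower heating value (approximate)

    def methane_throughput(flow_cfm, ch4_fraction):
        """Return (kg CH4 per day, thermal MW if fully oxidized to CO2)."""
        flow_m3_s = flow_cfm * CFM_TO_M3_PER_S
        ch4_kg_s = flow_m3_s * ch4_fraction * CH4_DENSITY
        return ch4_kg_s * 86400, ch4_kg_s * CH4_LHV / 1e6

    # A large coal-mine ventilation shaft at an assumed 1 percent methane:
    kg_per_day, thermal_mw = methane_throughput(500_000, 0.01)
    print(f"~{kg_per_day:,.0f} kg CH4/day, ~{thermal_mw:.0f} MW of heat if converted")
    ```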

    Plata’s dairy barn research is funded by the Gerstner Family Foundation and the coal mining project by the U.S. Department of Energy. “The DOE would like us to spin out the technology for scale-up within three years,” says Plata. “We cannot guarantee we will hit that goal, but we are trying to develop this as quickly as possible. Our society needs to start reducing methane emissions now.”

  •

    Coordinating climate and air-quality policies to improve public health

    As America’s largest investment to fight climate change, the Inflation Reduction Act positions the country to reduce its greenhouse gas emissions by an estimated 40 percent below 2005 levels by 2030. But as it edges the United States closer to achieving its international climate commitment, the legislation is also expected to yield significant — and more immediate — improvements in the nation’s health. If successful in accelerating the transition from fossil fuels to clean energy alternatives, the IRA will sharply reduce atmospheric concentrations of fine particulates known to exacerbate respiratory and cardiovascular disease and cause premature deaths, along with other air pollutants that degrade human health. One recent study shows that eliminating air pollution from fossil fuels in the contiguous United States would prevent more than 50,000 premature deaths and avoid more than $600 billion in health costs each year.

    While national climate policies such as those advanced by the IRA can simultaneously help mitigate climate change and improve air quality, their results may vary widely when it comes to improving public health. That’s because the potential health benefits associated with air quality improvements are much greater in some regions and economic sectors than in others. Those benefits can be maximized, however, through a prudent combination of climate and air-quality policies.

    Several past studies have evaluated the likely health impacts of various policy combinations, but their usefulness has been limited due to a reliance on a small set of standard policy scenarios. More versatile tools are needed to model a wide range of climate and air-quality policy combinations and assess their collective effects on air quality and human health. Now researchers at the MIT Joint Program on the Science and Policy of Global Change and MIT Institute for Data, Systems and Society (IDSS) have developed a publicly available, flexible scenario tool that does just that.

    In a study published in the journal Geoscientific Model Development, the MIT team introduces its Tool for Air Pollution Scenarios (TAPS), which can be used to estimate the likely air-quality and health outcomes of a wide range of climate and air-quality policies at the regional, sectoral, and fuel-based level. 

    “This tool can help integrate the siloed sustainability issues of air pollution and climate action,” says the study’s lead author William Atkinson, who recently served as a Biogen Graduate Fellow and research assistant at the IDSS Technology and Policy Program’s (TPP) Research to Policy Engagement Initiative. “Climate action does not guarantee a clean air future, and vice versa — but the issues have similar sources that imply shared solutions if done right.”

    The study’s initial application of TAPS shows that with current air-quality policies and near-term Paris Agreement climate pledges alone, short-term pollution reductions give way to long-term increases — given the expected growth of emissions-intensive industrial and agricultural processes in developing regions. More ambitious climate and air-quality policies could be complementary, each reducing different pollutants substantially to give tremendous near- and long-term health benefits worldwide.

    “The significance of this work is that we can more confidently identify the long-term emission reduction strategies that also support air quality improvements,” says MIT Joint Program Deputy Director C. Adam Schlosser, a co-author of the study. “This is a win-win for setting climate targets that are also healthy targets.”

    TAPS projects air quality and health outcomes based on three integrated components: a recent global inventory of detailed emissions resulting from human activities (e.g., fossil fuel combustion, land-use change, industrial processes); multiple scenarios of emissions-generating human activities between now and the year 2100, produced by the MIT Economic Projection and Policy Analysis model; and emissions intensity (emissions per unit of activity) scenarios based on recent data from the Greenhouse Gas and Air Pollution Interactions and Synergies model.
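
    The bookkeeping such a scenario tool rests on can be sketched conceptually: emissions are the product of how much activity occurs and how much pollution each unit of activity emits, summed over regions, sectors, and fuels under whichever policy assumptions are selected. The snippet below is only a conceptual toy, not TAPS itself; every name and number is a hypothetical placeholder, and a single scaling factor stands in for what the real tool derives from the EPPA and GAINS models.

    ```python
    # Conceptual sketch of "activity x emissions intensity" accounting (not TAPS).
    # Keys are (region, sector, fuel); all values are hypothetical placeholders.
    activity = {                                   # activity levels in a scenario year
        ("EUR", "power", "coal"): 0.6,
        ("EUR", "power", "gas"): 1.1,
        ("ASIA", "industry", "coal"): 1.8,
    }
    intensity = {                                  # pollutant emitted per unit activity
        ("EUR", "power", "coal"): 4.0,
        ("EUR", "power", "gas"): 1.2,
        ("ASIA", "industry", "coal"): 5.5,
    }

    def total_emissions(activity, intensity, control_factor=1.0):
        """Sum activity x intensity, scaled by an assumed air-quality control factor."""
        return sum(level * intensity[key] * control_factor
                   for key, level in activity.items())

    baseline = total_emissions(activity, intensity)
    with_controls = total_emissions(activity, intensity, control_factor=0.7)
    print(f"baseline: {baseline:.1f}, with stricter controls: {with_controls:.1f}")
    ```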

    “We see the climate crisis as a health crisis, and believe that evidence-based approaches are key to making the most of this historic investment in the future, particularly for vulnerable communities,” says Johanna Jobin, global head of corporate reputation and responsibility at Biogen. “The scientific community has spoken with unanimity and alarm that not all climate-related actions deliver equal health benefits. We’re proud of our collaboration with the MIT Joint Program to develop this tool that can be used to bridge research-to-policy gaps, support policy decisions to promote health among vulnerable communities, and train the next generation of scientists and leaders for far-reaching impact.”

    The tool can inform decision-makers about a wide range of climate and air-quality policies. Policy scenarios can be applied to specific regions, sectors, or fuels to investigate policy combinations at a more granular level, or to target short-term actions with high-impact benefits.

    TAPS could be further developed to account for additional emissions sources and trends.

    “Our new tool could be used to examine a large range of both climate and air quality scenarios. As the framework is expanded, we can add detail for specific regions, as well as additional pollutants such as air toxics,” says study supervising co-author Noelle Selin, professor at IDSS and the MIT Department of Earth, Atmospheric and Planetary Sciences, and director of TPP.    

    This research was supported by the U.S. Environmental Protection Agency and its Science to Achieve Results (STAR) program; Biogen; TPP’s Leading Technology and Policy Initiative; and TPP’s Research to Policy Engagement Initiative.

  •

    Doubling down on sustainability innovation in Kendall Square

    From its new headquarters in Cambridge’s Kendall Square, The Engine is investing in a number of “tough tech” startups seeking to transform the world’s energy systems. A few blocks away, the startup Inari is using gene editing to improve seeds’ resilience to climate change. On the MIT campus nearby, researchers are working on groundbreaking innovations to meet the urgent challenges our planet faces.

    Kendall Square is known as the biotech capital of the world, but as the latest annual meeting of the Kendall Square Association (KSA) made clear, it’s also a thriving hub of sustainability-related innovation.

    The Oct. 20 event, which began at MIT’s Welcome Center before moving to the MIT Museum for a panel discussion, brought together professionals from across Cambridge’s prolific innovation ecosystem — not just entrepreneurs working at startups, but also students, restaurant and retail shop owners, and people from local nonprofits.

    Titled “[Re] Imagining a Sustainable Future,” the meeting highlighted advances in climate change technologies that are afoot in Kendall Square, to help inspire and connect the community as it works toward common sustainability goals.

    “Our focus is on building a better future together — and together is the most important word there,” KSA Executive Director Beth O’Neill Maloney said in her opening remarks. “This is an incredibly innovative ecosystem and community that’s making changes that affect us here in Kendall Square and far, far beyond.”

    The pace of change

    The main event of the evening was a panel discussion moderated by Lee McGuire, the chief communications officer of the Broad Institute of MIT and Harvard. The panel featured Stuart Brown, chief financial officer at Inari; Emily Knight, chief operating officer at The Engine; and Joe Higgins, vice president for campus services and stewardship at MIT.

    “Sustainability is obviously one of the most important — if not the most important — challenge facing us as a society today,” said McGuire, opening the discussion. “Kendall Square is known for its work in biotech, life sciences, AI, and climate, and the more we dug into it the more we realized how interconnected all of those things are. The talent in Kendall Square wants to work on problems relevant for humanity, and the tools and skills you need for that can be very similar depending on the problem you’re working on.”

    Higgins, who oversees the creation of programs to reduce MIT’s environmental impact and improve the resilience of campus operations, focused on the enormity of the problem humanity is facing. He showed the audience a map of the U.S. power grid, with power plants and transmission lines illuminated in a complex web across the country, to underscore the scale of electrification that will be needed to mitigate the worst effects of climate change.

    “The U.S. power grid is the largest machine ever made by mankind,” Higgins said. “It’s been developed over 100 years; it has 7,000 generating plants that feed into it every day; it has 7 million miles of cable and wires; there are transformers and substations; and it lives in every single one of your walls. But people don’t think about it that much.”

    Many cities, states, and organizations like MIT have made commitments to shift to 100 percent clean energy in coming decades. Higgins wanted the audience to try to grasp what that’s going to take.

    “Hundreds of millions of devices and equipment across the planet are going to have to be swapped from fossil fuel to electric-based,” Higgins said. “Our cars, appliances, processes in industry, like making steel and concrete, are going to need to come from this grid. It’ll need to undergo a major modernization and transformation. The good news is it’s already changing.”

    Multiple panelists pointed to developments like the passing of the Inflation Reduction Act to show there was progress being made in reaching urgent sustainability goals.

    “There is a tide change coming, and it’s not only being driven by private capital,” Knight said. “There’s a huge opportunity here, and it’s a really important part of this [Kendall Square] ecosystem.”

    Chief among the topics of discussion was technology development. Even as leaders implement today’s technologies to decarbonize, people in Kendall Square keep a close eye on the new tech being developed and commercialized nearby.

    “I was trying to think about where we are with gene editing,” Brown said. “CRISPR’s been around for 10 years. Compare that to video games. Pong was the first video game when it came out in 1972. Today you have Chess.com using artificial intelligence to power chess games. On gene editing and a lot of these other technologies, we’re much closer to Pong than we are to where it’s going to be. We just can’t imagine today the technology changes we’re going to see over the next five to 10 years.”

    In that regard, Knight discussed some of the promising portfolio companies of The Engine, which invests in early stage, technologically innovative companies. In particular, she highlighted two companies seeking to transform the world’s energy systems with entirely new, 100 percent clean energy sources. MIT spinout Commonwealth Fusion Systems is working on nuclear fusion reactors that could provide abundant, safe, and constant streams of clean energy to our grids, while fellow MIT spinout Quaise Energy is seeking to harvest a new kind of deep geothermal energy using millimeter wave drilling technology.

    “All of our portfolio companies have a focus on sustainability in one way or another,” Knight said. “People who are working on these very hard technologies will change the world.”

    Knight says the kind of collaboration championed by the KSA is important for startups The Engine invests in.

    “We know these companies need a lot of people around them, whether from government, academia, advisors, corporate partners, anyone who can help them on their path, because for a lot of them this is a new path and a new market,” Knight said.

    Reasons for hope

    The KSA is made up of over 150 organizations across Kendall Square. From major employers like Sanofi, Pfizer, MIT, and the Broad Institute to local nonprofit organizations, startups, and independent shops and restaurants, the KSA represents the entire Kendall ecosystem.

    Early in the event, O’Neill Maloney celebrated a visible example of sustainability work in Kendall Square by the Charles River Conservancy, which has built a floating wetland designed to naturally remove harmful algae blooms from the Charles River.

    Other examples of sustainability work in the neighborhood can be found at MIT. Under its “Fast Forward” climate action plan, the Institute has set a goal of eliminating direct emissions from its campus by 2050, including a near-term milestone of achieving net-zero emissions by 2026. Since 2014, when MIT launched a five-year plan for action on climate change, net campus emissions have already been cut by 20 percent by making its campus buildings more energy efficient, transitioning to electric vehicles, and enabling large-scale renewable energy projects, among other strategies.

    In the face of a daunting global challenge, such milestones are reason for optimism.

    “If anybody’s going to be able to do this [shift to 100 percent clean energy] and show how it can be done at an urban, city scale, it’s probably MIT and the city of Cambridge,” McGuire said. “We have a lot of good ingredients to figure this out.”

    Throughout the night, many speakers, attendees, and panelists echoed that sentiment. They said they see plenty of reasons for hope.

    “I’m absolutely optimistic,” Higgins said. “I’m seeing utility companies working with businesses working with regulators — people are coming together on this topic. And one of these new technologies being commercialized is going to change things before 2030, whether it’s fusion, deep geothermal, or small modular nuclear reactors; the technology is just moving so quickly.”

  •

    3 Questions: Blue hydrogen and the world’s energy systems

    In the past several years, hydrogen energy has become an increasingly central part of the clean energy transition. Hydrogen can produce clean, on-demand energy that could complement variable renewable energy sources such as wind and solar power. That being said, pathways for deploying hydrogen at scale have yet to be fully explored. In particular, the optimal form of hydrogen production remains in question.

    MIT Energy Initiative Research Scientist Emre Gençer and researchers from a wide range of global academic and research institutions recently published “On the climate impacts of blue hydrogen production,” a comprehensive life-cycle assessment analysis of blue hydrogen, a term referring to natural gas-based hydrogen production with carbon capture and storage. Here, Gençer describes blue hydrogen and the role that hydrogen will play more broadly in decarbonizing the world’s energy systems.

    Q: What are the differences between gray, green, and blue hydrogen?

    A: Though hydrogen does not generate any emissions directly when it is used, hydrogen production can have a huge environmental impact. Colors of hydrogen are increasingly used to distinguish different production methods and as a proxy to represent the associated environmental impact. Today, close to 95 percent of hydrogen production comes from fossil resources. As a result, the carbon dioxide (CO2) emissions from hydrogen production are quite high. Gray, black, and brown hydrogen refer to fossil-based production. Gray is the most common form of production and comes from natural gas, or methane, using steam methane reforming but without capturing CO2.

    There are two ways to move toward cleaner hydrogen production. One is applying carbon capture and storage to the fossil fuel-based hydrogen production processes. Natural gas-based hydrogen production with carbon capture and storage is referred to as blue hydrogen. If substantial amounts of CO2 from natural gas reforming are captured and permanently stored, such hydrogen could be a low-carbon energy carrier. The second way to produce cleaner hydrogen is by using electricity to produce hydrogen via electrolysis. In this case, the source of the electricity determines the environmental impact of the hydrogen, with the lowest impact being achieved when electricity is generated from renewable sources, such as wind and solar. This is known as green hydrogen.

    Q: What insights have you gleaned with a life cycle assessment (LCA) of blue hydrogen and other low-carbon energy systems?

    A: Mitigating climate change requires significant decarbonization of the global economy. Accurate estimation of cumulative greenhouse gas (GHG) emissions and their reduction pathways is critical irrespective of the source of emissions. An LCA approach allows quantification of the environmental impact of a commercial product, process, or service across all stages of its life cycle (cradle-to-grave). The LCA-based comparison of alternative energy pathways, fuel options, etc., provides an apples-to-apples comparison of low-carbon energy choices. In the context of low-carbon hydrogen, it is essential to understand the GHG impact of supply chain options. Depending on the production method, the contribution of life-cycle stages to the total emissions might vary. For example, with natural gas–based hydrogen production, emissions associated with the production and transport of natural gas might be a significant contributor, depending on its leakage and flaring rates. If these rates are not precisely accounted for, the environmental impact of blue hydrogen can be underestimated. However, the same rationale is also true for electricity-based hydrogen production. If the electricity is not supplied from low-carbon sources such as wind, solar, or nuclear, the carbon intensity of hydrogen can be significantly underestimated. In the case of nuclear, there are also other environmental impact considerations.

    An LCA approach — if performed with consistent system boundaries — can provide an accurate environmental impact comparison. It should also be noted that these estimations can only be as good as the assumptions and correlations used unless they are supported by measurements. 

    Q: What conditions are needed to make blue hydrogen production most effective, and how can it complement other decarbonization pathways?

    A: Hydrogen is considered one of the key vectors for the decarbonization of hard-to-abate sectors such as heavy-duty transportation. Currently, more than 95 percent of global hydrogen production is fossil-fuel based. In the next decade, massive amounts of hydrogen must be produced to meet this anticipated demand. It is very hard, if not impossible, to meet this demand without leveraging existing production assets. The immediate and relatively cost-effective option is to retrofit existing plants with carbon capture and storage (blue hydrogen).

    The environmental impact of blue hydrogen may vary over large ranges but depends on only a few key parameters: the methane emission rate of the natural gas supply chain, the CO2 removal rate at the hydrogen production plant, and the global warming metric applied. State-of-the-art reforming with high CO2 capture rates, combined with natural gas supply featuring low methane emissions, substantially reduces GHG emissions compared to conventional natural gas reforming. Under these conditions, blue hydrogen is compatible with low-carbon economies and exhibits climate change impacts at the upper end of the range of those caused by hydrogen production from renewable-based electricity. However, neither current blue nor green hydrogen production pathways render fully “net-zero” hydrogen without additional CO2 removal.

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  •

    Studying floods to better predict their dangers

    “My job is basically flooding Cambridge,” says Katerina “Katya” Boukin, a graduate student in civil and environmental engineering at MIT and the MIT Concrete Sustainability Hub’s resident expert on flood simulations. 

    You can often find her fine-tuning high-resolution flood risk models for the City of Cambridge, Massachusetts, or talking about hurricanes with fellow researcher Ipek Bensu Manav.

    Flooding represents one of the world’s gravest natural hazards. Extreme climate events inducing flooding, like severe storms, winter storms, and tropical cyclones, caused an estimated $128.1 billion of damages in 2021 alone. 

    Climate simulation models suggest that severe storms will become more frequent in the coming years, necessitating a better understanding of which parts of cities are most vulnerable — an understanding that can be improved through modeling.

    A problem with current flood models is that they struggle to account for an oft-misunderstood type of flooding known as pluvial flooding. 

    “You might think of flooding as the overflowing of a body of water, like a river. This is fluvial flooding. This can be somewhat predictable, as you can think of proximity to water as a risk factor,” Boukin explains.

    However, the “flash flooding” that causes many deaths each year can happen even in places nowhere near a body of water. This is an example of pluvial flooding, which is affected by terrain, urban infrastructure, and the dynamic nature of storm loads.

    “If we don’t know how a flood is propagating, we don’t know the risk it poses to the urban environment. And if we don’t understand the risk, we can’t really discuss mitigation strategies,” says Boukin. “That’s why I pursue improving flood propagation models.”

    Boukin is leading development of a new flood prediction method that seeks to address these shortcomings. By better representing the complex morphology of cities, Boukin’s approach may provide a clearer forecast of future urban flooding.

    Katya Boukin developed this model of the City of Cambridge, Massachusetts. The base model was provided through a collaboration between MIT, the City of Cambridge, and Dewberry Engineering.

    Image: Katya Boukin


    “In contrast to the more traditional catchment model, our method has rainwater spread around the urban environment based on the city’s topography, below-the-surface features like sewer pipes, and the characteristics of local soils,” notes Boukin.

    “We can simulate the flooding of regions with local rain forecasts. Our results can show how flooding propagates by the foot and by the second,” she adds.
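
    To give a flavor of what a distributed (rather than lumped-catchment) pluvial model does, here is a deliberately simplified sketch. It is not Boukin's model: rain is added uniformly to a terrain grid, a constant loss rate stands in for soils and sewers, and water is nudged toward the lowest neighboring water surface each time step.

    ```python
    import numpy as np

    def simulate_pluvial_flood(elevation, rain_rate, loss_rate, dt=1.0, steps=600):
        """Toy pluvial routing on a grid; units are meters and seconds.

        elevation : 2D array of ground elevation [m]
        rain_rate : rainfall [m/s], assumed uniform and constant
        loss_rate : combined infiltration/drainage uptake [m/s], assumed constant
        """
        depth = np.zeros_like(elevation, dtype=float)
        rows, cols = elevation.shape
        for _ in range(steps):
            depth += rain_rate * dt                              # rain falls everywhere
            depth = np.maximum(depth - loss_rate * dt, 0.0)      # soils and sewers absorb
            head = elevation + depth                             # water surface elevation
            moved = np.zeros_like(depth)
            for i in range(rows):
                for j in range(cols):
                    if depth[i, j] == 0.0:
                        continue
                    lowest, li, lj = head[i, j], i, j            # lowest neighboring head
                    for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ni, nj = i + di, j + dj
                        if 0 <= ni < rows and 0 <= nj < cols and head[ni, nj] < lowest:
                            lowest, li, lj = head[ni, nj], ni, nj
                    if (li, lj) != (i, j):
                        # Move up to half the head difference, limited by available water.
                        move = min(depth[i, j], 0.5 * (head[i, j] - lowest))
                        moved[i, j] -= move
                        moved[li, lj] += move
            depth += moved
        return depth

    # A small synthetic "street canyon" with a low point where water ponds.
    terrain = np.array([[2.0, 1.5, 1.0, 1.5, 2.0],
                        [2.0, 1.2, 0.6, 1.2, 2.0],
                        [2.0, 1.5, 1.0, 1.5, 2.0]])
    print(simulate_pluvial_flood(terrain, rain_rate=2e-5, loss_rate=5e-6).round(3))
    ```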

    While Boukin’s current focus is flood simulation, her unconventional academic career has taken her research in many directions, like examining structural bottlenecks in dense urban rail systems and forecasting ground displacement due to tunneling. 

    “I’ve always been interested in the messy side of problem-solving. I think that difficult problems present a real chance to gain a deeper understanding,” says Boukin.

    Boukin credits her upbringing for giving her this perspective. A native of Israel, Boukin says that civil engineering is the family business. “My parents are civil engineers, my mom’s parents are, too, her grandfather was a professor in civil engineering, and so on. Civil engineering is my bloodline.”

    However, the decision to follow the family tradition did not come so easily. “After I took the Israeli equivalent of the SAT, I was at a decision point: Should I go to engineering school or medical school?” she recalls.

    “I decided to go on a backpacking trip to help make up my mind. It’s sort of an Israeli rite to explore internationally, so I spent six months in South America. I think backpacking is something everyone should do.”

    After this soul searching, Boukin landed on engineering school, where she fell in love with structural engineering. “It was the option that felt most familiar and interesting. I grew up playing with AutoCAD on the family computer, and now I use AutoCAD professionally!” she notes.

    “For my master’s degree, I was looking to study in a department that would help me integrate knowledge from fields like climatology and civil engineering. I found the MIT Department of Civil and Environmental Engineering to be an excellent fit,” she says.

    “I am lucky that MIT has so many people that work together as well as they do. I ended up at the Concrete Sustainability Hub, where I’m working on projects which are the perfect fit between what I wanted to do and what the department wanted to do.” 

    Boukin’s move to Cambridge has given her a new perspective on her family and childhood. 

    “My parents brought me to Israel when I was just 1 year old. In moving here as a second-time immigrant, I have a new perspective on what my parents went through during the move to Israel. I moved when I was 27 years old, the same age as they were. They didn’t have a support network and worked any job they could find,” she explains.

    “I am incredibly grateful to them for the morals they instilled in my sister, who recently graduated from medical school, and me. I know I can call my parents if I ever need something, and they will do whatever they can to help.”

    Boukin hopes to honor her parents’ efforts through her research.

    “Not only do I want to help stakeholders understand flood risks, I want to make awareness of flooding more accessible. Each community needs different things to be resilient, and different cultures have different ways of delivering and receiving information,” she says.

    “Everyone should understand that they, in addition to the buildings and infrastructure around them, are part of a complex ecosystem. Any change to a city can affect the rest of it. If designers and residents are aware of this when considering flood mitigation strategies, we can better design cities and understand the consequences of damage.”

  •

    3Q: Why Europe is so vulnerable to heat waves

    This year saw high-temperature records shattered across much of Europe, as crops withered in the fields due to widespread drought. Is this a harbinger of things to come as the Earth’s climate steadily warms up?

    Elfatih Eltahir, MIT professor of civil and environmental engineering and H. M. King Bhumibol Professor of Hydrology and Climate, and former doctoral student Alexandre Tuel PhD ’20 recently published a piece in the Bulletin of the Atomic Scientists describing how their research helps explain this anomalous European weather. The findings are based in part on analyses described in their book “Future Climate of the Mediterranean and Europe,” published earlier this year. MIT News asked the two authors to describe the dynamics behind these extreme weather events.

    Q: Was the European heat wave this summer anticipated based on existing climate models?

    Eltahir: Climate models project increasingly dry summers over Europe. This is especially true for the second half of the 21st century, and for southern Europe. Extreme dryness is often associated with hot conditions and heat waves, since any reduction in evaporation heats the soil and the air above it. In general, models agree in making such projections about European summers. However, understanding the physical mechanisms responsible for these projections is an active area of research.

    The same models that project dry summers over southern Europe also project dry winters over the neighboring Mediterranean Sea. In fact, the Mediterranean Sea stands out as one of the most significantly impacted regions — a literal “hot spot” — for winter droughts triggered by climate change. Again, until recently, the association between the projections of summer dryness over Europe and dry winters over the Mediterranean was not understood.

    In recent MIT doctoral research, carried out in the Department of Civil and Environmental Engineering, a hypothesis was developed to explain why the Mediterranean stands out as a hot spot for winter droughts under climate change. Further, the same theory offers a mechanistic understanding that connects the projections of dry summers over southern Europe and dry winters over the Mediterranean.

    What is exciting about the observed climate over Europe last summer is the fact that the observed drought started and developed with spatial and temporal patterns that are consistent with our proposed theory, and in particular the connection to the dry conditions observed over the Mediterranean during the previous winter.

    Q: What is it about the area around the Mediterranean basin that produces such unusual weather extremes?

    Eltahir: Multiple factors come together to cause extreme heat waves such as the one that Europe has experienced this summer, as well as previously, in 2003, 2015, 2018, 2019, and 2020. Among these, however, mutual influences between atmospheric dynamics and surface conditions, known as land-atmosphere feedbacks, seem to play a very important role.

    In the current climate, southern Europe is located in the transition zone between the dry subtropics (the Sahara Desert in North Africa) and the relatively wet midlatitudes (with a climate similar to that of the Pacific Northwest). High summertime temperatures tend to make the precipitation that falls to the ground evaporate quickly, and as a consequence soil moisture during summer is very dependent on springtime precipitation. A dry spring in Europe (such as the 2022 one) causes dry soils in late spring and early summer. This lack of surface water in turn limits surface evaporation during summer. Two important consequences follow: First, incoming radiative energy from the sun preferentially goes into increasing air temperature rather than evaporating water; and second, the inflow of water into air layers near the surface decreases, which makes the air drier and precipitation less likely. Combined, these two influences increase the likelihood of heat waves and droughts.

    Tuel: Through land-atmosphere feedbacks, dry springs provide a favorable environment for persistent warm and dry summers but are of course not enough to directly cause heat waves. A spark is required to ignite the fuel. In Europe and elsewhere, this spark is provided by large-scale atmospheric dynamics. If an anticyclone sets over an area with very dry soils, surface temperature can quickly shoot up as land-atmosphere feedbacks come into play, developing into a heat wave that can persist for weeks.

    The sensitivity to springtime precipitation makes southern Europe and the Mediterranean particularly prone to persistent summer heat waves. This will play an increasingly important role in the future, as spring precipitation is expected to decline, making scorching summers even more likely in this corner of the world. The decline in spring precipitation, which originates as an anomalously dry winter around the Mediterranean, is very robust across climate projections. Southern Europe and the Mediterranean really stand out from most other land areas, where precipitation will on average increase with global warming.

    In our work, we showed that this Mediterranean winter decline was driven by two independent factors: on the one hand, trends in the large-scale circulation, notably stationary atmospheric waves, and on the other hand, reduced warming of the Mediterranean Sea relative to the surrounding continents — a well-known feature of global warming. Both factors lead to increased surface air pressure and reduced precipitation over the Mediterranean and Southern Europe.

    Q: What can we expect over the coming decades in terms of the frequency and severity of these kinds of droughts, floods, and other extremes in European weather?

    Tuel: Climate models have long shown that the frequency and intensity of heat waves was bound to increase as the global climate warms, and Europe is no exception. The reason is simple: As the global temperature rises, the temperature distribution shifts toward higher values, and heat waves become more intense and more frequent. Southern Europe and the Mediterranean, however, will be hit particularly hard. The reason for this is related to the land-atmosphere feedbacks we just discussed. Winter precipitation over the Mediterranean and spring precipitation over southern Europe will decline significantly, which will lead to a decrease in early summer soil moisture over southern Europe and will push average summer temperatures even higher; the region will become a true climate change hot spot. In that sense, 2022 may really be a taste of the future. The succession of recent heat waves in Europe, however, suggests that things may be going faster than climate model projections imply. Decadal variability or badly understood trends in large-scale atmospheric dynamics may play a role here, though that is still debated. Another possibility is that climate models tend to underestimate the magnitude of land-atmosphere feedbacks and downplay the influence of dry soil moisture anomalies on summertime weather.

    Potential trends in floods are more difficult to assess because floods result from a multiplicity of factors, like extreme precipitation, soil moisture levels, or land cover. Extreme precipitation is generally expected to increase in most regions, but very high uncertainties remain, notably because extreme precipitation is highly dependent on atmospheric dynamics about which models do not always agree. What is almost certain is that with warming, the water content of the atmosphere increases (following a law of thermodynamics known as the Clausius-Clapeyron relationship). Thus, if the dynamics are favorable to precipitation, a lot more of it may fall in a warmer climate. Last year’s floods in Germany, for example, were triggered by unprecedented heavy rainfall which climate change made more likely.
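
    For reference, the Clausius-Clapeyron relationship mentioned above is the standard textbook result, not a finding of this study: the saturation vapor pressure of water rises nearly exponentially with temperature, at roughly

    ```latex
    \frac{1}{e_s}\frac{de_s}{dT} = \frac{L_v}{R_v T^2}
      \approx \frac{2.5\times10^{6}\ \mathrm{J\,kg^{-1}}}
                   {(461\ \mathrm{J\,kg^{-1}\,K^{-1}})\,(288\ \mathrm{K})^2}
      \approx 0.07\ \mathrm{K^{-1}},
    ```

    or about 7 percent more water-holding capacity per degree of warming, which is why favorable storm dynamics can deliver substantially heavier rainfall in a warmer climate.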

  •

    Professor Emeritus Richard “Dick” Eckaus, who specialized in development economics, dies at 96

    Richard “Dick” Eckaus, Ford Foundation International Professor of Economics, emeritus, in the Department of Economics, died on Sept. 11 in Boston. He was 96 years old.

    Eckaus was born in Kansas City, Missouri, on April 30, 1926, the youngest of three children of parents who had emigrated from Lithuania. His father, Julius Eckaus, was a tailor, and his mother, Bessie (Finkelstein) Eckaus, helped run the business. The family struggled to make ends meet financially, but academic success offered Eckaus a way forward.

    He graduated from Westport High School, joined the United States Navy, and was awarded a college scholarship via the V-12 Navy College Training Program during World War II to study electrical engineering at Iowa State University. After graduating in 1944, Eckaus served on a base in New York State until he was discharged in 1946 as lieutenant junior grade.

    He attended Washington University in St. Louis, Missouri, on the GI Bill, graduating in 1948 with a master’s degree in economics, before relocating to Boston and serving as instructor of economics at Babson Institute, and then assistant and associate professor of economics at Brandeis University from 1951 to 1962. He concurrently earned a PhD in economics from MIT in 1954.

    The following year, the American Economic Review published “The Factor Proportions Problem in Economic Development,” a paper written by Eckaus that remained part of the macroeconomics canon for decades. He returned to MIT in 1962 and went on to teach development economics to generations of MIT students, serving as head of the department from 1986 to 1990 and continuing to work there for the remainder of his career.

    The development economist Paul Rosenstein-Rodan (1902-85), Eckaus’ mentor at MIT, took him to live and work first in Italy in 1954 and then in India in 1961. These stints helping governments abroad solidified Eckaus’ commitment to not only excelling in the field, but also creating opportunities for colleagues and students to contribute as well — occasionally in conjunction with the World Bank.

    Longtime colleague Abhijit Banerjee, a Nobel laureate, Ford Foundation International Professor of Economics, and director of the Abdul Latif Jameel Poverty Action Lab at MIT, recalls reading a reprint of Eckaus’ 1955 paper as an undergraduate in India. When he subsequently arrived at MIT as a doctoral candidate, he remembers “trying to tread lightly and not to take up too much space,” around the senior economist. “In fact, he made me feel so welcome,” Banerjee says. “He was both an outstanding scholar and someone who had the modesty and generosity to make younger scholars feel valued and heard.”

    The field of development economics provided Eckaus with a broad, powerful platform to work with governments in developing countries — including India, Egypt, Bhutan, Mexico, and Portugal — to set up economic systems. His development planning models helped governments to forecast where their economies were headed and how public policies could be implemented to shift or accelerate the direction.

    The Government of Portugal awarded Eckaus the Great-Cross of the Order of Prince Henry the Navigator after he brought teams from MIT to assist the country in its peaceful transition to democracy following the 1974 Carnation Revolution. The effort, initiated at the request of the Portuguese Central Bank, involved graduate students who went on to become some of the most prominent economists of their generation in America, including Paul Krugman, Andrew Abel, Jeremy I. Bulow, and Kenneth Rogoff.

    His colleague for five decades, Paul Joskow, the Elizabeth and James Killian Professor of Economics at MIT, says that’s no surprise. “He was a real rock of the economics department. He deeply cared about the graduate students and younger faculty. He was a very supportive person.”

    Eckaus was also deeply interested in economic aspects of energy and the environment, and in 1991 was instrumental in the formation of the MIT Joint Program on the Science and Policy of Global Change, a program that integrates the natural and social sciences in the analysis of the global climate threat. As Joint Program co-founder Henry Jacoby observes, “Dick provided crucial ideas as to how that kind of interdisciplinary work might be done at MIT. He was already 65 at the time, and continued for three decades to be active in guiding the research and analysis.”

    Although Eckaus retired officially in 1996, he continued to attend weekly faculty lunches, conduct research, mentor colleagues, and write papers related to climate change and the energy crisis. He leaves behind a trove of more than 100 published papers and eight authored and co-authored books.

    “He was continuously retooling himself and creating new interests. I was impressed by his agility of mind and his willingness to shift to new areas,” says his oldest living friend and peer, Jagdish Bhagwati, Columbia University professor of economics, law, and international relations, emeritus, and director of the Raj Center on Indian Economic Policies. “In their early career, economists usually write short theoretical articles that make large points, and Dick did that with two seminal articles in the leading professional journals of the time, the Quarterly Journal of Economics and the American Economic Review. Then, he shifted his focus to building large computable models. He also diversified by working in an advisory capacity in countries as diverse as Portugal and India. He was a ‘complete’ economist who straddled all styles of economics with distinction.” 

    Eckaus is survived by his beloved wife of 32 years Patricia Leahy Meaney of Brookline, Massachusetts. The two traveled the world, hiked the Alps, and collected pre-Columbian and contemporary art. He is lovingly remembered by his daughter Susan Miller; his step-son James Meaney (Bruna); step-daughter Caitlin Meaney Burrows (Lee); and four grandchildren, Chloe Burrows, Finley Burrows, Brandon Meaney, and Maria Sophia Meaney.

    In lieu of flowers, please consider a donation in Eckaus’ name to MIT Economics (77 Massachusetts Ave., Building E52-300, Cambridge, MA 02139). A memorial in his honor will be held later this year.