More stories

  • Is there enough land on Earth to fight climate change and feed the world?

    Capping global warming at 1.5 degrees Celsius is a tall order. Achieving that goal will not only require a massive reduction in greenhouse gas emissions from human activities, but also a substantial reallocation of land to support that effort and sustain the biosphere, including humans. More land will be needed to accommodate a growing demand for bioenergy and nature-based carbon sequestration while ensuring sufficient acreage for food production and ecological sustainability.

    The expanding role of land in a 1.5 C world will be twofold: to remove carbon dioxide from the atmosphere and to produce clean energy. Land-based carbon dioxide removal strategies include bioenergy with carbon capture and storage; direct air capture; and afforestation/reforestation and other nature-based solutions (NBS). Land-based clean energy production includes wind and solar farms and sustainable bioenergy cropland. Any decision to allocate more land for climate mitigation must also address competing needs for long-term food security and ecosystem health.

    Land-based climate mitigation choices vary in terms of costs (amount of land required, implications for food security, impact on biodiversity and other ecosystem services) and benefits (potential for sequestering greenhouse gases and producing clean energy).

    Now a study in the journal Frontiers in Environmental Science provides the most comprehensive analysis to date of competing land-use and technology options to limit global warming to 1.5 C. Led by researchers at the MIT Center for Sustainability Science and Strategy (CS3), the study applies the MIT Integrated Global System Modeling (IGSM) framework to evaluate the costs and benefits of different land-based climate mitigation options in Sky2050, a 1.5 C climate-stabilization scenario developed by Shell.

    Under this scenario, demand for bioenergy and natural carbon sinks increases along with the need for sustainable farming and food production. To determine if there is enough land to meet all these growing demands, the research team uses the giga-hectare (gha) — 1 billion hectares, or about 2.47 billion acres — as the standard unit of measurement, along with current estimates of the Earth’s total habitable land area (about 10 gha) and the land area used for food production and bioenergy (5 gha).

    The team finds that with transformative changes in policy, land management practices, and consumption patterns, global land is sufficient to provide a sustainable supply of food and ecosystem services throughout this century while also reducing greenhouse gas emissions in alignment with the 1.5 C goal. These transformative changes include policies to protect natural ecosystems; stop deforestation and accelerate reforestation and afforestation; promote advances in sustainable agriculture technology and practice; reduce agricultural and food waste; and incentivize consumers to purchase sustainably produced goods.

    If such changes are implemented, 2.5–3.5 gha of land would be used for NBS practices to sequester 3–6 gigatonnes (Gt) of CO2 per year, and 0.4–0.6 gha of land would be allocated for energy production — 0.2–0.3 gha for bioenergy and 0.2–0.35 gha for wind and solar power generation.

    “Our scenario shows that there is enough land to support a 1.5 degree C future as long as effective policies at national and global levels are in place,” says CS3 Principal Research Scientist Angelo Gurgel, the study’s lead author.
    “These policies must not only promote efficient use of land for food, energy, and nature, but also be supported by long-term commitments from government and industry decision-makers.”
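
    The arithmetic behind that conclusion can be checked directly from the figures quoted above. Below is a minimal Python sketch that sums the scenario’s mitigation land against total habitable land; the simple additive accounting and all variable names are illustrative assumptions, not the study’s actual IGSM bookkeeping.

        # Land-budget sanity check using the figures quoted in the article (units: gha).
        # The additive model is an illustrative assumption, not the study's accounting.
        habitable = 10.0          # Earth's total habitable land area
        food_and_bioenergy = 5.0  # land currently used for food production and bioenergy

        scenarios = {
            "low":  {"nbs": 2.5, "bioenergy": 0.2, "wind_solar": 0.20},
            "high": {"nbs": 3.5, "bioenergy": 0.3, "wind_solar": 0.35},
        }

        for label, use in scenarios.items():
            mitigation = sum(use.values())  # land devoted to NBS plus clean energy
            remaining = habitable - food_and_bioenergy - mitigation
            print(f"{label}: mitigation = {mitigation:.2f} gha, unallocated = {remaining:.2f} gha")

    Even at the high end of the ranges, this crude check leaves habitable land unallocated, consistent with the study’s headline finding that the 1.5 C land demands can fit.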

  • New AI tool generates realistic satellite images of future flooding

    Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

    MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

    As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit, as well as with AI-generated images that did not include a physics-based flood model.

    The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

    The team’s method is a proof of concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in those regions.

    “The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

    To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

    The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

    Generative adversarial images

    The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

    “Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

    For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane.
    The second “discriminator” network is then trained to distinguish between real satellite imagery and imagery synthesized by the first network. Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features that shouldn’t be there in an otherwise realistic image.

    “Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, so that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

    Flood hallucinations

    In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions about how to prepare and potentially evacuate people out of harm’s way.

    Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure, and generates a visual, color-coded map of flood elevations over a particular region.

    “The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

    The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken over Houston before and after Hurricane Harvey. When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

    To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.

    “We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says.
    “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

    The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.
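
    For readers who want a concrete picture of the conditional GAN described above, here is a minimal PyTorch sketch of one adversarial training step. The tiny architectures, tensor sizes, and the choice to condition both networks on a physics-derived flood mask are illustrative assumptions, not the Earth Intelligence Engine’s actual code.

        import torch
        import torch.nn as nn

        class Generator(nn.Module):
            """Maps a pre-storm image (3 ch) plus a physics-model flood mask (1 ch) to a post-storm image."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 3, 3, padding=1), nn.Tanh(),
                )
            def forward(self, pre_img, flood_mask):
                return self.net(torch.cat([pre_img, flood_mask], dim=1))

        class Discriminator(nn.Module):
            """Scores (conditioning, image) pairs as real or synthetic."""
            def __init__(self):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(7, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),   # 64x64 -> 32x32
                    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),  # 32x32 -> 16x16
                    nn.Flatten(), nn.Linear(64 * 16 * 16, 1),                      # real/fake logit
                )
            def forward(self, pre_img, flood_mask, post_img):
                return self.net(torch.cat([pre_img, flood_mask, post_img], dim=1))

        G, D = Generator(), Discriminator()
        opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
        opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
        bce = nn.BCEWithLogitsLoss()

        # Dummy batch; real training pairs would be pre/post-Harvey satellite tiles
        # plus the hydraulic model's flood-extent mask for the same tiles.
        pre, mask, post = torch.rand(8, 3, 64, 64), torch.rand(8, 1, 64, 64), torch.rand(8, 3, 64, 64)

        # Discriminator step: push real pairs toward 1, generated pairs toward 0.
        fake = G(pre, mask).detach()
        loss_d = bce(D(pre, mask, post), torch.ones(8, 1)) + bce(D(pre, mask, fake), torch.zeros(8, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator step: try to fool the discriminator; the flood mask constrains
        # where flooding can appear, which is what suppresses hallucinations.
        loss_g = bce(D(pre, mask, G(pre, mask)), torch.ones(8, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()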

  • Study finds mercury pollution from human activities is declining

    MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

    In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.

    They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.

    Global inventories, on the other hand, have reported the opposite trend. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

    “Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

    The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.

    However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

    “It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

    Mercury mismatch

    The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.

    The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.

    This was puzzling news for environmental scientists like Selin.
    Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.

    Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.

    “The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

    Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.

    At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have a limited understanding of all the chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to those for other polluting gases like methane and nitrous oxide.

    “One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

    Multifaceted models

    The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.

    By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.

    Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline. Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.

    For instance, they tested the hypothesis that an additional environmental sink is removing more mercury from the atmosphere than previously thought. The models would indicate the feasibility of an unknown sink of that magnitude.

    “As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

    Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many of the stations the researchers evaluated are no longer operational because of a lack of funding.

    While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.

    One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China.
    But there was still a gap between observations and estimates.

    They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and from mercury-containing products.

    Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.

    In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

    “We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

    In the future, researchers from multiple countries, including at MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

    This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.
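
    The aggregation idea described above can be sketched in a few lines of Python. The synthetic station records, the nanmean-based gap handling, and the least-squares trend fit below are illustrative assumptions, not the study’s actual statistical methods.

        import numpy as np

        rng = np.random.default_rng(0)
        years = np.arange(2005, 2021)

        def make_station():
            """Synthetic station record: ~10% decline over 2005-2020, plus noise and gaps."""
            conc = 1.6 * (1 - 0.10 * (years - 2005) / 15) + rng.normal(0, 0.03, years.size)
            conc[rng.random(years.size) < 0.2] = np.nan  # randomly missing years
            return conc

        # Eleven regions, each with a handful of stations.
        regions = [np.array([make_station() for _ in range(5)]) for _ in range(11)]

        # Regional means smooth over station-level gaps; then combine regions.
        regional = np.array([np.nanmean(stations, axis=0) for stations in regions])
        hemispheric = np.nanmean(regional, axis=0)

        slope, intercept = np.polyfit(years, hemispheric, 1)
        pct_change = 100 * slope * (years[-1] - years[0]) / (slope * years[0] + intercept)
        print(f"fitted 2005-2020 change: {pct_change:.1f}%")  # recovers roughly -10%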

  • 3 Questions: The past, present, and future of sustainability science

    It was 1978, over a decade before the word “sustainable” would infiltrate environmental nomenclature, and Ronald Prinn, MIT professor of atmospheric science, had just founded the Advanced Global Atmospheric Gases Experiment (AGAGE). Today, AGAGE provides real-time measurements for well over 50 environmentally harmful trace gases, enabling us to determine emissions at the country level, a key element in verifying national adherence to the Montreal Protocol and the Paris Accord. This, Prinn says, started him thinking about doing science that informed decision-making.

    Much like global interest in sustainability, Prinn’s interest and involvement continued to grow into what would become three decades’ worth of achievements in sustainability science. The Center for Global Change Science (CGCS) and the Joint Program on the Science and Policy of Global Change, respectively founded and co-founded by Prinn, have recently joined forces to create the MIT School of Science’s new Center for Sustainability Science and Strategy (CS3), led by former CGCS postdoc turned MIT professor Noelle Selin.

    As he prepares to pass the torch, Prinn reflects on how far sustainability has come, and where it all began.

    Q: Tell us about the motivation for the MIT centers you helped to found around sustainability.

    A: In 1990, after I founded the Center for Global Change Science, I also co-founded the Joint Program on the Science and Policy of Global Change with a very important partner, [Henry] “Jake” Jacoby. He’s now retired, but at that point he was a professor in the MIT Sloan School of Management. Together, we determined that in order to answer questions related to what we now call sustainability of human activities, you need to combine the natural and social sciences involved in these processes. Based on this, we decided to make a joint program between the CGCS and a center that he directed, the Center for Energy and Environmental Policy Research (CEEPR).

    It was called the “joint program” and was joint for two reasons — not only were two centers joining, but two disciplines were joining. It was not about simply doing the same science. It was about bringing a team of people together that could tackle these coupled issues of environment, human development, and economy. We were the first group in the world to fully integrate these elements together.

    Q: What has been your most impactful contribution, and what effect did it have on the greater public’s overall understanding?

    A: Our biggest contribution is the development, and more importantly the application, of the Integrated Global System Model [IGSM] framework, looking at human development in both developing and developed countries, which had a significant impact on the way people thought about climate issues. With the IGSM, we were able to look at the interactions among human and natural components, studying the feedbacks and impacts that climate change had on human systems, like how it would alter agriculture and other land activities, how it would alter things we derive from the ocean, and so on.

    Policies were being developed largely by economists or climate scientists working independently, and we started showing how the real answers and analysis required a coupling of all of these components.
    We showed, I think convincingly, that what people used to study independently must be coupled together, because the impacts of climate change and air pollution affected so many things.

    To address the value of policy, despite the uncertainty in climate projections, we ran multiple runs of the IGSM with and without policy, with different choices for uncertain IGSM variables. For public communication, around 2005, we introduced our signature Greenhouse Gamble interactive visualization tools; these have been renewed over time as science and policies evolved.

    Q: What can MIT provide now at this critical juncture in understanding climate change and its impact?

    A: We need to further push the boundaries of integrated global system modeling to ensure full sustainability of human activity and all of its beneficial dimensions, which is the exciting focus that the CS3 is designed to address. We need to focus on sustainability as a central core element and use it to not just analyze existing policies but to propose new ones. Sustainability is not just climate or air pollution; it’s got to do with human impacts in general. Human health is central to sustainability, and equity is equally important. We need to expand the capability for credibly assessing the impact policies have not just on developed countries, but on developing countries, taking into account that many places around the world are at artisanal levels of their economies. They cannot be blamed for anything that is changing climate and causing air pollution and other detrimental things that are currently going on. They need our help. That’s what sustainability is in its full dimensions.

    Our capabilities are evolving toward a modeling system so detailed that we can find out detrimental things about policies, even at local levels, before investing in changing infrastructure. This is going to require collaboration among even more disciplines and creating a seamless connection between research and decision-making; not just for policies enacted in the public sector, but also for decisions that are made in the private sector.
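
    The “multiple runs ... with and without policy” approach Prinn describes is, at its core, a Monte Carlo ensemble over uncertain model inputs. The toy Python sketch below illustrates that logic with a one-line stand-in for the model; the parameter ranges and the toy_warming function are invented for illustration and bear no relation to actual IGSM output.

        import random

        random.seed(42)

        def toy_warming(cumulative_emissions, climate_sensitivity):
            """Toy stand-in for a full earth-system model run (warming by 2100, degrees C)."""
            return climate_sensitivity * cumulative_emissions / 1000.0

        def ensemble(policy, n=100_000):
            runs = []
            for _ in range(n):
                sensitivity = random.uniform(2.0, 4.5)  # uncertain model parameter
                emissions = random.uniform(400, 700) if policy else random.uniform(1200, 2000)
                runs.append(toy_warming(emissions, sensitivity))
            return sorted(runs)

        for label, policy in (("no policy", False), ("with policy", True)):
            runs = ensemble(policy)
            median, p95 = runs[len(runs) // 2], runs[int(0.95 * len(runs))]
            print(f"{label}: median {median:.1f} C, 95th percentile {p95:.1f} C")

    Comparing the two resulting distributions, rather than two single runs, is what allowed tools like the Greenhouse Gamble to communicate the odds of different outcomes with and without policy.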

  • Study: Rocks from Mars’ Jezero Crater, which likely predate life on Earth, contain signs of water

    In a new study appearing today in the journal AGU Advances, scientists at MIT and NASA report that seven rock samples collected along the “fan front” of Mars’ Jezero Crater contain minerals that are typically formed in water. The findings suggest that the rocks were originally deposited by water, or may have formed in the presence of water.

    The seven samples were collected by NASA’s Perseverance rover in 2022 during its exploration of the crater’s western slope, where some rocks were hypothesized to have formed in what is now a dried-up ancient lake. Members of the Perseverance science team, including MIT scientists, have studied the rover’s images and chemical analyses of the samples, and confirmed that the rocks indeed contain signs of water, and that the crater was likely once a watery, habitable environment.

    Whether the crater was actually inhabited is yet unknown. The team found that the presence of organic matter — the starting material for life — cannot be confirmed, at least based on the rover’s measurements. But judging from the rocks’ mineral content, scientists believe the samples are their best chance of finding signs of ancient Martian life once the rocks are returned to Earth for more detailed analysis.

    “These rocks confirm the presence, at least temporarily, of habitable environments on Mars,” says the study’s lead author, Tanja Bosak, professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What we’ve found is that indeed there was a lot of water activity. For how long, we don’t know, but certainly for long enough to create these big sedimentary deposits.”

    What’s more, some of the collected samples may have originally been deposited in the ancient lake more than 3.5 billion years ago — before even the first signs of life on Earth.

    “These are the oldest rocks that may have been deposited by water, that we’ve ever laid hands or rover arms on,” says co-author Benjamin Weiss, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “That’s exciting, because it means these are the most promising rocks that may have preserved fossils, and signatures of life.”

    The study’s MIT co-authors include postdoc Eva Scheller and research scientist Elias Mansbach, along with members of the Perseverance science team.

    At the front

    NASA’s Perseverance rover collected rock samples from two locations seen in this image of Mars’ Jezero Crater: “Wildcat Ridge” (lower left) and “Skinner Ridge” (upper right).

    Credit: NASA/JPL-Caltech/ASU/MSSS

    The new rock samples were collected in 2022 as part of the rover’s Fan Front Campaign — an exploratory phase during which Perseverance traversed Jezero Crater’s western slope, where a fan-like region contains sedimentary, layered rocks. Scientists suspect that this “fan front” is an ancient delta that was created by sediment that flowed with a river and settled into a now bone-dry lakebed. If life existed on Mars, scientists believe that it could be preserved in the layers of sediment along the fan front.

    In the end, Perseverance collected seven samples from various locations along the fan front. The rover obtained each sample by drilling into the Martian bedrock and extracting a pencil-sized core, which it then sealed in a tube to one day be retrieved and returned to Earth for detailed analysis.

    Composed of multiple images from NASA’s Perseverance Mars rover, this mosaic shows a rocky outcrop called “Wildcat Ridge,” where the rover extracted two rock cores and abraded a circular patch to investigate the rock’s composition.

    Credit: NASA/JPL-Caltech/ASU/MSSS

    Prior to extracting the cores, the rover took images of the surrounding sediments at each of the seven locations. The science team then processed the imaging data to estimate a sediment’s average grain size and mineral composition. This analysis showed that all seven collected samples likely contain signs of water, suggesting that they were initially deposited by water.

    Specifically, Bosak and her colleagues found evidence of certain minerals in the sediments that are known to precipitate out of water.

    “We found lots of minerals like carbonates, which are what make reefs on Earth,” Bosak says. “And it’s really an ideal material that can preserve fossils of microbial life.”

    Interestingly, the researchers also identified sulfates in some samples that were collected at the base of the fan front. Sulfates are minerals that form in very salty water — another sign that water was present in the crater at one time — though very salty water, Bosak notes, “is not necessarily the best thing for life.” If the entire crater was once filled with very salty water, then it would be difficult for any form of life to thrive. But if only the bottom of the lake were briny, that could be an advantage, at least for preserving any signs of life that may have lived further up in less salty layers, and that eventually died and drifted down to the bottom.

    “However salty it was, if there were any organics present, it’s like pickling something in salt,” Bosak says. “If there was life that fell into the salty layer, it would be very well-preserved.”

    Fuzzy fingerprints

    But the team emphasizes that organic matter has not been confidently detected by the rover’s instruments. Organic matter can be a sign of life, but it can also be produced by certain geological processes that have nothing to do with living matter. Perseverance’s predecessor, the Curiosity rover, detected organic matter throughout Mars’ Gale Crater, which scientists suspect may have come from asteroids that impacted Mars in the past.

    And in a previous campaign, Perseverance detected what appeared to be organic molecules at multiple locations along Jezero Crater’s floor. These observations were taken by the rover’s Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) instrument, which uses ultraviolet light to scan the Martian surface. If organics are present, they can glow, similar to material under a black light. The wavelengths at which the material glows act as a sort of fingerprint for the kind of organic molecules that are present.

    In Perseverance’s previous exploration of the crater floor, SHERLOC appeared to pick up signs of organic molecules throughout the region, and later, at some locations along the fan front. But a careful analysis, led by MIT’s Eva Scheller, found that while the particular wavelengths observed could be signs of organic matter, they could just as well be signatures of substances that have nothing to do with organic matter.

    “It turns out that cerium metals incorporated in minerals actually produce very similar signals as the organic matter,” Scheller says. “When investigated, the potential organic signals were strongly correlated with phosphate minerals, which always contain some cerium.”

    Scheller’s work shows that the rover’s measurements cannot be interpreted definitively as organic matter.

    “This is not bad news,” Bosak says. “It just tells us there is not very abundant organic matter. It’s still possible that it’s there.
    It’s just below the rover’s detection limit.”

    When the collected samples are finally sent back to Earth, Bosak says, laboratory instruments will have more than enough sensitivity to detect any organic matter that might lie within.

    “On Earth, once we have microscopes with nanometer-scale resolution, and various types of instruments that we cannot staff on one rover, then we can actually attempt to look for life,” she says.

    This work was supported, in part, by NASA.
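
    Scheller’s confound lends itself to a simple illustration: if a candidate “organic” fluorescence signal rises and falls with phosphate (and hence cerium) abundance across scan points, the signal cannot be attributed to organics alone. The Python sketch below uses synthetic data; the Pearson correlation and the 0.7 threshold are illustrative assumptions, not the team’s actual analysis.

        import numpy as np

        rng = np.random.default_rng(7)
        n_points = 200  # hypothetical scan points across the crater floor

        phosphate = rng.random(n_points)  # mineral abundance at each scan point
        fluorescence = 0.9 * phosphate + rng.normal(0, 0.1, n_points)  # candidate "organic" signal

        r = np.corrcoef(phosphate, fluorescence)[0, 1]
        print(f"correlation with phosphate: r = {r:.2f}")
        if r > 0.7:
            print("signal tracks phosphate (cerium-bearing) minerals; not uniquely organic")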

  • MIT School of Science launches Center for Sustainability Science and Strategy

    The MIT School of Science is launching a center to advance knowledge and computational capabilities in the field of sustainability science, and to support decision-makers in government, industry, and civil society in achieving sustainable development goals.

    Aligned with the Climate Project at MIT, researchers at the MIT Center for Sustainability Science and Strategy will develop and apply expertise from across the Institute to improve understanding of sustainability challenges, and thereby provide actionable knowledge and insight to inform strategies for improving human well-being for current and future generations.

    Noelle Selin, professor at MIT’s Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences, will serve as the center’s inaugural faculty director. C. Adam Schlosser and Sergey Paltsev, senior research scientists at MIT, will serve as deputy directors, with Anne Slinn as executive director.

    Incorporating and succeeding both the Center for Global Change Science and the Joint Program on the Science and Policy of Global Change while adding new capabilities, the center aims to produce leading-edge research to help guide societal transitions toward a more sustainable future. Drawing on the long history of MIT’s efforts to address global change and its integrated environmental and human dimensions, the center is well-positioned to lead burgeoning global efforts to advance the field of sustainability science, which seeks to understand nature-society systems in their full complexity. This understanding is designed to be relevant and actionable for decision-makers in government, industry, and civil society in their efforts to develop viable pathways to improve quality of life for multiple stakeholders.

    “As critical challenges such as climate, health, energy, and food security increasingly affect people’s lives around the world, decision-makers need a better understanding of the earth in its full complexity — and that includes people, technologies, and institutions as well as environmental processes,” says Selin. “Better knowledge of these systems and how they interact can lead to more effective strategies that avoid unintended consequences and ensure an improved quality of life for all.”

    Advancing knowledge, computational capability, and decision support

    To produce more precise and comprehensive knowledge of sustainability challenges and guide decision-makers to formulate more effective strategies, the center has set the following goals:

    Advance fundamental understanding of the complex interconnected physical and socio-economic systems that affect human well-being. As new policies and technologies are developed amid climate and other global changes, they interact with environmental processes and institutions in ways that can alter the earth’s critical life-support systems. Fundamental mechanisms that determine many of these systems’ behaviors, including those related to interacting climate, water, food, and socio-economic systems, remain largely unknown and poorly quantified. Better understanding can help society mitigate the risks of abrupt changes and “tipping points” in these systems.

    Develop, establish, and disseminate new computational tools toward better understanding earth systems, including both environmental and human dimensions. The center’s work will integrate modeling and data analysis across disciplines in an era of increasing volumes of observational data.
    MIT multi-system models and data products will provide robust information to inform decision-making and shape the next generation of sustainability science and strategy.

    Produce actionable science that supports equity and justice within and across generations. The center’s research will be designed to inform action associated with measurable outcomes aligned with supporting human well-being across generations. This requires engaging a broad range of stakeholders, including not only nations and companies, but also nongovernmental organizations and communities that take action to promote sustainable development — with special attention to those who have historically borne the brunt of environmental injustice.

    “The center’s work will advance fundamental understanding in sustainability science, leverage leading-edge computing and data, and promote engagement and impact,” says Selin. “Our researchers will help lead scientists and strategists across the globe who share MIT’s commitment to mobilizing knowledge to inform action toward a more sustainable world.”

    Building a better world at MIT

    Building on existing MIT capabilities in sustainability, science, and strategy, the center aims to:

    focus research, education, and outreach under a theme that reflects a comprehensive state of the field and international research directions, fostering a dynamic community of students, researchers, and faculty;

    raise the visibility of sustainability science at MIT, emphasizing links between science and action, in the context of existing Institute goals and other efforts on climate and sustainability, and in a way that reflects the vital contributions of a range of natural and social science disciplines to understanding human-environment systems; and

    re-emphasize MIT’s long-standing expertise in integrated systems modeling while leveraging the Institute’s concurrent leading-edge strengths in data and computing, establishing leadership that harnesses recent innovations, including those in machine learning and artificial intelligence, toward addressing the science challenges of global change and sustainability.

    “The Center for Sustainability Science and Strategy will provide the necessary synergy for our MIT researchers to develop, deploy, and scale up serious solutions to climate change and other critical sustainability challenges,” says Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics and dean of the MIT School of Science. “With Professor Selin at its helm, the center will also ensure that these solutions are created in concert with the people who are directly affected now and in the future.”

    The center builds on more than three decades of achievements by the Center for Global Change Science and the Joint Program on the Science and Policy of Global Change, both of which were directed or co-directed by professor of atmospheric science Ronald Prinn.

  • Scientists find a human “fingerprint” in the upper troposphere’s increasing ozone

    Ozone can be an agent of good or harm, depending on where you find it in the atmosphere. Way up in the stratosphere, the colorless gas shields the Earth from the sun’s harsh ultraviolet rays. But closer to the ground, ozone is a harmful air pollutant that can trigger chronic health problems including chest pain, difficulty breathing, and impaired lung function.

    And somewhere in between, in the upper troposphere — the layer of the atmosphere just below the stratosphere, where most aircraft cruise — ozone contributes to warming the planet as a potent greenhouse gas.

    There are signs that ozone is continuing to rise in the upper troposphere despite efforts to reduce its sources at the surface in many nations. Now, MIT scientists confirm that much of ozone’s increase in the upper troposphere is likely due to humans.

    In a paper appearing today in the journal Environmental Science and Technology, the team reports that they detected a clear signal of human influence on upper tropospheric ozone trends in a 17-year satellite record starting in 2005.

    “We confirm that there’s a clear and increasing trend in upper tropospheric ozone in the northern midlatitudes due to human beings rather than climate noise,” says study lead author Xinyuan Yu, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    “Now we can do more detective work and try to understand what specific human activities are leading to this ozone trend,” adds co-author Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in Earth, Atmospheric and Planetary Sciences.

    The study’s MIT authors include Sebastian Eastham and Qindan Zhu, along with Benjamin Santer at the University of California at Los Angeles, Gustavo Correa of Columbia University, Jean-François Lamarque at the National Center for Atmospheric Research, and Jerald Ziemke at NASA Goddard Space Flight Center.

    Ozone’s tangled web

    Understanding ozone’s causes and influences is a challenging exercise. Ozone is not emitted directly, but instead is a product of “precursors” — starting ingredients, such as nitrogen oxides and volatile organic compounds (VOCs), that react in the presence of sunlight to form ozone. These precursors are generated from vehicle exhaust, power plants, chemical solvents, industrial processes, aircraft emissions, and other human activities.

    Whether and how long ozone lingers in the atmosphere depends on a tangle of variables, including the type and extent of human activities in a given area, as well as natural climate variability. For instance, a strong El Niño year could nudge the atmosphere’s circulation in a way that affects ozone’s concentrations, regardless of how much ozone humans are contributing to the atmosphere that year.

    Disentangling the human- versus climate-driven causes of ozone trends, particularly in the upper troposphere, is especially tricky. Complicating matters is the fact that in the lower troposphere — the lowest layer of the atmosphere, closest to ground level — ozone has stopped rising, and has even fallen in some regions at northern midlatitudes over the last few decades. This decrease in lower tropospheric ozone is mainly a result of efforts in North America and Europe to reduce industrial sources of air pollution.

    “Near the surface, ozone has been observed to decrease in some regions, and its variations are more closely linked to human emissions,” Yu notes.
    “In the upper troposphere, the ozone trends are less well-monitored but seem to decouple from those near the surface, and ozone is more easily influenced by climate variability. So, we don’t know whether and how much of that increase in observed ozone in the upper troposphere is attributable to humans.”

    A human signal amid climate noise

    Yu and Fiore wondered whether a human “fingerprint” in ozone levels, caused directly by human activities, could be strong enough to be detectable in satellite observations in the upper troposphere. To see such a signal, the researchers would first have to know what to look for.

    For this, they looked to simulations of the Earth’s climate and atmospheric chemistry. Following approaches developed in climate science, they reasoned that if they could simulate a number of possible climate variations in recent decades, all with identical human-derived sources of ozone precursor emissions, but each starting with a slightly different climate condition, then any differences among these scenarios should be due to climate noise. By inference, any common signal that emerged when averaging over the simulated scenarios should be due to human-driven causes. Such a signal, then, would be a “fingerprint” revealing human-caused ozone, which the team could look for in actual satellite observations.

    With this strategy in mind, the team ran simulations using a state-of-the-art chemistry climate model. They ran multiple climate scenarios, each starting from the year 1950 and running through 2014.

    From their simulations, the team saw a clear and common signal across scenarios, which they identified as a human fingerprint. They then looked to tropospheric ozone products derived from multiple instruments aboard NASA’s Aura satellite.

    “Quite honestly, I thought the satellite data were just going to be too noisy,” Fiore admits. “I didn’t expect that the pattern would be robust enough.”

    But the satellite observations they used gave them a good enough shot. The team looked through the upper tropospheric ozone data derived from the satellite products, from the years 2005 to 2021, and found that, indeed, they could see the signal of human-caused ozone that their simulations predicted. The signal is especially pronounced over Asia, where industrial activity has risen significantly in recent decades and where abundant sunlight and frequent weather events loft pollution, including ozone and its precursors, to the upper troposphere.

    Yu and Fiore are now looking to identify the specific human activities that are leading to ozone’s increase in the upper troposphere.

    “Where is this increasing trend coming from? Is it the near-surface emissions from combusting fossil fuels in vehicle engines and power plants? Is it the aircraft that are flying in the upper troposphere? Is it the influence of wildland fires? Or some combination of all of the above?” Fiore says. “Being able to separate human-caused impacts from natural climate variations can help to inform strategies to address climate change and air pollution.”

    This research was funded, in part, by NASA.
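
    The fingerprinting logic described above can be condensed to a few lines: average an ensemble of runs that share human emissions but differ in initial conditions to isolate the forced signal, then ask whether that pattern stands out of the noise in the observations. The Python sketch below uses synthetic series; the numbers and the regression-based detection test are illustrative assumptions, not the study’s actual method.

        import numpy as np

        rng = np.random.default_rng(1)
        years = np.arange(2005, 2022)       # a 17-year satellite record
        forced = 0.03 * (years - years[0])  # common human-driven trend (arbitrary units)

        # Ensemble: identical emissions, different initial conditions -> different noise.
        members = np.array([forced + rng.normal(0, 0.15, years.size) for _ in range(30)])
        fingerprint = members.mean(axis=0)  # averaging cancels internal variability

        # Project the observations onto the fingerprint (a simple pattern match).
        observations = forced + rng.normal(0, 0.15, years.size)
        amplitude = np.polyfit(fingerprint, observations, 1)[0]
        noise = np.std([np.polyfit(fingerprint, m - fingerprint, 1)[0] for m in members])
        print(f"fingerprint amplitude in observations: {amplitude:.2f} (noise sigma ~ {noise:.2f})")

    An amplitude many noise standard deviations above zero is the sense in which the human signal is “detectable” against climate noise.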

  • A bright and airy hub for climate at MIT

    Seen from a distance, MIT’s Cecil and Ida Green Building (Building 54) — designed by renowned architect and MIT alumnus I.M. Pei ’40 — is one of the most iconic buildings on the Cambridge, Massachusetts, skyline. Home to the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS), the 21-story concrete structure soars over campus, topped with its distinctive spherical radar dome. Close up, however, it was a different story.

    A sunless, two-story, open-air plaza beneath the tower previously served as a nondescript gateway to the department’s offices, labs, and classrooms above. “It was cold and windy — probably the windiest place on campus,” EAPS department head Robert van der Hilst, the Schlumberger Professor of Earth and Planetary Sciences, told a packed auditorium inside the building in March. “You would pass through the elevators and disappear into the corridors, never to be seen again until the end of the day.”

    Van der Hilst was speaking at a dedication event to celebrate the opening of the renovated and expanded space, 60 years after the Green Building’s original dedication in 1964. In a dramatic transformation, the perpetually shaded expanse beneath the tower has been filled with an airy, glassed-in structure that is as inviting as the previous space was forbidding.

    Designed to meet LEED platinum certification, the newly constructed Tina and Hamid Moghadam Building (Building 55) seems to float next to the Brutalist tower, its glass façade both opening up the interior and reflecting the sunlight and green space outside. The 300-seat auditorium within the original tower has been similarly transformed, bringing light and space to the newly dubbed Dixie Lee Bryant (1891) Lecture Hall, named after the first person to earn a geology degree at MIT.

    Catalyzing collaboration

    The project is about more than updating an overlooked space. “The building we’re here to celebrate today does something else,” MIT President Sally Kornbluth said at the dedication.

    “In its lightness, in its transparency, it calls attention not to itself, but to the people gathered inside it. In its warmth, its openness, it makes room for culture and community. And it welcomes in those who don’t yet belong … as we take on the immense challenges of climate together,” she continued, referencing the recent launch of The Climate Project at MIT — a whole-of-MIT initiative to innovate bold solutions to climate change. In MIT’s famously decentralized structure, the Moghadam Building provides a new physical hub where students, scientists, and engineers interested in climate and the environment can congregate and share ideas.

    From the start, fostering this kind of multidisciplinary collaboration was part of Van der Hilst’s vision. In addition to serving as the flagship location for EAPS, Building 54 has long been the administrative home of the MIT-WHOI Joint Program in Oceanography/Applied Ocean Science and Engineering — a graduate program in partnership with the Woods Hole Oceanographic Institution. With the addition of Building 55, EAPS has now been joined by the MIT Environmental Solutions Initiative (ESI) — a campus-wide program fostering education, outreach, and innovation in earth system science, urban infrastructure, and sustainability — and will welcome closer collaboration with Terrascope, a first-year learning community that invites its students to take on real-world environmental challenges.

    A shared vision comes to life

    The building project dovetailed with the long-overdue refurbishment of the Green Building.
    After a multi-year fundraising campaign in which Van der Hilst spearheaded the department’s efforts, the project received a major boost from lead donors Tina and Hamid Moghadam ’77, SM ’78, allowing the department to break ground in November 2021.

    In Moghadam, chair and CEO of Prologis, which owns 1.2 billion square feet of warehouses and other logistics infrastructure worldwide, EAPS found a fellow champion for climate and environmental innovation. By putting solar panels on the roofs of Prologis buildings, the company is now the second-largest on-site producer of solar energy in the United States. “I don’t think there needs to be a trade-off between good sound economics and return on investment and solving climate change problems,” Moghadam said at the dedication. “The solutions that really work are the ones that actually make sense in a market economy.”

    Architectural firm AW-ARCH designed the Moghadam Building with a light touch, emphasizing spaciousness in contrast to the heavy concrete buildings that surround it. “The kind of delicacy and fragility of the thing is in some ways a depiction of what happens here,” said architect and co-founding partner Alex Anmahian at the dedication reception, giving a nod to the study of the delicate balance of the earth system itself. The sense is further illustrated by the responsiveness of the façade to the surrounding environment, which, depending on the time of day and quality of light, makes the glass alternately reflective and transparent.

    Inside, the 11,900-square-foot pavilion is highly flexible and serves as a showcase for the science that happens in the labs and offices above. Central to the space is a 16-foot by 9-foot video wall featuring vivid footage of field work, lab research, data visualizations, and natural phenomena — visible even to passers-by outside. The video wall is counterposed to an unpretentious set of stair-step bleachers leading to the second floor that could play host to anything from a scientific lecture to a community pizza-and-movie night.

    Van der Hilst has referred to his vision for the atrium as a “campus living room,” and the furniture throughout is intentionally chosen to allow for impromptu rearrangements, providing a valuable public space on campus for students to work and socialize.

    The second level is similarly adaptable, featuring three classrooms with state-of-the-art teaching technologies that can be transformed from a single large space for a hackathon into intimate rooms for discussion.

    “The space is really meant for a yet unforeseen experience,” Anmahian says. “The reason it is so open is to allow for any possibility.”

    The inviting, dynamic design of the pavilion has also become an instant point of pride for the building’s inhabitants.
    At the dedication, School of Science dean Nergis Mavalvala quipped that anyone walking into the space “gains two inches in height.” Van der Hilst quoted a colleague with a similar observation: “Now, when I come into this space, I feel respected by it.”

    The perfect complement

    Another significant feature of the project is the List Visual Arts Center Percent-for-Art Program installation by conceptual artist Julian Charrière, entitled “Everything Was Forever Until It Was No More.”

    Consisting of three interrelated works, the commission includes: “Not All Who Wander Are Lost,” three glacial erratic boulders that sit atop their own core samples in the surrounding green space; “We Are All Astronauts,” a trio of glass pillars containing vintage globes with distinctions between nations, land, and sea removed; and “Pure Waste,” a synthetic diamond embedded in the foundation, created from carbon captured from the air and the breath of researchers who work in the building.

    Known for themes that explore the transformation of the natural world over time and humanity’s complex relationship with our environment, Charrière was a perfect fit to complement the new Building 55 — offering a thought-provoking perspective on our current environmental challenges while underscoring the value of the research that happens within its walls.