More stories

  • Study: Ice flow is more sensitive to stress than previously thought

    The rate of glacier ice flow is more sensitive to stress than previously calculated, according to a new study by MIT researchers that upends a decades-old equation used to describe ice flow.

    Stress in this case refers to the forces acting on Antarctic glaciers, which are primarily influenced by gravity that drags the ice down toward lower elevations. Viscous glacier ice flows “really similarly to honey,” explains Joanna Millstein, a PhD student in the Glacier Dynamics and Remote Sensing Group and lead author of the study. “If you squeeze honey in the center of a piece of toast, and it piles up there before oozing outward, that’s the exact same motion that’s happening for ice.”

    The revision to the equation proposed by Millstein and her colleagues should improve models for making predictions about the ice flow of glaciers. This could help glaciologists predict how Antarctic ice flow might contribute to future sea level rise, although Millstein said the equation change is unlikely to raise estimates of sea level rise beyond the maximum levels already predicted under climate change models.

    “Almost all our uncertainties about sea level rise coming from Antarctica have to do with the physics of ice flow, though, so this will hopefully be a constraint on that uncertainty,” she says.

    Other authors on the paper, published in Nature Communications Earth and Environment, include Brent Minchew, the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, and Samuel Pegler, a university academic fellow at the University of Leeds.

    Benefits of big data

    The equation in question, called Glen’s Flow Law, is the most widely used equation to describe viscous ice flow. It was developed in 1958 by British scientist J.W. Glen, one of the few glaciologists working on the physics of ice flow in the 1950s, according to Millstein.

    With relatively few scientists working in the field until recently, and given the remoteness and inaccessibility of most large glacier ice sheets, there were few attempts to calibrate Glen’s Flow Law outside the lab. In the new study, Millstein and her colleagues took advantage of a wealth of satellite imagery over Antarctic ice shelves, the floating extensions of the continent’s ice sheet, to revise the stress exponent of the flow law.
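    For context, Glen’s Flow Law relates the strain rate of ice to the applied stress through a power law. The standard textbook form, stated here for orientation rather than quoted from the paper, is:

```latex
\dot{\varepsilon} = A \, \tau^{n}
```

    Here \(\dot{\varepsilon}\) is the strain rate, \(\tau\) is the deviatoric stress, \(A\) is a temperature-dependent prefactor, and \(n\) is the stress exponent. The conventional value from Glen’s laboratory work is \(n = 3\); a larger exponent means flow speed responds more steeply to the same change in stress, which is the direction of the revision the team’s satellite analysis supports.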

    “In 2002, this major ice shelf [Larsen B] collapsed in Antarctica, and all we have from that collapse is two satellite images that are a month apart,” she says. “Now, over that same area we can get [imagery] every six days.”

    The new analysis shows that “the ice flow in the most dynamic, fastest-changing regions of Antarctica — the ice shelves, which basically hold back and hug the interior of the continental ice — is more sensitive to stress than commonly assumed,” Millstein says. She’s optimistic that the growing record of satellite data will help capture rapid changes on Antarctica in the future, providing insights into the underlying physical processes of glaciers.   

    But stress isn’t the only thing that affects ice flow, the researchers note. Other parts of the flow law equation represent differences in temperature, ice grain size and orientation, and impurities and water contained in the ice — all of which can alter flow velocity. Factors like temperature could be especially important in understanding how ice flow impacts sea level rise in the future, Millstein says.

    Cracking under strain

    Millstein and colleagues are also studying the mechanics of ice sheet collapse, which involves different physical models than those used to understand the ice flow problem. “The cracking and breaking of ice is what we’re working on now, using strain rate observations,” Millstein says.

    The researchers use InSAR, radar images of the Earth’s surface collected by satellites, to observe deformations of the ice sheets that can be used to make precise measurements of strain. By observing areas of ice with high strain rates, they hope to better understand the rate at which crevasses and rifts propagate to trigger collapse.
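    As a rough illustration of how strain rates can be derived from remotely sensed velocities (a sketch of the general technique, not the team’s actual pipeline; the velocity fields below are synthetic), the strain-rate tensor is the symmetrized gradient of the surface velocity field:

```python
import numpy as np

# Hypothetical surface-velocity fields (m/yr) on a 1 km grid; real inputs
# would be InSAR-derived velocity maps.
ny, nx, dx = 50, 50, 1000.0          # grid shape and spacing in meters
y, x = np.mgrid[0:ny, 0:nx].astype(float)
vx = 100.0 + 2.0 * x                 # flow speeds up downstream -> extension
vy = 0.5 * y

# Velocity gradients; np.gradient returns the axis-0 (y) derivative first.
dvx_dy, dvx_dx = np.gradient(vx, dx)
dvy_dy, dvy_dx = np.gradient(vy, dx)

# Symmetric strain-rate tensor components (1/yr)
exx = dvx_dx
eyy = dvy_dy
exy = 0.5 * (dvx_dy + dvy_dx)

# Effective strain rate (second invariant), a common scalar summary
e_eff = np.sqrt(0.5 * (exx**2 + eyy**2) + exy**2)
print(e_eff.max())
```

    Regions where this effective strain rate is high are natural places to look for the crevassing and rifting the group is now studying.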

    The research was supported by the National Science Foundation.

  • Using soap to remove micropollutants from water

    Imagine millions of soapy sponges the size of human cells that can clean water by soaking up contaminants. This simplified picture describes technology that MIT chemical engineers have recently developed to remove micropollutants from water — a concerning, worldwide problem.

    Patrick S. Doyle, the Robert T. Haslam Professor of Chemical Engineering, PhD student Devashish Pratap Gokhale, and undergraduate Ian Chen recently published their research on micropollutant removal in the journal ACS Applied Polymer Materials. The work is funded by MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).

    In spite of their low concentrations (about 0.01–100 micrograms per liter), micropollutants can be hazardous to ecosystems and to human health. They come from a variety of sources and have been detected in almost all bodies of water, says Gokhale. Pharmaceuticals passing through people and animals, for example, can end up as micropollutants in the water supply. Others, like the endocrine disruptor bisphenol A (BPA), can leach from plastics during industrial manufacturing. Pesticides, dyes, petrochemicals, and per- and polyfluoroalkyl substances, more commonly known as PFAS, are also examples, as are some heavy metals like lead and arsenic. All of these can be toxic to humans and animals over time, potentially causing cancer, organ damage, developmental defects, or other adverse effects.

    Micropollutants are numerous, but because their collective mass is small, they are difficult to remove from water. Currently, the most common practice for removing micropollutants from water is activated carbon adsorption. In this process, water passes through a carbon filter, which removes only about 30 percent of micropollutants. Activated carbon requires high temperatures to produce and regenerate, which demands specialized equipment and consumes large amounts of energy. Reverse osmosis can also be used to remove micropollutants from water; however, “it doesn’t lead to good elimination of this class of molecules, because of both their concentration and their molecular structure,” explains Doyle.

    Inspired by soap

    When devising their solution for how to remove micropollutants from water, the MIT researchers were inspired by a common household cleaning supply — soap. Soap cleans everything from our hands and bodies to dirty dishes to clothes, so perhaps the chemistry of soap could also be applied to sanitizing water. Soap has molecules called surfactants which have both hydrophobic (water-hating) and hydrophilic (water-loving) components. When water comes in contact with soap, the hydrophobic parts of the surfactant stick together, assembling into spherical structures called micelles with the hydrophobic portions of the molecules in the interior. The hydrophobic micelle cores trap and help carry away oily substances like dirt. 

    Doyle’s lab synthesized micelle-laden hydrogel particles to essentially cleanse water. Gokhale explains that they used microfluidics which “involve processing fluids on very small, micron-like scales” to generate uniform polymeric hydrogel particles continuously and reproducibly. These hydrogels, which are porous and absorbent, incorporate a surfactant, a photoinitiator (a molecule that creates reactive species), and a cross-linking agent known as PEGDA. The surfactant assembles into micelles that are chemically bonded to the hydrogel using ultraviolet light. When water flows through this micro-particle system, micropollutants latch onto the micelles and separate from the water. The physical interaction used in the system is strong enough to pull micropollutants from water, but weak enough that the hydrogel particles can be separated from the micropollutants, restabilized, and reused. Lab testing shows that both the speed and extent of pollutant removal increase when the amount of surfactant incorporated into the hydrogels is increased.
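    The observation that more surfactant increases both the speed and extent of removal can be illustrated with a deliberately simple first-order uptake model (all parameters here are invented for illustration; the paper’s actual kinetics are more involved):

```python
import numpy as np

# Toy first-order uptake model (illustrative only; rate constant and
# loadings are invented, not taken from the paper):
#   dC/dt = -k * S * C
# where C is pollutant concentration and S is the relative surfactant
# (micelle-site) loading of the hydrogel particles.
def remaining_fraction(t, k, S):
    """Fraction of pollutant still in solution after time t (minutes)."""
    return np.exp(-k * S * t)

for S in (0.5, 1.0, 2.0):   # doubling S doubles the effective uptake rate
    frac = remaining_fraction(60.0, k=0.05, S=S)
    print(f"S={S}: {frac:.3f} of pollutant remains after 60 min")
```

    The monotonic trend with S mirrors the lab result qualitatively: more micelle sites, faster and more complete removal.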

    “We’ve shown that in terms of rate of pullout, which is what really matters when you scale this up for industrial use, that with our initial format, we can already outperform the activated carbon,” says Doyle. “We can actually regenerate these particles very easily at room temperature. Nearly 10 regeneration cycles with minimal change in performance,” he adds.

    Regeneration of the particles occurs by soaking the micelles in 90 percent ethanol, whereby “all the pollutants just come out of the particles and back into the ethanol,” says Gokhale. Ethanol is biosafe at low concentrations, inexpensive, and combustible, allowing for safe and economically feasible disposal. The recycling of the hydrogel particles makes this technology sustainable, a large advantage over activated carbon. The hydrogels can also be tuned to target any hydrophobic micropollutant, making this system a novel, flexible approach to water purification.

    Scaling up

    The team experimented in the lab using 2-naphthol, an organic pollutant of concern that is known to be difficult to remove by conventional water filtration methods. They hope to continue testing with real water samples.

    “Right now, we spike one micropollutant into pure lab water. We’d like to get water samples from the natural environment, that we can study and look at experimentally,” says Doyle. 

    By using microfluidics to increase particle production, Doyle and his lab hope to make household-scale filters to be tested with real wastewater. They then anticipate scaling up to municipal water treatment or even industrial wastewater treatment. 

    The lab recently filed an international patent application for their hydrogel technology that uses immobilized micelles. They plan to continue this work by experimenting with different kinds of hydrogels for the removal of heavy metal contaminants like lead from water. 

    Societal impacts

    Funded by a 2019 J-WAFS seed grant that is currently ongoing, this research has the potential to improve the speed, precision, efficiency, and environmental sustainability of water purification systems across the world. 

    “I always wanted to do work which had a social impact, and I was also always interested in water, because I think it’s really cool,” says Gokhale. He notes, “it’s really interesting how water sort of fits into different kinds of fields … we have to consider the cultures of peoples, how we’re going to use this, and then just the equity of these water processes.” Originally from India, Gokhale says he’s seen places that have barely any water at all and others that have floods year after year. “There’s a lot of interesting work to be done, and I think it’s work in this area that’s really going to impact a lot of people’s lives in years to come,” Gokhale says.

    Doyle adds, “water is the most important thing, perhaps for the next decades to come, so it’s very fulfilling to work on something that is so important to the whole world.”

  • Toward batteries that pack twice as much energy per pound

    In the endless quest to pack more energy into batteries without increasing their weight or volume, one especially promising technology is the solid-state battery. In these batteries, the usual liquid electrolyte that carries charges back and forth between the electrodes is replaced with a solid electrolyte layer. Such batteries could potentially not only deliver twice as much energy for their size but also virtually eliminate the fire hazard associated with today’s lithium-ion batteries.

    But one thing has held back solid-state batteries: instabilities at the boundary between the solid electrolyte layer and the two electrodes on either side can dramatically shorten the lifetime of such batteries. Some studies have used special coatings to improve the bonding between the layers, but this adds the expense of extra coating steps in the fabrication process. Now, a team of researchers at MIT and Brookhaven National Laboratory has come up with a way of achieving results that equal or surpass the durability of the coated surfaces, but with no need for any coatings.

    The new method simply requires eliminating any carbon dioxide present during a critical manufacturing step, called sintering, where the battery materials are heated to create bonding between the cathode and electrolyte layers, which are made of ceramic compounds. Even though the amount of carbon dioxide present is vanishingly small in air, measured in parts per million, its effects turn out to be dramatic and detrimental. Carrying out the sintering step in pure oxygen creates bonds that match the performance of the best coated surfaces, without that extra cost of the coating, the researchers say.

    The findings are reported in the journal Advanced Energy Materials, in a paper by MIT doctoral student Younggyu Kim, professor of nuclear science and engineering and of materials science and engineering Bilge Yildiz, and Iradikanari Waluyo and Adrian Hunt at Brookhaven National Laboratory.

    “Solid-state batteries have been desirable for different reasons for a long time,” Yildiz says. “The key motivating points for solid batteries are they are safer and have higher energy density,” but they have been held back from large scale commercialization by two factors, she says: the lower conductivity of the solid electrolyte, and the interface instability issues.

    The conductivity issue has been effectively tackled, and reasonably high-conductivity materials have already been demonstrated, according to Yildiz. But overcoming the instabilities that arise at the interface has been far more challenging. These instabilities can occur during both the manufacturing and the electrochemical operation of such batteries, but for now the researchers have focused on the manufacturing, and specifically the sintering process.

    Sintering is needed because if the ceramic layers are simply pressed onto each other, the contact between them is far from ideal: there are too many gaps, and the electrical resistance across the interface is high. Sintering, which for ceramic materials is usually done at temperatures of 1,000 degrees Celsius or above, causes atoms from each material to migrate into the other and form bonds. The team’s experiments showed that at temperatures anywhere above a few hundred degrees, detrimental reactions that increase the resistance at the interface take place — but only if carbon dioxide is present, even in tiny amounts. They demonstrated that avoiding carbon dioxide, and in particular maintaining a pure oxygen atmosphere during sintering, could create very good bonding at temperatures up to 700 degrees, with none of the detrimental compounds formed.

    The performance of the cathode-electrolyte interface made using this method, Yildiz says, was “comparable to the best interface resistances we have seen in the literature,” but those were all achieved using the extra step of applying coatings. “We are finding that you can avoid that additional fabrication step, which is typically expensive.”

    The potential gains in energy density that solid-state batteries provide come from the fact that they enable the use of pure lithium metal as one of the electrodes, which is much lighter than the lithium-infused graphite electrodes currently used.
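    The weight advantage can be made concrete with theoretical specific capacities from the general battery literature (these are textbook values, not figures from the paper):

```python
# Theoretical specific capacities from general battery literature
# (not from the paper): lithium metal vs. lithiated graphite (LiC6).
li_metal = 3860   # mAh per gram of lithium metal
graphite = 372    # mAh per gram of graphite (LiC6)

ratio = li_metal / graphite
print(f"Li metal stores ~{ratio:.0f}x more charge per gram than graphite")
```

    Even after accounting for the excess lithium and packaging a practical cell needs, that roughly tenfold gap at the electrode level is what makes the doubling of cell-level energy density plausible.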

    The team is now studying how these bonds hold up over the long run during battery cycling. Meanwhile, the new findings could potentially be applied rapidly to battery production, she says. “What we are proposing is a relatively simple process in the fabrication of the cells. It doesn’t add much energy penalty to the fabrication. So, we believe that it can be adopted relatively easily into the fabrication process,” and the added costs, they have calculated, should be negligible.

    Large companies such as Toyota are already at work commercializing early versions of solid-state lithium-ion batteries, and these new findings could quickly help such companies improve the economics and durability of the technology.

    The research was supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies. The team used facilities supported by the National Science Foundation and facilities at Brookhaven National Laboratory supported by the Department of Energy.

  • MIT Center for Real Estate launches the Asia Real Estate Initiative

    To appreciate the explosive urbanization taking place in Asia, consider this analogy: every 40 days, a city the size of Boston is built in Asia. Of the $24.7 trillion in real estate investment opportunities predicted in emerging cities by 2030, $17.8 trillion (72 percent) will be in Asia. While this growth is exciting to the real estate industry, it also brings attendant social and environmental issues.

    To promote a sustainable and innovative approach to this growth, leadership at the MIT Center for Real Estate (MIT CRE) recently established the Asia Real Estate Initiative (AREI), which aims to become a platform for industry leaders, entrepreneurs, and the academic community to find solutions to the practical concerns of real estate development across these countries.

    “Behind the creation of this initiative is the understanding that Asia is a living lab for the study of future global urban development,” says Hashim Sarkis, dean of the MIT School of Architecture and Planning.

    An investment in cities of the future

    One of the areas in AREI’s scope of focus is connecting sustainability and technology in real estate.

    “We believe the real estate sector should work cooperatively with the energy, science, and technology sectors to solve the climate challenges,” says Richard Lester, the Institute’s associate provost for international activities. “AREI will engage academics and industry leaders, nongovernment organizations, and civic leaders globally and in Asia, to advance sharing knowledge and research.”

    In its effort to understand how trends and new technologies will impact the future of real estate, AREI has received initial support from a prominent alumnus of MIT CRE who wishes to remain anonymous. The gift will support a cohort of researchers working on innovative technologies applicable to advancing real estate sustainability goals, with a special focus on the global and Asia markets. The call for applications is already under way, with AREI seeking to collaborate with scholars who have backgrounds in economics, finance, urban planning, technology, engineering, and other disciplines.

    “The research on real estate sustainability and technology could transform this industry and help invent global real estate of the future,” says Professor Siqi Zheng, faculty director of MIT CRE and AREI faculty chair. “The pairing of real estate and technology often leads to innovative and differential real estate development strategies such as buildings that are green, smart, and healthy.”

    The initiative arrives at a key time to make a significant impact and cement a leadership role in real estate development across Asia. MIT CRE is positioned to help the industry increase its efficiency and social responsibility, with nearly 40 years of pioneering research in the field. Zheng, an established scholar with expertise on urban growth in fast-urbanizing regions, is the former president of the Asia Real Estate Society and sits on the board of the American Real Estate and Urban Economics Association. Her research has been supported by international institutions including the World Bank, the Asian Development Bank, and the Lincoln Institute of Land Policy.

    “The researchers in AREI are now working on three interrelated themes: the future of real estate and live-work-play dynamics; connecting sustainability and technology in real estate; and innovations in real estate finance and business,” says Zheng.

    The first theme has already yielded a book — “Toward Urban Economic Vibrancy: Patterns and Practices in Asia’s New Cities” — recently published by SA+P Press.

    Engaging thought leaders and global stakeholders

    AREI also plans to collaborate with counterparts in Asia to contribute to research, education, and industry dialogue to meet the challenges of sustainable city-making across the continent and identify areas for innovation. Traditionally, real estate has been a very local business with a lengthy value chain, according to Zhengzhen Tan, director of AREI. Most developers focused their career on one particular product type in one particular regional market. AREI is working to change that dynamic.

    “We want to create a cross-border dialogue within Asia and among Asia, North America, and European leaders to exchange knowledge and practices,” says Tan. “The real estate industry’s learning costs are very high compared to other sectors. Collective learning will reduce the cost of failure and have a significant impact on these global issues.”

    The 2021 United Nations Climate Change Conference in Glasgow shed additional light on environmental commitments being made by governments in Asia. With real estate representing 40 percent of global greenhouse gas emissions, the Asian real estate market is undergoing an urgent transformation to deliver on this commitment.

    “One of the most pressing calls is to get to net-zero emissions for real estate development and operation,” says Tan. “Real estate investors and developers are making short- and long-term choices that are locking in environmental footprints for the ‘decisive decade.’ We hope to inspire developers and investors to think differently and get out of their comfort zone.”

  • New maps show airplane contrails over the U.S. dropped steeply in 2020

    As Covid-19’s initial wave crested around the world, travel restrictions and a drop in passengers led to a record number of grounded flights in 2020. The air travel reduction cleared the skies of not just jets but also the fluffy white contrails they produce high in the atmosphere.

    MIT engineers have mapped the contrails that were generated over the United States in 2020, and compared the results to prepandemic years. They found that on any given day in 2018, and again in 2019, contrails covered a total area equal to Massachusetts and Connecticut combined. In 2020, this contrail coverage shrank by about 20 percent, mirroring a similar drop in U.S. flights.  

    While 2020’s contrail dip may not be surprising, the findings are proof that the team’s mapping technique works. Their study marks the first time researchers have captured the fine and ephemeral details of contrails at a continental scale.

    Now, the researchers are applying the technique to predict where in the atmosphere contrails are likely to form. The cloud-like formations are known to play a significant role in aviation-related global warming. The team is working with major airlines to forecast regions in the atmosphere where contrails may form, and to reroute planes around these regions to minimize contrail production.
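    Conceptually, rerouting around forecast contrail regions can be framed as path-finding over a cost grid. The sketch below illustrates the idea only (it is not the team’s method; the grid, costs, and penalty are all invented) using Dijkstra’s algorithm: cells forecast to form contrails carry an extra cost, so the cheapest route detours around them when the detour is worth it.

```python
import heapq

def cheapest_path_cost(risk, penalty=10.0):
    """Dijkstra over a 2-D grid; moving into a risky cell costs 1 + penalty."""
    n, m = len(risk), len(risk[0])
    dist = {(0, 0): 0.0}
    heap = [(0.0, (0, 0))]
    while heap:
        d, (r, c) = heapq.heappop(heap)
        if (r, c) == (n - 1, m - 1):        # reached the destination cell
            return d
        if d > dist.get((r, c), float("inf")):
            continue                         # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < m:
                nd = d + 1.0 + (penalty if risk[nr][nc] else 0.0)
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    heapq.heappush(heap, (nd, (nr, nc)))
    return float("inf")

# A band of contrail-prone cells across the middle, with one gap on the right:
risk = [
    [0, 0, 0, 0, 0],
    [1, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(cheapest_path_cost(risk))
```

    In an operational setting the "grid" would be regions of the atmosphere flagged by the forecast, and the extra cost would reflect fuel burn and the expected warming from any contrail formed.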

    “This kind of technology can help divert planes to prevent contrails, in real time,” says Steven Barrett, professor and associate head of MIT’s Department of Aeronautics and Astronautics. “There’s an unusual opportunity to halve aviation’s climate impact by eliminating most of the contrails produced today.”

    Barrett and his colleagues have published their results today in the journal Environmental Research Letters. His co-authors at MIT include graduate student Vincent Meijer, former graduate student Luke Kulik, research scientists Sebastian Eastham, Florian Allroggen, and Raymond Speth, and LIDS Director and professor Sertac Karaman.

    Trail training

    About half of the aviation industry’s contribution to global warming comes directly from planes’ carbon dioxide emissions. The other half is thought to be a consequence of their contrails. The signature white tails are produced when a plane’s hot, humid exhaust mixes with cool, humid air high in the atmosphere. Emitted in thin lines, contrails quickly spread out and can act as blankets that trap the Earth’s outgoing heat.

    While a single contrail may not have much of a warming effect, taken together contrails have a significant impact. But the estimates of this effect are uncertain and based on computer modeling as well as limited satellite data. What’s more, traditional computer vision algorithms that analyze contrail data have a hard time discerning the wispy tails from natural clouds.

    To precisely pick out and track contrails over a large scale, the MIT team looked to images taken by NASA’s GOES-16, a geostationary satellite that hovers over the same swath of the Earth, including the United States, taking continuous, high-resolution images.

    The team first obtained about 100 images taken by the satellite and trained a group of people to interpret the remote sensing data and label each pixel in each image as either part of a contrail or not. They used this labeled dataset to train a computer-vision algorithm to distinguish a contrail from a cloud or other image feature.

    The researchers then ran the algorithm on about 100,000 satellite images, amounting to nearly 6 trillion pixels, each pixel representing an area of about 2 square kilometers. The images covered the contiguous U.S., along with parts of Canada and Mexico, and were taken about every 15 minutes, between Jan. 1, 2018, and Dec. 31, 2020.

    The algorithm automatically classified each pixel as either a contrail or not a contrail, and generated daily maps of contrails over the United States. These maps mirrored the major flight paths of most U.S. airlines, with some notable differences. For instance, contrail “holes” appeared around major airports, which reflects the fact that planes landing and taking off around airports are generally not high enough in the atmosphere for contrails to form.
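    The per-pixel workflow can be sketched in miniature. This is a toy stand-in: the study trained a computer-vision model on labeled GOES-16 imagery, whereas the sketch below simply thresholds a synthetic "contrail-likeness" feature, then converts the pixel count to area at roughly 2 square kilometers per pixel, as described in the article.

```python
import numpy as np

# One synthetic "scene" standing in for a satellite image feature map.
rng = np.random.default_rng(0)
feature = rng.random((200, 300))
feature[90:92, :] = 0.99            # a thin, bright linear streak (fake contrail)

# Binary per-pixel classification: 1 = contrail pixel, 0 = not.
mask = feature > 0.95

PIXEL_AREA_KM2 = 2.0                # approx. area per pixel, per the article
coverage_km2 = mask.sum() * PIXEL_AREA_KM2
print(f"contrail coverage in this scene: {coverage_km2:.0f} km^2")
```

    Summing such per-pixel masks over every scene in a day is what yields the daily coverage maps; the real classifier's job is to keep linear contrail features while rejecting the natural cirrus that a naive threshold like this one would misclassify.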

    “The algorithm knows nothing about where planes fly, and yet when processing the satellite imagery, it resulted in recognizable flight routes,” Barrett says. “That’s one piece of evidence that says this method really does capture contrails over a large scale.”

    Cloudy patterns

    Based on the algorithm’s maps, the researchers calculated the total area covered each day by contrails in the U.S. On an average day in 2018 and in 2019, U.S. contrails took up about 43,000 square kilometers. This coverage dropped by 20 percent in March of 2020 as the pandemic set in. From then on, contrails slowly reappeared as air travel resumed through the year.

    The team also observed daily and seasonal patterns. In general, contrails appeared to peak in the morning and decline in the afternoon. This may be a training artifact: As natural cirrus clouds are more likely to form in the afternoon, the algorithm may have trouble discerning contrails amid the clouds later in the day. But it might also be an important indication about when contrails form most. Contrails also peaked in late winter and early spring, when more of the air is naturally colder and more conducive for contrail formation.

    The team has now adapted the technique to predict where contrails are likely to form in real time. Avoiding these regions, Barrett says, could take a significant, almost immediate chunk out of aviation’s global warming contribution.  

    “Most measures to make aviation sustainable take a long time,” Barrett says. “(Contrail avoidance) could be accomplished in a few years, because it requires small changes to how aircraft are flown, with existing airplanes and observational technology. It’s a near-term way of reducing aviation’s warming by about half.”

    The team is now working toward this objective of large-scale contrail avoidance using real-time satellite observations.

    This research was supported in part by NASA and the MIT Environmental Solutions Initiative.

  • Q&A: Climate Grand Challenges finalists on building equity and fairness into climate solutions

    Note: This is the first in a four-part interview series that will highlight the work of the Climate Grand Challenges finalists, ahead of the April announcement of several multiyear, flagship projects.

    The finalists in MIT’s first-ever Climate Grand Challenges competition each received $100,000 to develop bold, interdisciplinary research and innovation plans designed to attack some of the world’s most difficult and unresolved climate problems. The 27 teams are addressing four Grand Challenge problem areas: building equity and fairness into climate solutions; decarbonizing complex industries and processes; removing, managing, and storing greenhouse gases; and using data and science for improved climate risk forecasting.  

    In a conversation prepared for MIT News, faculty from three of the teams in the competition’s “Building equity and fairness into climate solutions” category share their thoughts on the need for inclusive solutions that prioritize disadvantaged and vulnerable populations, and discuss how they are working to accelerate their research to achieve the greatest impact. The following responses have been edited for length and clarity.

    The Equitable Resilience Framework

    Any effort to solve the most complex global climate problems must recognize the unequal burdens borne by different groups, communities, and societies — and should be equitable as well as effective. Janelle Knox-Hayes, associate professor in the Department of Urban Studies and Planning, leads a team that is developing processes and practices for equitable resilience, starting with a local pilot project in Boston over the next five years and extending to other cities and regions of the country. The Equitable Resilience Framework (ERF) is designed to create long-term economic, social, and environmental transformations by increasing the capacity of interconnected systems and communities to respond to a broad range of climate-related events. 

    Q: What is the problem you are trying to solve?

    A: Inequity is one of the severe impacts of climate change and resonates in both mitigation and adaptation efforts. It is important for climate strategies to address challenges of inequity and, if possible, to design strategies that enhance justice, equity, and inclusion, while also enhancing the efficacy of mitigation and adaptation efforts. Our framework offers a blueprint for how communities, cities, and regions can begin to undertake this work.

    Q: What are the most significant barriers that have impacted progress to date?

    A: There is considerable inertia in policymaking. Climate change requires a rethinking not only of directives but of the pathways and techniques of policymaking. This is an obstacle and part of the reason our project was designed to scale up from local pilot projects. Another consideration is that the private sector can be more adaptive and nimble in adopting creative techniques. Through our work with the MIT Climate and Sustainability Consortium, there may be ways we could modify the ERF to help companies address similar internal adaptation and resilience challenges.

    Protecting and enhancing natural carbon sinks

    Deforestation and forest degradation of strategic ecosystems in the Amazon, Central Africa, and Southeast Asia continue to reduce the capacity of natural systems to capture and store carbon, and threaten even the most aggressive decarbonization plans. John Fernandez, professor in the Department of Architecture and director of the Environmental Solutions Initiative, reflects on his work with Daniela Rus, professor of electrical engineering and computer science and director of the Computer Science and Artificial Intelligence Laboratory, and Joann de Zegher, assistant professor of Operations Management at MIT Sloan, to protect tropical forests by deploying a three-part solution that integrates targeted technology breakthroughs, deep community engagement, and innovative bioeconomic opportunities.

    Q: Why is the problem you seek to address a “grand challenge”?

    A: We are trying to bring the latest technology to monitoring, assessing, and protecting tropical forests, as well as other carbon-rich and highly biodiverse ecosystems. This is a grand challenge because natural sinks around the world are threatening to release enormous quantities of stored carbon that could lead to runaway global warming. When combined with deep community engagement, particularly with Indigenous and Afro-descendant communities, this integrated approach promises to deliver substantially enhanced efficacy in conservation, coupled with robust and sustainable local development.

    Q: What is known about this problem and what questions remain unanswered?

    A: Satellites, drones, and other technologies are acquiring more data about natural carbon sinks than ever before. The problem is well described in certain locations, such as the eastern Amazon, which has shifted from a net carbon sink to a net carbon emitter. It is also well known that Indigenous peoples are the most effective stewards of the ecosystems that store the greatest amounts of carbon. One of the key questions that remains is identifying the bioeconomy opportunities inherent in the natural wealth of tropical forests and other ecosystems, opportunities that are essential to their sustained protection and conservation.

    Reducing group-based disparities in climate adaptation

    Race, ethnicity, caste, religion, and nationality are often linked to vulnerability to the adverse effects of climate change and, if left unchecked, threaten to exacerbate long-standing inequities. A team led by Evan Lieberman, professor of political science and director of the MIT Global Diversity Lab and MIT International Science and Technology Initiatives; Danielle Wood, assistant professor in the Program in Media Arts and Sciences and the Department of Aeronautics and Astronautics; and Siqi Zheng, professor of urban and real estate sustainability in the Center for Real Estate and the Department of Urban Studies and Planning, is seeking to reduce ethnic and racial group-based disparities in the capacity of urban communities to adapt to the changing climate. Working with partners in nine coastal cities, they will measure the distribution of climate-related burdens and resiliency through satellites, a custom mobile app, and natural language processing of social media, to help design and test communication campaigns that provide accurate information about risks and remediation to impacted groups.

    Q: How has this problem evolved?

    A: Group-based disparities continue to intensify within and across countries, owing in part to some randomness in the location of adverse climate events, as well as deep legacies of unequal human development. In turn, economically and politically privileged groups routinely hoard resources for adaptation. In a few cases — notably the United States, Brazil, and, with respect to climate-related migration, South Asia — there has been a great deal of research documenting the extent of such disparities. However, we lack common metrics, and for the most part, such disparities are only understood where key actors have politicized the underlying problems. In much of the world, relatively vulnerable and excluded groups may not even be fully aware of the nature of the challenges they face or the resources they require.

    Q: Who will benefit most from your research? 

    A: The greatest beneficiaries will be members of those vulnerable groups who lack the resources and infrastructure to withstand adverse climate shocks. We believe that it will be important to develop solutions such that relatively privileged groups do not perceive them as punitive or zero-sum, but rather as long-term solutions for collective benefit that are both sound and just.

  •

    Can the world meet global climate targets without coordinated global action?

    Like many of its predecessors, the 2021 United Nations Climate Change Conference (COP26) in Glasgow, Scotland, concluded with bold promises on international climate action aimed at keeping global warming well below 2 degrees Celsius, but few concrete plans to ensure that those promises will be kept. While it’s not too late for the Paris Agreement’s nearly 200 signatory nations to take concerted action to cap global warming at 2 C — if not 1.5 C — there is simply no guarantee that they will do so. If they fail, how much warming is the Earth likely to see in the 21st century and beyond?

    A new study by researchers at the MIT Joint Program on the Science and Policy of Global Change and the Shell Scenarios Team projects that without a globally coordinated mitigation effort to reduce greenhouse gas emissions, the planet’s average surface temperature will reach 2.8 C by 2150 — much higher than the “well below 2 C” level to which the Paris Agreement aspires, but far lower than what many widely used “business-as-usual” scenarios project.

    Recognizing the limitations of such scenarios, which generally assume that historical trends in energy technology choices and climate policy inaction will persist for decades to come, the researchers have designed a “Growing Pressures” scenario that accounts for mounting social, technological, business, and political pressures that are driving a transition away from fossil-fuel use and toward a low-carbon future. Such pressures have already begun to expand low-carbon technology and policy options, which, in turn, have escalated demand to utilize those options — a trend that’s expected to self-reinforce. Under this scenario, an array of future actions and policies cause renewable energy and energy storage costs to decline; fossil fuels to be phased out; electrification to proliferate; and emissions from agriculture and industry to be sharply reduced.

    Incorporating these growing pressures in the MIT Joint Program’s integrated model of Earth and human systems, the study’s co-authors project future energy use, greenhouse gas emissions, and global average surface temperatures in a world that fails to implement coordinated, global climate mitigation policies, and instead pursues piecemeal actions at mostly local and national levels.

    “Few, if any, previous studies explore scenarios of how piecemeal climate policies might plausibly unfold into the future and impact global temperature,” says MIT Joint Program research scientist Jennifer Morris, the study’s lead author. “We offer such a scenario, considering a future in which the increasingly visible impacts of climate change drive growing pressure from voters, shareholders, consumers, and investors, which in turn drives piecemeal action by governments and businesses that steer investments away from fossil fuels and toward low-carbon alternatives.”

    In the study’s central case (representing the mid-range climate response to greenhouse gas emissions), fossil fuels persist in the global energy mix through 2060 and then slowly decline toward zero by 2130; global carbon dioxide emissions reach near-zero levels by 2130 (total greenhouse gas emissions decline to near-zero by 2150); and global surface temperatures stabilize at 2.8 C by 2150, 2.5 C lower than a widely used “business-as-usual” projection. The results appear in the journal Environmental Economics and Policy Studies.

    Such a transition could bring the global energy system to near-zero emissions, but more aggressive climate action would be needed to keep global temperatures well below 2 C in alignment with the Paris Agreement.

    “While we fully support the need to decarbonize as fast as possible, it is critical to assess realistic alternative scenarios of world development,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “We investigate plausible actions that could bring society closer to the long-term goals of the Paris Agreement. To actually meet those goals will require an accelerated transition away from fossil energy through a combination of R&D, technology deployment, infrastructure development, policy incentives, and business practices.”

    The study was funded by government, foundation, and industrial sponsors of the MIT Joint Program, including Shell International Ltd.

  •

    Study reveals chemical link between wildfire smoke and ozone depletion

    The Australian wildfires in 2019 and 2020 were historic for how far and fast they spread, and for how long and powerfully they burned. All told, the devastating “Black Summer” fires blazed across more than 43 million acres of land and killed or displaced nearly 3 billion animals. The fires also injected over 1 million tons of smoke particles into the atmosphere, reaching up to 35 kilometers above Earth’s surface — a mass and reach comparable to that of an erupting volcano.

    Now, atmospheric chemists at MIT have found that the smoke from those fires set off chemical reactions in the stratosphere that contributed to the destruction of ozone, which shields the Earth from incoming ultraviolet radiation. The team’s study, appearing this week in the Proceedings of the National Academy of Sciences, is the first to establish a chemical link between wildfire smoke and ozone depletion.

    In March 2020, shortly after the fires subsided, the team observed a sharp drop in nitrogen dioxide in the stratosphere, which is the first step in a chemical cascade that is known to end in ozone depletion. The researchers found that this drop in nitrogen dioxide directly correlates with the amount of smoke that the fires released into the stratosphere. They estimate that this smoke-induced chemistry depleted the column of ozone by 1 percent.

    To put this in context, they note that the phaseout of ozone-depleting gases under a worldwide agreement to stop their production has led to about a 1 percent ozone recovery from earlier ozone decreases over the past 10 years — meaning that the wildfires canceled those hard-won diplomatic gains for a short period. If future wildfires grow stronger and more frequent, as they are predicted to do with climate change, ozone’s projected recovery could be delayed by years. 

    “The Australian fires look like the biggest event so far, but as the world continues to warm, there is every reason to think these fires will become more frequent and more intense,” says lead author Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT. “It’s another wakeup call, just as the Antarctic ozone hole was, in the sense of showing how bad things could actually be.”

    The study’s co-authors include Kane Stone, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, along with collaborators at multiple institutions including the University of Saskatchewan, Jinan University, the National Center for Atmospheric Research, and the University of Colorado at Boulder.

    Chemical trace

    Massive wildfires are known to generate pyrocumulonimbus — towering clouds of smoke that can reach into the stratosphere, the layer of the atmosphere that lies between about 15 and 50 kilometers above the Earth’s surface. The smoke from Australia’s wildfires reached well into the stratosphere, as high as 35 kilometers.

    In 2021, Solomon’s co-author, Pengfei Yu at Jinan University, carried out a separate study of the fires’ impacts and found that the accumulated smoke warmed parts of the stratosphere by as much as 2 degrees Celsius — a warming that persisted for six months. The study also found hints of ozone destruction in the Southern Hemisphere following the fires.

    Solomon wondered whether smoke from the fires could have depleted ozone through chemistry similar to that of volcanic aerosols. Major volcanic eruptions can also inject particles into the stratosphere, and in 1989, Solomon discovered that the particles in these eruptions can destroy ozone through a series of chemical reactions. As the particles form in the atmosphere, they gather moisture on their surfaces. Once wet, the particles can react with chemicals circulating in the stratosphere, including dinitrogen pentoxide, which reacts with the particles to form nitric acid.

    Normally, dinitrogen pentoxide is broken apart by sunlight to form various nitrogen species, including nitrogen dioxide, a compound that binds with chlorine-containing chemicals in the stratosphere. When volcanic aerosols convert dinitrogen pentoxide into nitric acid instead, nitrogen dioxide drops, and the chlorine compounds take another path, morphing into chlorine monoxide, the main human-made agent that destroys ozone.

    “This chemistry, once you get past that point, is well-established,” Solomon says. “Once you have less nitrogen dioxide, you have to have more chlorine monoxide, and that will deplete ozone.”
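    The reaction chain Solomon describes can be summarized schematically. The steps below are standard textbook stratospheric chemistry (M denotes a collision partner), offered here as a sketch rather than equations reproduced from the paper itself:

```latex
% Sunlight normally recycles N2O5 back into nitrogen dioxide:
\mathrm{N_2O_5} + h\nu \longrightarrow \mathrm{NO_2} + \mathrm{NO_3}

% On wet particle surfaces, N2O5 is instead converted to nitric acid:
\mathrm{N_2O_5} + \mathrm{H_2O} \xrightarrow{\text{particle surface}} 2\,\mathrm{HNO_3}

% With less NO2, less chlorine is locked into the ClONO2 reservoir:
\mathrm{ClO} + \mathrm{NO_2} + \mathrm{M} \longrightarrow \mathrm{ClONO_2} + \mathrm{M}

% ...leaving more chlorine free for the catalytic ozone-loss cycle:
\mathrm{Cl} + \mathrm{O_3} \longrightarrow \mathrm{ClO} + \mathrm{O_2}
\qquad
\mathrm{ClO} + \mathrm{O} \longrightarrow \mathrm{Cl} + \mathrm{O_2}
```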

    Cloud injection

    In the new study, Solomon and her colleagues looked at how concentrations of nitrogen dioxide in the stratosphere changed following the Australian fires. If these concentrations dropped significantly, it would signal that wildfire smoke depletes ozone through the same chemical reactions as some volcanic eruptions.

    The team looked to observations of nitrogen dioxide taken by three independent satellites that have surveyed the Southern Hemisphere for varying lengths of time. They compared each satellite’s record in the months and years leading up to and following the Australian fires. All three records showed a significant drop in nitrogen dioxide in March 2020. For one satellite’s record, the drop represented a record low among observations spanning the last 20 years.

    To check that the nitrogen dioxide decrease was a direct chemical effect of the fires’ smoke, the researchers carried out atmospheric simulations using a global, three-dimensional model that simulates hundreds of chemical reactions in the atmosphere, from the surface on up through the stratosphere.

    The team injected a cloud of smoke particles into the model, simulating what was observed from the Australian wildfires. They assumed that the particles, like volcanic aerosols, gathered moisture. They then ran the model multiple times and compared the results to simulations without the smoke cloud.

    In every simulation incorporating wildfire smoke, the team found that as the amount of smoke particles increased in the stratosphere, concentrations of nitrogen dioxide decreased, matching the observations of the three satellites.
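    The comparison described above amounts to checking that nitrogen dioxide anomalies track smoke loading in opposite directions, in both the model and the observations. A minimal illustrative sketch of that check, using made-up numbers rather than the study’s data:

```python
# Illustrative only: synthetic values standing in for the study's data.
# The "fingerprint" is a strong negative correlation between smoke-aerosol
# loading and the stratospheric NO2 anomaly.
aerosol = [0.1, 0.3, 0.6, 1.0, 1.4, 1.8]         # hypothetical aerosol loading
no2_anom = [-0.2, -0.9, -1.8, -3.1, -4.0, -5.2]  # hypothetical NO2 anomaly (%)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(f"correlation: {pearson(aerosol, no2_anom):.2f}")
```

    A correlation near -1 appearing in both the simulated and observed series is the kind of agreement that supports the causal interpretation; with these synthetic numbers the printed value is close to -1.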

    “The behavior we saw, of more and more aerosols, and less and less nitrogen dioxide, in both the model and the data, is a fantastic fingerprint,” Solomon says. “It’s the first time that science has established a chemical mechanism linking wildfire smoke to ozone depletion. It may only be one chemical mechanism among several, but it’s clearly there. It tells us these particles are wet and they had to have caused some ozone depletion.”

    She and her collaborators are looking into other reactions triggered by wildfire smoke that might further contribute to stripping ozone. For the time being, the major driver of ozone depletion remains chlorofluorocarbons, or CFCs — chemicals such as old refrigerants that have been banned under the Montreal Protocol, though they continue to linger in the stratosphere. But as global warming leads to stronger, more frequent wildfires, their smoke could have a serious, lasting impact on ozone.

    “Wildfire smoke is a toxic brew of organic compounds that are complex beasts,” Solomon says. “And I’m afraid ozone is getting pummeled by a whole series of reactions that we are now furiously working to unravel.”

    This research was supported in part by the National Science Foundation and NASA.