More stories

  • Researchers return to Arctic to test integrated sensor nodes

    Shimmering ice extends in all directions as far as the eye can see. Air temperatures plunge to minus 40 degrees Fahrenheit and colder with wind chills. Ocean currents drag large swaths of ice floating at sea. Polar bears, narwhals, and other iconic Arctic species roam wild.

    For a week this past spring, MIT Lincoln Laboratory researchers Ben Evans and Dave Whelihan called this place — drifting some 200 nautical miles offshore from Prudhoe Bay, Alaska, on the frozen Beaufort Sea in the Arctic Circle — home. Two ice runways for small aircraft provided their only way in and out of this remote wilderness; heated tents provided their only shelter from the bitter cold.

    Video: MIT Lincoln Laboratory

    Here, in the northernmost region on Earth, Evans and Whelihan joined other groups conducting fieldwork in the Arctic as part of Operation Ice Camp (OIC) 2024, an operational exercise run by the U.S. Navy’s Arctic Submarine Laboratory (ASL). Riding on snowmobiles and helicopters, the duo deployed a small set of integrated sensor nodes that measure everything from atmospheric conditions to ice properties to the structure of water deep below the surface.

    Ultimately, they envision deploying an unattended network of these low-cost sensor nodes across the Arctic to increase scientific understanding of the trending loss in sea ice extent and thickness. Warming much faster than the rest of the world, the Arctic is ground zero for climate change, with cascading impacts across the planet that include rising sea levels and extreme weather. Openings in the sea ice cover, or leads, are concerning not only for climate change but also for global geopolitical competition over transit routes and natural resources. A synoptic view of the physical processes happening above, at, and below sea ice is key to determining why the ice is diminishing. In turn, this knowledge can help predict when and where fractures will occur, to inform planning and decision-making.

    Winter “camp”

    Every two years, OIC, previously called Ice Exercise (ICEX), provides a way for the international community to access the Arctic for operational readiness exercises and scientific research, with the focus switching back and forth; this year’s focus was scientific research. Coordination, planning, and execution of the month-long operation is led by ASL, a division of the U.S. Navy’s Undersea Warfighting Development Center responsible for ensuring the submarine force can effectively operate in the Arctic Ocean.

    Making this inhospitable and unforgiving environment safe for participants takes considerable effort. The critical first step is determining where to set up camp. In the weeks before the first participants arrived for OIC 2024, ASL — with assistance from the U.S. National Ice Center, University of Alaska Fairbanks Geophysical Institute, and UIC Science — flew over large sheets of floating ice (ice floes) identified via satellite imagery, landed on some they thought might be viable sites, and drilled through the ice to check its thickness. The ice floe must not only be large enough to accommodate construction of a camp and two runways but also feature both multiyear ice and first-year ice. Multiyear ice is thick and strong but rough, making it ideal for camp setup, while the smooth but thinner first-year ice is better suited for building runways. Once the appropriate ice floe was selected, ASL began to haul in equipment and food, build infrastructure like lodging and a command center, and fly in a small group before fully operationalizing the site. They also identified locations near the camp for two Navy submarines to surface through the ice.

    The more than 200 participants represented U.S. and allied forces and scientists from research organizations and universities. Distinguished visitors from government offices also attended OIC to see the unique Arctic environment and unfolding challenges firsthand.

    “Our ASL hosts do incredible work to build this camp from scratch and keep us alive,” Evans says.

    Evans and Whelihan, part of the laboratory’s Advanced Undersea Systems and Technology Group, first trekked to the Arctic in March 2022 for ICEX 2022.
    (The laboratory in general has been participating since 2016 in these events, the first iteration of which occurred in 1946.) There, they deployed a suite of commercial off-the-shelf sensors for detecting acoustic (sound) and seismic (vibration) events created by ice fractures or collisions, and for measuring salinity, temperature, and pressure in the water below the ice. They also deployed a prototype fiber-based temperature sensor array developed by the laboratory and research partners for precisely measuring temperature across the entire water column at one location, and a University of New Hampshire (UNH)-supplied echosounder to investigate the different layers present in the water column. In this maiden voyage, their goals were to assess how these sensors fared in the harsh Arctic conditions and to collect a dataset from which characteristic signatures of ice-fracturing events could begin to be identified. These events would be correlated with weather and water conditions to eventually offer a predictive capability.

    “We saw real phenomenology in our data,” Whelihan says. “But, we’re not ice experts. What we’re good at here at the laboratory is making and deploying sensors. That’s our place in the world of climate science: to be a data provider. In fact, we hope to open source all of our data this year so that ice scientists can access and analyze them and then we can make enhanced sensors and collect more data.”

    Interim ice

    In the two years since that expedition, they and their colleagues have been modifying their sensor designs and deployment strategies. As Evans and Whelihan learned at ICEX 2022, to be resilient in the Arctic, a sensor must not only be kept warm and dry during deployment but also be deployed in a way to prevent breaking. Moreover, sufficient power and data links are needed to collect and access sensor data.

    “We can make cold-weather electronics, no problem,” Whelihan says. “The two drivers are operating the sensors in an energy-starved environment — the colder it is, the worse batteries perform — and keeping them from getting destroyed when ice floes crash together as leads in the ice open up.”

    Their work in the interim to OIC 2024 involved integrating the individual sensors into hardened sensor nodes and practicing deploying these nodes in easier-to-access locations. To facilitate incorporating additional sensors into a node, Whelihan spearheaded the development of an open-source, easily extensible hardware and software architecture.

    In March 2023, the Lincoln Laboratory team deployed three sensor nodes for a week on Huron Bay off Lake Superior through Michigan Tech’s Great Lakes Research Center (GLRC). Engineers from GLRC helped the team safely set up an operations base on the ice. They demonstrated that the sensor integration worked, and the sensor nodes proved capable of surviving for at least a week in relatively harsh conditions. The researchers recorded seismic activity on all three nodes, corresponding to some ice breaking further up the bay.

    “Proving our sensor node in an Arctic surrogate environment provided a stepping stone for testing in the real Arctic,” Evans says.

    Evans then received an invitation from Ignatius Rigor, the coordinator of the International Arctic Buoy Program (IABP), to join him on an upcoming trip to Utqiaġvik (formerly Barrow), Alaska, and deploy one of their seismic sensor nodes on the ice there (with support from UIC Science). The IABP maintains a network of Arctic buoys equipped with meteorological and oceanic sensors.
    Data collected by these buoys are shared with the operational and research communities to support real-time operations (e.g., forecasting sea ice conditions for coastal Alaskans) and climate research. However, these buoys are typically limited in the frequency at which they collect data, so phenomenology on shorter time scales important to climate change may be missed. Moreover, these buoys are difficult and expensive to deploy because they are designed to survive in the harshest environments for years at a time. The laboratory-developed sensor nodes could offer an inexpensive, easier-to-deploy option for collecting more data over shorter periods of time.

    In April 2023, Evans placed a sensor node in Utqiaġvik on landfast sea ice, which is stationary ice anchored to the seabed just off the coast. During the sensor node’s week-long deployment, a big piece of drift ice (ice not attached to the seabed or other fixed object) broke off and crashed into the landfast ice. The event was recorded by a radar maintained by the University of Alaska Fairbanks that monitors sea ice movement in near real time to warn of any instability. Though this phenomenology is not exactly the same as that expected for Arctic sea ice, the researchers were encouraged to see seismic activity recorded by their sensor node.

    In December 2023, Evans and Whelihan headed to New Hampshire, where they conducted echosounder testing in UNH’s engineering test tank and on the Piscataqua River. Together with their UNH partners, they sought to determine whether a low-cost, hobby-grade echosounder could detect the same phenomenology of interest as the high-fidelity UNH echosounder, which would be far too costly to deploy in sensor nodes across the Arctic. In the test tank and on the river, the low-cost echosounder proved capable of detecting masses of water moving in the water column, but with considerably less structural detail than afforded by the higher-cost option. Seeing such dynamics is important to inferring where water comes from and understanding how it affects sea ice breakup — for example, how warm water moving in from the Pacific Ocean is coming into contact with and melting the ice. So, the laboratory researchers and UNH partners have been building a medium-fidelity, medium-cost echosounder.

    In January 2024, Evans and Whelihan — along with Jehan Diaz, a fellow staff member in their research group — returned to GLRC. With logistical support from their GLRC hosts, they snowmobiled across the ice on Portage Lake, where they practiced several activities to prepare for OIC 2024: augering (drilling) six-inch holes in the ice, albeit in thinner ice than that in the Arctic; placing their long, pipe-like sensor nodes through these holes; operating cold-hardened drones to interact with the nodes; and retrieving the nodes. They also practiced sensor calibration by hitting the ice with an iron bar some distance away from the nodes and correlating this distance with the resulting measured acoustic and seismic intensity.

    “Our time at GLRC helped us mitigate a lot of risks and prepare to deploy these complex systems in the Arctic,” Whelihan says.

    Arctic again

    To get to OIC, Evans and Whelihan first flew to Prudhoe Bay and reacclimated to the frigid temperatures. They spent the next two days at the Deadhorse Aviation Center hangar inspecting their equipment for transit-induced damage, which included squashed cables and connectors that required rejiggering.

    “That’s part of the adventure story,” Evans says.
    “Getting stuff to Prudhoe Bay is not your standard shipping; it’s ice-road trucking.”

    From there, they boarded a small aircraft to the ice camp.

    “Even though this trip marked our second time coming here, it was still disorienting,” Evans continues. “You land in the middle of nowhere on a small aircraft after a couple-hour flight. You get out bundled in all of your Arctic gear in this remote, pristine environment.”

    After unloading and rechecking their equipment for any damage, calibrating their sensors, and attending safety briefings, they were ready to begin their experiments.

    An icy situation

    Inside the project tent, Evans and Whelihan deployed the UNH-supplied echosounder and a suite of ground-truth sensors on an automated winch to profile water conductivity, temperature, and depth (CTD). Echosounder data needed to be validated with associated CTD data to determine the source of the water in the water column. Ocean properties change as a function of depth, and these changes are important to capture, in part because masses of water coming in from the Atlantic and Pacific oceans arrive at different depths. Though masses of warm water have always existed, climate change–related mechanisms are now bringing them into contact with the ice.

    “As ice breaks up, wind can directly interact with the ocean because it’s lacking that barrier of ice cover,” Evans explains. “Kinetic energy from the wind causes mixing in the ocean; all the warm water that used to stay at depth instead gets brought up and interacts with the ice.”

    They also deployed four of their sensor nodes several miles outside of camp. To access this deployment site, they rode on a sled pulled via a snowmobile driven by Ann Hill, an ASL field party leader trained in Arctic survival and wildlife encounters. The temperature that day was -55 F. At such a dangerously cold temperature, frostnip and frostbite are all too common. To avoid removal of gloves or other protective clothing, the researchers enabled the nodes with WiFi capability (the nodes also have a satellite communications link to transmit low-bandwidth data). Large amounts of data are automatically downloaded over WiFi to an arm-wearable haptic (touch-based) system when a user walks up to a node.

    “It was so cold that the holes we were drilling in the ice to reach the water column were freezing solid,” Evans explains. “We realized it was going to be quite an ordeal to get our sensor nodes out of the ice.”

    So, after drilling a big hole in the ice, they deployed only one central node with all the sensor components: a commercial echosounder, an underwater microphone, a seismometer, and a weather station. They deployed the other three nodes, each with a seismometer and weather station, atop the ice.

    “One of our design considerations was flexibility,” Whelihan says. “Each node can integrate as few or as many sensors as desired.”

    The small sensor array was only collecting data for about a day when Evans and Whelihan, who were at the time on a helicopter, saw that their initial field site had become completely cut off from camp by a 150-meter-wide ice lead. They quickly returned to camp to load the tools needed to pull the nodes, which were no longer accessible by snowmobile. Two recently arrived staff members from the Ted Stevens Center for Arctic Security Studies offered to help them retrieve their nodes. The helicopter landed on the ice floe near a crack, and the pilot told them they had half an hour to complete their recovery mission.
    By the time they had retrieved all four sensors, the crack had increased from thumb to fist size.

    “When we got home, we analyzed the collected sensor data and saw a spike in seismic activity corresponding to what could be the major ice-fracturing event that necessitated our node recovery mission,” Whelihan says.

    The researchers also conducted experiments with their Arctic-hardened drones to evaluate their utility for retrieving sensor node data and to develop concepts of operations for future capabilities.

    “The idea is to have some autonomous vehicle land next to the node, download data, and come back, like a data mule, rather than having to expend energy getting data off the system, say via high-speed satellite communications,” Whelihan says. “We also started testing whether the drone is capable on its own of finding sensors that are constantly moving and getting close enough to them. Even flying in 25-mile-per-hour winds, and at very low temperatures, the drone worked well.”

    Aside from carrying out their experiments, the researchers had the opportunity to interact with other participants. Their “roommates” were ice scientists from Norway and Finland. They met other ice and water scientists conducting chemistry experiments on the salt content of ice taken from different depths in the ice sheet (when ocean water freezes, salt tends to get pushed out of the ice). One of their collaborators — Nicholas Schmerr, an ice seismologist from the University of Maryland — placed high-quality geophones (for measuring vibrations in the ice) alongside their nodes deployed on the camp field site. They also met with junior enlisted submariners, who temporarily came to camp to open up spots on the submarine for distinguished visitors.

    “Part of what we’ve been doing over the last three years is building connections within the Arctic community,” Evans says. “Every time I start to get a handle on the phenomenology that exists out here, I learn something new. For example, I didn’t know that sometimes a layer of ice forms a little bit deeper than the primary ice sheet, and you can actually see fish swimming in between the layers.”

    “One day, we were out with our field party leader, who saw fog while she was looking at the horizon and said the ice was breaking up,” Whelihan adds. “I said, ‘Wait, what?’ As she explained, when an ice lead forms, fog comes out of the ocean. Sure enough, within 30 minutes, we had quarter-mile visibility, whereas beforehand it was unlimited.”

    Back to solid ground

    Before leaving, Whelihan and Evans retrieved and packed up all the remaining sensor nodes, adopting the “leave no trace” philosophy of preserving natural places.

    “Only a limited number of people get access to this special environment,” Whelihan says. “We hope to grow our footprint at these events in future years, giving opportunities to other laboratory staff members to attend.”

    In the meantime, they will analyze the collected sensor data and refine their sensor node design. One design consideration is how to replenish the sensors’ battery power. A potential path forward is to leverage the temperature difference between water and air, and harvest energy from the water currents moving under ice floes. Wind energy may provide another viable solution.
    Solar power would only work for part of the year because the Arctic Circle undergoes periods of complete darkness.

    The team is also seeking external sponsorship to continue their work engineering sensing systems that advance the scientific community’s understanding of changes to Arctic ice; this work is currently funded through Lincoln Laboratory’s internally administered R&D portfolio on climate change. And, in learning more about this changing environment and its critical importance to strategic interests, they are considering other sensing problems that they could tackle using their Arctic engineering expertise.

    “The Arctic is becoming a more visible and important region because of how it’s changing,” Evans concludes. “Going forward as a country, we must be able to operate there.”

  • Microscopic defects in ice influence how massive glaciers flow, study shows

    As they seep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

    Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice.

    The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

    Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

    “This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

    “Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment. That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

    Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

    Micro flow

    Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level.

    In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

    “Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

    The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed.
    Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

    The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant. Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding.

    Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

    “Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

    A mapping match

    For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress.

    The scientists fed into the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

    “As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says. “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.”
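
    The micro-to-macro step described above can be illustrated with a toy calculation. The Python sketch below is not the authors’ model; it assumes illustrative flow-law prefactors and exponents (loosely in the spirit of laboratory ice flow laws) to show how adding the strain rates of dislocation creep and grain boundary sliding, acting in parallel, yields an effective stress sensitivity that depends on which mechanism dominates.

    ```python
    # Toy composite flow law: dislocation creep + grain boundary sliding in parallel.
    # All prefactors, exponents, and units below are assumed illustrative values,
    # not the calibrated parameters used in the MIT study.
    N_DISL, N_GBS = 4.0, 1.8       # stress exponents for each mechanism
    P_GBS = 1.4                    # grain-size exponent for grain boundary sliding
    A_DISL, A_GBS = 1e-16, 1e-11   # rate prefactors (arbitrary units)

    def strain_rates(stress_kpa, grain_size_mm):
        """Strain rate contributed by each mechanism (rates add when acting in parallel)."""
        e_disl = A_DISL * stress_kpa ** N_DISL
        e_gbs = A_GBS * stress_kpa ** N_GBS / grain_size_mm ** P_GBS
        return e_disl, e_gbs

    def stress_sensitivity(stress_kpa, grain_size_mm):
        """Effective stress exponent d(ln total rate)/d(ln stress): a rate-weighted
        average of the two mechanisms' exponents."""
        e_disl, e_gbs = strain_rates(stress_kpa, grain_size_mm)
        total = e_disl + e_gbs
        return (N_DISL * e_disl + N_GBS * e_gbs) / total, e_disl / total

    for stress, grain in [(50, 1.0), (200, 1.0), (400, 1.0)]:
        n_eff, frac_disl = stress_sensitivity(stress, grain)
        print(f"stress={stress} kPa, grain={grain} mm -> "
              f"effective exponent {n_eff:.2f}, dislocation-creep share {frac_disl:.2f}")
    ```

    With these stand-in numbers, grain boundary sliding dominates at low stress (low sensitivity) and dislocation creep takes over at higher stress (high sensitivity), which is the qualitative behavior the study exploits to map variability across the ice sheet.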

  • Study: Heavy snowfall and rain may contribute to some earthquakes

    When scientists look for an earthquake’s cause, their search often starts underground. As centuries of seismic studies have made clear, it’s the collision of tectonic plates and the movement of subsurface faults and fissures that primarily trigger a temblor.

    But MIT scientists have now found that certain weather events may also play a role in setting off some quakes.

    In a study appearing today in Science Advances, the researchers report that episodes of heavy snowfall and rain likely contributed to a swarm of earthquakes over the past several years in northern Japan. The study is the first to show that climate conditions could initiate some quakes.

    “We see that snowfall and other environmental loading at the surface impacts the stress state underground, and the timing of intense precipitation events is well-correlated with the start of this earthquake swarm,” says study author William Frank, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So, climate obviously has an impact on the response of the solid earth, and part of that response is earthquakes.”

    The new study focuses on a series of ongoing earthquakes in Japan’s Noto Peninsula. The team discovered that seismic activity in the region is surprisingly synchronized with certain changes in underground pressure, and that those changes are influenced by seasonal patterns of snowfall and precipitation. The scientists suspect that this new connection between quakes and climate may not be unique to Japan and could play a role in shaking up other parts of the world.

    Looking to the future, they predict that the climate’s influence on earthquakes could be more pronounced with global warming.

    “If we’re going into a climate that’s changing, with more extreme precipitation events, and we expect a redistribution of water in the atmosphere, oceans, and continents, that will change how the Earth’s crust is loaded,” Frank adds. “That will have an impact for sure, and it’s a link we could further explore.”

    The study’s lead author is former MIT research associate Qing-Yu Wang (now at Grenoble Alpes University); co-authors include EAPS postdoc Xin Cui, Yang Lu of the University of Vienna, Takashi Hirose of Tohoku University, and Kazushige Obara of the University of Tokyo.

    Seismic speed

    Since late 2020, hundreds of small earthquakes have shaken up Japan’s Noto Peninsula — a finger of land that curves north from the country’s main island into the Sea of Japan. Unlike a typical earthquake sequence, which begins as a main shock that gives way to a series of aftershocks before dying out, Noto’s seismic activity is an “earthquake swarm” — a pattern of multiple, ongoing quakes with no obvious main shock, or seismic trigger.

    The MIT team, along with their colleagues in Japan, aimed to spot any patterns in the swarm that would explain the persistent quakes. They started by looking through the Japan Meteorological Agency’s catalog of earthquakes that provides data on seismic activity throughout the country over time.
    They focused on quakes in the Noto Peninsula over the last 11 years, during which the region has experienced episodic earthquake activity, including the most recent swarm.

    With seismic data from the catalog, the team counted the number of seismic events that occurred in the region over time, and found that the timing of quakes prior to 2020 appeared sporadic and unrelated, compared to late 2020, when earthquakes grew more intense and clustered in time, signaling the start of the swarm, with quakes that are correlated in some way.

    The scientists then looked to a second dataset of seismic measurements taken by monitoring stations over the same 11-year period. Each station continuously records any displacement, or local shaking that occurs. The shaking from one station to another can give scientists an idea of how fast a seismic wave travels between stations. This “seismic velocity” is related to the structure of the Earth through which the seismic wave is traveling. Wang used the station measurements to calculate the seismic velocity between every station in and around Noto over the last 11 years.

    The researchers generated an evolving picture of seismic velocity beneath the Noto Peninsula and observed a surprising pattern: In 2020, around when the earthquake swarm is thought to have begun, changes in seismic velocity appeared to be synchronized with the seasons.

    “We then had to explain why we were observing this seasonal variation,” Frank says.

    Snow pressure

    The team wondered whether environmental changes from season to season could influence the underlying structure of the Earth in a way that would set off an earthquake swarm. Specifically, they looked at how seasonal precipitation would affect the underground “pore fluid pressure” — the amount of pressure that fluids in the Earth’s cracks and fissures exert within the bedrock.

    “When it rains or snows, that adds weight, which increases pore pressure, which allows seismic waves to travel through slower,” Frank explains. “When all that weight is removed, through evaporation or runoff, all of a sudden, that pore pressure decreases and seismic waves are faster.”

    Wang and Cui developed a hydromechanical model of the Noto Peninsula to simulate the underlying pore pressure over the last 11 years in response to seasonal changes in precipitation. They fed into the model meteorological data from this same period, including measurements of daily snow, rainfall, and sea-level changes. From their model, they were able to track changes in excess pore pressure beneath the Noto Peninsula, before and during the earthquake swarm. They then compared this timeline of evolving pore pressure with their evolving picture of seismic velocity.

    “We had seismic velocity observations, and we had the model of excess pore pressure, and when we overlapped them, we saw they just fit extremely well,” Frank says.

    In particular, they found that when they included snowfall data, and especially, extreme snowfall events, the fit between the model and observations was stronger than if they only considered rainfall and other events. In other words, the ongoing earthquake swarm that Noto residents have been experiencing can be explained in part by seasonal precipitation, and particularly, heavy snowfall events.

    “We can see that the timing of these earthquakes lines up extremely well with multiple times where we see intense snowfall,” Frank says. “It’s well-correlated with earthquake activity.
    And we think there’s a physical link between the two.”

    The researchers suspect that heavy snowfall and similar extreme precipitation could play a role in earthquakes elsewhere, though they emphasize that the primary trigger will always originate underground.

    “When we first want to understand how earthquakes work, we look to plate tectonics, because that is and will always be the number one reason why an earthquake happens,” Frank says. “But, what are the other things that could affect when and how an earthquake happens? That’s when you start to go to second-order controlling factors, and the climate is obviously one of those.”

    This research was supported, in part, by the National Science Foundation.
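
    As a rough illustration of the kind of comparison described above, the Python sketch below turns a synthetic daily precipitation-load series into a smoothed proxy for excess pore pressure and checks how well it tracks a synthetic seismic-velocity series. The data, the exponential-lag smoothing, and the coefficients are all stand-in assumptions for demonstration; they are not the study’s hydromechanical model or measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    days = np.arange(3 * 365)

    # Synthetic daily surface load (snow + rain), peaking once per year.
    load = np.clip(np.sin(2 * np.pi * days / 365.25)
                   + 0.3 * rng.standard_normal(days.size), 0, None)

    # Toy excess-pore-pressure proxy: the surface load is felt at depth with a lag,
    # modeled here as a simple exponential (leaky) accumulation of the load.
    tau = 60.0  # assumed lag time scale in days
    pore = np.zeros_like(load)
    for i in range(1, days.size):
        pore[i] = pore[i - 1] + (load[i] - pore[i - 1]) / tau

    # Synthetic seismic-velocity change (dv/v): higher pore pressure -> slower waves,
    # plus measurement noise.
    dvv = -0.8 * (pore - pore.mean()) + 0.05 * rng.standard_normal(days.size)

    # How well does the pressure proxy explain the velocity variations?
    r = np.corrcoef(pore, dvv)[0, 1]
    print(f"correlation between pore-pressure proxy and dv/v: {r:.2f}")  # strongly negative
    ```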

  • MIT-derived algorithm helps forecast the frequency of extreme weather

    To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston.

    To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

    “If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

    Sapsis and his colleagues have now developed a method to “correct” the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “nudges” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme.

    This animation shows the evolution of storms around the Northern Hemisphere, produced by a high-resolution storm model combined with the MIT team’s corrected global climate model. The simulation improves the modeling of extreme values for wind, temperature, and humidity, which typically have significant errors in coarse-scale models. Credit: Courtesy of Ruby Leung and Shixuan Zhang, PNNL

    Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

    “Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” Sapsis says. “If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

    The team’s results appear today in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

    Over the hood

    Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100 kilometers or so.

    “It’s a very heavy computation requiring supercomputers,” Sapsis notes. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometer or less.”

    To improve the resolution of these coarse climate models, scientists typically have gone under the hood to try and fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

    “People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” Sapsis explains. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

    The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation toward something that more closely represents real-world conditions. The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.

    “What we’re doing is trying to correct dynamics, as in how an extreme weather feature, such as the wind speeds during a Hurricane Sandy event, will look in the coarse model, versus in reality,” Sapsis says. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”
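
    The overall shape of such an output-correction scheme can be sketched in a few lines. The toy Python example below fits a simple linear correction operator from coarse-model fields to reference fields over a training period and applies it to a longer simulation; the team’s actual scheme learns nonlinear dynamics with machine learning, so this is only a minimal stand-in, and all of the data, dimensions, and the linear-map choice are assumptions for illustration.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n_train, n_apply, n_features = 8 * 365, 36 * 365, 20  # mimic 8 training years, 36 applied years

    # Synthetic stand-ins: coarse-model output and the "reference" fields it misrepresents.
    true_map = np.eye(n_features) + 0.1 * rng.standard_normal((n_features, n_features))
    coarse_train = rng.standard_normal((n_train, n_features))
    reference_train = coarse_train @ true_map + 0.05 * rng.standard_normal((n_train, n_features))

    # Fit a linear correction operator by least squares (the paper uses an ML scheme instead).
    correction, *_ = np.linalg.lstsq(coarse_train, reference_train, rcond=None)

    # Apply the trained correction to a later, longer simulation.
    coarse_future = rng.standard_normal((n_apply, n_features))
    corrected_future = coarse_future @ correction

    reference_future = coarse_future @ true_map
    rmse_before = np.sqrt(np.mean((coarse_future - reference_future) ** 2))
    rmse_after = np.sqrt(np.mean((corrected_future - reference_future) ** 2))
    print(f"RMSE vs reference: uncorrected {rmse_before:.3f}, corrected {rmse_after:.3f}")
    ```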

    Climate correction

    As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the U.S. Department of Energy that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learned dynamical associations between the measured weather features and the E3SM model. They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which were not used for training.

    “We’re not talking about huge differences in absolute terms,” Sapsis says. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

    When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

    “We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” Sapsis says. “Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analyzing future climate scenarios.”

    “The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study. “It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

    This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.

  • Gosha Geogdzhayev and Sadhana Lolla named 2024 Gates Cambridge Scholars

    This article was updated on April 23 to reflect the promotion of Gosha Geogdzhayev from alternate to winner of the Gates Cambridge Scholarship.

    MIT seniors Gosha Geogdzhayev and Sadhana Lolla have won the prestigious Gates Cambridge Scholarship, which offers students an opportunity to pursue graduate study in the field of their choice at Cambridge University in the U.K.

    Established in 2000, Gates Cambridge offers full-cost post-graduate scholarships to outstanding applicants from countries outside of the U.K. The mission of Gates Cambridge is to build a global network of future leaders committed to improving the lives of others.

    Gosha Geogdzhayev

    Originally from New York City, Geogdzhayev is a senior majoring in physics with minors in mathematics and computer science. At Cambridge, Geogdzhayev intends to pursue an MPhil in quantitative climate and environmental science. He is interested in applying these subjects to climate science and intends to spend his career developing novel statistical methods for climate prediction.

    At MIT, Geogdzhayev researches climate emulators with Professor Raffaele Ferrari’s group in the Department of Earth, Atmospheric and Planetary Sciences and is part of the “Bringing Computation to the Climate Challenge” Grand Challenges project. He is currently working on an operator-based emulator for the projection of climate extremes. Previously, Geogdzhayev studied the statistics of changing chaotic systems, work that has recently been published as a first-author paper.

    As a recipient of the National Oceanic and Atmospheric Administration (NOAA) Hollings Scholarship, Geogdzhayev has worked on bias correction methods for climate data at the NOAA Geophysical Fluid Dynamics Laboratory. He has also received several other awards in the field of earth and atmospheric sciences, notably the American Meteorological Society Ward and Eileen Seguin Scholarship.

    Outside of research, Geogdzhayev enjoys writing poetry and is actively involved with his living community, Burton 1, for which he has previously served as floor chair.

    Sadhana Lolla

    Lolla, a senior from Clarksburg, Maryland, is majoring in computer science and minoring in mathematics and literature. At Cambridge, she will pursue an MPhil in technology policy.

    In the future, Lolla aims to lead conversations on deploying and developing technology for marginalized communities, such as the rural Indian village that her family calls home, while also conducting research in embodied intelligence.

    At MIT, Lolla conducts research on safe and trustworthy robotics and deep learning at the Distributed Robotics Laboratory with Professor Daniela Rus. Her research has spanned debiasing strategies for autonomous vehicles and accelerating robotic design processes. At Microsoft Research and Themis AI, she works on creating uncertainty-aware frameworks for deep learning, which has impacts across computational biology, language modeling, and robotics. She has presented her work at the Neural Information Processing Systems (NeurIPS) conference and the International Conference on Machine Learning (ICML). 

    Outside of research, Lolla leads initiatives to make computer science education more accessible globally. She is an instructor for class 6.S191 (MIT Introduction to Deep Learning), one of the largest AI courses in the world, which reaches millions of students annually. She serves as the curriculum lead for Momentum AI, the only U.S. program that teaches AI to underserved students for free, and she has taught hundreds of students in Northern Scotland as part of the MIT Global Teaching Labs program.

    Lolla was also the director for xFair, MIT’s largest student-run career fair, and is an executive board member for Next Sing, where she works to make a cappella more accessible for students across musical backgrounds. In her free time, she enjoys singing, solving crossword puzzles, and baking.

  • New tool predicts flood risk from hurricanes in a warming climate

    Coastal cities and communities will face more frequent major hurricanes with climate change in the coming years. To help prepare coastal cities against future storms, MIT scientists have developed a method to predict how much flooding a coastal community is likely to experience as hurricanes evolve over the next decades.

    When hurricanes make landfall, strong winds whip up salty ocean waters that generate storm surge in coastal regions. As the storms move over land, torrential rainfall can induce further flooding inland. When multiple flood sources such as storm surge and rainfall interact, they can compound a hurricane’s hazards, leading to significantly more flooding than would result from any one source alone. The new study introduces a physics-based method for predicting how the risk of such complex, compound flooding may evolve under a warming climate in coastal cities.

    One example of compound flooding’s impact is the aftermath from Hurricane Sandy in 2012. The storm made landfall on the East Coast of the United States as heavy winds whipped up a towering storm surge that combined with rainfall-driven flooding in some areas to cause historic and devastating floods across New York and New Jersey.

    In their study, the MIT team applied the new compound flood-modeling method to New York City to predict how climate change may influence the risk of compound flooding from Sandy-like hurricanes over the next decades.  

    They found that, in today’s climate, a Sandy-level compound flooding event will likely hit New York City every 150 years. By midcentury, a warmer climate will drive up the frequency of such flooding, to every 60 years. At the end of the century, destructive Sandy-like floods will deluge the city every 30 years — a fivefold increase compared to the present climate.
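
    Return periods like these translate directly into the chance of seeing at least one such flood over a fixed planning horizon. The short Python sketch below does that arithmetic for a 30-year horizon, treating each year as an independent trial with probability 1/T; that independence assumption is a common simplification for illustration, not part of the study.

    ```python
    # Probability of at least one Sandy-level compound flood over a planning horizon,
    # assuming independent years with annual exceedance probability 1/T.
    horizon = 30  # years

    for label, T in [("present climate", 150), ("midcentury", 60), ("end of century", 30)]:
        p_any = 1 - (1 - 1 / T) ** horizon
        print(f"{label}: return period {T} yr -> "
              f"{p_any:.0%} chance of at least one event in {horizon} years")
    # Roughly 18%, 40%, and 64%, respectively.
    ```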

    “Long-term average damages from weather hazards are usually dominated by the rare, intense events like Hurricane Sandy,” says study co-author Kerry Emanuel, professor emeritus of atmospheric science at MIT. “It is important to get these right.”

    While these are sobering projections, the researchers hope the flood forecasts can help city planners prepare and protect against future disasters. “Our methodology equips coastal city authorities and policymakers with essential tools to conduct compound flooding risk assessments from hurricanes in coastal cities at a detailed, granular level, extending to each street or building, in both current and future decades,” says study author Ali Sarhadi, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

    The team’s open-access study appears online today in the Bulletin of the American Meteorological Society. Co-authors include Raphaël Rousseau-Rizzi at MIT’s Lorenz Center, Kyle Mandli at Columbia University, Jeffrey Neal at the University of Bristol, Michael Wiper at the Charles III University of Madrid, and Monika Feldmann at the Swiss Federal Institute of Technology Lausanne.

    The seeds of floods

    To forecast a region’s flood risk, weather modelers typically look to the past. Historical records contain measurements of previous hurricanes’ wind speeds, rainfall, and spatial extent, which scientists use to predict where and how much flooding may occur with coming storms. But Sarhadi believes these historical records are too limited and too brief to predict future hurricanes’ risks.

    “Even if we had lengthy historical records, they wouldn’t be a good guide for future risks because of climate change,” he says. “Climate change is changing the structural characteristics, frequency, intensity, and movement of hurricanes, and we cannot rely on the past.”

    Sarhadi and his colleagues instead looked to predict a region’s risk of hurricane flooding in a changing climate using a physics-based risk assessment methodology. They first paired simulations of hurricane activity with coupled ocean and atmospheric models over time. With the hurricane simulations, developed originally by Emanuel, the researchers virtually scatter tens of thousands of “seeds” of hurricanes into a simulated climate. Most seeds dissipate, while a few grow into category-level storms, depending on the conditions of the ocean and atmosphere.

    When the team drives these hurricane simulations with climate models of ocean and atmospheric conditions under certain global temperature projections, they can see how hurricanes change, for instance in terms of intensity, frequency, and size, under past, current, and future climate conditions.

    The team then sought to precisely predict the level and degree of compound flooding from future hurricanes in coastal cities. The researchers first used rainfall models to simulate rain intensity for a large number of simulated hurricanes, then applied numerical models to hydraulically translate that rainfall intensity into flooding on the ground as the hurricanes make landfall, given information about a region such as its surface and topography characteristics. They also simulated the same hurricanes’ storm surges, using hydrodynamic models to translate hurricanes’ maximum wind speed and sea level pressure into surge height in coastal areas. The simulation further assessed the propagation of ocean waters into coastal areas, causing coastal flooding.

    Then, the team developed a numerical hydrodynamic model to predict how two sources of hurricane-induced flooding, such as storm surge and rain-driven flooding, would simultaneously interact through time and space, as simulated hurricanes make landfall in coastal regions such as New York City, in both current and future climates.  

    “There’s a complex, nonlinear hydrodynamic interaction between saltwater surge-driven flooding and freshwater rainfall-driven flooding, that forms compound flooding that a lot of existing methods ignore,” Sarhadi says. “As a result, they underestimate the risk of compound flooding.”

    Amplified risk

    With their flood-forecasting method in place, the team applied it to a specific test case: New York City. They used the multipronged method to predict the city’s risk of compound flooding from hurricanes, and more specifically from Sandy-like hurricanes, in present and future climates. Their simulations showed that the city’s odds of experiencing Sandy-like flooding will increase significantly over the next decades as the climate warms, from once every 150 years in the current climate, to every 60 years by 2050, and every 30 years by 2099.

    Interestingly, they found that much of this increase in risk has less to do with how hurricanes themselves will change in a warming climate than with how sea levels will rise around the world.

    “In future decades, we will experience sea level rise in coastal areas, and we also incorporated that effect into our models to see how much that would increase the risk of compound flooding,” Sarhadi explains. “And in fact, we see sea level rise is playing a major role in amplifying the risk of compound flooding from hurricanes in New York City.”

    The team’s methodology can be applied to any coastal city to assess the risk of compound flooding from hurricanes and extratropical storms. With this approach, Sarhadi hopes decision-makers can make informed decisions regarding the implementation of adaptive measures, such as reinforcing coastal defenses to enhance infrastructure and community resilience.

    “Another aspect highlighting the urgency of our research is the projected 25 percent increase in coastal populations by midcentury, leading to heightened exposure to damaging storms,” Sarhadi says. “Additionally, we have trillions of dollars in assets situated in coastal flood-prone areas, necessitating proactive strategies to reduce damages from compound flooding from hurricanes under a warming climate.”

    This research was supported, in part, by Homesite Insurance.

  • A mineral produced by plate tectonics has a global cooling effect, study finds

    MIT geologists have found that a clay mineral on the seafloor, called smectite, has a surprisingly powerful ability to sequester carbon over millions of years.

    Under a microscope, a single grain of the clay resembles the folds of an accordion. These folds are known to be effective traps for organic carbon.

    Now, the MIT team has shown that the carbon-trapping clays are a product of plate tectonics: When oceanic crust crushes against a continental plate, it can bring rocks to the surface that, over time, can weather into minerals including smectite. Eventually, the clay sediment settles back in the ocean, where the minerals trap bits of dead organisms in their microscopic folds. This keeps the organic carbon from being consumed by microbes and expelled back into the atmosphere as carbon dioxide.

    Over millions of years, smectite can have a global effect, helping to cool the entire planet. Through a series of analyses, the researchers showed that smectite was likely produced after several major tectonic events over the last 500 million years. During each tectonic event, the clays trapped enough carbon to cool the Earth and induce the subsequent ice age.

    The findings are the first to show that plate tectonics can trigger ice ages through the production of carbon-trapping smectite.

    These clays can be found in certain tectonically active regions today, and the scientists believe that smectite continues to sequester carbon, providing a natural, albeit slow-acting, buffer against humans’ climate-warming activities.

    “The influence of these unassuming clay minerals has wide-ranging implications for the habitability of planets,” says Joshua Murray, a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “There may even be a modern application for these clays in offsetting some of the carbon that humanity has placed into the atmosphere.”

    Murray and Oliver Jagoutz, professor of geology at MIT, have published their findings today in Nature Geoscience.

    A clear and present clay

    The new study follows up on the team’s previous work, which showed that each of the Earth’s major ice ages was likely triggered by a tectonic event in the tropics. The researchers found that each of these tectonic events exposed ocean rocks called ophiolites to the atmosphere. They put forth the idea that, when a tectonic collision occurs in a tropical region, ophiolites can undergo certain weathering effects, such as exposure to wind, rain, and chemical interactions, that transform the rocks into various minerals, including clays.

    “Those clay minerals, depending on the kinds you create, influence the climate in different ways,” Murray explains.

    At the time, it was unclear which minerals could come out of this weathering effect, and whether and how these minerals could directly contribute to cooling the planet. So, while it appeared there was a link between plate tectonics and ice ages, the exact mechanism by which one could trigger the other was still in question.

    With the new study, the team looked to see whether their proposed tectonic tropical weathering process would produce carbon-trapping minerals, and in quantities that would be sufficient to trigger a global ice age.

    The team first looked through the geologic literature and compiled data on the ways in which major magmatic minerals weather over time, and on the types of clay minerals this weathering can produce. They then worked these measurements into a weathering simulation of different rock types that are known to be exposed in tectonic collisions.

    “Then we look at what happens to these rock types when they break down due to weathering and the influence of a tropical environment, and what minerals form as a result,” Jagoutz says.

    Next, they plugged each weathered, “end-product” mineral into a simulation of the Earth’s carbon cycle to see what effect a given mineral might have, either in interacting with organic carbon, such as bits of dead organisms, or with inorganic carbon, in the form of carbon dioxide in the atmosphere.

    From these analyses, one mineral had a clear presence and effect: smectite. Not only was the clay a naturally weathered product of tropical tectonics, it was also highly effective at trapping organic carbon. In theory, smectite seemed like a solid connection between tectonics and ice ages.

    But were enough of the clays actually present to trigger the previous four ice ages? Ideally, researchers should confirm this by finding smectite in ancient rock layers dating back to each global cooling period.

    “Unfortunately, as clays are buried by other sediments, they get cooked a bit, so we can’t measure them directly,” Murray says. “But we can look for their fingerprints.”

    A slow build

    The team reasoned that, as smectites are a product of ophiolites, these ocean rocks also bear characteristic elements such as nickel and chromium, which would be preserved in ancient sediments. If smectites were present in the past, nickel and chromium should be as well.

    To test this idea, the team looked through a database containing thousands of oceanic sedimentary rocks that were deposited over the last 500 million years. Over this time period, the Earth experienced four separate ice ages. Looking at rocks around each of these periods, the researchers observed large spikes of nickel and chromium, and inferred from this that smectite must also have been present.
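
    One can picture the kind of screen involved. The sketch below is purely illustrative: the file, column names, spike threshold, and glacial intervals are assumptions, not the study’s actual database, criteria, or dates. It flags samples whose nickel or chromium content rises well above a long-term background and asks whether those spikes cluster inside glacial intervals.

```python
import pandas as pd

# Purely illustrative: the file name, column names, threshold, and glacial
# intervals below are assumptions for this sketch, not the study's data.
ICE_AGES_MA = [(445, 440), (372, 359), (335, 260), (34, 0)]  # approximate (start, end) ages

samples = pd.read_csv("ocean_sediments.csv")   # assumed columns: age_ma, ni_ppm, cr_ppm
samples = samples.sort_values("age_ma").reset_index(drop=True)

# Estimate the long-term background with a wide rolling median; call a sample a
# "spike" if it sits well above that background.
for element in ["ni_ppm", "cr_ppm"]:
    background = samples[element].rolling(window=201, center=True, min_periods=50).median()
    samples[f"{element}_spike"] = samples[element] > 2.0 * background

def within_ice_age(age_ma):
    """True if a sample's age falls inside any of the (approximate) glacial intervals."""
    return any(end <= age_ma <= start for start, end in ICE_AGES_MA)

samples["glacial"] = samples["age_ma"].apply(within_ice_age)

# Fraction of samples flagged as Ni/Cr spikes, inside vs. outside glacial intervals.
print(samples.groupby("glacial")[["ni_ppm_spike", "cr_ppm_spike"]].mean())
```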

    By their estimates, the clay mineral could have increased the preservation of organic carbon by less than one-tenth of a percent. In absolute terms, this is a minuscule amount. But over millions of years, they calculated that the clay’s accumulated, sequestered carbon was enough to trigger each of the four major ice ages.

    “We found that you really don’t need much of this material to have a huge effect on the climate,” Jagoutz says.
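
    To get a feel for why such a small boost can matter, consider a deliberately crude, one-box carbon budget. The reservoir size, fluxes, and the 0.1 percent burial boost are assumed round numbers for illustration; this is not the researchers’ carbon-cycle model.

```python
# A deliberately crude, one-box carbon budget (all values are assumed round
# numbers, not the study's): carbon in gigatons (Gt C), fluxes in Gt C per year.

def surface_carbon(years, c0=600.0, volcanic_in=0.1, burial_out=0.1, clay_boost=0.0):
    """March the box forward one year at a time.

    clay_boost is the fractional increase in organic-carbon burial attributed
    to the clay, e.g. 0.001 for a boost of one-tenth of a percent.
    """
    carbon = c0
    for _ in range(years):
        carbon += volcanic_in - burial_out * (1.0 + clay_boost)
    return carbon

# With inputs and outputs otherwise balanced, a 0.1 percent burial boost drains
# roughly 100 Gt C from the box over a million years.
baseline = surface_carbon(1_000_000)
with_clay = surface_carbon(1_000_000, clay_boost=0.001)
print(f"drawdown attributable to the clay: {baseline - with_clay:.0f} Gt C")
```

    The only point of the sketch is the scaling: a change too small to notice in any given year, integrated over geologic time, adds up to a reservoir-scale drawdown.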

    “These clays also have probably contributed some of the Earth’s cooling in the last 3 to 5 million years, before humans got involved,” Murray adds. “In the absence of humans, these clays are probably making a difference to the climate. It’s just such a slow process.”

    “Jagoutz and Murray’s work is a nice demonstration of how important it is to consider all biotic and physical components of the global carbon cycle,” says Lee Kump, a professor of geosciences at Penn State University, who was not involved with the study. “Feedbacks among all these components control atmospheric greenhouse gas concentrations on all time scales, from the annual rise and fall of atmospheric carbon dioxide levels to the swings from icehouse to greenhouse over millions of years.”

    Could smectites be harnessed intentionally to further bring down the world’s carbon emissions? Murray sees some potential, for instance to shore up carbon reservoirs such as regions of permafrost. Warming temperatures are predicted to melt permafrost and expose long-buried organic carbon. If smectites could be applied to these regions, the clays could prevent this exposed carbon from escaping into and further warming the atmosphere.

    “If you want to understand how nature works, you have to understand it on the mineral and grain scale,” Jagoutz says. “And this is also the way forward for us to find solutions for this climatic catastrophe. If you study these natural processes, there’s a good chance you will stumble on something that will be actually useful.”

    This research was funded, in part, by the National Science Foundation.

    Explained: The 1.5 C climate benchmark

    The summer of 2023 has been a season of weather extremes.

    In June, uncontrolled wildfires ripped through parts of Canada, sending smoke into the U.S. and setting off air quality alerts in dozens of downwind states. In July, the world set a record for the hottest global temperature, held that record for three days in a row, and then broke it again on the fourth day.

    From July into August, unrelenting heat blanketed large parts of Europe, Asia, and the U.S., while India faced a torrential monsoon season, and heavy rains flooded regions in the northeastern U.S. And most recently, whipped up by high winds and dry vegetation, a historic wildfire tore through Maui, devastating an entire town.

    These extreme weather events are mainly a consequence of climate change driven by humans’ continued burning of coal, oil, and natural gas. Climate scientists agree that extreme weather such as what people experienced this summer will likely grow more frequent and intense in the coming years unless something is done, on a persistent and planet-wide scale, to rein in global temperatures.

    Just how much reining-in are they talking about? The number that is internationally agreed upon is 1.5 degrees Celsius. To prevent worsening and potentially irreversible effects of climate change, the world’s average temperature should not exceed that of preindustrial times by more than 1.5 degrees Celsius (2.7 degrees Fahrenheit).

    As more regions around the world face extreme weather, it’s worth taking stock of the 1.5-degree bar, where the planet stands in relation to this threshold, and what can be done at the global, regional, and personal levels to “keep 1.5 alive.”

    Why 1.5 C?

    In 2015, in response to the growing urgency of climate impacts, nearly every country in the world signed onto the Paris Agreement, a landmark international treaty under which 195 nations pledged to hold the Earth’s temperature to “well below 2 degrees Celsius above pre-industrial levels,” and going further, aim to “limit the temperature increase to 1.5 degrees Celsius above pre-industrial levels.”

    The treaty did not define a particular preindustrial period, though scientists generally consider the years from 1850 to 1900 to be a reliable reference; that span predates the large-scale burning of fossil fuels and is also the earliest period for which global observations of land and sea temperatures are available. During this period, the average global temperature, while swinging up and down in certain years, generally hovered around 13.5 degrees Celsius, or 56.3 degrees Fahrenheit.

    The treaty was informed by a fact-finding report, which concluded that even global warming of 1.5 degrees Celsius above the preindustrial average, sustained over an extended, decades-long period, would lead to high risks for “some regions and vulnerable ecosystems.” The recommendation, then, was to set the 1.5 degrees Celsius limit as a “defense line” — if the world can keep below this line, it could potentially avoid the more extreme and irreversible climate effects that would occur with a 2 degrees Celsius increase, and for some places, with an even smaller increase than that.

    But, as many regions are experiencing today, keeping below the 1.5 line is no guarantee of avoiding extreme global warming effects.

    “There is nothing magical about the 1.5 number, other than that is an agreed aspirational target. Keeping at 1.4 is better than 1.5, and 1.3 is better than 1.4, and so on,” says Sergey Paltsev, deputy director of MIT’s Joint Program on the Science and Policy of Global Change. “The science does not tell us that if, for example, the temperature increase is 1.51 degrees Celsius, then it would definitely be the end of the world. Similarly, if the temperature would stay at 1.49 degrees increase, it does not mean that we will eliminate all impacts of climate change. What is known: The lower the target for an increase in temperature, the lower the risks of climate impacts.”

    How close are we to 1.5 C?

    In 2022, the average global temperature was about 1.15 degrees Celsius above preindustrial levels. According to the World Meteorological Organization (WMO), the cyclical weather phenomenon La Niña recently contributed to temporarily cooling and dampening the effects of human-induced climate change. La Niña lasted for three years and ended around March of 2023.

    In May, the WMO issued a report that projected a significant likelihood (66 percent) that the world would exceed the 1.5 degrees Celsius threshold in the next four years. This breach would likely be driven by human-induced climate change, combined with a warming El Niño — a cyclical weather phenomenon that temporarily heats up ocean regions and pushes global temperatures higher.

    An El Niño is currently underway this summer, and the event typically raises global temperatures in the year after it sets in, which in this case would be 2024. The WMO predicts that, for each of the next four years, the global average temperature is likely to swing between 1.1 and 1.8 degrees Celsius above preindustrial levels.

    Though there is a good chance the world will get hotter than the 1.5-degree limit as a result of El Niño, the breach would be temporary and, for now, would not amount to a failure of the Paris Agreement, which aims to keep global temperatures below the 1.5-degree limit over the long term (averaged over several decades rather than a single year).
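
    A minimal sketch of that bookkeeping, using a synthetic anomaly series rather than real observations, shows how a single hot El Niño year can top 1.5 degrees Celsius while the multi-decade average that matters for the Paris Agreement stays below it:

```python
import numpy as np

# Synthetic illustration only: the trend, noise, and spike below are assumed
# values, not real temperature observations.
rng = np.random.default_rng(0)
years = np.arange(1995, 2025)
trend = 0.85 + 0.02 * (years - 1995)              # assumed warming, deg C above preindustrial
anomalies = trend + rng.normal(0.0, 0.08, years.size)
anomalies[-1] += 0.2                              # an El Nino-style spike in the final year

def exceeds_paris_limit(anoms, window=20, limit=1.5):
    """Judge the limit against a trailing multi-decade mean, not any single year."""
    return anoms[-window:].mean() > limit

print("hottest single year:", round(anomalies.max(), 2))          # can top 1.5 C
print("breach of the long-term limit:", exceeds_paris_limit(anomalies))
```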

    “But we should not forget that this is a global average, and there are variations regionally and seasonally,” says Elfatih Eltahir, the H.M. King Bhumibol Professor and Professor of Civil and Environmental Engineering at MIT. “This year, we had extreme conditions around the world, even though we haven’t reached the 1.5 C threshold. So, even if we control the average at a global magnitude, we are going to see events that are extreme, because of climate change.”

    More than a number

    To hold the planet’s long-term average temperature to below the 1.5-degree threshold, the world will have to reach net zero emissions by the year 2050, according to the Intergovernmental Panel on Climate Change (IPCC). This means that, in terms of the emissions released by the burning of coal, oil, and natural gas, the entire world will have to remove as much as it puts into the atmosphere.

    “In terms of innovations, we need all of them — even those that may seem quite exotic at this point: fusion, direct air capture, and others,” Paltsev says.

    The task of curbing emissions in time is particularly daunting for the United States, which has released more cumulative carbon dioxide emissions than any other country in the world.

    “The U.S.’s burning of fossil fuels and consumption of energy is just way above the rest of the world. That’s a persistent problem,” Eltahir says. “And the national statistics are an aggregate of what a lot of individuals are doing.”

    At an individual level, there are things that can be done to help bring down one’s personal emissions, and potentially chip away at rising global temperatures.

    “We are consumers of products that either embody greenhouse gases, such as meat, clothes, computers, and homes, or we are directly responsible for emitting greenhouse gases, such as when we use cars, airplanes, electricity, and air conditioners,” Paltsev says. “Our everyday choices affect the amount of emissions that are added to the atmosphere.”

    But compelling people to change their emissions may be less about a number and more about a feeling.

    “To get people to act, my hypothesis is, you need to reach them not just by convincing them to be good citizens and saying it’s good for the world to keep below 1.5 degrees, but showing how they individually will be impacted,” says Eltahir, who specializes in the study of regional climates, focusing on how climate change impacts the water cycle and the frequency of extreme weather such as heat waves.

    “True climate progress requires a dramatic change in how the human system gets its energy,” Paltsev says. “It is a huge undertaking. Are you ready personally to make sacrifices and to change the way of your life? If one gets an honest answer to that question, it would help to understand why true climate progress is so difficult to achieve.”