More stories

  • So you want to build a solar or wind farm? Here’s how to decide where.

    Deciding where to build new solar or wind installations is often left up to individual developers or utilities, with limited overall coordination. But a new study shows that regional-level planning using fine-grained weather data, information about energy use, and energy system modeling can make a big difference in the design of such renewable power installations. This also leads to more efficient and economically viable operations.

    The findings show the benefits of coordinating the siting of solar farms, wind farms, and storage systems, taking into account local and temporal variations in wind, sunlight, and energy demand to maximize the utilization of renewable resources. This approach can reduce the need for sizable investments in storage, and thus the total system cost, while maximizing availability of clean power when it’s needed, the researchers found.

    The study, appearing today in the journal Cell Reports Sustainability, was co-authored by Liying Qiu and Rahman Khorramfar, postdocs in MIT’s Department of Civil and Environmental Engineering, and professors Saurabh Amin and Michael Howland.

    Qiu, the lead author, says that with the team’s new approach, “we can harness the resource complementarity, which means that renewable resources of different types, such as wind and solar, or different locations can compensate for each other in time and space. This potential for spatial complementarity to improve system design has not been emphasized and quantified in existing large-scale planning.”

    Such complementarity will become ever more important as variable renewable energy sources account for a greater proportion of power entering the grid, she says. By coordinating the peaks and valleys of production and demand more smoothly, she says, “we are actually trying to use the natural variability itself to address the variability.”

    Typically, in planning large-scale renewable energy installations, Qiu says, “some work on a country level, for example saying that 30 percent of energy should be wind and 20 percent solar. That’s very general.” For this study, the team looked at both weather data and energy system planning modeling at a resolution finer than 10 kilometers (about 6 miles). “It’s a way of determining where should we, exactly, build each renewable energy plant, rather than just saying this city should have this many wind or solar farms,” she explains.

    To compile their data and enable high-resolution planning, the researchers relied on a variety of sources that had not previously been integrated. They used high-resolution meteorological data from the National Renewable Energy Laboratory, which is publicly available at 2-kilometer resolution but rarely used in a planning model at such a fine scale. These data were combined with an energy system model they developed to optimize siting at a sub-10-kilometer resolution. To get a sense of how the fine-scale data and model made a difference in different regions, they focused on three U.S. regions — New England, Texas, and California — analyzing up to 138,271 possible siting locations simultaneously for a single region.

    By comparing the results of siting based on a typical method vs. their high-resolution approach, the team showed that “resource complementarity really helps us reduce the system cost by aligning renewable power generation with demand,” which should translate directly to real-world decision-making, Qiu says. “If an individual developer wants to build a wind or solar farm and just goes to where there is the most wind or solar resource on average, it may not necessarily guarantee the best fit into a decarbonized energy system.”

    That’s because of the complex interactions between production and demand for electricity, as both vary hour by hour, and month by month as seasons change. “What we are trying to do is minimize the difference between the energy supply and demand rather than simply supplying as much renewable energy as possible,” Qiu says. “Sometimes your generation cannot be utilized by the system, while at other times, you don’t have enough to match the demand.”

    In New England, for example, the new analysis shows there should be more wind farms in locations where there is a strong wind resource during the night, when solar energy is unavailable. Some locations tend to be windier at night, while others tend to have more wind during the day.

    These insights were revealed through the integration of high-resolution weather data and energy system optimization used by the researchers. When planning with lower-resolution weather data, which was generated at a 30-kilometer resolution globally and is more commonly used in energy system planning, there was much less complementarity among renewable power plants. Consequently, the total system cost was much higher. The complementarity between wind and solar farms was enhanced by the high-resolution modeling due to improved representation of renewable resource variability.

    The researchers say their framework is very flexible and can be easily adapted to any region to account for the local geophysical and other conditions. In Texas, for example, peak winds in the west occur in the morning, while along the south coast they occur in the afternoon, so the two naturally complement each other.

    Khorramfar says that this work “highlights the importance of data-driven decision making in energy planning.” The work shows that using such high-resolution data coupled with a carefully formulated energy planning model “can drive the system cost down, and ultimately offer more cost-effective pathways for energy transition.”

    One thing that was surprising about the findings, says Amin, who is a principal investigator in the MIT Laboratory for Information and Decision Systems, is how significant the gains were from analyzing relatively short-term variations in inputs and outputs that take place in a 24-hour period. “The kind of cost-saving potential by trying to harness complementarity within a day was not something that one would have expected before this study,” he says.

    In addition, Amin says, it was also surprising how much this kind of modeling could reduce the need for storage as part of these energy systems. “This study shows that there is actually a hidden cost-saving potential in exploiting local patterns in weather, that can result in a monetary reduction in storage cost.”

    The system-level analysis and planning suggested by this study, Howland says, “changes how we think about where we site renewable power plants and how we design those renewable plants, so that they maximally serve the energy grid. It has to go beyond just driving down the cost of energy of individual wind or solar farms. And these new insights can only be realized if we continue collaborating across traditional research boundaries, by integrating expertise in fluid dynamics, atmospheric science, and energy engineering.”

    The research was supported by the MIT Climate and Sustainability Consortium and MIT Climate Grand Challenges.
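
    To make “resource complementarity” concrete, below is a minimal sketch (not the study’s optimization model; the hourly site profiles, weights, and demand curve are invented) that scores candidate wind/solar mixes by how closely their combined output tracks demand:

    ```python
    # Illustrative only: score how well a mix of sites tracks hourly demand.
    # Site profiles, weights, and the demand curve are synthetic assumptions.
    import numpy as np

    hours = 24 * 7                      # one synthetic week at hourly resolution
    t = np.arange(hours)

    # Hypothetical hourly capacity factors (0-1) for three candidate sites.
    solar_a = np.clip(np.sin((t % 24 - 6) / 12 * np.pi), 0, None)   # generates only in daytime
    wind_night = 0.5 + 0.3 * np.cos(2 * np.pi * (t % 24) / 24)      # windier at night
    wind_day = 0.5 - 0.3 * np.cos(2 * np.pi * (t % 24) / 24)        # windier during the day
    sites = {"solar_A": solar_a, "wind_night_B": wind_night, "wind_day_C": wind_day}

    demand = 0.6 + 0.25 * np.sin((t % 24 - 9) / 12 * np.pi)         # synthetic load shape

    def mismatch(mix):
        """Root-mean-square gap between combined generation and demand."""
        supply = sum(weight * sites[name] for name, weight in mix.items())
        return float(np.sqrt(np.mean((supply - demand) ** 2)))

    print("solar only:            ", round(mismatch({"solar_A": 1.0}), 3))
    print("solar + daytime wind:  ", round(mismatch({"solar_A": 0.5, "wind_day_C": 0.5}), 3))
    print("solar + nighttime wind:", round(mismatch({"solar_A": 0.5, "wind_night_B": 0.5}), 3))
    ```

    In this toy setup the solar-plus-nighttime-wind pairing leaves the smallest supply-demand gap, which is the intuition behind siting for complementarity rather than for the strongest average resource alone.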

  • New AI tool generates realistic satellite images of future flooding

    Visualizing the potential impacts of a hurricane on people’s homes before it hits can help residents prepare and decide whether to evacuate.

    MIT scientists have developed a method that generates satellite imagery from the future to depict how a region would look after a potential flooding event. The method combines a generative artificial intelligence model with a physics-based flood model to create realistic, bird’s-eye-view images of a region, showing where flooding is likely to occur given the strength of an oncoming storm.

    As a test case, the team applied the method to Houston and generated satellite images depicting what certain locations around the city would look like after a storm comparable to Hurricane Harvey, which hit the region in 2017. The team compared these generated images with actual satellite images taken of the same regions after Harvey hit, as well as with AI-generated images that did not incorporate a physics-based flood model.

    The team’s physics-reinforced method generated satellite images of future flooding that were more realistic and accurate. The AI-only method, in contrast, generated images of flooding in places where flooding is not physically possible.

    The team’s method is a proof of concept, meant to demonstrate a case in which generative AI models can generate realistic, trustworthy content when paired with a physics-based model. In order to apply the method to other regions to depict flooding from future storms, it will need to be trained on many more satellite images to learn how flooding would look in other regions.

    “The idea is: One day, we could use this before a hurricane, where it provides an additional visualization layer for the public,” says Björn Lütjens, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences, who led the research while he was a doctoral student in MIT’s Department of Aeronautics and Astronautics (AeroAstro). “One of the biggest challenges is encouraging people to evacuate when they are at risk. Maybe this could be another visualization to help increase that readiness.”

    To illustrate the potential of the new method, which they have dubbed the “Earth Intelligence Engine,” the team has made it available as an online resource for others to try.

    The researchers report their results today in the journal IEEE Transactions on Geoscience and Remote Sensing. The study’s MIT co-authors include Brandon Leshchinskiy; Aruna Sankaranarayanan; and Dava Newman, professor of AeroAstro and director of the MIT Media Lab; along with collaborators from multiple institutions.

    Generative adversarial images

    The new study is an extension of the team’s efforts to apply generative AI tools to visualize future climate scenarios.

    “Providing a hyper-local perspective of climate seems to be the most effective way to communicate our scientific results,” says Newman, the study’s senior author. “People relate to their own zip code, their local environment where their family and friends live. Providing local climate simulations becomes intuitive, personal, and relatable.”

    For this study, the authors use a conditional generative adversarial network, or GAN, a type of machine learning method that can generate realistic images using two competing, or “adversarial,” neural networks. The first “generator” network is trained on pairs of real data, such as satellite images before and after a hurricane. The second “discriminator” network is then trained to distinguish between real satellite imagery and imagery synthesized by the first network.

    Each network automatically improves its performance based on feedback from the other network. The idea, then, is that such an adversarial push and pull should ultimately produce synthetic images that are indistinguishable from the real thing. Nevertheless, GANs can still produce “hallucinations,” or factually incorrect features in an otherwise realistic image that shouldn’t be there.

    “Hallucinations can mislead viewers,” says Lütjens, who began to wonder whether such hallucinations could be avoided, such that generative AI tools can be trusted to help inform people, particularly in risk-sensitive scenarios. “We were thinking: How can we use these generative AI models in a climate-impact setting, where having trusted data sources is so important?”

    Flood hallucinations

    In their new work, the researchers considered a risk-sensitive scenario in which generative AI is tasked with creating satellite images of future flooding that could be trustworthy enough to inform decisions of how to prepare and potentially evacuate people out of harm’s way.

    Typically, policymakers can get an idea of where flooding might occur based on visualizations in the form of color-coded maps. These maps are the final product of a pipeline of physical models that usually begins with a hurricane track model, which then feeds into a wind model that simulates the pattern and strength of winds over a local region. This is combined with a flood or storm surge model that forecasts how wind might push any nearby body of water onto land. A hydraulic model then maps out where flooding will occur based on the local flood infrastructure and generates a visual, color-coded map of flood elevations over a particular region.

    “The question is: Can visualizations of satellite imagery add another level to this, that is a bit more tangible and emotionally engaging than a color-coded map of reds, yellows, and blues, while still being trustworthy?” Lütjens says.

    The team first tested how generative AI alone would produce satellite images of future flooding. They trained a GAN on actual satellite images taken over Houston before and after Hurricane Harvey. When they tasked the generator to produce new flood images of the same regions, they found that the images resembled typical satellite imagery, but a closer look revealed hallucinations in some images, in the form of floods where flooding should not be possible (for instance, in locations at higher elevation).

    To reduce hallucinations and increase the trustworthiness of the AI-generated images, the team paired the GAN with a physics-based flood model that incorporates real, physical parameters and phenomena, such as an approaching hurricane’s trajectory, storm surge, and flood patterns. With this physics-reinforced method, the team generated satellite images around Houston that depict the same flood extent, pixel by pixel, as forecasted by the flood model.

    “We show a tangible way to combine machine learning with physics for a use case that’s risk-sensitive, which requires us to analyze the complexity of Earth’s systems and project future actions and possible scenarios to keep people out of harm’s way,” Newman says. “We can’t wait to get our generative AI tools into the hands of decision-makers at the local community level, which could make a significant difference and perhaps save lives.”

    The research was supported, in part, by the MIT Portugal Program, the DAF-MIT Artificial Intelligence Accelerator, NASA, and Google Cloud.
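
    As a rough illustration of the hallucination problem described above (not the paper’s pipeline), the sketch below compares a generated image’s implied flood extent, however it is derived in practice, against a physics-based flood mask and counts flooded pixels that the flood model says are impossible. The toy arrays are assumptions:

    ```python
    # Illustrative hallucination check, not the paper's method: compare a
    # generated image's implied flood extent with a physics-based flood mask.
    import numpy as np

    def hallucination_fraction(generated_mask, physics_mask):
        """Fraction of generated flood pixels lying outside the physically possible extent."""
        flooded = generated_mask.sum()
        if flooded == 0:
            return 0.0
        return float(np.logical_and(generated_mask, ~physics_mask).sum() / flooded)

    # Toy 4x4 tile: the flood model says only the left half can flood,
    # but the generator floods one extra column.
    physics = np.zeros((4, 4), dtype=bool)
    physics[:, :2] = True
    generated = np.zeros((4, 4), dtype=bool)
    generated[:, :3] = True

    print(f"hallucinated flood fraction: {hallucination_fraction(generated, physics):.2f}")
    ```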

  • Scientists find a human “fingerprint” in the upper troposphere’s increasing ozone

    Ozone can be an agent of good or harm, depending on where you find it in the atmosphere. Way up in the stratosphere, the colorless gas shields the Earth from the sun’s harsh ultraviolet rays. But closer to the ground, ozone is a harmful air pollutant that can trigger chronic health problems including chest pain, difficulty breathing, and impaired lung function.

    And somewhere in between, in the upper troposphere — the layer of the atmosphere just below the stratosphere, where most aircraft cruise — ozone contributes to warming the planet as a potent greenhouse gas.

    There are signs that ozone is continuing to rise in the upper troposphere despite efforts to reduce its sources at the surface in many nations. Now, MIT scientists confirm that much of ozone’s increase in the upper troposphere is likely due to humans.

    In a paper appearing today in the journal Environmental Science and Technology, the team reports that they detected a clear signal of human influence on upper tropospheric ozone trends in a 17-year satellite record starting in 2005.

    “We confirm that there’s a clear and increasing trend in upper tropospheric ozone in the northern midlatitudes due to human beings rather than climate noise,” says study lead author Xinyuan Yu, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    “Now we can do more detective work and try to understand what specific human activities are leading to this ozone trend,” adds co-author Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in Earth, Atmospheric and Planetary Sciences.

    The study’s MIT authors include Sebastian Eastham and Qindan Zhu, along with Benjamin Santer at the University of California at Los Angeles, Gustavo Correa of Columbia University, Jean-François Lamarque at the National Center for Atmospheric Research, and Jerald Ziemke at NASA Goddard Space Flight Center.

    Ozone’s tangled web

    Understanding ozone’s causes and influences is a challenging exercise. Ozone is not emitted directly, but instead is a product of “precursors” — starting ingredients, such as nitrogen oxides and volatile organic compounds (VOCs), that react in the presence of sunlight to form ozone. These precursors are generated from vehicle exhaust, power plants, chemical solvents, industrial processes, aircraft emissions, and other human activities.

    Whether and how long ozone lingers in the atmosphere depends on a tangle of variables, including the type and extent of human activities in a given area, as well as natural climate variability. For instance, a strong El Niño year could nudge the atmosphere’s circulation in a way that affects ozone’s concentrations, regardless of how much ozone humans are contributing to the atmosphere that year.

    Disentangling the human- versus climate-driven causes of ozone trends, particularly in the upper troposphere, is especially tricky. Complicating matters is the fact that in the lower troposphere — the lowest layer of the atmosphere, closest to ground level — ozone has stopped rising, and has even fallen in some regions at northern midlatitudes in the last few decades. This decrease in lower tropospheric ozone is mainly a result of efforts in North America and Europe to reduce industrial sources of air pollution.

    “Near the surface, ozone has been observed to decrease in some regions, and its variations are more closely linked to human emissions,” Yu notes. “In the upper troposphere, the ozone trends are less well-monitored but seem to decouple with those near the surface, and ozone is more easily influenced by climate variability. So, we don’t know whether and how much of that increase in observed ozone in the upper troposphere is attributed to humans.”

    A human signal amid climate noise

    Yu and Fiore wondered whether a human “fingerprint” in ozone levels, caused directly by human activities, could be strong enough to be detectable in satellite observations in the upper troposphere. To see such a signal, the researchers would first have to know what to look for.

    For this, they looked to simulations of the Earth’s climate and atmospheric chemistry. Following approaches developed in climate science, they reasoned that if they could simulate a number of possible climate variations in recent decades, all with identical human-derived sources of ozone precursor emissions, but each starting with a slightly different climate condition, then any differences among these scenarios should be due to climate noise. By inference, any common signal that emerged when averaging over the simulated scenarios should be due to human-driven causes. Such a signal, then, would be a “fingerprint” revealing human-caused ozone, which the team could look for in actual satellite observations.

    With this strategy in mind, the team ran simulations using a state-of-the-art chemistry climate model. They ran multiple climate scenarios, each starting from the year 1950 and running through 2014.

    From their simulations, the team saw a clear and common signal across scenarios, which they identified as a human fingerprint. They then looked to tropospheric ozone products derived from multiple instruments aboard NASA’s Aura satellite.

    “Quite honestly, I thought the satellite data were just going to be too noisy,” Fiore admits. “I didn’t expect that the pattern would be robust enough.”

    But the satellite observations they used gave them a good enough shot. The team looked through the upper tropospheric ozone data derived from the satellite products, from the years 2005 to 2021, and found that, indeed, they could see the signal of human-caused ozone that their simulations predicted. The signal is especially pronounced over Asia, where industrial activity has risen significantly in recent decades and where abundant sunlight and frequent weather events loft pollution, including ozone and its precursors, to the upper troposphere.

    Yu and Fiore are now looking to identify the specific human activities that are leading to ozone’s increase in the upper troposphere.

    “Where is this increasing trend coming from? Is it the near-surface emissions from combusting fossil fuels in vehicle engines and power plants? Is it the aircraft that are flying in the upper troposphere? Is it the influence of wildland fires? Or some combination of all of the above?” Fiore says. “Being able to separate human-caused impacts from natural climate variations can help to inform strategies to address climate change and air pollution.”

    This research was funded, in part, by NASA.
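
    The ensemble-averaging logic described above can be sketched in a few lines. This is schematic, not the study’s detection code: members share one forced trend but different noise, the ensemble mean serves as the “fingerprint,” and the spread of the leftover trends stands in for climate noise. All numbers are synthetic:

    ```python
    # Schematic fingerprint detection, not the study's code: averaging an
    # ensemble with identical forcing isolates the common, human-driven signal.
    import numpy as np

    rng = np.random.default_rng(42)
    years, members = 17, 10                    # e.g., a 2005-2021 record
    t = np.arange(years) / 10.0                # time in decades

    forced_trend = 1.0                         # synthetic human-driven trend (units per decade)
    ensemble = forced_trend * t + rng.normal(0.0, 0.5, size=(members, years))

    fingerprint = ensemble.mean(axis=0)        # common, forced signal across members
    residuals = ensemble - fingerprint         # leftover internal variability ("noise")

    def trend(series):
        """Least-squares linear trend (units per decade)."""
        return np.polyfit(t, series, 1)[0]

    signal = trend(fingerprint)
    noise_spread = np.std([trend(r) for r in residuals])
    print(f"fingerprint trend: {signal:.2f}  noise spread: {noise_spread:.2f}  S/N: {signal / noise_spread:.1f}")
    ```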

  • Researchers return to Arctic to test integrated sensor nodes

    Shimmering ice extends in all directions as far as the eye can see. Air temperatures plunge to minus 40 degrees Fahrenheit and colder with wind chills. Ocean currents drag large swaths of ice floating at sea. Polar bears, narwhals, and other iconic Arctic species roam wild.

    For a week this past spring, MIT Lincoln Laboratory researchers Ben Evans and Dave Whelihan called this place — drifting some 200 nautical miles offshore from Prudhoe Bay, Alaska, on the frozen Beaufort Sea in the Arctic Circle — home. Two ice runways for small aircraft provided their only way in and out of this remote wilderness; heated tents provided their only shelter from the bitter cold.

    Video: MIT Lincoln Laboratory

    Here, in the northernmost region on Earth, Evans and Whelihan joined other groups conducting fieldwork in the Arctic as part of Operation Ice Camp (OIC) 2024, an operational exercise run by the U.S. Navy’s Arctic Submarine Laboratory (ASL). Riding on snowmobiles and helicopters, the duo deployed a small set of integrated sensor nodes that measure everything from atmospheric conditions to ice properties to the structure of water deep below the surface.

    Ultimately, they envision deploying an unattended network of these low-cost sensor nodes across the Arctic to increase scientific understanding of the trending loss in sea ice extent and thickness. Warming much faster than the rest of the world, the Arctic is a ground zero for climate change, with cascading impacts across the planet that include rising sea levels and extreme weather. Openings in the sea ice cover, or leads, are concerning not only for climate change but also for global geopolitical competition over transit routes and natural resources. A synoptic view of the physical processes happening above, at, and below sea ice is key to determining why the ice is diminishing. In turn, this knowledge can help predict when and where fractures will occur, to inform planning and decision-making.

    Winter “camp”

    Every two years, OIC, previously called Ice Exercise (ICEX), provides a way for the international community to access the Arctic for operational readiness exercises and scientific research, with the focus switching back and forth; this year’s focus was scientific research. Coordination, planning, and execution of the month-long operation are led by ASL, a division of the U.S. Navy’s Undersea Warfighting Development Center responsible for ensuring the submarine force can effectively operate in the Arctic Ocean.

    Making this inhospitable and unforgiving environment safe for participants takes considerable effort. The critical first step is determining where to set up camp. In the weeks before the first participants arrived for OIC 2024, ASL — with assistance from the U.S. National Ice Center, University of Alaska Fairbanks Geophysical Institute, and UIC Science — flew over large sheets of floating ice (ice floes) identified via satellite imagery, landed on some they thought might be viable sites, and drilled through the ice to check its thickness. The ice floe must not only be large enough to accommodate construction of a camp and two runways but also feature both multiyear ice and first-year ice. Multiyear ice is thick and strong but rough, making it ideal for camp setup, while the smooth but thinner first-year ice is better suited for building runways. Once the appropriate ice floe was selected, ASL began to haul in equipment and food, build infrastructure like lodging and a command center, and fly in a small group before fully operationalizing the site. They also identified locations near the camp for two Navy submarines to surface through the ice.

    The more than 200 participants represented U.S. and allied forces and scientists from research organizations and universities. Distinguished visitors from government offices also attended OIC to see the unique Arctic environment and unfolding challenges firsthand.

    “Our ASL hosts do incredible work to build this camp from scratch and keep us alive,” Evans says.

    Evans and Whelihan, part of the laboratory’s Advanced Undersea Systems and Technology Group, first trekked to the Arctic in March 2022 for ICEX 2022. (The laboratory in general has been participating since 2016 in these events, the first iteration of which occurred in 1946.) There, they deployed a suite of commercial off-the-shelf sensors for detecting acoustic (sound) and seismic (vibration) events created by ice fractures or collisions, and for measuring salinity, temperature, and pressure in the water below the ice. They also deployed a prototype fiber-based temperature sensor array developed by the laboratory and research partners for precisely measuring temperature across the entire water column at one location, and a University of New Hampshire (UNH)−supplied echosounder to investigate the different layers present in the water column. In this maiden voyage, their goals were to assess how these sensors fared in the harsh Arctic conditions and to collect a dataset from which characteristic signatures of ice-fracturing events could begin to be identified. These events would be correlated with weather and water conditions to eventually offer a predictive capability.

    “We saw real phenomenology in our data,” Whelihan says. “But, we’re not ice experts. What we’re good at here at the laboratory is making and deploying sensors. That’s our place in the world of climate science: to be a data provider. In fact, we hope to open source all of our data this year so that ice scientists can access and analyze them and then we can make enhanced sensors and collect more data.”

    Interim ice

    In the two years since that expedition, they and their colleagues have been modifying their sensor designs and deployment strategies. As Evans and Whelihan learned at ICEX 2022, to be resilient in the Arctic, a sensor must not only be kept warm and dry during deployment but also be deployed in a way to prevent breaking. Moreover, sufficient power and data links are needed to collect and access sensor data.

    “We can make cold-weather electronics, no problem,” Whelihan says. “The two drivers are operating the sensors in an energy-starved environment — the colder it is, the worse batteries perform — and keeping them from getting destroyed when ice floes crash together as leads in the ice open up.”

    Their work in the interim to OIC 2024 involved integrating the individual sensors into hardened sensor nodes and practicing deploying these nodes in easier-to-access locations. To facilitate incorporating additional sensors into a node, Whelihan spearheaded the development of an open-source, easily extensible hardware and software architecture.

    In March 2023, the Lincoln Laboratory team deployed three sensor nodes for a week on Huron Bay off Lake Superior through Michigan Tech’s Great Lakes Research Center (GLRC). Engineers from GLRC helped the team safely set up an operations base on the ice. They demonstrated that the sensor integration worked, and the sensor nodes proved capable of surviving for at least a week in relatively harsh conditions. The researchers recorded seismic activity on all three nodes, corresponding to some ice breaking further up the bay.

    “Proving our sensor node in an Arctic surrogate environment provided a stepping stone for testing in the real Arctic,” Evans says.

    Evans then received an invitation from Ignatius Rigor, the coordinator of the International Arctic Buoy Program (IABP), to join him on an upcoming trip to Utqiaġvik (formerly Barrow), Alaska, and deploy one of their seismic sensor nodes on the ice there (with support from UIC Science). The IABP maintains a network of Arctic buoys equipped with meteorological and oceanic sensors. Data collected by these buoys are shared with the operational and research communities to support real-time operations (e.g., forecasting sea ice conditions for coastal Alaskans) and climate research. However, these buoys are typically limited in the frequency at which they collect data, so phenomenology on shorter time scales important to climate change may be missed. Moreover, these buoys are difficult and expensive to deploy because they are designed to survive in the harshest environments for years at a time. The laboratory-developed sensor nodes could offer an inexpensive, easier-to-deploy option for collecting more data over shorter periods of time.

    In April 2023, Evans placed a sensor node in Utqiaġvik on landfast sea ice, which is stationary ice anchored to the seabed just off the coast. During the sensor node’s week-long deployment, a big piece of drift ice (ice not attached to the seabed or other fixed object) broke off and crashed into the landfast ice. The event was recorded by a radar maintained by the University of Alaska Fairbanks that monitors sea ice movement in near real time to warn of any instability. Though this phenomenology is not exactly the same as that expected for Arctic sea ice, the researchers were encouraged to see seismic activity recorded by their sensor node.

    In December 2023, Evans and Whelihan headed to New Hampshire, where they conducted echosounder testing in UNH’s engineering test tank and on the Piscataqua River. Together with their UNH partners, they sought to determine whether a low-cost, hobby-grade echosounder could detect the same phenomenology of interest as the high-fidelity UNH echosounder, which would be far too costly to deploy in sensor nodes across the Arctic. In the test tank and on the river, the low-cost echosounder proved capable of detecting masses of water moving in the water column, but with considerably less structural detail than afforded by the higher-cost option. Seeing such dynamics is important to inferring where water comes from and understanding how it affects sea ice breakup — for example, how warm water moving in from the Pacific Ocean is coming into contact with and melting the ice. So, the laboratory researchers and UNH partners have been building a medium-fidelity, medium-cost echosounder.

    In January 2024, Evans and Whelihan — along with Jehan Diaz, a fellow staff member in their research group — returned to GLRC. With logistical support from their GLRC hosts, they snowmobiled across the ice on Portage Lake, where they practiced several activities to prepare for OIC 2024: augering (drilling) six-inch holes in the ice, albeit in thinner ice than that in the Arctic; placing their long, pipe-like sensor nodes through these holes; operating cold-hardened drones to interact with the nodes; and retrieving the nodes. They also practiced sensor calibration by hitting the ice with an iron bar some distance away from the nodes and correlating this distance with the resulting measured acoustic and seismic intensity.

    “Our time at GLRC helped us mitigate a lot of risks and prepare to deploy these complex systems in the Arctic,” Whelihan says.

    Arctic again

    To get to OIC, Evans and Whelihan first flew to Prudhoe Bay and reacclimated to the frigid temperatures. They spent the next two days at the Deadhorse Aviation Center hangar inspecting their equipment for transit-induced damage, which included squashed cables and connectors that required rejiggering.

    “That’s part of the adventure story,” Evans says. “Getting stuff to Prudhoe Bay is not your standard shipping; it’s ice-road trucking.”

    From there, they boarded a small aircraft to the ice camp.

    “Even though this trip marked our second time coming here, it was still disorienting,” Evans continues. “You land in the middle of nowhere on a small aircraft after a couple-hour flight. You get out bundled in all of your Arctic gear in this remote, pristine environment.”

    After unloading and rechecking their equipment for any damage, calibrating their sensors, and attending safety briefings, they were ready to begin their experiments.

    An icy situation

    Inside the project tent, Evans and Whelihan deployed the UNH-supplied echosounder and a suite of ground-truth sensors on an automated winch to profile water conductivity, temperature, and depth (CTD). Echosounder data needed to be validated with associated CTD data to determine the source of the water in the water column. Ocean properties change as a function of depth, and these changes are important to capture, in part because masses of water coming in from the Atlantic and Pacific oceans arrive at different depths. Though masses of warm water have always existed, climate change–related mechanisms are now bringing them into contact with the ice.

    “As ice breaks up, wind can directly interact with the ocean because it’s lacking that barrier of ice cover,” Evans explains. “Kinetic energy from the wind causes mixing in the ocean; all the warm water that used to stay at depth instead gets brought up and interacts with the ice.”

    They also deployed four of their sensor nodes several miles outside of camp. To access this deployment site, they rode on a sled pulled via a snowmobile driven by Ann Hill, an ASL field party leader trained in Arctic survival and wildlife encounters. The temperature that day was minus 55 degrees Fahrenheit. At such a dangerously cold temperature, frostnip and frostbite are all too common. To avoid removal of gloves or other protective clothing, the researchers enabled the nodes with WiFi capability (the nodes also have a satellite communications link to transmit low-bandwidth data). Large amounts of data are automatically downloaded over WiFi to an arm-wearable haptic (touch-based) system when a user walks up to a node.

    “It was so cold that the holes we were drilling in the ice to reach the water column were freezing solid,” Evans explains. “We realized it was going to be quite an ordeal to get our sensor nodes out of the ice.”

    So, after drilling a big hole in the ice, they deployed only one central node with all the sensor components: a commercial echosounder, an underwater microphone, a seismometer, and a weather station. They deployed the other three nodes, each with a seismometer and weather station, atop the ice.

    “One of our design considerations was flexibility,” Whelihan says. “Each node can integrate as few or as many sensors as desired.”

    The small sensor array was only collecting data for about a day when Evans and Whelihan, who were at the time on a helicopter, saw that their initial field site had become completely cut off from camp by a 150-meter-wide ice lead. They quickly returned to camp to load the tools needed to pull the nodes, which were no longer accessible by snowmobile. Two recently arrived staff members from the Ted Stevens Center for Arctic Security Studies offered to help them retrieve their nodes. The helicopter landed on the ice floe near a crack, and the pilot told them they had half an hour to complete their recovery mission. By the time they had retrieved all four sensors, the crack had increased from thumb to fist size.

    “When we got home, we analyzed the collected sensor data and saw a spike in seismic activity corresponding to what could be the major ice-fracturing event that necessitated our node recovery mission,” Whelihan says.

    The researchers also conducted experiments with their Arctic-hardened drones to evaluate their utility for retrieving sensor node data and to develop concepts of operations for future capabilities.

    “The idea is to have some autonomous vehicle land next to the node, download data, and come back, like a data mule, rather than having to expend energy getting data off the system, say via high-speed satellite communications,” Whelihan says. “We also started testing whether the drone is capable on its own of finding sensors that are constantly moving and getting close enough to them. Even flying in 25-mile-per-hour winds, and at very low temperatures, the drone worked well.”

    Aside from carrying out their experiments, the researchers had the opportunity to interact with other participants. Their “roommates” were ice scientists from Norway and Finland. They met other ice and water scientists conducting chemistry experiments on the salt content of ice taken from different depths in the ice sheet (when ocean water freezes, salt tends to get pushed out of the ice). One of their collaborators — Nicholas Schmerr, an ice seismologist from the University of Maryland — placed high-quality geophones (for measuring vibrations in the ice) alongside their nodes deployed on the camp field site. They also met with junior enlisted submariners, who temporarily came to camp to open up spots on the submarine for distinguished visitors.

    “Part of what we’ve been doing over the last three years is building connections within the Arctic community,” Evans says. “Every time I start to get a handle on the phenomenology that exists out here, I learn something new. For example, I didn’t know that sometimes a layer of ice forms a little bit deeper than the primary ice sheet, and you can actually see fish swimming in between the layers.”

    “One day, we were out with our field party leader, who saw fog while she was looking at the horizon and said the ice was breaking up,” Whelihan adds. “I said, ‘Wait, what?’ As she explained, when an ice lead forms, fog comes out of the ocean. Sure enough, within 30 minutes, we had quarter-mile visibility, whereas beforehand it was unlimited.”

    Back to solid ground

    Before leaving, Whelihan and Evans retrieved and packed up all the remaining sensor nodes, adopting the “leave no trace” philosophy of preserving natural places.

    “Only a limited number of people get access to this special environment,” Whelihan says. “We hope to grow our footprint at these events in future years, giving opportunities to other laboratory staff members to attend.”

    In the meantime, they will analyze the collected sensor data and refine their sensor node design. One design consideration is how to replenish the sensors’ battery power. A potential path forward is to leverage the temperature difference between water and air, and harvest energy from the water currents moving under ice floes. Wind energy may provide another viable solution. Solar power would only work for part of the year because the Arctic Circle undergoes periods of complete darkness.

    The team is also seeking external sponsorship to continue their work engineering sensing systems that advance the scientific community’s understanding of changes to Arctic ice; this work is currently funded through Lincoln Laboratory’s internally administered R&D portfolio on climate change. And, in learning more about this changing environment and its critical importance to strategic interests, they are considering other sensing problems that they could tackle using their Arctic engineering expertise.

    “The Arctic is becoming a more visible and important region because of how it’s changing,” Evans concludes. “Going forward as a country, we must be able to operate there.”
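
    The calibration exercise mentioned above (striking the ice at known distances and relating distance to recorded intensity) can be illustrated with a toy attenuation fit; the power-law model and every number below are made up for the example:

    ```python
    # Toy calibration: fit a power-law attenuation A = A0 * r**(-n) to synthetic
    # hits at known distances, then invert it to estimate range from an amplitude.
    import numpy as np

    r_cal = np.array([10.0, 20.0, 40.0, 80.0, 160.0])     # calibration distances (m)
    a_cal = np.array([1.00, 0.47, 0.24, 0.11, 0.052])     # measured peak amplitudes (arb. units)

    slope, intercept = np.polyfit(np.log(r_cal), np.log(a_cal), 1)
    n, a0 = -slope, np.exp(intercept)                      # log A = log A0 - n log r

    def estimate_range(amplitude):
        """Invert the fitted attenuation law to estimate source distance in meters."""
        return (a0 / amplitude) ** (1.0 / n)

    print(f"fitted exponent n = {n:.2f}")
    print(f"amplitude 0.15 -> roughly {estimate_range(0.15):.0f} m from the sensor")
    ```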

  • Microscopic defects in ice influence how massive glaciers flow, study shows

    As they seep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

    Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice.

    The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

    Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

    “This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

    “Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment. That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

    Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

    Micro flow

    Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level.

    In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

    “Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

    The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed. Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

    The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant. Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding.

    Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

    “Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

    A mapping match

    For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress.

    The scientists fed into the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

    “As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says. “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.”
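
    A toy calculation helps show why the dominant mechanism matters, as described above. The prefactors and exponents below are placeholders, not the study’s values; the point is only that whichever mechanism produces the larger strain rate sets the effective stress sensitivity:

    ```python
    # Placeholder numbers only: whichever microscopic mechanism produces the larger
    # strain rate dominates and sets the effective stress exponent (stress sensitivity).

    def strain_rates(stress_kpa, grain_size_mm):
        """Toy strain-rate contributions from two deformation mechanisms (arbitrary units)."""
        dislocation = 1e-8 * stress_kpa ** 4.0                    # n ~ 4, grain-size independent
        gbs = 5e-4 * stress_kpa ** 1.8 / grain_size_mm ** 1.4     # grain boundary sliding
        return dislocation, gbs

    for stress in (20.0, 50.0, 100.0):        # hypothetical driving stresses in kPa
        disl, gbs = strain_rates(stress, grain_size_mm=2.0)
        dominant = "dislocation creep" if disl > gbs else "grain boundary sliding"
        n_eff = 4.0 if disl > gbs else 1.8
        print(f"stress {stress:5.1f} kPa: dominant = {dominant:22s} effective n = {n_eff}")
    ```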

  • Study: Heavy snowfall and rain may contribute to some earthquakes

    When scientists look for an earthquake’s cause, their search often starts underground. As centuries of seismic studies have made clear, it’s the collision of tectonic plates and the movement of subsurface faults and fissures that primarily trigger a temblor.

    But MIT scientists have now found that certain weather events may also play a role in setting off some quakes.

    In a study appearing today in Science Advances, the researchers report that episodes of heavy snowfall and rain likely contributed to a swarm of earthquakes over the past several years in northern Japan. The study is the first to show that climate conditions could initiate some quakes.

    “We see that snowfall and other environmental loading at the surface impacts the stress state underground, and the timing of intense precipitation events is well-correlated with the start of this earthquake swarm,” says study author William Frank, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So, climate obviously has an impact on the response of the solid earth, and part of that response is earthquakes.”

    The new study focuses on a series of ongoing earthquakes in Japan’s Noto Peninsula. The team discovered that seismic activity in the region is surprisingly synchronized with certain changes in underground pressure, and that those changes are influenced by seasonal patterns of snowfall and precipitation. The scientists suspect that this new connection between quakes and climate may not be unique to Japan and could play a role in shaking up other parts of the world.

    Looking to the future, they predict that the climate’s influence on earthquakes could be more pronounced with global warming.

    “If we’re going into a climate that’s changing, with more extreme precipitation events, and we expect a redistribution of water in the atmosphere, oceans, and continents, that will change how the Earth’s crust is loaded,” Frank adds. “That will have an impact for sure, and it’s a link we could further explore.”

    The study’s lead author is former MIT research associate Qing-Yu Wang (now at Grenoble Alpes University); co-authors include EAPS postdoc Xin Cui, Yang Lu of the University of Vienna, Takashi Hirose of Tohoku University, and Kazushige Obara of the University of Tokyo.

    Seismic speed

    Since late 2020, hundreds of small earthquakes have shaken up Japan’s Noto Peninsula — a finger of land that curves north from the country’s main island into the Sea of Japan. Unlike a typical earthquake sequence, which begins as a main shock that gives way to a series of aftershocks before dying out, Noto’s seismic activity is an “earthquake swarm” — a pattern of multiple, ongoing quakes with no obvious main shock, or seismic trigger.

    The MIT team, along with their colleagues in Japan, aimed to spot any patterns in the swarm that would explain the persistent quakes. They started by looking through the Japanese Meteorological Agency’s earthquake catalog, which provides data on seismic activity throughout the country over time. They focused on quakes in the Noto Peninsula over the last 11 years, during which the region has experienced episodic earthquake activity, including the most recent swarm.

    With seismic data from the catalog, the team counted the number of seismic events that occurred in the region over time. They found that the timing of quakes prior to 2020 appeared sporadic and unrelated; starting in late 2020, earthquakes grew more intense and clustered in time, signaling the start of the swarm, with quakes that are correlated in some way.

    The scientists then looked to a second dataset of seismic measurements taken by monitoring stations over the same 11-year period. Each station continuously records any displacement, or local shaking that occurs. The shaking from one station to another can give scientists an idea of how fast a seismic wave travels between stations. This “seismic velocity” is related to the structure of the Earth through which the seismic wave is traveling. Wang used the station measurements to calculate the seismic velocity between every station in and around Noto over the last 11 years.

    The researchers generated an evolving picture of seismic velocity beneath the Noto Peninsula and observed a surprising pattern: In 2020, around when the earthquake swarm is thought to have begun, changes in seismic velocity appeared to be synchronized with the seasons.

    “We then had to explain why we were observing this seasonal variation,” Frank says.

    Snow pressure

    The team wondered whether environmental changes from season to season could influence the underlying structure of the Earth in a way that would set off an earthquake swarm. Specifically, they looked at how seasonal precipitation would affect the underground “pore fluid pressure” — the amount of pressure that fluids in the Earth’s cracks and fissures exert within the bedrock.

    “When it rains or snows, that adds weight, which increases pore pressure, which allows seismic waves to travel through slower,” Frank explains. “When all that weight is removed, through evaporation or runoff, all of a sudden, that pore pressure decreases and seismic waves are faster.”

    Wang and Cui developed a hydromechanical model of the Noto Peninsula to simulate the underlying pore pressure over the last 11 years in response to seasonal changes in precipitation. They fed into the model meteorological data from this same period, including measurements of daily snow, rainfall, and sea-level changes. From their model, they were able to track changes in excess pore pressure beneath the Noto Peninsula, before and during the earthquake swarm. They then compared this timeline of evolving pore pressure with their evolving picture of seismic velocity.

    “We had seismic velocity observations, and we had the model of excess pore pressure, and when we overlapped them, we saw they just fit extremely well,” Frank says.

    In particular, they found that when they included snowfall data, and especially, extreme snowfall events, the fit between the model and observations was stronger than if they only considered rainfall and other events. In other words, the ongoing earthquake swarm that Noto residents have been experiencing can be explained in part by seasonal precipitation, and particularly, heavy snowfall events.

    “We can see that the timing of these earthquakes lines up extremely well with multiple times where we see intense snowfall,” Frank says. “It’s well-correlated with earthquake activity. And we think there’s a physical link between the two.”

    The researchers suspect that heavy snowfall and similar extreme precipitation could play a role in earthquakes elsewhere, though they emphasize that the primary trigger will always originate underground.

    “When we first want to understand how earthquakes work, we look to plate tectonics, because that is and will always be the number one reason why an earthquake happens,” Frank says. “But, what are the other things that could affect when and how an earthquake happens? That’s when you start to go to second-order controlling factors, and the climate is obviously one of those.”

    This research was supported, in part, by the National Science Foundation.
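
    The comparison at the heart of the analysis (precipitation-driven loading versus changes in seismic velocity) can be caricatured as follows. This is not the team’s hydromechanical model: surface load is simply smoothed with an exponential lag as a stand-in for pore-pressure diffusion, and the velocity series is synthetic:

    ```python
    # Caricature of the load-vs-velocity comparison, not the team's model.
    import numpy as np

    rng = np.random.default_rng(1)
    days = 4 * 365
    t = np.arange(days)

    # Synthetic daily loading: winter snowpack plus occasional random storms.
    seasonal = np.clip(np.cos(2 * np.pi * t / 365.0), 0, None)
    load = seasonal + 0.3 * rng.random(days) * (rng.random(days) < 0.1)

    def lagged_pressure(load, tau_days=30.0):
        """Exponentially smoothed load as a crude proxy for diffusing pore pressure."""
        p = np.zeros_like(load)
        alpha = 1.0 / tau_days
        for i in range(1, len(load)):
            p[i] = p[i - 1] + alpha * (load[i] - p[i - 1])
        return p

    pressure = lagged_pressure(load)
    # Synthetic observation: higher pore pressure -> slower seismic waves (negative dv/v).
    dv_v = -0.8 * pressure + 0.05 * rng.standard_normal(days)

    print(f"correlation between modeled pressure and dv/v: {np.corrcoef(pressure, dv_v)[0, 1]:.2f}")
    ```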

  • MIT-derived algorithm helps forecast the frequency of extreme weather

    To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston.

    To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

    “If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

    Sapsis and his colleagues have now developed a method to “correct” the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “nudges” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme.

    This animation shows the evolution of storms around the northern hemisphere, as a result of a high-resolution storm model, combined with the MIT team’s corrected global climate model. The simulation improves the modeling of extreme values for wind, temperature, and humidity, which typically have significant errors in coarse scale models. Credit: Courtesy of Ruby Leung and Shixuan Zhang, PNNL

    Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

    “Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” Sapsis says. “If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

    The team’s results appear today in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

    Over the hood

    Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100 kilometers or so.

    “It’s a very heavy computation requiring supercomputers,” Sapsis notes. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometer or less.”

    To improve the resolution of these coarse climate models, scientists typically have gone under the hood to try and fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

    “People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” Sapsis explains. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

    The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation toward something that more closely represents real-world conditions. The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.
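    For a concrete picture of this "correct the output, not the equations" idea, here is a minimal sketch. It is not the team's actual scheme: a simple ridge-regression map stands in for their machine-learning correction, and the arrays are synthetic placeholders for paired coarse-model and observed fields.

```python
import numpy as np

# Illustrative sketch only: a linear (ridge-regression) correction stands in for the
# team's machine-learning scheme, and all arrays are synthetic placeholders.
rng = np.random.default_rng(0)

n_snapshots = 1000   # paired coarse-model / reference snapshots (e.g., 6-hourly fields)
n_features = 50      # flattened grid values of temperature, humidity, wind, ...

# Placeholder data: coarse-model output X and corresponding "real-world" fields Y.
X = rng.normal(size=(n_snapshots, n_features))                 # uncorrected simulation
A = np.eye(n_features) + 0.1 * rng.normal(size=(n_features, n_features))
Y = X @ A                                                      # stand-in observations

# Learn a correction operator W that maps model output toward observations:
#   W = argmin ||X W - Y||^2 + lam ||W||^2  (closed-form ridge solution).
lam = 1e-2
W = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ Y)

def correct(snapshot):
    """Nudge a new coarse-model snapshot toward observed statistics."""
    return snapshot @ W

# Each new snapshot from the coarse climate model is passed through the learned map
# before being handed to finer-scale storm or flood models.
new_snapshot = rng.normal(size=(1, n_features))
print(correct(new_snapshot).shape)  # (1, 50)
```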

    “What we’re doing is trying to correct dynamics, as in how an extreme weather feature, such as the windspeeds during a Hurricane Sandy event, will look like in the coarse model, versus in reality,” Sapsis says. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”

    Climate correction

    As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the U.S. Department of Energy that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learned dynamical associations between the measured weather features and the E3SM model. They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which were not used for training.
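    A hedged sketch of that train-then-evaluate protocol, using synthetic stand-in data and a plain bias correction in place of the paper's learned dynamical correction, might look like this:

```python
import numpy as np

# Synthetic stand-ins only: a single grid cell's daily-max temperature (degrees F),
# a biased "coarse model," and a plain bias correction in place of the learned scheme.
rng = np.random.default_rng(1)

years_train, years_eval = 8, 36                       # mirrors the 8-year / 36-year split
obs = rng.normal(85, 10, size=(years_train + years_eval) * 365)   # "observations"
model = obs - 5 + rng.normal(0, 3, size=obs.shape)                # biased coarse model

train = slice(0, years_train * 365)
held_out = slice(years_train * 365, None)

# "Training" on the first eight years; the paper learns a far richer dynamical map.
bias = np.mean(obs[train] - model[train])
corrected = model[held_out] + bias

# Compare how often extremes (days above 105 F) occur over the held-out 36 years.
threshold = 105.0
for name, series in [("observed", obs[held_out]),
                     ("uncorrected", model[held_out]),
                     ("corrected", corrected)]:
    freq = np.mean(series > threshold) * 365          # expected extreme days per year
    print(f"{name:12s}: {freq:.2f} days/year above {threshold:.0f} F")
```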

    “We’re not talking about huge differences in absolute terms,” Sapsis says. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

    When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

    “We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” Sapsis says. “Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analyzing future climate scenarios.”

    “The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study. “It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

    This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.


    Gosha Geogdzhayev and Sadhana Lolla named 2024 Gates Cambridge Scholars

    This article was updated on April 23 to reflect the promotion of Gosha Geogdzhayev from alternate to winner of the Gates Cambridge Scholarship.

    MIT seniors Gosha Geogdzhayev and Sadhana Lolla have won the prestigious Gates Cambridge Scholarship, which offers students an opportunity to pursue graduate study in the field of their choice at Cambridge University in the U.K.

    Established in 2000, Gates Cambridge offers full-cost post-graduate scholarships to outstanding applicants from countries outside of the U.K. The mission of Gates Cambridge is to build a global network of future leaders committed to improving the lives of others.

    Gosha Geogdzhayev

    Originally from New York City, Geogdzhayev is a senior majoring in physics with minors in mathematics and computer science. At Cambridge, Geogdzhayev intends to pursue an MPhil in quantitative climate and environmental science. He is interested in applying these subjects to climate science and intends to spend his career developing novel statistical methods for climate prediction.

    At MIT, Geogdzhayev researches climate emulators with Professor Raffaele Ferrari’s group in the Department of Earth, Atmospheric and Planetary Sciences and is part of the “Bringing Computation to the Climate Challenge” Grand Challenges project. He is currently working on an operator-based emulator for the projection of climate extremes. Previously, Geogdzhayev studied the statistics of changing chaotic systems, work that has recently been published as a first-author paper.

    As a recipient of the National Oceanic and Atmospheric Administration (NOAA) Hollings Scholarship, Geogdzhayev has worked on bias correction methods for climate data at the NOAA Geophysical Fluid Dynamics Laboratory. He is the recipient of several other awards in the field of earth and atmospheric sciences, notably the American Meteorological Society Ward and Eileen Seguin Scholarship.

    Outside of research, Geogdzhayev enjoys writing poetry and is actively involved with his living community, Burton 1, for which he has previously served as floor chair.

    Sadhana Lolla

    Lolla, a senior from Clarksburg, Maryland, is majoring in computer science and minoring in mathematics and literature. At Cambridge, she will pursue an MPhil in technology policy.

    In the future, Lolla aims to lead conversations on deploying and developing technology for marginalized communities, such as the rural Indian village that her family calls home, while also conducting research in embodied intelligence.

    At MIT, Lolla conducts research on safe and trustworthy robotics and deep learning at the Distributed Robotics Laboratory with Professor Daniela Rus. Her research has spanned debiasing strategies for autonomous vehicles and accelerating robotic design processes. At Microsoft Research and Themis AI, she works on creating uncertainty-aware frameworks for deep learning, work that has impacts across computational biology, language modeling, and robotics. She has presented her work at the Neural Information Processing Systems (NeurIPS) conference and the International Conference on Machine Learning (ICML).

    Outside of research, Lolla leads initiatives to make computer science education more accessible globally. She is an instructor for class 6.S191 (MIT Introduction to Deep Learning), one of the largest AI courses in the world, which reaches millions of students annually. She serves as the curriculum lead for Momentum AI, the only U.S. program that teaches AI to underserved students for free, and she has taught hundreds of students in Northern Scotland as part of the MIT Global Teaching Labs program.

    Lolla was also the director for xFair, MIT’s largest student-run career fair, and is an executive board member for Next Sing, where she works to make a cappella more accessible for students across musical backgrounds. In her free time, she enjoys singing, solving crossword puzzles, and baking.