More stories

  •

    Microscopic defects in ice influence how massive glaciers flow, study shows

    As they seep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

    Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice.

    The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

    Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

    “This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

    “Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment. That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

    Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

    Micro flow

    Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level.

    In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

    “Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

    The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed. Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

    The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant. Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding.

    Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

    “Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

    A mapping match

    For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress.

    The scientists fed into the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

    “As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says. “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.”
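    The mechanism-dependent sensitivity described above can be illustrated with a toy calculation in Python. The sketch below weighs a dislocation-creep flow law against a grain-boundary-sliding law to produce an effective stress exponent. The constants and function names are invented for this example, loosely patterned on published laboratory flow laws for ice; they are not the study's calibrated model.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def strain_rate(stress_mpa, grain_size_mm, temp_k, A, n, p, Q):
    """Generic flow law: rate = A * stress^n * grain_size^-p * exp(-Q/RT).

    A, n, p, Q are mechanism-specific parameters (illustrative values only).
    """
    return A * stress_mpa**n * grain_size_mm**-p * math.exp(-Q / (R * temp_k))

def stress_exponent(stress_mpa, grain_size_mm, temp_k):
    """Effective stress sensitivity of the ice.

    Near 4 if dislocation creep dominates, near 1.8 if grain-boundary
    sliding dominates; the result is a deformation-weighted blend.
    """
    disl = strain_rate(stress_mpa, grain_size_mm, temp_k,
                       A=4.0e5, n=4.0, p=0.0, Q=60e3)
    gbs = strain_rate(stress_mpa, grain_size_mm, temp_k,
                      A=3.9e-3, n=1.8, p=1.4, Q=49e3)
    # Weight each mechanism's exponent by its share of total deformation.
    return (4.0 * disl + 1.8 * gbs) / (disl + gbs)
```

A region whose conditions favor dislocation creep comes out near the more stress-sensitive exponent of about 4, while grain-boundary sliding pulls the estimate toward roughly 1.8.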

  •

    Study: Heavy snowfall and rain may contribute to some earthquakes

    When scientists look for an earthquake’s cause, their search often starts underground. As centuries of seismic studies have made clear, it’s the collision of tectonic plates and the movement of subsurface faults and fissures that primarily trigger a temblor.

    But MIT scientists have now found that certain weather events may also play a role in setting off some quakes.

    In a study appearing today in Science Advances, the researchers report that episodes of heavy snowfall and rain likely contributed to a swarm of earthquakes over the past several years in northern Japan. The study is the first to show that climate conditions could initiate some quakes.

    “We see that snowfall and other environmental loading at the surface impacts the stress state underground, and the timing of intense precipitation events is well-correlated with the start of this earthquake swarm,” says study author William Frank, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So, climate obviously has an impact on the response of the solid earth, and part of that response is earthquakes.”

    The new study focuses on a series of ongoing earthquakes in Japan’s Noto Peninsula. The team discovered that seismic activity in the region is surprisingly synchronized with certain changes in underground pressure, and that those changes are influenced by seasonal patterns of snowfall and precipitation. The scientists suspect that this new connection between quakes and climate may not be unique to Japan and could play a role in shaking up other parts of the world.

    Looking to the future, they predict that the climate’s influence on earthquakes could be more pronounced with global warming.

    “If we’re going into a climate that’s changing, with more extreme precipitation events, and we expect a redistribution of water in the atmosphere, oceans, and continents, that will change how the Earth’s crust is loaded,” Frank adds. “That will have an impact for sure, and it’s a link we could further explore.”

    The study’s lead author is former MIT research associate Qing-Yu Wang, now at Grenoble Alpes University; co-authors include EAPS postdoc Xin Cui, Yang Lu of the University of Vienna, Takashi Hirose of Tohoku University, and Kazushige Obara of the University of Tokyo.

    Seismic speed

    Since late 2020, hundreds of small earthquakes have shaken up Japan’s Noto Peninsula — a finger of land that curves north from the country’s main island into the Sea of Japan. Unlike a typical earthquake sequence, which begins as a main shock that gives way to a series of aftershocks before dying out, Noto’s seismic activity is an “earthquake swarm” — a pattern of multiple, ongoing quakes with no obvious main shock, or seismic trigger.

    The MIT team, along with their colleagues in Japan, aimed to spot any patterns in the swarm that would explain the persistent quakes. They started by looking through the Japan Meteorological Agency’s earthquake catalog, which provides data on seismic activity throughout the country over time. They focused on quakes in the Noto Peninsula over the last 11 years, during which the region has experienced episodic earthquake activity, including the most recent swarm.

    With seismic data from the catalog, the team counted the number of seismic events that occurred in the region over time. Prior to late 2020, the timing of quakes appeared sporadic and unrelated; from late 2020 on, earthquakes grew more intense and clustered in time, correlated in some way, signaling the start of the swarm.

    The scientists then looked to a second dataset of seismic measurements taken by monitoring stations over the same 11-year period. Each station continuously records any displacement, or local shaking, that occurs. The shaking from one station to another can give scientists an idea of how fast a seismic wave travels between stations. This “seismic velocity” is related to the structure of the Earth through which the seismic wave is traveling. Wang used the station measurements to calculate the seismic velocity between every station in and around Noto over the last 11 years.

    The researchers generated an evolving picture of seismic velocity beneath the Noto Peninsula and observed a surprising pattern: In 2020, around when the earthquake swarm is thought to have begun, changes in seismic velocity appeared to be synchronized with the seasons.

    “We then had to explain why we were observing this seasonal variation,” Frank says.

    Snow pressure

    The team wondered whether environmental changes from season to season could influence the underlying structure of the Earth in a way that would set off an earthquake swarm. Specifically, they looked at how seasonal precipitation would affect the underground “pore fluid pressure” — the amount of pressure that fluids in the Earth’s cracks and fissures exert within the bedrock.

    “When it rains or snows, that adds weight, which increases pore pressure, which allows seismic waves to travel through slower,” Frank explains. “When all that weight is removed, through evaporation or runoff, all of a sudden, that pore pressure decreases and seismic waves are faster.”

    Wang and Cui developed a hydromechanical model of the Noto Peninsula to simulate the underlying pore pressure over the last 11 years in response to seasonal changes in precipitation. They fed into the model meteorological data from this same period, including measurements of daily snow, rainfall, and sea-level changes. From their model, they were able to track changes in excess pore pressure beneath the Noto Peninsula, before and during the earthquake swarm. They then compared this timeline of evolving pore pressure with their evolving picture of seismic velocity.

    “We had seismic velocity observations, and we had the model of excess pore pressure, and when we overlapped them, we saw they just fit extremely well,” Frank says.

    In particular, they found that when they included snowfall data, and especially extreme snowfall events, the fit between the model and observations was stronger than if they only considered rainfall and other events. In other words, the ongoing earthquake swarm that Noto residents have been experiencing can be explained in part by seasonal precipitation, and particularly, heavy snowfall events.

    “We can see that the timing of these earthquakes lines up extremely well with multiple times where we see intense snowfall,” Frank says. “It’s well-correlated with earthquake activity. And we think there’s a physical link between the two.”

    The researchers suspect that heavy snowfall and similar extreme precipitation could play a role in earthquakes elsewhere, though they emphasize that the primary trigger will always originate underground.

    “When we first want to understand how earthquakes work, we look to plate tectonics, because that is and will always be the number one reason why an earthquake happens,” Frank says. “But what are the other things that could affect when and how an earthquake happens? That’s when you start to go to second-order controlling factors, and the climate is obviously one of those.”

    This research was supported, in part, by the National Science Foundation.
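    The overlay step described above, comparing a modeled pore-pressure series against observed seismic-velocity changes, can be sketched with synthetic data. Everything below (the seasonal loading, the diffusion-like lag, the noise levels) is invented for illustration; the study used a full hydromechanical model, not this toy filter.

```python
import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365 * 3)

# Toy seasonal snow/rain loading (peaks once per year) plus noise.
loading = 1.0 + np.cos(2 * np.pi * days / 365.25) + 0.1 * rng.standard_normal(days.size)

# Toy pore-pressure response: loading passed through a lagged, diffusion-like filter.
kernel = np.exp(-np.arange(60) / 20.0)
pore_pressure = np.convolve(loading, kernel / kernel.sum(), mode="same")

# Higher pore pressure slows seismic waves, so dv/v anticorrelates with it.
dv_over_v = (-0.01 * (pore_pressure - pore_pressure.mean())
             + 0.001 * rng.standard_normal(days.size))

# Pearson correlation between modeled pressure and "observed" velocity change.
r = np.corrcoef(pore_pressure, dv_over_v)[0, 1]
print(f"correlation: {r:.2f}")  # strongly negative for this synthetic example
```

A strongly negative correlation here plays the role of the "fit extremely well" overlap in the study: seasonal loading drives pore pressure, which in turn shows up in seismic velocity.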

  •

    MIT-derived algorithm helps forecast the frequency of extreme weather

    To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston.

    To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

    “If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

    Sapsis and his colleagues have now developed a method to “correct” the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “nudges” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme.


    This animation shows the evolution of storms around the northern hemisphere, as a result of a high-resolution storm model, combined with the MIT team’s corrected global climate model. The simulation improves the modeling of extreme values for wind, temperature, and humidity, which typically have significant errors in coarse scale models. Credit: Courtesy of Ruby Leung and Shixuan Zhang, PNNL

    Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

    “Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” Sapsis says. “If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

    The team’s results appear today in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

    Over the hood

    Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100 kilometers or so.

    “It’s a very heavy computation requiring supercomputers,” Sapsis notes. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometer or less.”

    To improve the resolution of these coarse climate models, scientists typically have gone under the hood to try and fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

    “People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” Sapsis explains. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

    The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation toward something that more closely represents real-world conditions. The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.

    “What we’re doing is trying to correct dynamics, as in how an extreme weather feature, such as the windspeeds during a Hurricane Sandy event, will look like in the coarse model, versus in reality,” Sapsis says. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”
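    The output-correction idea can be illustrated with a toy linear stand-in: learn a map from a coarse model's fields to reference fields on a training era, then apply it to held-out simulations. The real scheme uses machine learning informed by dynamical systems theory; the least-squares fit and all of the data below are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "truth" and a biased, damped coarse-model version of it (training era).
truth_train = rng.standard_normal((2000, 5))            # 2000 snapshots, 5 grid cells
coarse_train = 0.7 * truth_train + 0.5 + 0.1 * rng.standard_normal((2000, 5))

# Fit a linear correction with a bias term: truth ~ [coarse, 1] @ W.
X = np.hstack([coarse_train, np.ones((coarse_train.shape[0], 1))])
W, *_ = np.linalg.lstsq(X, truth_train, rcond=None)

def correct(coarse_fields):
    """Nudge coarse-model output toward the learned reference statistics."""
    X_new = np.hstack([coarse_fields, np.ones((coarse_fields.shape[0], 1))])
    return X_new @ W

# Held-out era: the correction should cut the error of the raw coarse output.
truth_test = rng.standard_normal((500, 5))
coarse_test = 0.7 * truth_test + 0.5 + 0.1 * rng.standard_normal((500, 5))
raw_err = np.abs(coarse_test - truth_test).mean()
corr_err = np.abs(correct(coarse_test) - truth_test).mean()
print(f"raw error {raw_err:.3f} -> corrected {corr_err:.3f}")
```

The same train-on-the-past, apply-to-new-simulations pattern is what the team did at scale, with a learned dynamical correction in place of this linear map.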

    Climate correction

    As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the U.S. Department of Energy that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learned dynamical associations between the measured weather features and the E3SM model. They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which were not used for training.

    “We’re not talking about huge differences in absolute terms,” Sapsis says. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

    When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

    “We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” Sapsis says. “Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analyzing future climate scenarios.”

    “The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study. “It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

    This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.

  •

    Artificial reef designed by MIT engineers could protect marine life, reduce storm damage

    The beautiful, gnarled, nooked-and-crannied reefs that surround tropical islands serve as a marine refuge and natural buffer against stormy seas. But as the effects of climate change bleach and break down coral reefs around the world, and extreme weather events become more common, coastal communities are left increasingly vulnerable to frequent flooding and erosion.

    An MIT team is now hoping to fortify coastlines with “architected” reefs — sustainable, offshore structures engineered to mimic the wave-buffering effects of natural reefs while also providing pockets for fish and other marine life.

    The team’s reef design centers on a cylindrical structure surrounded by four rudder-like slats. The engineers found that when this structure stands up against a wave, it efficiently breaks the wave into turbulent jets that ultimately dissipate most of the wave’s total energy. The team has calculated that the new design could dissipate as much wave energy as existing artificial reefs while using 10 times less material.

    The researchers plan to fabricate each cylindrical structure from sustainable cement, which they would mold in a pattern of “voxels” that could be automatically assembled, and would provide pockets for fish to explore and other marine life to settle in. The cylinders could be connected to form a long, semipermeable wall, which the engineers could erect along a coastline, about half a mile from shore. Based on the team’s initial experiments with lab-scale prototypes, the architected reef could reduce the energy of incoming waves by more than 95 percent.

    “This would be like a long wave-breaker,” says Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. “If waves are 6 meters high coming toward this reef structure, they would be ultimately less than a meter high on the other side. So, this kills the impact of the waves, which could prevent erosion and flooding.”
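    The numbers in the quote can be checked with standard linear wave theory, under which a wave's energy scales with the square of its height. The function below is a back-of-envelope conversion, not part of the team's model; the "more than 95 percent" figure comes from the article.

```python
import math

def transmitted_height(h_incident_m, energy_dissipated_frac):
    """Height of the wave that passes the reef, from E proportional to H^2."""
    return h_incident_m * math.sqrt(1.0 - energy_dissipated_frac)

h_out = transmitted_height(6.0, 0.95)
print(f"{h_out:.2f} m")  # about 1.3 m at exactly 95% dissipation; dissipation
                         # above ~97% gives the sub-meter heights quoted above
```

The square-root relationship is why even the last few percent of energy dissipation matter so much for the height of the wave reaching shore.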

    Details of the architected reef design are reported today in a study appearing in the open-access journal PNAS Nexus. Triantafyllou’s MIT co-authors are Edvard Ronglan SM ’23; graduate students Alfonso Parra Rubio, Jose del Aguila Ferrandis, and Erik Strand; research scientists Patricia Maria Stathatou and Carolina Bastidas; and Professor Neil Gershenfeld, director of the Center for Bits and Atoms; along with Alexis Oliveira Da Silva at the Polytechnic Institute of Paris, Dixia Fan of Westlake University, and Jeffrey Gair Jr. of Scinetics, Inc.

    Leveraging turbulence

    Some regions have already erected artificial reefs to protect their coastlines from encroaching storms. These structures are typically sunken ships, retired oil and gas platforms, and even assembled configurations of concrete, metal, tires, and stones. However, there’s variability in the types of artificial reefs that are currently in place, and no standard for engineering such structures. What’s more, the designs that are deployed tend to have a low wave dissipation per unit volume of material used. That is, it takes a huge amount of material to break enough wave energy to adequately protect coastal communities.

    The MIT team instead looked for ways to engineer an artificial reef that would efficiently dissipate wave energy with less material, while also providing a refuge for fish living along any vulnerable coast.

    “Remember, natural coral reefs are only found in tropical waters,” says Triantafyllou, who is director of the MIT Sea Grant. “We cannot have these reefs, for instance, in Massachusetts. But architected reefs don’t depend on temperature, so they can be placed in any water, to protect more coastal areas.”

    MIT researchers test the wave-breaking performance of two artificial reef structures in the MIT Towing Tank. Credit: Courtesy of the researchers

    The new effort is the result of a collaboration between researchers in MIT Sea Grant, who developed the reef structure’s hydrodynamic design, and researchers at the Center for Bits and Atoms (CBA), who worked to make the structure modular and easy to fabricate on location. The team’s architected reef design grew out of two seemingly unrelated problems. CBA researchers were developing ultralight cellular structures for the aerospace industry, while Sea Grant researchers were assessing the performance of blowout preventers in offshore oil structures — cylindrical valves that are used to seal off oil and gas wells and prevent them from leaking.

    The team’s tests showed that the structure’s cylindrical arrangement generated a high amount of drag. In other words, the structure appeared to be especially efficient in dissipating high-force flows of oil and gas. They wondered: Could the same arrangement dissipate another type of flow, in ocean waves?

    The researchers began to play with the general structure in simulations of water flow, tweaking its dimensions and adding certain elements to see whether and how waves changed as they crashed against each simulated design. This iterative process ultimately landed on an optimized geometry: a vertical cylinder flanked by four long slats, each attached to the cylinder in a way that leaves space for water to flow through the resulting structure. They found this setup essentially breaks up any incoming wave energy, causing parts of the wave-induced flow to spiral to the sides rather than crashing ahead.

    “We’re leveraging this turbulence and these powerful jets to ultimately dissipate wave energy,” Ferrandis says.

    Standing up to storms

    Once the researchers identified an optimal wave-dissipating structure, they fabricated a laboratory-scale version of an architected reef made from a series of the cylindrical structures, which they 3D-printed from plastic. Each test cylinder measured about 1 foot wide and 4 feet tall. They assembled a number of cylinders, each spaced about a foot apart, to form a fence-like structure, which they then lowered into a wave tank at MIT. They then generated waves of various heights and measured them before and after passing through the architected reef.

    “We saw the waves reduce substantially, as the reef destroyed their energy,” Triantafyllou says.

    The team has also looked into making the structures more porous, and friendly to fish. They found that, rather than making each structure from a solid slab of plastic, they could use a more affordable and sustainable type of cement.

    “We’ve worked with biologists to test the cement we intend to use, and it’s benign to fish, and ready to go,” he adds.

    They identified an ideal pattern of “voxels,” or microstructures, that cement could be molded into, in order to fabricate the reefs while creating pockets in which fish could live. This voxel geometry resembles individual egg cartons, stacked end to end, and appears to not affect the structure’s overall wave-dissipating power.

    “These voxels still maintain a big drag while allowing fish to move inside,” Ferrandis says.

    The team is currently fabricating cement voxel structures and assembling them into a lab-scale architected reef, which they will test under various wave conditions. They envision that the voxel design could be modular, and scalable to any desired size, and easy to transport and install in various offshore locations. “Now we’re simulating actual sea patterns, and testing how these models will perform when we eventually have to deploy them,” says Anjali Sinha, a graduate student at MIT who recently joined the group.

    Going forward, the team hopes to work with beach towns in Massachusetts to test the structures on a pilot scale.

    “These test structures would not be small,” Triantafyllou emphasizes. “They would be about a mile long, and about 5 meters tall, and would cost something like 6 million dollars per mile. So it’s not cheap. But it could prevent billions of dollars in storm damage. And with climate change, protecting the coasts will become a big issue.”

    This work was funded, in part, by the U.S. Defense Advanced Research Projects Agency.

  •

    New tool predicts flood risk from hurricanes in a warming climate

    Coastal cities and communities will face more frequent major hurricanes with climate change in the coming years. To help prepare coastal cities against future storms, MIT scientists have developed a method to predict how much flooding a coastal community is likely to experience as hurricanes evolve over the next decades.

    When hurricanes make landfall, strong winds whip up salty ocean waters that generate storm surge in coastal regions. As the storms move over land, torrential rainfall can induce further flooding inland. When multiple flood sources such as storm surge and rainfall interact, they can compound a hurricane’s hazards, leading to significantly more flooding than would result from any one source alone. The new study introduces a physics-based method for predicting how the risk of such complex, compound flooding may evolve under a warming climate in coastal cities.

    One example of compound flooding’s impact is the aftermath from Hurricane Sandy in 2012. The storm made landfall on the East Coast of the United States as heavy winds whipped up a towering storm surge that combined with rainfall-driven flooding in some areas to cause historic and devastating floods across New York and New Jersey.

    In their study, the MIT team applied the new compound flood-modeling method to New York City to predict how climate change may influence the risk of compound flooding from Sandy-like hurricanes over the next decades.  

    They found that, in today’s climate, a Sandy-level compound flooding event will likely hit New York City every 150 years. By midcentury, a warmer climate will drive up the frequency of such flooding, to every 60 years. At the end of the century, destructive Sandy-like floods will deluge the city every 30 years — a fivefold increase compared to the present climate.
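    Return periods like these can be restated as exceedance probabilities over a fixed planning horizon, assuming one independent chance of the event per year. This is a standard back-of-envelope conversion, not the study's methodology; the 150-, 60-, and 30-year figures come from the article.

```python
def prob_at_least_one(return_period_years, horizon_years):
    """Chance of at least one event in the horizon, given its return period."""
    annual_p = 1.0 / return_period_years
    return 1.0 - (1.0 - annual_p) ** horizon_years

# Probability of at least one Sandy-level flood within a 30-year span
# (roughly a mortgage lifetime) under each projected climate.
for label, period in [("today", 150), ("midcentury", 60), ("end of century", 30)]:
    p = prob_at_least_one(period, 30)
    print(f"{label}: {p:.0%}")
```

Under these assumptions, the shift from a 150-year to a 30-year return period raises the 30-year odds of a Sandy-level flood from under one in five to roughly two in three.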

    “Long-term average damages from weather hazards are usually dominated by the rare, intense events like Hurricane Sandy,” says study co-author Kerry Emanuel, professor emeritus of atmospheric science at MIT. “It is important to get these right.”

    While these are sobering projections, the researchers hope the flood forecasts can help city planners prepare and protect against future disasters. “Our methodology equips coastal city authorities and policymakers with essential tools to conduct compound flooding risk assessments from hurricanes in coastal cities at a detailed, granular level, extending to each street or building, in both current and future decades,” says study author Ali Sarhadi, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

    The team’s open-access study appears online today in the Bulletin of the American Meteorological Society. Co-authors include Raphaël Rousseau-Rizzi at MIT’s Lorenz Center, Kyle Mandli at Columbia University, Jeffrey Neal at the University of Bristol, Michael Wiper at the Charles III University of Madrid, and Monika Feldmann at the Swiss Federal Institute of Technology Lausanne.

    The seeds of floods

    To forecast a region’s flood risk, weather modelers typically look to the past. Historical records contain measurements of previous hurricanes’ wind speeds, rainfall, and spatial extent, which scientists use to predict where and how much flooding may occur with coming storms. But Sarhadi believes these historical records are too limited and brief to predict future hurricanes’ risks.

    “Even if we had lengthy historical records, they wouldn’t be a good guide for future risks because of climate change,” he says. “Climate change is changing the structural characteristics, frequency, intensity, and movement of hurricanes, and we cannot rely on the past.”

    Sarhadi and his colleagues instead looked to predict a region’s risk of hurricane flooding in a changing climate using a physics-based risk assessment methodology. They first paired simulations of hurricane activity with coupled ocean and atmospheric models over time. With the hurricane simulations, developed originally by Emanuel, the researchers virtually scatter tens of thousands of “seeds” of hurricanes into a simulated climate. Most seeds dissipate, while a few grow into category-level storms, depending on the conditions of the ocean and atmosphere.

    When the team drives these hurricane simulations with climate models of ocean and atmospheric conditions under certain global temperature projections, they can see how hurricanes change, for instance in terms of intensity, frequency, and size, under past, current, and future climate conditions.
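The seeding idea described above can be caricatured in a few lines of code. The sketch below is a toy Monte Carlo with made-up numbers, not Emanuel’s actual downscaling model; it only illustrates that most randomly scattered seeds dissipate, and that a more favorable (for example, warmer) simulated environment lets more of them mature into storms:

```python
import random

def count_surviving_seeds(n_seeds: int, favorability: float, seed: int = 0) -> int:
    """Scatter n_seeds weak proto-storms into a simulated climate.
    Each seed matures only if a random draw falls below the climate's
    'favorability' (a stand-in for ocean heat, wind shear, humidity, etc.)."""
    rng = random.Random(seed)
    return sum(rng.random() < favorability for _ in range(n_seeds))

# Hypothetical favorability values for illustration only.
current = count_surviving_seeds(10_000, favorability=0.01)
warmer = count_surviving_seeds(10_000, favorability=0.015)
print(current, warmer)  # most seeds dissipate in both climates
```

In the real downscaling approach, survival is determined by physics (coupled ocean-atmosphere conditions along each seed’s track), not a single random threshold, but the statistical logic of seeding many candidates and keeping the survivors is the same.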

    The team then sought to precisely predict the level and degree of compound flooding from future hurricanes in coastal cities. The researchers first used rainfall models to simulate rain intensity for a large number of simulated hurricanes, then applied numerical models to hydraulically translate that rainfall intensity into flooding on the ground as hurricanes make landfall, given information about a region such as its surface and topography characteristics. They also simulated the same hurricanes’ storm surges, using hydrodynamic models to translate each hurricane’s maximum wind speed and sea level pressure into surge height in coastal areas. The simulations further assessed the propagation of ocean waters into coastal areas, causing coastal flooding.

    Then, the team developed a numerical hydrodynamic model to predict how the two sources of hurricane-induced flooding, storm surge and rain-driven flooding, would simultaneously interact through time and space as simulated hurricanes make landfall in coastal regions such as New York City, in both current and future climates.

    “There’s a complex, nonlinear hydrodynamic interaction between saltwater surge-driven flooding and freshwater rainfall-driven flooding, that forms compound flooding that a lot of existing methods ignore,” Sarhadi says. “As a result, they underestimate the risk of compound flooding.”

    Amplified risk

    With their flood-forecasting method in place, the team applied it to a specific test case: New York City. They used the multipronged method to predict the city’s risk of compound flooding from hurricanes, and more specifically from Sandy-like hurricanes, in present and future climates. Their simulations showed that the city’s odds of experiencing Sandy-like flooding will increase significantly over the next decades as the climate warms, from once every 150 years in the current climate, to every 60 years by 2050, and every 30 years by 2099.

    Interestingly, they found that much of this increase in risk has less to do with how hurricanes themselves will change in a warming climate than with how sea levels will rise around the world.

    “In future decades, we will experience sea level rise in coastal areas, and we also incorporated that effect into our models to see how much that would increase the risk of compound flooding,” Sarhadi explains. “And in fact, we see sea level rise is playing a major role in amplifying the risk of compound flooding from hurricanes in New York City.”

    The team’s methodology can be applied to any coastal city to assess the risk of compound flooding from hurricanes and extratropical storms. With this approach, Sarhadi hopes decision-makers can make informed decisions regarding the implementation of adaptive measures, such as reinforcing coastal defenses to enhance infrastructure and community resilience.

    “Another aspect highlighting the urgency of our research is the projected 25 percent increase in coastal populations by midcentury, leading to heightened exposure to damaging storms,” Sarhadi says. “Additionally, we have trillions of dollars in assets situated in coastal flood-prone areas, necessitating proactive strategies to reduce damages from compound flooding from hurricanes under a warming climate.”

    This research was supported, in part, by Homesite Insurance.


    Studying rivers from worlds away

    Rivers have flowed on two other worlds in the solar system besides Earth: Mars, where dry tracks and craters are all that’s left of ancient rivers and lakes, and Titan, Saturn’s largest moon, where rivers of liquid methane still flow today.

    A new technique developed by MIT geologists allows scientists to see how intensely rivers used to flow on Mars, and how they currently flow on Titan. The method uses satellite observations to estimate the rate at which rivers move fluid and sediment downstream.

    Applying their new technique, the MIT team calculated how fast and deep rivers were in certain regions on Mars more than 1 billion years ago. They also made similar estimates for currently active rivers on Titan, even though the moon’s thick atmosphere and distance from Earth make it harder to explore, with far fewer available images of its surface than those of Mars.

    “What’s exciting about Titan is that it’s active. With this technique, we have a method to make real predictions for a place where we won’t get more data for a long time,” says Taylor Perron, the Cecil and Ida Green Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “And on Mars, it gives us a time machine, to take the rivers that are dead now and get a sense of what they were like when they were actively flowing.”

    Perron and his colleagues have published their results today in the Proceedings of the National Academy of Sciences. Perron’s MIT co-authors are first author Samuel Birch, Paul Corlies, and Jason Soderblom, with Rose Palermo and Andrew Ashton of the Woods Hole Oceanographic Institution (WHOI), Gary Parker of the University of Illinois at Urbana-Champaign, and collaborators from the University of California at Los Angeles, Yale University, and Cornell University.

    River math

    The team’s study grew out of Perron and Birch’s puzzlement over Titan’s rivers. The images taken by NASA’s Cassini spacecraft have shown a curious lack of fan-shaped deltas at the mouths of most of the moon’s rivers, in contrast to many rivers on Earth. Could it be that Titan’s rivers don’t carry enough flow or sediment to build deltas?

    The group built on the work of co-author Gary Parker, who in the 2000s developed a series of mathematical equations to describe river flow on Earth. Parker had studied measurements of rivers taken directly in the field by others. From these data, he found there were certain universal relationships between a river’s physical dimensions — its width, depth, and slope — and the rate at which it flowed. He drew up equations to describe these relationships mathematically, accounting for other variables such as the gravitational field acting on the river, and the size and density of the sediment being pushed along a river’s bed.

    “This means that rivers with different gravity and materials should follow similar relationships,” Perron says. “That opened up a possibility to apply this to other planets too.”

    Getting a glimpse

    On Earth, geologists can make field measurements of a river’s width, slope, and average sediment size, all of which can be fed into Parker’s equations to accurately predict a river’s flow rate, or how much water and sediment it can move downstream. But for rivers on other planets, measurements are more limited, and largely based on images and elevation measurements collected by remote satellites. For Mars, multiple orbiters have taken high-resolution images of the planet. For Titan, views are few and far between.

    Birch realized that any estimate of river flow on Mars or Titan would have to be based on the few characteristics that can be measured from remote images and topography — namely, a river’s width and slope. With some algebraic tinkering, he adapted Parker’s equations to work only with width and slope inputs. He then assembled data from 491 rivers on Earth, tested the modified equations on these rivers, and found that the predictions based solely on each river’s width and slope were accurate.
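Parker’s adapted equations are more involved than can be reproduced here, but the kind of inversion they perform, from remotely measurable width and slope to discharge, can be sketched with a simpler stand-in. The function below uses Manning’s equation with a wide-channel approximation and two closure assumptions (an assumed width-to-depth ratio and roughness coefficient) that are illustrative, not values from the study. Note also that Manning’s empirical coefficient implicitly bakes in Earth’s gravity, whereas the team’s equations treat gravity explicitly, which is what lets them be applied on Mars and Titan:

```python
def discharge_from_width_slope(width_m: float, slope: float,
                               aspect_ratio: float = 20.0,
                               manning_n: float = 0.035) -> float:
    """Rough discharge estimate (m^3/s) from channel width and slope.

    Uses Manning's equation, Q = (1/n) * A * R^(2/3) * S^(1/2), with a
    wide-channel approximation (hydraulic radius ~ depth, A = width * depth)
    and an assumed width-to-depth ratio as a closure. All parameter values
    here are illustrative defaults, not the study's calibrated equations.
    """
    depth = width_m / aspect_ratio  # closure assumption: depth from width
    return (1.0 / manning_n) * width_m * depth ** (5.0 / 3.0) * slope ** 0.5

# A 100 m wide channel on a gentle 0.001 slope yields a discharge on the
# order of 10^3 m^3/s, comparable to a large terrestrial river.
q = discharge_from_width_slope(100.0, slope=0.001)
```

Wider or steeper channels yield larger estimated discharges, which is the qualitative behavior the adapted equations exploit when only width and slope are measurable from orbit.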

    Then, he applied the equations to Mars, and specifically, to the ancient rivers leading into Gale and Jezero Craters, both of which are thought to have been water-filled lakes billions of years ago. To predict the flow rate of each river, he plugged into the equations Mars’ gravity, and estimates of each river’s width and slope, based on images and elevation measurements taken by orbiting satellites.

    From their predictions of flow rate, the team found that rivers likely flowed for at least 100,000 years at Gale Crater and at least 1 million years at Jezero Crater — long enough to have possibly supported life. They were also able to compare their predictions of the average size of sediment on each river’s bed with actual field measurements of Martian grains near each river, taken by NASA’s Curiosity and Perseverance rovers. These few field measurements allowed the team to check that their equations, applied on Mars, were accurate.

    The team then took their approach to Titan. They zeroed in on two locations where river slopes can be measured, including a river that flows into a lake the size of Lake Ontario. This river appears to form a delta as it feeds into the lake. However, the delta is one of only a few thought to exist on the moon — nearly every viewable river flowing into a lake mysteriously lacks a delta. The team also applied their method to one of these other delta-less rivers.

    They calculated both rivers’ flow and found that it may be comparable to that of some of the biggest rivers on Earth; the delta-building river is estimated to carry a flow as large as the Mississippi’s. Both rivers should move enough sediment to build up deltas. Yet, most rivers on Titan lack the fan-shaped deposits. Something else must be at work to explain this lack of river deposits.

    In another finding, the team calculated that rivers on Titan should be wider and have a gentler slope than rivers carrying the same flow on Earth or Mars. “Titan is the most Earth-like place,” Birch says. “We’ve only gotten a glimpse of it. There’s so much more that we know is down there, and this remote technique is pushing us a little closer.”

    This research was supported, in part, by NASA and the Heising-Simons Foundation.


    Megawatt electrical motor designed by MIT engineers could help electrify aviation

    Aviation’s huge carbon footprint could shrink significantly with electrification. To date, however, only small all-electric planes have gotten off the ground. Their electric motors generate hundreds of kilowatts of power. To electrify larger, heavier jets, such as commercial airliners, megawatt-scale motors are required. These would be propelled by hybrid or turbo-electric propulsion systems where an electrical machine is coupled with a gas turbine aero-engine.

    To meet this need, a team of MIT engineers is now creating a 1-megawatt motor that could be a key stepping stone toward electrifying larger aircraft. The team has designed and tested the major components of the motor, and shown through detailed computations that the coupled components can work as a whole to generate one megawatt of power, at a weight and size competitive with current small aero-engines.

    For all-electric applications, the team envisions the motor could be paired with a source of electricity such as a battery or a fuel cell. The motor could then turn the electrical energy into mechanical work to power a plane’s propellers. The electrical machine could also be paired with a traditional turbofan jet engine to run as a hybrid propulsion system, providing electric propulsion during certain phases of a flight.

    “No matter what we use as an energy carrier — batteries, hydrogen, ammonia, or sustainable aviation fuel — independent of all that, megawatt-class motors will be a key enabler for greening aviation,” says Zoltan Spakovszky, the T. Wilson Professor in Aeronautics and the Director of the Gas Turbine Laboratory (GTL) at MIT, who leads the project.

    Spakovszky and members of his team, along with industry collaborators, will present their work at a special session of the American Institute of Aeronautics and Astronautics – Electric Aircraft Technologies Symposium (EATS) at the Aviation conference in June.

    The MIT team is composed of faculty, students, and research staff from GTL and the MIT Laboratory for Electromagnetic and Electronic Systems: Henry Andersen, Yuankang Chen, Zachary Cordero, David Cuadrado, Edward Greitzer, Charlotte Gump, James Kirtley, Jr., Jeffrey Lang, David Otten, David Perreault, and Mohammad Qasim, along with Marc Amato of Innova-Logic LLC. The project is sponsored by Mitsubishi Heavy Industries (MHI).

    Heavy stuff

    To prevent the worst impacts from human-induced climate change, scientists have determined that global emissions of carbon dioxide must reach net zero by 2050. Meeting this target for aviation, Spakovszky says, will require “step-change achievements” in the design of unconventional aircraft, smart and flexible fuel systems, advanced materials, and safe and efficient electrified propulsion. Multiple aerospace companies are focused on electrified propulsion and the design of megawatt-scale electric machines that are powerful and light enough to propel passenger aircraft.

    “There is no silver bullet to make this happen, and the devil is in the details,” Spakovszky says. “This is hard engineering, in terms of co-optimizing individual components and making them compatible with each other while maximizing overall performance. To do this means we have to push the boundaries in materials, manufacturing, thermal management, structures and rotordynamics, and power electronics.”

    Broadly speaking, an electric motor uses electromagnetic force to generate motion. Electric motors, such as those that power the fan in your laptop, use electrical energy — from a battery or power supply — to generate a magnetic field, typically through copper coils. In response, a magnet set near the coils spins in the direction of the generated field and can drive a fan or propeller.

    Electric machines have been around for more than 150 years, and one principle has held throughout: the bigger the appliance or vehicle, the larger the copper coils and the magnetic rotor, and the heavier the machine. The more power an electrical machine generates, the more heat it produces, requiring additional elements to keep the components cool. All of this takes up space and adds significant weight to the system, making it challenging for airplane applications.

    “Heavy stuff doesn’t go on airplanes,” Spakovszky says. “So we had to come up with a compact, lightweight, and powerful architecture.”

    Good trajectory

    As designed, the MIT electric motor and power electronics are each about the size of a checked suitcase and weigh less than an adult passenger.

    The motor’s main components are: a high-speed rotor, lined with an array of magnets with varying orientation of polarity; a compact low-loss stator that fits inside the rotor and contains an intricate array of copper windings; an advanced heat exchanger that keeps the components cool while transmitting the torque of the machine; and a distributed power electronics system, made from 30 custom-built circuit boards, that precisely changes the currents running through each of the stator’s copper windings at high frequency.

    “I believe this is the first truly co-optimized integrated design,” Spakovszky says. “Which means we did a very extensive design space exploration where all considerations from thermal management, to rotor dynamics, to power electronics and electrical machine architecture were assessed in an integrated way to find out what is the best possible combination to get the required specific power at one megawatt.”

    As a whole system, the motor is designed such that the distributed circuit boards are close coupled with the electrical machine to minimize transmission loss and to allow effective air cooling through the integrated heat exchanger.

    “This is a high-speed machine, and to keep it rotating while creating torque, the magnetic fields have to be traveling very quickly, which we can do through our circuit boards switching at high frequency,” Spakovszky says.

    To mitigate risk, the team has built and tested each of the major components individually, and shown that they can operate as designed and at conditions exceeding normal operational demands. The researchers plan to assemble the first fully working electric motor, and start testing it in the fall.

    “The electrification of aircraft has been on a steady rise,” says Phillip Ansell, director of the Center for Sustainable Aviation at the University of Illinois Urbana-Champaign, who was not involved in the project. “This group’s design uses a wonderful combination of conventional and cutting-edge methods for electric machine development, allowing it to offer both robustness and efficiency to meet the practical needs of aircraft of the future.”

    Once the MIT team can demonstrate the electric motor as a whole, they say the design could power regional aircraft and could also be a companion to conventional jet engines, to enable hybrid-electric propulsion systems. The team also envisions that multiple one-megawatt motors could power multiple fans distributed along the wing on future aircraft configurations. Looking ahead, the foundations of the one-megawatt electrical machine design could potentially be scaled up to multi-megawatt motors, to power larger passenger planes.

    “I think we’re on a good trajectory,” says Spakovszky, whose group and research have focused on more than just gas turbines. “We are not electrical engineers by training, but addressing the 2050 climate grand challenge is of utmost importance; working with electrical engineering faculty, staff and students for this goal can draw on MIT’s breadth of technologies so the whole is greater than the sum of the parts. So we are reinventing ourselves in new areas. And MIT gives you the opportunity to do that.” More


    River erosion can shape fish evolution, study suggests

    If we could rewind the tape of species evolution around the world and play it forward over hundreds of millions of years to the present day, we would see biodiversity clustering around regions of tectonic turmoil. Tectonically active regions such as the Himalayan and Andean mountains are especially rich in flora and fauna due to their shifting landscapes, which act to divide and diversify species over time.

    But biodiversity can also flourish in some geologically quieter regions, where tectonics hasn’t shaken up the land for millennia. The Appalachian Mountains are a prime example: The range has not seen much tectonic activity in hundreds of millions of years, and yet the region is a notable hotspot of freshwater biodiversity.

    Now, an MIT study identifies a geological process that may shape the diversity of species in tectonically inactive regions. In a paper appearing today in Science, the researchers report that river erosion can be a driver of biodiversity in these older, quieter environments.

    They make their case in the southern Appalachians, and specifically the Tennessee River Basin, a region known for its huge diversity of freshwater fishes. The team found that as rivers eroded through different rock types in the region, the changing landscape pushed a species of fish known as the greenfin darter into different tributaries of the river network. Over time, these separated populations developed into their own distinct lineages.

    The team speculates that erosion likely drove the greenfin darter to diversify. Although the separated populations appear outwardly similar, with the greenfin darter’s characteristic green-tinged fins, they differ substantially in their genetic makeup. For now, the separated populations are classified as one single species. 

    “Give this process of erosion more time, and I think these separate lineages will become different species,” says Maya Stokes PhD ’21, who carried out part of the work as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    The greenfin darter may not be the only species to diversify as a consequence of river erosion. The researchers suspect that erosion may have driven many other species to diversify throughout the basin, and possibly other tectonically inactive regions around the world.

    “If we can understand the geologic factors that contribute to biodiversity, we can do a better job of conserving it,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric, and Planetary Sciences at MIT.

    The study’s co-authors include collaborators at Yale University, Colorado State University, the University of Tennessee, the University of Massachusetts at Amherst, and the Tennessee Valley Authority (TVA). Stokes is currently an assistant professor at Florida State University.

    Fish in trees

    The new study grew out of Stokes’ PhD work at MIT, where she and Perron were exploring connections between geomorphology (the study of how landscapes evolve) and biology. They came across work at Yale by Thomas Near, who studies lineages of North American freshwater fishes. Near uses DNA sequence data collected from freshwater fishes across various regions of North America to show how and when certain species evolved and diverged in relation to each other.

    Near brought a curious observation to the team: a habitat distribution map of the greenfin darter showing that the fish was found in the Tennessee River Basin — but only in the southern half. What’s more, Near had mitochondrial DNA sequence data showing that the fish’s populations appeared to be different in their genetic makeup depending on the tributary in which they were found.

    To investigate the reasons for this pattern, Stokes gathered greenfin darter tissue samples from Near’s extensive collection at Yale, as well as from the field with help from TVA colleagues. She then analyzed DNA sequences from across the entire genome, and compared the genes of each individual fish to every other fish in the dataset. The team then created a phylogenetic tree of the greenfin darter, based on the genetic similarity between fish.

    From this tree, they observed that fish within a tributary were more related to each other than to fish in other tributaries. What’s more, fish within neighboring tributaries were more similar to each other than fish from more distant tributaries.

    “Our question was, could there have been a geological mechanism that, over time, took this single species, and splintered it into different, genetically distinct groups?” Perron says.

    A changing landscape

    Stokes and Perron started to observe a “tight correlation” between greenfin darter habitats and the type of rock where they are found. In particular, much of the southern half of the Tennessee River Basin, where the species abounds, is made of metamorphic rock, whereas the northern half consists of sedimentary rock, where the fish are not found.

    They also observed that the rivers running through metamorphic rock are steeper and narrower, which generally creates more turbulence, a characteristic greenfin darters seem to prefer. The team wondered: Could the distribution of greenfin darter habitat have been shaped by a changing landscape of rock type, as rivers eroded into the land over time?

    To check this idea, the researchers developed a model to simulate how a landscape evolves as rivers erode through various rock types. They fed the model information about the rock types in the Tennessee River Basin today, then ran the simulation back to see how the same region may have looked millions of years ago, when more metamorphic rock was exposed.

    They then ran the model forward and observed how the exposure of metamorphic rock shrank over time. They took special note of where and when connections between tributaries crossed into non-metamorphic rock, blocking fish from passing between those tributaries. They drew up a simple timeline of these blocking events and compared this to the phylogenetic tree of diverging greenfin darters. The two were remarkably similar: The fish seemed to form separate lineages in the same order as when their respective tributaries became separated from the others.
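Landscape evolution models of this kind are commonly built on the stream-power incision law, in which a river erodes its bed at a rate proportional to drainage area and slope, with an erodibility coefficient that depends on the rock type currently exposed. The one-dimensional sketch below (illustrative parameters, not the study’s calibrated model) records when each point along a river profile first erodes down to a buried resistant contact; downstream reaches, with larger drainage areas, expose it first, producing an ordered sequence of exposure events of the kind the team compared against the phylogenetic tree:

```python
import math

def run_profile(nx=200, dx=500.0, years=2e6, dt=500.0,
                k_sed=2e-5, k_meta=5e-6, m=0.5,
                contact_elev=50.0):
    """1-D stream-power sketch: dz/dt = -K * A^m * S (n = 1), where the
    erodibility K is lower once a node has cut down into resistant
    (metamorphic) rock below `contact_elev`. Returns, for each node, the
    time its bed first reaches the contact (math.inf if it never does).
    All numbers are illustrative."""
    z = [200.0 * i / (nx - 1) for i in range(nx)]  # initial profile, outlet at node 0
    area = [((nx - 1 - i) * dx + dx) * 1e3 for i in range(nx)]  # crude area proxy
    exhumed_at = [0.0 if z[i] <= contact_elev else math.inf for i in range(nx)]
    t = 0.0
    while t < years:
        t += dt
        # Sweep from the headwaters down; z[i-1] still holds the previous
        # step's value when node i is updated (explicit scheme).
        for i in range(nx - 1, 0, -1):
            slope = max((z[i] - z[i - 1]) / dx, 0.0)
            k = k_sed if z[i] > contact_elev else k_meta
            z[i] -= dt * k * area[i] ** m * slope
            if z[i] <= contact_elev and exhumed_at[i] == math.inf:
                exhumed_at[i] = t
    return exhumed_at

times = run_profile()
# Exposure times increase upstream: the contact is unveiled in a
# predictable downstream-to-upstream order as the river cuts down.
```

The study’s model operates on a full two-dimensional drainage network and uses mapped rock types, but the underlying logic is the same: erosion through a layered landscape generates a time-ordered sequence of exposure (and tributary-blocking) events that can be compared against the order of genetic divergence.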

    “It means it’s plausible that erosion through different rock layers caused isolation between different populations of the greenfin darter and caused lineages to diversify,” Stokes says.

    “This study is highly compelling because it reveals a much more subtle but powerful mechanism for speciation in passive margins,” says Josh Roering, professor of Earth sciences at the University of Oregon, who was not involved in the study. “Stokes and Perron have revealed some of the intimate connections between aquatic species and geology that may be much more common than we realize.”

    This research was supported, in part, by the mTerra Catalyst Fund and the U.S. National Science Foundation through the AGeS Geochronology Program and the Graduate Research Fellowship Program. While at MIT, Stokes received support through the Martin Fellowship for Sustainability and the Hugh Hampton Young Fellowship.