More stories

  • Study: Rocks from Mars’ Jezero Crater, which likely predate life on Earth, contain signs of water

    In a new study appearing today in the journal AGU Advances, scientists at MIT and NASA report that seven rock samples collected along the “fan front” of Mars’ Jezero Crater contain minerals that are typically formed in water. The findings suggest that the rocks were originally deposited by water, or may have formed in the presence of water.

    The seven samples were collected by NASA’s Perseverance rover in 2022 during its exploration of the crater’s western slope, where some rocks were hypothesized to have formed in what is now a dried-up ancient lake. Members of the Perseverance science team, including MIT scientists, have studied the rover’s images and chemical analyses of the samples, and confirmed that the rocks indeed contain signs of water, and that the crater was likely once a watery, habitable environment.

    Whether the crater was actually inhabited remains unknown. The team found that the presence of organic matter — the starting material for life — cannot be confirmed, at least based on the rover’s measurements. But judging from the rocks’ mineral content, scientists believe the samples are their best chance of finding signs of ancient Martian life once the rocks are returned to Earth for more detailed analysis.

    “These rocks confirm the presence, at least temporarily, of habitable environments on Mars,” says the study’s lead author, Tanja Bosak, professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “What we’ve found is that indeed there was a lot of water activity. For how long, we don’t know, but certainly for long enough to create these big sedimentary deposits.”

    What’s more, some of the collected samples may have originally been deposited in the ancient lake more than 3.5 billion years ago — before even the first signs of life on Earth.

    “These are the oldest rocks that may have been deposited by water, that we’ve ever laid hands or rover arms on,” says co-author Benjamin Weiss, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT. “That’s exciting, because it means these are the most promising rocks that may have preserved fossils, and signatures of life.”

    The study’s MIT co-authors include postdoc Eva Scheller and research scientist Elias Mansbach, along with members of the Perseverance science team.

    At the front

    NASA’s Perseverance rover collected rock samples from two locations seen in this image of Mars’ Jezero Crater: “Wildcat Ridge” (lower left) and “Skinner Ridge” (upper right).

    Credit: NASA/JPL-Caltech/ASU/MSSS


    The new rock samples were collected in 2022 as part of the rover’s Fan Front Campaign — an exploratory phase during which Perseverance traversed Jezero Crater’s western slope, where a fan-like region contains sedimentary, layered rocks. Scientists suspect that this “fan front” is an ancient delta, created by river-borne sediment that settled into a now bone-dry lakebed. If life existed on Mars, scientists believe that it could be preserved in the layers of sediment along the fan front.

    In the end, Perseverance collected seven samples from various locations along the fan front. The rover obtained each sample by drilling into the Martian bedrock and extracting a pencil-sized core, which it then sealed in a tube to one day be retrieved and returned to Earth for detailed analysis.

    Composed of multiple images from NASA’s Perseverance Mars rover, this mosaic shows a rocky outcrop called “Wildcat Ridge,” where the rover extracted two rock cores and abraded a circular patch to investigate the rock’s composition.

    Credit: NASA/JPL-Caltech/ASU/MSSS


    Prior to extracting the cores, the rover took images of the surrounding sediments at each of the seven locations. The science team then processed the imaging data to estimate a sediment’s average grain size and mineral composition. This analysis showed that all seven collected samples likely contain signs of water, suggesting that they were initially deposited by water.

    Specifically, Bosak and her colleagues found evidence of certain minerals in the sediments that are known to precipitate out of water.

    “We found lots of minerals like carbonates, which are what make reefs on Earth,” Bosak says. “And it’s really an ideal material that can preserve fossils of microbial life.”

    Interestingly, the researchers also identified sulfates in some samples that were collected at the base of the fan front. Sulfates are minerals that form in very salty water — another sign that water was present in the crater at one time — though very salty water, Bosak notes, “is not necessarily the best thing for life.” If the entire crater was once filled with very salty water, it would have been difficult for any form of life to thrive. But if only the bottom of the lake were briny, that could be an advantage, at least for preserving any signs of life that may have lived further up, in less salty layers, before dying and drifting down to the bottom.

    “However salty it was, if there were any organics present, it’s like pickling something in salt,” Bosak says. “If there was life that fell into the salty layer, it would be very well-preserved.”

    Fuzzy fingerprints

    But the team emphasizes that organic matter has not been confidently detected by the rover’s instruments. Organic matter can be a sign of life, but it can also be produced by certain geological processes that have nothing to do with living matter. Perseverance’s predecessor, the Curiosity rover, detected organic matter throughout Mars’ Gale Crater, which scientists suspect may have come from asteroids that struck Mars in the past.

    And in a previous campaign, Perseverance detected what appeared to be organic molecules at multiple locations along Jezero Crater’s floor. These observations were taken by the rover’s Scanning Habitable Environments with Raman and Luminescence for Organics and Chemicals (SHERLOC) instrument, which uses ultraviolet light to scan the Martian surface. If organics are present, they can glow, similar to material under a blacklight. The wavelengths at which the material glows act as a sort of fingerprint for the kind of organic molecules that are present.

    In Perseverance’s previous exploration of the crater floor, SHERLOC appeared to pick up signs of organic molecules throughout the region, and later, at some locations along the fan front. But a careful analysis, led by MIT’s Eva Scheller, found that while the particular wavelengths observed could be signs of organic matter, they could just as well be signatures of substances that have nothing to do with organic matter.

    “It turns out that cerium metals incorporated in minerals actually produce very similar signals as the organic matter,” Scheller says. “When investigated, the potential organic signals were strongly correlated with phosphate minerals, which always contain some cerium.”

    Scheller’s work shows that the rover’s measurements cannot be interpreted definitively as organic matter.

    “This is not bad news,” Bosak says. “It just tells us there is not very abundant organic matter. It’s still possible that it’s there. It’s just below the rover’s detection limit.”

    When the collected samples are finally sent back to Earth, Bosak says, laboratory instruments will have more than enough sensitivity to detect any organic matter that might lie within.

    “On Earth, once we have microscopes with nanometer-scale resolution, and various types of instruments that we cannot staff on one rover, then we can actually attempt to look for life,” she says.

    This work was supported, in part, by NASA.

  • Scientists find a human “fingerprint” in the upper troposphere’s increasing ozone

    Ozone can be an agent of good or harm, depending on where you find it in the atmosphere. Way up in the stratosphere, the colorless gas shields the Earth from the sun’s harsh ultraviolet rays. But closer to the ground, ozone is a harmful air pollutant that can trigger chronic health problems including chest pain, difficulty breathing, and impaired lung function.

    And somewhere in between, in the upper troposphere — the layer of the atmosphere just below the stratosphere, where most aircraft cruise — ozone contributes to warming the planet as a potent greenhouse gas.

    There are signs that ozone is continuing to rise in the upper troposphere despite efforts to reduce its sources at the surface in many nations. Now, MIT scientists confirm that much of ozone’s increase in the upper troposphere is likely due to humans.

    In a paper appearing today in the journal Environmental Science and Technology, the team reports that they detected a clear signal of human influence on upper tropospheric ozone trends in a 17-year satellite record starting in 2005.

    “We confirm that there’s a clear and increasing trend in upper tropospheric ozone in the northern midlatitudes due to human beings rather than climate noise,” says study lead author Xinyuan Yu, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    “Now we can do more detective work and try to understand what specific human activities are leading to this ozone trend,” adds co-author Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor in Earth, Atmospheric and Planetary Sciences.

    The study’s MIT authors include Sebastian Eastham and Qindan Zhu, along with Benjamin Santer at the University of California at Los Angeles, Gustavo Correa of Columbia University, Jean-François Lamarque at the National Center for Atmospheric Research, and Jerald Ziemke at NASA Goddard Space Flight Center.

    Ozone’s tangled web

    Understanding ozone’s causes and influences is a challenging exercise. Ozone is not emitted directly, but instead is a product of “precursors” — starting ingredients, such as nitrogen oxides and volatile organic compounds (VOCs), that react in the presence of sunlight to form ozone. These precursors are generated from vehicle exhaust, power plants, chemical solvents, industrial processes, aircraft emissions, and other human activities.

    Whether and how long ozone lingers in the atmosphere depends on a tangle of variables, including the type and extent of human activities in a given area, as well as natural climate variability. For instance, a strong El Niño year could nudge the atmosphere’s circulation in a way that affects ozone’s concentrations, regardless of how much ozone humans are contributing to the atmosphere that year.

    Disentangling the human- versus climate-driven causes of ozone trends, particularly in the upper troposphere, is especially tricky. Complicating matters is the fact that in the lower troposphere — the lowest layer of the atmosphere, closest to ground level — ozone has stopped rising, and has even fallen in some regions at northern midlatitudes in the last few decades. This decrease in lower tropospheric ozone is mainly a result of efforts in North America and Europe to reduce industrial sources of air pollution.

    “Near the surface, ozone has been observed to decrease in some regions, and its variations are more closely linked to human emissions,” Yu notes. “In the upper troposphere, the ozone trends are less well-monitored but seem to decouple from those near the surface, and ozone is more easily influenced by climate variability. So, we don’t know whether and how much of that increase in observed ozone in the upper troposphere is attributed to humans.”

    A human signal amid climate noise

    Yu and Fiore wondered whether a human “fingerprint” in ozone levels, caused directly by human activities, could be strong enough to be detectable in satellite observations in the upper troposphere. To see such a signal, the researchers would first have to know what to look for.

    For this, they looked to simulations of the Earth’s climate and atmospheric chemistry. Following approaches developed in climate science, they reasoned that if they could simulate a number of possible climate variations in recent decades, all with identical human-derived sources of ozone precursor emissions, but each starting from a slightly different climate state, then any differences among these scenarios should be due to climate noise. By inference, any common signal that emerged when averaging over the simulated scenarios should be due to human-driven causes. Such a signal would be a “fingerprint” revealing human-caused ozone, which the team could look for in actual satellite observations. (A toy version of this ensemble-averaging logic is sketched in code at the end of this story.)

    With this strategy in mind, the team ran simulations using a state-of-the-art chemistry climate model. They ran multiple climate scenarios, each starting from the year 1950 and running through 2014.

    From their simulations, the team saw a clear and common signal across scenarios, which they identified as a human fingerprint. They then looked to tropospheric ozone products derived from multiple instruments aboard NASA’s Aura satellite.

    “Quite honestly, I thought the satellite data were just going to be too noisy,” Fiore admits. “I didn’t expect that the pattern would be robust enough.”

    But the satellite observations turned out to be good enough. The team looked through the upper tropospheric ozone data derived from the satellite products, from the years 2005 to 2021, and found that, indeed, they could see the signal of human-caused ozone that their simulations predicted. The signal is especially pronounced over Asia, where industrial activity has risen significantly in recent decades and where abundant sunlight and frequent weather events loft pollution, including ozone and its precursors, to the upper troposphere.

    Yu and Fiore are now looking to identify the specific human activities that are leading to ozone’s increase in the upper troposphere.

    “Where is this increasing trend coming from? Is it the near-surface emissions from combusting fossil fuels in vehicle engines and power plants? Is it the aircraft that are flying in the upper troposphere? Is it the influence of wildland fires? Or some combination of all of the above?” Fiore says. “Being able to separate human-caused impacts from natural climate variations can help to inform strategies to address climate change and air pollution.”

    This research was funded, in part, by NASA.
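    To make the fingerprinting logic concrete, here is a minimal toy sketch in Python. It uses synthetic numbers, not the study’s chemistry-climate model or Aura data: every ensemble member shares the same assumed human-driven trend but carries its own realization of climate noise, so averaging across members isolates the forced signal, which can then be compared against the trend in a mock observed record.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(2005, 2022)   # a 17-year record, as in the study
    n_members = 10                  # runs with identical emissions, different initial states

    # Hypothetical forced signal: a steady human-driven ozone increase (arbitrary units).
    forced = 0.15 * (years - years[0])

    # Each ensemble member = forced signal + its own realization of climate noise.
    ensemble = forced + rng.normal(0.0, 0.8, size=(n_members, years.size))

    # Averaging over members cancels the noise; what survives is the "fingerprint".
    fingerprint = ensemble.mean(axis=0)

    # A mock "observed" record: the same forced signal plus one more noise draw.
    observed = forced + rng.normal(0.0, 0.8, size=years.size)

    # Compare the observed trend with the spread of trends produced by noise alone.
    obs_trend = np.polyfit(years, observed, 1)[0]
    noise_trends = [np.polyfit(years, member - fingerprint, 1)[0] for member in ensemble]
    print(f"observed trend: {obs_trend:+.3f}/yr")
    print(f"noise-only trend spread (1 sigma): {np.std(noise_trends):.3f}/yr")
    ```

    The real analysis works with spatial patterns of ozone change rather than a single trend, but the detection question is the same: does the signal stand out from what internal variability alone can produce?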

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them.

    “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing them with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, and it is expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning (see the sketch below). It uses two machines: One generates data to go into the photo. The other machine judges a sample by comparing it to actual data; if it thinks the image is fake, the first machine has to try again, until it convinces the second machine. The end goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is in handling large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in, and supplementing it with statistics from the historical data, was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started by estimating extreme rainfall amounts, removing the more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. It only takes a few hours to train, and can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decision-makers can weigh what crops should be grown, or where populations should migrate, against a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.
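    Here, as promised above, is a minimal sketch of the adversarial setup in PyTorch. It is a generic super-resolution GAN skeleton with toy shapes and random stand-in data, not the authors’ physics- and statistics-augmented model: one network generates fine-scale detail from a coarse field, while the other judges it against samples standing in for real high-resolution observations.

    ```python
    import torch
    import torch.nn as nn

    # Coarse 8x8 rainfall fields are upsampled to 32x32; the discriminator learns
    # to tell generated fine-scale fields from "real" high-resolution ones.
    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )
        def forward(self, coarse):
            return self.net(coarse)

    class Discriminator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                nn.Flatten(), nn.Linear(32 * 8 * 8, 1),
            )
        def forward(self, fine):
            return self.net(fine)

    G, D = Generator(), Discriminator()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce = nn.BCEWithLogitsLoss()

    for step in range(200):
        coarse = torch.rand(8, 1, 8, 8)        # stand-in coarse model output
        real_fine = torch.rand(8, 1, 32, 32)   # stand-in high-res observations

        # Discriminator: push real fields toward 1, generated fields toward 0.
        fake = G(coarse).detach()
        loss_d = bce(D(real_fine), torch.ones(8, 1)) + bce(D(fake), torch.zeros(8, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator: try to convince the discriminator its output is real.
        loss_g = bce(D(G(coarse)), torch.ones(8, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    ```

    The paper’s key departure from this vanilla setup is what accompanies the coarse field: simplified physics (water vapor and topography) plus historical statistics, which is what keeps the generated extremes physically plausible.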

  • Microscopic defects in ice influence how massive glaciers flow, study shows

    As they seep and calve into the sea, melting glaciers and ice sheets are raising global water levels at unprecedented rates. To predict and prepare for future sea-level rise, scientists need a better understanding of how fast glaciers melt and what influences their flow.

    Now, a study by MIT scientists offers a new picture of glacier flow, based on microscopic deformation in the ice. The results show that a glacier’s flow depends strongly on how microscopic defects move through the ice.

    The researchers found they could estimate a glacier’s flow based on whether the ice is prone to microscopic defects of one kind versus another. They used this relationship between micro- and macro-scale deformation to develop a new model for how glaciers flow. With the new model, they mapped the flow of ice in locations across the Antarctic Ice Sheet.

    Contrary to conventional wisdom, they found, the ice sheet is not a monolith but instead is more varied in where and how it flows in response to warming-driven stresses. The study “dramatically alters the climate conditions under which marine ice sheets may become unstable and drive rapid rates of sea-level rise,” the researchers write in their paper.

    “This study really shows the effect of microscale processes on macroscale behavior,” says Meghana Ranganathan PhD ’22, who led the study as a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) and is now a postdoc at Georgia Tech. “These mechanisms happen at the scale of water molecules and ultimately can affect the stability of the West Antarctic Ice Sheet.”

    “Broadly speaking, glaciers are accelerating, and there are a lot of variants around that,” adds co-author and EAPS Associate Professor Brent Minchew. “This is the first study that takes a step from the laboratory to the ice sheets and starts evaluating what the stability of ice is in the natural environment. That will ultimately feed into our understanding of the probability of catastrophic sea-level rise.”

    Ranganathan and Minchew’s study appears this week in the Proceedings of the National Academy of Sciences.

    Micro flow

    Glacier flow describes the movement of ice from the peak of a glacier, or the center of an ice sheet, down to the edges, where the ice then breaks off and melts into the ocean — a normally slow process that contributes over time to raising the world’s average sea level.

    In recent years, the oceans have risen at unprecedented rates, driven by global warming and the accelerated melting of glaciers and ice sheets. While the loss of polar ice is known to be a major contributor to sea-level rise, it is also the biggest uncertainty when it comes to making predictions.

    “Part of it’s a scaling problem,” Ranganathan explains. “A lot of the fundamental mechanisms that cause ice to flow happen at a really small scale that we can’t see. We wanted to pin down exactly what these microphysical processes are that govern ice flow, which hasn’t been represented in models of sea-level change.”

    The team’s new study builds on previous experiments from the early 2000s by geologists at the University of Minnesota, who studied how small chips of ice deform when physically stressed and compressed. Their work revealed two microscopic mechanisms by which ice can flow: “dislocation creep,” where molecule-sized cracks migrate through the ice, and “grain boundary sliding,” where individual ice crystals slide against each other, causing the boundary between them to move through the ice.

    The geologists found that ice’s sensitivity to stress, or how likely it is to flow, depends on which of the two mechanisms is dominant. Specifically, ice is more sensitive to stress when microscopic defects occur via dislocation creep rather than grain boundary sliding.

    Ranganathan and Minchew realized that those findings at the microscopic level could redefine how ice flows at much larger, glacial scales.

    “Current models for sea-level rise assume a single value for the sensitivity of ice to stress and hold this value constant across an entire ice sheet,” Ranganathan explains. “What these experiments showed was that actually, there’s quite a bit of variability in ice sensitivity, due to which of these mechanisms is at play.”

    A mapping match

    For their new study, the MIT team took insights from the previous experiments and developed a model to estimate an icy region’s sensitivity to stress, which directly relates to how likely that ice is to flow. The model takes in information such as the ambient temperature, the average size of ice crystals, and the estimated mass of ice in the region, and calculates how much the ice is deforming by dislocation creep versus grain boundary sliding. Depending on which of the two mechanisms is dominant, the model then estimates the region’s sensitivity to stress. (A schematic version of this calculation appears in the code sketch at the end of this story.)

    The scientists fed into the model actual observations from various locations across the Antarctic Ice Sheet, where others had previously recorded data such as the local height of ice, the size of ice crystals, and the ambient temperature. Based on the model’s estimates, the team generated a map of ice sensitivity to stress across the Antarctic Ice Sheet. When they compared this map to satellite and field measurements taken of the ice sheet over time, they observed a close match, suggesting that the model could be used to accurately predict how glaciers and ice sheets will flow in the future.

    “As climate change starts to thin glaciers, that could affect the sensitivity of ice to stress,” Ranganathan says. “The instabilities that we expect in Antarctica could be very different, and we can now capture those differences, using this model.”
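    As referenced above, the shape of that calculation can be sketched in a few lines of Python. The flow-law form and stress exponents below are loosely patterned on published ice deformation laws (dislocation creep with an exponent near 4, grain boundary sliding near 1.8 with grain-size dependence), while the prefactors and activation energies are illustrative placeholders rather than the study’s calibrated values.

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    def strain_rate(stress, grain_size, temp, A, n, p, Q):
        """Generic flow law: rate = A * stress^n * grain_size^-p * exp(-Q/RT)."""
        return A * stress**n * grain_size**(-p) * np.exp(-Q / (R * temp))

    def stress_sensitivity(stress, grain_size, temp):
        # Contribution from each mechanism (placeholder constants, not calibrated).
        disl = strain_rate(stress, grain_size, temp, A=1e-16, n=4.0, p=0.0, Q=60e3)
        gbs = strain_rate(stress, grain_size, temp, A=1e-9, n=1.8, p=1.4, Q=49e3)

        # For additive mechanisms, the effective stress exponent,
        # d(log rate)/d(log stress), is the rate-weighted blend of the two.
        n_eff = (4.0 * disl + 1.8 * gbs) / (disl + gbs)
        dominant = "dislocation creep" if disl > gbs else "grain boundary sliding"
        return n_eff, dominant

    # Example inputs: stress in Pa, grain size in m, temperature in K.
    n_eff, mechanism = stress_sensitivity(stress=1e5, grain_size=1e-3, temp=258.0)
    print(f"effective stress exponent: {n_eff:.2f} (dominated by {mechanism})")
    ```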

  • Study: Heavy snowfall and rain may contribute to some earthquakes

    When scientists look for an earthquake’s cause, their search often starts underground. As centuries of seismic studies have made clear, it’s the collision of tectonic plates and the movement of subsurface faults and fissures that primarily trigger a temblor.

    But MIT scientists have now found that certain weather events may also play a role in setting off some quakes.

    In a study appearing today in Science Advances, the researchers report that episodes of heavy snowfall and rain likely contributed to a swarm of earthquakes over the past several years in northern Japan. The study is the first to show that climate conditions could initiate some quakes.

    “We see that snowfall and other environmental loading at the surface impacts the stress state underground, and the timing of intense precipitation events is well-correlated with the start of this earthquake swarm,” says study author William Frank, an assistant professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So, climate obviously has an impact on the response of the solid earth, and part of that response is earthquakes.”

    The new study focuses on a series of ongoing earthquakes in Japan’s Noto Peninsula. The team discovered that seismic activity in the region is surprisingly synchronized with certain changes in underground pressure, and that those changes are influenced by seasonal patterns of snowfall and precipitation. The scientists suspect that this new connection between quakes and climate may not be unique to Japan and could play a role in shaking up other parts of the world.

    Looking to the future, they predict that the climate’s influence on earthquakes could become more pronounced with global warming.

    “If we’re going into a climate that’s changing, with more extreme precipitation events, and we expect a redistribution of water in the atmosphere, oceans, and continents, that will change how the Earth’s crust is loaded,” Frank adds. “That will have an impact for sure, and it’s a link we could further explore.”

    The study’s lead author is former MIT research associate Qing-Yu Wang (now at Grenoble Alpes University), and the team also includes EAPS postdoc Xin Cui, Yang Lu of the University of Vienna, Takashi Hirose of Tohoku University, and Kazushige Obara of the University of Tokyo.

    Seismic speed

    Since late 2020, hundreds of small earthquakes have shaken up Japan’s Noto Peninsula — a finger of land that curves north from the country’s main island into the Sea of Japan. Unlike a typical earthquake sequence, which begins with a main shock that gives way to a series of aftershocks before dying out, Noto’s seismic activity is an “earthquake swarm” — a pattern of multiple, ongoing quakes with no obvious main shock, or seismic trigger.

    The MIT team, along with their colleagues in Japan, aimed to spot any patterns in the swarm that would explain the persistent quakes. They started by looking through the Japan Meteorological Agency’s earthquake catalog, which provides data on seismic activity throughout the country over time. They focused on quakes in the Noto Peninsula over the last 11 years, during which the region has experienced episodic earthquake activity, including the most recent swarm.

    With seismic data from the catalog, the team counted the number of seismic events that occurred in the region over time. They found that prior to 2020 the timing of quakes appeared sporadic and unrelated; starting in late 2020, earthquakes grew more intense, clustered in time, and correlated with one another, signaling the start of the swarm.

    The scientists then looked to a second dataset of seismic measurements taken by monitoring stations over the same 11-year period. Each station continuously records any displacement, or local shaking, that occurs. The shaking from one station to another can give scientists an idea of how fast a seismic wave travels between stations. This “seismic velocity” is related to the structure of the Earth through which the seismic wave is traveling. Wang used the station measurements to calculate the seismic velocity between every station in and around Noto over the last 11 years.

    The researchers generated an evolving picture of seismic velocity beneath the Noto Peninsula and observed a surprising pattern: In 2020, around when the earthquake swarm is thought to have begun, changes in seismic velocity appeared to be synchronized with the seasons.

    “We then had to explain why we were observing this seasonal variation,” Frank says.

    Snow pressure

    The team wondered whether environmental changes from season to season could influence the underlying structure of the Earth in a way that would set off an earthquake swarm. Specifically, they looked at how seasonal precipitation would affect the underground “pore fluid pressure” — the amount of pressure that fluids in the Earth’s cracks and fissures exert within the bedrock.

    “When it rains or snows, that adds weight, which increases pore pressure, which allows seismic waves to travel through slower,” Frank explains. “When all that weight is removed, through evaporation or runoff, all of a sudden, that pore pressure decreases and seismic waves are faster.”

    Wang and Cui developed a hydromechanical model of the Noto Peninsula to simulate the underlying pore pressure over the last 11 years in response to seasonal changes in precipitation. They fed into the model meteorological data from this same period, including measurements of daily snow, rainfall, and sea-level changes. From their model, they were able to track changes in excess pore pressure beneath the Noto Peninsula, before and during the earthquake swarm. They then compared this timeline of evolving pore pressure with their evolving picture of seismic velocity. (A toy version of this comparison is sketched in code at the end of this story.)

    “We had seismic velocity observations, and we had the model of excess pore pressure, and when we overlapped them, we saw they just fit extremely well,” Frank says.

    In particular, they found that when they included snowfall data, and especially extreme snowfall events, the fit between the model and observations was stronger than if they considered only rainfall and other events. In other words, the ongoing earthquake swarm that Noto residents have been experiencing can be explained in part by seasonal precipitation, and particularly by heavy snowfall events.

    “We can see that the timing of these earthquakes lines up extremely well with multiple times where we see intense snowfall,” Frank says. “It’s well-correlated with earthquake activity. And we think there’s a physical link between the two.”

    The researchers suspect that heavy snowfall and similar extreme precipitation could play a role in earthquakes elsewhere, though they emphasize that the primary trigger will always originate underground.

    “When we first want to understand how earthquakes work, we look to plate tectonics, because that is and will always be the number one reason why an earthquake happens,” Frank says. “But what are the other things that could affect when and how an earthquake happens? That’s when you start to go to second-order controlling factors, and the climate is obviously one of those.”

    This research was supported, in part, by the National Science Foundation.
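    A toy version of that comparison, as flagged above, takes only a few lines of Python. Everything here is synthetic: an assumed seasonal precipitation cycle with rare heavy-snowfall spikes, a simple exponential-memory stand-in for the hydromechanical pore-pressure response, and a made-up velocity sensitivity. It illustrates the kind of anti-correlated pairing between pore pressure and seismic velocity that the team looked for.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    days = np.arange(11 * 365)  # an 11-year daily record, as in the study

    # Synthetic seasonal precipitation plus occasional heavy-snowfall spikes.
    precip = np.clip(np.sin(2 * np.pi * days / 365), 0, None)
    precip += 2.0 * (rng.random(days.size) > 0.995)  # rare extreme events

    # Pore pressure responds to the surface load with exponential (diffusive) memory.
    tau = 30.0  # assumed relaxation time, days
    kernel = np.exp(-np.arange(5 * int(tau)) / tau)
    pore_pressure = np.convolve(precip, kernel, mode="full")[: days.size]

    # Seismic velocity drops as pore pressure rises, plus measurement noise.
    dv_v = -1e-3 * pore_pressure + rng.normal(0, 5e-4, days.size)

    # A strong anti-correlation is the toy analog of the "extremely good fit"
    # between the modeled pore pressure and the observed velocity changes.
    r = np.corrcoef(pore_pressure, dv_v)[0, 1]
    print(f"correlation(pore pressure, dv/v) = {r:.2f}")
    ```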

  • How light can vaporize water without the need for heat

    It’s the most fundamental of processes — the evaporation of water from the surfaces of oceans and lakes, the burning off of fog in the morning sun, and the drying of briny ponds that leaves solid salt behind. Evaporation is all around us, and humans have been observing it and making use of it for as long as we have existed.

    And yet, it turns out, we’ve been missing a major part of the picture all along.

    In a series of painstakingly precise experiments, a team of researchers at MIT has demonstrated that heat isn’t alone in causing water to evaporate. Light, striking the water’s surface where air and water meet, can break water molecules away and float them into the air, causing evaporation in the absence of any source of heat.

    The astonishing new discovery could have a wide range of significant implications. It could help explain mysterious measurements over the years of how sunlight affects clouds, and therefore affect calculations of the effects of climate change on cloud cover and precipitation. It could also lead to new ways of designing industrial processes such as solar-powered desalination or drying of materials.

    The findings, and the many different lines of evidence that demonstrate the reality of the phenomenon and the details of how it works, are described today in the journal PNAS, in a paper by Carl Richard Soderberg Professor of Power Engineering Gang Chen, postdocs Guangxin Lv and Yaodong Tu, and graduate student James Zhang.

    The authors say their study suggests that the effect should happen widely in nature — everywhere from clouds to fogs to the surfaces of oceans, soils, and plants — and that it could also lead to new practical applications, including in energy and clean water production. “I think this has a lot of applications,” Chen says. “We’re exploring all these different directions. And of course, it also affects the basic science, like the effects of clouds on climate, because clouds are the most uncertain aspect of climate models.”

    A newfound phenomenon

    The new work builds on research reported last year, which described this new “photomolecular effect” but only under very specialized conditions: on the surface of specially prepared hydrogels soaked with water. In the new study, the researchers demonstrate that the hydrogel is not necessary for the process; it occurs at any water surface exposed to light, whether it’s a flat surface like a body of water or a curved surface like a droplet of cloud vapor.

    Because the effect was so unexpected, the team worked to prove its existence with as many different lines of evidence as possible. In this study, they report 14 different kinds of tests and measurements they carried out to establish that water was indeed evaporating — that is, molecules of water were being knocked loose from the water’s surface and wafted into the air — due to the light alone, not by heat, which was long assumed to be the only mechanism involved.

    One key indicator, which showed up consistently in four different kinds of experiments under different conditions, was that as the water began to evaporate from a test container under visible light, the air temperature measured above the water’s surface cooled down and then leveled off, showing that thermal energy was not the driving force behind the effect.

    Other key indicators included the way the evaporation effect varied with the angle of the light, the exact color of the light, and its polarization. None of these variations should occur, since water hardly absorbs light at all at these wavelengths — and yet the researchers observed them.

    The effect is strongest when light hits the water surface at an angle of 45 degrees. It is also strongest with a certain type of polarization, called transverse magnetic polarization. And it peaks in green light — which, oddly, is the color for which water is most transparent and thus interacts the least.

    Chen and his co-researchers have proposed a physical mechanism that can explain the angle and polarization dependence of the effect, showing that the photons of light can impart a net force on water molecules at the water surface that is sufficient to knock them loose from the body of water. But they cannot yet account for the color dependence, which they say will require further study.

    They have named this the photomolecular effect, by analogy with the photoelectric effect that was discovered by Heinrich Hertz in 1887 and finally explained by Albert Einstein in 1905. That effect was one of the first demonstrations that light also has particle characteristics, which had major implications in physics and led to a wide variety of applications, including LEDs. Just as the photoelectric effect liberates electrons from atoms in a material in response to being hit by a photon of light, the photomolecular effect shows that photons can liberate entire molecules from a liquid surface, the researchers say.

    “The finding of evaporation caused by light instead of heat provides new disruptive knowledge of light-water interaction,” says Xiulin Ruan, professor of mechanical engineering at Purdue University, who was not involved in the study. “It could help us gain new understanding of how sunlight interacts with cloud, fog, oceans, and other natural water bodies to affect weather and climate. It has significant potential practical applications such as high-performance water desalination driven by solar energy. This research is among the rare group of truly revolutionary discoveries which are not widely accepted by the community right away but take time, sometimes a long time, to be confirmed.”

    Solving a cloud conundrum

    The finding may solve an 80-year-old mystery in climate science. Measurements of how clouds absorb sunlight have often shown that they are absorbing more sunlight than conventional physics dictates possible. The additional evaporation caused by this effect could account for the longstanding discrepancy, which has been a subject of dispute since such measurements are difficult to make.

    “Those experiments are based on satellite data and flight data,” Chen explains. “They fly an airplane on top of and below the clouds, and there are also data based on the ocean temperature and radiation balance. And they all conclude that there is more absorption by clouds than theory could calculate. However, due to the complexity of clouds and the difficulties of making such measurements, researchers have been debating whether such discrepancies are real or not. And what we discovered suggests that hey, there’s another mechanism for cloud absorption, which was not accounted for, and this mechanism might explain the discrepancies.”

    Chen says he recently spoke about the phenomenon at an American Physical Society conference, and one physicist there who studies clouds and climate said they had never thought about this possibility, which could affect calculations of the complex effects of clouds on climate. The team conducted experiments using LEDs shining on an artificial cloud chamber, and they observed heating of the fog, which was not supposed to happen since water does not absorb in the visible spectrum. “Such heating can be explained based on the photomolecular effect more easily,” he says.

    Lv says that of the many lines of evidence, “the flat region in the air-side temperature distribution above hot water will be the easiest for people to reproduce.” That temperature profile “is a signature” that demonstrates the effect clearly, he says.

    Zhang adds: “It is quite hard to explain how this kind of flat temperature profile comes about without invoking some other mechanism” beyond the accepted theories of thermal evaporation. “It ties together what a whole lot of people are reporting in their solar desalination devices,” which again show evaporation rates that cannot be explained by the thermal input.

    The effect can be substantial. Under the optimum conditions of color, angle, and polarization, Lv says, “the evaporation rate is four times the thermal limit.”

    Already, since publication of the first paper, the team has been approached by companies that hope to harness the effect, Chen says, including for evaporating syrup and drying paper in a paper mill. The likeliest first applications will come in the areas of solar desalination systems or other industrial drying processes, he says. “Drying consumes 20 percent of all industrial energy usage,” he points out.

    Because the effect is so new and unexpected, Chen says, “This phenomenon should be very general, and our experiment is really just the beginning.” The experiments needed to demonstrate and quantify the effect are very time-consuming. “There are many variables, from understanding water itself, to extending to other materials, other liquids and even solids,” he says.

    “The observations in the manuscript points to a new physical mechanism that foundationally alters our thinking on the kinetics of evaporation,” says Shannon Yee, an associate professor of mechanical engineering at Georgia Tech, who was not associated with this work. He adds, “Who would have thought that we are still learning about something as quotidian as water evaporating?”

    “I think this work is very significant scientifically because it presents a new mechanism,” says University of Alberta Distinguished Professor Janet A.W. Elliott, who also was not associated with this work. “It may also turn out to be practically important for technology and our understanding of nature, because evaporation of water is ubiquitous and the effect appears to deliver significantly higher evaporation rates than the known thermal mechanism. …  My overall impression is this work is outstanding. It appears to be carefully done with many precise experiments lending support for one another.”

    The work was partly supported by an MIT Bose Award.

  • MIT-derived algorithm helps forecast the frequency of extreme weather

    To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston.

    To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

    “If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

    Sapsis and his colleagues have now developed a method to “correct” the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “nudges” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme.


    This animation shows the evolution of storms around the northern hemisphere, as a result of a high-resolution storm model, combined with the MIT team’s corrected global climate model. The simulation improves the modeling of extreme values for wind, temperature, and humidity, which typically have significant errors in coarse scale models. Credit: Courtesy of Ruby Leung and Shixuan Zhang, PNNL

    Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

    “Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” Sapsis says. “If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

    The team’s results appear today in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

    Over the hood

    Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100 kilometers or so.

    “It’s a very heavy computation requiring supercomputers,” Sapsis notes. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometer or less.”

    To improve the resolution of these coarse climate models, scientists typically have gone under the hood to try and fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

    “People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” Sapsis explains. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

    The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation toward something that more closely represents real-world conditions. The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.

    “What we’re doing is trying to correct dynamics, as in what an extreme weather feature, such as the wind speeds during a Hurricane Sandy event, will look like in the coarse model versus in reality,” Sapsis says. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”
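    A minimal sketch of what such an output-correction scheme might look like, assuming a small feed-forward network and synthetic data in place of E3SM output and observations: the network learns the increment between simulated and observed states, and new simulations are then nudged by that learned correction. The team’s actual scheme is built to capture dynamics; this toy regression only shows the nudging mechanics.

    ```python
    import torch
    import torch.nn as nn

    n_features = 3   # e.g., temperature, humidity, wind speed per grid cell
    n_cells = 64     # stand-in for a flattened model grid

    # Small network that outputs a correction (increment) for each state vector.
    corrector = nn.Sequential(
        nn.Linear(n_features * n_cells, 256), nn.ReLU(),
        nn.Linear(256, n_features * n_cells),
    )
    opt = torch.optim.Adam(corrector.parameters(), lr=1e-3)

    # Synthetic training pairs: model output vs. "observed" target states,
    # related here by an artificial systematic bias.
    model_out = torch.randn(512, n_features * n_cells)
    observed = model_out + 0.3 * torch.tanh(model_out)

    for epoch in range(100):
        # Learn the increment that maps simulated states onto observed ones.
        pred = model_out + corrector(model_out)
        loss = nn.functional.mse_loss(pred, observed)
        opt.zero_grad(); loss.backward(); opt.step()

    # At run time, every new simulated state gets nudged by the learned correction.
    new_simulation = torch.randn(1, n_features * n_cells)
    corrected = new_simulation + corrector(new_simulation)
    ```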

    Climate correction

    As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the U.S. Department of Energy that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learned dynamical associations between the measured weather features and the E3SM model. They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which were not used for training.

    “We’re not talking about huge differences in absolute terms,” Sapsis says. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

    When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

    “We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” Sapsis says. “Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analyzing future climate scenarios.”

    “The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study. “It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

    This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.

  • A new way to quantify climate change impacts: “Outdoor days”

    For most people, reading about the difference between a global average temperature rise of 1.5 °C versus 2 °C doesn’t conjure up a clear image of how their daily lives will actually be affected. So, researchers at MIT have come up with a different way of measuring and describing what global climate change patterns, in specific regions around the world, will mean for people’s daily activities and their quality of life.

    The new measure, called “outdoor days,” describes the number of days per year that outdoor temperatures are neither too hot nor too cold for people to go about normal outdoor activities, whether work or leisure, in reasonable comfort. Describing the impact of rising temperatures in those terms reveals some significant global disparities, the researchers say.
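    The measure itself is simple enough to state in a few lines of code. Below is a minimal illustration with synthetic temperatures; the 10 to 25 C band is just an example default, since the whole point of the team’s tool is that users choose their own thresholds.

    ```python
    import numpy as np

    def outdoor_days(daily_temps_c, t_low=10.0, t_high=25.0):
        """Count days whose temperature falls inside a user-chosen comfort band.

        The thresholds are deliberately user-defined, mirroring the website's
        approach; 10-25 C is an example default, not the study's definition.
        """
        temps = np.asarray(daily_temps_c)
        return int(np.sum((temps >= t_low) & (temps <= t_high)))

    # Example: a synthetic year of daily temperatures for one location.
    rng = np.random.default_rng(0)
    year = 15 + 12 * np.sin(2 * np.pi * np.arange(365) / 365) + rng.normal(0, 3, 365)
    print(outdoor_days(year), "outdoor days under the 10-25 C definition")
    ```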

    The findings are described in a research paper written by MIT professor of civil and environmental engineering Elfatih Eltahir and postdocs Yeon-Woo Choi and Muhammad Khalifa, and published in the Journal of Climate.

    Eltahir says he got the idea for this new system during his hourlong daily walks in the Boston area. “That’s how I interface with the temperature every day,” he says. He found that there have been more winter days recently when he could walk comfortably than in past years. Originally from Sudan, he says that when he returned there for visits, the opposite was the case: In winter, the weather tends to be relatively comfortable, but the number of these clement winter days has been declining. “There are fewer days that are really suitable for outdoor activity,” Eltahir says.

    Rather than predefine what constitutes an acceptable outdoor day, Eltahir and his co-authors created a website where users can set their own definition of the highest and lowest temperatures they consider comfortable for their outside activities, then click on a country within a world map, or a state within the U.S., and get a forecast of how the number of days meeting those criteria will change between now and the end of this century. The website is freely available for anyone to use.

    “This is actually a new feature that’s quite innovative,” he says. “We don’t tell people what an outdoor day should be; we let the user define an outdoor day. Hence, we invite them to participate in defining how future climate change will impact their quality of life, and hopefully, this will facilitate deeper understanding of how climate change will impact individuals directly.”

    After deciding that this was a way of looking at the issue of climate change that might be useful, Eltahir says, “we started looking at the data on this, and we made several discoveries that I think are pretty significant.”

    First of all, there will be winners and losers, and the losers tend to be concentrated in the global south. “In the North, in a place like Russia or Canada, you gain a significant number of outdoor days. And when you go south to places like Bangladesh or Sudan, it’s bad news. You get significantly fewer outdoor days. It is very striking.”

    To derive the data, the software developed by the team uses all of the available climate models, about 50 of them, and provides output showing all of those projections on a single graph to make clear the range of possibilities, as well as the average forecast.

    When we think of climate change, Eltahir says, we tend to look at maps that show that virtually everywhere, temperatures will rise. “But if you think in terms of outdoor days, you see that the world is not flat. The North is gaining; the South is losing.”

    While North-South disparity in exposure and vulnerability has been broadly recognized in the past, he says, this way of quantifying the effects on the hazard itself (the change in weather patterns) brings home how unevenly the risks of climate change will fall on quality of life. “When you look at places like Bangladesh, Colombia, Ivory Coast, Sudan, Indonesia — they are all losing outdoor days.”

    The same kind of disparity shows up in Europe, he says. The effects are already being felt, and are showing up in travel patterns: “There is a shift to people spending time in northern European states. They go to Sweden and places like that instead of the Mediterranean, which is showing a significant drop,” he says.

    Placing this kind of detailed and localized information at people’s fingertips, he says, “I think brings the issue of communication of climate change to a different level.” With this tool, instead of looking at global averages, “we are saying according to your own definition of what a pleasant day is, [this is] how climate change is going to impact you, your activities.”

    And, he adds, “hopefully that will help society make decisions about what to do with this global challenge.”

    The project received support from the MIT Climate Grand Challenges project “Jameel Observatory – Climate Resilience Early Warning System Network,” as well as from the Abdul Latif Jameel Water and Food Systems Lab.