More stories

  • Scientists build new atlas of ocean’s oxygen-starved waters

    Life is teeming nearly everywhere in the oceans, except in certain pockets where oxygen naturally plummets and waters become unlivable for most aerobic organisms. These desolate pools are “oxygen-deficient zones,” or ODZs. And though they make up less than 1 percent of the ocean’s total volume, they are a significant source of nitrous oxide, a potent greenhouse gas. Their boundaries can also limit the extent of fisheries and marine ecosystems.

    Now MIT scientists have generated the most detailed, three-dimensional “atlas” of the largest ODZs in the world. The new atlas provides high-resolution maps of the two major, oxygen-starved bodies of water in the tropical Pacific. These maps reveal the volume, extent, and varying depths of each ODZ, along with fine-scale features, such as ribbons of oxygenated water that intrude into otherwise depleted zones.

    The team used a new method to process over 40 years’ worth of ocean data, comprising nearly 15 million measurements taken by many research cruises and autonomous robots deployed across the tropical Pacific. The researchers compiled and then analyzed this vast and fine-grained data to generate maps of oxygen-deficient zones at various depths, similar to the many slices of a three-dimensional scan.

    From these maps, the researchers estimated the total volume of the two major ODZs in the tropical Pacific, more precisely than previous efforts. The first zone, which stretches out from the coast of South America, measures about 600,000 cubic kilometers — roughly the volume of water that would fill 240 billion Olympic-sized pools. The second zone, off the coast of Central America, is roughly three times larger.
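    The pool comparison is easy to check, assuming the commonly cited figure of roughly 2,500 cubic meters for an Olympic-sized pool (the pool volume is an assumption here, not a number from the study):

```python
# Volume check, assuming a ~2,500 m^3 Olympic pool (50 m x 25 m x 2 m):
odz_volume_m3 = 600_000 * 1e9      # 600,000 km^3 expressed in m^3
pools = odz_volume_m3 / 2_500      # number of Olympic pools
print(round(pools / 1e9))          # -> 240 (billion pools)
```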

    The atlas serves as a reference for where ODZs lie today. The team hopes scientists can add to this atlas with continued measurements, to better track changes in these zones and predict how they may shift as the climate warms.

    “It’s broadly expected that the oceans will lose oxygen as the climate gets warmer. But the situation is more complicated in the tropics where there are large oxygen-deficient zones,” says Jarek Kwiecinski ’21, who developed the atlas along with Andrew Babbin, the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “It’s important to create a detailed map of these zones so we have a point of comparison for future change.”

    The team’s study appears today in the journal Global Biogeochemical Cycles.

    Airing out artifacts

    Oxygen-deficient zones are large, persistent regions of the ocean that occur naturally, as a consequence of marine microbes gobbling up sinking phytoplankton along with all the available oxygen in the surroundings. These zones happen to lie in regions that miss passing ocean currents, which would normally replenish them with oxygenated water. As a result, ODZs are locations of relatively permanent, oxygen-depleted waters and can exist at mid-ocean depths between roughly 35 and 1,000 meters below the surface. For some perspective, the oceans on average run about 4,000 meters deep.

    Over the last 40 years, research cruises have explored these regions by dropping bottles down to various depths and hauling up seawater that scientists then measure for oxygen.

    “But there are a lot of artifacts that come from a bottle measurement when you’re trying to measure truly zero oxygen,” Babbin says. “All the plastic that we deploy at depth is full of oxygen that can leach out into the sample. When all is said and done, that artificial oxygen inflates the ocean’s true value.”

    Rather than rely on measurements from bottle samples, the team looked at data from sensors attached to the outside of the bottles or integrated with robotic platforms that can change their buoyancy to measure water at different depths. These sensors measure a variety of signals, including changes in electrical currents or the intensity of light emitted by a photosensitive dye to estimate the amount of oxygen dissolved in water. In contrast to seawater samples that represent a single discrete depth, the sensors record signals continuously as they descend through the water column.

    Scientists have attempted to use these sensor data to estimate the true value of oxygen concentrations in ODZs, but have found it incredibly tricky to convert these signals accurately, particularly at concentrations approaching zero.

    “We took a very different approach, using measurements not to look at their true value, but rather how that value changes within the water column,” Kwiecinski says. “That way we can identify anoxic waters, regardless of what a specific sensor says.”

    Bottoming out

    The team reasoned that, if sensors showed a constant, unchanging value of oxygen in a continuous, vertical section of the ocean, regardless of the true value, then it would likely be a sign that oxygen had bottomed out, and that the section was part of an oxygen-deficient zone.

    The researchers brought together nearly 15 million sensor measurements collected over 40 years by various research cruises and robotic floats, and mapped the regions where oxygen did not change with depth.
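    The core of that mapping step, flagging stretches of a profile where the signal stops changing with depth regardless of its absolute value, can be sketched in a few lines. This is an illustrative reconstruction, not the study’s actual pipeline; the tolerance and minimum run length are made-up parameters:

```python
def anoxic_interval(depth, oxygen, tol=1e-3, min_points=5):
    """Return (top, bottom) depths of the longest stretch of a vertical
    profile where the oxygen signal stops changing with depth -- the
    sign that oxygen has bottomed out -- or None if no stretch spans at
    least `min_points` samples. Lists are ordered shallow to deep."""
    flat = [abs(b - a) < tol for a, b in zip(oxygen, oxygen[1:])]
    runs, start = [], None
    for i, f in enumerate(flat + [False]):        # sentinel closes a trailing run
        if f and start is None:
            start = i
        elif not f and start is not None:
            runs.append((start, i))               # flat samples start..i inclusive
            start = None
    runs = [r for r in runs if r[1] - r[0] + 1 >= min_points]
    if not runs:
        return None
    a, b = max(runs, key=lambda r: r[1] - r[0])
    return depth[a], depth[b]
```

    Because only the *change* in signal matters, a miscalibrated sensor that reads, say, a constant offset near zero still yields the same flat stretch.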

    “We can now see how the distribution of anoxic water in the Pacific changes in three dimensions,” Babbin says. 

    The team mapped the boundaries, volume, and shape of two major ODZs in the tropical Pacific, one in the Northern Hemisphere, and the other in the Southern Hemisphere. They were also able to see fine details within each zone. For instance, oxygen-depleted waters are “thicker,” or more concentrated towards the middle, and appear to thin out toward the edges of each zone.

    “We could also see gaps, where it looks like big bites were taken out of anoxic waters at shallow depths,” Babbin says. “There’s some mechanism bringing oxygen into this region, making it oxygenated compared to the water around it.”

    Such observations of the tropical Pacific’s oxygen-deficient zones are more detailed than what’s been measured to date.

    “How the borders of these ODZs are shaped, and how far they extend, could not be previously resolved,” Babbin says. “Now we have a better idea of how these two zones compare in terms of areal extent and depth.”

    “This gives you a sketch of what could be happening,” Kwiecinski says. “There’s a lot more one can do with this data compilation to understand how the ocean’s oxygen supply is controlled.”

    This research is supported, in part, by the Simons Foundation.

  • Climate modeling confirms historical records showing rise in hurricane activity

    When forecasting how storms may change in the future, it helps to know something about their past. Judging from historical records dating back to the 1850s, hurricanes in the North Atlantic have become more frequent over the last 150 years.

    However, scientists have questioned whether this upward trend is a reflection of reality, or simply an artifact of lopsided record-keeping. If 19th-century storm trackers had access to 21st-century technology, would they have recorded more storms? This inherent uncertainty has kept scientists from relying on storm records, and the patterns within them, for clues to how climate influences storms.

    A new MIT study published today in Nature Communications has used climate modeling, rather than storm records, to reconstruct the history of hurricanes and tropical cyclones around the world. The study finds that North Atlantic hurricanes have indeed increased in frequency over the last 150 years, similar to what historical records have shown.

    In particular, major hurricanes, and hurricanes in general, are more frequent today than in the past. And those that make landfall appear to have grown more powerful, carrying more destructive potential.

    Curiously, while the North Atlantic has seen an overall increase in storm activity, the same trend was not observed in the rest of the world. The study found that the frequency of tropical cyclones globally has not changed significantly in the last 150 years.

    “The evidence does point, as the original historical record did, to long-term increases in North Atlantic hurricane activity, but no significant changes in global hurricane activity,” says study author Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “It certainly will change the interpretation of climate’s effects on hurricanes — that it’s really the regionality of the climate, and that something happened to the North Atlantic that’s different from the rest of the globe. It may have been caused by global warming, which is not necessarily globally uniform.”

    Chance encounters

    The most comprehensive record of tropical cyclones is compiled in a database known as the International Best Track Archive for Climate Stewardship (IBTrACS). This historical record includes modern measurements from satellites and aircraft that date back to the 1940s. The database’s older records are based on reports from ships and islands that happened to be in a storm’s path. These earlier records date back to 1851, and overall the database shows an increase in North Atlantic storm activity over the last 150 years.

    “Nobody disagrees that that’s what the historical record shows,” Emanuel says. “On the other hand, most sensible people don’t really trust the historical record that far back in time.”

    Recently, scientists have used a statistical approach to identify storms that the historical record may have missed. To do so, they consulted all the digitally reconstructed shipping routes in the Atlantic over the last 150 years and mapped these routes over modern-day hurricane tracks. They then estimated the chance that a ship would encounter or entirely miss a hurricane’s presence. This analysis found a significant number of early storms were likely missed in the historical record. Accounting for these missed storms, they concluded that there was a chance that storm activity had not changed over the last 150 years.
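    A toy version of that overlap calculation might look like the following. The detection radius is invented, and the real analysis must also align ships and storms in time, which this sketch ignores:

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(p, q):
    """Great-circle distance in km between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*p, *q))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))

def detection_probability(storm_track, ship_positions, radius_km=200.0):
    """Fraction of a storm's track points falling within `radius_km` of
    any ship position -- a crude proxy for the chance the storm would
    have entered the historical record."""
    if not storm_track:
        return 0.0
    hits = sum(
        1 for s in storm_track
        if any(haversine_km(s, ship) <= radius_km for ship in ship_positions)
    )
    return hits / len(storm_track)
```

    Summing one minus this probability over all modern-style storm tracks gives an estimate of how many early storms the record likely missed.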

    But Emanuel points out that hurricane paths in the 19th century may have looked different from today’s tracks. What’s more, the scientists may have missed key shipping routes in their analysis, as older routes have not yet been digitized.

    “All we know is, if there had been a change (in storm activity), it would not have been detectable using digitized ship records,” Emanuel says. “So I thought, there’s an opportunity to do better, by not using historical data at all.”

    Seeding storms

    Instead, he estimated past hurricane activity using dynamical downscaling — a technique that his group developed and has applied over the last 15 years to study climate’s effect on hurricanes. The technique starts with a coarse global climate simulation and embeds within this model a finer-resolution model that simulates features as small as hurricanes. The combined models are then fed with real-world measurements of atmospheric and ocean conditions. Emanuel then scatters the realistic simulation with hurricane “seeds” and runs the simulation forward in time to see which seeds bloom into full-blown storms.

    For the new study, Emanuel embedded a hurricane model into a climate “reanalysis” — a type of climate model that combines observations from the past with climate simulations to generate accurate reconstructions of past weather patterns and climate conditions. He used a particular subset of climate reanalyses that only accounts for observations collected from the surface — for instance from ships, which have recorded weather conditions and sea surface temperatures consistently since the 1850s, as opposed to from satellites, which only began systematic monitoring in the 1970s.

    “We chose to use this approach to avoid any artificial trends brought about by the introduction of progressively different observations,” Emanuel explains.

    He ran an embedded hurricane model on three different climate reanalyses, simulating tropical cyclones around the world over the past 150 years. Across all three models, he observed “unequivocal increases” in North Atlantic hurricane activity.

    “There’s been this quite large increase in activity in the Atlantic since the mid-19th century, which I didn’t expect to see,” Emanuel says.

    Within this overall rise in storm activity, he also observed a “hurricane drought” — a period during the 1970s and ’80s when the number of yearly hurricanes temporarily dropped. This pause in storm activity can also be seen in historical records, and Emanuel’s group proposes a cause: sulfate aerosols, which were byproducts of fossil fuel combustion, likely set off a cascade of climate effects that cooled the North Atlantic and temporarily suppressed hurricane formation.

    “The general trend over the last 150 years was increasing storm activity, interrupted by this hurricane drought,” Emanuel notes. “And at this point, we’re more confident of why there was a hurricane drought than why there is an ongoing, long-term increase in activity that began in the 19th century. That is still a mystery, and it bears on the question of how global warming might affect future Atlantic hurricanes.”

    This research was supported, in part, by the National Science Foundation.

  • Nanograins make for a seismic shift

    In Earth’s crust, tectonic blocks slide and grind past each other like enormous ships loosed from anchor. Earthquakes are generated along these fault zones when enough stress builds for a block to stick, then suddenly slip.

    These slips can be aided by several factors that reduce friction within a fault zone, such as hotter temperatures or pressurized gases that can separate blocks like pucks on an air-hockey table. The decreasing friction enables one tectonic block to accelerate against the other until it runs out of energy. Seismologists have long believed this kind of frictional instability can explain how all crustal earthquakes start. But that might not be the whole story.

    In a study published today in Nature Communications, scientists Hongyu Sun and Matej Pec, from MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS), find that ultra-fine-grained crystals within fault zones can behave like low-viscosity fluids. The finding offers an alternative explanation for the instability that leads to crustal earthquakes. It also suggests a link between quakes in the crust and other types of temblors that occur deep in the Earth.

    Nanograins are commonly found in rocks from seismic environments along the smooth surface of “fault mirrors.” These polished, reflective rock faces betray the slipping, sliding forces of past earthquakes. However, it was unclear whether the crystals caused quakes or were merely formed by them.

    To better characterize how these crystals behaved within a fault, the researchers used a planetary ball milling machine to pulverize granite rocks into particles resembling those found in nature. Like a super-powered washing machine filled with ceramic balls, the machine pounded the rock until all its crystals were about 100 nanometers in width, each grain 1/2,000 the size of an average grain of sand.

    After packing the nanopowder into postage-stamp-sized cylinders jacketed in gold, the researchers subjected the material to stresses and heat, creating laboratory miniatures of real fault zones. This process enabled them to isolate the effect of the crystals from the complexity of other factors involved in an actual earthquake.

    The researchers report that the crystals were extremely weak when shearing was initiated — an order of magnitude weaker than more common microcrystals. But the nanocrystals became significantly stronger when the deformation rate was accelerated. Pec, professor of geophysics and the Victor P. Starr Career Development Chair, compares this characteristic, called “rate-strengthening,” to stirring honey in a jar. Stirring the honey slowly is easy, but becomes more difficult the faster you stir.

    The experiment suggests something similar happens in fault zones. As tectonic blocks accelerate past each other, the crystals gum things up between them like honey stirred in a seismic pot.

    Sun, the study’s lead author and an EAPS graduate student, explains that their finding runs counter to the dominant frictional weakening theory of how earthquakes start. That theory predicts that material on a fault zone’s surfaces weakens as the fault block accelerates, so friction should decrease. The nanocrystals did just the opposite. However, the crystals’ intrinsic weakness could mean that when enough of them accumulate within a fault, they can give way, causing an earthquake.
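    The contrast between the classic rate-weakening friction law and the rate-strengthening behavior the team measured can be sketched with two toy stress laws. All parameters here are illustrative, not values fit to the experiments:

```python
import math

def nanograin_stress(rate, prefactor=1.0, n=3.0):
    """Rate-strengthening, power-law viscous flow: resistance grows as
    the deformation rate grows, like honey stirred faster. `prefactor`
    and the stress exponent `n` are made up for illustration."""
    return prefactor * rate ** (1.0 / n)

def frictional_stress(rate, mu0=0.6, a=0.01, v0=1.0, normal_stress=1.0):
    """Rate-weakening friction of the classic instability model:
    resistance drops as slip accelerates. Parameters are illustrative."""
    return normal_stress * (mu0 - a * math.log(rate / v0))

# The two rheologies respond oppositely to acceleration:
# nanograin_stress grows with rate, frictional_stress shrinks with rate.
```

    In the frictional model, acceleration lowers resistance and feeds back into more acceleration, producing instability; in the nanograin model, acceleration raises resistance, so instability must come instead from the material’s intrinsic weakness at slow rates.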

    “We don’t totally disagree with the old theorem, but our study really opens new doors to explain the mechanisms of how earthquakes happen in the crust,” Sun says.

    The finding also suggests a previously unrecognized link between earthquakes in the crust and the earthquakes that rumble hundreds of kilometers beneath the surface, where the same tectonic dynamics aren’t at play. That deep, there are no tectonic blocks to grind against each other, and even if there were, the immense pressure would prevent the type of quakes observed in the crust that necessitate some dilatancy and void creation.

    “We know that earthquakes happen all the way down to really big depths where this motion along a frictional fault is basically impossible,” says Pec. “And so clearly, there must be different processes that allow for these earthquakes to happen.”

    Possible mechanisms for these deep-Earth tremors include “phase transitions” which occur due to atomic re-arrangement in minerals and are accompanied by a volume change, and other kinds of metamorphic reactions, such as dehydration of water-bearing minerals, in which the released fluid is pumped through pores and destabilizes a fault. These mechanisms are all characterized by a weak, rate-strengthening layer.

    If weak, rate-strengthening nanocrystals are abundant in the deep Earth, they could present another possible mechanism, says Pec. “Maybe crustal earthquakes are not a completely different beast than the deeper earthquakes. Maybe they have something in common.”

  • Zeroing in on the origins of Earth’s “single most important evolutionary innovation”

    Some time in Earth’s early history, the planet took a turn toward habitability when a group of enterprising microbes known as cyanobacteria evolved oxygenic photosynthesis — the ability to turn light and water into energy, releasing oxygen in the process.

    This evolutionary moment made it possible for oxygen to eventually accumulate in the atmosphere and oceans, setting off a domino effect of diversification and shaping the uniquely habitable planet we know today.  

    Now, MIT scientists have a precise estimate for when cyanobacteria, and oxygenic photosynthesis, first originated. Their results appear today in the Proceedings of the Royal Society B.

    They developed a new gene-analyzing technique that shows that all the species of cyanobacteria living today can be traced back to a common ancestor that evolved around 2.9 billion years ago. They also found that the ancestors of cyanobacteria branched off from other bacteria around 3.4 billion years ago, with oxygenic photosynthesis likely evolving during the intervening half-billion years, during the Archean Eon.

    Interestingly, this estimate places the appearance of oxygenic photosynthesis at least 400 million years before the Great Oxidation Event, a period in which the Earth’s atmosphere and oceans first experienced a rise in oxygen. This suggests that cyanobacteria may have evolved the ability to produce oxygen early on, but that it took a while for this oxygen to really take hold in the environment.

    “In evolution, things always start small,” says lead author Greg Fournier, associate professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Even though there’s evidence for early oxygenic photosynthesis — which is the single most important and really amazing evolutionary innovation on Earth — it still took hundreds of millions of years for it to take off.”

    Fournier’s MIT co-authors include Kelsey Moore, Luiz Thiberio Rangel, Jack Payette, Lily Momper, and Tanja Bosak.

    Slow fuse, or wildfire?

    Estimates for the origin of oxygenic photosynthesis vary widely, along with the methods to trace its evolution.

    For instance, scientists can use geochemical tools to look for traces of oxidized elements in ancient rocks. These methods have found hints that oxygen was present as early as 3.5 billion years ago — a sign that oxygenic photosynthesis may have been the source, although other sources are also possible.

    Researchers have also used molecular clock dating, which relies on the genetic sequences of modern microbes to trace back changes in genes through evolutionary history. Based on these sequences, researchers then use models to estimate the rate at which genetic changes occur, to trace when groups of organisms first evolved. But molecular clock dating is limited by the quality of ancient fossils and by the chosen rate model, which can produce different age estimates depending on the rate that is assumed.

    Fournier says different age estimates can imply conflicting evolutionary narratives. For instance, some analyses suggest oxygenic photosynthesis evolved very early on and progressed “like a slow fuse,” while others indicate it appeared much later and then “took off like wildfire” to trigger the Great Oxidation Event and the accumulation of oxygen in the biosphere.

    “In order for us to understand the history of habitability on Earth, it’s important for us to distinguish between these hypotheses,” he says.

    Horizontal genes

    To precisely date the origin of cyanobacteria and oxygenic photosynthesis, Fournier and his colleagues paired molecular clock dating with horizontal gene transfer — an independent method that doesn’t rely entirely on fossils or rate assumptions.

    Normally, an organism inherits a gene “vertically,” when it is passed down from the organism’s parent. In rare instances, a gene can also jump from one species to a distantly related one. For instance, one cell may eat another, and in the process incorporate some new genes into its genome.

    When such a horizontal gene transfer history is found, it’s clear that the group of organisms that acquired the gene is evolutionarily younger than the group from which the gene originated. Fournier reasoned that such instances could be used to determine the relative ages between certain bacterial groups. The ages for these groups could then be compared with the ages that various molecular clock models predict. The model that comes closest would likely be the most accurate, and could then be used to precisely estimate the age of other bacterial species — specifically, cyanobacteria.
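    The logic of using HGTs to pick among clock models can be sketched as a scoring problem: each transfer implies the donor clade must be older than the recipient clade, and the model whose predicted ages satisfy the most such constraints wins. The clade names and ages below are invented for illustration:

```python
def best_clock_model(model_ages, hgt_constraints):
    """Score candidate molecular-clock models against horizontal gene
    transfer (HGT) constraints and return the best-scoring model.

    model_ages      : {model_name: {clade: age_in_Ga}}
    hgt_constraints : [(donor_clade, recipient_clade), ...] -- each pair
                      means the donor must be older than the recipient.
    """
    def score(ages):
        return sum(ages[donor] > ages[recipient]
                   for donor, recipient in hgt_constraints)
    return max(model_ages, key=lambda m: score(model_ages[m]))
```

    With 34 transfers, a model that consistently violates the implied orderings can be ruled out even though no single transfer carries an absolute date.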

    Following this reasoning, the team looked for instances of horizontal gene transfer across the genomes of thousands of bacterial species, including cyanobacteria. They also drew on new cultures of modern cyanobacteria grown by Bosak and Moore, which allowed them to use fossil cyanobacteria more precisely as calibration points. In the end, they identified 34 clear instances of horizontal gene transfer. They then found that one out of six molecular clock models consistently matched the relative ages identified in the team’s horizontal gene transfer analysis.

    The team ran this model to estimate the age of the “crown” group of cyanobacteria, which encompasses all the species living today and known to exhibit oxygenic photosynthesis. They found that, during the Archean Eon, the crown group originated around 2.9 billion years ago, while cyanobacteria as a whole branched off from other bacteria around 3.4 billion years ago. This strongly suggests that oxygenic photosynthesis was already happening 500 million years before the Great Oxidation Event (GOE), and that cyanobacteria were producing oxygen for quite a long time before it accumulated in the atmosphere.

    The analysis also revealed that, shortly before the GOE, around 2.4 billion years ago, cyanobacteria experienced a burst of diversification. This implies that a rapid expansion of cyanobacteria may have tipped the Earth into the GOE and launched oxygen into the atmosphere.

    Fournier plans to apply horizontal gene transfer beyond cyanobacteria to pin down the origins of other elusive species.

    “This work shows that molecular clocks incorporating horizontal gene transfers (HGTs) promise to reliably provide the ages of groups across the entire tree of life, even for ancient microbes that have left no fossil record … something that was previously impossible,” Fournier says. 

    This research was supported, in part, by the Simons Foundation and the National Science Foundation.

  • Taylor Perron receives 2021 MacArthur Fellowship

    Taylor Perron, professor of geology and associate department head for education in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, has been named a recipient of a 2021 MacArthur Fellowship.

    Often referred to as “genius grants,” the fellowships are awarded by the John D. and Catherine T. MacArthur Foundation to talented individuals in a variety of fields. Each MacArthur fellow receives a $625,000 stipend, which they are free to use as they see fit. Recipients are notified by the foundation of their selection shortly before the fellowships are publicly announced.

    “After I had absorbed what they were saying, the first thing I thought was, I couldn’t wait to tell my wife, Lisa,” Perron says of receiving the call. “We’ve been a team through all of this and have had a pretty incredible journey, and I was just eager to share that with her.”

    Perron is a geomorphologist who seeks to understand the mechanisms that shape landscapes on Earth and other planets. His work combines mathematical modeling and computer simulations of landscape evolution; analysis of remote-sensing and spacecraft data; and field studies in regions such as the Appalachian Mountains, Hawaii, and the Amazon rainforest to trace how landscapes evolved over time and how they may change in the future.

    “If we can understand how climate and life and geological processes have interacted over a long time to create the landscapes we see now, we can use that information to anticipate where the landscape is headed in the future,” Perron says.

    His group has developed models that describe how river systems generate intricate branching patterns as a result of competing erosional processes, and how climate influences erosion on continents, islands, and reefs.

    Perron has also applied his methods beyond Earth, to retrace the evolution of the surfaces of Mars and Saturn’s moon Titan. His group has used spacecraft images and data to show how features on Titan, which appear to be active river networks, were likely carved out by raining liquid methane. On Mars, his analyses have supported the idea that the Red Planet once harbored an ocean and that the former shoreline of this Martian ocean is now warped as a result of a shift in the planet’s spin axis.

    He is continuing to map out the details of Mars and Titan’s landscape histories, which he hopes will provide clues to their ancient climates and habitability.

    “I think answers to some of the big questions about the solar system are written in planetary landscapes,” Perron says. “For example, why did Mars start off with lakes and rivers, but end up as a frozen desert? And if a world like Titan has weather like ours, but with a methane cycle instead of a water cycle, could an environment like that have supported life? One thing we try to do is figure out how to read the landscape to find the answers to those questions.”

    Perron has expanded his group’s focus to examine how changing landscapes affect biodiversity, for instance in Appalachia and in the Amazon — both home to freshwater systems that host some of the most diverse populations of life on the planet.

    “If we can figure out how changes in the physical landscape may have generated regions of really high biodiversity, that should help us learn how to conserve it,” Perron says.

    Recently, his group has also begun to investigate the influence of landscape evolution on human history. Perron is collaborating with archaeologists on projects to study the effect of physical landscapes on human migration in the Americas, and how the response of rivers to ice ages may have helped humans develop complex farming societies in the Amazon.

    Looking ahead, he plans to apply the MacArthur grant toward these projects and other “intellectual risks” — ideas that have potential for failure but could be highly rewarding if they succeed. The fellowship will also provide resources for his group to continue collaborating across disciplines and continents.

    “I’ve learned a lot from reaching out to people in other fields — everything from granular mechanics to fish biology,” Perron says. “That has broadened my scientific horizons and helped us do innovative work. Having the fellowship will provide more flexibility to allow us to continue connecting with people from other fields and other parts of the world.”

    Perron holds a BA in earth and planetary sciences and archaeology from Harvard University and a PhD in earth and planetary science from the University of California at Berkeley. He joined MIT as a faculty member in 2009.

  • Study: Global cancer risk from burning organic matter comes from unregulated chemicals

    Whenever organic matter is burned, such as in a wildfire, a power plant, a car’s exhaust, or daily cooking, the combustion releases polycyclic aromatic hydrocarbons (PAHs) — a class of pollutants known to cause lung cancer.

    There are more than 100 known types of PAH compounds emitted daily into the atmosphere. Regulators, however, have historically relied on measurements of a single compound, benzo(a)pyrene, to gauge a community’s risk of developing cancer from PAH exposure. Now MIT scientists have found that benzo(a)pyrene may be a poor indicator of this type of cancer risk.

    In a modeling study appearing today in the journal GeoHealth, the team reports that benzo(a)pyrene plays a small part — about 11 percent — in the global risk of developing PAH-associated cancer. Instead, 89 percent of that cancer risk comes from other PAH compounds, many of which are not directly regulated.

    Interestingly, about 17 percent of PAH-associated cancer risk comes from “degradation products” — chemicals that are formed when emitted PAHs react in the atmosphere. Many of these degradation products can in fact be more toxic than the emitted PAH from which they formed.

    The team hopes the results will encourage scientists and regulators to look beyond benzo(a)pyrene, to consider a broader class of PAHs when assessing a community’s cancer risk.

    “Most of the regulatory science and standards for PAHs are based on benzo(a)pyrene levels. But that is a big blind spot that could lead you down a very wrong path in terms of assessing whether cancer risk is improving or not, and whether it’s relatively worse in one place than another,” says study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences.

    Selin’s MIT co-authors include Jesse Kroll, Amy Hrdina, Ishwar Kohale, Forest White, and Bevin Engelward, and Jamie Kelly (who is now at University College London). Peter Ivatt and Mathew Evans at the University of York are also co-authors.

    Chemical pixels

    Benzo(a)pyrene has historically been the poster chemical for PAH exposure. The compound’s indicator status is largely based on early toxicology studies. But recent research suggests the chemical may not be the PAH representative that regulators have long relied upon.   

    “There has been a bit of evidence suggesting benzo(a)pyrene may not be very important, but this was from just a few field studies,” says Kelly, a former postdoc in Selin’s group and the study’s lead author.

    Kelly and his colleagues instead took a systematic approach to evaluate benzo(a)pyrene’s suitability as a PAH indicator. The team began by using GEOS-Chem, a global, three-dimensional chemical transport model that breaks the world into individual grid boxes and simulates within each box the reactions and concentrations of chemicals in the atmosphere.

    They extended this model to include chemical descriptions of how various PAH compounds, including benzo(a)pyrene, would react in the atmosphere. The team then plugged in recent data from emissions inventories and meteorological observations, and ran the model forward to simulate the concentrations of various PAH chemicals around the world over time.

    Risky reactions

    In their simulations, the researchers started with 16 relatively well-studied PAH chemicals, including benzo(a)pyrene, and traced the concentrations of these chemicals, plus the concentration of their degradation products over two generations, or chemical transformations. In total, the team evaluated 48 PAH species.

    They then compared these simulated concentrations with actual concentrations of the same chemicals recorded by monitoring stations around the world. The modeled and measured values agreed closely enough to show that the model’s predictions were realistic.

    Then within each model’s grid box, the researchers related the concentration of each PAH chemical to its associated cancer risk; to do this, they had to develop a new method based on previous studies in the literature to avoid double-counting risk from the different chemicals. Finally, they overlaid population density maps to predict the number of cancer cases globally, based on the concentration and toxicity of a specific PAH chemical in each location.

    Dividing the cancer cases by population produced the cancer risk associated with that chemical. In this way, the team calculated the cancer risk for each of the 48 compounds, then determined each chemical’s individual contribution to the total risk.
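    The bookkeeping in these two paragraphs can be illustrated with a toy calculation. The chemicals, concentrations, potency weights, and unit risk below are invented for illustration, and the sketch does not attempt the double-counting correction the study developed; it only shows how per-chemical concentrations, toxicity, and population combine into each chemical’s share of total risk.

```python
# Toy per-chemical risk attribution: concentration x relative potency x unit
# risk gives lifetime risk per person, scaled by population to get cases,
# then each chemical's share of the total. All numbers are illustrative,
# not values from the study.

# (chemical, concentration ng/m3, relative potency vs benzo(a)pyrene)
pahs = [
    ("benzo(a)pyrene",      0.50, 1.0),
    ("other_parent_pah",    3.00, 0.8),
    ("degradation_product", 0.05, 20.0),   # low concentration, high toxicity
]
unit_risk = 1e-6            # hypothetical lifetime risk per (ng/m3) B(a)P-equivalent
population = 1_000_000

total_cases = 0.0
cases_by_chem = {}
for name, conc, potency in pahs:
    risk = conc * potency * unit_risk       # lifetime risk per person
    cases = risk * population
    cases_by_chem[name] = cases
    total_cases += cases

for name, cases in cases_by_chem.items():
    share = 100 * cases / total_cases
    print(f"{name}: {cases:.2f} cases, {share:.1f}% of total risk")
```

Note how the hypothetical degradation product, despite a concentration 60 times lower than the other parent PAH, still contributes a meaningful share of the total because of its potency.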

    This analysis revealed that benzo(a)pyrene had a surprisingly small contribution, of about 11 percent, to the overall risk of developing cancer from PAH exposure globally. Eighty-nine percent of cancer risk came from other chemicals. And 17 percent of this risk arose from degradation products.

    “We see places where you can find concentrations of benzo(a)pyrene are lower, but the risk is higher because of these degradation products,” Selin says. “These products can be orders of magnitude more toxic, so the fact that they’re at tiny concentrations doesn’t mean you can write them off.”

    When the researchers compared calculated PAH-associated cancer risks around the world, they found significant differences depending on whether that risk calculation was based solely on concentrations of benzo(a)pyrene or on a region’s broader mix of PAH compounds.

    “If you use the old method, you would find the lifetime cancer risk is 3.5 times higher in Hong Kong versus southern India, but taking into account the differences in PAH mixtures, you get a difference of 12 times,” Kelly says. “So, there’s a big difference in the relative cancer risk between the two places. And we think it’s important to expand the group of compounds that regulators are thinking about, beyond just a single chemical.”

    The team’s study “provides an excellent contribution to better understanding these ubiquitous pollutants,” says Elisabeth Galarneau, an air quality expert and PhD research scientist in Canada’s Department of the Environment. “It will be interesting to see how these results compare to work being done elsewhere … to pin down which (compounds) need to be tracked and considered for the protection of human and environmental health.”

    This research was conducted in MIT’s Superfund Research Center and is supported in part by the National Institute of Environmental Health Sciences Superfund Basic Research Program, and the National Institutes of Health.


    Global warming begets more warming, new paleoclimate study finds

    It is increasingly clear that the prolonged drought conditions, record-breaking heat, sustained wildfires, and frequent, more extreme storms experienced in recent years are a direct result of rising global temperatures brought on by humans’ addition of carbon dioxide to the atmosphere. And a new MIT study on extreme climate events in Earth’s ancient history suggests that today’s planet may become more volatile as it continues to warm.

    The study, appearing today in Science Advances, examines the paleoclimate record of the last 66 million years, during the Cenozoic era, which began shortly after the extinction of the dinosaurs. The scientists found that during this period, fluctuations in the Earth’s climate experienced a surprising “warming bias.” In other words, there were far more warming events — periods of prolonged global warming, lasting thousands to tens of thousands of years — than cooling events. What’s more, warming events tended to be more extreme, with greater shifts in temperature, than cooling events.

    The researchers say a possible explanation for this warming bias may lie in a “multiplier effect,” whereby a modest degree of warming — for instance from volcanoes releasing carbon dioxide into the atmosphere — naturally speeds up certain biological and chemical processes that enhance these fluctuations, leading, on average, to still more warming.

    Interestingly, the team observed that this warming bias disappeared about 5 million years ago, around the time when ice sheets started forming in the Northern Hemisphere. It’s unclear what effect the ice has had on the Earth’s response to climate shifts. But as today’s Arctic ice recedes, the new study suggests that a multiplier effect may kick back in, and the result may be a further amplification of human-induced global warming.

    “The Northern Hemisphere’s ice sheets are shrinking, and could potentially disappear as a long-term consequence of human actions,” says the study’s lead author Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Our research suggests that this may make the Earth’s climate fundamentally more susceptible to extreme, long-term global warming events such as those seen in the geologic past.”

    Arnscheidt’s study co-author is Daniel Rothman, professor of geophysics at MIT, and co-founder and co-director of MIT’s Lorenz Center.

    A volatile push

    For their analysis, the team consulted large databases of sediments containing deep-sea benthic foraminifera — single-celled organisms that have been around for hundreds of millions of years and whose hard shells are preserved in sediments. The composition of these shells is affected by ocean temperatures as the organisms grow; the shells are therefore considered a reliable proxy for the Earth’s ancient temperatures.

    For decades, scientists have analyzed the composition of these shells, collected from all over the world and dated to various time periods, to track how the Earth’s temperature has fluctuated over millions of years. 

    “When using these data to study extreme climate events, most studies have focused on individual large spikes in temperature, typically of a few degrees Celsius warming,” Arnscheidt says. “Instead, we tried to look at the overall statistics and consider all the fluctuations involved, rather than picking out the big ones.”

    The team first carried out a statistical analysis of the data and observed that, over the last 66 million years, the distribution of global temperature fluctuations didn’t resemble a standard bell curve, with symmetric tails representing an equal probability of extreme warm and extreme cool fluctuations. Instead, the curve was noticeably lopsided, skewed toward more warm than cool events. The curve also exhibited a noticeably longer tail, representing warm events that were more extreme, or of higher temperature, than the most extreme cold events.
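    This kind of asymmetry test can be sketched in a few lines. The sample below is synthetic (a centered lognormal standing in for the foraminifera-derived temperature fluctuations), but the two diagnostics are the ones the paragraph describes: a skewness that departs from zero, and a warm tail that extends further than the cool tail.

```python
import numpy as np

# Synthetic skewed "temperature fluctuations": a lognormal sample shifted to
# zero mean, standing in for the paleoclimate record. A symmetric bell curve
# would give skewness ~0 and equal-length tails; this sample does not.

rng = np.random.default_rng(0)
fluct = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)
fluct -= fluct.mean()                  # center so warm/cool are +/- deviations

skew = np.mean(fluct**3) / np.std(fluct)**3     # third standardized moment
warm_tail = np.percentile(fluct, 99.9)          # extreme warm fluctuation
cool_tail = -np.percentile(fluct, 0.1)          # extreme cool fluctuation (as magnitude)

print(f"skewness = {skew:.2f}")                 # positive: biased toward warming
print(f"warm tail {warm_tail:.2f} vs cool tail {cool_tail:.2f}")
```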

    “This indicates there’s some sort of amplification relative to what you would otherwise have expected,” Arnscheidt says. “Everything’s pointing to something fundamental that’s causing this push, or bias toward warming events.”

    “It’s fair to say that the Earth system becomes more volatile, in a warming sense,” Rothman adds.

    A warming multiplier

    The team wondered whether this warming bias might have been a result of “multiplicative noise” in the climate-carbon cycle. Scientists have long understood that higher temperatures, up to a point, tend to speed up biological and chemical processes. Because the carbon cycle, which is a key driver of long-term climate fluctuations, is itself composed of such processes, increases in temperature may lead to larger fluctuations, biasing the system towards extreme warming events.

    In mathematics, there exists a set of equations that describes such general amplifying, or multiplicative effects. The researchers applied this multiplicative theory to their analysis to see whether the equations could predict the asymmetrical distribution, including the degree of its skew and the length of its tails.

    In the end, they found that the data, and the observed bias toward warming, could be explained by the multiplicative theory. In other words, it’s very likely that, over the last 66 million years, periods of modest warming were on average further enhanced by multiplier effects, such as the response of biological and chemical processes that further warmed the planet.
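    The multiplier idea can be illustrated with a toy stochastic model: two otherwise identical relaxation processes, one with a fixed noise amplitude and one whose amplitude grows with temperature. The equations and parameters below are invented for illustration, not taken from the paper; the point is only that state-dependent ("multiplicative") noise skews the fluctuation distribution toward warm events, while additive noise stays symmetric.

```python
import numpy as np

# Two toy climate-fluctuation processes relaxing toward a baseline state:
# one with constant (additive) noise, one whose noise amplitude grows with
# temperature (multiplicative). Purely illustrative parameters.

rng = np.random.default_rng(1)
n = 200_000
dt = 0.01
relax = 1.0                     # restoring rate toward the baseline state

def simulate(multiplicative):
    T = np.empty(n)
    T[0] = 0.0
    for i in range(1, n):
        amp = 0.5 * (1.0 + T[i - 1]) if multiplicative else 0.5
        amp = max(amp, 0.0)     # noise amplitude cannot be negative
        T[i] = T[i - 1] - relax * T[i - 1] * dt + amp * np.sqrt(dt) * rng.normal()
    return T

def skew(x):
    x = x - x.mean()
    return np.mean(x**3) / np.std(x)**3

s_additive = skew(simulate(False))
s_multiplicative = skew(simulate(True))
print(f"additive noise skew:       {s_additive:.2f}")        # ~ 0
print(f"multiplicative noise skew: {s_multiplicative:.2f}")  # > 0
```

Both processes have the same restoring force; only the temperature-dependence of the noise differs, and that alone produces the warm-skewed statistics.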

    As part of the study, the researchers also looked at the correlation between past warming events and changes in Earth’s orbit. Over hundreds of thousands of years, Earth’s orbit around the sun regularly becomes more or less elliptical. But scientists have wondered why many past warming events appeared to coincide with these changes, and why these events feature outsized warming compared with what the change in Earth’s orbit could have wrought on its own.

    So, Arnscheidt and Rothman incorporated the Earth’s orbital changes into the multiplicative model and their analysis of Earth’s temperature changes, and found that multiplier effects could predictably amplify, on average, the modest temperature rises due to changes in Earth’s orbit.

    “Climate warms and cools in synchrony with orbital changes, but the orbital cycles themselves would predict only modest changes in climate,” Rothman says. “But if we consider a multiplicative model, then modest warming, paired with this multiplier effect, can result in extreme events that tend to occur at the same time as these orbital changes.”

    “Humans are forcing the system in a new way,” Arnscheidt adds. “And this study is showing that, when we increase temperature, we’re likely going to interact with these natural, amplifying effects.”

    This research was supported, in part, by MIT’s School of Science.


    A new approach to preventing human-induced earthquakes

    When humans pump large volumes of fluid into the ground, they can set off potentially damaging earthquakes, depending on the underlying geology. This has been the case in certain oil- and gas-producing regions, where wastewater, often mixed with oil, is disposed of by injecting it back into the ground — a process that has triggered sizable seismic events in recent years.

    Now MIT researchers, working with an interdisciplinary team of scientists from industry and academia, have developed a method to manage such human-induced seismicity, and have demonstrated that the technique successfully reduced the number of earthquakes occurring in an active oil field.

    Their results, appearing today in Nature, could help mitigate earthquakes caused by the oil and gas industry, not just from the injection of wastewater produced with oil, but also from wastewater produced by hydraulic fracturing, or “fracking.” The team’s approach could also help prevent quakes from other human activities, such as the filling of water reservoirs and aquifers, and the sequestration of carbon dioxide in deep geologic formations.

    “Triggered seismicity is a problem that goes way beyond producing oil,” says study lead author Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “This is a huge problem for society that will have to be confronted if we are to safely inject carbon dioxide into the subsurface. We demonstrated the kind of study that will be necessary for doing this.”

    The study’s co-authors include Ruben Juanes, professor of civil and environmental engineering at MIT, and collaborators from the University of California at Riverside, the University of Texas at Austin, Harvard University, and Eni, a multinational oil and gas company based in Italy.

    Safe injections

    Both natural and human-induced earthquakes occur along geologic faults, or fractures between two blocks of rock in the Earth’s crust. In stable periods, the rocks on either side of a fault are held in place by the pressures generated by surrounding rocks. But when a large volume of fluid is suddenly injected at high rates, it can upset a fault’s fluid stress balance. In some cases, this sudden injection can lubricate a fault and cause rocks on either side to slip and trigger an earthquake.
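    This stress balance is commonly written as a Coulomb failure criterion: a fault slips when the shear stress on it exceeds the frictional resistance, which is the friction coefficient times the effective normal stress (total normal stress minus pore pressure) plus cohesion. The sketch below uses illustrative stress values, not measurements from any field, to show how raising pore pressure alone can push a stable fault to failure.

```python
# Coulomb failure criterion: a fault slips when shear stress exceeds
# mu * (sigma_n - p) + cohesion, where p is pore-fluid pressure. Injection
# raises p, lowering the effective normal stress that clamps the fault.
# All stress values below are illustrative.

def coulomb_stress_change(shear, sigma_n, pore_pressure, mu=0.6, cohesion=0.0):
    """Return distance past failure (MPa); >= 0 means the fault can slip."""
    strength = mu * (sigma_n - pore_pressure) + cohesion
    return shear - strength

shear = 28.0        # shear stress on the fault (MPa)
sigma_n = 60.0      # total normal stress (MPa)

for p in (10.0, 15.0, 20.0):   # pore pressure before and during injection (MPa)
    d = coulomb_stress_change(shear, sigma_n, p)
    state = "slips" if d >= 0 else "stable"
    print(f"pore pressure {p:4.1f} MPa -> {d:+.1f} MPa ({state})")
```

Nothing about the tectonic loading changes in this example; only the fluid pressure does, which is why injection rate and volume are the levers operators can actually control.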

    The most common source of such fluid injections is from the oil and gas industry’s disposal of wastewater that is brought up along with oil. Field operators dispose of this water through injection wells that continuously pump the water back into the ground at high pressures.

    “There’s a lot of water produced with the oil, and that water is injected into the ground, which has caused a large number of quakes,” Hager notes. “So, for a while, oil-producing regions in Oklahoma had more magnitude 3 quakes than California, because of all this wastewater that was being injected.”

    In recent years, a similar problem arose in southern Italy, where injection wells on oil fields operated by Eni triggered microseisms in an area where large naturally occurring earthquakes had previously occurred. The company, looking for ways to address the problem, sought consultation from Hager and Juanes, both leading experts in seismicity and subsurface flows.

    “This was an opportunity for us to get access to high-quality seismic data about the subsurface, and learn how to do these injections safely,” Juanes says.

    Seismic blueprint

    The team made use of detailed information, accumulated by the oil company over years of operation in the Val d’Agri oil field, a region of southern Italy that lies in a tectonically active basin. The data included information about the region’s earthquake record, dating back to the 1600s, as well as the structure of rocks and faults, and the state of the subsurface corresponding to the various injection rates of each well.

    This video shows the change in stress on the geologic faults of the Val d’Agri field from 2001 to 2019, as predicted by a new MIT-derived model. Video credit: A. Plesch (Harvard University)

    This video shows small earthquakes occurring on the Costa Molina fault within the Val d’Agri field from 2004 to 2016. Each event is shown for two years fading from an initial bright color to the final dark color. Video credit: A. Plesch (Harvard University)

    The researchers integrated these data into a coupled subsurface flow and geomechanical model, which predicts how the stresses and strains of underground structures evolve as the volume of pore fluid, such as from the injection of water, changes. They connected this model to an earthquake mechanics model in order to translate the changes in underground stress and fluid pressure into a likelihood of triggering earthquakes. They then quantified the rate of earthquakes associated with various rates of water injection, and identified scenarios that were unlikely to trigger large quakes.

    When they ran the models using data from 1993 through 2016, the predictions of seismic activity matched with the earthquake record during this period, validating their approach. They then ran the models forward in time, through the year 2025, to predict the region’s seismic response to three different injection rates: 2,000, 2,500, and 3,000 cubic meters per day. The simulations showed that large earthquakes could be avoided if operators kept injection rates at 2,000 cubic meters per day — a flow rate comparable to a small public fire hydrant.
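    The scenario screening can be caricatured with a toy rate model in which triggered seismicity grows with injection once a pressure-diffusion threshold is exceeded. The threshold placement and the scaling constant below are invented for illustration; the study derived its forecasts from the coupled flow-geomechanical model, not from a shortcut like this.

```python
# Toy scenario screen: assume quakes are triggered only when injection
# outpaces the rate at which pressure can safely diffuse away, and that the
# triggered rate grows linearly with the excess. THRESHOLD and SCALE are
# invented for illustration.

THRESHOLD = 2000.0    # injection rate below which pressure diffuses safely (m3/day)
SCALE = 0.01          # hypothetical quakes per year per (m3/day) of excess

def quake_rate(injection_m3_per_day):
    excess = max(injection_m3_per_day - THRESHOLD, 0.0)
    return SCALE * excess

for q in (2000, 2500, 3000):  # the three simulated scenarios (m3/day)
    print(f"{q} m3/day -> ~{quake_rate(q):.0f} triggered quakes/yr")
```

Even this caricature captures the operational logic: the safe operating point is not zero injection, but the highest rate that stays below the threshold the subsurface can accommodate.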

    Eni field operators implemented the team’s recommended rate at the oil field’s single water injection well over a 30-month period between January 2017 and June 2019. In this time, the team observed only a few tiny seismic events, which coincided with brief periods when operators went above the recommended injection rate.

    “The seismicity in the region has been very low in these two-and-a-half years, with around four quakes of magnitude 0.5, as opposed to the hundreds of quakes, of magnitude up to 3, that were happening between 2006 and 2016,” Hager says.

    The results demonstrate that operators can successfully manage earthquakes by adjusting injection rates, based on the underlying geology. Juanes says the team’s modeling approach may help to prevent earthquakes related to other processes, such as the building of water reservoirs and the sequestration of carbon dioxide — as long as there is detailed information about a region’s subsurface.

    “A lot of effort needs to go into understanding the geologic setting,” says Juanes, who notes that, if carbon sequestration were carried out on depleted oil fields, “such reservoirs could have this type of history, seismic information, and geologic interpretation that you could use to build similar models for carbon sequestration. We show it’s at least possible to manage seismicity in an operational setting. And we offer a blueprint for how to do it.”

    This research was supported, in part, by Eni.