More stories

  • A healthy wind

    Nearly 10 percent of today’s electricity in the United States comes from wind power. The renewable energy source benefits climate, air quality, and public health by displacing emissions of greenhouse gases and air pollutants that would otherwise be produced by fossil-fuel-based power plants.

    A new MIT study finds that the health benefits associated with wind power could more than quadruple if operators prioritized turning down output from the most polluting fossil-fuel-based power plants when energy from wind is available.

    In the study, published today in Science Advances, researchers analyzed the hourly activity of wind turbines, as well as the reported emissions from every fossil-fuel-based power plant in the country, between the years 2011 and 2017. They traced emissions across the country and mapped the pollutants to affected demographic populations. They then calculated the regional air quality and associated health costs to each community.

    The researchers found that in 2014, wind power that was associated with state-level policies improved air quality overall, resulting in $2 billion in health benefits across the country. However, only roughly 30 percent of these health benefits reached disadvantaged communities.

    The team further found that if the electricity industry were to reduce the output of the most polluting fossil-fuel-based power plants, rather than the most cost-saving plants, in times of wind-generated power, the overall health benefits could quadruple to $8.4 billion nationwide. However, the results would have a similar demographic breakdown.
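
The dispatch logic behind this comparison can be sketched in a few lines: turn down plants in order of marginal cost (the status quo) versus in order of health damage. All plant figures below are invented for illustration; the study's actual analysis uses reported emissions and modeled health damages for every U.S. fossil-fuel plant.

```python
# Toy dispatch experiment: when wind power arrives, which fossil plants
# get turned down? All plant figures below are invented for illustration.
plants = [
    # (name, marginal cost in $/MWh, health damage in $/MWh)
    ("gas_A",  30, 10),
    ("coal_B", 25, 120),
    ("gas_C",  35, 15),
    ("coal_D", 28, 200),
]

def health_benefit(wind_mwh_per_plant, priority):
    """Health benefit when the top-two plants by `priority` are turned
    down, each absorbing an equal share of the available wind energy."""
    order = sorted(plants, key=priority, reverse=True)
    return sum(damage * wind_mwh_per_plant for _, _, damage in order[:2])

# Cost-saving dispatch turns down the most expensive plants to run;
# health-based dispatch turns down the most damaging ones.
cost_based = health_benefit(100, priority=lambda p: p[1])
health_based = health_benefit(100, priority=lambda p: p[2])
```

In this toy setup the health-first ordering happens to displace the two dirty coal plants, yielding far more benefit for the same amount of wind, which mirrors the paper's quadrupling result in spirit only.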

    “We found that prioritizing health is a great way to maximize benefits in a widespread way across the U.S., which is a very positive thing. But it suggests it’s not going to address disparities,” says study co-author Noelle Selin, a professor in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences at MIT. “In order to address air pollution disparities, you can’t just focus on the electricity sector or renewables and count on the overall air pollution benefits addressing these real and persistent racial and ethnic disparities. You’ll need to look at other air pollution sources, as well as the underlying systemic factors that determine where plants are sited and where people live.”

    Selin’s co-authors are lead author and former MIT graduate student Minghao Qiu PhD ’21, now at Stanford University, and Corwin Zigler at the University of Texas at Austin.

    Turn-down service

    In their new study, the team looked for patterns between periods of wind power generation and the activity of fossil-fuel-based power plants, to see how regional electricity markets adjusted the output of power plants in response to influxes of renewable energy.

    “One of the technical challenges, and the contribution of this work, is trying to identify which are the power plants that respond to this increasing wind power,” Qiu notes.

    To do so, the researchers compared two historical datasets from the period between 2011 and 2017: an hour-by-hour record of energy output of wind turbines across the country, and a detailed record of emissions measurements from every fossil-fuel-based power plant in the U.S. The datasets covered each of seven major regional electricity markets, each market providing energy to one or multiple states.

    “California and New York are each their own market, whereas the New England market covers around seven states, and the Midwest covers more,” Qiu explains. “We also cover about 95 percent of all the wind power in the U.S.”

    In general, they observed that, in times when wind power was available, markets adjusted by essentially scaling back the power output of natural gas and sub-bituminous coal-fired power plants. They noted that the plants that were turned down were likely chosen for cost-saving reasons, as certain plants were less costly to turn down than others.

    The team then used a sophisticated atmospheric chemistry model to simulate the wind patterns and chemical transport of emissions across the country, and determined where and at what concentrations the emissions generated fine particulates and ozone — two pollutants that are known to damage air quality and human health. Finally, the researchers mapped the general demographic populations across the country, based on U.S. census data, and applied a standard epidemiological approach to calculate a population’s health cost as a result of their pollution exposure.
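
The last step, turning a pollution exposure into a dollar health cost, typically combines a concentration-response function with a value of statistical life (VSL). The sketch below uses generic, illustrative parameter values (a log-linear PM2.5 response and an EPA-style VSL), not the paper's exact inputs.

```python
import math

def health_cost(delta_pm25, population, baseline_mortality_rate,
                beta=0.0058, vsl=9.0e6):
    """Monetized annual health cost of a PM2.5 exposure increase, via a
    log-linear concentration-response function. beta (~ln(1.06)/10 per
    ug/m^3) and the $9M value of statistical life are illustrative,
    EPA-style placeholders, not the study's inputs."""
    # Fraction of baseline deaths attributable to the added exposure
    attributable_fraction = 1.0 - math.exp(-beta * delta_pm25)
    excess_deaths = population * baseline_mortality_rate * attributable_fraction
    return excess_deaths * vsl

# Example: a 1 ug/m^3 PM2.5 increase over a city of 1 million people,
# assuming ~0.8 percent/year baseline all-cause mortality
cost = health_cost(1.0, 1_000_000, 0.008)
```

Summing such costs over every affected census tract, hour by hour, is what lets a study like this attach a nationwide dollar figure to a change in power-plant dispatch.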

    This analysis revealed that, in the year 2014, a general cost-saving approach to displacing fossil-fuel-based energy in times of wind energy resulted in $2 billion in health benefits, or savings, across the country. A smaller share of these benefits went to disadvantaged populations, such as communities of color and low-income communities, though this disparity varied by state.

    “It’s a more complex story than we initially thought,” Qiu says. “Certain population groups are exposed to a higher level of air pollution, and those would be low-income people and racial minority groups. What we see is, developing wind power could reduce this gap in certain states but further increase it in other states, depending on which fossil-fuel plants are displaced.”

    Tweaking power

    The researchers then examined how the pattern of emissions and the associated health benefits would change if they prioritized turning down different fossil-fuel-based plants in times of wind-generated power. They tweaked the emissions data to reflect several alternative scenarios: one in which the most health-damaging, polluting power plants are turned down first; and two other scenarios in which plants producing the most sulfur dioxide and carbon dioxide, respectively, are first to reduce their output.

    They found that while each scenario increased health benefits overall, and the first scenario in particular could quadruple health benefits, the original disparity persisted: Communities of color and low-income communities still experienced smaller health benefits than more well-off communities.

    “We got to the end of the road and said, there’s no way we can address this disparity by being smarter in deciding which plants to displace,” Selin says.

    Nevertheless, the study can help identify ways to improve the health of the general population, says Julian Marshall, a professor of environmental engineering at the University of Washington.

    “The detailed information provided by the scenarios in this paper can offer a roadmap to electricity-grid operators and to state air-quality regulators regarding which power plants are highly damaging to human health and also are likely to noticeably reduce emissions if wind-generated electricity increases,” says Marshall, who was not involved in the study.

    “One of the things that makes me optimistic about this area is, there’s a lot more attention to environmental justice and equity issues,” Selin concludes. “Our role is to figure out the strategies that are most impactful in addressing those challenges.”

    This work was supported, in part, by the U.S. Environmental Protection Agency and by the National Institutes of Health.

  • Earth can regulate its own temperature over millennia, new study finds

    The Earth’s climate has undergone some big changes, from global volcanism to planet-cooling ice ages and dramatic shifts in solar radiation. And yet life, for the last 3.7 billion years, has kept on beating.

    Now, a study by MIT researchers in Science Advances confirms that the planet harbors a “stabilizing feedback” mechanism that acts over hundreds of thousands of years to pull the climate back from the brink, keeping global temperatures within a steady, habitable range.

    Just how does it accomplish this? A likely mechanism is “silicate weathering” — a geological process by which the slow and steady weathering of silicate rocks involves chemical reactions that ultimately draw carbon dioxide out of the atmosphere and into ocean sediments, trapping the gas in rocks.

    Scientists have long suspected that silicate weathering plays a major role in regulating the Earth’s carbon cycle. The mechanism of silicate weathering could provide a geologically constant force in keeping carbon dioxide — and global temperatures — in check. But there’s never been direct evidence for the continual operation of such a feedback, until now.

    The new findings are based on a study of paleoclimate data that record changes in average global temperatures over the last 66 million years. The MIT team applied a mathematical analysis to see whether the data revealed any patterns characteristic of stabilizing phenomena that reined in global temperatures on a geologic timescale.

    They found that indeed there appears to be a consistent pattern in which the Earth’s temperature swings are dampened over timescales of hundreds of thousands of years. The duration of this effect is similar to the timescales over which silicate weathering is predicted to act.

    The results are the first to use actual data to confirm the existence of a stabilizing feedback, the mechanism of which is likely silicate weathering. This stabilizing feedback would explain how the Earth has remained habitable through dramatic climate events in the geologic past.

    “On the one hand, it’s good because we know that today’s global warming will eventually be canceled out through this stabilizing feedback,” says Constantin Arnscheidt, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But on the other hand, it will take hundreds of thousands of years to happen, so not fast enough to solve our present-day issues.”

    The study is co-authored by Arnscheidt and Daniel Rothman, professor of geophysics at MIT.

    Stability in data

    Scientists have previously seen hints of a climate-stabilizing effect in the Earth’s carbon cycle: Chemical analyses of ancient rocks have shown that the flux of carbon in and out of Earth’s surface environment has remained relatively balanced, even through dramatic swings in global temperature. Furthermore, models of silicate weathering predict that the process should have some stabilizing effect on the global climate. And finally, the fact of the Earth’s enduring habitability points to some inherent, geologic check on extreme temperature swings.

    “You have a planet whose climate was subjected to so many dramatic external changes. Why did life survive all this time? One argument is that we need some sort of stabilizing mechanism to keep temperatures suitable for life,” Arnscheidt says. “But it’s never been demonstrated from data that such a mechanism has consistently controlled Earth’s climate.”

    Arnscheidt and Rothman sought to confirm whether a stabilizing feedback has indeed been at work, by looking at data of global temperature fluctuations through geologic history. They worked with a range of global temperature records compiled by other scientists, from the chemical composition of ancient marine fossils and shells, as well as preserved Antarctic ice cores.

    “This whole study is only possible because there have been great advances in improving the resolution of these deep-sea temperature records,” Arnscheidt notes. “Now we have data going back 66 million years, with data points at most thousands of years apart.”

    Speeding to a stop

    To the data, the team applied the mathematical theory of stochastic differential equations, which is commonly used to reveal patterns in widely fluctuating datasets.

    “We realized this theory makes predictions for what you would expect Earth’s temperature history to look like if there had been feedbacks acting on certain timescales,” Arnscheidt explains.

    Using this approach, the team analyzed the history of average global temperatures over the last 66 million years, considering the entire period over different timescales, such as tens of thousands of years versus hundreds of thousands, to see whether any patterns of stabilizing feedback emerged within each timescale.

    “To some extent, it’s like your car is speeding down the street, and when you put on the brakes, you slide for a long time before you stop,” Rothman says. “There’s a timescale over which frictional resistance, or a stabilizing feedback, kicks in, when the system returns to a steady state.”

    Without stabilizing feedbacks, fluctuations of global temperature should grow with timescale. But the team’s analysis revealed a regime in which fluctuations did not grow, implying that a stabilizing mechanism reined in the climate before fluctuations grew too extreme. The timescale for this stabilizing effect — hundreds of thousands of years — coincides with what scientists predict for silicate weathering.
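
The signature the team looked for can be illustrated with a toy stochastic model: a random walk (no feedback) versus an Ornstein-Uhlenbeck process (a stabilizing feedback with a characteristic timescale). This is a generic illustration of the statistical idea, not the paper's actual analysis or data.

```python
import math
import random

def simulate(n_steps, theta, sigma=1.0, dt=1.0, seed=1):
    """Temperature anomaly under dT = -theta*T*dt + sigma*dW.
    theta > 0 is a stabilizing feedback acting on timescale ~1/theta;
    theta = 0 is a pure random walk with no feedback."""
    random.seed(seed)
    T, path = 0.0, []
    for _ in range(n_steps):
        T += -theta * T * dt + sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        path.append(T)
    return path

def rms_change(path, lag):
    """Root-mean-square temperature change over a given timescale (lag)."""
    diffs = [path[i + lag] - path[i] for i in range(len(path) - lag)]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

walk = simulate(20_000, theta=0.0)     # no feedback
damped = simulate(20_000, theta=0.01)  # feedback kicks in beyond ~100 steps

# Without feedback, rms fluctuations grow roughly as sqrt(lag); with
# feedback, they saturate once the lag exceeds ~1/theta.
for lag in (10, 100, 1000):
    print(lag, round(rms_change(walk, lag), 2), round(rms_change(damped, lag), 2))
```

The "regime in which fluctuations did not grow" corresponds to the saturation seen in the damped case at lags beyond the feedback timescale.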

    Interestingly, Arnscheidt and Rothman found that on longer timescales, the data did not reveal any stabilizing feedbacks. That is, there doesn’t appear to be any recurring pull-back of global temperatures on timescales longer than a million years. Over these longer timescales, then, what has kept global temperatures in check?

    “There’s an idea that chance may have played a major role in determining why, after more than 3 billion years, life still exists,” Rothman offers.

    In other words, as the Earth’s temperature fluctuates over longer stretches, those fluctuations may simply happen to be small enough, in the geologic sense, to stay within the range where a stabilizing feedback such as silicate weathering could periodically pull the climate back in check and, more to the point, keep it within a habitable zone.

    “There are two camps: Some say random chance is a good enough explanation, and others say there must be a stabilizing feedback,” Arnscheidt says. “We’re able to show, directly from data, that the answer is probably somewhere in between. In other words, there was some stabilization, but pure luck likely also played a role in keeping Earth continuously habitable.”

    This research was supported, in part, by a MathWorks fellowship and the National Science Foundation.

  • Methane research takes on new urgency at MIT

    One of the most notable climate change provisions in the 2022 Inflation Reduction Act is the first U.S. federal tax on a greenhouse gas (GHG). That the fee targets methane (CH4) rather than carbon dioxide (CO2) is indicative of the urgency the scientific community has placed on reducing this short-lived but powerful gas. Methane persists in the atmosphere for about 12 years — compared to more than 1,000 years for CO2 — yet upon release it immediately causes about 120 times more warming than an equal mass of CO2. The gas is responsible for at least a quarter of today’s gross warming.
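
The trade-off between methane's potency and its short lifetime can be sketched with a toy pulse model. The inputs (a 120x instantaneous ratio, a 12-year lifetime, CO2 treated as permanent) come from the paragraph above plus deliberate simplifications; real global-warming-potential calculations include indirect effects and CO2 decay, so they yield somewhat different values.

```python
import math

def warming_ratio(years, instant_ratio=120.0, lifetime=12.0):
    """Integrated warming from a pulse of CH4 relative to the same mass
    of CO2 over `years`. CO2 is treated as permanent and CH4 as decaying
    exponentially -- a deliberate simplification of the real chemistry."""
    ch4 = instant_ratio * lifetime * (1.0 - math.exp(-years / lifetime))
    co2 = years * 1.0  # constant unit forcing from the CO2 pulse
    return ch4 / co2

# Methane's edge shrinks as the time horizon lengthens:
print(round(warming_ratio(1)), round(warming_ratio(20)), round(warming_ratio(100)))
```

Even this crude model reproduces the key qualitative point: methane's relative impact is largest over the first years after release, which is why near-term cuts matter so much.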

    “Methane has a disproportionate effect on near-term warming,” says Desiree Plata, the director of MIT Methane Network. “CH4 does more damage than CO2 no matter how long you run the clock. By removing methane, we could potentially avoid critical climate tipping points.” 

    Because GHGs have a compounding effect on climate, reductions made now will have a far greater impact than the same reductions made in the future. Cutting methane emissions will slow the thawing of permafrost, which could otherwise lead to massive methane releases, and will curb rising emissions from wetlands.

    “The goal of MIT Methane Network is to reduce methane emissions by 45 percent by 2030, which would save up to 0.5 degree C of warming by 2100,” says Plata, an associate professor of civil and environmental engineering at MIT and director of the Plata Lab. “When you consider that governments are trying for a 1.5-degree reduction of all GHGs by 2100, this is a big deal.” 

    At typical atmospheric concentrations, methane, like CO2, poses no direct health risks. Yet methane contributes to the formation of ozone, a key component of air pollution in the lower atmosphere, which leads to “higher rates of asthma and increased emergency room visits,” says Plata.

    Methane-related projects at the Plata Lab include a filter made of zeolite — the same clay-like material used in cat litter — designed to convert methane into CO2 at dairy farms and coal mines. At first glance, the technology would appear to be a bit of a hard sell, since it converts one GHG into another. Yet the zeolite filter’s low carbon and dollar costs, combined with the disproportionate warming impact of methane, make it a potential game-changer.

    The sense of urgency about methane has been amplified by recent studies that show humans are generating far more methane emissions than previously estimated, and that the rates are rising rapidly. Exactly how much methane is in the air is uncertain. Current methods for measuring atmospheric methane, such as ground, drone, and satellite sensors, “are not readily abundant and do not always agree with each other,” says Plata.  

    The Plata Lab is collaborating with Tim Swager in the MIT Department of Chemistry to develop low-cost methane sensors. “We are developing chemiresistive sensors that cost about a dollar that you could place near energy infrastructure to back-calculate where leaks are coming from,” says Plata.

    The researchers are working on improving the accuracy of the sensors using machine learning techniques and are planning to integrate internet-of-things technology to transmit alerts. Plata and Swager are not alone in focusing on data collection: the Inflation Reduction Act adds significant funding for methane sensor research. 

    Other research at the Plata Lab includes the development of nanomaterials and heterogeneous catalysis techniques for environmental applications. The lab also explores mitigation solutions for industrial waste, particularly those related to the energy transition. Plata is the co-founder of a lithium-ion battery recycling startup called Nth Cycle.

    On a more fundamental level, the Plata Lab is exploring how to develop products with environmental and social sustainability in mind. “Our overarching mission is to change the way that we invent materials and processes so that environmental objectives are incorporated along with traditional performance and cost metrics,” says Plata. “It is important to do that rigorous assessment early in the design process.”

    Video: “MIT amps up methane research”

    The MIT Methane Network brings together 26 researchers from MIT along with representatives of other institutions “that are dedicated to the idea that we can reduce methane levels in our lifetime,” says Plata. The organization supports research such as Plata’s zeolite and sensor projects, as well as designing pipeline-fixing robots, developing methane-based fuels for clean hydrogen, and researching the capture and conversion of methane into liquid chemical precursors for pharmaceuticals and plastics. Other members are researching policies to encourage more sustainable agriculture and land use, as well as methane-related social justice initiatives. 

    “Methane is an especially difficult problem because it comes from all over the place,” says Plata. A recent Global Carbon Project study estimated that half of methane emissions are caused by humans. This is led by waste and agriculture (28 percent), including cow and sheep belching, rice paddies, and landfills.  

    Fossil fuels represent 18 percent of the total budget. Of this, about 63 percent is derived from oil and gas production and pipelines, 33 percent from coal mining activities, and 5 percent from industry and transportation. Human-caused biomass burning, primarily from slash-and-burn agriculture, emits about 4 percent of the global total.  

    The other half of the methane budget includes natural methane emissions from wetlands (20 percent) and other natural sources (30 percent). The latter includes permafrost melting and natural biomass burning, such as forest fires started by lightning.  
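
The quoted shares can be tallied to confirm the half-and-half split between human-caused and natural sources; this is simple bookkeeping on the figures above.

```python
# Shares of the global methane budget quoted above (percent of total).
budget = {
    "waste and agriculture": 28,   # belching livestock, rice paddies, landfills
    "fossil fuels": 18,
    "human-caused biomass burning": 4,
    "wetlands": 20,
    "other natural sources": 30,   # permafrost melt, lightning-started fires
}

anthropogenic = {"waste and agriculture", "fossil fuels",
                 "human-caused biomass burning"}
human_share = sum(v for k, v in budget.items() if k in anthropogenic)
natural_share = sum(v for k, v in budget.items() if k not in anthropogenic)
print(human_share, natural_share)  # 50 50 -- the half-and-half split
```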

    With increases in global warming and population, the line between anthropogenic and natural causes is getting fuzzier. “Human activities are accelerating natural emissions,” says Plata. “Climate change increases the release of methane from wetlands and permafrost and leads to larger forest and peat fires.”  

    The calculations can get complicated. For example, wetlands provide benefits in CO2 capture, biological diversity, and resilience to sea-level rise that more than compensate for their methane releases. Meanwhile, draining swamps for development increases emissions.

    Over 100 nations have signed on to the U.N.’s Global Methane Pledge, which aims to cut anthropogenic methane emissions by at least 30 percent within the next 10 years. The U.N. report estimates that this goal can be achieved using proven technologies and that about 60 percent of these reductions can be accomplished at low cost.

    Much of the savings would come from greater efficiencies in fossil fuel extraction, processing, and delivery. The methane fees in the Inflation Reduction Act are primarily focused on encouraging fossil fuel companies to accelerate ongoing efforts to cap old wells, flare off excess emissions, and tighten pipeline connections.  

    Fossil fuel companies have already made far greater pledges to reduce methane than they have for CO2, which is central to their business. This is due in part to the potential cost savings, and in part to preparation for methane regulations expected from the Environmental Protection Agency in late 2022. The regulations build upon existing EPA oversight of drilling operations and will likely be exempt from the U.S. Supreme Court’s ruling that limits the federal government’s ability to regulate GHGs.

    Zeolite filter targets methane in dairy and coal 

    The “low-hanging fruit” of gas-stream mitigation addresses most of the 20 percent of total methane emissions released at concentrations high enough to flare. Plata’s zeolite filter takes on the thornier challenge: the remaining 80 percent of emissions, which are too dilute to burn.

    Plata found inspiration in decades-old catalysis research for turning methane into methanol. One strategy has been to use an abundant, low-cost aluminosilicate clay called zeolite.  

    “The methanol creation process is challenging because you need to separate a liquid, and it has very low efficiency,” says Plata. “Yet zeolite can be very efficient at converting methane into CO2, and it is much easier because it does not require liquid separation. Converting methane to CO2 sounds like a bad thing, but there is a major anti-warming benefit. And because methane is much more dilute than CO2, the relative CO2 contribution is minuscule.”  

    Using zeolite to create methanol requires highly concentrated methane, high temperatures and pressures, and industrial processing conditions. Yet Plata’s process, which dopes the zeolite with copper, operates in the presence of oxygen at much lower temperatures under typical pressures. “We let the methane proceed the way it wants from a thermodynamic perspective from methane to methanol down to CO2,” says Plata. 

    Researchers around the world are working on other dilute methane removal technologies. Projects include spraying iron salt aerosols into sea air where they react with natural chlorine or bromine radicals, thereby capturing methane. Most of these geoengineering solutions, however, are difficult to measure and would require massive scale to make a difference.  

    Plata is focusing her zeolite filters on environments where concentrations are high, but not so high as to be flammable. “We are trying to scale zeolite into filters that you could snap onto the side of a cross-ventilation fan in a dairy barn or in a ventilation air shaft in a coal mine,” says Plata. “For every packet of air we bring in, we take a lot of methane out, so we get more bang for our buck.”  

    The major challenge is creating a filter that can handle high flow rates without getting clogged or falling apart. Dairy barn air handlers can push air at up to 5,000 cubic feet per minute and coal mine handlers can approach 500,000 CFM. 

    Plata is exploring engineering options including fluidized bed reactors with floating catalyst particles. Another filter solution, based in part on catalytic converters, features “higher-order geometric structures where you have a porous material with a long path length where the gas can interact with the catalyst,” says Plata. “This avoids the challenge with fluidized beds of containing catalyst particles in the reactor. Instead, they are fixed within a structured material.”  

    Competing technologies for removing methane from mine shafts “operate at temperatures of 1,000 to 1,200 degrees C, requiring a lot of energy and risking explosion,” says Plata. “Our technology avoids safety concerns by operating at 300 to 400 degrees C. It reduces energy use and provides more tractable deployment costs.” 

    Potentially, energy and dollar costs could be further reduced in coal mines by capturing the heat generated by the conversion process. “In coal mines, you have enrichments above a half-percent methane, but below the 4 percent flammability threshold,” says Plata. “The excess heat from the process could be used to generate electricity using off-the-shelf converters.” 
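
The scale of that recoverable heat is easy to estimate. The sketch below is a back-of-the-envelope bound, not a design calculation: the conversion constants are textbook values, and the flow and concentration figures match those quoted above.

```python
# Back-of-the-envelope heat available from oxidizing dilute methane in a
# mine ventilation stream. Constants are textbook values; the flow and
# concentration match the figures quoted in the article.
CFM_TO_M3_PER_S = 0.000471947      # cubic feet per minute -> m^3/s
CH4_DENSITY = 0.657                # kg/m^3 near room temperature, 1 atm
CH4_HHV = 55.5e6                   # J/kg, higher heating value of methane

def thermal_power_mw(flow_cfm, ch4_vol_fraction):
    """Heat released, in MW, if all methane in the stream is oxidized."""
    flow_m3_s = flow_cfm * CFM_TO_M3_PER_S
    ch4_kg_s = flow_m3_s * ch4_vol_fraction * CH4_DENSITY
    return ch4_kg_s * CH4_HHV / 1e6

# A large coal-mine shaft (~500,000 CFM) at 1 percent CH4 by volume:
power = thermal_power_mw(500_000, 0.01)  # roughly tens of MW of heat
```

At these flow rates, even sub-percent methane concentrations carry tens of megawatts of chemical energy, which is why capturing the conversion heat is attractive.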

    Plata’s dairy barn research is funded by the Gerstner Family Foundation and the coal mining project by the U.S. Department of Energy. “The DOE would like us to spin out the technology for scale-up within three years,” says Plata. “We cannot guarantee we will hit that goal, but we are trying to develop this as quickly as possible. Our society needs to start reducing methane emissions now.”

  • Machine learning facilitates “turbulence tracking” in fusion reactors

    Fusion, which promises practically unlimited, carbon-free energy using the same processes that power the sun, is at the heart of a worldwide research effort that could help mitigate climate change.

    A multidisciplinary team of researchers is now bringing tools and insights from machine learning to aid this effort. Scientists from MIT and elsewhere have used computer-vision models to identify and track turbulent structures that appear under the conditions needed to facilitate fusion reactions.

    Monitoring the formation and movement of these structures, called filaments or “blobs,” is important for understanding the heat and particle flows exiting the reacting fuel, which ultimately determine the engineering requirements for the reactor walls that must withstand those flows. However, scientists typically study blobs using averaging techniques, which trade details of individual structures for aggregate statistics. To study individual blobs, researchers must track them by marking them manually in video data.

    The researchers built a synthetic video dataset of plasma turbulence to make this process more effective and efficient. They used it to train four computer vision models, each of which identifies and tracks blobs. They trained the models to pinpoint blobs in the same ways that humans would.

    When the researchers tested the trained models using real video clips, the models could identify blobs with high accuracy — more than 80 percent in some cases. The models were also able to effectively estimate the size of blobs and the speeds at which they moved.

    Because millions of video frames are captured during just one fusion experiment, using machine-learning models to track blobs could give scientists much more detailed information.

    “Before, we could get a macroscopic picture of what these structures are doing on average. Now, we have a microscope and the computational power to analyze one event at a time. If we take a step back, what this reveals is the power available from these machine-learning techniques, and ways to use these computational resources to make progress,” says Theodore Golfinopoulos, a research scientist at the MIT Plasma Science and Fusion Center and co-author of a paper detailing these approaches.

    His fellow co-authors include lead author Woonghee “Harry” Han, a physics PhD candidate; senior author Iddo Drori, a visiting professor in the Computer Science and Artificial Intelligence Laboratory (CSAIL), faculty associate professor at Boston University, and adjunct at Columbia University; as well as others from the MIT Plasma Science and Fusion Center, the MIT Department of Civil and Environmental Engineering, and the Swiss Federal Institute of Technology at Lausanne in Switzerland. The research appears today in Nature Scientific Reports.

    Heating things up

    For more than 70 years, scientists have sought to use controlled thermonuclear fusion reactions to develop an energy source. To reach the conditions necessary for a fusion reaction, fuel must be heated to temperatures above 100 million degrees Celsius. (The core of the sun is about 15 million degrees Celsius.)

    A common method for containing this super-hot fuel, called plasma, is to use a tokamak. These devices utilize extremely powerful magnetic fields to hold the plasma in place and control the interaction between the exhaust heat from the plasma and the reactor walls.

    Blobs appear as filaments that fall out of the plasma at its very edge, between the plasma and the reactor walls. These random, turbulent structures affect how energy flows between the plasma and the reactor.

    “Knowing what the blobs are doing strongly constrains the engineering performance that your tokamak power plant needs at the edge,” adds Golfinopoulos.

    Researchers use a unique imaging technique to capture video of the plasma’s turbulent edge during experiments. An experimental campaign may last months; a typical day will produce about 30 seconds of data, corresponding to roughly 60 million video frames, with thousands of blobs appearing each second. This makes it impossible to track all blobs manually, so researchers rely on average sampling techniques that only provide broad characteristics of blob size, speed, and frequency.

    “On the other hand, machine learning provides a solution to this by blob-by-blob tracking for every frame, not just average quantities. This gives us much more knowledge about what is happening at the boundary of the plasma,” Han says.

    He and his co-authors took four well-established computer vision models, which are commonly used for applications like autonomous driving, and trained them to tackle this problem.

    Simulating blobs

    To train these models, they created a vast dataset of synthetic video clips that captured the blobs’ random and unpredictable nature.

    “Sometimes they change direction or speed, sometimes multiple blobs merge, or they split apart. These kinds of events were not considered before with traditional approaches, but we could freely simulate those behaviors in the synthetic data,” Han says.

    Creating synthetic data also allowed them to label each blob, which made the training process more effective, Drori adds.

    Using these synthetic data, they trained the models to draw boundaries around blobs, teaching them to closely mimic what a human scientist would draw.

    Then they tested the models using real video data from experiments. First, they measured how closely the boundaries the models drew matched up with actual blob contours.
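
A standard way to score such boundary agreement is intersection-over-union (IoU) between a predicted mask and a ground-truth mask; the paper's exact metric isn't specified here, so this is a generic sketch on a toy example.

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two binary masks (nested lists of 0/1)."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a & b   # pixel in both masks
            union += a | b   # pixel in either mask
    return inter / union if union else 0.0

# Toy 4x4 frame: a predicted blob vs. the ground-truth brightness contour.
pred  = [[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
truth = [[0, 1, 1, 0], [0, 1, 1, 1], [0, 0, 0, 0], [0, 0, 0, 0]]
score = iou(pred, truth)  # 4 shared pixels / 5 pixels in the union
```

An IoU of 1.0 means the predicted boundary exactly encloses the ground-truth contour; scores near 0.8, as in this toy case, correspond to the kind of close-but-imperfect overlap described below.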

    But they also wanted to see if the models predicted objects that humans would identify. They asked three human experts to pinpoint the centers of blobs in video frames and checked to see if the models predicted blobs in those same locations.

    The models were able to draw accurate blob boundaries, overlapping with brightness contours, which are considered ground truth, about 80 percent of the time. The models’ evaluations were similar to those of human experts, and they successfully predicted the theory-defined blob regime, in agreement with results from a traditional method.

    Now that they have shown the success of using synthetic data and computer vision models for tracking blobs, the researchers plan to apply these techniques to other problems in fusion research, such as estimating particle transport at the boundary of a plasma, Han says.

    They also made the dataset and models publicly available, and look forward to seeing how other research groups apply these tools to study the dynamics of blobs, says Drori.

    “Prior to this, there was a barrier to entry that mostly the only people working on this problem were plasma physicists, who had the datasets and were using their methods. There is a huge machine-learning and computer-vision community. One goal of this work is to encourage participation in fusion research from the broader machine-learning community toward the broader goal of helping solve the critical problem of climate change,” he adds.

    This research is supported, in part, by the U.S. Department of Energy and the Swiss National Science Foundation.

  • in

    Coordinating climate and air-quality policies to improve public health

    As America’s largest investment to fight climate change, the Inflation Reduction Act positions the country to reduce its greenhouse gas emissions by an estimated 40 percent below 2005 levels by 2030. But as it edges the United States closer to achieving its international climate commitment, the legislation is also expected to yield significant — and more immediate — improvements in the nation’s health. If successful in accelerating the transition from fossil fuels to clean energy alternatives, the IRA will sharply reduce atmospheric concentrations of fine particulates known to exacerbate respiratory and cardiovascular disease and cause premature deaths, along with other air pollutants that degrade human health. One recent study shows that eliminating air pollution from fossil fuels in the contiguous United States would prevent more than 50,000 premature deaths and avoid more than $600 billion in health costs each year.

    While national climate policies such as those advanced by the IRA can simultaneously help mitigate climate change and improve air quality, their results may vary widely when it comes to improving public health. That’s because the potential health benefits associated with air quality improvements are much greater in some regions and economic sectors than in others. Those benefits can be maximized, however, through a prudent combination of climate and air-quality policies.

    Several past studies have evaluated the likely health impacts of various policy combinations, but their usefulness has been limited by a reliance on a small set of standard policy scenarios. More versatile tools are needed to model a wide range of climate and air-quality policy combinations and assess their collective effects on air quality and human health. Now researchers at the MIT Joint Program on the Science and Policy of Global Change and the MIT Institute for Data, Systems, and Society (IDSS) have developed a publicly available, flexible scenario tool that does just that.

    In a study published in the journal Geoscientific Model Development, the MIT team introduces its Tool for Air Pollution Scenarios (TAPS), which can be used to estimate the likely air-quality and health outcomes of a wide range of climate and air-quality policies at the regional, sectoral, and fuel-based level. 

    “This tool can help integrate the siloed sustainability issues of air pollution and climate action,” says the study’s lead author William Atkinson, who recently served as a Biogen Graduate Fellow and research assistant at the IDSS Technology and Policy Program’s (TPP) Research to Policy Engagement Initiative. “Climate action does not guarantee a clean air future, and vice versa — but the issues have similar sources that imply shared solutions if done right.”

    The study’s initial application of TAPS shows that with current air-quality policies and near-term Paris Agreement climate pledges alone, short-term pollution reductions give way to long-term increases — given the expected growth of emissions-intensive industrial and agricultural processes in developing regions. More ambitious climate and air-quality policies could be complementary, each reducing different pollutants substantially to give tremendous near- and long-term health benefits worldwide.

    “The significance of this work is that we can more confidently identify the long-term emission reduction strategies that also support air quality improvements,” says MIT Joint Program Deputy Director C. Adam Schlosser, a co-author of the study. “This is a win-win for setting climate targets that are also healthy targets.”

    TAPS projects air quality and health outcomes based on three integrated components: a recent global inventory of detailed emissions resulting from human activities (e.g., fossil fuel combustion, land-use change, industrial processes); multiple scenarios of emissions-generating human activities between now and the year 2100, produced by the MIT Economic Projection and Policy Analysis model; and emissions intensity (emissions per unit of activity) scenarios based on recent data from the Greenhouse Gas and Air Pollution Interactions and Synergies model.
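The article describes TAPS as multiplying projections of emissions-generating activity by emissions intensity (emissions per unit of activity). A toy sketch of that structure, with invented years, sectors, and numbers (nothing here reflects TAPS's actual data), might look like:

```python
# Illustrative only: combine an activity scenario with an emissions-intensity
# scenario to get an emissions trajectory, mirroring TAPS's basic structure.
# All numbers are made up for the example.
activity  = {2030: 120.0, 2050: 160.0}   # e.g., units of industrial output
intensity = {2030: 0.50,  2050: 0.30}    # e.g., tons of pollutant per unit

# Emissions per year = activity x intensity; a cleaner-technology scenario
# lowers intensity even as activity grows.
emissions = {year: activity[year] * intensity[year] for year in activity}
print(emissions)  # {2030: 60.0, 2050: 48.0}
```

The example illustrates the study's headline dynamic: activity growth in emissions-intensive sectors can offset intensity improvements unless policy drives intensity down faster.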

    “We see the climate crisis as a health crisis, and believe that evidence-based approaches are key to making the most of this historic investment in the future, particularly for vulnerable communities,” says Johanna Jobin, global head of corporate reputation and responsibility at Biogen. “The scientific community has spoken with unanimity and alarm that not all climate-related actions deliver equal health benefits. We’re proud of our collaboration with the MIT Joint Program to develop this tool that can be used to bridge research-to-policy gaps, support policy decisions to promote health among vulnerable communities, and train the next generation of scientists and leaders for far-reaching impact.”

    The tool can inform decision-makers about a wide range of climate and air-quality policies. Policy scenarios can be applied to specific regions, sectors, or fuels to investigate policy combinations at a more granular level, or to target short-term actions with high-impact benefits.

    TAPS could be further developed to account for additional emissions sources and trends.

    “Our new tool could be used to examine a large range of both climate and air quality scenarios. As the framework is expanded, we can add detail for specific regions, as well as additional pollutants such as air toxics,” says study supervising co-author Noelle Selin, professor at IDSS and the MIT Department of Earth, Atmospheric and Planetary Sciences, and director of TPP.    

    This research was supported by the U.S. Environmental Protection Agency and its Science to Achieve Results (STAR) program; Biogen; TPP’s Leading Technology and Policy Initiative; and TPP’s Research to Policy Engagement Initiative.

  • in

    Finding community in high-energy-density physics

    Skylar Dannhoff knew one thing: She did not want to be working alone.

    As an undergraduate at Case Western Reserve University, she had committed to a senior project that often felt like solitary lab work, a feeling heightened by the pandemic. Though it was an enriching experience, she was determined to find a graduate school environment that would foster community, one “with lots of people, lots of collaboration; where it’s impossible to work until 3 a.m. without anyone noticing.” A unique group at the Plasma Science and Fusion Center (PSFC) looked promising: the High-Energy-Density Physics (HEDP) division, a lead partner in the National Nuclear Security Administration’s Center for Excellence at MIT.

    “It was a shot in the dark, just more of a whim than anything,” she says of her request to join HEDP on her application to MIT’s Department of Physics. “And then, somehow, they reached out to me. I told them I’m willing to learn about plasma. I didn’t know anything about it.”

    What she did know was that the HEDP group collaborates with other U.S. laboratories on an approach to creating fusion energy known as inertial confinement fusion (ICF). One version of the technique, known as direct-drive ICF, aims multiple laser beams symmetrically onto a spherical capsule filled with nuclear fuel. The other, indirect-drive ICF, instead aims multiple laser beams into a gold cylindrical cavity called a hohlraum, within which the spherical fuel capsule is positioned. The laser beams are configured to hit the inner hohlraum wall, generating a “bath” of X-rays, which in turn compress the fuel capsule.

    Imploding the capsule generates intense fusion energy within a tiny fraction of a second (on the order of tens of picoseconds). In August 2021, the National Ignition Facility (NIF) at Lawrence Livermore National Laboratory (LLNL) used this method to produce a historic fusion yield of 1.3 megajoules, putting researchers within reach of “ignition,” the point where the self-sustained fusion burn spreads into the surrounding fuel, leading to a high fusion-energy gain.

    Joining the group just a month before this long-sought success, Dannhoff was impressed more with the response of her new teammates and the ICF community than with the scientific milestone. “I got a better appreciation for people who had spent their entire careers working on this project, just chugging along doing their best, ignoring the naysayers. I was excited for the people.”

    Dannhoff is now working toward extending the success of NIF and other ICF experiments, like the OMEGA laser at the University of Rochester’s Laboratory for Laser Energetics. Under the supervision of Senior Research Scientist Chikang Li, she is studying what happens to the flow of plasma within the hohlraum cavity during indirect ICF experiments, particularly for hohlraums with inner-wall aerogel foam linings. Experiments over the last decade have shown just how excruciatingly precise the symmetry in ICF targets must be. The more symmetric the X-ray drive, the more effective the implosion, and it is possible that these foam linings will improve the X-ray symmetry and drive efficiency.

    Dannhoff is specifically interested in studying the behavior of silicon- and tantalum-based foam liners. She is as concerned with the challenges facing the people at General Atomics (GA) and LLNL who are creating these targets as she is with the scientific outcome.

    “I just had a meeting with GA yesterday,” she notes. “And it’s a really tricky process. It’s kind of pushing the boundaries of what is doable at the moment. I got a much better sense of how demanding this project is for them, how much we’re asking of them.”

    What excites Dannhoff is the teamwork she observes, both at MIT and between ICF institutions around the United States. With roughly 10 graduate students and postdocs down the hall, each with an assigned lead role in lab management, she knows she can consult an expert on almost any question. And collaborators across the country are just an email away. “Any information that people can give you, they will give you, and usually very freely,” she notes. “Everyone just wants to see this work.”

    That Dannhoff is a natural team player is also evidenced in her hobbies. A hockey goalie, she prioritizes playing with MIT’s intramural teams, “because goalies are a little hard to come by. I just play with whoever needs a goalie on that night, and it’s a lot of fun.”

    She is also a member of the radio community, a fellowship she first embraced at Case Western — a moment she describes as a turning point in her life. “I literally don’t know who I would be today if I hadn’t figured out radio is something I’m interested in,” she admits. The MIT Radio Society provided the perfect landing pad for her arrival in Cambridge, full of the kinds of supportive, interesting, knowledgeable students she had befriended as an undergraduate. She credits radio with helping her realize that she could make her greatest contributions to science by focusing on engineering.

    Dannhoff gets philosophical as she marvels at the invisible waves that surround us.

    “Not just radio waves: every wave,” she asserts. “The voice is everywhere. Music, signal, space phenomena: it’s always around. And all we have to do is make the right little device and have the right circuit elements put in the right order to unmix and mix the signals and amplify them. And bada-bing, bada-boom, we’re talking with the universe.”

    “Maybe that epitomizes physics to me,” she adds. “We’re trying to listen to the universe, and it’s talking to us. We just have to come up with the right tools and hear what it’s trying to say.”

  • in

    Small eddies play a big role in feeding ocean microbes

    Subtropical gyres are enormous rotating ocean currents that generate sustained circulations in the Earth’s subtropical regions just to the north and south of the equator. These gyres are slow-moving whirlpools that circulate within massive basins around the world, gathering up nutrients, organisms, and sometimes trash, as the currents rotate from coast to coast.

    For years, oceanographers have puzzled over conflicting observations within subtropical gyres. At the surface, these massive currents appear to host healthy populations of phytoplankton — microbes that feed the rest of the ocean food chain and are responsible for sucking up a significant portion of the atmosphere’s carbon dioxide.

    But judging from what scientists know about the dynamics of gyres, they estimated the currents themselves wouldn’t be able to maintain enough nutrients to sustain the phytoplankton they were seeing. How, then, were the microbes able to thrive?

    Now, MIT researchers have found that phytoplankton may receive deliveries of nutrients from outside the gyres, and that the delivery vehicle is in the form of eddies — much smaller currents that swirl at the edges of a gyre. These eddies pull nutrients in from high-nutrient equatorial regions and push them into the center of a gyre, where the nutrients are then taken up by other currents and pumped to the surface to feed phytoplankton.

    Ocean eddies, the team found, appear to be an important source of nutrients in subtropical gyres. Their replenishing effect, which the researchers call a “nutrient relay,” helps maintain populations of phytoplankton, which play a central role in the ocean’s ability to sequester carbon from the atmosphere. While climate models tend to project a decline in the ocean’s ability to sequester carbon over the coming decades, this “nutrient relay” could help sustain carbon storage over the subtropical oceans.

    “There’s a lot of uncertainty about how the carbon cycle of the ocean will evolve as climate continues to change,” says Mukund Gupta, a postdoc at Caltech who led the study as a graduate student at MIT. “As our paper shows, getting the carbon distribution right is not straightforward, and depends on understanding the role of eddies and other fine-scale motions in the ocean.”

    Gupta and his colleagues report their findings this week in the Proceedings of the National Academy of Sciences. The study’s co-authors are Jonathan Lauderdale, Oliver Jahn, Christopher Hill, Stephanie Dutkiewicz, and Michael Follows at MIT, and Richard Williams at the University of Liverpool.

    A snowy puzzle

    A cross-section of an ocean gyre resembles a stack of nesting bowls that is stratified by density: Warmer, lighter layers lie at the surface, while colder, denser waters make up deeper layers. Phytoplankton live within the ocean’s top sunlit layers, where the microbes require sunlight, warm temperatures, and nutrients to grow.

    When phytoplankton die, they sink through the ocean’s layers as “marine snow.” Some of this snow releases nutrients back into the current, where they are pumped back up to feed new microbes. The rest of the snow sinks out of the gyre, down to the deepest layers of the ocean. The deeper the snow sinks, the more difficult it is for it to be pumped back to the surface. The snow is then trapped, or sequestered, along with any unreleased carbon and nutrients.

    Oceanographers thought that the main source of nutrients in subtropical gyres came from recirculating marine snow. But as a portion of this snow inevitably sinks to the bottom, there must be another source of nutrients to explain the healthy populations of phytoplankton at the surface. Exactly what that source is “has left the oceanography community a little puzzled for some time,” Gupta says.

    Swirls at the edge

    In their new study, the team sought to simulate a subtropical gyre to see what other dynamics may be at work. They focused on the North Pacific gyre, one of the Earth’s five major gyres, which circulates over most of the North Pacific Ocean, and spans more than 20 million square kilometers. 

    The team started with the MITgcm, a general circulation model that simulates the physical circulation patterns in the atmosphere and oceans. To reproduce the North Pacific gyre’s dynamics as realistically as possible, the team used an MITgcm algorithm, previously developed at NASA and MIT, which tunes the model to match actual observations of the ocean, such as ocean currents recorded by satellites, and temperature and salinity measurements taken by ships and drifters.  

    “We use a simulation of the physical ocean that is as realistic as we can get, given the machinery of the model and the available observations,” Lauderdale says.

    Play video

    An animation of the North Pacific Ocean shows phosphate nutrient concentrations at 500 meters below the ocean surface. The swirls represent small eddies transporting phosphate from the nutrient-rich equator (lighter colors), northward toward the nutrient-depleted subtropics (darker colors). This nutrient relay mechanism helps sustain biological activity and carbon sequestration in the subtropical ocean. Credit: Oliver Jahn

    The realistic model captured finer details, at a resolution of less than 20 kilometers per pixel, compared to other models that have a more limited resolution. The team combined the simulation of the ocean’s physical behavior with the Darwin model — a simulation of microbe communities such as phytoplankton, and how they grow and evolve with ocean conditions.

    The team ran the combined simulation of the North Pacific gyre over a decade, and created animations to visualize the pattern of currents and the nutrients they carried, in and around the gyre. What emerged were small eddies that ran along the edges of the enormous gyre and appeared to be rich in nutrients.

    “We were picking up on little eddy motions, basically like weather systems in the ocean,” Lauderdale says. “These eddies were carrying packets of high-nutrient waters, from the equator, north into the center of the gyre and downwards along the sides of the bowls. We wondered if these eddy transfers made an important delivery mechanism.”

    Surprisingly, the nutrients first move deeper, away from the sunlight, before being returned upwards where the phytoplankton live. The team found that ocean eddies could supply up to 50 percent of the nutrients in subtropical gyres.

    “That is very significant,” Gupta says. “The vertical process that recycles nutrients from marine snow is only half the story. The other half is the replenishing effect of these eddies. As subtropical gyres contribute a significant part of the world’s oceans, we think this nutrient relay is of global importance.”

    This research was supported, in part, by the Simons Foundation and NASA.

  • in

    Divorce is more common in albatross couples with shy males, study finds

    The wandering albatross is the poster bird for avian monogamy. The graceful glider is known to mate for life, partnering up with the same bird to breed, season after season, between long flights at sea.

    But on rare occasions, an albatross pair will “divorce” — a term ornithologists use for instances when one partner leaves the pair for another mate while the other partner remains in the flock. Divorce rates vary widely across the avian world, and the divorce rate for wandering albatrosses is relatively low.

    Nevertheless, the giant drifters can split up. Scientists at MIT and the Woods Hole Oceanographic Institution (WHOI) have found that, at least for one particular population of wandering albatross, whether a pair will divorce boils down to one important factor: personality. 

    In a study appearing today in the journal Biology Letters, the team reports that an albatross couple’s chance of divorce is highly influenced by the male partner’s “boldness.” The bolder and more aggressive the male, the more likely the pair is to stay together. The shyer the male, the higher the chance that the pair will divorce.

    The researchers say their study is the first to link personality and divorce in a wild animal species.

    “We thought that bold males, being more aggressive, would be more likely to divorce, because they would be more likely to take the risk of switching partners to improve future reproductive outcomes,” says study senior author Stephanie Jenouvrier, an associate scientist and seabird ecologist in WHOI’s FLEDGE Lab. “Instead we find the shy divorce more because they are more likely to be forced to divorce by a more competitive intruder. We expect personality may impact divorce rates in many species, but in different ways.”

    Lead author Ruijiao Sun, a graduate student in the MIT-WHOI Joint Program and MIT’s Department of Earth, Atmospheric and Planetary Sciences, says that this new evidence of a link between personality and divorce in the wandering albatross may help scientists predict the resilience of the population.

    “The wandering albatross is a vulnerable species,” Sun says. “Understanding the effect of personality on divorce is important because it can help researchers predict the consequences for population dynamics, and implement conservation efforts.”

    The study’s co-authors include Joanie Van de Walle of WHOI, Samantha Patrick of the University of Liverpool, and Christophe Barbraud, Henri Weimerskirch, and Karine Delord of CNRS-La Rochelle University in France.

    Repeat divorcées

    The new study concentrates on a population of wandering albatross that return regularly to Possession Island in the Southern Indian Ocean to breed. This population has been the focus of a long-term study dating back to the 1950s, in which researchers have been monitoring the birds each breeding season and recording the pairings and breakups of individuals through the years.

    This particular population is skewed toward more male individuals than females because the foraging grounds of female albatrosses overlap with fishing vessels, where they are more prone to being accidentally caught in fishing lines as bycatch.  

    In earlier research, Sun analyzed data from this long-term study and picked up a curious pattern: Those individuals that divorced were more likely to do so again and again.

    “Then we wanted to know, what drives divorce, and why are some individuals divorcing more often,” Jenouvrier says. “In humans, you see this repetitive divorce pattern as well, linked to personality. And the wandering albatross is one of the rare species for which we have both demographic and personality data.”

    That personality data comes from an ongoing study that began in 2008 and is led by co-author Patrick, who has been measuring the personality of individuals among the same population of wandering albatross on Possession Island. In the study of animal behavior, personality is defined as a consistent behavioral difference displayed by an individual. Biologists mainly measure personality in animals as a gradient between shy and bold, or less to more aggressive.

    In Patrick’s study, researchers have measured boldness in albatrosses by gauging a bird’s reaction to a human approaching its nest from a distance of about 5 meters. A bird is assigned a score depending on how it reacts: a bird that does not respond scores zero and is considered the most shy, while a bird that lifts its head, or even stands up, scores higher and is considered more bold.
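A hypothetical encoding of that assay can make the scoring concrete. The reaction labels, point values, and averaging below are illustrative, not the study's actual protocol:

```python
# Hypothetical boldness scoring inspired by the nest-approach assay above.
# Labels and point values are invented for illustration.
BOLDNESS_SCORE = {
    "no response": 0,  # shyest reaction
    "lifts head": 1,
    "stands up": 2,    # boldest reaction mentioned in the article
}

def boldness(reactions):
    """Average score over repeated assessments of the same bird."""
    return sum(BOLDNESS_SCORE[r] for r in reactions) / len(reactions)

# Three assessments of one (fictional) bird across different seasons.
print(boldness(["no response", "lifts head", "stands up"]))  # 1.0
```

Averaging over multiple years, as Patrick's repeated assessments allow, smooths out one-off reactions into a stable personality estimate for each individual.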

    Patrick has made multiple personality assessments of the same individuals over multiple years. Sun and Jenouvrier wondered: Could an individual’s personality have anything to do with their chance to divorce?

    “We had seen this repetitive divorce pattern, and then talked with Sam (Patrick) to see, could this be related to personality?” Sun recalls. “We know that personality predicts divorce in human beings, and it would be intuitive to make the link between personality and divorce in wild populations.”

    Shy birds

    In their new study, the team used data from both the demographic and personality studies to see whether any patterns between the two emerged. They applied a statistical model to both datasets, to test whether the personality of individuals in an albatross pair affected the fate of that pair.

    They found that for females, personality had little to do with whether the birds divorced. But in males, the pattern was clear: Those that were identified as shy were more likely to divorce, while bolder males stayed with their partner.
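The article does not name the statistical model, but a relationship like this is commonly captured with logistic regression. The sketch below uses invented coefficients chosen only to reproduce the qualitative finding (shyer males divorce more), not the study's fitted values:

```python
import math

# Illustrative logistic model: probability of divorce as a function of a
# male's boldness score. Coefficients b0 and b1 are invented; the negative
# b1 encodes the finding that bolder males are less likely to divorce.
def p_divorce(boldness, b0=-1.0, b1=-0.8):
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * boldness)))

shy, bold = p_divorce(0.0), p_divorce(2.0)
print(shy > bold)  # True: the shyer male has the higher divorce probability
```

In the real analysis the coefficient on boldness would be estimated from the paired demographic and personality datasets, with the female's personality showing no comparable effect.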

    “Divorce does not happen very often,” Jenouvrier says. “But we found that the shyer a bird is, the more likely they are to divorce.”

    But why? In their study, the team puts forth an explanation, which ecologists call “forced divorce.” They point out that, in this particular population of wandering albatross, males far outnumber females and therefore are more likely to compete with each other for mates. Males that are already partnered up, therefore, may be faced with a third “intruder” — a male who is competing for a place in the pair.

    “When there is a third intruder that competes, shy birds could step away and give away their mates, where bolder individuals are aggressive and will guard their partner and secure their partnership,” Sun explains. “That’s why shyer individuals may have higher divorce rates.”

    The team is planning to extend their work to examine how the personality of individuals can affect how the larger population changes and evolves. 

    “Now we’re talking about a connection between personality and divorce at the individual level,” Sun says. “But we want to understand the impact at the population level.”

    This research was supported, in part, by the National Science Foundation.