More stories

  • Reducing pesticide use while increasing effectiveness

    Farming can be a low-margin, high-risk business, subject to weather and climate patterns, insect population cycles, and other unpredictable factors. Farmers need to be savvy managers of the many resources they deal with, and chemical fertilizers and pesticides are among their major recurring expenses.

    Despite the importance of these chemicals, a lack of technology that monitors and optimizes sprays has forced farmers to rely on personal experience and rules of thumb to decide how to apply these chemicals. As a result, these chemicals tend to be over-sprayed, leading to their runoff into waterways and buildup in the soil.

    That could change, thanks to a new approach of feedback-optimized spraying, invented by AgZen, an MIT spinout founded in 2020 by Professor Kripa Varanasi and Vishnu Jayaprakash SM ’19, PhD ’22.

    AgZen has developed a system for farming that can monitor exactly how much of the sprayed chemicals adheres to plants, in real time, as the sprayer drives through a field. Built-in software running on a tablet shows the operator exactly how much of each leaf has been covered by the spray.

    Over the past decade, AgZen’s founders have developed products and technologies to control the interactions of droplets and sprays with plant surfaces. The Boston-based venture-backed company launched a new commercial product in 2024 and is currently piloting another related product. Field tests of both have shown the products can help farmers spray more efficiently and effectively, using fewer chemicals overall.

    “Worldwide, farms spend approximately $60 billion a year on pesticides. Our objective is to reduce the number of pesticides sprayed and lighten the financial burden on farms without sacrificing effective pest management,” Varanasi says.

    Getting droplets to stick

    While the world pesticide market is growing rapidly, a lot of the pesticides sprayed don’t reach their target. A significant portion bounces off the plant surfaces, lands on the ground, and becomes part of the runoff that flows to streams and rivers, often causing serious pollution. Some of these pesticides can be carried away by wind over very long distances.

    “Drift, runoff, and poor application efficiency are well-known, longstanding problems in agriculture, but we can fix this by controlling and monitoring how sprayed droplets interact with leaves,” Varanasi says.

    With support from the MIT Tata Center and the Abdul Latif Jameel Water and Food Systems Lab, Varanasi and his team analyzed how droplets strike plant surfaces, and explored ways to increase application efficiency. This research led them to develop a novel system of nozzles that cloak droplets with compounds that enhance their retention on the leaves, a product they call EnhanceCoverage.

    Field studies across regions — from Massachusetts to California to Italy and France — showed that this droplet-optimization system could allow farmers to cut the amount of chemicals needed by more than half because more of the sprayed substances would stick to the leaves.

    Measuring coverage

    However, in trying to bring this technology to market, the researchers faced a sticky problem: Nobody knew how well pesticide sprays were adhering to the plants in the first place, so how could AgZen say that the coverage was better with its new EnhanceCoverage system?

    “I had grown up spraying with a backpack on a small farm in India, so I knew this was an issue,” Jayaprakash says. “When we spoke to growers, they told me how complicated spraying is when you’re on a large machine. Whenever you spray, there are so many things that can influence how effective your spray is. How fast do you drive the sprayer? What flow rate are you using for the chemicals? What chemical are you using? What’s the age of the plants, what’s the nozzle you’re using, what is the weather at the time? All these things influence agrochemical efficiency.”

    Agricultural spraying essentially comes down to dissolving a chemical in water and then spraying droplets onto the plants. “But the interaction between a droplet and the leaf is complex,” Varanasi says. “We were coming in with ways to optimize that, but what the growers told us is, hey, we’ve never even really looked at that in the first place.”

    Although farmers have been spraying agricultural chemicals on a large scale for about 80 years, they’ve “been forced to rely on general rules of thumb and pick all these interlinked parameters, based on what’s worked for them in the past. You pick a set of these parameters, you go spray, and you’re basically praying for outcomes in terms of how effective your pest control is,” Varanasi says.

    Before AgZen could sell farmers on the new system to improve droplet coverage, the company had to invent a way to measure precisely how much spray was adhering to plants in real-time.

    Comparing before and after

    The system they came up with, which they tested extensively on farms across the country last year, involves a unit that can be bolted onto the spraying arm of virtually any sprayer. It carries two sensor stacks, one just ahead of the sprayer nozzles and one behind. Then, built-in software running on a tablet shows the operator exactly how much of each leaf has been covered by the spray. It also computes how much those droplets will spread out or evaporate, leading to a precise estimate of the final coverage.

    “There’s a lot of physics that governs how droplets spread and evaporate, and this has been incorporated into software that a farmer can use,” Varanasi says. “We bring a lot of our expertise into understanding droplets on leaves. All these factors, like how temperature and humidity influence coverage, have always been nebulous in the spraying world. But now you have something that can be exact in determining how well your sprays are doing.”
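
    AgZen has not published the details of that model, but the general idea (start from the coverage measured just behind the nozzles, then adjust it for how the droplets subsequently spread and evaporate) can be illustrated with a toy calculation. Everything in the sketch below, from the spread factor to the temperature and humidity penalty, is a made-up placeholder rather than AgZen’s physics.

    ```python
    # Toy illustration only: correcting a measured spray coverage for droplet
    # spreading and evaporation. The functional forms and constants are
    # hypothetical placeholders, not AgZen's model.

    def estimated_final_coverage(measured_coverage, temp_c, rel_humidity):
        """Estimate final leaf coverage (0-1) from coverage measured just
        behind the nozzles, given air temperature (deg C) and relative
        humidity (0-1)."""
        spread_factor = 1.2  # assume droplets spread a little after landing
        # Assume hotter, drier air evaporates more of each droplet before it
        # finishes spreading (hypothetical linear penalty, clamped to [0, 1]).
        evap_loss = min(1.0, max(0.0, 0.01 * (temp_c - 20) + 0.3 * (1 - rel_humidity)))
        return min(1.0, measured_coverage * spread_factor * (1 - evap_loss))

    # Example: 35 percent measured coverage on a hot, fairly dry afternoon.
    print(estimated_final_coverage(0.35, temp_c=30, rel_humidity=0.4))
    ```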

    “We’re not only measuring coverage, but then we recommend how to act,” says Jayaprakash, who is AgZen’s CEO. “With the information we collect in real-time and by using AI, RealCoverage tells operators how to optimize everything on their sprayer, from which nozzle to use, to how fast to drive, to how many gallons of spray is best for a particular chemical mix on a particular acre of a crop.”

    The tool was developed to prove how much AgZen’s EnhanceCoverage nozzle system (which will be launched in 2025) improves coverage. But it turns out that monitoring and optimizing droplet coverage on leaves in real-time with this system can itself yield major improvements.

    “We worked with large commercial farms last year in specialty and row crops,” Jayaprakash says. “When we saved our pilot customers up to 50 percent of their chemical cost at a large scale, they were very surprised.” He says the tool has reduced chemical costs and volume in fallow field burndowns, weed control in soybeans, defoliation in cotton, and fungicide and insecticide sprays in vegetables and fruits. Along with data from commercial farms, field trials conducted by three leading agricultural universities have also validated these results.

    “Across the board, we were able to save between 30 and 50 percent on chemical costs and increase crop yields by enabling better pest control,” Jayaprakash says. “By focusing on the droplet-leaf interface, our product can help any foliage spray throughout the year, whereas most technological advancements in this space recently have been focused on reducing herbicide use alone.” The company now intends to lease the system across thousands of acres this year.

    And these efficiency gains can lead to significant returns at scale, he emphasizes: In the U.S., farmers currently spend $16 billion a year on chemicals, to protect about $200 billion of crop yields.

    The company launched its first product, the coverage optimization system called RealCoverage, this year, reaching a wide variety of farms with different crops and in different climates. “We’re going from proof-of-concept with pilots in large farms to a truly massive scale on a commercial basis with our lease-to-own program,” Jayaprakash says.

    “We’ve also been tapped by the USDA to help them evaluate practices to minimize pesticides in watersheds,” Varanasi says, noting that RealCoverage can also be useful for regulators, chemical companies, and agricultural equipment manufacturers.

    Once AgZen has proven the effectiveness of using coverage as a decision metric, and after the RealCoverage optimization system is widely in practice, the company will next roll out its second product, EnhanceCoverage, designed to maximize droplet adhesion. Because that system will require replacing all the nozzles on a sprayer, the researchers are doing pilots this year but will wait for a full rollout in 2025, after farmers have gained experience and confidence with their initial product.

    “There is so much wastage,” Varanasi says. “Yet farmers must spray to protect crops, and there is a lot of environmental impact from this. So, after all this work over the years, learning about how droplets stick to surfaces and so on, now the culmination of it in all these products for me is amazing, to see all this come alive, to see that we’ll finally be able to solve the problem we set out to solve and help farmers.”

  • Cutting carbon emissions on the US power grid

    To help curb climate change, the United States is working to reduce carbon emissions from all sectors of the energy economy. Much of the current effort involves electrification — switching to electric cars for transportation, electric heat pumps for home heating, and so on. But in the United States, the electric power sector already generates about a quarter of all carbon emissions. “Unless we decarbonize our electric power grids, we’ll just be shifting carbon emissions from one source to another,” says Amanda Farnsworth, a PhD candidate in chemical engineering and research assistant at the MIT Energy Initiative (MITEI).

    But decarbonizing the nation’s electric power grids will be challenging. The availability of renewable energy resources such as solar and wind varies in different regions of the country. Likewise, patterns of energy demand differ from region to region. As a result, the least-cost pathway to a decarbonized grid will differ from one region to another.

    Over the past two years, Farnsworth and Emre Gençer, a principal research scientist at MITEI, developed a power system model that would allow them to investigate the importance of regional differences — and would enable experts and laypeople alike to explore their own regions and make informed decisions about the best way to decarbonize. “With this modeling capability you can really understand regional resources and patterns of demand, and use them to do a ‘bespoke’ analysis of the least-cost approach to decarbonizing the grid in your particular region,” says Gençer.

    To demonstrate the model’s capabilities, Gençer and Farnsworth performed a series of case studies. Their analyses confirmed that strategies must be designed for specific regions and that all the costs and carbon emissions associated with manufacturing and installing solar and wind generators must be included for accurate accounting. But the analyses also yielded some unexpected insights, including a correlation between a region’s wind energy and the ease of decarbonizing, and the important role of nuclear power in decarbonizing the California grid.

    A novel model

    For many decades, researchers have been developing “capacity expansion models” to help electric utility planners tackle the problem of designing power grids that are efficient, reliable, and low-cost. More recently, many of those models also factor in the goal of reducing or eliminating carbon emissions. While those models can provide interesting insights relating to decarbonization, Gençer and Farnsworth believe they leave some gaps that need to be addressed.

    For example, most focus on conditions and needs in a single U.S. region without highlighting what makes that region distinctive. Hardly any consider the carbon emitted in fabricating and installing such “zero-carbon” technologies as wind turbines and solar panels. And finally, most of the models are challenging to use. Even experts in the field must search out and assemble various complex datasets in order to perform a study of interest.

    Gençer and Farnsworth’s capacity expansion model — called Ideal Grid, or IG — addresses those and other shortcomings. IG is built within the framework of MITEI’s Sustainable Energy System Analysis Modeling Environment (SESAME), an energy system modeling platform that Gençer and his colleagues at MITEI have been developing since 2017. SESAME models the levels of greenhouse gas emissions from multiple, interacting energy sectors in future scenarios.

    Importantly, SESAME includes both techno-economic analyses and life-cycle assessments of various electricity generation and storage technologies. It thus considers costs and emissions incurred at each stage of the life cycle (manufacture, installation, operation, and retirement) for all generators. Most capacity expansion models only account for emissions from operation of fossil fuel-powered generators. As Farnsworth notes, “While this is a good approximation for our current grid, emissions from the full life cycle of all generating technologies become non-negligible as we transition to a highly renewable grid.”

    Through its connection with SESAME, the IG model has access to data on costs and emissions associated with many technologies critical to power grid operation. To explore regional differences in the cost-optimized decarbonization strategies, the IG model also includes conditions within each region, notably details on demand profiles and resource availability.

    In one recent study, Gençer and Farnsworth selected nine of the standard North American Electric Reliability Corporation (NERC) regions. For each region, they incorporated hourly electricity demand into the IG model. Farnsworth also gathered meteorological data for the nine U.S. regions for seven years — 2007 to 2013 — and calculated hourly power output profiles for the renewable energy sources, including solar and wind, taking into account the geography-limited maximum capacity of each technology.

    The availability of wind and solar resources differs widely from region to region. To permit a quick comparison, the researchers use a measure called “annual capacity factor,” which is the ratio between the electricity produced by a generating unit in a year and the electricity that could have been produced if that unit operated continuously at full power for that year. Values for the capacity factors in the nine U.S. regions vary between 20 percent and 30 percent for solar power and between 25 percent and 45 percent for wind.
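
    As a concrete illustration of that metric (this is not code from the IG model), the annual capacity factor can be computed directly from an hourly output profile:

    ```python
    # Illustrative only: annual capacity factor computed from an hourly
    # output profile.

    def annual_capacity_factor(hourly_output_mw, nameplate_mw):
        """Energy actually produced divided by the energy the unit would
        produce running at full power for every hour of the period."""
        produced_mwh = sum(hourly_output_mw)                  # actual generation
        potential_mwh = nameplate_mw * len(hourly_output_mw)  # continuous full power
        return produced_mwh / potential_mwh

    # A 100 MW solar farm averaging 25 MW across the 8,760 hours of a year
    # has a capacity factor of 0.25.
    print(annual_capacity_factor([25.0] * 8760, nameplate_mw=100.0))
    ```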

    Calculating optimized grids for different regions

    For their first case study, Gençer and Farnsworth used the IG model to calculate cost-optimized regional grids to meet defined caps on carbon dioxide (CO2) emissions. The analyses were based on cost and emissions data for 10 technologies: nuclear, wind, solar, three types of natural gas, three types of coal, and energy storage using lithium-ion batteries. Hydroelectric was not considered in this study because there was no comprehensive study outlining potential expansion sites with their respective costs and expected power output levels.

    To make region-to-region comparisons easy, the researchers used several simplifying assumptions. Their focus was on electricity generation, so the model calculations assume the same transmission and distribution costs and efficiencies for all regions. Also, the calculations did not consider the generator fleet currently in place. The goal was to investigate what happens if each region were to start from scratch and generate an “ideal” grid.

    To begin, Gençer and Farnsworth calculated the most economic combination of technologies for each region if it limits its total carbon emissions to 100, 50, and 25 grams of CO2 per kilowatt-hour (kWh) generated. For context, the current U.S. average emissions intensity is 386 grams of CO2 emissions per kWh.

    Given the wide variation in regional demand, the researchers needed to use a new metric to normalize their results and permit a one-to-one comparison between regions. Accordingly, the model calculates the required generating capacity divided by the average demand for each region. The required capacity accounts for both the variation in demand and the inability of generating systems — particularly solar and wind — to operate at full capacity all of the time.
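
    A minimal sketch of that normalization, using invented numbers rather than the study’s data:

    ```python
    # Illustrative only: the normalization used to compare regions of
    # different sizes, namely required installed capacity divided by average
    # demand. All numbers are invented.

    def capacity_per_average_demand(installed_capacity_mw, hourly_demand_mw):
        average_demand_mw = sum(hourly_demand_mw) / len(hourly_demand_mw)
        return installed_capacity_mw / average_demand_mw

    # A region averaging 50 GW of demand that needs 150 GW of total generating
    # and storage capacity scores 3.0 on this metric.
    print(capacity_per_average_demand(150_000, [50_000] * 8760))
    ```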

    The analysis was based on regional demand data for 2021 — the most recent data available. And for each region, the model calculated the cost-optimized power grid seven times, using weather data from seven years. This discussion focuses on mean values for cost and total capacity installed and also total values for coal and for natural gas, although the analysis considered three separate technologies for each fuel.

    The results of the analyses confirm that there’s a wide variation in the cost-optimized system from one region to another. Most notable is that some regions require a lot of energy storage while others don’t require any at all. The availability of wind resources turns out to play an important role, while the use of nuclear is limited: the carbon intensity of nuclear (including uranium mining and transportation) is lower than that of either solar or wind, but nuclear is the most expensive technology option, so it’s added only when necessary. Finally, the change in the CO2 emissions cap brings some interesting responses.

    Under the most lenient limit on emissions — 100 grams of CO2 per kWh — there’s no coal in the mix anywhere. It’s the first to go, in general being replaced by the lower-carbon-emitting natural gas. Texas, Central, and North Central — the regions with the most wind — don’t need energy storage, while the other six regions do. The regions with the least wind — California and the Southwest — have the highest energy storage requirements. Unlike the other regions modeled, California begins installing nuclear, even at the most lenient limit.

    As the model plays out, under the moderate cap — 50 grams of CO2 per kWh — most regions bring in nuclear power. California and the Southeast — regions with low wind capacity factors — rely on nuclear the most. In contrast, wind-rich Texas, Central, and North Central don’t incorporate nuclear yet but instead add energy storage — a less-expensive option — to their mix. There’s still a bit of natural gas everywhere, in spite of its CO2 emissions.

    Under the most restrictive cap — 25 grams of CO2 per kWh — nuclear is in the mix everywhere. The highest use of nuclear is again correlated with low wind capacity factor. Central and North Central depend on nuclear the least. All regions continue to rely on a little natural gas to keep prices from skyrocketing due to the necessary but costly nuclear component. With nuclear in the mix, the need for storage declines in most regions.

    Results of the cost analysis are also interesting. Texas, Central, and North Central all have abundant wind resources, and they can delay incorporating the costly nuclear option, so the cost of their optimized system tends to be lower than costs for the other regions. In addition, their total capacity deployment — including all sources — tends to be lower than for the other regions. California and the Southwest both rely heavily on solar, and in both regions, costs and total deployment are relatively high.

    Lessons learned

    One unexpected result is the benefit of combining solar and wind resources. The problem with relying on solar alone is obvious: “Solar energy is available only five or six hours a day, so you need to build a lot of other generating sources and abundant storage capacity,” says Gençer. But an analysis of unit-by-unit operations at an hourly resolution yielded a less-intuitive trend: While solar installations only produce power in the midday hours, wind turbines generate the most power in the nighttime hours. As a result, solar and wind power are complementary. Having both resources available is far more valuable than having either one or the other. And having both impacts the need for storage, says Gençer: “Storage really plays a role either when you’re targeting a very low carbon intensity or where your resources are mostly solar and they’re not complemented by wind.”

    Gençer notes that the target for the U.S. electricity grid is to reach net zero by 2035. But the analysis showed that reaching just 100 grams of CO2 per kWh would require at least 50 percent of system capacity to be wind and solar. “And we’re nowhere near that yet,” he says.

    Indeed, Gençer and Farnsworth’s analysis doesn’t even include a zero emissions case. Why not? As Gençer says, “We cannot reach zero.” Wind and solar are usually considered to be net zero, but that’s not true. Wind, solar, and even storage have embedded carbon emissions due to materials, manufacturing, and so on. “To go to true net zero, you’d need negative emission technologies,” explains Gençer, referring to techniques that remove carbon from the air or ocean. That observation confirms the importance of performing life-cycle assessments.

    Farnsworth voices another concern: Coal quickly disappears in all regions because natural gas is an easy substitute for coal and has lower carbon emissions. “People say they’ve decreased their carbon emissions by a lot, but most have done it by transitioning from coal to natural gas power plants,” says Farnsworth. “But with that pathway for decarbonization, you hit a wall. Once you’ve transitioned from coal to natural gas, you’ve got to do something else. You need a new strategy — a new trajectory to actually reach your decarbonization target, which most likely will involve replacing the newly installed natural gas plants.”

    Gençer makes one final point: The availability of cheap nuclear — whether fission or fusion — would completely change the picture. When the tighter caps require the use of nuclear, the cost of electricity goes up. “The impact is quite significant,” says Gençer. “When we go from 100 grams down to 25 grams of CO2 per kWh, we see a 20 percent to 30 percent increase in the cost of electricity.” If it were available, a less-expensive nuclear option would likely be included in the technology mix under more lenient caps, significantly reducing the cost of decarbonizing power grids in all regions.

    The special case of California

    In another analysis, Gençer and Farnsworth took a closer look at California. In California, about 10 percent of total demand is now met with nuclear power. Yet its current nuclear plants are scheduled for retirement very soon, and a 1976 law forbids the construction of new nuclear plants. (The state recently extended the lifetime of one nuclear plant to prevent the grid from becoming unstable.) “California is very motivated to decarbonize their grid,” says Farnsworth. “So how difficult will that be without nuclear power?”

    To find out, the researchers performed a series of analyses to investigate the challenge of decarbonizing in California with nuclear power versus without it. At 200 grams of CO2 per kWh — about a 50 percent reduction — the optimized mix and cost look the same with and without nuclear. Nuclear doesn’t appear due to its high cost. At 100 grams of CO2 per kWh — about a 75 percent reduction — nuclear does appear in the cost-optimized system, reducing the total system capacity while having little impact on the cost.

    But at 50 grams of CO2 per kWh, the ban on nuclear makes a significant difference. “Without nuclear, there’s about a 45 percent increase in total system size, which is really quite substantial,” says Farnsworth. “It’s a vastly different system, and it’s more expensive.” Indeed, the cost of electricity would increase by 7 percent.

    Going one step further, the researchers performed an analysis to determine the most decarbonized system possible in California. Without nuclear, the state could reach 40 grams of CO2 per kWh. “But when you allow for nuclear, you can get all the way down to 16 grams of CO2 per kWh,” says Farnsworth. “We found that California needs nuclear more than any other region due to its poor wind resources.”

    Impacts of a carbon tax

    One more case study examined a policy approach to incentivizing decarbonization. Instead of imposing a ceiling on carbon emissions, this strategy would tax every ton of carbon that’s emitted. Proposed taxes range from zero to $100 per ton.

    To investigate the effectiveness of different levels of carbon tax, Farnsworth and Gençer used the IG model to calculate the minimum-cost system for each region, assuming a certain cost for emitting each ton of carbon. The analyses show that a low carbon tax — just $10 per ton — significantly reduces emissions in all regions by phasing out all coal generation. In the Northwest region, for example, a carbon tax of $10 per ton decreases system emissions by 65 percent while increasing system cost by just 2.8 percent (relative to an untaxed system).
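
    The IG model solves a full capacity-expansion optimization, but the basic mechanism, in which a carbon price raises the effective cost of high-emitting generators until cleaner options win out, can be shown with a toy comparison. The costs and emissions intensities below are rough placeholders chosen only so the ordering flips, not values from the study:

    ```python
    # Toy illustration of how a carbon tax shifts the cost ranking of
    # generators. The costs ($/MWh) and emissions intensities (tons CO2/MWh)
    # are rough placeholders, not values from the IG model.

    def cost_with_tax(cost_per_mwh, tons_co2_per_mwh, tax_per_ton):
        return cost_per_mwh + tons_co2_per_mwh * tax_per_ton

    GENERATORS = {                # (cost $/MWh, tons CO2/MWh), placeholders
        "coal":        (40.0, 1.0),
        "natural gas": (45.0, 0.4),
        "wind":        (50.0, 0.0),
    }

    for tax in (0, 10, 100):      # carbon tax in $/ton of CO2
        ranked = sorted(GENERATORS, key=lambda g: cost_with_tax(*GENERATORS[g], tax))
        print(f"${tax}/ton -> cheapest first: {ranked}")
    ```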

    After coal has been phased out of all regions, every increase in the carbon tax brings a slow but steady linear decrease in emissions and a linear increase in cost. But the rates of those changes vary from region to region. For example, the rate of decrease in emissions for each added tax dollar is far lower in the Central region than in the Northwest, largely due to the Central region’s already low emissions intensity without a carbon tax. Indeed, the Central region without a carbon tax has a lower emissions intensity than the Northwest region with a tax of $100 per ton.

    As Farnsworth summarizes, “A low carbon tax — just $10 per ton — is very effective in quickly incentivizing the replacement of coal with natural gas. After that, it really just incentivizes the replacement of natural gas technologies with more renewables and more energy storage.” She concludes, “If you’re looking to get rid of coal, I would recommend a carbon tax.”

    Future extensions of IG

    The researchers have already added hydroelectric to the generating options in the IG model, and they are now planning further extensions. For example, they will include additional regions for analysis, add other long-term energy storage options, and make changes that allow analyses to take into account the generating infrastructure that already exists. Also, they will use the model to examine the cost and value of interregional transmission to take advantage of the diversity of available renewable resources.

    Farnsworth emphasizes that the analyses reported here are just samples of what’s possible using the IG model. The model is a web-based tool that includes embedded data covering the whole United States, and the output from an analysis includes an easy-to-understand display of the required installations, hourly operation, and overall techno-economic analysis and life-cycle assessment results. “The user is able to go in and explore a vast number of scenarios with no data collection or pre-processing,” she says. “There’s no barrier to begin using the tool. You can just hop on and start exploring your options so you can make an informed decision about the best path forward.”

    This work was supported by the International Energy Agency Gas and Oil Technology Collaboration Program and the MIT Energy Initiative Low-Carbon Energy Centers.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • A new sensor detects harmful “forever chemicals” in drinking water

    MIT chemists have designed a sensor that detects tiny quantities of perfluoroalkyl and polyfluoroalkyl substances (PFAS) — chemicals found in food packaging, nonstick cookware, and many other consumer products.

    These compounds, also known as “forever chemicals” because they do not break down naturally, have been linked to a variety of harmful health effects, including cancer, reproductive problems, and disruption of the immune and endocrine systems.

    Using the new sensor technology, the researchers showed that they could detect PFAS levels as low as 200 parts per trillion in a water sample. The device they designed could offer a way for consumers to test their drinking water, and it could also be useful in industries that rely heavily on PFAS chemicals, including the manufacture of semiconductors and firefighting equipment.

    “There’s a real need for these sensing technologies. We’re stuck with these chemicals for a long time, so we need to be able to detect them and get rid of them,” says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT and the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences.

    Other authors of the paper are former MIT postdoc and lead author Sohyun Park and MIT graduate student Collette Gordon.

    Detecting PFAS

    Coatings containing PFAS chemicals are used in thousands of consumer products. In addition to nonstick coatings for cookware, they are also commonly used in water-repellent clothing, stain-resistant fabrics, grease-resistant pizza boxes, cosmetics, and firefighting foams.

    These fluorinated chemicals, which have been in widespread use since the 1950s, can be released into water, air, and soil from factories, sewage treatment plants, and landfills. They have been found in drinking water sources in all 50 states.

    In 2023, the Environmental Protection Agency created an “advisory health limit” for two of the most hazardous PFAS chemicals, known as perfluorooctanoic acid (PFOA) and perfluorooctyl sulfonate (PFOS). These advisories call for a limit of 0.004 parts per trillion for PFOA and 0.02 parts per trillion for PFOS in drinking water.

    Currently, the only way that a consumer could determine if their drinking water contains PFAS is to send a water sample to a laboratory that performs mass spectrometry testing. However, this process takes several weeks and costs hundreds of dollars.

    To create a cheaper and faster way to test for PFAS, the MIT team designed a sensor based on lateral flow technology — the same approach used for rapid Covid-19 tests and pregnancy tests. Instead of a test strip coated with antibodies, the new sensor is embedded with a special polymer known as polyaniline, which can switch between semiconducting and conducting states when protons are added to the material.

    The researchers deposited these polymers onto a strip of nitrocellulose paper and coated them with a surfactant that can pull fluorocarbons such as PFAS out of a drop of water placed on the strip. When this happens, protons from the PFAS are drawn into the polyaniline and turn it into a conductor, reducing the electrical resistance of the material. This change in resistance, which can be measured precisely using electrodes and sent to an external device such as a smartphone, gives a quantitative measurement of how much PFAS is present.
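
    The paper’s calibration procedure is not reproduced here, but the final read-out step, converting a measured change in resistance into a concentration, amounts to interpolating along a calibration curve. The sketch below uses invented calibration points purely for illustration:

    ```python
    # Minimal sketch of the read-out step: map a measured drop in electrical
    # resistance to a PFAS concentration by interpolating along a calibration
    # curve. The calibration points are invented; a real device would be
    # calibrated against lab-prepared standards.

    import bisect

    # Pairs of (fractional resistance drop, concentration in parts per trillion).
    CALIBRATION = [(0.00, 0.0), (0.05, 200.0), (0.12, 400.0), (0.30, 1000.0)]

    def concentration_ppt(resistance_drop):
        drops = [d for d, _ in CALIBRATION]
        i = bisect.bisect_left(drops, resistance_drop)
        if i == 0:
            return CALIBRATION[0][1]
        if i == len(CALIBRATION):
            return CALIBRATION[-1][1]
        (d0, c0), (d1, c1) = CALIBRATION[i - 1], CALIBRATION[i]
        return c0 + (c1 - c0) * (resistance_drop - d0) / (d1 - d0)

    print(concentration_ppt(0.085))  # about 300 ppt with these made-up points
    ```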

    This approach works only with PFAS that are acidic, which includes two of the most harmful PFAS — PFOA and perfluorobutanoic acid (PFBA).

    A user-friendly system

    The current version of the sensor can detect concentrations as low as 200 parts per trillion for PFBA, and 400 parts per trillion for PFOA. This is not quite low enough to meet the current EPA guidelines, but the sensor uses only a fraction of a milliliter of water. The researchers are now working on a larger-scale device that would be able to filter about a liter of water through a membrane made of polyaniline, and they believe this approach should increase the sensitivity by more than a hundredfold, with the goal of meeting the very low EPA advisory levels.

    “We do envision a user-friendly, household system,” Swager says. “You can imagine putting in a liter of water, letting it go through the membrane, and you have a device that measures the change in resistance of the membrane.”

    Such a device could offer a less expensive, rapid alternative to current PFAS detection methods. If PFAS are detected in drinking water, there are commercially available filters that can be used on household drinking water to reduce those levels. The new testing approach could also be useful for factories that manufacture products with PFAS chemicals, so they could test whether the water used in their manufacturing process is safe to release into the environment.

    The research was funded by an MIT School of Science Fellowship to Gordon, a Bose Research Grant, and a Fulbright Fellowship to Park.

  • Tests show high-temperature superconducting magnets are ready for fusion

    In the predawn hours of Sept. 5, 2021, engineers achieved a major milestone in the labs of MIT’s Plasma Science and Fusion Center (PSFC), when a new type of magnet, made from high-temperature superconducting material, achieved a world-record magnetic field strength of 20 tesla for a large-scale magnet. That’s the intensity needed to build a fusion power plant that is expected to produce a net output of power and potentially usher in an era of virtually limitless power production.

    The test was immediately declared a success, having met all the criteria established for the design of the new fusion device, dubbed SPARC, for which the magnets are the key enabling technology. Champagne corks popped as the weary team of experimenters, who had labored long and hard to make the achievement possible, celebrated their accomplishment.

    But that was far from the end of the process. Over the ensuing months, the team tore apart and inspected the components of the magnet, pored over and analyzed the data from hundreds of instruments that recorded details of the tests, and performed two additional test runs on the same magnet, ultimately pushing it to its breaking point in order to learn the details of any possible failure modes.

    All of this work has now culminated in a detailed report by researchers at PSFC and MIT spinout company Commonwealth Fusion Systems (CFS), published in a collection of six peer-reviewed papers in a special edition of the March issue of IEEE Transactions on Applied Superconductivity. Together, the papers describe the design and fabrication of the magnet and the diagnostic equipment needed to evaluate its performance, as well as the lessons learned from the process. Overall, the team found, the predictions and computer modeling were spot-on, verifying that the magnet’s unique design elements could serve as the foundation for a fusion power plant.

    Enabling practical fusion power

    The successful test of the magnet, says Hitachi America Professor of Engineering Dennis Whyte, who recently stepped down as director of the PSFC, was “the most important thing, in my opinion, in the last 30 years of fusion research.”

    Before the Sept. 5 demonstration, the best-available superconducting magnets were powerful enough to potentially achieve fusion energy — but only at sizes and costs that could never be practical or economically viable. Then, when the tests showed the practicality of such a strong magnet at a greatly reduced size, “overnight, it basically changed the cost per watt of a fusion reactor by a factor of almost 40 in one day,” Whyte says.

    “Now fusion has a chance,” Whyte adds. Tokamaks, the most widely used design for experimental fusion devices, “have a chance, in my opinion, of being economical because you’ve got a quantum change in your ability, with the known confinement physics rules, about being able to greatly reduce the size and the cost of objects that would make fusion possible.”

    The comprehensive data and analysis from the PSFC’s magnet test, as detailed in the six new papers, have demonstrated that plans for a new generation of fusion devices — the one designed by MIT and CFS, as well as similar designs by other commercial fusion companies — are built on a solid foundation in science.

    The superconducting breakthrough

    Fusion, the process of combining light atoms to form heavier ones, powers the sun and stars, but harnessing that process on Earth has proved to be a daunting challenge, with decades of hard work and many billions of dollars spent on experimental devices. The long-sought, but never yet achieved, goal is to build a fusion power plant that produces more energy than it consumes. Such a power plant could produce electricity without emitting greenhouse gases during operation, while generating very little radioactive waste. Fusion’s fuel, a form of hydrogen that can be derived from seawater, is virtually limitless.

    But to make it work requires compressing the fuel at extraordinarily high temperatures and pressures, and since no known material could withstand such temperatures, the fuel must be held in place by extremely powerful magnetic fields. Producing such strong fields requires superconducting magnets, but all previous fusion magnets have been made with a superconducting material that requires frigid temperatures of about 4 degrees above absolute zero (4 kelvins, or -270 degrees Celsius). In the last few years, a newer material nicknamed REBCO, for rare-earth barium copper oxide, was added to fusion magnets, and allows them to operate at 20 kelvins, a temperature that, despite being only 16 kelvins warmer, brings significant advantages in terms of material properties and practical engineering.

    Taking advantage of this new higher-temperature superconducting material was not just a matter of substituting it in existing magnet designs. Instead, “it was a rework from the ground up of almost all the principles that you use to build superconducting magnets,” Whyte says. The new REBCO material is “extraordinarily different than the previous generation of superconductors. You’re not just going to adapt and replace, you’re actually going to innovate from the ground up.” The new papers in Transactions on Applied Superconductivity describe the details of that redesign process, now that patent protection is in place.

    A key innovation: no insulation

    One of the dramatic innovations, which had many others in the field skeptical of its chances of success, was the elimination of insulation around the thin, flat ribbons of superconducting tape that formed the magnet. Like virtually all electrical wires, conventional superconducting magnets are fully protected by insulating material to prevent short-circuits between the wires. But in the new magnet, the tape was left completely bare; the engineers relied on REBCO’s much greater conductivity to keep the current flowing through the material.

    “When we started this project, in let’s say 2018, the technology of using high-temperature superconductors to build large-scale high-field magnets was in its infancy,” says Zach Hartwig, the Robert N. Noyce Career Development Professor in the Department of Nuclear Science and Engineering. Hartwig has a co-appointment at the PSFC and is the head of its engineering group, which led the magnet development project. “The state of the art was small benchtop experiments, not really representative of what it takes to build a full-size thing. Our magnet development project started at benchtop scale and ended up at full scale in a short amount of time,” he adds, noting that the team built a 20,000-pound magnet that produced a steady, even magnetic field of just over 20 tesla — far beyond any such field ever produced at large scale.

    “The standard way to build these magnets is you would wind the conductor and you have insulation between the windings, and you need insulation to deal with the high voltages that are generated during off-normal events such as a shutdown.” Eliminating the layers of insulation, he says, “has the advantage of being a low-voltage system. It greatly simplifies the fabrication processes and schedule.” It also leaves more room for other elements, such as more cooling or more structure for strength.

    The magnet assembly is a slightly smaller-scale version of the ones that will form the donut-shaped chamber of the SPARC fusion device now being built by CFS in Devens, Massachusetts. It consists of 16 plates, called pancakes, each bearing a spiral winding of the superconducting tape on one side and cooling channels for helium gas on the other.

    But the no-insulation design was considered risky, and a lot was riding on the test program. “This was the first magnet at any sufficient scale that really probed what is involved in designing and building and testing a magnet with this so-called no-insulation no-twist technology,” Hartwig says. “It was very much a surprise to the community when we announced that it was a no-insulation coil.”

    Pushing to the limit … and beyond

    The initial test, described in previous papers, proved that the design and manufacturing process not only worked but was highly stable — something that some researchers had doubted. The next two test runs, also performed in late 2021, then pushed the device to the limit by deliberately creating unstable conditions, including a complete shutoff of incoming power that can lead to a catastrophic overheating. Known as quenching, this is considered a worst-case scenario for the operation of such magnets, with the potential to destroy the equipment.

    Part of the mission of the test program, Hartwig says, was “to actually go off and intentionally quench a full-scale magnet, so that we can get the critical data at the right scale and the right conditions to advance the science, to validate the design codes, and then to take the magnet apart and see what went wrong, why did it go wrong, and how do we take the next iteration toward fixing that. … It was a very successful test.”

    That final test, which ended with the melting of one corner of one of the 16 pancakes, produced a wealth of new information, Hartwig says. For one thing, they had been using several different computational models to design and predict various aspects of the magnet’s performance, and for the most part, the models agreed in their overall predictions and were well-validated by the series of tests and real-world measurements. But in predicting the effect of the quench, the model predictions diverged, so it was necessary to get the experimental data to evaluate the models’ validity.

    “The highest-fidelity models that we had predicted almost exactly how the magnet would warm up, to what degree it would warm up as it started to quench, and where the resulting damage to the magnet would be,” he says. As described in detail in one of the new reports, “That test actually told us exactly the physics that was going on, and it told us which models were useful going forward and which to leave by the wayside because they’re not right.”

    Whyte says, “Basically we did the worst thing possible to a coil, on purpose, after we had tested all other aspects of the coil performance. And we found that most of the coil survived with no damage,” while one isolated area sustained some melting. “It’s like a few percent of the volume of the coil that got damaged.” And that led to revisions in the design that are expected to prevent such damage in the actual fusion device magnets, even under the most extreme conditions.

    Hartwig emphasizes that a major reason the team was able to accomplish such a radical new record-setting magnet design, and get it right the very first time and on a breakneck schedule, was thanks to the deep level of knowledge, expertise, and equipment accumulated over decades of operation of the Alcator C-Mod tokamak, the Francis Bitter Magnet Laboratory, and other work carried out at PSFC. “This goes to the heart of the institutional capabilities of a place like this,” he says. “We had the capability, the infrastructure, and the space and the people to do these things under one roof.”

    The collaboration with CFS was also key, he says, with MIT and CFS combining the most powerful aspects of an academic institution and private company to do things together that neither could have done on their own. “For example, one of the major contributions from CFS was leveraging the power of a private company to establish and scale up a supply chain at an unprecedented level and timeline for the most critical material in the project: 300 kilometers (186 miles) of high-temperature superconductor, which was procured with rigorous quality control in under a year, and integrated on schedule into the magnet.”

    The integration of the two teams, those from MIT and those from CFS, also was crucial to the success, he says. “We thought of ourselves as one team, and that made it possible to do what we did.”

  • With just a little electricity, MIT researchers boost common catalytic reactions

    A simple technique that uses small amounts of energy could boost the efficiency of some key chemical processing reactions, by up to a factor of 100,000, MIT researchers report. These reactions are at the heart of petrochemical processing, pharmaceutical manufacturing, and many other industrial chemical processes.

    The surprising findings are reported today in the journal Science, in a paper by MIT graduate student Karl Westendorff, professors Yogesh Surendranath and Yuriy Roman-Leshkov, and two others.

    “The results are really striking,” says Surendranath, a professor of chemistry and chemical engineering. Rate increases of that magnitude have been seen before but in a different class of catalytic reactions known as redox half-reactions, which involve the gain or loss of an electron. The dramatically increased rates reported in the new study “have never been observed for reactions that don’t involve oxidation or reduction,” he says.

    The non-redox chemical reactions studied by the MIT team are catalyzed by acids. “If you’re a first-year chemistry student, probably the first type of catalyst you learn about is an acid catalyst,” Surendranath says. There are many hundreds of such acid-catalyzed reactions, “and they’re super important in everything from processing petrochemical feedstocks to making commodity chemicals to doing transformations in pharmaceutical products. The list goes on and on.”

    “These reactions are key to making many products we use daily,” adds Roman-Leshkov, a professor of chemical engineering and chemistry.

    But the people who study redox half-reactions, also known as electrochemical reactions, are part of an entirely different research community than those studying non-redox chemical reactions, known as thermochemical reactions. As a result, even though the technique used in the new study, which involves applying a small external voltage, was well-known in the electrochemical research community, it had not been systematically applied to acid-catalyzed thermochemical reactions.

    People working on thermochemical catalysis, Surendranath says, “usually don’t consider” the role of the electrochemical potential at the catalyst surface, “and they often don’t have good ways of measuring it. And what this study tells us is that relatively small changes, on the order of a few hundred millivolts, can have huge impacts — orders of magnitude changes in the rates of catalyzed reactions at those surfaces.”

    “This overlooked parameter of surface potential is something we should pay a lot of attention to because it can have a really, really outsized effect,” he says. “It changes the paradigm of how we think about catalysis.”

    Chemists traditionally think about surface catalysis based on the chemical binding energy of molecules to active sites on the surface, which influences the amount of energy needed for the reaction, he says. But the new findings show that the electrostatic environment is “equally important in defining the rate of the reaction.”

    The team has already filed a provisional patent application on parts of the process and is working on ways to apply the findings to specific chemical processes. Westendorff says their findings suggest that “we should design and develop different types of reactors to take advantage of this sort of strategy. And we’re working right now on scaling up these systems.”

    While their experiments so far were done with a two-dimensional planar electrode, most industrial reactions are run in three-dimensional vessels filled with powders. Catalysts are distributed through those powders, providing a lot more surface area for the reactions to take place. “We’re looking at how catalysis is currently done in industry and how we can design systems that take advantage of the already existing infrastructure,” Westendorff says.

    Surendranath adds that these new findings “raise tantalizing possibilities: Is this a more general phenomenon? Does electrochemical potential play a key role in other reaction classes as well? In our mind, this reshapes how we think about designing catalysts and promoting their reactivity.”

    Roman-Leshkov adds that “traditionally people who work in thermochemical catalysis would not associate these reactions with electrochemical processes at all. However, introducing this perspective to the community will redefine how we can integrate electrochemical characteristics into thermochemical catalysis. It will have a big impact on the community in general.”

    While there has typically been little interaction between electrochemical and thermochemical catalysis researchers, Surendranath says, “this study shows the community that there’s really a blurring of the line between the two, and that there is a huge opportunity in cross-fertilization between these two communities.”

    Westendorff adds that to make it work, “you have to design a system that’s pretty unconventional to either community to isolate this effect.” And that helps explain why such a dramatic effect had never been seen before. He notes that even their paper’s editor asked them why this effect hadn’t been reported before. The answer has to do with “how disparate those two ideologies were before this,” he says. “It’s not just that people don’t really talk to each other. There are deep methodological differences between how the two communities conduct experiments. And this work is really, we think, a great step toward bridging the two.”

    In practice, the findings could lead to far more efficient production of a wide variety of chemical materials, the team says. “You get orders of magnitude changes in rate with very little energy input,” Surendranath says. “That’s what’s amazing about it.”

    The findings, he says, “build a more holistic picture of how catalytic reactions at interfaces work, irrespective of whether you’re going to bin them into the category of electrochemical reactions or thermochemical reactions.” He adds that “it’s rare that you find something that could really revise our foundational understanding of surface catalytic reactions in general. We’re very excited.”

    “This research is of the highest quality,” says Costas Vayenas, a professor of engineering at the University of Patras, in Greece, who was not associated with the study. The work “is very promising for practical applications, particularly since it extends previous related work in redox catalytic systems,” he says.

    The team included MIT postdoc Max Hulsey PhD ’22 and graduate student Thejas Wesley PhD ’23, and was supported by the Air Force Office of Scientific Research and the U.S. Department of Energy Basic Energy Sciences.

  • MIT researchers remotely map crops, field by field

    Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. But getting accurate maps of the types of crops that are grown from farm to farm often requires on-the-ground surveys that only a handful of countries have the resources to maintain.

    Now, MIT engineers have developed a method to quickly and accurately label and map crop types without requiring in-person assessments of every single farm. The team’s method uses a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, from one fraction of an acre to the next. 

    The researchers used the technique to automatically generate the first nationwide crop map of Thailand — a smallholder country where small, independent farms make up the predominant form of agriculture. The team created a border-to-border map of Thailand’s four major crops — rice, cassava, sugarcane, and maize — and determined which of the four types was grown, at every 10 meters, and without gaps, across the entire country. The resulting map achieved an accuracy of 93 percent, which the researchers say is comparable to on-the-ground mapping efforts in high-income, big-farm countries.

    The team is applying their mapping technique to other countries such as India, where small farms sustain most of the population but the type of crops grown from farm to farm has historically been poorly recorded.

    “It’s a longstanding gap in knowledge about what is grown around the world,” says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering, and the Institute for Data, Systems, and Society (IDSS). “The final goal is to understand agricultural outcomes like yield, and how to farm more sustainably. One of the key preliminary steps is to map what is even being grown — the more granularly you can map, the more questions you can answer.”

    Wang, along with MIT graduate student Jordi Laguarta Soler and Thomas Friedel of the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

    Ground truth

    Smallholder farms are often run by a single family or farmer, who subsist on the crops and livestock that they raise. It’s estimated that smallholder farms support two-thirds of the world’s rural population and produce 80 percent of the world’s food. Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But the majority of these small farms are in low to middle-income countries, where few resources are devoted to keeping track of individual farms’ crop types and yields.

    Crop mapping efforts are mainly carried out in high-income regions such as the United States and Europe, where government agricultural agencies oversee crop surveys and send assessors to farms to label crops from field to field. These “ground truth” labels are then fed into machine-learning models that make connections between the ground labels of actual crops and satellite signals of the same fields. They then label and map wider swaths of farmland that assessors don’t cover but that satellites automatically do.

    “What’s lacking in low- and middle-income countries is this ground label that we can associate with satellite signals,” Laguarta Soler says. “Getting these ground truths to train a model in the first place has been limited in most of the world.”

    The team realized that, while many developing countries do not have the resources to maintain crop surveys, they could potentially use another source of ground data: roadside imagery, captured by services such as Google Street View and Mapillary, which send cars throughout a region to take continuous 360-degree images with dashcams and rooftop cameras.

    In recent years, such services have expanded into low- and middle-income countries. While the goal of these services is not specifically to capture images of crops, the MIT team saw that they could search the roadside images to identify crops.

    Cropped image

    In their new study, the researchers worked with Google Street View (GSV) images taken throughout Thailand — a country that the service has recently imaged fairly thoroughly, and which consists predominantly of smallholder farms.

    Starting with over 200,000 GSV images randomly sampled across Thailand, the team filtered out images that depicted buildings, trees, and general vegetation. About 81,000 images were crop-related. They set aside 2,000 of these, which they sent to an agronomist, who determined and labeled each crop type by eye. They then trained a convolutional neural network to automatically generate crop labels for the other 79,000 images, using various training methods, including iNaturalist, a web-based crowdsourced biodiversity database, and GPT-4V, a “multimodal large language model” that enables a user to input an image and ask the model to identify what the image is depicting. For each of the 81,000 images, the model generated a label of one of four crops that the image was likely depicting — rice, maize, sugarcane, or cassava.

    The researchers then paired each labeled image with the corresponding satellite data taken of the same location throughout a single growing season. These satellite data include measurements across multiple wavelengths, such as a location’s greenness and its reflectivity (which can be a sign of water). 

    “Each type of crop has a certain signature across these different bands, which changes throughout a growing season,” Laguarta Soler notes.

    The team trained a second model to make associations between a location’s satellite data and its corresponding crop label. They then used this model to process satellite data from the rest of the country, where crop labels were not available. From the associations the model learned, it assigned crop labels across Thailand, generating a country-wide map of crop types at a resolution of 10 meters.
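
    A minimal sketch of this second stage, assuming each labeled location comes with a flattened seasonal satellite signature, might look like the following. The article does not specify the model, so a random forest stands in, and the file names, array shapes, and held-out “gold standard” check are hypothetical.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# X: (n_locations, n_bands * n_timesteps) flattened seasonal signatures
# y: crop label inferred from the corresponding roadside image
X_train = np.load("satellite_features_train.npy")
y_train = np.load("crop_labels_train.npy")

clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(X_train, y_train)

# Apply the classifier to every pixel's seasonal signature to build a country-wide map
X_all = np.load("satellite_features_all_pixels.npy")
crop_map = clf.predict(X_all)

# Check agreement against the held-out, expert-labeled locations
X_gold = np.load("satellite_features_gold.npy")
y_gold = np.load("expert_labels_gold.npy")
print("agreement with expert labels:", accuracy_score(y_gold, clf.predict(X_gold)))
```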

    This first-of-its-kind crop map included the locations of the 2,000 GSV images that the researchers had originally set aside and that were labeled by the agronomist. These human-labeled images were used to validate the map: when the team checked the map’s labels against these expert, “gold standard” labels, they matched 93 percent of the time.

    “In the U.S., we’re also looking at over 90 percent accuracy, whereas with previous work in India, we’ve only seen 75 percent because ground labels are limited,” Wang says. “Now we can create these labels in a cheap and automated way.”

    The researchers are moving to map crops across India, where roadside images via Google Street View and other services have recently become available.

    “There are over 150 million smallholder farmers in India,” Wang says. “India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it’s been very difficult to create maps of India because there are very sparse ground labels.”

    The team is working to generate crop maps in India, which could help inform policies for assessing and bolstering yields as global temperatures and populations rise.

    “What would be interesting would be to create these maps over time,” Wang says. “Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies.”

  • in

    Study measures the psychological toll of wildfires

    Wildfires in Southeast Asia significantly affect people’s moods, especially if the fires originate outside a person’s own country, according to a new study.

    The study, which measures sentiment by analyzing large amounts of social media data, helps show the psychological toll of wildfires that result in substantial air pollution, at a time when such fires are becoming a high-profile marker of climate change.  

    “It has a substantial negative impact on people’s subjective well-being,” says Siqi Zheng, an MIT professor and co-author of a new paper detailing the results. “This is a big effect.”

    The magnitude of the effect is about the same as another shift uncovered through large-scale studies of sentiment expressed online: When the weekend ends and the work week starts, people’s online postings reflect a sharp drop in mood. The new study finds that daily exposure to typical wildfire smoke levels in the region produces an equivalently large change in sentiment.

    “People feel anxious or sad when they have to go to work on Monday, and what we find with the fires is that this is, in fact, comparable to a Sunday-to-Monday sentiment drop,” says co-author Rui Du, a former MIT postdoc who is now an economist at Oklahoma State University.

    The paper, “Transboundary Vegetation Fire Smoke and Expressed Sentiment: Evidence from Twitter,” has been published online in the Journal of Environmental Economics and Management.

    The authors are Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability in the Center for Real Estate and the Department of Urban Studies and Planning at MIT; Du, an assistant professor of economics at Oklahoma State University’s Spears School of Business; Ajkel Mino, of the Department of Data Science and Knowledge Engineering at Maastricht University; and Jianghao Wang, of the Institute of Geographic Sciences and Natural Resources Research at the Chinese Academy of Sciences.

    The research is based on an examination of the events of 2019 in Southeast Asia, in which a huge series of Indonesian wildfires, seemingly related to climate change and deforestation for the palm oil industry, produced a massive amount of haze in the region. The air-quality problems affected seven countries: Brunei, Indonesia, Malaysia, Philippines, Singapore, Thailand, and Vietnam.

    To conduct the study, the scholars produced a large-scale analysis of postings from 2019 on X (formerly known as Twitter) to sample public sentiment. The study involved 1,270,927 tweets from 378,300 users who agreed to have their locations made available. The researchers compiled the data with a web crawler program and multilingual natural language processing applications that review the content of tweets and rate them in affective terms based on the vocabulary used. They also used satellite data from NASA and NOAA to create a map of wildfires and haze over time, linking that to the social media data.
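
    Assuming each tweet has already been assigned a sentiment score by such a tool, the linking step can be sketched as building a location-day panel and attaching the satellite-derived smoke exposure. The file and column names below are hypothetical, and the sketch does not reproduce the authors’ crawler or NLP pipeline.

```python
import pandas as pd

# Hypothetical inputs: tweets already scored by a multilingual sentiment tool,
# and a satellite-derived smoke-exposure record for each location and day.
tweets = pd.read_csv("scored_tweets_2019.csv")   # columns: location, date, sentiment
smoke = pd.read_csv("satellite_smoke_2019.csv")  # columns: location, date, smoke_pm25

# Average expressed sentiment per location per day
panel = tweets.groupby(["location", "date"], as_index=False)["sentiment"].mean()

# Attach that day's smoke exposure, yielding the panel used for estimation
panel = panel.merge(smoke, on=["location", "date"], how="inner")
panel.to_csv("sentiment_smoke_panel.csv", index=False)
```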

    This method offers an advantage that regular public-opinion polling does not: it yields a measurement of mood that is effectively a real-time metric rather than an after-the-fact assessment. Moreover, substantial wind shifts in the region in 2019 essentially randomized which countries were exposed to more haze at various points, making the results less likely to be influenced by other factors.

    The researchers also took care to disentangle the sentiment change due to wildfire smoke from changes due to other factors. After all, people experience mood changes all the time in response to natural and socioeconomic events, and wildfires may be correlated with some of them, which makes it hard to tease out the singular effect of the smoke. By comparing only the difference in exposure to wind-blown wildfire smoke within the same locations over time, the study isolates the impact of local wildfire haze on mood, filtering out nonpollution influences.
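
    In spirit, that comparison is a two-way fixed-effects regression: sentiment is modeled as a function of smoke exposure while location and date terms absorb everything constant within a place or common to a day. The sketch below illustrates the idea with hypothetical column names; it is not the paper’s exact specification.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Panel built in the previous sketch: one row per location per day
panel = pd.read_csv("sentiment_smoke_panel.csv")

# Location and date dummies absorb time-invariant and region-wide factors,
# so the smoke coefficient reflects within-location, day-to-day variation.
fit = smf.ols(
    "sentiment ~ smoke_pm25 + C(location) + C(date)",
    data=panel,
).fit(cov_type="cluster", cov_kwds={"groups": panel["location"]})

print(fit.params["smoke_pm25"])  # estimated effect of smoke exposure on expressed mood
```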

    “What we are seeing from our estimates is really just the pure causal effect of the transboundary wildfire smoke,” Du says.

    The study also revealed that people living near international borders are much more likely to be upset when affected by wildfire smoke that comes from a neighboring country. When similar conditions originate in their own country, there is a considerably more muted reaction.

    “Notably, individuals do not seem to respond to domestically produced fire plumes,” the authors write in the paper. The small size of many countries in the region, coupled with a fire-prone climate, makes this an ongoing source of concern, however.

    “In Southeast Asia this is really a big problem, with small countries clustered together,” Zheng observes.

    Zheng also co-authored a 2022 study using a related methodology to study the impact of the Covid-19 pandemic on the moods of residents in about 100 countries. In that case, the research showed that the global pandemic depressed sentiment about 4.7 times as much as the normal Sunday-to-Monday shift.

    “There was a huge toll of Covid on people’s sentiment, and while the impact of the wildfires was about one-fifth of Covid, that’s still quite large,” Du says.

    In policy terms, Zheng suggests that the global implications of cross-border smoke pollution could give countries a shared incentive to cooperate further. If one country’s fires become another country’s problem, they may all have reason to limit them. Scientists warn of a rising number of wildfires globally, fueled by climate change conditions in which more fires can proliferate, posing a persistent threat across societies.

    “If they don’t work on this collaboratively, it could be damaging to everyone,” Zheng says.

    The research at MIT was supported, in part, by the MIT Sustainable Urbanization Lab. Jianghao Wang was supported by the National Natural Science Foundation of China.

  • in

    Study: Global deforestation leads to more mercury pollution

    About 10 percent of human-made mercury emissions into the atmosphere each year are the result of global deforestation, according to a new MIT study.

    The world’s vegetation, from the Amazon rainforest to the savannahs of sub-Saharan Africa, acts as a sink that removes the toxic pollutant from the air. However, if the current rate of deforestation remains unchanged or accelerates, the researchers estimate that net mercury emissions will keep increasing.

    “We’ve been overlooking a significant source of mercury, especially in tropical regions,” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

    The researchers’ model shows that the Amazon rainforest plays a particularly important role as a mercury sink, contributing about 30 percent of the global land sink. Curbing Amazon deforestation could thus have a substantial impact on reducing mercury pollution.

    The team also estimates that global reforestation efforts could increase annual mercury uptake by about 5 percent. While this is significant, the researchers emphasize that reforestation alone should not be a substitute for worldwide pollution control efforts.

    “Countries have put a lot of effort into reducing mercury emissions, especially northern industrialized countries, and for very good reason. But 10 percent of the global anthropogenic source is substantial, and there is a potential for that to be even greater in the future. [Addressing these deforestation-related emissions] needs to be part of the solution,” says senior author Noelle Selin, a professor in IDSS and MIT’s Department of Earth, Atmospheric and Planetary Sciences.

    Feinberg and Selin are joined on the paper by co-authors Martin Jiskra, a former Swiss National Science Foundation Ambizione Fellow at the University of Basel; Pasquale Borrelli, a professor at Roma Tre University in Italy; and Jagannath Biswakarma, a postdoc at the Swiss Federal Institute of Aquatic Science and Technology. The paper appears today in Environmental Science and Technology.

    Modeling mercury

    Over the past few decades, scientists have generally focused on studying deforestation as a source of global carbon dioxide emissions. Mercury, a trace element, hasn’t received the same attention, partly because the terrestrial biosphere’s role in the global mercury cycle has only recently been better quantified.

    Plant leaves take up mercury from the atmosphere, in a similar way as they take up carbon dioxide. But unlike carbon dioxide, mercury doesn’t play an essential biological function for plants. Mercury largely stays within a leaf until it falls to the forest floor, where the mercury is absorbed by the soil.

    Mercury becomes a serious concern for humans if it ends up in water bodies, where it can become methylated by microorganisms. Methylmercury, a potent neurotoxin, can be taken up by fish and bioaccumulated through the food chain. This can lead to risky levels of methylmercury in the fish humans eat.

    “In soils, mercury is much more tightly bound than it would be if it were deposited in the ocean. The forests are doing a sort of ecosystem service, in that they are sequestering mercury for longer timescales,” says Feinberg, who is now a postdoc in the Blas Cabrera Institute of Physical Chemistry in Spain.

    In this way, forests reduce the amount of toxic methylmercury in oceans.

    Many studies of mercury focus on industrial sources, like burning fossil fuels, small-scale gold mining, and metal smelting. A global treaty, the 2013 Minamata Convention, calls on nations to reduce human-made emissions. However, it doesn’t directly consider impacts of deforestation.

    The researchers launched their study to fill in that missing piece.

    In past work, they had built a model to probe the role vegetation plays in mercury uptake. Using a series of land use change scenarios, they adjusted the model to quantify the role of deforestation.

    Evaluating emissions

    This chemical transport model tracks mercury from its emissions sources to where it is chemically transformed in the atmosphere and then ultimately to where it is deposited, mainly through rainfall or uptake into forest ecosystems.

    They divided the Earth into eight regions and performed simulations to calculate deforestation emissions factors for each, considering elements like type and density of vegetation, mercury content in soils, and historical land use.

    However, good data for some regions were hard to come by.

    They lacked measurements from tropical Africa and Southeast Asia, two regions that experience heavy deforestation. To get around this gap, they used simpler, offline models to simulate hundreds of scenarios, which helped them refine their estimates of the associated uncertainties.

    They also developed a new formulation for mercury emissions from soil. This formulation captures the fact that deforestation reduces leaf area, which increases the amount of sunlight that hits the ground and accelerates the outgassing of mercury from soils.
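
    One simple way to picture such a formulation is to let the sunlight reaching the soil decay exponentially with leaf area, so that clearing the canopy raises the soil’s mercury flux. The function below is an illustrative stand-in with assumed parameter values, not the study’s actual parameterization.

```python
import math

def soil_hg_flux(base_flux: float, lai: float, k: float = 0.5) -> float:
    """Soil mercury emission scaled by the sunlight reaching the ground.

    base_flux : flux from fully exposed soil (e.g., ng per m^2 per hour)
    lai       : leaf area index of the canopy above the soil
    k         : assumed canopy light-extinction coefficient
    """
    ground_light_fraction = math.exp(-k * lai)  # Beer-Lambert-style attenuation
    return base_flux * ground_light_fraction

# Clearing the canopy (lower LAI) lets in more light and raises the flux:
print(soil_hg_flux(base_flux=2.0, lai=5.0))   # intact forest: ~0.16
print(soil_hg_flux(base_flux=2.0, lai=0.5))   # cleared land:  ~1.56
```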

    The model divides the world into grid squares, each of which is a few hundred square kilometers. By changing land surface and vegetation parameters in certain squares to represent deforestation and reforestation scenarios, the researchers can capture impacts on the mercury cycle.
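
    A toy version of that scenario logic: perturb the vegetation in a set of cells to mimic deforestation, recompute each cell’s net mercury exchange, and difference it against the baseline. The cell attributes, uptake rule, and numbers below are invented for illustration and are far simpler than the model’s real chemistry and transport.

```python
from dataclasses import dataclass

@dataclass
class Cell:
    lai: float              # leaf area index of the cell's vegetation
    uptake_per_lai: float   # assumed Hg uptake (Mg/yr) per unit of LAI
    soil_emission: float    # assumed Hg outgassing from soil (Mg/yr)

def net_emission(cell: Cell) -> float:
    # Positive values are a net source to the atmosphere, negative a net sink
    return cell.soil_emission - cell.uptake_per_lai * cell.lai

# Baseline: ten intact forest cells acting as net sinks
baseline = [Cell(lai=5.0, uptake_per_lai=0.02, soil_emission=0.03) for _ in range(10)]

# Deforestation scenario: canopy removed, soil outgassing roughly doubles
scenario = [Cell(lai=0.5, uptake_per_lai=c.uptake_per_lai, soil_emission=c.soil_emission * 2.0)
            for c in baseline]

delta = sum(net_emission(s) - net_emission(b) for b, s in zip(baseline, scenario))
print(f"Extra net Hg emission from deforesting these cells: {delta:.2f} Mg/yr")
```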

    Overall, they found that about 200 tons of mercury are emitted to the atmosphere each year as a result of deforestation, or about 10 percent of total human-made emissions. But in tropical and sub-tropical countries, deforestation emissions represent a higher percentage of total emissions. In Brazil, for example, deforestation accounts for 40 percent of total human-made emissions.

    In addition, people often light fires to prepare tropical forested areas for agricultural activities, which causes more emissions by releasing mercury stored by vegetation.

    “If deforestation was a country, it would be the second highest emitting country, after China, which emits around 500 tons of mercury a year,” Feinberg adds.

    And as the Minamata Convention drives down primary mercury emissions, scientists expect deforestation to become a larger fraction of human-made emissions in the future.

    “Policies to protect forests or cut them down have unintended effects beyond their target. It is important to consider the fact that these are systems, and they involve human activities, and we need to understand them better in order to actually solve the problems that we know are out there,” Selin says.

    By providing this first estimate, the team hopes to inspire more research in this area.

    In the future, they want to incorporate more dynamic Earth system models into their analysis, which would enable them to interactively track mercury uptake and better model the timescale of vegetation regrowth.

    “This paper represents an important advance in our understanding of global mercury cycling by quantifying a pathway that has long been suggested but not yet quantified. Much of our research to date has focused on primary anthropogenic emissions — those directly resulting from human activity via coal combustion or mercury-gold amalgam burning in artisanal and small-scale gold mining,” says Jackie Gerson, an assistant professor in the Department of Earth and Environmental Sciences at Michigan State University, who was not involved with this research. “This research shows that deforestation can also result in substantial mercury emissions and needs to be considered both in terms of global mercury models and land management policies. It therefore has the potential to advance our field scientifically as well as to promote policies that reduce mercury emissions via deforestation.”

    This work was funded, in part, by the U.S. National Science Foundation, the Swiss National Science Foundation, and the Swiss Federal Institute of Aquatic Science and Technology.