More stories

  • Understanding the impacts of mining on local environments and communities

    Hydrosocial displacement refers to the idea that resolving water conflict in one area can shift the conflict to a different area. The concept was coined by Scott Odell, a visiting researcher in MIT’s Environmental Solutions Initiative (ESI). As part of ESI’s Program on Mining and the Circular Economy, Odell researches the impacts of extractive industries on local environments and communities, especially in Latin America. He has found that hydrosocial displacement often occurs in regions where the mining industry is vying for use of precious water sources that are already stressed by climate change.

    Odell is working with John Fernández, ESI director and professor in the Department of Architecture, on a project examining the converging impacts of climate change, mining, and agriculture in Chile. The work is funded by a seed grant from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Specifically, the project seeks to answer how the expansion of seawater desalination by the mining industry is affecting local populations, and how climate change and mining affect Andean glaciers and the agricultural communities dependent upon them.

    By working with communities in mining areas, Odell and Fernández are gaining a sense of the burden that mining minerals needed for the clean energy transition places on local populations, and of the types of conflicts that arise when water sources become polluted or scarce. This work is particularly important considering that over 100 countries pledged a commitment to the clean energy transition at the recent United Nations climate change conference, known as COP28.

    Video: J-WAFS Community Spotlight on Scott Odell

    Water, humanity’s lifeblood

    At the March 2023 United Nations (U.N.) Water Conference in New York, U.N. Secretary-General António Guterres warned that “water is in deep trouble. We are draining humanity’s lifeblood through vampiric overconsumption and unsustainable use and evaporating it through global heating.” A quarter of the world’s population already faces “extremely high water stress,” according to the World Resources Institute. To raise awareness of major water-related issues and inspire action on innovative solutions, the U.N. created World Water Day, observed every year on March 22. This year’s theme, “Water for Peace,” underscores the fact that even though water is a basic human right and intrinsic to every aspect of life, it is increasingly fought over as supplies dwindle due to drought, overuse, and mismanagement.

    The “Water for Peace” theme is exemplified in Fernández and Odell’s J-WAFS project, whose findings are intended to inform policies that reduce the social and environmental harms inflicted on mining communities and their limited water sources.

    “Despite broad academic engagement with mining and climate change separately, there has been a lack of analysis of the societal implications of the interactions between mining and climate change,” says Odell. “This project is helping to fill the knowledge gap. Results will be summarized in Spanish and English and distributed to interested and relevant parties in Chile, ensuring that the results can be of benefit to those most impacted by these challenges,” he adds.

    The effects of mining for the clean energy transition

    Global climate change is understood to be the most pressing environmental issue facing humanity today. Mitigating climate change requires reducing carbon emissions by transitioning from conventional energy derived from burning fossil fuels to more sustainable sources like solar and wind power. Because copper is an excellent conductor of electricity, it will be a crucial element in the clean energy transition, as more solar panels, wind turbines, and electric vehicles are manufactured. “We are going to see a major increase in demand for copper due to the clean energy transition,” says Odell.

    In 2021, Chile produced 26 percent of the world’s copper, more than twice as much as any other country, Odell explains. Much of Chile’s mining is concentrated in and around the Atacama Desert — the world’s driest desert. Unfortunately, mining requires large amounts of water for a variety of processes, including controlling dust at the extraction site, cooling machinery, and processing and transporting ore.

    Chile is also one of the world’s largest exporters of agricultural products. Farmland is typically situated in valleys downstream of mines in the high Andes, meaning the mines get first access to water. This can lead to water conflict between mining operations and agricultural communities. Compounding the problem of mining for greener energy materials are the very effects of climate change itself. According to the Chilean government, the country has suffered the worst drought in its history for the past 13 years. While this is detrimental to the mining industry, it is also concerning for those working in agriculture, including the Indigenous Atacameño communities that live closest to the Escondida mine, the largest copper mine in the world. “There was never a lot of water to go around, even before the mine,” Odell says. The addition of Escondida stresses an already strained water system, leaving Atacameño farmers and residents vulnerable to severe water insecurity.

    What’s more, waste from mining, known as tailings, includes minerals and chemicals that can contaminate water in nearby communities if not properly handled and stored. Odell says the secure storage of tailings is a high priority in earthquake-prone Chile. “If an earthquake were to hit and damage a tailings dam, it could mean toxic materials flowing downstream and destroying farms and communities,” he says.

    Chile’s treasured glaciers are another piece of the mining, climate change, and agricultural puzzle. Caroline White-Nockleby, a PhD candidate in MIT’s Program in Science, Technology, and Society, is working with Odell and Fernández on the J-WAFS project and leading the research specifically on glaciers. “These may not be the picturesque bright blue glaciers that you might think of, but they are, nonetheless, an important source of water downstream,” says White-Nockleby. She goes on to explain that there are a few different ways that mines can impact glaciers.

    In some cases, mining companies have proposed to move or even destroy glaciers to get at the ore beneath. Other impacts include dust from mining that falls on glaciers. “This makes the glaciers a darker color,” White-Nockleby says, “so, instead of reflecting the sun’s rays away, [the glacier] may absorb the heat and melt faster.” Even without directly intervening with glaciers, then, mining activities can cause glacial decline, adding to the threat glaciers already face from climate change. She also notes that “glaciers are an important water storage facility,” describing how, on an annual cycle, glaciers freeze and melt, producing runoff that downstream agricultural communities can use. If glaciers melt too quickly, flooding of downstream communities can occur.

    Desalination offers a possible, but imperfect, solution

    Chile’s extensive coastline makes it uniquely positioned to utilize desalination — the removal of salts from seawater — to address water insecurity. Odell says that “over the last decade or so, there’s been billions of dollars of investments in desalination in Chile.”

    As part of his dissertation work at Clark University, Odell found broad optimism in Chile that desalination could solve the mining industry’s water issues. Not only was the mining industry committed to building desalination plants, but there was also political support, as well as backing from some members of highland communities near the mines. Yet, despite the optimism and investment, desalinated water was not replacing the use of continental water. He concluded that “desalination can’t solve water conflict if it doesn’t reduce demand for continental water supplies.”

    However, after publishing those results, Odell learned that new estimates at the national level showed that desalination operations had begun to replace the use of continental water after 2018. In the two case studies he currently focuses on — the Escondida and Los Pelambres copper mines — the mining companies have expanded their desalination objectives in order to reduce extraction from key continental sources. This seems to be due to a variety of factors. For one thing, in 2022, Chile’s water code was reformed to prioritize human water consumption and environmental protection during scarcity and in the allocation of future rights. It also shortened the duration of new water rights from “in perpetuity” to 30 years. Under this new code, Odell surmises, the mining industry may have expanded its desalination efforts because it viewed continental water resources as less secure.

    As part of the J-WAFS project, Odell has found that reactions to the rapid increase in desalination have been mixed. He spent over two months doing fieldwork in Chile, conducting interviews with members of government, industry, and civil society at the Escondida, Los Pelambres, and Andina mining sites, as well as in Chile’s capital, Santiago. He has spoken to local and national government officials, leaders of fishing unions, representatives of mining and desalination companies, and farmers. He observed that in the communities where the new desalination plants are being built, community members have raised concerns about whether they will get access to the desalinated water, or whether it will belong solely to the mines.

    Interviews at the Escondida and Los Pelambres sites, where desalination operations are already in place or under construction, indicate acceptance of the desalination plants combined with apprehension about unknown long-term environmental impacts. At a third mining site, Andina, there have been active protests against a desalination project that would supply water to a neighboring mine, Los Bronces. In that community, the fishing federation blockaded the desalination operation. “They were blockading that operation for three months because of concerns over what the desalination plant would do to their fishing grounds,” Odell says. And this is where the idea of hydrosocial displacement comes into the picture, he explains. Even though desalination operations are easing tensions with highland agricultural communities, new issues are arising for communities on the coast. “We can’t just look to desalination to solve our problems if it’s going to create problems somewhere else,” Odell advises.

    Within the process of hydrosocial displacement, interacting geographical, technical, economic, and political factors constrain the range of responses to address the water conflict. For example, communities that have more political and financial power tend to be better equipped to solve water conflict than less powerful communities. In addition, hydrosocial concerns usually follow the flow of water downstream, from the highlands to coastal regions. Odell says that this raises the need to look at water from a broader perspective.

    “We tend to address water concerns one by one and that can, in practice, end up being kind of like whack-a-mole,” says Odell. “When we think of the broader hydrological system, water is very much linked, and we need to look across the watershed. We can’t just be looking at the specific community affected now, but who else is affected downstream, and will be affected in the long term. If we do solve a water issue by moving it somewhere else, like moving a tailings dam somewhere else, or building a desalination plant, resources are needed in the receiving community to respond to that,” suggests Odell.

    The company building the desalination plant and the fishing federation ultimately reached an agreement and the desalination operation will be moving forward. But Odell notes, “the protest highlights concern about the impacts of the operation on local livelihoods and environments within the much larger context of industrial pollution in the area.”

    The power of communities

    The protest by the fishing federation is one example of communities coming together to have their voices heard. Recent proposals by mining companies that would affect glaciers and other water sources used by agricultural communities have led to other protests, resulting in new agreements to protect local water supplies and the withdrawal of some of the mining proposals.

    Odell observes that communities have also gone to the courts to raise their concerns. The Atacameño communities, for example, have drawn attention to over-extraction of water by the Escondida mine. “Community members are also pursuing education in these topics so that there’s not such a power imbalance between mining companies and local communities,” Odell remarks. This demonstrates the power local communities can have to protect continental water resources.

    The political and social landscape of Chile may also be changing in favor of local communities. Beginning with what is now referred to as the Estallido Social (social outburst) over inequality in 2019, Chile has undergone social upheaval that resulted in voters calling for a new constitution. Gabriel Boric, a progressive candidate whose top priorities include social and environmental issues, was elected president during this period. These trends have brought major attention to economic inequality, the environmental harms of mining, and environmental justice, putting pressure on the mining industry to make a case for its operations in the country and to justify the environmental costs of mining.

    What happens after the mine dries up?

    From his fieldwork interviews, Odell has learned that the development of mines within communities can offer benefits. Mining companies typically invest directly in communities through employment, road construction, and sometimes even by building or investing in schools, stadiums, or health clinics. Indirectly, mines can have spillover effects in the economy since miners might support local restaurants, hotels, or stores. But what happens when the mine closes? As one community member Odell interviewed stated: “When the mine is gone, what are we going to have left besides a big hole in the ground?”

    Odell suggests a multi-pronged approach to the future of water and mining. First, he says, we need broader conversations about the nature of our consumption and production at domestic and global scales. “Mining is driven indirectly by our consumption of energy and directly by our consumption of everything from our buildings to devices to cars,” Odell states. “We should be looking for ways to moderate our consumption and consume smarter through both policy and practice so that we don’t solve climate change while creating new environmental harms through mining.”

    One of the main ways to do this is by advancing the circular economy: recycling metals already in the system, or even in landfills, to help build our new clean energy infrastructure. Even so, the clean energy transition will still require mining, but according to Odell, that mining can be done better. “Mining companies and government need to do a better job of consulting with communities. We need solid plans and financing for mine closures in place from the beginning of mining operations, so that when the mine dries up, there’s the money needed to secure tailings dams and protect the communities who will be there forever,” Odell concludes.

    Overall, it will take an engaged society — from the mining industry to government officials to individuals — to think critically about the role we each play in the quest for a more sustainable planet, and what that might mean for the most vulnerable populations among us.

  • Lessons from Fukushima: Prepare for the unlikely

    When a devastating earthquake and tsunami overwhelmed the protective systems at the Fukushima Dai’ichi nuclear power plant complex in Japan in March 2011, it triggered a sequence of events leading to one of the worst releases of radioactive materials in the world to date. Although nuclear energy is having a revival as a low-emissions energy source to mitigate climate change, the Fukushima accident is still cited as a reason for hesitancy in adopting it.

    A new study synthesizes information from multidisciplinary sources to understand how the Fukushima Dai’ichi disaster unfolded, and points to the importance of mitigation measures and last lines of defense — even against accidents considered highly unlikely. These procedures have received relatively little attention, but they are critical in determining how severe the consequences of a reactor failure will be, the researchers say.

    The researchers note that their synthesis is one of the few attempts to look at data across disciplinary boundaries, including: the physics and engineering of what took place within the plant’s systems, the plant operators’ actions throughout the emergency, actions by emergency responders, the meteorology of radionuclide releases and transport, and the environmental and health consequences documented since the event.

    The study appears in the journal iScience, in an open-access paper by postdoc Ali Ayoub and Professor Haruko Wainwright at MIT, along with others in Switzerland, Japan, and New Mexico.

    Since 2013, Wainwright has been leading research to consolidate all the radiation monitoring data in the Fukushima region into integrated maps. “I was staring at the contamination map for nearly 10 years, wondering what created the main plume extending in the northwest direction, but I could not find exact information,” Wainwright says. “Our study is unique because we started from the consequence, the contamination map, and tried to identify the key factors for the consequence. Other people study the Fukushima accident from the root cause, the tsunami.”

    One thing they found was that while all the operating reactors, units 1, 2, and 3, suffered core meltdowns as a result of the failure of emergency cooling systems, units 1 and 3 — although they did experience hydrogen explosions — did not release as much radiation to the environment because their venting systems essentially worked to relieve pressure inside the containment vessels as intended. But the same system in unit 2 failed badly.

    “People think that the hydrogen explosion or the core meltdown were the worst things, or the major driver of the radiological consequences of the accident,” Wainwright says, “but our analysis found that’s not the case.” Much more significant in terms of the radiological release was the failure of the one venting mechanism.

    “There is a pressure-release mechanism that goes through water where a lot of the radionuclides get filtered out,” she explains. That system was effective in units 1 and 3, filtering out more than 90 percent of the radioactive elements before the gas was vented. However, “in unit 2, that pressure release mechanism got stuck, and the operators could not manually open it.” A hydrogen explosion in unit 1 had damaged the pressure relief mechanism of unit 2. This led to a breach of the containment structure and direct, unfiltered venting to the atmosphere, which, according to the new study, was what produced the greatest amount of contamination from the whole weeks-long event.

    Another factor was the timing of the attempt to vent the pressure buildup in the reactor. Guidelines at the time, and to this day in many reactors, specified that no venting should take place until the pressure inside the reactor containment vessel reached a specified threshold, with no regard to the wind directions at the time. In the case of Fukushima, an earlier venting could have dramatically reduced the impact: Much of the release happened when winds were blowing directly inland, but earlier the wind had been blowing offshore.

    “That pressure-release mechanism has not been a major focus of the engineering community,” she says. While there is appropriate attention to measures that prevent a core meltdown in the first place, “this sort of last line of defense has not been the main focus and should get more attention.”

    Wainwright says the study also underlines several successes in the management of the Fukushima accident. Many of the safety systems did work as they were designed. For example, even though the oldest reactor, unit 1, suffered the greatest internal damage, it released little radioactive material. Most people were able to evacuate from the 20-kilometer (12-mile) zone before the largest release happened. The mitigation measures were “somewhat successful,” Wainwright says. But there was tremendous confusion and anger during and after the accident because there were no preparations in place for such an event.

    Much work has focused on ways to prevent the kind of accidents that happened at Fukushima — for example, in the U.S. reactor operators can deploy portable backup power supplies to maintain proper reactor cooling at any reactor site. But the ongoing situation at the Zaporizhzhia nuclear complex in Ukraine, where nuclear safety is challenged by acts of war, demonstrates that despite engineers’ and operators’ best efforts to prevent it, “the totally unexpected could still happen,” Wainwright says.

    “The big-picture message is that we should have equal attention to both prevention and mitigation of accidents,” she says. “This is the essence of resilience, and it applies beyond nuclear power plants to all essential infrastructure of a functioning society, for example, the electric grid, the food and water supply, the transportation sector, etc.”

    One thing the researchers recommend is that in designing evacuation protocols, planners should make more effort to learn from much more frequent disasters such as wildfires and hurricanes. “We think getting more interdisciplinary, transdisciplinary knowledge from other kinds of disasters would be essential,” she says. Most of the emergency response strategies presently in place, she says, were designed in the 1980s and ’90s and need to be modernized. “Consequences can be mitigated. A nuclear accident does not have to be a catastrophe, as is often portrayed in popular culture,” Wainwright says.

    The research team included Giovanni Sansavini at ETH Zurich in Switzerland; Randall Gauntt at Sandia National Laboratories in New Mexico; and Kimiaki Saito at the Japan Atomic Energy Agency.

  • Study finds lands used for grazing can worsen or help climate change

    When it comes to global climate change, livestock grazing can be either a blessing or a curse, according to a new study, which offers clues on how to tell the difference.

    If managed properly, the study shows, grazing can actually increase the amount of carbon from the air that gets stored in the ground and sequestered for the long run. But if there is too much grazing, soil erosion can result, and the net effect is to cause more carbon losses, so that the land becomes a net carbon source, instead of a carbon sink. And the study found that the latter is far more common around the world today.

    The new work, published today in the journal Nature Climate Change, provides ways to determine the tipping point between the two, for grazing lands in a given climate zone and soil type. It also provides an estimate of the total amount of carbon that has been lost over past decades due to livestock grazing, and how much could be removed from the atmosphere if optimized grazing management were implemented. The study was carried out by Cesar Terrer, an assistant professor of civil and environmental engineering at MIT; Shuai Ren, a PhD student at the Chinese Academy of Sciences whose thesis is co-supervised by Terrer; and four others.

    “This has been a matter of debate in the scientific literature for a long time,” Terrer says. “In general experiments, grazing decreases soil carbon stocks, but surprisingly, sometimes grazing increases soil carbon stocks, which is why it’s been puzzling.”

    What happens, he explains, is that “grazing could stimulate vegetation growth through easing resource constraints such as light and nutrients, thereby increasing root carbon inputs to soils, where carbon can stay for centuries or millennia.”

    But that only works up to a certain point, the team found after a careful analysis of 1,473 soil carbon observations from different grazing studies from many locations around the world. “When you cross a threshold in grazing intensity, or the amount of animals grazing there, that is when you start to see sort of a tipping point — a strong decrease in the amount of carbon in the soil,” Terrer explains.

    That loss is thought to be primarily from increased soil erosion on the denuded land. And with that erosion, Terrer says, “basically you lose a lot of the carbon that you have been locking in for centuries.”

    The various studies the team compiled, although they differed somewhat, essentially used similar methodology, which is to fence off a portion of land so that livestock can’t access it, and then after some time take soil samples from within the enclosure area, and from comparable nearby areas that have been grazed, and compare the content of carbon compounds.

    “Along with the data on soil carbon for the control and grazed plots,” he says, “we also collected a bunch of other information, such as the mean annual temperature of the site, mean annual precipitation, plant biomass, and properties of the soil, like pH and nitrogen content. And then, of course, we estimate the grazing intensity — aboveground biomass consumed, because that turns out to be the key parameter.”  
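    Paired grazed-versus-exclosure measurements like these are often summarized in meta-analyses with a log response ratio. The sketch below uses that common metric with hypothetical numbers; it is an illustration of the general approach, not necessarily the statistic this study used.

    ```python
    import math

    # Sketch of how a grazed-vs-exclosure comparison might be summarized.
    # The log response ratio is a standard effect-size metric in
    # meta-analyses of this kind; the study may use a different statistic.

    def log_response_ratio(grazed_c, exclosure_c):
        """Effect of grazing on soil carbon: ln(grazed / ungrazed control).
        Negative values mean grazing reduced soil carbon."""
        return math.log(grazed_c / exclosure_c)

    # Hypothetical soil carbon stocks (kg C per m^2) for one paired site
    grazed, fenced = 4.2, 5.0
    effect = log_response_ratio(grazed, fenced)
    print(effect)   # negative -> carbon loss under grazing at this site
    ```

    Aggregating such per-site effects, together with covariates like temperature, precipitation, and grazing intensity, is what allows a cross-site analysis of when the effect flips sign.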

    With artificial intelligence models, the authors quantified the importance of each of these parameters, those drivers of intensity — temperature, precipitation, soil properties — in modulating the sign (positive or negative) and magnitude of the impact of grazing on soil carbon stocks. “Interestingly, we found soil carbon stocks increase and then decrease with grazing intensity, rather than the expected linear response,” says Ren.
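    The hump-shaped response Ren describes — soil carbon first rising, then falling, as grazing intensity increases — can be illustrated with a toy quadratic model. Everything below (the functional form and all parameter values) is invented for demonstration and is not the authors’ fitted AI model.

    ```python
    # Illustrative toy model of a hump-shaped soil-carbon response to
    # grazing intensity. NOT the study's model; parameters are made up.

    def soil_carbon_change(intensity, a=2.0, b=4.0):
        """Change in soil carbon stock (arbitrary units) as a function of
        grazing intensity (fraction of aboveground biomass consumed, 0-1).
        Rises at low intensity, declines past a peak."""
        return a * intensity - b * intensity ** 2

    # Peak of the curve: d/dI (a*I - b*I^2) = 0  ->  I* = a / (2*b)
    optimal_intensity = 2.0 / (2 * 4.0)   # 0.25 in this toy example

    # "Tipping point": intensity beyond which the net change turns negative
    tipping_intensity = 2.0 / 4.0         # 0.5 in this toy example

    print(soil_carbon_change(0.25))   # positive: land acts as a carbon sink
    print(soil_carbon_change(0.75))   # negative: land becomes a carbon source
    ```

    In the actual study, the location of the peak and the tipping point depend on local conditions such as temperature, precipitation, and soil properties, which is why the team produced maps rather than a single global threshold.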

    Having developed the model through AI methods and validated it, including by comparing its predictions with those based on underlying physical principles, the team can then apply it to estimate both past and future effects. “In this case,” Terrer says, “we use the model to quantify the historical losses in soil carbon stocks from grazing. And we found that 46 petagrams [billion metric tons] of soil carbon, down to a depth of one meter, have been lost in the last few decades due to grazing.”

    By way of comparison, the total amount of carbon emitted per year from all fossil fuels is about 10 petagrams, so the loss from grazing equals more than four years’ worth of the world’s fossil emissions combined.
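    A quick back-of-the-envelope check of the figures quoted above:

    ```python
    # Sanity check of the comparison using the article's own figures.
    historical_loss_pg = 46   # soil carbon lost to grazing, petagrams (Pg)
    annual_fossil_pg = 10     # approximate annual fossil carbon emissions, Pg

    years_equivalent = historical_loss_pg / annual_fossil_pg
    print(years_equivalent)   # 4.6 -> "more than four years' worth"
    ```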

    What they found was “an overall decline in soil carbon stocks, but with a lot of variability,” Terrer says. The analysis showed that the interplay between grazing intensity and environmental conditions such as temperature could explain the variability, with higher grazing intensity and hotter climates resulting in greater carbon loss. “This means that policy-makers should take into account local abiotic and biotic factors to manage rangelands efficiently,” Ren notes. “By ignoring such complex interactions, we found that using IPCC [Intergovernmental Panel on Climate Change] guidelines would underestimate grazing-induced soil carbon loss by a factor of three globally.”

    Using an approach that incorporates local environmental conditions, the team produced global, high-resolution maps of optimal grazing intensity and the threshold of intensity at which carbon starts to decrease very rapidly. These maps are expected to serve as important benchmarks for evaluating existing grazing practices and provide guidance to local farmers on how to effectively manage their grazing lands.

    Then, using that map, the team estimated how much carbon could be captured if all grazing lands were limited to their optimum grazing intensity. Currently, the authors found, about 20 percent of all pasturelands have crossed the thresholds, leading to severe carbon losses. However, they found that under the optimal levels, global grazing lands would sequester 63 petagrams of carbon. “It is amazing,” Ren says. “This value is roughly equivalent to a 30-year carbon accumulation from global natural forest regrowth.”

    That would be no simple task, of course. To achieve optimal levels, the team found that approximately 75 percent of all grazing areas need to reduce grazing intensity. Overall, if the world seriously reduces the amount of grazing, “you have to reduce the amount of meat that’s available for people,” Terrer says.

    “Another option is to move cattle around,” he says, “from areas that are more severely affected by grazing intensity, to areas that are less affected. Those rotations have been suggested as an opportunity to avoid the more drastic declines in carbon stocks without necessarily reducing the availability of meat.”

    This study didn’t delve into these social and economic implications, Terrer says. “Our role is to just point out what would be the opportunity here. It shows that shifts in diets can be a powerful way to mitigate climate change.”

    “This is a rigorous and careful analysis that provides our best look to date at soil carbon changes due to livestock grazing practiced worldwide,” says Ben Bond-Lamberty, a terrestrial ecosystem research scientist at Pacific Northwest National Laboratory, who was not associated with this work. “The authors’ analysis gives us a unique estimate of soil carbon losses due to grazing and, intriguingly, where and how the process might be reversed.”

    He adds: “One intriguing aspect to this work is the discrepancies between its results and the guidelines currently used by the IPCC — guidelines that affect countries’ commitments, carbon-market pricing, and policies.” However, he says, “As the authors note, the amount of carbon historically grazed soils might be able to take up is small relative to ongoing human emissions. But every little bit helps!”

    “Improved management of working lands can be a powerful tool to combat climate change,” says Jonathan Sanderman, carbon program director of the Woodwell Climate Research Center in Falmouth, Massachusetts, who was not associated with this work. He adds, “This work demonstrates that while, historically, grazing has been a large contributor to climate change, there is significant potential to decrease the climate impact of livestock by optimizing grazing intensity to rebuild lost soil carbon.”

    Terrer states that for now, “we have started a new study, to evaluate the consequences of shifts in diets for carbon stocks. I think that’s the million-dollar question: How much carbon could you sequester, compared to business as usual, if diets shift to more vegan or vegetarian?” The answers will not be simple, because a shift to more vegetable-based diets would require more cropland, which can also have different environmental impacts. Pastures take more land than crops, but produce different kinds of emissions. “What’s the overall impact for climate change? That is the question we’re interested in,” he says.

    The research team included Juan Li, Yingfao Cao, Sheshan Yang, and Dan Liu, all with the Chinese Academy of Sciences. The work was supported by the Second Tibetan Plateau Scientific Expedition and Research Program, and the Science and Technology Major Project of Tibetan Autonomous Region of China.

  • in

    Reducing pesticide use while increasing effectiveness

    Farming can be a low-margin, high-risk business, subject to weather and climate patterns, insect population cycles, and other unpredictable factors. Farmers need to be savvy managers of the many resources they deal with, and chemical fertilizers and pesticides are among their major recurring expenses.

    Despite the importance of these chemicals, a lack of technology that monitors and optimizes sprays has forced farmers to rely on personal experience and rules of thumb to decide how to apply these chemicals. As a result, these chemicals tend to be over-sprayed, leading to their runoff into waterways and buildup in the soil.

    That could change, thanks to a new approach of feedback-optimized spraying, invented by AgZen, an MIT spinout founded in 2020 by Professor Kripa Varanasi and Vishnu Jayaprakash SM ’19, PhD ’22.


    AgZen has developed a system for farming that can monitor exactly how much of the sprayed chemicals adheres to plants, in real time, as the sprayer drives through a field. Built-in software running on a tablet shows the operator exactly how much of each leaf has been covered by the spray.

    Over the past decade, AgZen’s founders have developed products and technologies to control the interactions of droplets and sprays with plant surfaces. The Boston-based venture-backed company launched a new commercial product in 2024 and is currently piloting another related product. Field tests of both have shown the products can help farmers spray more efficiently and effectively, using fewer chemicals overall.

    “Worldwide, farms spend approximately $60 billion a year on pesticides. Our objective is to reduce the number of pesticides sprayed and lighten the financial burden on farms without sacrificing effective pest management,” Varanasi says.

    Getting droplets to stick

    While the world pesticide market is growing rapidly, a lot of the pesticides sprayed don’t reach their target. A significant portion bounces off the plant surfaces, lands on the ground, and becomes part of the runoff that flows to streams and rivers, often causing serious pollution. Some of these pesticides can be carried away by wind over very long distances.

    “Drift, runoff, and poor application efficiency are well-known, longstanding problems in agriculture, but we can fix this by controlling and monitoring how sprayed droplets interact with leaves,” Varanasi says.

    With support from the MIT Tata Center and the Abdul Latif Jameel Water and Food Systems Lab, Varanasi and his team analyzed how droplets strike plant surfaces, and explored ways to increase application efficiency. This research led them to develop a novel system of nozzles that cloak droplets with compounds that enhance the retention of droplets on the leaves, a product they call EnhanceCoverage.

    Field studies across regions — from Massachusetts to California to Italy and France — showed that this droplet-optimization system could allow farmers to cut the amount of chemicals needed by more than half because more of the sprayed substances would stick to the leaves.

    Measuring coverage

    However, in trying to bring this technology to market, the researchers faced a sticky problem: Nobody knew how well pesticide sprays were adhering to the plants in the first place, so how could AgZen say that the coverage was better with its new EnhanceCoverage system?

    “I had grown up spraying with a backpack on a small farm in India, so I knew this was an issue,” Jayaprakash says. “When we spoke to growers, they told me how complicated spraying is when you’re on a large machine. Whenever you spray, there are so many things that can influence how effective your spray is. How fast do you drive the sprayer? What flow rate are you using for the chemicals? What chemical are you using? What’s the age of the plants, what’s the nozzle you’re using, what is the weather at the time? All these things influence agrochemical efficiency.”

    Agricultural spraying essentially comes down to dissolving a chemical in water and then spraying droplets onto the plants. “But the interaction between a droplet and the leaf is complex,” Varanasi says. “We were coming in with ways to optimize that, but what the growers told us is, hey, we’ve never even really looked at that in the first place.”

    Although farmers have been spraying agricultural chemicals on a large scale for about 80 years, they’ve “been forced to rely on general rules of thumb and pick all these interlinked parameters, based on what’s worked for them in the past. You pick a set of these parameters, you go spray, and you’re basically praying for outcomes in terms of how effective your pest control is,” Varanasi says.

    Before AgZen could sell farmers on the new system to improve droplet coverage, the company had to invent a way to measure precisely how much spray was adhering to plants in real time.

    Comparing before and after

    The system they came up with, which they tested extensively on farms across the country last year, involves a unit that can be bolted onto the spraying arm of virtually any sprayer. It carries two sensor stacks, one just ahead of the sprayer nozzles and one behind. Software running on a tablet then shows the operator how much of each leaf the spray has covered, and computes how much those droplets will spread out or evaporate, leading to a precise estimate of the final coverage.

    “There’s a lot of physics that governs how droplets spread and evaporate, and this has been incorporated into software that a farmer can use,” Varanasi says. “We bring a lot of our expertise into understanding droplets on leaves. All these factors, like how temperature and humidity influence coverage, have always been nebulous in the spraying world. But now you have something that can be exact in determining how well your sprays are doing.”
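    The projection step described above can be sketched in a few lines. This is a purely illustrative model, not AgZen's actual physics: the function, the spread factor, and the simple temperature-humidity penalty below are all hypothetical.

```python
# Illustrative sketch (not AgZen's actual model): project measured droplet
# coverage forward to a final-coverage estimate. All coefficients here are
# hypothetical, chosen only to show the shape of such a calculation.

def final_coverage(measured_coverage: float,
                   temperature_c: float,
                   relative_humidity: float,
                   spread_factor: float = 1.3) -> float:
    """Estimate final leaf coverage from the fraction of leaf area wetted
    right behind the nozzle (0-1). Droplets spread after impact (gaining
    coverage) and evaporate faster when it is hot and dry (losing coverage)."""
    projected = measured_coverage * spread_factor
    # Hypothetical evaporation penalty: grows with heat, shrinks with humidity.
    evap_loss = 0.002 * max(temperature_c - 20.0, 0.0) * (1.0 - relative_humidity)
    return max(min(projected - evap_loss, 1.0), 0.0)

# A hot, dry afternoon erodes more of the projected coverage than a humid one.
print(final_coverage(0.40, temperature_c=35, relative_humidity=0.30))
print(final_coverage(0.40, temperature_c=22, relative_humidity=0.80))
```

    The point of the sketch is only that ambient conditions, which the article notes have "always been nebulous in the spraying world," enter the estimate explicitly rather than being left to intuition.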

    “We’re not only measuring coverage, but then we recommend how to act,” says Jayaprakash, who is AgZen’s CEO. “With the information we collect in real-time and by using AI, RealCoverage tells operators how to optimize everything on their sprayer, from which nozzle to use, to how fast to drive, to how many gallons of spray is best for a particular chemical mix on a particular acre of a crop.”

    The tool was developed to prove how much AgZen’s EnhanceCoverage nozzle system (which will be launched in 2025) improves coverage. But it turns out that monitoring and optimizing droplet coverage on leaves in real time with this system can itself yield major improvements.

    “We worked with large commercial farms last year in specialty and row crops,” Jayaprakash says. “When we saved our pilot customers up to 50 percent of their chemical cost at a large scale, they were very surprised.” He says the tool has reduced chemical costs and volume in fallow field burndowns, weed control in soybeans, defoliation in cotton, and fungicide and insecticide sprays in vegetables and fruits. Along with data from commercial farms, field trials conducted by three leading agricultural universities have also validated these results.

    “Across the board, we were able to save between 30 and 50 percent on chemical costs and increase crop yields by enabling better pest control,” Jayaprakash says. “By focusing on the droplet-leaf interface, our product can help any foliage spray throughout the year, whereas most technological advancements in this space recently have been focused on reducing herbicide use alone.” The company now intends to lease the system across thousands of acres this year.

    And these efficiency gains can lead to significant returns at scale, he emphasizes: In the U.S., farmers currently spend $16 billion a year on chemicals, to protect about $200 billion of crop yields.

    The company launched its first product, the coverage optimization system called RealCoverage, this year, reaching a wide variety of farms with different crops and in different climates. “We’re going from proof-of-concept with pilots in large farms to a truly massive scale on a commercial basis with our lease-to-own program,” Jayaprakash says.

    “We’ve also been tapped by the USDA to help them evaluate practices to minimize pesticides in watersheds,” Varanasi says, noting that RealCoverage can also be useful for regulators, chemical companies, and agricultural equipment manufacturers.

    Once AgZen has proven the effectiveness of using coverage as a decision metric, and after the RealCoverage optimization system is widely in practice, the company will next roll out its second product, EnhanceCoverage, designed to maximize droplet adhesion. Because that system will require replacing all the nozzles on a sprayer, the researchers are doing pilots this year but will wait for a full rollout in 2025, after farmers have gained experience and confidence with their initial product.

    “There is so much wastage,” Varanasi says. “Yet farmers must spray to protect crops, and there is a lot of environmental impact from this. So, after all this work over the years, learning about how droplets stick to surfaces and so on, now the culmination of it in all these products for me is amazing, to see all this come alive, to see that we’ll finally be able to solve the problem we set out to solve and help farmers.”

  • in

    A new sensor detects harmful “forever chemicals” in drinking water

    MIT chemists have designed a sensor that detects tiny quantities of perfluoroalkyl and polyfluoroalkyl substances (PFAS) — chemicals found in food packaging, nonstick cookware, and many other consumer products.

    These compounds, also known as “forever chemicals” because they do not break down naturally, have been linked to a variety of harmful health effects, including cancer, reproductive problems, and disruption of the immune and endocrine systems.

    Using the new sensor technology, the researchers showed that they could detect PFAS levels as low as 200 parts per trillion in a water sample. The device they designed could offer a way for consumers to test their drinking water, and it could also be useful in industries that rely heavily on PFAS chemicals, including the manufacture of semiconductors and firefighting equipment.

    “There’s a real need for these sensing technologies. We’re stuck with these chemicals for a long time, so we need to be able to detect them and get rid of them,” says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT and the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences.

    Other authors of the paper are former MIT postdoc and lead author Sohyun Park and MIT graduate student Collette Gordon.

    Detecting PFAS

    Coatings containing PFAS chemicals are used in thousands of consumer products. In addition to nonstick coatings for cookware, they are also commonly used in water-repellent clothing, stain-resistant fabrics, grease-resistant pizza boxes, cosmetics, and firefighting foams.

    These fluorinated chemicals, which have been in widespread use since the 1950s, can be released into water, air, and soil from factories, sewage treatment plants, and landfills. They have been found in drinking water sources in all 50 states.

    In 2023, the Environmental Protection Agency created an “advisory health limit” for two of the most hazardous PFAS chemicals, known as perfluorooctanoic acid (PFOA) and perfluorooctane sulfonate (PFOS). These advisories call for a limit of 0.004 parts per trillion for PFOA and 0.02 parts per trillion for PFOS in drinking water.

    Currently, the only way that a consumer could determine if their drinking water contains PFAS is to send a water sample to a laboratory that performs mass spectrometry testing. However, this process takes several weeks and costs hundreds of dollars.

    To create a cheaper and faster way to test for PFAS, the MIT team designed a sensor based on lateral flow technology — the same approach used for rapid Covid-19 tests and pregnancy tests. Instead of a test strip coated with antibodies, the new sensor is embedded with a special polymer known as polyaniline, which can switch between semiconducting and conducting states when protons are added to the material.

    The researchers deposited these polymers onto a strip of nitrocellulose paper and coated them with a surfactant that can pull fluorocarbons such as PFAS out of a drop of water placed on the strip. When this happens, protons from the PFAS are drawn into the polyaniline and turn it into a conductor, reducing the electrical resistance of the material. This change in resistance, which can be measured precisely using electrodes and sent to an external device such as a smartphone, gives a quantitative measurement of how much PFAS is present.
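    The final step, turning a measured resistance change into a concentration, can be sketched with a simple calibration curve. The calibration points and function below are hypothetical, for illustration only; the article does not describe the device's actual calibration.

```python
# Illustrative sketch (hypothetical numbers): converting a measured drop in
# the polyaniline strip's electrical resistance into an estimated PFAS
# concentration by interpolating a calibration curve built from standards.
from bisect import bisect_left

# Hypothetical calibration points: (resistance drop in ohms, PFAS in ppt).
CALIBRATION = [(0.0, 0.0), (5.0, 200.0), (12.0, 400.0), (30.0, 1000.0)]

def ppt_from_resistance_drop(drop_ohms: float) -> float:
    """Linearly interpolate concentration (parts per trillion) from the
    resistance drop, clamping to the ends of the calibration range."""
    pts = CALIBRATION
    if drop_ohms <= pts[0][0]:
        return pts[0][1]
    if drop_ohms >= pts[-1][0]:
        return pts[-1][1]
    i = bisect_left([x for x, _ in pts], drop_ohms)
    (x0, y0), (x1, y1) = pts[i - 1], pts[i]
    return y0 + (y1 - y0) * (drop_ohms - x0) / (x1 - x0)

print(ppt_from_resistance_drop(5.0))   # 200.0
print(ppt_from_resistance_drop(8.5))   # 300.0
```

    In practice the readout electronics and a phone app would handle this lookup; the sketch only shows why a resistance measurement can yield a quantitative, not merely yes/no, result.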

    This approach works only with PFAS that are acidic, which includes two of the most harmful PFAS — PFOA and perfluorobutanoic acid (PFBA).

    A user-friendly system

    The current version of the sensor can detect concentrations as low as 200 parts per trillion for PFBA, and 400 parts per trillion for PFOA. This is not quite low enough to meet the current EPA guidelines, but the sensor uses only a fraction of a milliliter of water. The researchers are now working on a larger-scale device that would be able to filter about a liter of water through a membrane made of polyaniline, and they believe this approach should increase the sensitivity by more than a hundredfold, with the goal of meeting the very low EPA advisory levels.

    “We do envision a user-friendly, household system,” Swager says. “You can imagine putting in a liter of water, letting it go through the membrane, and you have a device that measures the change in resistance of the membrane.”

    Such a device could offer a less expensive, rapid alternative to current PFAS detection methods. If PFAS are detected in drinking water, there are commercially available filters that can be used on household drinking water to reduce those levels. The new testing approach could also be useful for factories that manufacture products with PFAS chemicals, so they could test whether the water used in their manufacturing process is safe to release into the environment.

    The research was funded by an MIT School of Science Fellowship to Gordon, a Bose Research Grant, and a Fulbright Fellowship to Park.

  • in

    MIT researchers remotely map crops, field by field

    Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. But getting accurate maps of the types of crops that are grown from farm to farm often requires on-the-ground surveys that only a handful of countries have the resources to maintain.

    Now, MIT engineers have developed a method to quickly and accurately label and map crop types without requiring in-person assessments of every single farm. The team’s method uses a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, from one fraction of an acre to the next. 

    The researchers used the technique to automatically generate the first nationwide crop map of Thailand — a smallholder country where small, independent farms make up the predominant form of agriculture. The team created a border-to-border map of Thailand’s four major crops — rice, cassava, sugarcane, and maize — and determined which of the four types was grown, at every 10 meters, and without gaps, across the entire country. The resulting map achieved an accuracy of 93 percent, which the researchers say is comparable to on-the-ground mapping efforts in high-income, big-farm countries.

    The team is applying their mapping technique to other countries such as India, where small farms sustain most of the population but the type of crops grown from farm to farm has historically been poorly recorded.

    “It’s a longstanding gap in knowledge about what is grown around the world,” says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering, and the Institute for Data, Systems, and Society (IDSS). “The final goal is to understand agricultural outcomes like yield, and how to farm more sustainably. One of the key preliminary steps is to map what is even being grown — the more granularly you can map, the more questions you can answer.”

    Wang, along with MIT graduate student Jordi Laguarta Soler and Thomas Friedel of the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

    Ground truth

    Smallholder farms are often run by a single family or farmer, who subsist on the crops and livestock that they raise. It’s estimated that smallholder farms support two-thirds of the world’s rural population and produce 80 percent of the world’s food. Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But the majority of these small farms are in low- to middle-income countries, where few resources are devoted to keeping track of individual farms’ crop types and yields.

    Crop mapping efforts are mainly carried out in high-income regions such as the United States and Europe, where government agricultural agencies oversee crop surveys and send assessors to farms to label crops from field to field. These “ground truth” labels are then fed into machine-learning models that make connections between the ground labels of actual crops and satellite signals of the same fields. They then label and map wider swaths of farmland that assessors don’t cover but that satellites automatically do.

    “What’s lacking in low- and middle-income countries is this ground label that we can associate with satellite signals,” Laguarta Soler says. “Getting these ground truths to train a model in the first place has been limited in most of the world.”

    The team realized that, while many developing countries do not have the resources to maintain crop surveys, they could potentially use another source of ground data: roadside imagery, captured by services such as Google Street View and Mapillary, which send cars throughout a region to take continuous 360-degree images with dashcams and rooftop cameras.

    In recent years, such services have been able to access low- and middle-income countries. While the goal of these services is not specifically to capture images of crops, the MIT team saw that they could search the roadside images to identify crops.

    Cropped image

    In their new study, the researchers worked with Google Street View (GSV) images taken throughout Thailand — a country that the service has recently imaged fairly thoroughly, and which consists predominantly of smallholder farms.

    Starting with over 200,000 GSV images randomly sampled across Thailand, the team filtered out images that depicted buildings, trees, and general vegetation. About 81,000 images were crop-related. They set aside 2,000 of these, which they sent to an agronomist, who determined and labeled each crop type by eye. They then trained a convolutional neural network to automatically generate crop labels for the other 79,000 images, using various training resources, including iNaturalist, a web-based crowdsourced biodiversity database, and GPT-4V, a “multimodal large language model” that enables a user to input an image and ask the model to identify what the image is depicting. For each of the 81,000 images, the model generated a label of one of four crops that the image was likely depicting — rice, maize, sugarcane, or cassava.

    The researchers then paired each labeled image with the corresponding satellite data taken of the same location throughout a single growing season. These satellite data include measurements across multiple wavelengths, such as a location’s greenness and its reflectivity (which can be a sign of water). 

    “Each type of crop has a certain signature across these different bands, which changes throughout a growing season,” Laguarta Soler notes.

    The team trained a second model to make associations between a location’s satellite data and its corresponding crop label. They then used this model to process satellite data taken of the rest of the country, where crop labels were not generated or available. From the associations that the model learned, it then assigned crop labels across Thailand, generating a country-wide map of crop types at a resolution of 10 meters.
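    The second-stage association can be illustrated with a toy classifier. The crop names come from the article, but the "signatures," training data, and nearest-centroid model below are synthetic stand-ins; the actual system trains machine-learning models on real satellite band time series.

```python
# Toy sketch of the second-stage model: associate a location's satellite
# band time series with a crop label learned from roadside-image labels.
# The signatures below are synthetic "greenness over three dates" values.
from math import dist

def train_centroids(labeled):
    """labeled: list of (band_time_series, crop_label).
    Average the time series per crop to get one centroid per class."""
    sums, counts = {}, {}
    for features, crop in labeled:
        acc = sums.setdefault(crop, [0.0] * len(features))
        for i, v in enumerate(features):
            acc[i] += v
        counts[crop] = counts.get(crop, 0) + 1
    return {c: [v / counts[c] for v in s] for c, s in sums.items()}

def predict(centroids, features):
    """Label an unlabeled pixel by its nearest class centroid."""
    return min(centroids, key=lambda c: dist(centroids[c], features))

training = [([0.2, 0.8, 0.4], "rice"), ([0.3, 0.7, 0.5], "rice"),
            ([0.6, 0.5, 0.2], "cassava"), ([0.7, 0.4, 0.3], "cassava")]
centroids = train_centroids(training)
print(predict(centroids, [0.25, 0.75, 0.45]))  # rice
```

    The design mirrors the article's logic: expensive ground labels (here, the training list) are needed only for a small sample, after which the satellite-side model labels every remaining pixel for free.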

    This first-of-its-kind crop map included locations corresponding to the 2,000 GSV images that the researchers originally set aside and that had been labeled by the agronomist. These human-labeled images were used to validate the map’s labels, and when the team checked whether the map’s labels matched the expert, “gold standard” labels, they agreed 93 percent of the time.

    “In the U.S., we’re also looking at over 90 percent accuracy, whereas with previous work in India, we’ve only seen 75 percent because ground labels are limited,” Wang says. “Now we can create these labels in a cheap and automated way.”

    The researchers are moving to map crops across India, where roadside images via Google Street View and other services have recently become available.

    “There are over 150 million smallholder farmers in India,” Wang says. “India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it’s been very difficult to create maps of India because there are very sparse ground labels.”

    The team is working to generate crop maps in India, which could be used to inform policies having to do with assessing and bolstering yields, as global temperatures and populations rise.

    “What would be interesting would be to create these maps over time,” Wang says. “Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies.”

  • in

    Study measures the psychological toll of wildfires

    Wildfires in Southeast Asia significantly affect peoples’ moods, especially if the fires originate outside a person’s own country, according to a new study.

    The study, which measures sentiment by analyzing large amounts of social media data, helps show the psychological toll of wildfires that result in substantial air pollution, at a time when such fires are becoming a high-profile marker of climate change.  

    “It has a substantial negative impact on people’s subjective well-being,” says Siqi Zheng, an MIT professor and co-author of a new paper detailing the results. “This is a big effect.”

    The magnitude of the effect is about the same as another shift uncovered through large-scale studies of sentiment expressed online: When the weekend ends and the work week starts, people’s online postings reflect a sharp drop in mood. The new study finds that daily exposure to typical wildfire smoke levels in the region produces an equivalently large change in sentiment.

    “People feel anxious or sad when they have to go to work on Monday, and what we find with the fires is that this is, in fact, comparable to a Sunday-to-Monday sentiment drop,” says co-author Rui Du, a former MIT postdoc who is now an economist at Oklahoma State University.

    The paper, “Transboundary Vegetation Fire Smoke and Expressed Sentiment: Evidence from Twitter,” has been published online in the Journal of Environmental Economics and Management.

    The authors are Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability in the Center for Real Estate and the Department of Urban Studies and Planning at MIT; Du, an assistant professor of economics at Oklahoma State University’s Spears School of Business; Ajkel Mino, of the Department of Data Science and Knowledge Engineering at Maastricht University; and Jianghao Wang, of the Institute of Geographic Sciences and Natural Resources Research at the Chinese Academy of Sciences.

    The research is based on an examination of the events of 2019 in Southeast Asia, in which a huge series of Indonesian wildfires, seemingly related to climate change and deforestation for the palm oil industry, produced a massive amount of haze in the region. The air-quality problems affected seven countries: Brunei, Indonesia, Malaysia, Philippines, Singapore, Thailand, and Vietnam.

    To conduct the study, the scholars produced a large-scale analysis of postings from 2019 on X (formerly known as Twitter) to sample public sentiment. The study involved 1,270,927 tweets from 378,300 users who agreed to have their locations made available. The researchers compiled the data with a web crawler program and multilingual natural language processing applications that review the content of tweets and rate them in affective terms based on the vocabulary used. They also used satellite data from NASA and NOAA to create a map of wildfires and haze over time, linking that to the social media data.

    Using this method creates an advantage that regular public-opinion polling does not have: It creates a measurement of mood that is effectively a real-time metric rather than an after-the-fact assessment. Moreover, substantial wind shifts in the region at the time in 2019 essentially randomize which countries were exposed to more haze at various points, making the results less likely to be influenced by other factors.

    The researchers also made a point to disentangle the sentiment change due to wildfire smoke and that due to other factors. After all, people experience mood changes all the time from various natural and socioeconomic events. Wildfires may be correlated with some of them, which makes it hard to tease out the singular effect of the smoke. By comparing only the difference in exposure to wildfire smoke, blown in by wind, within the same locations over time, this study is able to isolate the impact of local wildfire haze on mood, filtering out nonpollution influences.
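    The identification idea, comparing exposure within the same location over time so that fixed local factors cancel out, can be sketched as a simple within-location (fixed-effects) estimate. The data and function below are synthetic, for illustration only; the study's actual econometric model is more elaborate.

```python
# Toy sketch of the within-location comparison: demean smoke exposure and
# sentiment inside each location, so each place's baseline mood cancels,
# then fit a single slope on the pooled demeaned data. Data are synthetic.

def within_slope(panel):
    """panel: {location: [(smoke_exposure, sentiment), ...]} over time."""
    xs, ys = [], []
    for obs in panel.values():
        mean_x = sum(x for x, _ in obs) / len(obs)
        mean_y = sum(y for _, y in obs) / len(obs)
        for x, y in obs:
            xs.append(x - mean_x)
            ys.append(y - mean_y)
    # Ordinary least-squares slope through the origin on demeaned data.
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Two locations with very different baseline moods but the same response:
# sentiment falls by 0.5 per unit of smoke exposure in both places.
panel = {
    "city_a": [(0.0, 0.80), (1.0, 0.30), (2.0, -0.20)],
    "city_b": [(0.0, 0.10), (1.0, -0.40), (2.0, -0.90)],
}
print(within_slope(panel))  # ≈ -0.5
```

    Because the demeaning strips out each city's level, the estimate recovers the common smoke effect even though city_b is gloomier overall, which is the sense in which the wind-driven variation isolates "the pure causal effect."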

    “What we are seeing from our estimates is really just the pure causal effect of the transboundary wildfire smoke,” Du says.

    The study also revealed that people living near international borders are much more likely to be upset when affected by wildfire smoke that comes from a neighboring country. When similar conditions originate in their own country, there is a considerably more muted reaction.

    “Notably, individuals do not seem to respond to domestically produced fire plumes,” the authors write in the paper. The small size of many countries in the region, coupled with a fire-prone climate, makes this an ongoing source of concern, however.

    “In Southeast Asia this is really a big problem, with small countries clustered together,” Zheng observes.

    Zheng also co-authored a 2022 study using a related methodology to study the impact of the Covid-19 pandemic on the moods of residents in about 100 countries. In that case, the research showed that the global pandemic depressed sentiment about 4.7 times as much as the normal Sunday-to-Monday shift.

    “There was a huge toll of Covid on people’s sentiment, and while the impact of the wildfires was about one-fifth of Covid, that’s still quite large,” Du says.

    In policy terms, Zheng suggests that the global implications of cross-border smoke pollution could give countries a shared incentive to cooperate further. If one country’s fires become another country’s problem, they may all have reason to limit them. Scientists warn of a rising number of wildfires globally, fueled by climate change conditions in which more fires can proliferate, posing a persistent threat across societies.

    “If they don’t work on this collaboratively, it could be damaging to everyone,” Zheng says.

    The research at MIT was supported, in part, by the MIT Sustainable Urbanization Lab. Jianghao Wang was supported by the National Natural Science Foundation of China.

  • in

    Illustrating India’s complex environmental crises

    Abhijit Banerjee, the Ford Foundation International Professor of Economics at MIT, and Sarnath Banerjee (no relation), an MIT Center for Art, Science, and Technology (CAST) visiting artist, share a similar background, but have very different ways of thinking. Both were raised for a time in Kolkata before leaving India to pursue divergent careers, Abhijit as an economist who went on to win the 2019 Nobel Memorial Prize in Economic Sciences (an award he shares with MIT Professor Esther Duflo and Harvard University Professor Michael Kremer), and Sarnath as a visual artist and graphic novelist.

    The two collaborated on a pair of short films, “The Land of Good Intentions” and “The Eternal Swamp,” that blend their expertise in a unique and captivating form. Each film addresses a particular environmental crisis facing present-day India by tracing its origins back through the centuries. The films are presented in a kind of lecture style, with Abhijit appearing as the narrator, unraveling historical details, as graphics by Sarnath visualize the story with an often wry and easy wit. The results apply logic and narrative coherence to problems with complex roots in the forces of nature, economics, and local culture. 

    “The Land of Good Intentions” explores the conditions and policies that led to mass protests by farmers, in Punjab and elsewhere, following the passage of farming legislation in September 2020. The film begins by providing historical context from multiple angles, including the significance of rice to regional culture, its growing conditions (which require a lot of water), the region’s climate (which produces very little), and previous government subsidies that led to its overproduction. The 2020 Farm Bills were intended to address rice overproduction and its consequences, including the depletion of Punjab’s groundwater supply, pollution from the burning of rice stalks, and a surplus going to waste. But farmers felt that they were being asked to shoulder the costs of a problem the government created.

    “The arguments in the film don’t necessarily align with popular liberal arguments, but it gives subtler shape and layers to them,” Sarnath says. “That dialectical way of thinking is important to the liberal movement, which is driven by passion and a sense of justice. Abhijit is driven by factual analysis, which sometimes makes the argument more complex.”

    Their second film, “The Eternal Swamp,” addresses the crisis of flooding in Kolkata and its causes in the geographical and economic development of the city from the start. Because Kolkata was built on very wet land, and real estate has long been one of the only viable industries in the city, it has been developed without regard to proper drainage in a climate that produces more rainfall than it can handle. There is a pervading sense that Kolkata will eventually be entirely below water.

    “It was a good collaboration from the beginning,” Sarnath says of working with Abhijit on the CAST Visiting Artist project, a process which began just before Abhijit was awarded the Nobel Prize in 2019 and continued through the pandemic. “Both of us work on instinct, but the way he shapes an argument is very different from me,” Sarnath says. “My work does not follow a linear approach to telling a story; it’s fragmentary, driven by mood and emotion more than narrative, like composing a piece of music.”

    Since they first met at a literary conference years ago, Abhijit and Sarnath have been close friends and intellectual sparring partners. Though Sarnath is based in Berlin and Abhijit in Boston, the two often cross paths in different locales and have long, ambling discussions that traverse a wide array of topics. “We spend a lot of time walking together wherever we find ourselves, whether it’s down the Longfellow Bridge in Boston or through Delhi or Kolkata,” Sarnath says. The idea for this project was born out of such conversations, in response to pressing events in their home country. 

    Abhijit wrote a proposal to MIT CAST, and the questions they received through the process helped them further shape the project. “It’s important, when you have the luxury, just to spend time together. Thanks to MIT, we managed to do that across continents,” Sarnath says of their creative process. “It’s more than just telling a story; Abhijit unpacked what was in his head, and I drew and wrote a bit as well.” They also worked with the editor and animator Niusha Ramzani, who Sarnath says lent an Iranian aesthetic to the film’s animations.

    As for the format of the films, they wanted to capture the sense of a serene Bengali afternoon, with Abhijit seated in his home in Kolkata speaking in a relaxed tone. “We wanted it to be a bit like a Royal Society lecture,” Sarnath says, somewhat like a TED Talk but more personable and intimate. The aim was to make their complicated subjects more easily comprehensible, through the language of Abhijit’s narration and with the help of visual metaphors. Still, they did not want to sacrifice complexity.

    “Economists are fabulists,” says Abhijit Banerjee. “We tell stories, simple stories, but that tends to get obscured in the telling, often because we like to be very careful about not overstating our case. Irony and the kind of playful humor that Sarnath brings to narration seemed to offer a different way to avoid being too emphatic, while allowing the story to be told in a way that it reaches a much larger audience. What is brilliant about Sarnath’s work is the play between the reliable and the unreliable — the readers are happy to be misdirected because they know that it will ultimately lead them where they want to be. I was hoping we could bring a little of that into economics.”

    “You have to emancipate yourself from any one definitive answer,” Sarnath Banerjee says, describing Abhijit’s expansive way of thinking, through which he follows multiple thought processes to their logical conclusions. The result allows for ambiguity and contradiction, though the pathways of thinking are clear. The films illustrate the situations facing farmers in Punjab and the waterlogged streets of Kolkata by tracing their roots and examining the history of cause and effect. The results provide clarity, but no simple answers.

    The process was an enriching one for both of them, the kind of advancement in understanding that can only come in dialogue. “With each collaboration, you learn, and learning to me is an artistic form,” Sarnath says. “We are always learning.”