More stories

  • MIT-derived algorithm helps forecast the frequency of extreme weather

    To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston.

    To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

    “If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

    Sapsis and his colleagues have now developed a method to “correct” the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “nudges” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme.

    This animation shows the evolution of storms around the northern hemisphere, as a result of a high-resolution storm model, combined with the MIT team’s corrected global climate model. The simulation improves the modeling of extreme values for wind, temperature, and humidity, which typically have significant errors in coarse scale models. Credit: Courtesy of Ruby Leung and Shixuan Zhang, PNNL

    Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

    “Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” Sapsis says. “If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

    The team’s results appear today in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

    Over the hood

    Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100 kilometers or so.

    “It’s a very heavy computation requiring supercomputers,” Sapsis notes. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometer or less.”

    To improve the resolution of these coarse climate models, scientists typically have gone under the hood to try and fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

    “People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” Sapsis explains. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

    The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation toward something that more closely represents real-world conditions. The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.

    “What we’re doing is trying to correct dynamics, as in how an extreme weather feature, such as the windspeeds during a Hurricane Sandy event, will look like in the coarse model, versus in reality,” Sapsis says. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”

    Climate correction

    As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the U.S. Department of Energy that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learned dynamical associations between the measured weather features and the E3SM model. They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which were not used for training.
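
    A minimal sketch of this kind of post-hoc, data-driven correction is shown below: learn a map from a coarse model’s past output to co-located observations, then apply that map to the model’s future simulations. The arrays are synthetic and the ridge-regression correction is purely illustrative; the team’s actual scheme uses machine learning to learn dynamical corrections rather than this simple pointwise linear map.

    ```python
    import numpy as np

    # Illustrative shapes only: (time, n_gridpoints) arrays of a coarse model's
    # output and co-located "observed" fields over the training years.
    rng = np.random.default_rng(0)
    n_train, n_future, n_grid = 2920, 13140, 500   # ~8 yr and ~36 yr of daily fields (toy sizes)
    model_train = rng.normal(size=(n_train, n_grid))
    obs_train = 1.1 * model_train + 0.5 + rng.normal(scale=0.1, size=(n_train, n_grid))
    model_future = rng.normal(size=(n_future, n_grid))

    def fit_ridge_correction(x, y, lam=1e-2):
        """Learn a linear map (plus bias) that nudges model output x toward observations y."""
        x1 = np.hstack([x, np.ones((x.shape[0], 1))])      # add a bias column
        a = x1.T @ x1 + lam * np.eye(x1.shape[1])
        return np.linalg.solve(a, x1.T @ y)

    def apply_correction(x, w):
        x1 = np.hstack([x, np.ones((x.shape[0], 1))])
        return x1 @ w                                      # "nudged" fields

    w = fit_ridge_correction(model_train, obs_train)       # train on past years
    corrected_future = apply_correction(model_future, w)   # correct the 36-year run
    ```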

    “We’re not talking about huge differences in absolute terms,” Sapsis says. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

    When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

    “We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” Sapsis says. “Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analyzing future climate scenarios.”

    “The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study. “It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

    This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.

  • Artificial reef designed by MIT engineers could protect marine life, reduce storm damage

    The beautiful, gnarled, nooked-and-crannied reefs that surround tropical islands serve as a marine refuge and natural buffer against stormy seas. But as the effects of climate change bleach and break down coral reefs around the world, and extreme weather events become more common, coastal communities are left increasingly vulnerable to frequent flooding and erosion.

    An MIT team is now hoping to fortify coastlines with “architected” reefs — sustainable, offshore structures engineered to mimic the wave-buffering effects of natural reefs while also providing pockets for fish and other marine life.

    The team’s reef design centers on a cylindrical structure surrounded by four rudder-like slats. The engineers found that when this structure stands up against a wave, it efficiently breaks the wave into turbulent jets that ultimately dissipate most of the wave’s total energy. The team has calculated that the new design could dissipate as much wave energy as existing artificial reefs while using one-tenth the material.

    The researchers plan to fabricate each cylindrical structure from sustainable cement, which they would mold in a pattern of “voxels” that could be automatically assembled, and would provide pockets for fish to explore and other marine life to settle in. The cylinders could be connected to form a long, semipermeable wall, which the engineers could erect along a coastline, about half a mile from shore. Based on the team’s initial experiments with lab-scale prototypes, the architected reef could reduce the energy of incoming waves by more than 95 percent.

    “This would be like a long wave-breaker,” says Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. “If waves are 6 meters high coming toward this reef structure, they would be ultimately less than a meter high on the other side. So, this kills the impact of the waves, which could prevent erosion and flooding.”
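
    As a rough consistency check on those numbers (our arithmetic, not the paper’s): wave energy scales approximately with the square of wave height, so the transmitted height is the incident height multiplied by the square root of the surviving energy fraction.

    ```python
    import math

    def transmitted_height(incident_height_m, energy_reduction):
        """Wave energy scales roughly with height squared, so the transmitted height
        is the incident height scaled by sqrt(1 - fraction of energy dissipated)."""
        return incident_height_m * math.sqrt(1.0 - energy_reduction)

    print(transmitted_height(6.0, 0.95))    # ~1.3 m for a 95% energy reduction
    print(transmitted_height(6.0, 0.975))   # ~0.95 m; reductions near 97-98% give "under a meter"
    ```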

    Details of the architected reef design are reported today in a study appearing in the open-access journal PNAS Nexus. Triantafyllou’s MIT co-authors are Edvard Ronglan SM ’23; graduate students Alfonso Parra Rubio, Jose del Aguila Ferrandis, and Erik Strand; research scientists Patricia Maria Stathatou and Carolina Bastidas; and Professor Neil Gershenfeld, director of the Center for Bits and Atoms; along with Alexis Oliveira Da Silva at the Polytechnic Institute of Paris, Dixia Fan of Westlake University, and Jeffrey Gair Jr. of Scinetics, Inc.

    Leveraging turbulence

    Some regions have already erected artificial reefs to protect their coastlines from encroaching storms. These structures are typically sunken ships, retired oil and gas platforms, and even assembled configurations of concrete, metal, tires, and stones. However, there’s variability in the types of artificial reefs that are currently in place, and no standard for engineering such structures. What’s more, the designs that are deployed tend to have a low wave dissipation per unit volume of material used. That is, it takes a huge amount of material to break enough wave energy to adequately protect coastal communities.

    The MIT team instead looked for ways to engineer an artificial reef that would efficiently dissipate wave energy with less material, while also providing a refuge for fish living along any vulnerable coast.

    “Remember, natural coral reefs are only found in tropical waters,” says Triantafyllou, who is director of the MIT Sea Grant. “We cannot have these reefs, for instance, in Massachusetts. But architected reefs don’t depend on temperature, so they can be placed in any water, to protect more coastal areas.”

    MIT researchers test the wave-breaking performance of two artificial reef structures in the MIT Towing Tank. Credit: Courtesy of the researchers

    The new effort is the result of a collaboration between researchers in MIT Sea Grant, who developed the reef structure’s hydrodynamic design, and researchers at the Center for Bits and Atoms (CBA), who worked to make the structure modular and easy to fabricate on location. The team’s architected reef design grew out of two seemingly unrelated problems. CBA researchers were developing ultralight cellular structures for the aerospace industry, while Sea Grant researchers were assessing the performance of blowout preventers in offshore oil structures — cylindrical valves that are used to seal off oil and gas wells and prevent them from leaking.

    The team’s tests showed that the structure’s cylindrical arrangement generated a high amount of drag. In other words, the structure appeared to be especially efficient in dissipating high-force flows of oil and gas. They wondered: Could the same arrangement dissipate another type of flow, in ocean waves?

    The researchers began to play with the general structure in simulations of water flow, tweaking its dimensions and adding certain elements to see whether and how waves changed as they crashed against each simulated design. This iterative process ultimately landed on an optimized geometry: a vertical cylinder flanked by four long slats, each attached to the cylinder in a way that leaves space for water to flow through the resulting structure. They found this setup essentially breaks up any incoming wave energy, causing parts of the wave-induced flow to spiral to the sides rather than crashing ahead.

    “We’re leveraging this turbulence and these powerful jets to ultimately dissipate wave energy,” Ferrandis says.

    Standing up to storms

    Once the researchers identified an optimal wave-dissipating structure, they fabricated a laboratory-scale version of an architected reef made from a series of the cylindrical structures, which they 3D-printed from plastic. Each test cylinder measured about 1 foot wide and 4 feet tall. They assembled a number of cylinders, each spaced about a foot apart, to form a fence-like structure, which they then lowered into a wave tank at MIT. They then generated waves of various heights and measured them before and after passing through the architected reef.

    “We saw the waves reduce substantially, as the reef destroyed their energy,” Triantafyllou says.

    The team has also looked into making the structures more porous, and friendly to fish. They found that, rather than making each structure from a solid slab of plastic, they could use a more affordable and sustainable type of cement.

    “We’ve worked with biologists to test the cement we intend to use, and it’s benign to fish, and ready to go,” he adds.

    They identified an ideal pattern of “voxels,” or microstructures, that cement could be molded into, in order to fabricate the reefs while creating pockets in which fish could live. This voxel geometry resembles individual egg cartons, stacked end to end, and appears to not affect the structure’s overall wave-dissipating power.

    “These voxels still maintain a big drag while allowing fish to move inside,” Ferrandis says.

    The team is currently fabricating cement voxel structures and assembling them into a lab-scale architected reef, which they will test under various wave conditions. They envision that the voxel design could be modular, and scalable to any desired size, and easy to transport and install in various offshore locations. “Now we’re simulating actual sea patterns, and testing how these models will perform when we eventually have to deploy them,” says Anjali Sinha, a graduate student at MIT who recently joined the group.

    Going forward, the team hopes to work with beach towns in Massachusetts to test the structures on a pilot scale.

    “These test structures would not be small,” Triantafyllou emphasizes. “They would be about a mile long, and about 5 meters tall, and would cost something like 6 million dollars per mile. So it’s not cheap. But it could prevent billions of dollars in storm damage. And with climate change, protecting the coasts will become a big issue.”

    This work was funded, in part, by the U.S. Defense Advanced Research Projects Agency.

  • Understanding the impacts of mining on local environments and communities

    Hydrosocial displacement refers to the idea that resolving water conflict in one area can shift the conflict to a different area. The concept was coined by Scott Odell, a visiting researcher in MIT’s Environmental Solutions Initiative (ESI). As part of ESI’s Program on Mining and the Circular Economy, Odell researches the impacts of extractive industries on local environments and communities, especially in Latin America. He has found that hydrosocial displacement often occurs in regions where the mining industry is vying for precious water sources that are already stressed by climate change.

    Odell is working with John Fernández, ESI director and professor in the Department of Architecture, on a project that is examining the converging impacts of climate change, mining, and agriculture in Chile. The work is funded by a seed grant from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Specifically, the project seeks to answer how the expansion of seawater desalination by the mining industry is affecting local populations, and how climate change and mining affect Andean glaciers and the agricultural communities dependent upon them.

    By working with communities in mining areas, Odell and Fernández are gaining a sense of the burden that mining minerals needed for the clean energy transition is placing on local populations, and the types of conflicts that arise when water sources become polluted or scarce. This work is of particular importance considering over 100 countries pledged a commitment to the clean energy transition at the recent United Nations climate change conference, known as COP28.

    J-WAFS Community Spotlight on Scott Odell

    Water, humanity’s lifeblood

    At the March 2023 United Nations (U.N.) Water Conference in New York, U.N. Secretary-General António Guterres warned “water is in deep trouble. We are draining humanity’s lifeblood through vampiric overconsumption and unsustainable use and evaporating it through global heating.” A quarter of the world’s population already faces “extremely high water stress,” according to the World Resources Institute. In an effort to raise awareness of major water-related issues and inspire action for innovative solutions, the U.N. created World Water Day, observed every year on March 22. This year’s theme is “Water for Peace,” underscoring the fact that even though water is a basic human right and intrinsic to every aspect of life, it is increasingly fought over as supplies dwindle due to problems including drought, overuse, or mismanagement.

    The “Water for Peace” theme is exemplified in Fernández and Odell’s J-WAFS project, where findings are intended to inform policies to reduce social and environmental harms inflicted on mining communities and their limited water sources.

    “Despite broad academic engagement with mining and climate change separately, there has been a lack of analysis of the societal implications of the interactions between mining and climate change,” says Odell. “This project is helping to fill the knowledge gap. Results will be summarized in Spanish and English and distributed to interested and relevant parties in Chile, ensuring that the results can be of benefit to those most impacted by these challenges,” he adds.

    The effects of mining for the clean energy transition

    Global climate change is understood to be the most pressing environmental issue facing humanity today. Mitigating climate change requires reducing carbon emissions by transitioning away from conventional energy derived from burning fossil fuels, to more sustainable energy sources like solar and wind power. Because copper is an excellent conductor of electricity, it will be a crucial element in the clean energy transition, in which more solar panels, wind turbines, and electric vehicles will be manufactured. “We are going to see a major increase in demand for copper due to the clean energy transition,” says Odell.

    In 2021, Chile produced 26 percent of the world’s copper, more than twice as much as any other country, Odell explains. Much of Chile’s mining is concentrated in and around the Atacama Desert — the world’s driest desert. Unfortunately, mining requires large amounts of water for a variety of processes, including controlling dust at the extraction site, cooling machinery, and processing and transporting ore.

    Chile is also one of the world’s largest exporters of agricultural products. Farmland is typically situated in the valleys downstream of several mines in the high Andes region, meaning mines get first access to water. This can lead to water conflict between mining operations and agricultural communities. Compounding the problem of mining for greener energy materials to combat climate change are the very effects of climate change itself. According to the Chilean government, the country has suffered 13 years of the worst drought in its history. While this is detrimental to the mining industry, it is also concerning for those working in agriculture, including the Indigenous Atacameño communities that live closest to the Escondida mine, the largest copper mine in the world. “There was never a lot of water to go around, even before the mine,” Odell says. The addition of Escondida stresses an already strained water system, leaving Atacameño farmers and individuals vulnerable to severe water insecurity.

    What’s more, waste from mining, known as tailings, includes minerals and chemicals that can contaminate water in nearby communities if not properly handled and stored. Odell says the secure storage of tailings is a high priority in earthquake-prone Chile. “If an earthquake were to hit and damage a tailings dam, it could mean toxic materials flowing downstream and destroying farms and communities,” he says.

    Chile’s treasured glaciers are another piece of the mining, climate change, and agricultural puzzle. Caroline White-Nockleby, a PhD candidate in MIT’s Program in Science, Technology, and Society, is working with Odell and Fernández on the J-WAFS project and leading the research specifically on glaciers. “These may not be the picturesque bright blue glaciers that you might think of, but they are, nonetheless, an important source of water downstream,” says White-Nockleby. She goes on to explain that there are a few different ways that mines can impact glaciers.

    In some cases, mining companies have proposed to move or even destroy glaciers to get at the ore beneath. Other impacts include dust from mining that falls on glaciers. White-Nockleby says, “this makes the glaciers a darker color, so, instead of reflecting the sun’s rays away, [the glacier] may absorb the heat and melt faster.” This shows that even when not directly intervening with glaciers, mining activities can cause glacial decline, adding to the threat glaciers already face due to climate change. She also notes that “glaciers are an important water storage facility,” describing how, on an annual cycle, glaciers freeze and melt, allowing runoff that downstream agricultural communities can utilize. If glaciers suddenly melt too quickly, flooding of downstream communities can occur.

    Desalination offers a possible, but imperfect, solution

    Chile’s extensive coastline makes it uniquely positioned to utilize desalination — the removal of salts from seawater — to address water insecurity. Odell says that “over the last decade or so, there’s been billions of dollars of investments in desalination in Chile.”

    As part of his dissertation work at Clark University, Odell found broad optimism in Chile for solving water issues in the mining industry through desalination. Not only was the mining industry committed to building desalination plants, there was also political support, and support from some community members in highland communities near the mines. Yet, despite the optimism and investment, desalinated water was not replacing the use of continental water. He concluded that “desalination can’t solve water conflict if it doesn’t reduce demand for continental water supplies.”

    However, after publishing those results, Odell learned that new estimates at the national level showed that desalination operations had begun to replace the use of continental water after 2018. In two case studies that he currently focuses on — the Escondida and Los Pelambres copper mines — the mining companies have expanded their desalination objectives in order to reduce extraction from key continental sources. This seems to be due to a variety of factors. For one thing, in 2022, Chile’s water code was reformed to prioritize human water consumption and environmental protection of water during scarcity and in the allocation of future rights. It also shortened the granting of water rights from “in perpetuity” to 30 years. Under this new code, it is possible that the mining industry may have expanded its desalination efforts because it viewed continental water resources as less secure, Odell surmises.

    As part of the J-WAFS project, Odell has found that recent reactions have been mixed when it comes to the rapid increase in the use of desalination. He spent over two months doing fieldwork in Chile by conducting interviews with members of government, industry, and civil society at the Escondida, Los Pelambres, and Andina mining sites, as well as in Chile’s capital city, Santiago. He has spoken to local and national government officials, leaders of fishing unions, representatives of mining and desalination companies, and farmers. He observed that in the communities where the new desalination plants are being built, there have been concerns from community members as to whether they will get access to the desalinated water, or if it will belong solely to the mines.

    Interviews at the Escondida and Los Pelambres sites, in which desalination operations are already in place or under construction, indicate acceptance of the presence of desalination plants combined with apprehension about unknown long-term environmental impacts. At a third mining site, Andina, there have been active protests against a desalination project that would supply water to a neighboring mine, Los Bronces. In that community, there has been a blockade of the desalination operation by the fishing federation. “They were blockading that operation for three months because of concerns over what the desalination plant would do to their fishing grounds,” Odell says. And this is where the idea of hydrosocial displacement comes into the picture, he explains. Even though desalination operations are easing tensions with highland agricultural communities, new issues are arising for the communities on the coast. “We can’t just look to desalination to solve our problems if it’s going to create problems somewhere else,” Odell advises.

    Within the process of hydrosocial displacement, interacting geographical, technical, economic, and political factors constrain the range of responses to address the water conflict. For example, communities that have more political and financial power tend to be better equipped to solve water conflict than less powerful communities. In addition, hydrosocial concerns usually follow the flow of water downstream, from the highlands to coastal regions. Odell says that this raises the need to look at water from a broader perspective.

    “We tend to address water concerns one by one and that can, in practice, end up being kind of like whack-a-mole,” says Odell. “When we think of the broader hydrological system, water is very much linked, and we need to look across the watershed. We can’t just be looking at the specific community affected now, but who else is affected downstream, and will be affected in the long term. If we do solve a water issue by moving it somewhere else, like moving a tailings dam somewhere else, or building a desalination plant, resources are needed in the receiving community to respond to that,” suggests Odell.

    The company building the desalination plant and the fishing federation ultimately reached an agreement and the desalination operation will be moving forward. But Odell notes, “the protest highlights concern about the impacts of the operation on local livelihoods and environments within the much larger context of industrial pollution in the area.”

    The power of communities

    The protest by the fishing federation is one example of communities coming together to have their voices heard. Recent proposals by mining companies that would affect glaciers and other water sources used by agricultural communities have led to other protests that resulted in new agreements to protect local water supplies and the withdrawal of some of the mining proposals.

    Odell observes that communities have also gone to the courts to raise their concerns. The Atacameño communities, for example, have drawn attention to over-extraction of water resources by the Escondida mine. “Community members are also pursuing education in these topics so that there’s not such a power imbalance between mining companies and local communities,” Odell remarks. This demonstrates the power local communities can have to protect continental water resources.

    The political and social landscape of Chile may also be changing in favor of local communities. Beginning with what is now referred to as the Estallido Social (social outburst) over inequality in 2019, Chile has undergone social upheaval that resulted in voters calling for a new constitution. Gabriel Boric, a progressive candidate whose top priorities include social and environmental issues, was elected president during this period. These trends have brought major attention to issues of economic inequality, environmental harms of mining, and environmental justice, which is putting pressure on the mining industry to make a case for its operations in the country, and to justify the environmental costs of mining.

    What happens after the mine dries up?

    From his fieldwork interviews, Odell has learned that the development of mines within communities can offer benefits. Mining companies typically invest directly in communities through employment, road construction, and sometimes even by building or investing in schools, stadiums, or health clinics. Indirectly, mines can have spillover effects in the economy since miners might support local restaurants, hotels, or stores. But what happens when the mine closes? As one community member Odell interviewed stated: “When the mine is gone, what are we going to have left besides a big hole in the ground?”

    Odell suggests that a multi-pronged approach should be taken to address the future state of water and mining. First, he says, we need to have broader conversations about the nature of our consumption and production at domestic and global scales. “Mining is driven indirectly by our consumption of energy and directly by our consumption of everything from our buildings to devices to cars,” Odell states. “We should be looking for ways to moderate our consumption and consume smarter through both policy and practice so that we don’t solve climate change while creating new environmental harms through mining.”

    One of the main ways we can do this is by advancing the circular economy: recycling metals already in the system, or even in landfills, to help build our new clean energy infrastructure. Even so, the clean energy transition will still require mining, but according to Odell, that mining can be done better. “Mining companies and government need to do a better job of consulting with communities. We need solid plans and financing for mine closures in place from the beginning of mining operations, so that when the mine dries up, there’s the money needed to secure tailings dams and protect the communities who will be there forever,” Odell concludes.

    Overall, it will take an engaged society — from the mining industry to government officials to individuals — to think critically about the role we each play in our quest for a more sustainable planet, and what that might mean for the most vulnerable populations among us.

  • Lessons from Fukushima: Prepare for the unlikely

    When a devastating earthquake and tsunami overwhelmed the protective systems at the Fukushima Dai’ichi nuclear power plant complex in Japan in March 2011, it triggered a sequence of events leading to one of the worst releases of radioactive materials in the world to date. Although nuclear energy is having a revival as a low-emissions energy source to mitigate climate change, the Fukushima accident is still cited as a reason for hesitancy in adopting it.

    A new study synthesizes information from multidisciplinary sources to understand how the Fukushima Dai’ichi disaster unfolded, and points to the importance of mitigation measures and last lines of defense — even against accidents considered highly unlikely. These procedures have received relatively little attention, but they are critical in determining how severe the consequences of a reactor failure will be, the researchers say.

    The researchers note that their synthesis is one of the few attempts to look at data across disciplinary boundaries, including: the physics and engineering of what took place within the plant’s systems, the plant operators’ actions throughout the emergency, actions by emergency responders, the meteorology of radionuclide releases and transport, and the environmental and health consequences documented since the event.

    The study appears in the journal iScience, in an open-access paper by postdoc Ali Ayoub and Professor Haruko Wainwright at MIT, along with others in Switzerland, Japan, and New Mexico.

    Since 2013, Wainwright has been leading the research to integrate all the radiation monitoring data in the Fukushima region into unified maps. “I was staring at the contamination map for nearly 10 years, wondering what created the main plume extending in the northwest direction, but I could not find exact information,” Wainwright says. “Our study is unique because we started from the consequence, the contamination map, and tried to identify the key factors for the consequence. Other people study the Fukushima accident from the root cause, the tsunami.”

    One thing they found was that while all the operating reactors, units 1, 2, and 3, suffered core meltdowns as a result of the failure of emergency cooling systems, units 1 and 3 — although they did experience hydrogen explosions — did not release as much radiation to the environment because their venting systems essentially worked to relieve pressure inside the containment vessels as intended. But the same system in unit 2 failed badly.

    “People think that the hydrogen explosion or the core meltdown were the worst things, or the major driver of the radiological consequences of the accident,” Wainwright says, “but our analysis found that’s not the case.” Much more significant in terms of the radiological release was the failure of the one venting mechanism.

    “There is a pressure-release mechanism that goes through water where a lot of the radionuclides get filtered out,” she explains. That system was effective in units 1 and 3, filtering out more than 90 percent of the radioactive elements before the gas was vented. However, “in unit 2, that pressure release mechanism got stuck, and the operators could not manually open it.” A hydrogen explosion in unit 1 had damaged the pressure relief mechanism of unit 2. This led to a breach of the containment structure and direct, unfiltered venting to the atmosphere, which, according to the new study, was what produced the greatest amount of contamination from the whole weeks-long event.

    Another factor was the timing of the attempt to vent the pressure buildup in the reactor. Guidelines at the time, and to this day in many reactors, specified that no venting should take place until the pressure inside the reactor containment vessel reached a specified threshold, with no regard to the wind directions at the time. In the case of Fukushima, an earlier venting could have dramatically reduced the impact: Much of the release happened when winds were blowing directly inland, but earlier the wind had been blowing offshore.
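
    The study’s point about timing can be captured in a toy decision rule, sketched below. This is purely illustrative and is not an actual or proposed plant procedure; the function, threshold, and margin are invented for the example.

    ```python
    def should_vent(pressure_pa, threshold_pa, wind_blowing_offshore, margin=0.8):
        """Toy illustration of the timing argument, not a real operating procedure:
        vent at the design pressure threshold, but also allow an earlier filtered
        venting when winds would carry the release offshore."""
        if pressure_pa >= threshold_pa:
            return True                # conventional rule: vent only at the threshold
        if wind_blowing_offshore and pressure_pa >= margin * threshold_pa:
            return True                # earlier venting while dispersion is favorable
        return False
    ```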

    “That pressure-release mechanism has not been a major focus of the engineering community,” she says. While there is appropriate attention to measures that prevent a core meltdown in the first place, “this sort of last line of defense has not been the main focus and should get more attention.”

    Wainwright says the study also underlines several successes in the management of the Fukushima accident. Many of the safety systems did work as they were designed. For example, even though the oldest reactor, unit 1, suffered the greatest internal damage, it released little radioactive material. Most people were able to evacuate from the 20-kilometer (12-mile) zone before the largest release happened. The mitigation measures were “somewhat successful,” Wainwright says. But there was tremendous confusion and anger during and after the accident because there were no preparations in place for such an event.

    Much work has focused on ways to prevent the kind of accidents that happened at Fukushima — for example, in the U.S. reactor operators can deploy portable backup power supplies to maintain proper reactor cooling at any reactor site. But the ongoing situation at the Zaporizhzhia nuclear complex in Ukraine, where nuclear safety is challenged by acts of war, demonstrates that despite engineers’ and operators’ best efforts to prevent it, “the totally unexpected could still happen,” Wainwright says.

    “The big-picture message is that we should have equal attention to both prevention and mitigation of accidents,” she says. “This is the essence of resilience, and it applies beyond nuclear power plants to all essential infrastructure of a functioning society, for example, the electric grid, the food and water supply, the transportation sector, etc.”

    One thing the researchers recommend is that in designing evacuation protocols, planners should make more effort to learn from much more frequent disasters such as wildfires and hurricanes. “We think getting more interdisciplinary, transdisciplinary knowledge from other kinds of disasters would be essential,” she says. Most of the emergency response strategies presently in place, she says, were designed in the 1980s and ’90s, and need to be modernized. “Consequences can be mitigated. A nuclear accident does not have to be a catastrophe, as is often portrayed in popular culture,” Wainwright says.

    The research team included Giovanni Sansavini at ETH Zurich in Switzerland; Randall Gauntt at Sandia National Laboratories in New Mexico; and Kimiaki Saito at the Japan Atomic Energy Agency.

  • Study finds lands used for grazing can worsen or help climate change

    When it comes to global climate change, livestock grazing can be either a blessing or a curse, according to a new study, which offers clues on how to tell the difference.

    If managed properly, the study shows, grazing can actually increase the amount of carbon from the air that gets stored in the ground and sequestered for the long run. But if there is too much grazing, soil erosion can result, and the net effect is to cause more carbon losses, so that the land becomes a net carbon source, instead of a carbon sink. And the study found that the latter is far more common around the world today.

    The new work, published today in the journal Nature Climate Change, provides ways to determine the tipping point between the two, for grazing lands in a given climate zone and soil type. It also provides an estimate of the total amount of carbon that has been lost over past decades due to livestock grazing, and of how much could be removed from the atmosphere if optimized grazing management were implemented. The study was carried out by Cesar Terrer, an assistant professor of civil and environmental engineering at MIT; Shuai Ren, a PhD student at the Chinese Academy of Sciences whose thesis is co-supervised by Terrer; and four others.

    “This has been a matter of debate in the scientific literature for a long time,” Terrer says. “In general experiments, grazing decreases soil carbon stocks, but surprisingly, sometimes grazing increases soil carbon stocks, which is why it’s been puzzling.”

    What happens, he explains, is that “grazing could stimulate vegetation growth through easing resource constraints such as light and nutrients, thereby increasing root carbon inputs to soils, where carbon can stay there for centuries or millennia.”

    But that only works up to a certain point, the team found after a careful analysis of 1,473 soil carbon observations from different grazing studies from many locations around the world. “When you cross a threshold in grazing intensity, or the amount of animals grazing there, that is when you start to see sort of a tipping point — a strong decrease in the amount of carbon in the soil,” Terrer explains.

    That loss is thought to be primarily from increased soil erosion on the denuded land. And with that erosion, Terrer says, “basically you lose a lot of the carbon that you have been locking in for centuries.”

    The studies the team compiled, although they differed somewhat, used essentially the same methodology: fence off a portion of land so that livestock can’t access it, then after some time take soil samples from within the enclosed area and from comparable nearby areas that have been grazed, and compare the carbon content of the two.

    “Along with the data on soil carbon for the control and grazed plots,” he says, “we also collected a bunch of other information, such as the mean annual temperature of the site, mean annual precipitation, plant biomass, and properties of the soil, like pH and nitrogen content. And then, of course, we estimate the grazing intensity — aboveground biomass consumed, because that turns out to be the key parameter.”  

    Using artificial-intelligence models, the authors quantified the importance of each of these drivers — grazing intensity, temperature, precipitation, soil properties — in modulating the sign (positive or negative) and magnitude of the impact of grazing on soil carbon stocks. “Interestingly, we found soil carbon stocks increase and then decrease with grazing intensity, rather than the expected linear response,” says Ren.
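
    The sketch below illustrates this style of analysis: generate synthetic observations with a hump-shaped response, fit a generic random-forest model, and scan grazing intensity at fixed site conditions to find a predicted optimum. The data, model choice, and functional forms are stand-ins for illustration, not the authors’ dataset or method.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)

    # Toy stand-ins for the compiled observations: each row is one grazed-vs-fenced
    # comparison, with site covariates and the change in soil carbon stock.
    n = 1473
    intensity = rng.uniform(0, 1, n)       # fraction of aboveground biomass consumed
    mat = rng.uniform(-5, 25, n)           # mean annual temperature, deg C
    precip = rng.uniform(100, 2000, n)     # mean annual precipitation, mm
    # Synthetic hump-shaped response: small gains at light grazing, losses past a
    # threshold, hotter sites losing more (mimics the qualitative finding only).
    delta_soc = (0.5 * intensity - 1.5 * intensity**2
                 - 0.02 * mat * intensity + rng.normal(scale=0.1, size=n))

    X = np.column_stack([intensity, mat, precip])
    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, delta_soc)

    # Crude "optimal intensity" for one site: scan intensities at fixed covariates.
    grid = np.linspace(0, 1, 101)
    site = np.column_stack([grid, np.full_like(grid, 10.0), np.full_like(grid, 600.0)])
    pred = model.predict(site)
    print("intensity with highest predicted soil-carbon change:", grid[pred.argmax()])
    ```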

    Having developed the model through AI methods and validated it, including by comparing its predictions with those based on underlying physical principles, the researchers could then apply it to estimate both past and future effects. “In this case,” Terrer says, “we use the model to quantify the historical losses in soil carbon stocks from grazing. And we found that 46 petagrams [billion metric tons] of soil carbon, down to a depth of one meter, have been lost in the last few decades due to grazing.”

    By way of comparison, total carbon emissions from all fossil fuels amount to about 10 petagrams per year, so the loss from grazing equals more than four years’ worth of the world’s fossil-fuel emissions.

    What they found was “an overall decline in soil carbon stocks, but with a lot of variability,” Terrer says. The analysis showed that the interplay between grazing intensity and environmental conditions such as temperature could explain the variability, with higher grazing intensity and hotter climates resulting in greater carbon loss. “This means that policy-makers should take into account local abiotic and biotic factors to manage rangelands efficiently,” Ren notes. “By ignoring such complex interactions, we found that using IPCC [Intergovernmental Panel on Climate Change] guidelines would underestimate grazing-induced soil carbon loss by a factor of three globally.”

    Using an approach that incorporates local environmental conditions, the team produced global, high-resolution maps of optimal grazing intensity and the threshold of intensity at which carbon starts to decrease very rapidly. These maps are expected to serve as important benchmarks for evaluating existing grazing practices and provide guidance to local farmers on how to effectively manage their grazing lands.

    Then, using that map, the team estimated how much carbon could be captured if all grazing lands were limited to their optimum grazing intensity. Currently, the authors found, about 20 percent of all pasturelands have crossed the thresholds, leading to severe carbon losses. However, they found that under the optimal levels, global grazing lands would sequester 63 petagrams of carbon. “It is amazing,” Ren says. “This value is roughly equivalent to a 30-year carbon accumulation from global natural forest regrowth.”

    That would be no simple task, of course. To achieve optimal levels, the team found that approximately 75 percent of all grazing areas need to reduce grazing intensity. Overall, if the world seriously reduces the amount of grazing, “you have to reduce the amount of meat that’s available for people,” Terrer says.

    “Another option is to move cattle around,” he says, “from areas that are more severely affected by grazing intensity, to areas that are less affected. Those rotations have been suggested as an opportunity to avoid the more drastic declines in carbon stocks without necessarily reducing the availability of meat.”

    This study didn’t delve into these social and economic implications, Terrer says. “Our role is to just point out what would be the opportunity here. It shows that shifts in diets can be a powerful way to mitigate climate change.”

    “This is a rigorous and careful analysis that provides our best look to date at soil carbon changes due to livestock grazing practiced worldwide,” says Ben Bond-Lamberty, a terrestrial ecosystem research scientist at Pacific Northwest National Laboratory, who was not associated with this work. “The authors’ analysis gives us a unique estimate of soil carbon losses due to grazing and, intriguingly, where and how the process might be reversed.”

    He adds: “One intriguing aspect to this work is the discrepancies between its results and the guidelines currently used by the IPCC — guidelines that affect countries’ commitments, carbon-market pricing, and policies.” However, he says, “As the authors note, the amount of carbon historically grazed soils might be able to take up is small relative to ongoing human emissions. But every little bit helps!”

    “Improved management of working lands can be a powerful tool to combat climate change,” says Jonathan Sanderman, carbon program director of the Woodwell Climate Research Center in Falmouth, Massachusetts, who was not associated with this work. He adds, “This work demonstrates that while, historically, grazing has been a large contributor to climate change, there is significant potential to decrease the climate impact of livestock by optimizing grazing intensity to rebuild lost soil carbon.”

    Terrer states that for now, “we have started a new study, to evaluate the consequences of shifts in diets for carbon stocks. I think that’s the million-dollar question: How much carbon could you sequester, compared to business as usual, if diets shift to more vegan or vegetarian?” The answers will not be simple, because a shift to more vegetable-based diets would require more cropland, which can also have different environmental impacts. Pastures take more land than crops, but produce different kinds of emissions. “What’s the overall impact for climate change? That is the question we’re interested in,” he says.

    The research team included Juan Li, Yingfao Cao, Sheshan Yang, and Dan Liu, all with the Chinese Academy of Sciences. The work was supported by the Second Tibetan Plateau Scientific Expedition and Research Program, and the Science and Technology Major Project of Tibetan Autonomous Region of China.

  • Reducing pesticide use while increasing effectiveness

    Farming can be a low-margin, high-risk business, subject to weather and climate patterns, insect population cycles, and other unpredictable factors. Farmers need to be savvy managers of the many resources they deal with, and chemical fertilizers and pesticides are among their major recurring expenses.

    Despite the importance of these chemicals, a lack of technology for monitoring and optimizing sprays has forced farmers to rely on personal experience and rules of thumb to decide how to apply them. As a result, these chemicals tend to be over-sprayed, leading to runoff into waterways and buildup in the soil.

    That could change, thanks to a new approach of feedback-optimized spraying, invented by AgZen, an MIT spinout founded in 2020 by Professor Kripa Varanasi and Vishnu Jayaprakash SM ’19, PhD ’22.

    AgZen has developed a system for farming that can monitor exactly how much of the sprayed chemicals adheres to plants, in real time, as the sprayer drives through a field. Built-in software running on a tablet shows the operator exactly how much of each leaf has been covered by the spray.

    Over the past decade, AgZen’s founders have developed products and technologies to control the interactions of droplets and sprays with plant surfaces. The Boston-based venture-backed company launched a new commercial product in 2024 and is currently piloting another related product. Field tests of both have shown the products can help farmers spray more efficiently and effectively, using fewer chemicals overall.

    “Worldwide, farms spend approximately $60 billion a year on pesticides. Our objective is to reduce the number of pesticides sprayed and lighten the financial burden on farms without sacrificing effective pest management,” Varanasi says.

    Getting droplets to stick

    While the world pesticide market is growing rapidly, a lot of the pesticides sprayed don’t reach their target. A significant portion bounces off the plant surfaces, lands on the ground, and becomes part of the runoff that flows to streams and rivers, often causing serious pollution. Some of these pesticides can be carried away by wind over very long distances.

    “Drift, runoff, and poor application efficiency are well-known, longstanding problems in agriculture, but we can fix this by controlling and monitoring how sprayed droplets interact with leaves,” Varanasi says.

    With support from MIT Tata Center and the Abdul Latif Jameel Water and Food Systems Lab, Varanasi and his team analyzed how droplets strike plant surfaces, and explored ways to increase application efficiency. This research led them to develop a novel system of nozzles that cloak droplets with compounds that enhance the retention of droplets on the leaves, a product they call EnhanceCoverage.

    Field studies across regions — from Massachusetts to California to Italy and France — showed that this droplet-optimization system could allow farmers to cut the amount of chemicals needed by more than half because more of the sprayed substances would stick to the leaves.

    Measuring coverage

    However, in trying to bring this technology to market, the researchers faced a sticky problem: Nobody knew how well pesticide sprays were adhering to the plants in the first place, so how could AgZen say that the coverage was better with its new EnhanceCoverage system?

    “I had grown up spraying with a backpack on a small farm in India, so I knew this was an issue,” Jayaprakash says. “When we spoke to growers, they told me how complicated spraying is when you’re on a large machine. Whenever you spray, there are so many things that can influence how effective your spray is. How fast do you drive the sprayer? What flow rate are you using for the chemicals? What chemical are you using? What’s the age of the plants, what’s the nozzle you’re using, what is the weather at the time? All these things influence agrochemical efficiency.”

    Agricultural spraying essentially comes down to dissolving a chemical in water and then spraying droplets onto the plants. “But the interaction between a droplet and the leaf is complex,” Varanasi says. “We were coming in with ways to optimize that, but what the growers told us is, hey, we’ve never even really looked at that in the first place.”

    Although farmers have been spraying agricultural chemicals on a large scale for about 80 years, they’ve “been forced to rely on general rules of thumb and pick all these interlinked parameters, based on what’s worked for them in the past. You pick a set of these parameters, you go spray, and you’re basically praying for outcomes in terms of how effective your pest control is,” Varanasi says.

    Before AgZen could sell farmers on the new system to improve droplet coverage, the company had to invent a way to measure precisely how much spray was adhering to plants in real-time.

    Comparing before and after

    The system they came up with, which they tested extensively on farms across the country last year, involves a unit that can be bolted onto the spraying arm of virtually any sprayer. It carries two sensor stacks, one just ahead of the sprayer nozzles and one behind. Then, built-in software running on a tablet shows the operator exactly how much of each leaf has been covered by the spray. It also computes how much those droplets will spread out or evaporate, leading to a precise estimate of the final coverage.

    “There’s a lot of physics that governs how droplets spread and evaporate, and this has been incorporated into software that a farmer can use,” Varanasi says. “We bring a lot of our expertise into understanding droplets on leaves. All these factors, like how temperature and humidity influence coverage, have always been nebulous in the spraying world. But now you have something that can be exact in determining how well your sprays are doing.”
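
    Conceptually, the software starts from the coverage measured just behind the nozzles and adjusts it for how the deposited droplets will spread and evaporate. The sketch below only illustrates that idea; the functional forms, coefficients, and parameter names are placeholders, not AgZen’s actual droplet physics.

    ```python
    def final_coverage_fraction(measured_coverage, spread_factor, temp_c, rel_humidity):
        """Illustrative only: adjust the coverage fraction seen just behind the
        nozzles for droplet spreading and evaporation. Coefficients are placeholders."""
        spread_gain = spread_factor ** 2          # wetted area grows with the square of spread
        evap_loss = max(0.0, 0.01 * (temp_c - 20) + 0.3 * (1 - rel_humidity))
        covered = measured_coverage * spread_gain * (1 - min(evap_loss, 0.9))
        return min(covered, 1.0)

    print(final_coverage_fraction(0.12, spread_factor=1.6, temp_c=30, rel_humidity=0.4))  # ~0.22
    ```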

    “We’re not only measuring coverage, but then we recommend how to act,” says Jayaprakash, who is AgZen’s CEO. “With the information we collect in real-time and by using AI, RealCoverage tells operators how to optimize everything on their sprayer, from which nozzle to use, to how fast to drive, to how many gallons of spray is best for a particular chemical mix on a particular acre of a crop.”

    The tool was developed to prove how much AgZen’s EnhanceCoverage nozzle system (which will be launched in 2025) improves coverage. But it turns out that monitoring and optimizing droplet coverage on leaves in real-time with this system can itself yield major improvements.

    “We worked with large commercial farms last year in specialty and row crops,” Jayaprakash says. “When we saved our pilot customers up to 50 percent of their chemical cost at a large scale, they were very surprised.” He says the tool has reduced chemical costs and volume in fallow field burndowns, weed control in soybeans, defoliation in cotton, and fungicide and insecticide sprays in vegetables and fruits. Along with data from commercial farms, field trials conducted by three leading agricultural universities have also validated these results.

    “Across the board, we were able to save between 30 and 50 percent on chemical costs and increase crop yields by enabling better pest control,” Jayaprakash says. “By focusing on the droplet-leaf interface, our product can help any foliage spray throughout the year, whereas most technological advancements in this space recently have been focused on reducing herbicide use alone.” The company now intends to lease the system across thousands of acres this year.

    And these efficiency gains can lead to significant returns at scale, he emphasizes: In the U.S., farmers currently spend $16 billion a year on chemicals, to protect about $200 billion of crop yields.

    The company launched its first product, the coverage optimization system called RealCoverage, this year, reaching a wide variety of farms with different crops and in different climates. “We’re going from proof-of-concept with pilots in large farms to a truly massive scale on a commercial basis with our lease-to-own program,” Jayaprakash says.

    “We’ve also been tapped by the USDA to help them evaluate practices to minimize pesticides in watersheds,” Varanasi says, noting that RealCoverage can also be useful for regulators, chemical companies, and agricultural equipment manufacturers.

    Once AgZen has proven the effectiveness of using coverage as a decision metric, and after the RealCoverage optimization system is widely in practice, the company will next roll out its second product, EnhanceCoverage, designed to maximize droplet adhesion. Because that system will require replacing all the nozzles on a sprayer, the researchers are doing pilots this year but will wait for a full rollout in 2025, after farmers have gained experience and confidence with their initial product.

    “There is so much wastage,” Varanasi says. “Yet farmers must spray to protect crops, and there is a lot of environmental impact from this. So, after all this work over the years, learning about how droplets stick to surfaces and so on, now the culmination of it in all these products for me is amazing, to see all this come alive, to see that we’ll finally be able to solve the problem we set out to solve and help farmers.”

  • in

    A new sensor detects harmful “forever chemicals” in drinking water

    MIT chemists have designed a sensor that detects tiny quantities of perfluoroalkyl and polyfluoroalkyl substances (PFAS) — chemicals found in food packaging, nonstick cookware, and many other consumer products.

    These compounds, also known as “forever chemicals” because they do not break down naturally, have been linked to a variety of harmful health effects, including cancer, reproductive problems, and disruption of the immune and endocrine systems.

    Using the new sensor technology, the researchers showed that they could detect PFAS levels as low as 200 parts per trillion in a water sample. The device they designed could offer a way for consumers to test their drinking water, and it could also be useful in industries that rely heavily on PFAS chemicals, including the manufacture of semiconductors and firefighting equipment.

    “There’s a real need for these sensing technologies. We’re stuck with these chemicals for a long time, so we need to be able to detect them and get rid of them,” says Timothy Swager, the John D. MacArthur Professor of Chemistry at MIT and the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences.

    Other authors of the paper are former MIT postdoc and lead author Sohyun Park and MIT graduate student Collette Gordon.

    Detecting PFAS

    Coatings containing PFAS chemicals are used in thousands of consumer products. In addition to nonstick coatings for cookware, they are also commonly used in water-repellent clothing, stain-resistant fabrics, grease-resistant pizza boxes, cosmetics, and firefighting foams.

    These fluorinated chemicals, which have been in widespread use since the 1950s, can be released into water, air, and soil from factories, sewage treatment plants, and landfills. They have been found in drinking water sources in all 50 states.

    In 2023, the Environmental Protection Agency created an “advisory health limit” for two of the most hazardous PFAS chemicals, known as perfluorooctanoic acid (PFOA) and perfluorooctyl sulfonate (PFOS). These advisories call for a limit of 0.004 parts per trillion for PFOA and 0.02 parts per trillion for PFOS in drinking water.

    Currently, the only way that a consumer could determine if their drinking water contains PFAS is to send a water sample to a laboratory that performs mass spectrometry testing. However, this process takes several weeks and costs hundreds of dollars.

    To create a cheaper and faster way to test for PFAS, the MIT team designed a sensor based on lateral flow technology — the same approach used for rapid Covid-19 tests and pregnancy tests. Instead of a test strip coated with antibodies, the new sensor is embedded with a special polymer known as polyaniline, which can switch between semiconducting and conducting states when protons are added to the material.

    The researchers deposited these polymers onto a strip of nitrocellulose paper and coated them with a surfactant that can pull fluorocarbons such as PFAS out of a drop of water placed on the strip. When this happens, protons from the PFAS are drawn into the polyaniline and turn it into a conductor, reducing the electrical resistance of the material. This change in resistance, which can be measured precisely using electrodes and sent to an external device such as a smartphone, gives a quantitative measurement of how much PFAS is present.
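
    The readout step lends itself to a simple calibration sketch. Assuming the strip’s resistance falls roughly log-linearly with PFAS concentration over its working range (an assumption made here for illustration; the paper reports the sensor’s actual response), converting a resistance measurement into a concentration estimate might look like this:

    ```python
    import math

    # Hypothetical calibration points: (concentration in parts per trillion, resistance in ohms)
    CALIBRATION = [(200, 9.0e5), (1_000, 6.5e5), (5_000, 4.2e5), (25_000, 2.1e5)]

    def concentration_from_resistance(resistance_ohm):
        """Interpolate a PFAS concentration from the measured strip resistance.

        Uses log-linear interpolation between calibration points; readings
        outside the calibrated range are reported as None.
        """
        pts = sorted(CALIBRATION, key=lambda p: p[1])  # ascending resistance
        if not (pts[0][1] <= resistance_ohm <= pts[-1][1]):
            return None
        for (c_hi, r_lo), (c_lo, r_hi) in zip(pts, pts[1:]):
            if r_lo <= resistance_ohm <= r_hi:
                frac = (resistance_ohm - r_lo) / (r_hi - r_lo)
                log_c = math.log10(c_hi) + frac * (math.log10(c_lo) - math.log10(c_hi))
                return 10 ** log_c
        return None

    print(concentration_from_resistance(5.0e5))  # roughly a few thousand ppt
    ```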

    This approach works only with PFAS that are acidic, a group that includes two of the most harmful: PFOA and perfluorobutanoic acid (PFBA).

    A user-friendly system

    The current version of the sensor can detect concentrations as low as 200 parts per trillion for PFBA, and 400 parts per trillion for PFOA. This is not quite low enough to meet the current EPA guidelines, but the sensor uses only a fraction of a milliliter of water. The researchers are now working on a larger-scale device that would be able to filter about a liter of water through a membrane made of polyaniline, and they believe this approach should increase the sensitivity by more than a hundredfold, with the goal of meeting the very low EPA advisory levels.
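
    As a rough back-of-envelope (the real enhancement will depend on how completely the membrane captures PFAS and on the geometry of the sensing region), the benefit of filtering a larger volume can be framed as a preconcentration factor; the numbers and capture efficiency below are assumptions, not measured values.

    ```python
    def preconcentration_factor(filtered_volume_ml, sensed_volume_ml, capture_efficiency=1.0):
        """Idealized enhancement from accumulating analyte out of a larger sample.

        Assumes the captured PFAS ends up in the region the electrodes probe,
        scaled by an assumed capture efficiency; real devices will fall short.
        """
        return capture_efficiency * filtered_volume_ml / sensed_volume_ml

    # Filtering ~1 liter instead of sensing ~0.2 mL directly:
    print(preconcentration_factor(1000.0, 0.2, capture_efficiency=0.1))  # ~500x even at 10% capture
    ```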

    “We do envision a user-friendly, household system,” Swager says. “You can imagine putting in a liter of water, letting it go through the membrane, and you have a device that measures the change in resistance of the membrane.”

    Such a device could offer a less expensive, rapid alternative to current PFAS detection methods. If PFAS are detected in drinking water, there are commercially available filters that can be used on household drinking water to reduce those levels. The new testing approach could also be useful for factories that manufacture products with PFAS chemicals, so they could test whether the water used in their manufacturing process is safe to release into the environment.

    The research was funded by an MIT School of Science Fellowship to Gordon, a Bose Research Grant, and a Fulbright Fellowship to Park.

  • in

    MIT researchers remotely map crops, field by field

    Crop maps help scientists and policymakers track global food supplies and estimate how they might shift with climate change and growing populations. But getting accurate maps of the types of crops that are grown from farm to farm often requires on-the-ground surveys that only a handful of countries have the resources to maintain.

    Now, MIT engineers have developed a method to quickly and accurately label and map crop types without requiring in-person assessments of every single farm. The team’s method uses a combination of Google Street View images, machine learning, and satellite data to automatically determine the crops grown throughout a region, from one fraction of an acre to the next. 

    The researchers used the technique to automatically generate the first nationwide crop map of Thailand — a smallholder country where small, independent farms make up the predominant form of agriculture. The team created a border-to-border map of Thailand’s four major crops — rice, cassava, sugarcane, and maize — and determined which of the four was grown every 10 meters, without gaps, across the entire country. The resulting map achieved an accuracy of 93 percent, which the researchers say is comparable to on-the-ground mapping efforts in high-income, big-farm countries.

    The team is applying their mapping technique to other countries such as India, where small farms sustain most of the population but the type of crops grown from farm to farm has historically been poorly recorded.

    “It’s a longstanding gap in knowledge about what is grown around the world,” says Sherrie Wang, the d’Arbeloff Career Development Assistant Professor in MIT’s Department of Mechanical Engineering, and the Institute for Data, Systems, and Society (IDSS). “The final goal is to understand agricultural outcomes like yield, and how to farm more sustainably. One of the key preliminary steps is to map what is even being grown — the more granularly you can map, the more questions you can answer.”

    Wang, along with MIT graduate student Jordi Laguarta Soler and Thomas Friedel of the agtech company PEAT GmbH, will present a paper detailing their mapping method later this month at the AAAI Conference on Artificial Intelligence.

    Ground truth

    Smallholder farms are often run by a single family or farmer, who subsist on the crops and livestock that they raise. It’s estimated that smallholder farms support two-thirds of the world’s rural population and produce 80 percent of the world’s food. Keeping tabs on what is grown and where is essential to tracking and forecasting food supplies around the world. But the majority of these small farms are in low to middle-income countries, where few resources are devoted to keeping track of individual farms’ crop types and yields.

    Crop mapping efforts are mainly carried out in high-income regions such as the United States and Europe, where government agricultural agencies oversee crop surveys and send assessors to farms to label crops from field to field. These “ground truth” labels are then fed into machine-learning models that make connections between the ground labels of actual crops and satellite signals of the same fields. The models can then label and map wider swaths of farmland that assessors don’t visit but that satellites routinely image.

    “What’s lacking in low- and middle-income countries is this ground label that we can associate with satellite signals,” Laguarta Soler says. “Getting these ground truths to train a model in the first place has been limited in most of the world.”

    The team realized that, while many developing countries do not have the resources to maintain crop surveys, they could potentially use another source of ground data: roadside imagery, captured by services such as Google Street View and Mapillary, which send cars throughout a region to take continuous 360-degree images with dashcams and rooftop cameras.

    In recent years, such services have expanded into low- and middle-income countries. While the goal of these services is not specifically to capture images of crops, the MIT team saw that they could search the roadside images to identify crops.

    Cropped image

    In their new study, the researchers worked with Google Street View (GSV) images taken throughout Thailand — a country that the service has recently imaged fairly thoroughly, and which consists predominantly of smallholder farms.

    Starting with over 200,000 GSV images randomly sampled across Thailand, the team filtered out images that depicted buildings, trees, and general vegetation. About 81,000 images were crop-related. They set aside 2,000 of these, which they sent to an agronomist, who determined and labeled each crop type by eye. They then trained a convolutional neural network to automatically generate crop labels for the other 79,000 images, drawing on various training resources, including iNaturalist, a web-based crowdsourced biodiversity database, and GPT-4V, a “multimodal large language model” that enables a user to input an image and ask the model to identify what it depicts. For each of the 81,000 images, the model generated a label identifying which of the four crops the image most likely depicted: rice, maize, sugarcane, or cassava.
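
    For readers who want a feel for the image-labeling step, here is a minimal sketch in Python using a pretrained convolutional backbone; the architecture, hyperparameters, and folder layout are illustrative assumptions, not the exact model described in the paper.

    ```python
    import torch
    import torch.nn as nn
    from torch.utils.data import DataLoader
    from torchvision import datasets, models, transforms

    CROPS = ["rice", "maize", "sugarcane", "cassava"]  # the four classes mapped in Thailand

    # Standard ImageNet-style preprocessing for the roadside photos.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    # Assumes a folder per class of labeled images, e.g. data/train/rice/*.jpg (hypothetical layout).
    train_set = datasets.ImageFolder("data/train", transform=preprocess)
    train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

    # Fine-tune a pretrained CNN to predict one of the four crops per image.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, len(CROPS))

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(5):
        for images, labels in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    ```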

    The researchers then paired each labeled image with the corresponding satellite data taken of the same location throughout a single growing season. These satellite data include measurements across multiple wavelengths, such as a location’s greenness and its reflectivity (which can be a sign of water). 

    “Each type of crop has a certain signature across these different bands, which changes throughout a growing season,” Laguarta Soler notes.

    The team trained a second model to make associations between a location’s satellite data and its corresponding crop label. They then used this model to process satellite data taken of the rest of the country, where crop labels were not generated or available. From the associations that the model learned, it then assigned crop labels across Thailand, generating a country-wide map of crop types, at a resolution of 10 square meters.
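
    A sketch of this second stage might look like the following: pair each labeled location’s satellite time series with its crop label, fit a classifier, and then predict a label for every unlabeled pixel. The feature layout, file names, and the choice of a random-forest classifier are assumptions for illustration, not the model from the paper.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical files: each row is one labeled location's satellite time series,
    # flattened to (bands x dates) features such as greenness and reflectance.
    X_train = np.load("satellite_features_labeled.npy")   # shape (n_locations, n_features)
    y_train = np.load("crop_labels.npy")                  # 0=rice, 1=maize, 2=sugarcane, 3=cassava

    clf = RandomForestClassifier(n_estimators=300, n_jobs=-1)
    clf.fit(X_train, y_train)

    # Apply the learned associations to every pixel in the country.
    X_country = np.load("satellite_features_all_pixels.npy")
    predicted = clf.predict(X_country)        # one crop label per pixel
    np.save("thailand_crop_map_labels.npy", predicted)
    ```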

    This first-of-its-kind crop map included locations corresponding to the 2,000 GSV images that the researchers originally set aside and that had been labeled by the agronomist. These human-labeled images were used to validate the map’s labels: when the team checked whether the map’s labels matched the expert “gold standard” labels, they agreed 93 percent of the time.

    “In the U.S., we’re also looking at over 90 percent accuracy, whereas with previous work in India, we’ve only seen 75 percent because ground labels are limited,” Wang says. “Now we can create these labels in a cheap and automated way.”

    The researchers are moving to map crops across India, where roadside images via Google Street View and other services have recently become available.

    “There are over 150 million smallholder farmers in India,” Wang says. “India is covered in agriculture, almost wall-to-wall farms, but very small farms, and historically it’s been very difficult to create maps of India because there are very sparse ground labels.”

    The team is working to generate crop maps in India, which could be used to inform policies having to do with assessing and bolstering yields, as global temperatures and populations rise.

    “What would be interesting would be to create these maps over time,” Wang says. “Then you could start to see trends, and we can try to relate those things to anything like changes in climate and policies.”