More stories

  • 3 Questions: How AI is helping us monitor and support vulnerable ecosystems

    A recent study from Oregon State University estimated that more than 3,500 animal species are at risk of extinction because of factors including habitat alteration, overexploitation of natural resources, and climate change.

    To better understand these changes and protect vulnerable wildlife, conservationists like MIT PhD student and Computer Science and Artificial Intelligence Laboratory (CSAIL) researcher Justin Kay are developing computer vision algorithms that carefully monitor animal populations. A member of the lab of MIT Department of Electrical Engineering and Computer Science assistant professor and CSAIL principal investigator Sara Beery, Kay is currently working on tracking salmon in the Pacific Northwest, where they provide crucial nutrients to predators like birds and bears while managing populations of prey such as insects.

    With all that wildlife data, though, researchers have lots of information to sort through and many AI models to choose from to analyze it all. Kay and his colleagues at CSAIL and the University of Massachusetts Amherst are developing AI methods that make this data-crunching process much more efficient, including a new approach called “consensus-driven active model selection” (or “CODA”) that helps conservationists choose which AI model to use. Their work was named a Highlight Paper at the International Conference on Computer Vision (ICCV) in October.

    The research was supported, in part, by the National Science Foundation, the Natural Sciences and Engineering Research Council of Canada, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Here, Kay discusses the project, among other conservation efforts.

    Q: In your paper, you pose the question of which AI models will perform the best on a particular dataset. 
With as many as 1.9 million pre-trained models available in the HuggingFace Models repository alone, how does CODA help us address that challenge?

    A: Until recently, using AI for data analysis has typically meant training your own model. This requires significant effort to collect and annotate a representative training dataset, as well as to iteratively train and validate models. You also need a certain technical skill set to run and modify AI training code. The way people interact with AI is changing, though — in particular, there are now millions of publicly available pre-trained models that can perform a variety of predictive tasks very well. This potentially enables people to use AI to analyze their data without developing their own model, simply by downloading an existing model with the capabilities they need.

    But this poses a new challenge: Which model, of the millions available, should they use to analyze their data? Typically, answering this model selection question also requires you to spend a lot of time collecting and annotating a large dataset, albeit for testing models rather than training them. This is especially true for real applications, where user needs are specific, data distributions are imbalanced and constantly changing, and model performance may be inconsistent across samples.

    Our goal with CODA was to substantially reduce this effort. We do this by making the data annotation process “active.” Instead of requiring users to bulk-annotate a large test dataset all at once, in active model selection we make the process interactive, guiding users to annotate the most informative data points in their raw data. This is remarkably effective, often requiring users to annotate as few as 25 examples to identify the best model from their set of candidates. We’re very excited about CODA offering a new perspective on how to best utilize human effort in the development and deployment of machine-learning (ML) systems. 
As AI models become more commonplace, our work emphasizes the value of focusing effort on robust evaluation pipelines, rather than solely on training.

    Q: You applied the CODA method to classifying wildlife in images. Why did it perform so well, and what role can systems like this have in monitoring ecosystems in the future?

    A: One key insight was that when considering a collection of candidate AI models, the consensus of all of their predictions is more informative than any individual model’s predictions. This can be seen as a sort of “wisdom of the crowd”: On average, pooling the votes of all models gives you a decent prior over what the labels of individual data points in your raw dataset should be. Our approach with CODA is based on estimating a “confusion matrix” for each AI model — given that the true label for some data point is class X, what is the probability that an individual model predicts class X, Y, or Z? This creates informative dependencies between all of the candidate models, the categories you want to label, and the unlabeled points in your dataset.

    Consider an example application where you are a wildlife ecologist who has just collected a dataset containing potentially hundreds of thousands of images from cameras deployed in the wild. You want to know what species are in these images, a time-consuming task that computer vision classifiers can help automate. You are trying to decide which species classification model to run on your data. If you have labeled 50 images of tigers so far, and some model has performed well on those 50 images, you can be pretty confident it will perform well on the remainder of the (currently unlabeled) images of tigers in your raw dataset as well. You also know that when that model predicts some image contains a tiger, it is likely to be correct, and therefore that any model that predicts a different label for that image is more likely to be wrong. 
You can use all of these interdependencies to construct probabilistic estimates of each model’s confusion matrix, as well as a probability distribution over which model has the highest accuracy on the overall dataset. These design choices allow us to make more informed choices about which data points to label, and they are ultimately the reason why CODA performs model selection much more efficiently than past work.

    There are also a lot of exciting possibilities for building on top of our work. We think there may be even better ways of constructing informative priors for model selection based on domain expertise — for instance, if it is already known that one model performs exceptionally well on some subset of classes or poorly on others. There are also opportunities to extend the framework to support more complex machine-learning tasks and more sophisticated probabilistic models of performance. We hope our work can provide inspiration and a starting point for other researchers to keep pushing the state of the art.

    Q: You work in the Beerylab, led by Sara Beery, where researchers are combining the pattern-recognition capabilities of machine-learning algorithms with computer vision technology to monitor wildlife. What are some other ways your team is tracking and analyzing the natural world, beyond CODA?

    A: The lab is a really exciting place to work, and new projects are emerging all the time. We have ongoing projects monitoring coral reefs with drones, re-identifying individual elephants over time, and fusing multi-modal Earth observation data from satellites and in-situ cameras, just to name a few. Broadly, we look at emerging technologies for biodiversity monitoring, try to understand where the data analysis bottlenecks are, and develop new computer vision and machine-learning approaches that address those problems in a widely applicable way. It’s an exciting way of approaching problems that targets the “meta-questions” underlying the particular data challenges we face. 
The computer vision algorithms I’ve worked on that count migrating salmon in underwater sonar video are examples of that work. We often deal with shifting data distributions, even as we try to construct the most diverse training datasets we can. We always encounter something new when we deploy a new camera, and this tends to degrade the performance of computer vision algorithms. This is one instance of a general problem in machine learning called domain adaptation, but when we tried to apply existing domain adaptation algorithms to our fisheries data, we realized there were serious limitations in how existing algorithms were trained and evaluated. We were able to develop a new domain adaptation framework, published earlier this year in Transactions on Machine Learning Research, that addressed these limitations and led to advancements in fish counting, and even in self-driving and spacecraft analysis.

    One line of work that I’m particularly excited about is understanding how to better develop and analyze the performance of predictive ML algorithms in the context of what they are actually used for. Usually, the outputs from some computer vision algorithm — say, bounding boxes around animals in images — are not actually the thing that people care about, but rather a means to an end for answering a larger question — say, what species live here, and how is that changing over time? We have been working on methods to analyze predictive performance in this context and to reconsider the ways that we input human expertise into ML systems with this in mind. CODA was one example of this, where we showed that we could actually consider the ML models themselves as fixed and build a statistical framework to understand their performance very efficiently. We have recently been working on similar integrated analyses combining ML predictions with multi-stage prediction pipelines, as well as ecological statistical models. 
The natural world is changing at unprecedented rates and scales, and being able to quickly move from scientific hypotheses or management questions to data-driven answers is more important than ever for protecting ecosystems and the communities that depend on them. Advancements in AI can play an important role, but we need to think critically about the ways that we design, train, and evaluate algorithms in the context of these very real challenges.
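    The consensus-and-confusion-matrix idea Kay describes can be sketched in a few lines on synthetic data. This is a toy illustration, not the CODA implementation: the simulated model accuracies, the Dirichlet-style soft counts, and the disagreement-based query rule (a simplification of CODA's point-selection criterion) are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: preds[m, i] = class predicted by candidate model m on point i.
n_models, n_points, n_classes = 5, 200, 3
true_labels = rng.integers(n_classes, size=n_points)
accs = [0.9, 0.8, 0.7, 0.6, 0.5]          # simulated model accuracies
preds = np.stack([
    np.where(rng.random(n_points) < a, true_labels,
             rng.integers(n_classes, size=n_points))
    for a in accs
])

# "Wisdom of the crowd": majority vote over models as a prior pseudo-label.
def consensus(preds):
    return np.array([np.bincount(preds[:, i], minlength=n_classes).argmax()
                     for i in range(preds.shape[1])])

# Confusion-matrix counts per model: conf[m, true, pred].
# Seed with a uniform prior plus soft counts from the consensus labels.
conf = np.ones((n_models, n_classes, n_classes))
pseudo = consensus(preds)
for m in range(n_models):
    for i in range(n_points):
        conf[m, pseudo[i], preds[m, i]] += 0.1

# Active loop: label the ~25 most informative points (here, most disagreement).
labeled = set()
for step in range(25):
    disagreement = [len(set(preds[:, i])) if i not in labeled else -1
                    for i in range(n_points)]
    i = int(np.argmax(disagreement))
    labeled.add(i)
    y = true_labels[i]                     # oracle = the human annotator
    for m in range(n_models):
        conf[m, y, preds[m, i]] += 1.0     # hard evidence replaces soft prior

# Estimated accuracy = diagonal mass of each model's confusion matrix.
est_acc = conf.diagonal(axis1=1, axis2=2).sum(1) / conf.sum((1, 2))
best = int(np.argmax(est_acc))
```

    After only 25 annotations the diagonal mass cleanly separates strong candidates from weak ones, which is the effect the interview describes: the consensus prior does most of the work before any human labels arrive.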

  • Book reviews technologies aiming to remove carbon from the atmosphere

    Two leading experts in the field of carbon capture and sequestration (CCS) — Howard J. Herzog, a senior research engineer in the MIT Energy Initiative, and Niall Mac Dowell, a professor in energy systems engineering at Imperial College London — explore methods for removing carbon dioxide already in the atmosphere in their new book, “Carbon Removal.” Published in October, the book is part of the Essential Knowledge series from the MIT Press, which consists of volumes “synthesizing specialized subject matter for nonspecialists” and includes Herzog’s 2018 book, “Carbon Capture.”

    Burning fossil fuels, as well as other human activities, causes the release of carbon dioxide (CO2) into the atmosphere, where it acts like a blanket that warms the Earth, resulting in climate change. Much attention has focused on mitigation technologies that reduce emissions, but in their book, Herzog and Mac Dowell have turned their attention to “carbon dioxide removal” (CDR), an approach that removes carbon already present in the atmosphere.

    In this new volume, the authors explain how CO2 naturally moves into and out of the atmosphere and present a brief history of carbon removal as a concept for dealing with climate change. They also describe the full range of “pathways” that have been proposed for removing CO2 from the atmosphere. Those pathways include engineered systems designed for “direct air capture” (DAC), as well as various “nature-based” approaches that call for planting trees or taking steps to enhance removal by biomass or the oceans. The book offers easily accessible explanations of the fundamental science and engineering behind each approach.

    The authors compare the “quality” of the different pathways based on the following metrics:

    Accounting. For public acceptance of any carbon-removal strategy, the authors note, the developers need to get the accounting right — and that’s not always easy. 
“If you’re going to spend money to get CO2 out of the atmosphere, you want to get paid for doing it,” notes Herzog. It can be tricky to measure how much you have removed, because there’s a lot of CO2 going in and out of the atmosphere all the time. Also, if your approach involves, say, burning fossil fuels, you must subtract the amount of CO2 that’s emitted from the total amount you claim to have removed. Then there’s the timing of the removal. With a DAC device, the removal happens right now, and the removed CO2 can be measured. “But if I plant a tree, it’s going to remove CO2 for decades. Is that equivalent to removing it right now?” Herzog queries. How to take that factor into account hasn’t yet been resolved.

    Permanence. Different approaches keep the CO2 out of the atmosphere for different durations of time. How long is long enough? As the authors explain, this is one of the biggest issues, especially with nature-based solutions, where events such as wildfires, pestilence, or land-use changes can release the stored CO2 back into the atmosphere. How do we deal with that?

    Cost. Cost is another key factor. Using a DAC device to remove CO2 costs far more than planting trees, but it yields immediate removal of a measurable amount of CO2 that can then be locked away forever. How does one monetize that trade-off?

    Additionality. “You’re doing this project, but would what you’re doing have been done anyway?” asks Herzog. “Is your effort additional to business as usual?” This question comes into play with many of the nature-based approaches involving trees, soils, and so on.

    Permitting and governance. These issues are especially important — and complicated — with approaches that involve doing things in the ocean. 
In addition, Herzog points out that some CCS projects could also achieve carbon removal, but they would have a hard time getting permits to build the pipelines and other needed infrastructure.

    The authors conclude that none of the CDR strategies now being proposed is a clear winner on all the metrics. However, they stress that carbon removal has the potential to play an important role in meeting our climate change goals — not by replacing our emissions-reduction efforts, but rather by supplementing them. As Herzog and Mac Dowell make clear in their book, many challenges must be addressed to move CDR from today’s speculation to deployment at scale, and the book supports the wider discussion about how to move forward. Indeed, the authors have fulfilled their stated goal: “to provide an objective analysis of the opportunities and challenges for CDR and to separate myth from reality.”

  • How to reduce greenhouse gas emissions from ammonia production

    Ammonia is one of the most widely produced chemicals in the world, used mostly as fertilizer, but also for the production of some plastics, textiles, and other applications. Its production, through processes that require high heat and pressure, accounts for up to 20 percent of all the greenhouse gases from the entire chemical industry, so efforts have been underway worldwide to find ways to reduce those emissions.

    Now, researchers at MIT have come up with a clever way of combining two different methods of producing the compound that minimizes waste products and that, when combined with some other simple upgrades, could reduce greenhouse emissions from production by as much as 63 percent, compared to the leading “low-emissions” approach being used today.

    The new approach is described in the journal Energy & Fuels, in a paper by MIT Energy Initiative (MITEI) Director William H. Green, graduate student Sayandeep Biswas, MITEI Director of Research Randall Field, and two others.

    “Ammonia has the most carbon dioxide emissions of any kind of chemical,” says Green, who is the Hoyt C. Hottel Professor in Chemical Engineering. “It’s a very important chemical,” he says, because its use as a fertilizer is crucial to being able to feed the world’s population.

    Until late in the 19th century, the most widely used source of nitrogen fertilizer was mined deposits of bat or bird guano, mostly from Chile, but that source was beginning to run out, and there were predictions that the world would soon be running short of food to sustain the population. But then a new chemical process, called the Haber-Bosch process after its inventors, made it possible to make ammonia out of nitrogen from the air and hydrogen, which was mostly derived from methane. 
But both the burning of fossil fuels to provide the needed heat and the use of methane to make the hydrogen led to massive climate-warming emissions from the process.

    To address this, two newer variations of ammonia production have been developed: so-called “blue ammonia,” where the greenhouse gases are captured right at the factory and then sequestered deep underground, and “green ammonia,” produced by a different chemical pathway, using electricity instead of fossil fuels to electrolyze water to make hydrogen.

    Blue ammonia is already beginning to be used, with a few plants operating now in Louisiana, Green says, and the ammonia mostly being shipped to Japan, “so that’s already kind of commercial.” Other parts of the world are starting to use green ammonia, especially in places that have lots of hydropower, solar, or wind to provide inexpensive electricity, including a giant plant now under construction in Saudi Arabia.

    But in most places, both blue and green ammonia are still more expensive than the traditional fossil-fuel-based version, so many teams around the world have been working on ways to cut these costs as much as possible so that the difference is small enough to be made up through tax subsidies or other incentives.

    The problem is growing, because as the population grows and wealth increases, there will be ever-increasing demand for nitrogen fertilizer. At the same time, ammonia is a promising substitute fuel to power hard-to-decarbonize transportation such as cargo ships and heavy trucks, which could lead to even greater needs for the chemical.

    “It definitely works” as a transportation fuel, by powering fuel cells that have been demonstrated for use by everything from drones to barges, tugboats, and trucks, Green says. 
“People think that the most likely market of that type would be for shipping,” he says, “because the downside of ammonia is it’s toxic and it’s smelly, and that makes it slightly dangerous to handle and to ship around.” So its best uses may be where it’s used in high volume and in relatively remote locations, like the high seas. In fact, the International Maritime Organization will soon be voting on new rules that might give a strong boost to the ammonia alternative for shipping.

    The key to the new proposed system is to combine the two existing approaches in one facility, with a blue ammonia factory next to a green ammonia factory. The process of generating hydrogen for the green ammonia plant leaves a lot of leftover oxygen that just gets vented to the air. Blue ammonia, on the other hand, uses a process called autothermal reforming that requires a source of pure oxygen, so if there’s a green ammonia plant next door, it can use that excess oxygen.

    “Putting them next to each other turns out to have significant economic value,” Green says. This synergy could help hybrid “blue-green ammonia” facilities serve as an important bridge toward a future where eventually green ammonia, the cleanest version, could finally dominate. But that future is likely decades away, Green says, so having the combined plants could be an important step along the way.

    “It might be a really long time before [green ammonia] is actually attractive” economically, he says. “Right now, it’s nowhere close, except in very special situations.” But the combined plants “could be a really appealing concept, and maybe a good way to start the industry,” because so far only small, standalone demonstration plants of the green process are being built.

    “If green or blue ammonia is going to become the new way of making ammonia, you need to find ways to make it relatively affordable in a lot of countries, with whatever resources they’ve got,” he says. 
This new proposed combination, he says, “looks like a really good idea that can help push things along. Ultimately, there’s got to be a lot of green ammonia plants in a lot of places,” and starting out with the combined plants, which could be more affordable now, could help to make that happen. The team has filed for a patent on the process.

    Although the team did a detailed study of both the technology and the economics showing that the system has great promise, Green points out that “no one has ever built one. We did the analysis, it looks good, but surely when people build the first one, they’ll find funny little things that need some attention,” such as details of how to start up or shut down the process. “I would say there’s plenty of additional work to do to make it a real industry.” But the results of this study, which show the costs to be much more affordable than those of existing blue or green plants in isolation, “definitely encourage the possibility of people making the big investments that would be needed to really make this industry feasible.”

    This proposed integration of the two methods “improves efficiency, reduces greenhouse gas emissions, and lowers overall cost,” says Kevin van Geem, a professor in the Center for Sustainable Chemistry at Ghent University, who was not associated with this research. “The analysis is rigorous, with validated process models, transparent assumptions, and comparisons to literature benchmarks. By combining techno-economic analysis with emissions accounting, the work provides a credible and balanced view of the trade-offs.”

    He adds that, “given the scale of global ammonia production, such a reduction could have a highly impactful effect on decarbonizing one of the most emissions-intensive chemical industries.”

    The research team also included MIT postdoc Angiras Menon and MITEI research lead Guiyan Zang. The work was supported by IHI Japan through the MIT Energy Initiative and the Martin Family Society of Fellows for Sustainability.
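    The oxygen synergy described above can be sized with a back-of-envelope mass balance. The sketch below uses only textbook stoichiometry (N2 + 3H2 → 2NH3 for synthesis, 2H2O → 2H2 + O2 for electrolysis) and idealized assumptions — full conversion, no losses, no excess ratios — so the figures are illustrative, not results from the Energy & Fuels paper.

```python
# Back-of-envelope mass balance for the electrolysis/ATR oxygen synergy.
# Idealized stoichiometry only; real plants differ (purity, losses, excess).

M_H2, M_O2, M_NH3 = 2.016, 32.0, 17.031    # molar masses, g/mol

nh3_t = 1.0                                 # basis: 1 tonne of green ammonia
mol_nh3 = nh3_t * 1e6 / M_NH3               # mol of NH3
mol_h2 = 1.5 * mol_nh3                      # N2 + 3 H2 -> 2 NH3
h2_kg = mol_h2 * M_H2 / 1e3                 # hydrogen required
o2_kg = (mol_h2 / 2) * M_O2 / 1e3           # 2 H2O -> 2 H2 + O2 byproduct

print(f"H2 needed:     {h2_kg:.0f} kg per tonne NH3")   # ~178 kg
print(f"O2 byproduct:  {o2_kg:.0f} kg per tonne NH3")   # ~1409 kg
```

    Roughly 1.4 tonnes of byproduct oxygen per tonne of green ammonia would otherwise be vented, which is why piping it next door to an autothermal reformer that needs pure oxygen has economic value.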

  • Report: Sustainability in supply chains is still a firm-level priority

    Corporations are actively seeking sustainability advances in their supply chains — but many need to improve the business metrics they use in this area to realize more progress, according to a new report by MIT researchers.

    During a time of shifting policies globally and continued economic uncertainty, the survey-based report finds 85 percent of companies say they are continuing supply chain sustainability practices at the same level as in recent years, or are increasing those efforts.

    “What we found is strong evidence that sustainability still matters,” says Josué Velázquez Martínez, a research scientist and director of the MIT Sustainable Supply Chain Lab, which helped produce the report. “There are many things that remain to be done to accomplish those goals, but there’s a strong willingness from companies in all parts of the world to do something about sustainability.”

    The new analysis, titled “Sustainability Still Matters,” was released today. It is the sixth annual report on the subject prepared by the MIT Sustainable Supply Chain Lab, which is part of MIT’s Center for Transportation and Logistics. The Council of Supply Chain Management Professionals collaborated on the project as well.

    The report is based on a global survey, with responses from 1,203 professionals in 97 countries. This year, the report analyzes three issues in depth, including regulations and the role they play in corporate approaches to supply chain management. A second core topic is management and mitigation of what industry professionals call “Scope 3” emissions, which are those not from a firm itself, but from a firm’s supply chain. 
And a third issue of focus is the future of freight transportation, which by itself accounts for a substantial portion of supply chain emissions.

    Broadly, the survey finds that for European-based firms, the principal driver of action in this area remains government mandates, such as the Corporate Sustainability Reporting Directive, which requires companies to publish regular reports on their environmental impact and the risks to society involved. In North America, firm leadership and investor priorities are more likely to be the decisive factors shaping a company’s efforts.

    “In Europe the pressure primarily comes more from regulation, but in the U.S. it comes more from investors, or from competitors,” Velázquez Martínez says.

    The survey responses on Scope 3 emissions reveal a number of opportunities for improvement. In business and sustainability terms, Scope 1 greenhouse gas emissions are those a firm produces directly. Scope 2 emissions are those from the energy it has purchased. And Scope 3 emissions are those produced across a firm’s value chain, including the supply chain activities involved in producing, transporting, using, and disposing of its products.

    The report reveals that about 40 percent of firms keep close track of Scope 1 and 2 emissions, but far fewer tabulate Scope 3 on equivalent terms. And yet Scope 3 may account for roughly 75 percent of total firm emissions, on aggregate. About 70 percent of firms in the survey say they do not have enough data from suppliers to accurately tabulate the total greenhouse gas and climate impact of their supply chains.

    Certainly it can be hard to calculate total emissions when a supply chain has many layers, including smaller suppliers lacking data capacity. But firms can upgrade their analytics in this area, too. For instance, 50 percent of North American firms are still using spreadsheets to tabulate emissions data, often making rough estimates that correlate emissions to simple economic activity. 
An alternative is life cycle assessment software, which provides more sophisticated estimates of a product’s emissions, from the extraction of its materials to its post-use disposal. By contrast, only 32 percent of European firms are still using spreadsheets rather than life cycle assessment tools.

    “You get what you measure,” Velázquez Martínez says. “If you measure poorly, you’re going to get poor decisions that most likely won’t drive the reductions you’re expecting. So we pay a lot of attention to that particular issue, which is decisive to defining an action plan. Firms pay a lot of attention to metrics in their financials, but in sustainability they’re often using simplistic measurements.”

    When it comes to transportation, meanwhile, the report shows that firms are still grappling with the best ways to reduce emissions. Some see biofuels as the best short-term alternative to fossil fuels; others are investing in electric vehicles; some are waiting for hydrogen-powered vehicles to gain traction. Supply chains, after all, frequently involve long-haul trips. For firms, as for individual consumers, electric vehicles become more practical with a larger infrastructure of charging stations. There are advances on that front, but more work to do as well.

    That said, “transportation has made a lot of progress in general,” Velázquez Martínez says, noting the increased acceptance of new modes of vehicle power.

    Even as new technologies loom on the horizon, though, supply chain sustainability does not wholly depend on their introduction. One factor continuing to propel sustainability in supply chains is the incentive companies have to lower costs. In a competitive business environment, spending less on fossil fuels usually means savings. 
And firms can often find ways to alter their logistics to consume and spend less.

    “Along with new technologies, there is another side of supply chain sustainability that is related to better use of the current infrastructure,” Velázquez Martínez observes. “There is always a need to revise traditional ways of operating to find opportunities for more efficiency.”
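    The "rough estimates that correlate emissions to simple economic activity" mentioned in the report are typically spend-based: supplier spend multiplied by a sector-level emission factor. A minimal sketch follows; every category and factor below is an invented placeholder, not a real environmentally extended input-output coefficient, and the comparison to activity data at the end is only illustrative.

```python
# Spend-based Scope 3 estimate: emissions ~= spend x sector emission factor.
# All categories and factors are hypothetical placeholders for illustration.

spend_by_category = {          # USD spent with suppliers, per category
    "freight": 2_000_000,
    "packaging": 500_000,
    "raw_materials": 3_500_000,
}
kg_co2e_per_usd = {            # hypothetical spend-based emission factors
    "freight": 0.85,
    "packaging": 0.45,
    "raw_materials": 0.60,
}

# Tonnes of CO2-equivalent across the (toy) upstream supply chain.
scope3_t = sum(spend_by_category[c] * kg_co2e_per_usd[c]
               for c in spend_by_category) / 1000

# An LCA-style upgrade would replace the factor for any category where real
# activity data exist — e.g., tonne-kilometres for freight instead of dollars.
```

    The weakness the report points to is visible here: the estimate moves with prices, not with physical activity, so paying more for the same freight "increases" emissions on paper.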

  • Simpler models can outperform deep learning at climate prediction

    Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better.

    The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

    Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case.

    The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall.

    They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on the future climate.

    The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science rests on a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.

    “We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. 
While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

    Comparing emulators

    Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers.

    Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions about greenhouse gas emissions would affect future temperatures, helping them develop regulations.

    But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

    The MIT researchers performed such a study. 
They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model, using a common benchmark dataset for evaluating climate emulators.

    Their results showed that LPS outperformed deep-learning models on nearly all the parameters they tested, including temperature and precipitation.

    “Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

    Some initial results seemed to fly in the face of the researchers’ domain knowledge. The powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern.

    They found that the high amount of natural variability in climate model runs can cause the deep-learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.

    Constructing a new evaluation

    From there, the researchers constructed a new evaluation with more data that addresses natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions.

    “It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

    Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios.

    “We are not advocating that LPS should always be the goal. It still has limitations. 
For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation.“With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says.Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.” More
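The core of linear pattern scaling is simple: for each grid cell, fit a linear relationship between the global mean temperature and the local variable, then multiply that fitted pattern by any future global warming level. A minimal sketch on synthetic data (the scaling factors, noise level, and grid size here are made up for illustration; this is not the study's code or benchmark):

```python
import numpy as np

# Toy "climate model" output: a global mean temperature anomaly (K) per year,
# and a 3-cell local temperature field that responds roughly linearly to it.
rng = np.random.default_rng(0)
years = 40
global_T = np.linspace(0.0, 2.0, years)       # 2 K of global warming over the record
true_pattern = np.array([0.8, 1.2, 1.5])      # hypothetical per-cell scaling factors
local_T = np.outer(global_T, true_pattern) + rng.normal(0.0, 0.05, (years, 3))

# Linear pattern scaling: one least-squares slope per grid cell mapping
# global mean warming to the local response.
pattern = np.array([np.polyfit(global_T, local_T[:, i], 1)[0] for i in range(3)])

# Emulation step: predict the local response for a hypothetical 3 K global warming
# without rerunning the (expensive) climate model.
predicted_local = pattern * 3.0
print(np.round(pattern, 2))
```

The speed advantage the article describes comes from this last line: once the per-cell slopes are fit, evaluating a new emissions scenario is a single multiplication rather than weeks of supercomputer time.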

    After more than a decade of successes, ESI’s work will spread out across the Institute

    MIT’s Environmental Solutions Initiative (ESI), a pioneering cross-disciplinary body that helped give a major boost to sustainability and solutions to climate change at MIT, will close as a separate entity at the end of June. But that’s far from the end for its wide-ranging work, which will go forward under different auspices. Many of its key functions will become part of MIT’s recently launched Climate Project. John Fernandez, head of ESI for nearly a decade, will return to the School of Architecture and Planning, where some of ESI’s important work will continue as part of a new interdisciplinary lab.

When the ideas that led to the founding of MIT’s Environmental Solutions Initiative first began to be discussed, its founders recall, there was already a great deal of work happening at MIT relating to climate change and sustainability. As Professor John Sterman of the MIT Sloan School of Management puts it, “there was a lot going on, but it wasn’t integrated. So the whole added up to less than the sum of its parts.”

ESI was founded in 2014 to help fill that coordinating role, and in the years since it has accomplished significant milestones in research, education, and communication about sustainable solutions in a wide range of areas. Its founding director, Professor Susan Solomon, helmed it for its first year, and then handed the leadership to Fernandez, who has led it since 2015.

“There wasn’t much of an ecosystem [on sustainability] back then,” Solomon recalls. But with the help of ESI and some other entities, that ecosystem has blossomed. She says that Fernandez “has nurtured some incredible things under ESI,” including work on nature-based climate solutions, as well as other areas such as sustainable mining and reduction of plastics in the environment.

Desiree Plata, director of MIT’s Climate and Sustainability Consortium and associate professor of civil and environmental engineering, says that one key achievement of the initiative has been “communication with the external world, to help take really complex systems and topics and put them in not just plain-speak, but something that’s scientifically rigorous and defensible, for the outside world to consume.”

In particular, ESI has created three very successful products, which continue under the auspices of the Climate Project. These include the popular TIL Climate Podcast, the Webby Award-winning Climate Portal website, and the online climate primer developed with Professor Kerry Emanuel. “These are some of the most frequented websites at MIT,” Plata says, and “the impact of this work on the global knowledge base cannot be overstated.”

Fernandez says that ESI has played a significant part in helping to catalyze what has become “a rich institutional landscape of work in sustainability and climate change” at MIT. He emphasizes three major areas where he feels the ESI has been able to have the most impact: engaging the MIT community, initiating and stewarding critical environmental research, and catalyzing efforts to promote sustainability as fundamental to the mission of a research university.

Engagement of the MIT community, he says, began with two programs: a research seed grant program and the creation of MIT’s undergraduate minor in environment and sustainability, launched in 2017. ESI also created a Rapid Response Group, which gave students a chance to work on real-world projects with external partners, including government agencies, community groups, nongovernmental organizations, and businesses. In the process, they often learned why dealing with environmental challenges in the real world takes so much longer than they might have thought, he says, and that a challenge that “seemed fairly straightforward at the outset turned out to be more complex and nuanced than expected.”

The second major area, initiating and stewarding environmental research, grew into a set of six specific program areas: natural climate solutions, mining, cities and climate change, plastics and the environment, arts and climate, and climate justice. These efforts included collaborations with a Nobel Peace Prize laureate, three successive presidential administrations from Colombia, and members of communities affected by climate change, including coal miners, indigenous groups, various cities, companies, the U.N., many agencies — and the popular musical group Coldplay, which has pledged to work toward climate neutrality for its performances. “It was the role that the ESI played as a host and steward of these research programs that may serve as a key element of our legacy,” Fernandez says.

The third broad area, he says, “is the idea that the ESI as an entity at MIT would catalyze this movement of a research university toward sustainability as a core priority.” While MIT was founded to be an academic partner to the industrialization of the world, “aren’t we in a different world now? The kind of massive infrastructure planning and investment and construction that needs to happen to decarbonize the energy system is maybe the largest industrialization effort ever undertaken. Even more than in the recent past, the set of priorities driving this have to do with sustainable development.”

Overall, Fernandez says, “we did everything we could to infuse the Institute in its teaching and research activities with the idea that the world is now in dire need of sustainable solutions.”

Solomon adds: “It’s been a very strong and useful program, both for education and research.” But it is appropriate at this time to distribute its projects to other venues, she says. “We do now have a major thrust in the Climate Project, and you don’t want to have redundancies and overlaps between the two.”

Fernandez says “one of the missions of the Climate Project is really acting to coalesce and aggregate lots of work around MIT.” Now, with the Climate Project itself, along with the Climate Policy Center and the Center for Sustainability Science and Strategy, it makes more sense for ESI’s climate-related projects to be integrated into these new entities, and for other projects that are less directly connected to climate to take their places in appropriate departments or labs, he says.

“We did enough with ESI that we made it possible for these other centers to really flourish,” he says. “And in that sense, we played our role.”

As of June 1, Fernandez has returned to his role as professor of architecture and urbanism and building technology in the School of Architecture and Planning, where he directs the Urban Metabolism Group. He will also be starting a new group called Environment ResearchAction (ERA) to continue ESI work in cities, nature, and artificial intelligence.

    Study helps pinpoint areas where microplastics will accumulate

    The accumulation of microplastics in the environment, and within our bodies, is an increasingly worrisome issue. But predicting where these ubiquitous particles will accumulate, and therefore where remediation efforts should be focused, has been difficult because of the many factors that contribute to their dispersal and deposition.

New research from MIT shows that one key factor in determining where microparticles are likely to build up has to do with the presence of biofilms. These thin, sticky biopolymer layers are shed by microorganisms and can accumulate on surfaces, including along sandy riverbeds or seashores. The study found that, all other conditions being equal, microparticles are less likely to accumulate in sediment infused with biofilms, because if they land there, they are more likely to be resuspended by flowing water and carried away.

The open-access findings appear in the journal Geophysical Research Letters, in a paper by MIT postdoc Hyoungchul Park and professor of civil and environmental engineering Heidi Nepf. “Microplastics are definitely in the news a lot,” Nepf says, “and we don’t fully understand where the hotspots of accumulation are likely to be. This work gives a little bit of guidance” on some of the factors that can cause these particles, and small particles in general, to accumulate in certain locations.

Most experiments looking at the ways microparticles are transported and deposited have been conducted over bare sand, Park says. “But in nature, there are a lot of microorganisms, such as bacteria, fungi, and algae, and when they adhere to the stream bed they generate some sticky things.” These substances are known as extracellular polymeric substances, or EPS, and they “can significantly affect the channel bed characteristics,” he says. The new research focused on determining exactly how these substances affect the transport of microparticles, including microplastics.

The research involved a flow tank with a bottom lined with fine sand, and sometimes with vertical plastic tubes simulating the presence of mangrove roots. In some experiments the bed consisted of pure sand, and in others the sand was mixed with a biological material to simulate the natural biofilms found in many riverbed and seashore environments. Water mixed with tiny plastic particles was pumped through the tank for three hours, and then the bed surface was photographed under ultraviolet light that caused the plastic particles to fluoresce, allowing a quantitative measurement of their concentration.

The results revealed two different phenomena that affected how much of the plastic accumulated on the different surfaces. Immediately around the rods that stood in for above-ground roots, turbulence prevented particle deposition. In addition, as the amount of simulated biofilm in the sediment bed increased, the accumulation of particles decreased.

Nepf and Park concluded that the biofilms filled up the spaces between the sand grains, leaving less room for the microparticles to fit in. The particles were more exposed because they penetrated less deeply in between the sand grains, and as a result they were much more easily resuspended and carried away by the flowing water.

“These biological films fill the pore spaces between the sediment grains,” Park explains, “and that makes the deposited particles — the particles that land on the bed — more exposed to the forces generated by the flow, which makes it easier for them to be resuspended. What we found was that in a channel with the same flow conditions and the same vegetation and the same sand bed, if one is without EPS and one is with EPS, then the one without EPS has a much higher deposition rate than the one with EPS.”

Nepf adds: “The biofilm is blocking the plastics from accumulating in the bed because they can’t go deep into the bed. They just stay right on the surface, and then they get picked up and moved elsewhere. So, if I spilled a large amount of microplastic in two rivers, and one had a sandy or gravel bottom, and one was muddier with more biofilm, I would expect more of the microplastics to be retained in the sandy or gravelly river.”

All of this is complicated by other factors, such as the turbulence of the water or the roughness of the bottom surface, she says. But it provides a “nice lens” for people who are trying to study the impacts of microplastics in the field. “They’re trying to determine what kinds of habitats these plastics are in, and this gives a framework for how you might categorize those habitats,” she says. “It gives guidance to where you should go to find more plastics versus less.”

As an example, Park suggests, in mangrove ecosystems microplastics may preferentially accumulate in the outer edges, which tend to be sandy, while the interior zones have sediment with more biofilm. Thus, this work suggests “the sandy outer regions may be potential hotspots for microplastic accumulation,” he says, which can make them a priority zone for monitoring and protection.

“This is a highly relevant finding,” says Isabella Schalko, a research scientist at ETH Zurich, who was not associated with this research. “It suggests that restoration measures such as re-vegetation or promoting biofilm growth could help mitigate microplastic accumulation in aquatic systems. It highlights the powerful role of biological and physical features in shaping particle transport processes.”

The work was supported by Shell International Exploration and Production through the MIT Energy Initiative.

    Study: Climate change may make it harder to reduce smog in some regions

    Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.

The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.

The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits. However, the study also shows that the opposite would be true in northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future.

The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere. By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.

“Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.

Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science and Technology.

Controlling ozone

Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants. Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight.

“That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.

Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes. But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxides and volatile organic compounds in nonlinear ways. Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead.

“Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.

To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology. They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate. Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.

The researchers focused on eastern North America, Western Europe, and northeast China, since those regions have historically high levels of the precursor chemicals that form ozone and well-established monitoring networks to provide data. They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095. They compared these to a historical scenario covering 2000 to 2015 to see the effects of a 10 percent reduction in nitrogen oxide emissions.

Capturing climate variability

“The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says.

The researchers could overcome that challenge thanks to recent advances in atmospheric chemistry modeling, and by taking advantage of parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.

The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature. Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone.

“This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.

On the other hand, since industrial processes in northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios. “But I wouldn’t say that is a good thing because it means that, overall, there are higher levels of ozone,” Le Roy adds.

Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health. “Average climate isn’t the only thing that matters. One high ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.

In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning.

“We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.

This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.
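The ensemble logic behind the five 16-year realizations can be sketched with synthetic data: averaging several independent realizations cancels much of the internal variability (here, an El Niño-like oscillation with different phases) while the forced signal remains. The trend, oscillation amplitude, and member count below are illustrative assumptions, not the study's values or code:

```python
import numpy as np

n_realizations, n_years = 5, 16
t = np.arange(n_years)
forced_trend = 0.04 * t  # hypothetical forced warming signal (K)

# Each realization adds an oscillation with a different phase, standing in for
# internal variability such as El Nino/La Nina, plus small weather noise.
rng = np.random.default_rng(0)
phases = 2 * np.pi * np.arange(n_realizations) / n_realizations
ensemble = np.array([
    forced_trend
    + 0.3 * np.sin(2 * np.pi * t / 4 + phase)  # oscillation with ~4-year period
    + rng.normal(0.0, 0.05, n_years)           # weather noise
    for phase in phases
])

# A single run is contaminated by the oscillation; the ensemble mean cancels
# the evenly spaced phases and averages down the noise, isolating the trend.
single_error = np.abs(ensemble[0] - forced_trend).mean()
ensemble_error = np.abs(ensemble.mean(axis=0) - forced_trend).mean()
print(f"single run: {single_error:.3f} K  ensemble mean: {ensemble_error:.3f} K")
```

This is why 80 model years across five realizations give a cleaner estimate of the climate-change signal than one long run of the same cost: the out-of-phase oscillations largely cancel in the ensemble mean.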