More stories

  •

    Optimizing food subsidies: Applying digital platforms to maximize nutrition

    Oct. 16 is World Food Day, a global campaign to celebrate the founding of the Food and Agriculture Organization 80 years ago, and to work toward a healthy, sustainable, food-secure future. More than 670 million people in the world are facing hunger. Millions more face rising obesity rates and struggle to access the healthy food needed for proper nutrition. World Food Day calls on not only world governments, but business, academia, the media, and youth to take action to promote resilient food systems and combat hunger. This year, the Abdul Latif Jameel Water and Food Systems Laboratory (J-WAFS) is spotlighting an MIT researcher who is working toward this goal by studying food and water systems in the Global South.

    J-WAFS seed grants provide funding for early-stage research projects that break new ground relative to prior work. In the program's 11th round of seed grant funding in 2025, 10 MIT faculty members received support to carry out their cutting-edge water and food research. Ali Aouad PhD ’17, assistant professor of operations management at the MIT Sloan School of Management, was one of those grantees. “I had searched before joining MIT what kind of research centers and initiatives were available that tried to coalesce research on food systems,” Aouad says. “And so, I was very excited about J-WAFS.” Aouad gathered more information about J-WAFS at the new faculty orientation session in August 2024, where he spoke to J-WAFS staff and learned about the program’s grant opportunities for water and food research. Later that fall semester, he attended a few J-WAFS seminars on agricultural economics and water resource management. That’s when Aouad knew that his project was perfectly aligned with the J-WAFS mission of securing humankind’s water and food.

    Aouad’s seed project focuses on food subsidies. With a background in operations research and an interest in digital platforms, much of his work has centered on aligning supply-side operations with heterogeneous customer preferences. Past projects have examined retail and matching systems. “I started thinking that these types of demand-driven approaches may be also very relevant to important social challenges, particularly as they relate to food security,” Aouad says. Before starting his PhD at MIT, Aouad worked on projects that looked at subsidies for smallholder farmers in low- and middle-income countries. “I think in the back of my mind, I’ve always been fascinated by trying to solve these issues,” he notes.

    His seed grant project, “Optimal subsidy design: Application to food assistance programs,” aims to leverage data on preferences and purchasing habits from local grocery stores in India to inform food assistance policy and optimize the design of subsidies. Typical data collection systems, like point-of-sale terminals, are not readily available in India’s local groceries, making this kind of data on low-income shoppers hard to come by. “Mom-and-pop stores are extremely important last-mile operators when it comes to nutrition,” he explains. For this project, the research team gave local grocers point-of-sale scanners to track purchasing habits. “We aim to develop an algorithm that converts these transactions into some sort of ‘revelation’ of the individuals’ latent preferences,” says Aouad. “As such, we can model and optimize the food assistance programs — how much variety and flexibility is offered, taking into account the expected demand uptake.” He continues, “Now, of course, our ability to answer detailed design questions [across various products and prices] depends on the quality of our inference from the data, and so this is where we need more sophisticated and robust algorithms.”

    Following the data collection and model development, the ultimate goal of this research is to inform policy surrounding food assistance programs through an “optimization approach.” Aouad describes the complexities of using optimization to guide policy: “Policies are often informed by domain expertise, legacy systems, or political deliberation. A lot of researchers build rigorous evidence to inform food policy, but it’s fair to say that the kind of approach that I’m proposing in this research is not something that is commonly used. I see an opportunity for bringing a new approach and methodological tradition to a problem that has been central for policy for many decades.”

    The overall health of consumers is the reason food assistance programs exist, yet measuring long-term nutritional impacts and shifts in purchasing behavior is difficult. Aouad notes that in past research, the short-term effects of food assistance interventions can be significant; however, these effects are often short-lived. “This is a fascinating question that I don’t think we will be able to address within the space of interventions that we will be considering. However, I think it is something I would like to capture in the research, and maybe develop hypotheses for future work around how we can shift nutrition-related behaviors in the long run.”

    While his project develops a new methodology to calibrate food assistance programs, large-scale applications are not guaranteed. “A lot of what drives subsidy mechanisms and food assistance programs is also, quite frankly, how easy it is and how cost-effective it is to implement these policies in the first place,” comments Aouad. Cost and infrastructure barriers are unavoidable in this kind of policy research, as is the challenge of sustaining such programs. Aouad’s effort will provide insights into customer preferences and subsidy optimization in a pilot setup, but replicating this approach at scale may be costly. Aouad hopes to gather proxy information from customers that would both feed into the model and point toward more cost-effective ways to collect data for large-scale implementation.

    There is still much work to be done to ensure food security for all, whether through advances in agriculture, food assistance programs, or ways to boost adequate nutrition. As the 2026 seed grant deadline approaches, J-WAFS continues its mission of supporting MIT faculty as they pursue innovative projects with practical, real-world impacts on water and food system challenges.
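    The inference-then-optimization loop Aouad describes — recovering latent preferences from point-of-sale transactions, then evaluating subsidy designs against predicted uptake — can be sketched with a simple multinomial logit demand model. All items, prices, counts, and the price-sensitivity value below are illustrative assumptions, not data from the project:

```python
import numpy as np

# Hypothetical sketch: infer latent preferences from point-of-sale
# transaction counts via a multinomial logit model, then compare
# subsidy designs by expected nutrition uptake. All values invented.

items = ["lentils", "rice", "sweets"]
prices = np.array([30.0, 40.0, 25.0])    # price per unit (illustrative)
nutrition = np.array([1.0, 0.6, 0.1])    # nutrition score per unit
counts = np.array([120, 300, 380])       # observed purchases per item

# Under a logit model with utility u_i = a_i - b * p_i, observed market
# shares reveal the latent preference a_i up to an additive constant:
b = 0.05                                 # assumed price sensitivity
shares = counts / counts.sum()
a = np.log(shares) + b * prices          # recovered latent preferences

def expected_nutrition(subsidy):
    """Expected nutrition per shopper when `subsidy` is deducted per item."""
    u = a - b * (prices - subsidy)
    s = np.exp(u) / np.exp(u).sum()      # logit choice probabilities
    return float(s @ nutrition)

# Compare two designs: subsidize only the nutritious staple vs. a flat
# discount on everything (which leaves logit shares unchanged).
targeted = expected_nutrition(np.array([10.0, 0.0, 0.0]))
flat = expected_nutrition(np.array([4.0, 4.0, 4.0]))
print(f"targeted: {targeted:.3f}, flat: {flat:.3f}")
```

    One design insight even this toy model surfaces: a flat discount shifts no demand between items under a logit model, while a targeted subsidy does — which is why estimating item-level preferences matters for subsidy design.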

  •

    Report: Sustainability in supply chains is still a firm-level priority

    Corporations are actively seeking sustainability advances in their supply chains — but many need to improve the business metrics they use in this area to realize more progress, according to a new report by MIT researchers. During a time of shifting policies globally and continued economic uncertainty, the survey-based report finds that 85 percent of companies say they are continuing supply chain sustainability practices at the same level as in recent years, or are increasing those efforts.

    “What we found is strong evidence that sustainability still matters,” says Josué Velázquez Martínez, a research scientist and director of the MIT Sustainable Supply Chain Lab, which helped produce the report. “There are many things that remain to be done to accomplish those goals, but there’s a strong willingness from companies in all parts of the world to do something about sustainability.”

    The new analysis, titled “Sustainability Still Matters,” was released today. It is the sixth annual report on the subject prepared by the MIT Sustainable Supply Chain Lab, which is part of MIT’s Center for Transportation and Logistics. The Council of Supply Chain Management Professionals collaborated on the project as well.

    The report is based on a global survey, with responses from 1,203 professionals in 97 countries. This year, the report analyzes three issues in depth. The first is regulations and the role they play in corporate approaches to supply chain management. A second core topic is the management and mitigation of what industry professionals call “Scope 3” emissions — those produced not by a firm itself, but by its supply chain. And a third issue of focus is the future of freight transportation, which by itself accounts for a substantial portion of supply chain emissions.

    Broadly, the survey finds that for European-based firms, the principal driver of action in this area remains government mandates, such as the Corporate Sustainability Reporting Directive, which requires companies to publish regular reports on their environmental impact and the risks to society involved. In North America, firm leadership and investor priorities are more likely to be the decisive factors in shaping a company’s efforts. “In Europe the pressure primarily comes more from regulation, but in the U.S. it comes more from investors, or from competitors,” Velázquez Martínez says.

    The survey responses on Scope 3 emissions reveal a number of opportunities for improvement. In business and sustainability terms, Scope 1 greenhouse gas emissions are those a firm produces directly. Scope 2 emissions are those from the energy it has purchased. And Scope 3 emissions are those produced across a firm’s value chain, including the supply chain activities involved in producing, transporting, using, and disposing of its products.

    The report reveals that about 40 percent of firms keep close track of Scope 1 and 2 emissions, but far fewer tabulate Scope 3 on equivalent terms. And yet Scope 3 may account for roughly 75 percent of total firm emissions, on aggregate. About 70 percent of firms in the survey say they do not have enough data from suppliers to accurately tabulate the total greenhouse gas and climate impact of their supply chains.

    Certainly it can be hard to calculate total emissions when a supply chain has many layers, including smaller suppliers lacking data capacity. But firms can upgrade their analytics in this area, too. For instance, 50 percent of North American firms are still using spreadsheets to tabulate emissions data, often making rough estimates that correlate emissions with simple economic activity. An alternative is life cycle assessment software, which provides more sophisticated estimates of a product’s emissions, from the extraction of its materials to its post-use disposal. By contrast, only 32 percent of European firms are still using spreadsheets rather than life cycle assessment tools.

    “You get what you measure,” Velázquez Martínez says. “If you measure poorly, you’re going to get poor decisions that most likely won’t drive the reductions you’re expecting. So we pay a lot of attention to that particular issue, which is decisive to defining an action plan. Firms pay a lot of attention to metrics in their financials, but in sustainability they’re often using simplistic measurements.”

    When it comes to transportation, meanwhile, the report shows that firms are still grappling with the best ways to reduce emissions. Some see biofuels as the best short-term alternative to fossil fuels; others are investing in electric vehicles; some are waiting for hydrogen-powered vehicles to gain traction. Supply chains, after all, frequently involve long-haul trips. For firms, as for individual consumers, electric vehicles become more practical as the infrastructure of charging stations grows. There are advances on that front, but more work to do as well. That said, “transportation has made a lot of progress in general,” Velázquez Martínez says, noting the increased acceptance of new modes of vehicle power.

    Even as new technologies loom on the horizon, though, supply chain sustainability is not wholly dependent on their introduction. One factor continuing to propel sustainability in supply chains is the incentive companies have to lower costs. In a competitive business environment, spending less on fossil fuels usually means savings, and firms can often find ways to alter their logistics to consume and spend less.

    “Along with new technologies, there is another side of supply chain sustainability that is related to better use of the current infrastructure,” Velázquez Martínez observes. “There is always a need to revise traditional ways of operating to find opportunities for more efficiency.”
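    The Scope 1/2/3 accounting the report describes reduces to grouping an emissions ledger by scope and checking how much of the footprint sits in the supply chain. The ledger entries and tonnage figures below are invented for illustration, not drawn from the report:

```python
from collections import defaultdict

# Hypothetical sketch of scope-based emissions accounting. Every entry
# (source names and tCO2e figures) is illustrative, not a real factor.
ledger = [
    # (scope, source, metric tons CO2-equivalent)
    (1, "company fleet diesel", 1200.0),        # direct emissions
    (2, "purchased electricity", 800.0),        # purchased energy
    (3, "purchased goods (supplier A)", 3100.0),  # value chain
    (3, "upstream freight", 1900.0),
    (3, "product use and disposal", 1400.0),
]

totals = defaultdict(float)
for scope, _, tco2e in ledger:
    totals[scope] += tco2e

grand_total = sum(totals.values())
scope3_share = totals[3] / grand_total
print(f"Scope 3 share of footprint: {scope3_share:.0%}")
```

    Even with made-up numbers, the structure mirrors the report's finding: once upstream and downstream activities are tallied, Scope 3 typically dominates the total, which is why tracking only Scopes 1 and 2 understates a firm's footprint.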

  •

    Climate Action Learning Lab helps state and local leaders identify and implement effective climate mitigation strategies

    This spring, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — launched its first-ever Learning Lab, centered on climate action. The Learning Lab convened a cohort of government leaders who are enacting a broad range of policies and programs to support the transition to a low-carbon economy. Through the Learning Lab, participants explored how to embed randomized evaluation into promising solutions to determine how to maximize changes in behavior — a strategy that can help advance decarbonization in the most cost-effective ways and benefit all communities. The inaugural cohort included more than 25 participants from state agencies and cities, including the Massachusetts Clean Energy Center, the Minnesota Housing Finance Agency, and the cities of Lincoln, Nebraska; Newport News, Virginia; Orlando, Florida; and Philadelphia.

    “State and local governments have demonstrated tremendous leadership in designing and implementing decarbonization policies and climate action plans over the past few years,” said Peter Christensen, scientific advisor of the J-PAL North America Environment, Energy, and Climate Change Sector. “And while these are informed by scientific projections on which programs and technologies may effectively and equitably reduce emissions, the projection methods involve a lot of assumptions. It can be challenging for governments to determine whether their programs are actually achieving the expected level of emissions reductions that we desperately need. The Climate Action Learning Lab was designed to support state and local governments in addressing this need — helping them to rigorously evaluate their programs to detect their true impact.”

    From May to July, the Learning Lab offered a suite of resources for participants to leverage rigorous evaluation to identify effective and equitable climate mitigation solutions. Offerings included training lectures, one-on-one strategy sessions, peer learning engagements, and researcher collaboration. State and local leaders built skills and knowledge in evidence generation and use, reviewed and applied research insights to their own programmatic areas, and identified priority research questions to guide evidence-building and decision-making practices. Programs prioritized for evaluation covered topics such as compliance with building energy benchmarking policies, take-up rates of energy-efficient home improvement programs such as heat pumps and Solar for All, and scoring criteria for affordable housing development programs.

    “We appreciated the chance to learn about randomized evaluation methodology, and how this impact assessment tool could be utilized in our ongoing climate action planning. With so many potential initiatives to pursue, this approach will help us prioritize our time and resources on the most effective solutions,” said Anna Shugoll, program manager at the City of Philadelphia’s Office of Sustainability.

    This phase of the Learning Lab was made possible by grant funding from J-PAL North America’s longtime supporter and collaborator Arnold Ventures. The work culminated in an in-person summit in Cambridge, Massachusetts, on July 23, where Learning Lab participants delivered presentations on their jurisdictions’ priority research questions and strategic evaluation plans. They also connected with researchers in the J-PAL network to further explore impact evaluation opportunities for promising decarbonization programs.

    “The Climate Action Learning Lab has helped us identify research questions for some of the City of Orlando’s deep decarbonization goals. J-PAL staff, along with researchers in the J-PAL network, worked hard to bridge the gap between behavior change theory and the applied, tangible benefits that we achieve through rigorous evaluation of our programs,” said Brittany Sellers, assistant director for sustainability, resilience and future-ready for Orlando. “Whether we’re discussing an energy-efficiency policy for some of the biggest buildings in the City of Orlando or expanding [electric vehicle] adoption across the city, it’s been very easy to communicate some of these high-level research concepts and what they can help us do to actually pursue our decarbonization goals.”

    The next phase of the Climate Action Learning Lab will center on building partnerships between jurisdictions and researchers in the J-PAL network to explore the launch of randomized evaluations, deepening the community of practice among current cohort members, and cultivating a broad culture of evidence building and use in the climate space.

    “The Climate Action Learning Lab provided a critical space for our city to collaborate with other cities and states seeking to implement similar decarbonization programs, as well as with researchers in the J-PAL network to help rigorously evaluate these programs,” said Daniel Collins, innovation team director at the City of Newport News. “We look forward to further collaboration and opportunities to learn from evaluations of our mitigation efforts so we, as a city, can better allocate resources to the most effective solutions.”

    The Climate Action Learning Lab is one of several offerings under the J-PAL North America Evidence for Climate Action Project, whose goal is to convene an influential network of researchers, policymakers, and practitioners to generate rigorous evidence that identifies and advances equitable, high-impact policy solutions to climate change in the United States. In addition to the Learning Lab, J-PAL North America will launch a climate special topic request for proposals this fall to fund research on climate mitigation and adaptation initiatives. J-PAL will welcome applications both from research partnerships formed through the Learning Lab and from other eligible applicants.

    Local government leaders, researchers, potential partners, or funders committed to advancing climate solutions that work, and who want to learn more about the Evidence for Climate Action Project, may email na_eecc@povertyactionlab.org or subscribe to the J-PAL North America Climate Action newsletter.
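    The core logic of the randomized evaluations the Learning Lab teaches is a comparison of randomly assigned treatment and control groups. The sketch below simulates a hypothetical heat-pump outreach program (the program, effect size, and sample size are all invented) and estimates its impact on take-up as a difference in means:

```python
import random
import statistics

# Hypothetical randomized-evaluation sketch: households are randomly
# assigned to an outreach program; impact is the difference in take-up
# rates between groups. All data here are simulated.
random.seed(0)
n = 2000
treated = [random.random() < 0.5 for _ in range(n)]     # coin-flip assignment
# Simulated take-up: 8% baseline, +6 percentage points if treated.
take_up = [random.random() < (0.14 if t else 0.08) for t in treated]

t_group = [y for y, t in zip(take_up, treated) if t]
c_group = [y for y, t in zip(take_up, treated) if not t]
p_t, p_c = statistics.mean(t_group), statistics.mean(c_group)
effect = p_t - p_c
# Standard error for a difference in proportions.
se = (p_t * (1 - p_t) / len(t_group) + p_c * (1 - p_c) / len(c_group)) ** 0.5
print(f"estimated effect: {effect:.3f} (SE {se:.3f})")
```

    Because assignment is random, the difference in means is an unbiased estimate of the program's causal effect — the property that lets a jurisdiction check whether a program delivers the behavior change its projections assumed.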

  •

    Simpler models can outperform deep learning at climate prediction

    Environmental scientists are increasingly using enormous artificial intelligence models to make predictions about changes in weather and climate, but a new study by MIT researchers shows that bigger models are not always better. The team demonstrates that, in certain climate scenarios, much simpler, physics-based models can generate more accurate predictions than state-of-the-art deep-learning models.

    Their analysis also reveals that a benchmarking technique commonly used to evaluate machine-learning techniques for climate predictions can be distorted by natural variations in the data, like fluctuations in weather patterns. This could lead someone to believe a deep-learning model makes more accurate predictions when that is not the case. The researchers developed a more robust way of evaluating these techniques, which shows that, while simple models are more accurate when estimating regional surface temperatures, deep-learning approaches can be the best choice for estimating local rainfall. They used these results to enhance a simulation tool known as a climate emulator, which can rapidly simulate the effect of human activities on the future climate.

    The researchers see their work as a “cautionary tale” about the risk of deploying large AI models for climate science. While deep-learning models have shown incredible success in domains such as natural language, climate science contains a proven set of physical laws and approximations, and the challenge becomes how to incorporate those into AI models.

    “We are trying to develop models that are going to be useful and relevant for the kinds of things that decision-makers need going forward when making climate policy choices. While it might be attractive to use the latest, big-picture machine-learning model on a climate problem, what this study shows is that stepping back and really thinking about the problem fundamentals is important and useful,” says study senior author Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Selin’s co-authors are lead author Björn Lütjens, a former EAPS postdoc who is now a research scientist at IBM Research; senior author Raffaele Ferrari, the Cecil and Ida Green Professor of Oceanography in EAPS and co-director of the Lorenz Center; and Duncan Watson-Parris, assistant professor at the University of California at San Diego. Selin and Ferrari are also co-principal investigators of the Bringing Computation to the Climate Challenge project, out of which this research emerged. The paper appears today in the Journal of Advances in Modeling Earth Systems.

    Comparing emulators

    Because the Earth’s climate is so complex, running a state-of-the-art climate model to predict how pollution levels will impact environmental factors like temperature can take weeks on the world’s most powerful supercomputers. Scientists often create climate emulators, simpler approximations of a state-of-the-art climate model, which are faster and more accessible. A policymaker could use a climate emulator to see how alternative assumptions about greenhouse gas emissions would affect future temperatures, helping them develop regulations. But an emulator isn’t very useful if it makes inaccurate predictions about the local impacts of climate change. While deep learning has become increasingly popular for emulation, few studies have explored whether these models perform better than tried-and-true approaches.

    The MIT researchers performed such a study. They compared a traditional technique called linear pattern scaling (LPS) with a deep-learning model, using a common benchmark dataset for evaluating climate emulators. Their results showed that LPS outperformed deep-learning models on predicting nearly all parameters they tested, including temperature and precipitation. “Large AI methods are very appealing to scientists, but they rarely solve a completely new problem, so implementing an existing solution first is necessary to find out whether the complex machine-learning approach actually improves upon it,” says Lütjens.

    Some initial results seemed to fly in the face of the researchers’ domain knowledge: the powerful deep-learning model should have been more accurate when making predictions about precipitation, since those data don’t follow a linear pattern. They found that the high amount of natural variability in climate model runs can cause the deep-learning model to perform poorly on unpredictable long-term oscillations, like El Niño/La Niña. This skews the benchmarking scores in favor of LPS, which averages out those oscillations.

    Constructing a new evaluation

    From there, the researchers constructed a new evaluation with more data that account for natural climate variability. With this new evaluation, the deep-learning model performed slightly better than LPS for local precipitation, but LPS was still more accurate for temperature predictions. “It is important to use the modeling tool that is right for the problem, but in order to do that you also have to set up the problem the right way in the first place,” Selin says.

    Based on these results, the researchers incorporated LPS into a climate emulation platform to predict local temperature changes in different emission scenarios. “We are not advocating that LPS should always be the goal. It still has limitations. For instance, LPS doesn’t predict variability or extreme weather events,” Ferrari adds.

    Rather, they hope their results emphasize the need to develop better benchmarking techniques, which could provide a fuller picture of which climate emulation technique is best suited for a particular situation. “With an improved climate emulation benchmark, we could use more complex machine-learning methods to explore problems that are currently very hard to address, like the impacts of aerosols or estimations of extreme precipitation,” Lütjens says. Ultimately, more accurate benchmarking techniques will help ensure policymakers are making decisions based on the best available information.

    The researchers hope others build on their analysis, perhaps by studying additional improvements to climate emulation methods and benchmarks. Such research could explore impact-oriented metrics like drought indicators and wildfire risks, or new variables like regional wind speeds.

    This research is funded, in part, by Schmidt Sciences, LLC, and is part of the MIT Climate Grand Challenges team for “Bringing Computation to the Climate Challenge.”
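    Linear pattern scaling, the simple baseline at the center of the study, models each grid cell's local change as a linear function of global-mean temperature. The sketch below illustrates the idea on synthetic stand-in data (the grid size, warming range, and noise level are all invented), fitting one slope per cell and then "emulating" a warmer scenario:

```python
import numpy as np

# Minimal linear pattern scaling (LPS) sketch on synthetic data:
# local change ≈ slope(cell) * global-mean warming + intercept(cell).
rng = np.random.default_rng(0)
n_years, n_lat, n_lon = 100, 4, 8

global_T = np.linspace(0.0, 2.0, n_years)        # global-mean warming (K)
pattern = rng.uniform(0.5, 2.0, (n_lat, n_lon))  # "true" local scaling
# Synthetic local temperatures = scaled warming + internal variability.
local_T = (global_T[:, None, None] * pattern
           + 0.1 * rng.standard_normal((n_years, n_lat, n_lon)))

# Fit one slope and intercept per grid cell by least squares.
X = np.column_stack([global_T, np.ones(n_years)])
coef, *_ = np.linalg.lstsq(X, local_T.reshape(n_years, -1), rcond=None)
slopes = coef[0].reshape(n_lat, n_lon)
intercepts = coef[1].reshape(n_lat, n_lon)

# Emulate: local change under a hypothetical 3 K global-mean warming.
projection = slopes * 3.0 + intercepts
print("max slope error:", float(np.abs(slopes - pattern).max()))
```

    Averaging over many years is what lets the regression see through internal variability like El Niño/La Niña — the same averaging that, as the article notes, can flatter LPS in benchmarks while leaving it unable to predict variability or extremes.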

  •

    Merging AI and underwater photography to reveal hidden ocean worlds

    In the Northeastern United States, the Gulf of Maine represents one of the most biologically diverse marine ecosystems on the planet — home to whales, sharks, jellyfish, herring, plankton, and hundreds of other species. But even as this ecosystem supports rich biodiversity, it is undergoing rapid environmental change. The Gulf of Maine is warming faster than 99 percent of the world’s oceans, with consequences that are still unfolding.

    A new research initiative developing at MIT Sea Grant, called LOBSTgER — short for Learning Oceanic Bioecological Systems Through Generative Representations — brings together artificial intelligence and underwater photography to document the ocean life left vulnerable to these changes and share it with the public in new visual ways. Co-led by underwater photographer and visiting artist at MIT Sea Grant Keith Ellenbogen and MIT mechanical engineering PhD student Andreas Mentzelopoulos, the project explores how generative AI can expand scientific storytelling by building on field-based photographic data.

    Just as the 19th-century camera transformed our ability to document and reveal the natural world — capturing life with unprecedented detail and bringing distant or hidden environments into view — generative AI marks a new frontier in visual storytelling. Like early photography, AI opens a creative and conceptual space, challenging how we define authenticity and how we communicate scientific and artistic perspectives. In the LOBSTgER project, generative models are trained exclusively on a curated library of Ellenbogen’s original underwater photographs — each image crafted with artistic intent, technical precision, accurate species identification, and clear geographic context. By building a high-quality dataset grounded in real-world observations, the project ensures that the resulting imagery maintains both visual integrity and ecological relevance. In addition, LOBSTgER’s models are built using custom code developed by Mentzelopoulos to protect the process and outputs from any potential biases introduced by external data or models. LOBSTgER’s generative AI builds upon real photography, expanding the researchers’ visual vocabulary to deepen the public’s connection to the natural world.

    This ocean sunfish (Mola mola) image was generated by LOBSTgER’s unconditional models.

    AI-generated image: Keith Ellenbogen, Andreas Mentzelopoulos, and LOBSTgER.


    At its heart, LOBSTgER operates at the intersection of art, science, and technology. The project draws from the visual language of photography, the observational rigor of marine science, and the computational power of generative AI. By uniting these disciplines, the team is not only developing new ways to visualize ocean life — they are also reimagining how environmental stories can be told. This integrative approach makes LOBSTgER both a research tool and a creative experiment — one that reflects MIT’s long-standing tradition of interdisciplinary innovation.

    Underwater photography in New England’s coastal waters is notoriously difficult. Limited visibility, swirling sediment, bubbles, and the unpredictable movement of marine life all pose constant challenges. For the past several years, Ellenbogen has navigated these challenges and is building a comprehensive record of the region’s biodiversity through the project, Space to Sea: Visualizing New England’s Ocean Wilderness. This large dataset of underwater images provides the foundation for training LOBSTgER’s generative AI models. The images span diverse angles, lighting conditions, and animal behaviors, resulting in a visual archive that is both artistically striking and biologically accurate.

    Image synthesis via reverse diffusion: This short video shows the de-noising trajectory from Gaussian latent noise to photorealistic output using LOBSTgER’s unconditional models. Iterative de-noising requires 1,000 forward passes through the trained neural network. Video: Keith Ellenbogen and Andreas Mentzelopoulos / MIT Sea Grant

    LOBSTgER’s custom diffusion models are trained to replicate not only the biodiversity Ellenbogen documents, but also the artistic style he uses to capture it. By learning from thousands of real underwater images, the models internalize fine-grained details such as natural lighting gradients, species-specific coloration, and even the atmospheric texture created by suspended particles and refracted sunlight. The result is imagery that not only appears visually accurate, but also feels immersive and moving.

    The models can both generate new, synthetic, but scientifically accurate images unconditionally (i.e., requiring no user input or guidance) and enhance real photographs conditionally (i.e., image-to-image generation). By integrating AI into the photographic workflow, Ellenbogen will be able to use these tools to recover detail in turbid water, adjust lighting to emphasize key subjects, or even simulate scenes that would be nearly impossible to capture in the field. The team also believes this approach may benefit other underwater photographers and image editors facing similar challenges. This hybrid method is designed to accelerate the curation process and enable storytellers to construct a more complete and coherent visual narrative of life beneath the surface.
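    The iterative de-noising behind unconditional generation — 1,000 forward passes from Gaussian noise toward an image — follows the standard reverse-diffusion recipe. The sketch below shows that loop in miniature; the noise schedule is a common textbook choice and the noise-prediction "network" is a zero-returning placeholder, since LOBSTgER's actual latent diffusion models are custom and unpublished here:

```python
import numpy as np

# Reverse-diffusion (DDPM-style) sampling sketch. The trained U-Net is
# replaced by a placeholder, so the output is not a real image; the loop
# structure is what this illustrates.
rng = np.random.default_rng(0)
T = 1000                                # de-noising steps, as in the video
betas = np.linspace(1e-4, 0.02, T)      # common linear noise schedule
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)

def predict_noise(x, t):
    """Placeholder for the trained noise-prediction network."""
    return np.zeros_like(x)

x = rng.standard_normal((8, 8))         # start from pure Gaussian "latent"
for t in reversed(range(T)):
    eps = predict_noise(x, t)
    # Posterior-mean update: remove the predicted noise for this step.
    x = (x - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps) / np.sqrt(alphas[t])
    if t > 0:                           # re-inject noise except at the last step
        x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
print("final latent shape:", x.shape)
```

    Each of the 1,000 iterations is one forward pass through the network, which is why sampling is far more expensive than a single classifier call — and why the caption above highlights the step count.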

    Left: Enhanced image of an American lobster using LOBSTgER’s image-to-image models. Right: Original image.

    Left: AI-generated image by Keith Ellenbogen, Andreas Mentzelopoulos, and LOBSTgER. Right: Keith Ellenbogen.


    In one key series, Ellenbogen captured high-resolution images of lion’s mane jellyfish, blue sharks, American lobsters, and ocean sunfish (Mola mola) while free diving in coastal waters. “Getting a high-quality dataset is not easy,” Ellenbogen says. “It requires multiple dives, missed opportunities, and unpredictable conditions. But these challenges are part of what makes underwater documentation both difficult and rewarding.”

    Mentzelopoulos has developed original code to train a family of latent diffusion models for LOBSTgER grounded on Ellenbogen’s images. Developing such models requires a high level of technical expertise, and training models from scratch is a complex process demanding hundreds of hours of computation and meticulous hyperparameter tuning.

    The project reflects a parallel process: field documentation through photography and model development through iterative training. Ellenbogen works in the field, capturing rare and fleeting encounters with marine animals; Mentzelopoulos works in the lab, translating those moments into machine-learning contexts that can extend and reinterpret the visual language of the ocean.

    “The goal isn’t to replace photography,” Mentzelopoulos says. “It’s to build on and complement it — making the invisible visible, and helping people see environmental complexity in a way that resonates both emotionally and intellectually. Our models aim to capture not just biological realism, but the emotional charge that can drive real-world engagement and action.”

    LOBSTgER points to a hybrid future that merges direct observation with technological interpretation. The team’s long-term goal is to develop a comprehensive model that can visualize a wide range of species found in the Gulf of Maine and, eventually, apply similar methods to marine ecosystems around the world.

    The researchers suggest that photography and generative AI form a continuum, rather than a conflict. Photography captures what is — the texture, light, and animal behavior during actual encounters — while AI extends that vision beyond what is seen, toward what could be understood, inferred, or imagined based on scientific data and artistic vision. Together, they offer a powerful framework for communicating science through image-making.

    In a region where ecosystems are changing rapidly, the act of visualizing becomes more than just documentation. It becomes a tool for awareness, engagement, and, ultimately, conservation. LOBSTgER is still in its infancy, and the team looks forward to sharing more discoveries, images, and insights as the project evolves.

    Answer from the lead image: The left image was generated using LOBSTgER’s unconditional models, and the right image is real.

    For more information, contact Keith Ellenbogen and Andreas Mentzelopoulos.
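    LOBSTgER’s actual training code is not public. As a rough illustration of the machinery involved, the sketch below implements only the forward “noising” process that any diffusion model (latent or pixel-space) is trained to invert; the linear schedule, its endpoint values, and the 1,000-step count are common defaults from the diffusion literature, not details taken from the project.

```python
import math
import random

def linear_beta_schedule(timesteps, beta_start=1e-4, beta_end=0.02):
    """Linearly increasing per-step noise variances (a common default)."""
    step = (beta_end - beta_start) / (timesteps - 1)
    return [beta_start + i * step for i in range(timesteps)]

def alpha_bar(betas):
    """Cumulative product of (1 - beta): how much signal survives to step t."""
    out, prod = [], 1.0
    for b in betas:
        prod *= (1.0 - b)
        out.append(prod)
    return out

def q_sample(x0, t, alpha_bars, rng=random):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise."""
    ab = alpha_bars[t]
    return [math.sqrt(ab) * v + math.sqrt(1.0 - ab) * rng.gauss(0.0, 1.0)
            for v in x0]

betas = linear_beta_schedule(1000)
abars = alpha_bar(betas)
# Early steps keep almost all of the image; by the final step it is
# essentially pure Gaussian noise, which the trained model learns to reverse.
print(round(abars[0], 4), round(abars[-1], 6))
```

    Training then amounts to teaching a network to predict the injected noise at each step, so that sampling can run the process in reverse — from noise back to a plausible image.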


    A journey of resilience, fueled by learning

    In 2021, Hilal Mohammadzai was set to begin his senior year at the American University of Afghanistan (AUAF), where he was working toward a bachelor’s degree in computer science. However, that August, the Taliban seized control of the Afghan government, and Mohammadzai’s education — along with that of thousands of other students — was put on hold. “It was an uncertain future for all of the students,” says Mohammadzai.

    Mohammadzai ultimately did receive his undergraduate degree from AUAF in May 2023, after months of disruption and after transferring and studying for one semester at the American University of Bulgaria. As he was considering where to take his studies next, Mohammadzai heard about the MIT Emerging Talent Certificate in Computer and Data Science. His friend had graduated from the program in early 2023 and had only positive things to say about the education, community, and network.

    Creating opportunities to learn data science

    Part of MIT Open Learning, Emerging Talent develops global education programs for talented individuals from challenging economic and social circumstances, equipping them with the knowledge and tools to advance their education and careers.

    The Certificate in Computer and Data Science is a year-long online learning program for talented learners, including refugees, migrants, and first-generation low-income students from historically marginalized backgrounds and underserved communities worldwide. The curriculum incorporates computer science and data analysis coursework from MITx, professional skill building, capstone projects, mentorship and internship options, and opportunities for networking with MIT’s global community.

    Throughout his undergraduate coursework, Mohammadzai discovered an affinity for data visualization and decided that he wanted to pursue a career in data science. The opportunity with the Emerging Talent program presented itself at the perfect time. Mohammadzai applied and was accepted into the 2023-24 cohort, earning a spot out of a pool of over 2,000 applicants. “I thought it would be a great opportunity to learn more data science to build up on my existing knowledge,” he says.

    Expanding and deepening his data science knowledge

    Mohammadzai’s acceptance to the Emerging Talent program came around the same time that he began an MBA program at the American University of Central Asia in Kyrgyzstan. For him, the two programs made for a perfect pairing. “When you have data science knowledge, you usually also require domain knowledge — whether it’s in business or economics — to help with interpreting data and making decisions,” he says. “Analyzing the data is one piece, but understanding how to interpret that data and make a decision usually requires domain knowledge.”

    Although Mohammadzai had some data science experience from his undergraduate coursework, he learned new skills and new approaches to familiar knowledge in the Emerging Talent program. “Data structures were covered at university, but I found it much more in-depth in the MIT courses,” says Mohammadzai. “I liked the way it was explained with real-life examples.” He worked with students from different backgrounds and used GitHub for group projects.

    Mohammadzai also took advantage of personal agency and job-readiness workshops provided by the Emerging Talent team, such as how to pursue freelancing and build a mentorship network — skills that he has taken forward in life. “I found it an exceptional opportunity,” he says. “The courses, the level of education, and the quality of education that was provided by MIT was really inspiring to me.”

    Applying data skills to real-world situations

    After graduating with his Certificate in Computer and Data Science, Mohammadzai began a paid internship with TomorrowNow, facilitated by introductions from the Emerging Talent team. Mohammadzai’s resume and experience stood out to the hiring team, and he was selected for the internship program.

    TomorrowNow is a climate-tech nonprofit that works with philanthropic partners, commercial markets, R&D organizations, and local climate adaptation efforts to localize and open-source weather data for smallholder farmers in Africa. The organization builds public capacity and facilitates partnerships to deploy and sustain next-generation weather services for vulnerable communities facing climate change, while also enabling equitable access to these services so that African farmers can optimize scarce resources such as water and farm inputs. Leveraging philanthropy as seed capital, TomorrowNow aims to de-risk weather and climate technologies to make high-quality data and products available for the public good, ultimately incentivizing the private sector to develop products that reach last-mile communities often excluded from advancements in weather technology.

    For his internship, Mohammadzai worked with TomorrowNow climatologist John Corbett to understand the weather data and, ultimately, learn how to analyze it to make recommendations on what information to share with customers. “We challenged Hilal to create a library of training materials leveraging his knowledge of Python and targeting utilization of meteorological data,” says Corbett. “For Hilal, the meteorological data was a new type of data and he jumped right in, working to create training materials for Python users that not only manipulated weather data, but also helped make clear patterns and challenges useful for agricultural interpretation of these data. The training tools he built helped to visualize — and quantify — agricultural meteorological thresholds and their risk and potential impact on crops.”

    Although he had previously worked with real-world data, working with TomorrowNow marked Mohammadzai’s first experience in the domain of climate data. This area presented a unique set of challenges and insights that broadened his perspective. It not only solidified his desire to continue on a data science path, but also sparked a new interest in working with mission-focused organizations.

    Both TomorrowNow and Mohammadzai would like to continue working together, but he first needs to secure a work visa. Without a visa, Mohammadzai cannot work for more than three to four hours a day, which makes securing a full-time job impossible. Back in 2021, the American University of Afghanistan filed a P-1 (priority one) asylum case for their students to seek resettlement in the United States because of the potential threat posed to them by the Taliban. Mohammadzai’s hearing was scheduled for Feb. 1, but it was postponed after the program was suspended early this year.

    As Mohammadzai looks to the end of his MBA program, his future feels uncertain. He has lived abroad since 2021 thanks to student visas and scholarships, but until he can secure a work visa he has limited options. He is considering pursuing a PhD program in order to keep his student visa status while he waits on news about a more permanent option. “I just want to find a place where I can work and contribute to the community.”
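    The training materials Corbett describes center on turning raw weather series into agriculturally meaningful indicators. A minimal sketch of that kind of analysis — accumulating growing degree days and counting heat-stress days from daily temperatures — appears below; the temperature series and both thresholds are invented for illustration, and real crop-specific values vary.

```python
# Hypothetical daily temperature series (degrees C) for one location.
daily_tmin = [12.0, 14.5, 16.0, 13.5, 18.0, 21.0, 19.5]
daily_tmax = [24.0, 27.5, 30.0, 26.0, 33.5, 36.0, 31.0]

BASE_TEMP = 10.0    # growth accumulates above this temperature (crop-dependent)
HEAT_STRESS = 35.0  # tmax above this can damage many crops (illustrative)

def growing_degree_days(tmin, tmax, base=BASE_TEMP):
    """Classic agro-meteorological index: warmth useful to a crop, summed
    from the daily mean temperature clipped at a base temperature."""
    total = 0.0
    for lo, hi in zip(tmin, tmax):
        mean = (lo + hi) / 2.0
        total += max(0.0, mean - base)
    return total

def heat_stress_days(tmax, threshold=HEAT_STRESS):
    """Count days whose maximum exceeds a crop-damage threshold."""
    return sum(1 for t in tmax if t > threshold)

gdd = growing_degree_days(daily_tmin, daily_tmax)
stress = heat_stress_days(daily_tmax)
```

    Indicators like these are what translate raw forecasts into the risk and crop-impact summaries smallholder farmers can act on.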


    Study: Climate change may make it harder to reduce smog in some regions

    Global warming will likely hinder our future ability to control ground-level ozone, a harmful air pollutant that is a primary component of smog, according to a new MIT study.

    The results could help scientists and policymakers develop more effective strategies for improving both air quality and human health. Ground-level ozone causes a host of detrimental health impacts, from asthma to heart disease, and contributes to thousands of premature deaths each year.

    The researchers’ modeling approach reveals that, as the Earth warms due to climate change, ground-level ozone will become less sensitive to reductions in nitrogen oxide emissions in eastern North America and Western Europe. In other words, it will take greater nitrogen oxide emission reductions to get the same air quality benefits. However, the study also shows that the opposite would be true in northeast Asia, where cutting emissions would have a greater impact on reducing ground-level ozone in the future.

    The researchers combined a climate model that simulates meteorological factors, such as temperature and wind speeds, with a chemical transport model that estimates the movement and composition of chemicals in the atmosphere. By generating a range of possible future outcomes, the researchers’ ensemble approach better captures inherent climate variability, allowing them to paint a fuller picture than many previous studies.

    “Future air quality planning should consider how climate change affects the chemistry of air pollution. We may need steeper cuts in nitrogen oxide emissions to achieve the same air quality goals,” says Emmie Le Roy, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author of a paper on this study.

    Her co-authors include Anthony Y.H. Wong, a postdoc in the MIT Center for Sustainability Science and Strategy; Sebastian D. Eastham, principal research scientist in the MIT Center for Sustainability Science and Strategy; Arlene Fiore, the Peter H. Stone and Paola Malanotte Stone Professor of EAPS; and senior author Noelle Selin, a professor in the Institute for Data, Systems, and Society (IDSS) and EAPS. The research appears today in Environmental Science and Technology.

    Controlling ozone

    Ground-level ozone differs from the stratospheric ozone layer that protects the Earth from harmful UV radiation. It is a respiratory irritant that is harmful to the health of humans, animals, and plants. Controlling ground-level ozone is particularly challenging because it is a secondary pollutant, formed in the atmosphere by complex reactions involving nitrogen oxides and volatile organic compounds in the presence of sunlight. “That is why you tend to have higher ozone days when it is warm and sunny,” Le Roy explains.

    Regulators typically try to reduce ground-level ozone by cutting nitrogen oxide emissions from industrial processes. But it is difficult to predict the effects of those policies because ground-level ozone interacts with nitrogen oxide and volatile organic compounds in nonlinear ways. Depending on the chemical environment, reducing nitrogen oxide emissions could cause ground-level ozone to increase instead. “Past research has focused on the role of emissions in forming ozone, but the influence of meteorology is a really important part of Emmie’s work,” Selin says.

    To conduct their study, the researchers combined a global atmospheric chemistry model with a climate model that simulates future meteorology. They used the climate model to generate meteorological inputs for each future year in their study, simulating factors such as likely temperature and wind speeds, in a way that captures the inherent variability of a region’s climate. Then they fed those inputs to the atmospheric chemistry model, which calculates how the chemical composition of the atmosphere would change because of meteorology and emissions.

    The researchers focused on eastern North America, Western Europe, and Northeast China, since those regions have historically high levels of the precursor chemicals that form ozone and well-established monitoring networks to provide data. They chose to model two future scenarios, one with high warming and one with low warming, over a 16-year period between 2080 and 2095. They compared them to a historical scenario capturing 2000 to 2015 to see the effects of a 10 percent reduction in nitrogen oxide emissions.

    Capturing climate variability

    “The biggest challenge is that the climate naturally varies from year to year. So, if you want to isolate the effects of climate change, you need to simulate enough years to see past that natural variability,” Le Roy says. They could overcome that challenge due to recent advances in atmospheric chemistry modeling and by taking advantage of parallel computing to simulate multiple years at the same time. They simulated five 16-year realizations, resulting in 80 model years for each scenario.

    The researchers found that eastern North America and Western Europe are especially sensitive to increases in nitrogen oxide emissions from the soil, which are natural emissions driven by increases in temperature. Due to that sensitivity, as the Earth warms and more nitrogen oxide from soil enters the atmosphere, reducing nitrogen oxide emissions from human activities will have less of an impact on ground-level ozone. “This shows how important it is to improve our representation of the biosphere in these models to better understand how climate change may impact air quality,” Le Roy says.

    On the other hand, since industrial processes in northeast Asia cause more ozone per unit of nitrogen oxide emitted, cutting emissions there would cause greater reductions in ground-level ozone in future warming scenarios. “But I wouldn’t say that is a good thing because it means that, overall, there are higher levels of ozone,” Le Roy adds.

    Running detailed meteorology simulations, rather than relying on annual average weather data, gave the researchers a more complete picture of the potential effects on human health. “Average climate isn’t the only thing that matters. One high ozone day, which might be a statistical anomaly, could mean we don’t meet our air quality target and have negative human health impacts that we should care about,” Le Roy says.

    In the future, the researchers want to continue exploring the intersection of meteorology and air quality. They also want to expand their modeling approach to consider other climate change factors with high variability, like wildfires or biomass burning. “We’ve shown that it is important for air quality scientists to consider the full range of climate variability, even if it is hard to do in your models, because it really does affect the answer that you get,” says Selin.

    This work is funded, in part, by the MIT Praecis Presidential Fellowship, the J.H. and E.V. Wade Fellowship, and the MIT Martin Family Society of Fellows for Sustainability.
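    The ensemble logic the study relies on — simulating several realizations so the forced climate signal can be separated from year-to-year noise — can be sketched in miniature. Below, five simulated 16-year “realizations” share one underlying trend but differ in random variability, and averaging across them stabilizes the trend estimate. The trend and noise magnitudes are invented for illustration, not values from the study.

```python
import random
import statistics

def simulate_realization(years, trend_per_year, noise_sd, rng):
    """One model realization: a small forced trend buried in
    year-to-year meteorological variability."""
    return [trend_per_year * y + rng.gauss(0.0, noise_sd) for y in range(years)]

def trend_estimate(series):
    """Ordinary least-squares slope of the series against year index."""
    n = len(series)
    xs = list(range(n))
    xbar, ybar = statistics.mean(xs), statistics.mean(series)
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, series))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den

rng = random.Random(0)
true_trend = 0.1  # hypothetical forced change per year, in arbitrary units
realizations = [simulate_realization(16, true_trend, 2.0, rng) for _ in range(5)]

# Averaging across realizations damps internal variability, so the forced
# trend emerges more clearly than it typically does in a single member.
ensemble_mean = [statistics.mean(col) for col in zip(*realizations)]
single_err = abs(trend_estimate(realizations[0]) - true_trend)
ens_err = abs(trend_estimate(ensemble_mean) - true_trend)
```

    The same principle, at vastly larger scale, is why five 16-year realizations (80 model years per scenario) let the researchers see past natural variability.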


    How can India decarbonize its coal-dependent electric power system?

    As the world struggles to reduce climate-warming carbon emissions, India has pledged to do its part, and its success is critical: In 2023, India was the third-largest carbon emitter worldwide. The Indian government has committed to reaching net-zero carbon emissions by 2070.

    To fulfill that promise, India will need to decarbonize its electric power system, and that will be a challenge: Fully 60 percent of India’s electricity comes from coal-burning power plants that are extremely inefficient. To make matters worse, the demand for electricity in India is projected to more than double in the coming decade due to population growth and increased use of air conditioning, electric cars, and so on.

    Despite having set an ambitious target, the Indian government has not proposed a plan for getting there. Indeed, as in other countries, in India the government continues to permit new coal-fired power plants to be built, and aging plants to be renovated and their retirement postponed.

    To help India define an effective — and realistic — plan for decarbonizing its power system, key questions must be addressed. For example, India is already rapidly developing carbon-free solar and wind power generators. What opportunities remain for further deployment of renewable generation? Are there ways to retrofit or repurpose India’s existing coal plants that can substantially and affordably reduce their greenhouse gas emissions? And do the responses to those questions differ by region?

    With funding from IHI Corp. through the MIT Energy Initiative (MITEI), Yifu Ding, a postdoc at MITEI, and her colleagues set out to answer those questions by first using machine learning to determine the efficiency of each of India’s current 806 coal plants, and then investigating the impacts that different decarbonization approaches would have on the mix of power plants and the price of electricity in 2035 under increasingly stringent caps on emissions.

    First step: Develop the needed dataset

    An important challenge in developing a decarbonization plan for India has been the lack of a complete dataset describing the country’s current power plants. While other studies have generated plans, they haven’t taken into account the wide variation in the coal-fired power plants in different regions of the country. “So, we first needed to create a dataset covering and characterizing all of the operating coal plants in India. Such a dataset was not available in the existing literature,” says Ding.

    Making a cost-effective plan for expanding the capacity of a power system requires knowing the efficiencies of all the power plants operating in the system. For this study, the researchers used as their metric the “station heat rate,” a standard measurement of the overall fuel efficiency of a given power plant. The station heat rate of each plant is needed in order to calculate the fuel consumption and power output of that plant as plans for capacity expansion are being developed.

    Some of the Indian coal plants’ efficiencies were recorded before 2022, so Ding and her team used machine-learning models to predict the efficiencies of all the Indian coal plants operating now. In 2024, they created and posted online the first comprehensive, open-sourced dataset for all 806 power plants in 30 regions of India. The work won the 2024 MIT Open Data Prize.
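    The study’s actual machine-learning models are richer than can be shown here, but the core idea — fitting a model on plants whose heat rates were recorded and using it to impute the rest — can be sketched with a one-variable least-squares line. Every number below is hypothetical, and the real models draw on many more plant attributes than age.

```python
import statistics

# Hypothetical training rows: (plant age in years, station heat rate in kJ/kWh).
# Older subcritical plants tend to burn more fuel per unit of electricity.
plants = [(5, 9500.0), (12, 9900.0), (20, 10400.0), (28, 10900.0), (35, 11300.0)]

def fit_linear(rows):
    """Ordinary least squares for heat_rate ~ a + b * age — a stand-in for
    the richer multi-feature ML models used in the study."""
    xs = [r[0] for r in rows]
    ys = [r[1] for r in rows]
    xbar, ybar = statistics.mean(xs), statistics.mean(ys)
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = ybar - b * xbar
    return a, b

a, b = fit_linear(plants)

def predict_heat_rate(age):
    """Impute a station heat rate for a plant whose efficiency was never
    recorded; fuel consumption per MWh then follows directly."""
    return a + b * age
```

    Imputed heat rates like these are exactly what a capacity-expansion model needs in order to cost each plant’s fuel use.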
    This dataset includes each plant’s power capacity, efficiency, age, load factor (a measure indicating how much of the time it operates), water stress, and more.

    In addition, they categorized each plant according to its boiler design. A “supercritical” plant operates at a relatively high temperature and pressure, which makes it thermodynamically efficient, so it produces a lot of electricity for each unit of heat in the fuel. A “subcritical” plant runs at a lower temperature and pressure, so it’s less thermodynamically efficient. Most of the Indian coal plants are still subcritical plants running at low efficiency.

    Next step: Investigate decarbonization options

    Equipped with their detailed dataset covering all the coal power plants in India, the researchers were ready to investigate options for responding to tightening limits on carbon emissions. For that analysis, they turned to GenX, a modeling platform that was developed at MITEI to help guide decision-makers as they make investments and other plans for the future of their power systems.

    Ding built a GenX model based on India’s power system in 2020, including details about each power plant and the transmission network across 30 regions of the country. She also entered the coal price, potential resources for wind and solar power installations, and other attributes of each region. Based on the parameters given, the GenX model would calculate the lowest-cost combination of equipment and operating conditions that can fulfill a defined future level of demand while also meeting specified policy constraints, including limits on carbon emissions. The model and all data sources were also released as open-source tools for all viewers to use.

    Ding and her colleagues — Dharik Mallapragada, a former principal research scientist at MITEI who is now an assistant professor of chemical and biomolecular engineering at NYU Tandon School of Engineering and a MITEI visiting scientist; and Robert J. Stoner, the founding director of the MIT Tata Center for Technology and Design and former deputy director of MITEI for science and technology — then used the model to explore options for meeting demands in 2035 under progressively tighter carbon emissions caps, taking into account region-to-region variations in the efficiencies of the coal plants, the price of coal, and other factors. They describe their methods and their findings in a paper published in the journal Energy for Sustainable Development.

    In separate runs, they explored plans involving various combinations of current coal plants, possible new renewable plants, and more, to see their outcome in 2035. Specifically, they assumed the following four “grid-evolution scenarios”:

    Baseline: The baseline scenario assumes limited onshore wind and solar photovoltaics development and excludes retrofitting options, representing a business-as-usual pathway.

    High renewable capacity: This scenario calls for the development of onshore wind and solar power without any supply chain constraints.

    Biomass co-firing: This scenario assumes the baseline limits on renewables, but here all coal plants — both subcritical and supercritical — can be retrofitted for “co-firing” with biomass, an approach in which clean-burning biomass replaces some of the coal fuel. Certain coal power plants in India already co-fire coal and biomass, so the technology is known.

    Carbon capture and sequestration plus biomass co-firing: This scenario is based on the same assumptions as the biomass co-firing scenario, with one addition: All of the high-efficiency supercritical plants are also retrofitted for carbon capture and sequestration (CCS), a technology that captures and removes carbon from a power plant’s exhaust stream and prepares it for permanent disposal. Thus far, CCS has not been used in India. This study specifies that 90 percent of all carbon in the power plant exhaust is captured.

    Ding and her team investigated power system planning under each of those grid-evolution scenarios and four assumptions about carbon caps: no cap, which is the current situation; 1,000 million tons (Mt) of carbon dioxide (CO2) emissions, which reflects India’s announced targets for 2035; and two more-ambitious targets, namely 800 Mt and 500 Mt. For context, CO2 emissions from India’s power sector totaled about 1,100 Mt in 2021. (Note that transmission network expansion is allowed in all scenarios.)

    Key findings

    Assuming the adoption of carbon caps under the four scenarios generated a vast array of detailed numerical results. But taken together, the results show interesting trends in the cost-optimal mix of generating capacity and the cost of electricity under the different scenarios.

    Even without any limits on carbon emissions, most new capacity additions will be wind and solar generators — the lowest-cost option for expanding India’s electricity-generation capacity. Indeed, this is observed to be the case now in India. However, the increasing demand for electricity will still require some new coal plants to be built. Model results show a 10 to 20 percent increase in coal plant capacity by 2035 relative to 2020.

    Under the baseline scenario, renewables are expanded up to the maximum allowed under the assumptions, implying that more deployment would be economical. More coal capacity is built, and as the cap on emissions tightens, there is also investment in natural gas power plants, as well as batteries to help compensate for the now-large amount of intermittent solar and wind generation. When a 500 Mt cap on carbon is imposed, the cost of electricity generation is twice as high as it was with no cap.

    The high renewable capacity scenario reduces the development of new coal capacity and produces the lowest electricity cost of the four scenarios. Under the most stringent cap — 500 Mt — onshore wind farms play an important role in bringing the cost down. “Otherwise, it’ll be very expensive to reach such stringent carbon constraints,” notes Ding. “Certain coal plants that remain run only a few hours per year, so are inefficient as well as financially unviable. But they still need to be there to support wind and solar.” She explains that other backup sources of electricity, such as batteries, are even more costly.

    The biomass co-firing scenario assumes the same capacity limit on renewables as in the baseline scenario, and the results are much the same, in part because the biomass replaces such a low fraction — just 20 percent — of the coal in the fuel feedstock. “This scenario would be most similar to the current situation in India,” says Ding. “It won’t bring down the cost of electricity, so we’re basically saying that adding this technology doesn’t contribute effectively to decarbonization.”

    But CCS plus biomass co-firing is a different story. It also assumes the limits on renewables development, yet it is the second-best option in terms of reducing costs. Under the 500 Mt cap on CO2 emissions, retrofitting for both CCS and biomass co-firing produces a 22 percent reduction in the cost of electricity compared to the baseline scenario. In addition, as the carbon cap tightens, this option reduces the extent of deployment of natural gas plants and significantly improves overall coal plant utilization. That increased utilization “means that coal plants have switched from just meeting the peak demand to supplying part of the baseline load, which will lower the cost of coal generation,” explains Ding.

    Some concerns

    While those trends are enlightening, the analyses also uncovered some concerns for India to consider, in particular with the two approaches that yielded the lowest electricity costs.

    The high renewables scenario is, Ding notes, “very ideal.” It assumes that there will be little limiting the development of wind and solar capacity, so there won’t be any issues with supply chains, which is unrealistic. More importantly, the analyses showed that implementing the high renewables approach would create uneven investment in renewables across the 30 regions. Resources for onshore and offshore wind farms are mainly concentrated in a few regions in western and southern India. “So all the wind farms would be put in those regions, near where the rich cities are,” says Ding. “The poorer cities on the eastern side, where the coal power plants are, will have little renewable investment.”

    So the approach that’s best in terms of cost is not best in terms of social welfare, because it tends to benefit the rich regions more than the poor ones. “It’s like [the government will] need to consider the trade-off between energy justice and cost,” says Ding. Enacting state-level renewable generation targets could encourage a more even distribution of renewable capacity installation. Also, as transmission expansion is planned, coordination among power system operators and renewable energy investors in different regions could help in achieving the best outcome.

    CCS plus biomass co-firing — the second-best option for reducing prices — solves the equity problem posed by high renewables, and it assumes a more realistic level of renewable power adoption. However, CCS hasn’t been used in India, so there is no precedent in terms of costs. The researchers therefore based their cost estimates on the cost of CCS in China and then increased the required investment by 10 percent, the “first-of-a-kind” index developed by the U.S. Energy Information Administration. Based on those costs and other assumptions, the researchers conclude that coal plants with CCS could come into use by 2035 when the carbon cap for power generation is less than 1,000 Mt.

    But will CCS actually be implemented in India? While there’s been discussion about using CCS in heavy industry, the Indian government has not announced any plans for implementing the technology in coal-fired power plants. Indeed, India is currently “very conservative about CCS,” says Ding. “Some researchers say CCS won’t happen because it’s so expensive, and as long as there’s no direct use for the captured carbon, the only thing you can do is put it in the ground.” She adds, “It’s really controversial to talk about whether CCS will be implemented in India in the next 10 years.”

    Ding and her colleagues hope that other researchers and policymakers — especially those working in developing countries — may benefit from gaining access to their datasets and learning about their methods. Based on their findings for India, she stresses the importance of understanding the detailed geographical situation in a country in order to design plans and policies that are both realistic and equitable.
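    The least-cost planning that GenX performs at national scale can be caricatured in a few lines: meet a fixed demand from the cheapest technologies, then shift generation away from coal when a carbon cap binds. All costs, emission rates, and resource limits below are invented for illustration; GenX solves this as a full linear program with transmission, storage, retrofits, and many more constraints.

```python
# Toy capacity-planning sketch: technology name, cost ($/MWh),
# emissions (tCO2/MWh), and available energy (TWh). Illustrative numbers only.
techs = [
    ("solar_wind", 30.0, 0.00, 900.0),
    ("coal",       45.0, 0.95, 2000.0),
    ("gas",        70.0, 0.45, 1000.0),
]

def least_cost_mix(demand_twh, cap_mt):
    """Greedy stand-in for a capacity-expansion LP: fill demand from the
    cheapest sources, then, if the carbon cap (in Mt CO2) is exceeded,
    move coal generation to the cleaner but pricier dispatchable option."""
    mix, remaining = {}, demand_twh
    for name, cost_per_mwh, _, avail in sorted(techs, key=lambda t: t[1]):
        take = min(avail, remaining)
        mix[name] = take
        remaining -= take

    def emissions(m):
        # TWh * tCO2/MWh = Mt CO2, so units line up directly.
        return sum(m[n] * e for n, _, e, _ in techs)

    while emissions(mix) > cap_mt and mix["coal"] > 0:
        shift = min(mix["coal"], 10.0)
        mix["coal"] -= shift
        mix["gas"] += shift

    total_cost = sum(mix[n] * c for n, c, _, _ in techs)
    return mix, emissions(mix), total_cost
```

    Even in this caricature, tightening the cap forces coal-to-gas substitution and raises total generation cost — the same qualitative pattern the study reports as caps move from none toward 500 Mt.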