More stories

  • Sustainable supply chains put the customer first

    When we consider the supply chain, we typically think of factories, ships, trucks, and warehouses. Yet the customer side is equally important, especially in efforts to make our distribution networks more sustainable. Customers are an untapped resource in building sustainability, says Josué C. Velázquez Martínez, a research scientist at the MIT Center for Transportation and Logistics.

    Velázquez Martínez, who is director of MIT’s Sustainable Supply Chain Lab, investigates how customer-facing supply chains can be made more environmentally and socially sustainable. One effort is the Green Button project, which explores how to optimize e-commerce delivery schedules to reduce carbon emissions and persuade customers to choose less carbon-intensive four- or five-day shipping instead of one- or two-day delivery. Velázquez Martínez has also launched the MIT Low Income Firms Transformation (LIFT) Lab, which is researching ways to improve micro-retailer supply chains in the developing world and provide owners with the tools they need to survive.

    “The definition of sustainable supply chain keeps evolving because things that were sustainable 20 to 30 years ago are not as sustainable now,” says Velázquez Martínez. “Today, there are more companies that are capturing information to build strategies for environmental, economic, and social sustainability. They are investing in alternative energy and other solutions to make the supply chain more environmentally friendly and are tracking their suppliers and identifying key vulnerabilities. A big part of this is an attempt to create fairer conditions for people who work in supply chains or are dependent on them.”


    The move toward sustainable supply chains is being driven as much by people as by companies, whether those people act as selective consumers or voting citizens. The consumer aspect is often overlooked, says Velázquez Martínez. “Consumers are the ones who move the supply chain. We are looking at how companies can provide transparency to involve customers in their sustainability strategy.”

    Proposed solutions for sustainability are not always as effective as promised. Some fashion rental schemes fall into this category, says Velázquez Martínez. “There are many new rental companies that are trying to get more use out of clothes to offset the emissions associated with production. We recently researched the environmental impact of monthly subscription models where consumers pay a fee to receive clothes for a month before returning them, as well as peer-to-peer sharing models.” 

    The researchers found that while rental services generally have a lower carbon footprint than retail sales, hidden emissions from logistics played a surprisingly large role. “First, you need to deliver the clothes and pick them up, and there are high return rates,” says Velázquez Martínez. “When you factor in dry cleaning and packaging emissions, the rental models in some cases have a worse carbon footprint than buying new clothes.” Peer-to-peer sharing could be better, he adds, but that depends on how far the consumers travel to meet-up points. 

    Typically, says Velázquez Martínez, garment types that are frequently used are not well suited to rental models. “But for specialty clothes such as wedding dresses or prom dresses, it is better to rent.” 

    Waiting a few days to save the planet 

    Even before the pandemic, online retailing gained a second wind due to low-cost same- and next-day delivery options. While e-commerce may have its drawbacks as a contributor to social isolation and reduced competition, it has proven to be far more eco-friendly than brick-and-mortar shopping, not to mention a lot more convenient. Yet rapid deliveries are cutting into online shopping’s carbon-cutting advantage.

    In 2019, MIT’s Sustainable Supply Chain Lab launched the Green Button project to study the rapid delivery phenomenon. The project has been “testing whether consumers would be willing to delay their e-commerce deliveries to reduce the environmental impact of fast shipping,” says Velázquez Martínez. “Many companies such as Walmart and Target have followed Amazon’s 2019 strategy of moving from two-day to same-day delivery. Instead of sending a fully loaded truck to a neighborhood every few days, they now send multiple trucks to that neighborhood every day, and there are more days when trucks are targeting each neighborhood. All this increases carbon emissions and makes it hard for shippers to consolidate.”

    Working with Coppel, one of Mexico’s largest retailers, the Green Button project inspired a related Consolidation Ecommerce Project that built a large-scale mathematical model to provide a strategy for consolidation. The model determined what delivery time window each neighborhood demands and then calculated the best day to deliver to each neighborhood to meet the desired window while minimizing carbon emissions. 
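    To make the consolidation idea concrete, the sketch below is a minimal, purely illustrative Python example, not the Coppel model itself: each neighborhood’s orders are held until the last day its delivery window allows, so one trip carries as many packages as possible. The neighborhoods, order counts, and windows are hypothetical.

        from collections import defaultdict

        # Hypothetical inputs: orders[(neighborhood, day_placed)] = packages ordered,
        # window_days[neighborhood] = how many days customers in that area will wait.
        orders = {("A", 0): 5, ("A", 1): 3, ("A", 2): 4, ("B", 0): 2, ("B", 3): 6}
        window_days = {"A": 4, "B": 2}

        def consolidate(orders, window_days):
            """Group orders into one dispatch per neighborhood per due date."""
            dispatches = defaultdict(int)  # (neighborhood, delivery_day) -> packages
            for (hood, placed), n in orders.items():
                deliver_on = placed + window_days[hood]  # latest acceptable day
                dispatches[(hood, deliver_on)] += n
            return dict(dispatches)

        print(consolidate(orders, window_days))
        # Fewer dispatches per neighborhood is the proxy here for lower fuel use and
        # emissions; the real model also weighs routing, store capacity, and inbound
        # operations.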

    No matter what mixture of delivery times was used, the consolidation model helped retailers schedule deliveries more efficiently. Yet, the biggest cuts in emissions emerged when customers were willing to wait several days.


    “When we ran a month-long simulation comparing our model for four-to-five-day delivery with Coppel’s existing model for one- or two-day delivery, we saw savings in fuel consumption of over 50 percent on certain routes,” says Velázquez Martínez. “This is huge compared to other strategies for squeezing more efficiency from the last-mile supply chain, such as routing optimization, where savings are close to 5 percent. The optimal solution depends on factors such as the capacity for consolidation, the frequency of delivery, the store capacity, and the impact on inbound operations.”

    The researchers next set out to determine if customers could be persuaded to wait longer for deliveries. Given that the price differential is low or nonexistent, this was a considerable challenge. Yet the same-day habit is only a few years old, and some consumers have come to realize they don’t always need rapid deliveries. “Some consumers who order by rapid delivery find they are too busy to open the packages right away,” says Velázquez Martínez.

    Trees beat kilograms of CO2

    The researchers set out to find if consumers would be willing to sacrifice a bit of convenience if they knew they were helping to reduce climate change. The Green Button project tested different public outreach strategies. For one test group, they reported the carbon impact of delivery times in kilograms of carbon dioxide (CO2). Another group received the information expressed in terms of the energy required to recycle a certain amount of garbage. A third group learned about emissions in terms of the number of trees required to trap the carbon. “Explaining the impact in terms of trees led to almost 90 percent willing to wait another day or two,” says Velázquez Martínez. “This is compared to less than 40 percent for the group that received the data in kilograms of CO2.” 
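    The tree framing is essentially a unit conversion. The sketch below is not from the Green Button project; it simply illustrates the idea using an assumed average uptake of roughly 20 kilograms of CO2 per tree per year, a figure that varies by species and age.

        # Assumed average sequestration rate; illustrative only, not the study's value.
        KG_CO2_PER_TREE_YEAR = 20.0

        def co2_as_tree_years(kg_co2: float) -> float:
            """Express a delivery's emissions as tree-years of carbon uptake."""
            return kg_co2 / KG_CO2_PER_TREE_YEAR

        for kg in (0.5, 2.0, 10.0):
            print(f"{kg:>4.1f} kg CO2 is about {co2_as_tree_years(kg):.2f} tree-years")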

    Another surprise was that there was no difference in response based on income, gender, or age. “Most studies of green consumers suggest they are predominantly high income, female, highly educated, or younger,” says Velázquez Martínez. “However, our results show that responses were the same between low and high income, women and men, and younger and older people. We have shown that disclosing emissions transparently and making the consumer a part of the strategy can be a new opportunity for more consumer-driven logistics sustainability.”

    The researchers are now developing similar models for business-to-business (B2B) e-commerce. “We found that B2B supply chain emissions are often high because many shipping companies require strict delivery windows,” says Velázquez Martínez.  

    The B2B models drill down to examine the Corporate Value Chain (Scope 3) emissions of suppliers. “Although some shipping companies are now asking their suppliers to review emissions, it is a challenge to create a transparent supply chain,” says Velázquez Martínez.  “Technological innovations have made it easier, starting with RFID [radio frequency identification], and then real-time GPS mapping and blockchain. But these technologies need to be more accessible and affordable, and we need more companies willing to use them.” 

    Some companies have been hesitant to dig too deeply into their supply chain, fearing they might uncover a scandal that could damage their reputation, says Velázquez Martínez. Other organizations are forced to look at the issue when nongovernmental organizations research sustainability issues such as social injustice in sweatshops and conflict mineral mines.

    One challenge to building a transparent supply chain is that “in many companies, the sustainability teams are separate from the rest of the company,” says Velázquez Martínez. “Even if the CEOs receive information on sustainability issues, it often doesn’t filter down because the information does not belong to the planners or managers. We are pushing companies to not only account for sustainability factors in supply chain network design but also examine daily operations that affect sustainability. This is a big topic now: How can we translate sustainability information into something that everybody can understand and use?” 

    LIFT Lab lifts micro-retailers  

    In 2016, Velázquez Martínez launched the MIT GeneSys project to gain insights into micro and small enterprises (MSEs) in developing countries. The project released a GeneSys mobile app, which was used by more than 500 students throughout Latin America to collect data on more than 800 microfirms. In 2022, he launched the LIFT Lab, which focuses more specifically on studying and improving the supply chain for MSEs.  

    Worldwide, some 90 percent of companies have fewer than 10 employees. In Latin America and the Caribbean, companies with fewer than 50 employees represent 99 percent of all companies and 47 percent of employment. 

    Although MSEs represent much of the world’s economy, they are poorly understood, notes Velázquez Martínez. “Those tiny businesses are driving a lot of the economy and serve as important customers for the large companies working in developing countries. They range from small businesses down to people trying to get some money to eat by selling cakes or tacos through their windows.”  

    The MIT LIFT Lab researchers investigated whether MSE supply chain issues could help shed light on why many Latin American countries have been limited to marginal increases in gross domestic product. “Large companies from the developed world that are operating in Latin America, such as Unilever, Walmart, and Coca-Cola, have huge growth there, in some cases higher than they have in the developed world,” says Velázquez Martínez. “Yet, the countries are not developing as fast as we would expect.” 

    The LIFT Lab data showed that while the multinationals are thriving in Latin America, the local MSEs are decreasing in productivity. The study also found the trend has worsened with Covid-19.  

    The LIFT Lab’s first big project, which is sponsored by Mexican beverage and retail company FEMSA, is studying supply chains in Mexico. The study spans 200,000 micro-retailers and 300,000 consumers. In a collaboration with Tecnológico de Monterrey, hundreds of students are helping with a field study.  

    “We are looking at supply chain management and business capabilities and identifying the challenges to adoption of technology and digitalization,” says Velázquez Martínez. “We want to find the best ways for micro-firms to work with suppliers and consumers by identifying the consumers who access this market, as well as the products and services that can best help the micro-firms drive growth.” 

    Based on the earlier research by GeneSys, Velázquez Martínez has developed some hypotheses for potential improvements to micro-retailer supply chains, starting with payment terms. “We found that the micro-firms often get the worst purchasing deals. Owners without credit cards and with limited cash often buy in smaller amounts at much higher prices than retailers like Walmart. The big suppliers are squeezing them.”

    While large retailers usually get 60 to 120 days to pay, micro-retailers “either pay at the moment of the transaction or in advance,” says Velázquez Martínez. “In a study of 500 micro-retailers in five countries in Latin America, we found the average payment time was minus seven days, meaning payment a week in advance. These terms reduce cash availability and often lead to bankruptcy.”

    LIFT Lab is working with suppliers to persuade them to offer a minimum payment time of two weeks. “We can show the suppliers that the change in terms will let them move more product and increase sales,” says Velázquez Martínez. “Meanwhile, the micro-retailers gain higher profits and become more stable, even if they may pay a bit more.” 

    LIFT Lab is also looking at ways that micro-retailers can leverage smartphones for digitalization and planning. “Some of these companies are keeping records on napkins,” says Velázquez Martínez. “By using a cellphone, they can charge orders to suppliers and communicate with consumers. We are testing different dashboards for mobile apps to help with planning and financial performance. We are also recommending services the stores can provide, such as paying electricity or water bills. The idea is to build more capabilities and knowledge and increase business competencies for the supply chain that are tailored for micro-retailers.” 

    From a financial perspective, micro-retailers are not always the most efficient way to move products. Yet they also play an important role in building social cohesion within neighborhoods. By offering more services, the corner bodega can bring people together in ways that are impossible with e-commerce and big-box stores.  

    Whether the consumers are micro-firms buying from suppliers or e-commerce customers waiting for packages, “transparency is key to building a sustainable supply chain,” says Velázquez Martínez. “To change consumer habits, consumers need to be better educated on the impacts of their behaviors. With consumer-facing logistics, ‘The last shall be first, and the first last.’”

  • Studying floods to better predict their dangers

    “My job is basically flooding Cambridge,” says Katerina “Katya” Boukin, a graduate student in civil and environmental engineering at MIT and the MIT Concrete Sustainability Hub’s resident expert on flood simulations. 

    You can often find her fine-tuning high-resolution flood risk models for the City of Cambridge, Massachusetts, or talking about hurricanes with fellow researcher Ipek Bensu Manav.

    Flooding represents one of the world’s gravest natural hazards. Extreme climate events inducing flooding, like severe storms, winter storms, and tropical cyclones, caused an estimated $128.1 billion of damages in 2021 alone. 

    Climate simulation models suggest that severe storms will become more frequent in the coming years, necessitating a better understanding of which parts of cities are most vulnerable — an understanding that can be improved through modeling.

    A problem with current flood models is that they struggle to account for an oft-misunderstood type of flooding known as pluvial flooding. 

    “You might think of flooding as the overflowing of a body of water, like a river. This is fluvial flooding. This can be somewhat predictable, as you can think of proximity to water as a risk factor,” Boukin explains.

    However, the “flash flooding” that causes many deaths each year can happen even in places nowhere near a body of water. This is an example of pluvial flooding, which is affected by terrain, urban infrastructure, and the dynamic nature of storm loads.

    “If we don’t know how a flood is propagating, we don’t know the risk it poses to the urban environment. And if we don’t understand the risk, we can’t really discuss mitigation strategies,” says Boukin. “That’s why I pursue improving flood propagation models.”

    Boukin is leading development of a new flood prediction method that seeks to address these shortcomings. By better representing the complex morphology of cities, Boukin’s approach may provide a clearer forecast of future urban flooding.

    Katya Boukin developed this model of the City of Cambridge, Massachusetts. The base model was provided through a collaboration between MIT, the City of Cambridge, and Dewberry Engineering.

    Image: Katya Boukin


    “In contrast to the more traditional catchment model, our method has rainwater spread around the urban environment based on the city’s topography, below-the-surface features like sewer pipes, and the characteristics of local soils,” notes Boukin.

    “We can simulate the flooding of regions with local rain forecasts. Our results can show how flooding propagates by the foot and by the second,” she adds.
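    As a rough sense of what propagating a flood over a grid means computationally, the toy sketch below routes standing water over a small terrain patch by letting each cell spill toward its lowest neighbor. It is only a caricature of Boukin’s model, which also represents sewer networks and soil infiltration; the elevations and rainfall depth are invented.

        import numpy as np

        elevation = np.array([[3.0, 2.5, 2.0],
                              [2.8, 1.5, 1.8],
                              [2.6, 1.2, 1.0]])   # hypothetical terrain heights (m)
        water = np.full_like(elevation, 0.05)     # uniform 5 cm of rainfall

        def step(elevation, water, frac=0.5):
            """Move a fraction of each cell's water toward its lowest neighbor."""
            h = elevation + water                 # surface = ground + standing water
            new = water.copy()
            rows, cols = elevation.shape
            for r in range(rows):
                for c in range(cols):
                    nbrs = [(r + dr, c + dc)
                            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
                            if 0 <= r + dr < rows and 0 <= c + dc < cols]
                    lo = min(nbrs, key=lambda rc: h[rc])
                    if h[lo] < h[r, c]:
                        moved = frac * water[r, c]
                        new[r, c] -= moved
                        new[lo] += moved
            return new

        for _ in range(10):
            water = step(elevation, water)
        print(np.round(water, 3))  # water ponds in the low corner of the grid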

    While Boukin’s current focus is flood simulation, her unconventional academic career has taken her research in many directions, like examining structural bottlenecks in dense urban rail systems and forecasting ground displacement due to tunneling. 

    “I’ve always been interested in the messy side of problem-solving. I think that difficult problems present a real chance to gain a deeper understanding,” says Boukin.

    Boukin credits her upbringing for giving her this perspective. A native of Israel, Boukin says that civil engineering is the family business. “My parents are civil engineers; my mom’s parents are, too; her grandfather was a professor in civil engineering; and so on. Civil engineering is my bloodline.”

    However, the decision to follow the family tradition did not come so easily. “After I took the Israeli equivalent of the SAT, I was at a decision point: Should I go to engineering school or medical school?” she recalls.

    “I decided to go on a backpacking trip to help make up my mind. It’s sort of an Israeli rite to explore internationally, so I spent six months in South America. I think backpacking is something everyone should do.”

    After this soul searching, Boukin landed on engineering school, where she fell in love with structural engineering. “It was the option that felt most familiar and interesting. I grew up playing with AutoCAD on the family computer, and now I use AutoCAD professionally!” she notes.

    “For my master’s degree, I was looking to study in a department that would help me integrate knowledge from fields like climatology and civil engineering. I found the MIT Department of Civil and Environmental Engineering to be an excellent fit,” she says.

    “I am lucky that MIT has so many people that work together as well as they do. I ended up at the Concrete Sustainability Hub, where I’m working on projects which are the perfect fit between what I wanted to do and what the department wanted to do.” 

    Boukin’s move to Cambridge has given her a new perspective on her family and childhood. 

    “My parents brought me to Israel when I was just 1 year old. In moving here as a second-time immigrant, I have a new perspective on what my parents went through during the move to Israel. I moved when I was 27 years old, the same age as they were. They didn’t have a support network and worked any job they could find,” she explains.

    “I am incredibly grateful to them for the morals they instilled in my sister, who recently graduated medical school, and I. I know I can call my parents if I ever need something, and they will do whatever they can to help.”

    Boukin hopes to honor her parents’ efforts through her research.

    “Not only do I want to help stakeholders understand flood risks, I want to make awareness of flooding more accessible. Each community needs different things to be resilient, and different cultures have different ways of delivering and receiving information,” she says.

    “Everyone should understand that they, in addition to the buildings and infrastructure around them, are part of a complex ecosystem. Any change to a city can affect the rest of it. If designers and residents are aware of this when considering flood mitigation strategies, we can better design cities and understand the consequences of damage.”

  • Simulating neutron behavior in nuclear reactors

    Amelia Trainer applied to MIT because she lost a bet.

    As part of what the fourth-year nuclear science and engineering (NSE) doctoral student labels her “teenage rebellious phase,” Trainer was quite convinced she would just be wasting the application fee were she to submit an application. She wasn’t even “super sure” she wanted to go to college. But a high-school friend was convinced Trainer would get into a “top school” if she only applied. A bet followed: If Trainer lost, she would have to apply to MIT. Trainer lost — and is glad she did.

    Growing up in Daytona Beach, Florida, Trainer made good grades her thing. Seeing friends participate in interschool math competitions, Trainer decided she would tag along and soon found she loved them. She remembers being adept at reading the room: If teams were especially struggling over a problem, Trainer figured the answer had to be something easy, like zero or one. “The hardest problems would usually have the most goofball answers,” she laughs.

    Simulating neutron behavior

    As a doctoral student, hard problems in math, specifically computational reactor physics, continue to be Trainer’s forte.

    Her research, under the guidance of Professor Benoit Forget in MIT NSE’s Computational Reactor Physics Group (CRPG), focuses on modeling complicated neutron behavior in reactors. Simulation helps forecast the behavior of reactors before millions of dollars sink into development of a potentially uneconomical unit. Using simulations, Trainer can see “where the neutrons are going, how much heat is being produced, and how much power the reactor can generate.” Her research helps form the foundation for the next generation of nuclear power plants.

    To simulate neutron behavior inside of a nuclear reactor, you first need to know how neutrons will interact with the various materials inside the system. These neutrons can have wildly different energies, thereby making them susceptible to different physical phenomena. For the entirety of her graduate studies, Trainer has been primarily interested in the physics regarding slow-moving neutrons and their scattering behavior.

    When a slow neutron scatters off of a material, it can induce or cancel out molecular vibrations between the material’s atoms. The effect that material vibrations can have on neutron energies, and thereby on reactor behavior, has been heavily approximated over the years. Trainer is primarily interested in chipping away at these approximations by creating scattering data for materials that have historically been misrepresented and by exploring new techniques for preparing slow-neutron scattering data.

    Trainer remembers waiting for a simulation to complete in the early days of the Covid-19 pandemic, when she discovered a way to predict neutron behavior with limited input data. Traditionally, “people have to store large tables of what neutrons will do under specific circumstances,” she says. “I’m really happy about it because it’s this really cool method of sampling what your neutron does from very little information.”

    Amelia Trainer — Modeling complicated neutron behavior in nuclear reactors

    As part of her research, Trainer often works closely with two software packages: OpenMC and NJOY. OpenMC is a Monte Carlo neutron transport simulation code that was developed in the CRPG and is used to simulate neutron behavior in reactor systems. NJOY is a nuclear data processing tool, and is used to create, augment, and prepare material data that is fed into tools like OpenMC. By editing both these codes to her specifications, Trainer is able to observe the effect that “upstream” material data has on the “downstream” reactor calculations. Through this, she hopes to identify additional problems: approximations that could lead to a noticeable misrepresentation of the physics.
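    For readers unfamiliar with these tools, the snippet below is a minimal sketch of what a Monte Carlo neutron transport run looks like through OpenMC’s Python interface: define a material, a simple geometry, and particle settings, then run an eigenvalue calculation. It assumes OpenMC and its nuclear data library are installed, and it is a generic toy problem, not part of Trainer’s thermal-scattering work.

        import openmc

        # A single, uniform sphere of low-enriched uranium oxide (toy problem).
        fuel = openmc.Material(name="uo2")
        fuel.add_nuclide("U235", 0.03)
        fuel.add_nuclide("U238", 0.97)
        fuel.add_element("O", 2.0)
        fuel.set_density("g/cm3", 10.4)
        materials = openmc.Materials([fuel])

        sphere = openmc.Sphere(r=20.0, boundary_type="vacuum")
        cell = openmc.Cell(fill=fuel, region=-sphere)
        geometry = openmc.Geometry(openmc.Universe(cells=[cell]))

        settings = openmc.Settings()
        settings.batches = 50      # total batches of neutron histories
        settings.inactive = 10     # batches discarded while the source converges
        settings.particles = 1000  # neutrons per batch

        materials.export_to_xml()
        geometry.export_to_xml()
        settings.export_to_xml()
        openmc.run()  # writes k-effective and tallies to a statepoint file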

    A love of geometry and poetry

    Trainer discovered the coolness of science as a child. Her mother, who cares for indoor plants and runs multiple greenhouses, and her father, a blacksmith and farrier, who explored materials science through his craft, were self-taught inspirations.

    Trainer’s father urged his daughter to learn and pursue any topics that she found exciting and encouraged her to read poems from “Calvin and Hobbes” out loud when she struggled with a speech impediment in early childhood. Reading the same passages every day helped her memorize them. “The natural manifestation of that extended into [a love of] poetry,” Trainer says.

    A love of poetry, combined with Trainer’s propensity for fun, led her to compose an ode to pi as part of an MIT-sponsored event for alumni. “I was really only in it for the cupcake,” she laughs. (Participants received an indulgent treat).


    MIT Matters: A Love Poem to Pi

    Computations and nuclear science

    After being accepted at MIT, Trainer knew she wanted to study in a field that would take her skills as they were — “my math skills were pretty underdeveloped in the grand scheme of things,” she says. An open-house weekend at MIT, where she met with faculty from the NSE department, and the opportunity to contribute to a discipline working toward clean energy, cemented Trainer’s decision to join NSE.

    As a high schooler, Trainer won a scholarship to Embry-Riddle Aeronautical University to learn computer coding and knew computational physics might be more aligned with her interests. After she joined MIT as an undergraduate student in 2014, she realized that the CRPG, with its focus on coding and modeling, might be a good fit. Fortunately, a graduate student from Forget’s team welcomed Trainer’s enthusiasm for research even as an undergraduate first-year. She has stayed with the lab ever since. 

    Research internships at Los Alamos National Laboratory, the creators of NJOY, have furthered Trainer’s enthusiasm for modeling and computational physics. She met a Los Alamos scientist after he presented a talk at MIT, and it snowballed into a collaboration where she could work on parts of the NJOY code. “It became a really cool collaboration which led me into a deep dive into physics and data preparation techniques, which was just so fulfilling,” Trainer says. As for what’s next, Trainer was awarded the Rickover fellowship in nuclear engineering by the Department of Energy’s Naval Reactors Division and will join the program in Pittsburgh after she graduates.

    For many years, Trainer’s cats, Jacques and Monster, have been constant companions. “Neutrons, computers, and cats, that’s my personality,” she laughs. Work continues to fuel her passion. To borrow a favorite phrase from Spaceman Spiff, her favorite “Calvin” avatar, Trainer’s approach to research has invariably been: “Another day, another mind-boggling adventure.”

  • Taking a magnifying glass to data center operations

    When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner — and the team is looking for ways to improve.

    “We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC. 

    To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset open source to the computing community.

    Their goal is to empower computer scientists and data center operators to better understand avenues for data center optimization — an important task as processing needs continue to grow. They also see potential for leveraging AI in the data center itself, by using the data to develop models for predicting failure points, optimizing job scheduling, and improving energy efficiency. While cloud providers are actively working on optimizing their data centers, they do not often make their data or models available for the broader high-performance computing (HPC) community to leverage. The release of this dataset and associated code seeks to fill this space.

    “Data centers are changing. We have an explosion of hardware platforms, the types of workloads are evolving, and the types of people who are using data centers is changing,” says Vijay Gadepally, a senior researcher at the LLSC. “Until now, there hasn’t been a great way to analyze the impact to data centers. We see this research and dataset as a big step toward coming up with a principled approach to understanding how these variables interact with each other and then applying AI for insights and improvements.”

    Papers describing the dataset and potential applications have been accepted to a number of venues, including the IEEE International Symposium on High-Performance Computer Architecture, the IEEE International Parallel and Distributed Processing Symposium, the Annual Conference of the North American Chapter of the Association for Computational Linguistics, the IEEE High-Performance and Embedded Computing Conference, and International Conference for High Performance Computing, Networking, Storage and Analysis. 

    Workload classification

    TX-GAIA, which ranks among the world’s TOP500 supercomputers, combines traditional computing hardware (central processing units, or CPUs) with nearly 900 graphics processing unit (GPU) accelerators. These NVIDIA GPUs are specialized for deep learning, the class of AI that has given rise to speech recognition and computer vision.

    The dataset covers CPU, GPU, and memory usage by job; scheduling logs; and physical monitoring data. Compared to similar datasets, such as those from Google and Microsoft, the LLSC dataset offers “labeled data, a variety of known AI workloads, and more detailed time series data compared with prior datasets. To our knowledge, it’s one of the most comprehensive and fine-grained datasets available,” Gadepally says. 

    Notably, the team collected time-series data at an unprecedented level of detail: 100-millisecond intervals on every GPU and 10-second intervals on every CPU, as the machines processed more than 3,000 known deep-learning jobs. One of the first goals is to use this labeled dataset to characterize the workloads that different types of deep-learning jobs place on the system. This process would extract features that reveal differences in how the hardware processes natural language models versus image classification or materials design models, for example.   

    The team has now launched the MIT Datacenter Challenge to mobilize this research. The challenge invites researchers to use AI techniques to identify with 95 percent accuracy the type of job that was run, using the LLSC’s labeled time-series data as ground truth.
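    To give a sense of what a challenge entry might involve, here is a hedged sketch using scikit-learn, assuming each job’s monitoring time series has already been summarized into a feature vector (for example, mean and peak GPU utilization, memory use, and run length). The features, labels, and model choice are placeholders, not an official baseline or the released dataset’s actual schema.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)
        n_jobs = 1000
        X = rng.random((n_jobs, 4))           # stand-in per-job features
        y = rng.integers(0, 3, size=n_jobs)   # stand-in labels: 0=NLP, 1=vision, 2=materials

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, random_state=0)
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
        print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))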

    Such insights could enable data centers to better match a user’s job request with the hardware best suited for it, potentially conserving energy and improving system performance. Classifying workloads could also allow operators to quickly notice discrepancies resulting from hardware failures, inefficient data access patterns, or unauthorized usage.

    Too many choices

    Today, the LLSC offers tools that let users submit their job and select the processors they want to use, “but it’s a lot of guesswork on the part of users,” Samsi says. “Somebody might want to use the latest GPU, but maybe their computation doesn’t actually need it and they could get just as impressive results on CPUs, or lower-powered machines.”

    Professor Devesh Tiwari at Northeastern University is working with the LLSC team to develop techniques that can help users match their workloads to appropriate hardware. Tiwari explains that the emergence of different types of AI accelerators, GPUs, and CPUs has left users suffering from too many choices. Without the right tools to take advantage of this heterogeneity, they are missing out on the benefits: better performance, lower costs, and greater productivity.

    “We are fixing this very capability gap — making users more productive and helping users do science better and faster without worrying about managing heterogeneous hardware,” says Tiwari. “My PhD student, Baolin Li, is building new capabilities and tools to help HPC users leverage heterogeneity near-optimally without user intervention, using techniques grounded in Bayesian optimization and other learning-based optimization methods. But, this is just the beginning. We are looking into ways to introduce heterogeneity in our data centers in a principled approach to help our users achieve the maximum advantage of heterogeneity autonomously and cost-effectively.”

    Workload classification is the first of many problems to be posed through the Datacenter Challenge. Others include developing AI techniques to predict job failures, conserve energy, or create job scheduling approaches that improve data center cooling efficiencies.

    Energy conservation 

    To mobilize research into greener computing, the team is also planning to release an environmental dataset of TX-GAIA operations, containing rack temperature, power consumption, and other relevant data.

    According to the researchers, huge opportunities exist to improve the power efficiency of HPC systems being used for AI processing. As one example, recent work in the LLSC determined that simple hardware tuning, such as limiting the amount of power an individual GPU can draw, could reduce the energy cost of training an AI model by 20 percent, with only modest increases in computing time. “This reduction translates to approximately an entire week’s worth of household energy for a mere three-hour time increase,” Gadepally says.
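    Power capping of this kind is exposed through NVIDIA’s standard tooling. The sketch below shows one way to apply it from Python by shelling out to nvidia-smi (root privileges required); the 250-watt value is an arbitrary example, not the limit used in the LLSC experiments.

        import subprocess

        def set_gpu_power_cap(gpu_index: int, watts: int) -> None:
            """Set a per-GPU power limit; the driver throttles clocks to stay under it."""
            subprocess.run(
                ["nvidia-smi", "-i", str(gpu_index), "-pl", str(watts)],
                check=True,
            )

        if __name__ == "__main__":
            set_gpu_power_cap(0, 250)  # cap GPU 0 at 250 watts (example value)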

    They have also been developing techniques to predict model accuracy, so that users can quickly terminate experiments that are unlikely to yield meaningful results, saving energy. The Datacenter Challenge will share relevant data to enable researchers to explore other opportunities to conserve energy.

    The team expects that lessons learned from this research can be applied to the thousands of data centers operated by the U.S. Department of Defense. The U.S. Air Force is a sponsor of this work, which is being conducted under the USAF-MIT AI Accelerator.

    Other collaborators include researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Professor Charles Leiserson’s Supertech Research Group is investigating performance-enhancing techniques for parallel computing, and research scientist Neil Thompson is designing studies on ways to nudge data center users toward climate-friendly behavior.

    Samsi presented this work at the inaugural AI for Datacenter Optimization (ADOPT’22) workshop last spring as part of the IEEE International Parallel and Distributed Processing Symposium. The workshop officially introduced their Datacenter Challenge to the HPC community.

    “We hope this research will allow us and others who run supercomputing centers to be more responsive to user needs while also reducing the energy consumption at the center level,” Samsi says.

  • New J-WAFS-led project combats food insecurity

    Today the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT announced a new research project, supported by Community Jameel, to tackle one of the most urgent crises facing the planet: food insecurity. Approximately 276 million people worldwide are severely food insecure, and more than half a million face famine conditions.

    To better understand and analyze food security, this three-year research project will develop a comprehensive index assessing countries’ food security vulnerability, called the Jameel Index for Food Trade and Vulnerability. Global changes spurred by social and economic transitions, energy and environmental policy, regional geopolitics, conflict, and, of course, climate change can impact food demand and supply. The Jameel Index will measure countries’ dependence on global food trade and imports and how these regional-scale threats might affect the ability to trade food goods across diverse geographic regions. A main outcome of the research will be a model to project global food demand, supply balance, and bilateral trade under different likely future scenarios, with a focus on climate change. The work will help guide policymakers over the next 25 years, as the global population is expected to grow and the climate crisis is predicted to worsen.
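    One simple ingredient of such an index is a country’s dependence on imports, sketched below with made-up numbers; the Jameel Index itself will combine far richer data on trade partners, prices, and climate scenarios.

        def import_dependency(domestic_production: float, imports: float, exports: float) -> float:
            """Imports as a fraction of apparent domestic food supply (illustrative metric)."""
            supply = domestic_production + imports - exports
            return imports / supply if supply > 0 else float("nan")

        # Hypothetical country: produces 40 Mt of food, imports 25 Mt, exports 5 Mt.
        print(f"import dependency: {import_dependency(40.0, 25.0, 5.0):.0%}")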

    The work will be the foundational project for the J-WAFS-led Food and Climate Systems Transformation Alliance, or FACT Alliance. Formally launched at the COP26 climate conference last November, the FACT Alliance is a global network of 20 leading research institutions and stakeholder organizations that are driving research and innovation and informing better decision-making for healthy, resilient, equitable, and sustainable food systems in a rapidly changing climate. The initiative is co-directed by Greg Sixt, research manager for climate and food systems at J-WAFS, and Professor Kenneth Strzepek, climate, water, and food specialist at J-WAFS.

    The dire state of our food systems

    The need for this project is evidenced by the hundreds of millions of people around the globe currently experiencing food shortages. While several factors contribute to food insecurity, climate change is one of the most notable. Devastating extreme weather events are increasingly crippling crop and livestock production around the globe. From Southwest Asia to the Arabian Peninsula to the Horn of Africa, communities are migrating in search of food. In the United States, extreme heat and lack of rainfall in the Southwest have drastically lowered Lake Mead’s water levels, restricting water access and drying out farmlands. 

    Social, political, and economic issues also disrupt food systems. The effects of the Covid-19 pandemic, supply chain disruptions, and inflation continue to exacerbate food insecurity. Russia’s invasion of Ukraine is dramatically worsening the situation, disrupting agricultural exports from both Russia and Ukraine — two of the world’s largest producers of wheat, sunflower seed oil, and corn. Other countries like Lebanon, Sri Lanka, and Cuba are confronting food insecurity due to domestic financial crises.

    Few countries are immune to threats to food security from sudden disruptions in food production or trade. When an enormous container ship became lodged in the Suez Canal in March 2021, the vital international trade route was blocked for three months. The resulting delays in international shipping affected food supplies around the world. These situations demonstrate the importance of food trade in achieving food security: a disaster in one part of the world can drastically affect the availability of food in another. This puts into perspective just how interconnected the earth’s food systems are and how vulnerable they remain to external shocks. 

    An index to prepare for the future of food

    Despite the need for more secure food systems, significant knowledge gaps exist when it comes to understanding how different climate scenarios may affect both agricultural productivity and global food supply chains and security. The Global Trade Analysis Project database from Purdue University, and the current IMPACT modeling system from the International Food Policy Research Institute (IFPRI), enable assessments of existing conditions but cannot project or model changes in the future.

    In 2021, Strzepek and Sixt developed an initial Food Import Vulnerability Index (FIVI) as part of a regional assessment of the threat of climate change to food security in the Gulf Cooperation Council states and West Asia. FIVI is also limited in that it can only assess current trade conditions and climate change threats to food production. Additionally, FIVI is a national aggregate index and does not address issues of hunger, poverty, or equity that stem from regional variations within a country.

    “Current models are really good at showing global food trade flows, but we don’t have systems for looking at food trade between individual countries and how different food systems stressors such as climate change and conflict disrupt that trade,” says Greg Sixt of J-WAFS and the FACT Alliance. “This timely index will be a valuable tool for policymakers to understand the vulnerabilities to their food security from different shocks in the countries they import their food from. The project will also illustrate the stakeholder-guided, transdisciplinary approach that is central to the FACT Alliance,” Sixt adds.

    Phase 1 of the project will support a collaboration between four FACT Alliance members: MIT J-WAFS, Ethiopian Institute of Agricultural Research, IFPRI (which is also part of the CGIAR network), and the Martin School at the University of Oxford. An external partner, United Arab Emirates University, will also assist with the project work. This first phase will build on Strzepek and Sixt’s previous work on FIVI by developing a comprehensive Global Food System Modeling Framework that takes into consideration climate and global changes projected out to 2050, and assesses their impacts on domestic production, world market prices, and national balance of payments and bilateral trade. The framework will also utilize a mixed-modeling approach that includes the assessment of bilateral trade and macroeconomic data associated with varying agricultural productivity under the different climate and economic policy scenarios. In this way, consistent and harmonized projections of global food demand and supply balance, and bilateral trade under climate and global change can be achieved. 

    “Just like in the global response to Covid-19, using data and modeling are critical to understanding and tackling vulnerabilities in the global supply of food,” says George Richards, director of Community Jameel. “The Jameel Index for Food Trade and Vulnerability will help inform decision-making to manage shocks and long-term disruptions to food systems, with the aim of ensuring food security for all.”

    On a national level, the researchers will enrich the Jameel Index through country-level food security analyses of regions within countries and across various socioeconomic groups, allowing for a better understanding of specific impacts on key populations. The research will present vulnerability scores for a variety of food security metrics for 126 countries. Case studies of food security and food import vulnerability in Ethiopia and Sudan will help to refine the applicability of the Jameel Index with on-the-ground information. The case studies will use an IFPRI-developed tool called the Rural Investment and Policy Analysis model, which allows for analysis of urban and rural populations and different income groups. Local capacity building and stakeholder engagement will be critical to enable the use of the tools developed by this research for national-level planning in priority countries, and ultimately to inform policy.

    Phase 2 of the project will build on phase 1 and the lessons learned from the Ethiopian and Sudanese case studies. It will entail a number of deeper, country-level analyses to assess the role of food imports in future hunger, poverty, and equity across various regional and socioeconomic groups within the modeled countries. This work will link the geospatial national models with the global analysis. A scholarly paper is expected to be submitted to show findings from this work, and a website will be launched so that interested stakeholders and organizations can learn more.

  • Study finds natural sources of air pollution exceed air quality guidelines in many regions

    Alongside climate change, air pollution is one of the biggest environmental threats to human health. Tiny particles known as particulate matter or PM2.5 (named for their diameter of just 2.5 micrometers or less) are a particularly hazardous type of pollutant. These particles are produced from a variety of sources, including wildfires and the burning of fossil fuels, and can enter our bloodstream, travel deep into our lungs, and cause respiratory and cardiovascular damage. Exposure to particulate matter is responsible for millions of premature deaths globally every year.

    In response to the increasing body of evidence on the detrimental effects of PM2.5, the World Health Organization (WHO) recently updated its air quality guidelines, lowering its recommended annual PM2.5 exposure guideline by 50 percent, from 10 micrograms per cubic meter (μg/m3) to 5 μg/m3. These updated guidelines signify an aggressive attempt to promote the regulation and reduction of anthropogenic emissions in order to improve global air quality.

    A new study by researchers in the MIT Department of Civil and Environmental Engineering explores whether the updated air quality guideline of 5 μg/m3 is realistically attainable across different regions of the world, particularly if anthropogenic emissions are aggressively reduced.

    The first question the researchers wanted to investigate was to what degree moving to a no-fossil-fuel future would help different regions meet this new air quality guideline.

    “The answer we found is that eliminating fossil-fuel emissions would improve air quality around the world, but while this would help some regions come into compliance with the WHO guidelines, for many other regions high contributions from natural sources would impede their ability to meet that target,” says senior author Colette Heald, the Germeshausen Professor in the MIT departments of Civil and Environmental Engineering, and Earth, Atmospheric and Planetary Sciences. 

    The study by Heald, Professor Jesse Kroll, and graduate students Sidhant Pai and Therese Carter, published June 6 in the journal Environmental Science and Technology Letters, finds that over 90 percent of the global population is currently exposed to average annual concentrations that are higher than the recommended guideline. The authors go on to demonstrate that over 50 percent of the world’s population would still be exposed to PM2.5 concentrations that exceed the new air quality guidelines, even in the absence of all anthropogenic emissions.

    This is due to the large natural sources of particulate matter — dust, sea salt, and organics from vegetation — that still exist in the atmosphere when anthropogenic emissions are removed from the air. 

    “If you live in parts of India or northern Africa that are exposed to large amounts of fine dust, it can be challenging to reduce PM2.5 exposures below the new guideline,” says Sidhant Pai, co-lead author and graduate student. “This study challenges us to rethink the value of different emissions abatement controls across different regions and suggests the need for a new generation of air quality metrics that can enable targeted decision-making.”

    The researchers conducted a series of model simulations to explore the viability of achieving the updated PM2.5 guidelines worldwide under different emissions reduction scenarios, using 2019 as a representative baseline year. 

    Their model simulations used a suite of different anthropogenic sources that could be turned on and off to study the contribution of a particular source. For instance, the researchers conducted a simulation that turned off all human-based emissions in order to determine the amount of PM2.5 pollution that could be attributed to natural and fire sources. By analyzing the chemical composition of the PM2.5 aerosol in the atmosphere (e.g., dust, sulfate, and black carbon), the researchers were also able to get a more accurate understanding of the most important PM2.5 sources in a particular region. For example, elevated PM2.5 concentrations in the Amazon were shown to predominantly consist of carbon-containing aerosols from sources like deforestation fires. Conversely, nitrogen-containing aerosols were prominent in Northern Europe, with large contributions from vehicles and fertilizer usage. The two regions would thus require very different policies and methods to improve their air quality. 
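    The on/off logic the researchers describe can be pictured with a toy calculation like the one below, in which per-source contributions for a single hypothetical location are summed with selected sources removed. A real chemical-transport model is nonlinear and far more detailed; every number here is invented for illustration.

        # Made-up per-source PM2.5 contributions for one hypothetical grid cell (ug/m3).
        baseline = {"dust": 22.0, "sea_salt": 1.5, "fires": 6.0,
                    "fossil_fuel": 14.0, "agriculture": 4.5}

        def pm25_total(sources, switched_off=()):
            """Total PM2.5 with the named emission sources removed."""
            return sum(v for k, v in sources.items() if k not in switched_off)

        anthropogenic = ("fossil_fuel", "agriculture")
        with_all = pm25_total(baseline)
        without_human = pm25_total(baseline, switched_off=anthropogenic)

        print(f"all sources:            {with_all:.1f} ug/m3")
        print(f"no anthropogenic:       {without_human:.1f} ug/m3")
        print(f"attributable to humans: {with_all - without_human:.1f} ug/m3")
        print("exceeds the 5 ug/m3 guideline even without human sources:",
              without_human > 5.0)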

    “Analyzing particulate pollution across individual chemical species allows for mitigation and adaptation decisions that are specific to the region, as opposed to a one-size-fits-all approach, which can be challenging to execute without an understanding of the underlying importance of different sources,” says Pai. 

    When the WHO air quality guidelines were last updated in 2005, they had a significant impact on environmental policies. Scientists could look at an area that was not in compliance and suggest high-level solutions to improve the region’s air quality. But as the guidelines have tightened, globally applicable solutions to manage and improve air quality are no longer as evident.

    “Another benefit of speciating is that some of the particles have different toxicity properties that are correlated to health outcomes,” says Therese Carter, co-lead author and graduate student. “It’s an important area of research that this work can help motivate. Being able to separate out that piece of the puzzle can provide epidemiologists with more insights on the different toxicity levels and the impact of specific particles on human health.”

    The authors view these new findings as an opportunity to expand and iterate on the current guidelines.  

    “Routine and global measurements of the chemical composition of PM2.5 would give policymakers information on what interventions would most effectively improve air quality in any given location,” says Jesse Kroll, a professor in the MIT departments of Civil and Environmental Engineering and Chemical Engineering. “But it would also provide us with new insights into how different chemical species in PM2.5 affect human health.”

    “I hope that as we learn more about the health impacts of these different particles, our work and that of the broader atmospheric chemistry community can help inform strategies to reduce the pollutants that are most harmful to human health,” adds Heald.

  • How the universe got its magnetic field

    When we look out into space, all of the astrophysical objects that we see are embedded in magnetic fields. This is true not only in the neighborhood of stars and planets, but also in the deep space between galaxies and galactic clusters. These fields are weak — typically much weaker than those of a refrigerator magnet — but they are dynamically significant in the sense that they have profound effects on the dynamics of the universe. Despite decades of intense interest and research, the origin of these cosmic magnetic fields remains one of the most profound mysteries in cosmology.

    In previous research, scientists came to understand how turbulence, the churning motion common to fluids of all types, could amplify preexisting magnetic fields through the so-called dynamo process. But this remarkable discovery just pushed the mystery one step deeper. If a turbulent dynamo could only amplify an existing field, where did the “seed” magnetic field come from in the first place?

    We wouldn’t have a complete and self-consistent answer to the origin of astrophysical magnetic fields until we understood how the seed fields arose. New work carried out by MIT graduate student Muni Zhou, her advisor Nuno Loureiro, a professor of nuclear science and engineering at MIT, and colleagues at Princeton University and the University of Colorado at Boulder provides an answer that shows the basic processes that generate a field from a completely unmagnetized state to the point where it is strong enough for the dynamo mechanism to take over and amplify the field to the magnitudes that we observe.

    Magnetic fields are everywhere

    Naturally occurring magnetic fields are seen everywhere in the universe. They were first observed on Earth thousands of years ago, through their interaction with magnetized minerals like lodestone, and used for navigation long before people had any understanding of their nature or origin. Magnetism on the sun was discovered at the beginning of the 20th century by its effects on the spectrum of light that the sun emitted. Since then, more powerful telescopes looking deep into space found that the fields were ubiquitous.

    And while scientists had long learned how to make and use permanent magnets and electromagnets, which had all sorts of practical applications, the natural origins of magnetic fields in the universe remained a mystery. Recent work has provided part of the answer, but many aspects of this question are still under debate.

    Amplifying magnetic fields — the dynamo effect

    Scientists started thinking about this problem by considering the way that electric and magnetic fields were produced in the laboratory. When conductors, like copper wire, move in magnetic fields, electric fields are created. These fields, or voltages, can then drive electrical currents. This is how the electricity that we use every day is produced. Through this process of induction, large generators or “dynamos” convert mechanical energy into the electromagnetic energy that powers our homes and offices. A key feature of dynamos is that they need magnetic fields in order to work.

    But out in the universe, there are no obvious wires or big steel structures, so how do the fields arise? Progress on this problem began about a century ago as scientists pondered the source of the Earth’s magnetic field. By then, studies of the propagation of seismic waves showed that much of the Earth, below the cooler surface layers of the mantle, was liquid, and that there was a core composed of molten nickel and iron. Researchers theorized that the convective motion of this hot, electrically conductive liquid and the rotation of the Earth combined in some way to generate the Earth’s field.

    Eventually, models emerged that showed how the convective motion could amplify an existing field. This is an example of “self-organization” — a feature often seen in complex dynamical systems — where large-scale structures grow spontaneously from small-scale dynamics. But just like in a power station, you needed a magnetic field to make a magnetic field.

    A similar process is at work all over the universe. However, in stars and galaxies and in the space between them, the electrically conducting fluid is not molten metal, but plasma — a state of matter that exists at extremely high temperatures where the electrons are ripped away from their atoms. On Earth, plasmas can be seen in lightning or neon lights. In such a medium, the dynamo effect can amplify an existing magnetic field, provided it starts at some minimal level.

    Making the first magnetic fields

    Where does this seed field come from? That’s where the recent work of Zhou and her colleagues, published May 5 in PNAS, comes in. Zhou developed the underlying theory and performed numerical simulations on powerful supercomputers that show how the seed field can be produced and what fundamental processes are at work. An important aspect of the plasma that exists between stars and galaxies is that it is extraordinarily diffuse — typically about one particle per cubic meter. That is a very different situation from the interior of stars, where the particle density is about 30 orders of magnitude higher. The low densities mean that the particles in cosmological plasmas never collide, which has important effects on their behavior that had to be included in the model that these researchers were developing.   

    Calculations performed by the MIT researchers followed the dynamics in these plasmas, which developed from well-ordered waves but became turbulent as the amplitude grew and the interactions became strongly nonlinear. By including detailed effects of the plasma dynamics at small scales on macroscopic astrophysical processes, they demonstrated that the first magnetic fields can be spontaneously produced through generic large-scale motions as simple as sheared flows. Just like the terrestrial examples, mechanical energy was converted into magnetic energy.

    An important output of their computation was the amplitude of the expected spontaneously generated magnetic field. What this showed was that the field amplitude could rise from zero to a level at which the plasma is “magnetized” — that is, where the plasma dynamics are strongly affected by the presence of the field. At this point, the traditional dynamo mechanism can take over and raise the fields to the levels that are observed. Thus, their work represents a self-consistent model for the generation of magnetic fields at cosmological scales.
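
    One way to picture this handoff is as a two-stage growth curve: an exponentially amplified seed that flattens out once the plasma becomes magnetized. The toy model below uses arbitrary growth and saturation parameters purely for illustration; it is not the team's simulation.

```python
import math

# Toy two-stage model: the field grows exponentially at rate gamma from a tiny
# seed value and levels off near a saturation level b_sat, standing in for the
# point where the plasma becomes magnetized and the ordinary dynamo takes over.
# b_seed, b_sat, and gamma are arbitrary illustrative parameters.

def field_strength(t, b_seed=1e-20, b_sat=1e-6, gamma=1.0):
    """Logistic-style growth: exponential at first, flattening near b_sat."""
    x = b_seed * math.exp(gamma * t)
    return x * b_sat / (b_sat + x)

for t in (0, 10, 20, 30, 40):
    print(t, f"{field_strength(t):.2e}")
# The field climbs by more than a dozen orders of magnitude from the seed
# value, then flattens once it reaches the assumed saturation level.
```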

    Professor Ellen Zweibel of the University of Wisconsin at Madison notes that “despite decades of remarkable progress in cosmology, the origin of magnetic fields in the universe remains unknown. It is wonderful to see state-of-the-art plasma physics theory and numerical simulation brought to bear on this fundamental problem.”

    Zhou and co-workers will continue to refine their model and study the handoff from the generation of the seed field to the amplification phase of the dynamo. An important part of their future research will be to determine if the process can work on a time scale consistent with astronomical observations. To quote the researchers, “This work provides the first step in the building of a new paradigm for understanding magnetogenesis in the universe.”

    This work was funded by the National Science Foundation CAREER Award and the Future Investigators of NASA Earth and Space Science Technology (FINESST) grant.

  • in

    MIT J-WAFS announces 2022 seed grant recipients

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT has awarded 2022 J-WAFS seed grants to eight MIT principal investigators. The grants support innovative MIT research that has the potential for significant impact on water- and food-related challenges.

    The only program at MIT that is dedicated to water- and food-related research, J-WAFS has offered seed grant funding to MIT principal investigators and their teams for the past eight years. The grants provide up to $75,000 per year, overhead-free, for two years to support new, early-stage research in areas such as water and food security, safety, supply, and sustainability. Past projects have spanned many diverse disciplines, including engineering, science, technology, and business innovation, as well as social science and economics, architecture, and urban planning. 

    Seven new projects led by eight researchers will be supported this year. With funding going to four different MIT departments, the projects address a range of challenges by employing advanced materials, technology innovations, and new approaches to resource management. The new projects aim to remove harmful chemicals from water sources, develop drought monitoring systems for farmers, improve management of the shellfish industry, optimize water purification materials, and more.

    “Climate change, the pandemic, and most recently the war in Ukraine have exacerbated and put a spotlight on the serious challenges facing global water and food systems,” says J-WAFS director John H. Lienhard. He adds, “The proposals chosen this year have the potential to create measurable, real-world impacts in both the water and food sectors.”  

    The 2022 J-WAFS seed grant researchers and their projects are:

    Gang Chen, the Carl Richard Soderberg Professor of Power Engineering in MIT’s Department of Mechanical Engineering, is using sunlight to desalinate water. The use of solar energy for desalination is not a new idea, particularly in the form of solar thermal evaporation. However, the solar thermal evaporation process has an overall low efficiency because it relies on breaking hydrogen bonds among individual water molecules, which is very energy-intensive. Chen and his lab recently discovered a photomolecular effect that dramatically lowers the energy required for desalination.

    The bonds among water molecules inside a cluster in liquid water are mostly hydrogen bonds. Chen discovered that a photon with energy larger than the bonding energy between a water cluster and the rest of the liquid can cleave the cluster off at the water-air interface; the cluster then collides with air molecules and disintegrates into 60 or more individual water molecules. This effect has the potential to significantly boost clean water production via new desalination technology, with a photomolecular evaporation rate at least ten times that of pure solar thermal evaporation.
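
    For a sense of scale, a visible-light photon carries a few electron-volts, roughly an order of magnitude more than a single hydrogen bond in liquid water (about 0.2 eV). The quick calculation below uses standard constants and an assumed green-light wavelength; the numbers are illustrative and are not taken from Chen's measurements.

```python
# Photon energy E = h * c / wavelength, compared with a typical hydrogen-bond
# energy in liquid water (~0.2 eV). The wavelength is an assumed value for
# green sunlight; the comparison is illustrative only.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

wavelength_m = 520e-9  # assumed green-light wavelength
photon_energy_ev = H * C / wavelength_m / EV
hydrogen_bond_ev = 0.2

print(f"photon energy: {photon_energy_ev:.2f} eV")                           # ~2.4 eV
print(f"ratio: {photon_energy_ev / hydrogen_bond_ev:.0f}x a hydrogen bond")  # ~12x
```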

    John E. Fernández is the director of the MIT Environmental Solutions Initiative (ESI) and a professor in the Department of Architecture, and also affiliated with the Department of Urban Studies and Planning. Fernández is working with Scott D. Odell, a postdoc in the ESI, to better understand the impacts of mining and climate change in water-stressed regions of Chile.

    Chile is one of the world’s largest exporters of both agricultural and mineral products, yet little research has been done on the effects of climate change at the intersection of these two sectors. Fernández and Odell will explore how desalination is being deployed by the mining industry to relieve pressure on continental water supplies in Chile, and with what effect. They will also research how climate change and mining intersect to affect Andean glaciers and the agricultural communities that depend on them. The researchers intend for this work to inform policies to reduce the social and environmental harms of mining, desalination, and climate change.

    Ariel L. Furst is the Raymond (1921) and Helen St. Laurent Career Development Professor of Chemical Engineering at MIT. Her 2022 J-WAFS seed grant project seeks to effectively remove dangerous and long-lasting chemicals from water supplies and other environmental areas. 

    Perfluorooctanoic acid (PFOA), long used in the manufacture of Teflon, is a member of a group of chemicals known as per- and polyfluoroalkyl substances (PFAS). These human-made chemicals have been used extensively in consumer products like nonstick cooking pans. Exceptionally high levels of PFOA have been measured in water sources near manufacturing sites, which is problematic because these chemicals do not readily degrade in our bodies or in the environment. The majority of people have detectable levels of PFAS in their blood, which can lead to significant health issues including cancer, liver damage, and thyroid effects, as well as developmental effects in infants. Current remediation methods are limited to inefficient capture and are mostly confined to laboratory settings. Furst’s proposed method uses low-energy, scaffolded enzyme materials to move beyond simple capture and degrade these hazardous pollutants.

    Heather J. Kulik is an associate professor in the Department of Chemical Engineering at MIT who is developing novel computational strategies to identify optimal materials for purifying water. Water treatment requires selectively separating small ions from water. However, human-made, scalable materials for water purification and desalination are often unstable under typical operating conditions and lack the precisely shaped pores needed for good separation.

    Metal-organic frameworks (MOFs) are promising materials for water purification because their pores can be tailored to have precise shapes and chemical makeups for selective ion affinity. Yet few MOFs have been assessed for the properties relevant to water purification. Kulik plans to use virtual high-throughput screening, accelerated by machine learning models and molecular simulation, to speed the discovery of suitable MOFs. Specifically, Kulik will be looking for MOFs whose structures remain ultra-stable in water and do not break down at the temperatures they would encounter in use.
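
    Virtual high-throughput screening of this general kind typically trains a fast surrogate model on a small number of expensive simulations and uses it to rank a much larger candidate library. The sketch below is a generic illustration with hypothetical descriptors, synthetic data, and a made-up stability score; it does not represent Kulik's actual models or workflow.

```python
# Generic ML-accelerated screening loop: train a cheap surrogate on a handful
# of "simulated" stability scores, then use it to rank a much larger library
# so that only the top candidates go on to detailed molecular simulation.
# Descriptors, data, and the stability score are hypothetical placeholders.

import random
from sklearn.ensemble import RandomForestRegressor

random.seed(0)

# Pretend descriptors: (pore size in angstroms, metal electronegativity, linker length)
library = [(random.uniform(3, 15), random.uniform(1.0, 2.5), random.uniform(5, 20))
           for _ in range(1000)]

# Small training set with made-up "simulated" water-stability scores
train_X = library[:50]
train_y = [0.1 * x[0] - 0.5 * x[1] + 0.05 * x[2] + random.gauss(0, 0.1) for x in train_X]

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(train_X, train_y)

# Rank the remaining candidates by predicted stability
scores = model.predict(library[50:])
top_five = sorted(zip(scores, library[50:]), reverse=True)[:5]
for score, mof in top_five:
    print(f"predicted stability {score:.2f} for descriptors "
          f"{tuple(round(v, 1) for v in mof)}")
```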

    Gregory C. Rutledge is the Lammot du Pont Professor of Chemical Engineering at MIT. He is leading a project that explores how to better separate oils from water, an important problem given that oil-contaminated water generated by industry is a major source of environmental pollution.

    Emulsified oils are particularly challenging to remove from water due to their small droplet sizes and long settling times. Microfiltration is an attractive technology for the removal of emulsified oils, but its major drawback is fouling, or the accumulation of unwanted material on solid surfaces. Rutledge will examine the mechanism of separation behind liquid-infused membranes (LIMs), in which an infused liquid coats the surface and pores of the membrane, preventing fouling. Robustness of the LIM technology for removal of different types of emulsified oils and oil mixtures will be evaluated.

    César Terrer is an assistant professor in the Department of Civil and Environmental Engineering whose J-WAFS project seeks to answer the question: How can satellite images be used to provide a high-resolution drought monitoring system for farmers?

    Drought is recognized as one of the world’s most pressing issues, with direct impacts on vegetation that threaten water resources and food production globally. However, assessing and monitoring the impact of droughts on vegetation is extremely challenging as plants’ sensitivity to lack of water varies across species and ecosystems. Terrer will leverage a new generation of remote sensing satellites to provide high-resolution assessments of plant water stress at regional to global scales. The aim is to provide a plant drought monitoring product with farmland-specific services for water and socioeconomic management.
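
    Satellite-based plant-stress products are often built on band ratios such as the normalized difference vegetation index (NDVI), computed from red and near-infrared reflectance. The snippet below shows that generic calculation on synthetic pixel values; it is offered only as an illustration of the approach, not as Terrer's specific method.

```python
import numpy as np

# Generic vegetation-index calculation of the kind used in satellite drought
# monitoring: NDVI = (NIR - red) / (NIR + red). The reflectance arrays are
# synthetic placeholders standing in for pixels over two farm fields.

red = np.array([[0.10, 0.12], [0.30, 0.35]])  # red-band reflectance
nir = np.array([[0.60, 0.55], [0.32, 0.34]])  # near-infrared reflectance

ndvi = (nir - red) / (nir + red)
print(ndvi)
# Well-watered vegetation tends toward NDVI of roughly 0.6-0.8, while stressed
# or sparse vegetation falls toward zero, flagging fields that may need attention.
```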

    Michael Triantafyllou is the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. He is developing a web-based system for natural resources management that will deploy geospatial analysis, visualization, and reporting to better manage aquaculture data. By providing value to commercial fisheries’ permit holders, who employ significant numbers of people, and to recreational shellfish permit holders, who contribute to local economies, the project has attracted support from the Massachusetts Division of Marine Fisheries as well as a number of local resource management departments.

    Massachusetts shellfish fisheries generated roughly $339 million in 2020, accounting for 17 percent of U.S. East Coast production. Managing such a large industry is a time-consuming process, given that there are thousands of acres of coastal areas grouped into more than 800 classified shellfish growing areas. Extreme climate events present additional challenges. Triantafyllou’s research will help efforts to enforce environmental regulations, support habitat restoration, and prevent shellfish-related food safety issues.