More stories

  • Simulating neutron behavior in nuclear reactors

    Amelia Trainer applied to MIT because she lost a bet.

    As part of what the fourth-year nuclear science and engineering (NSE) doctoral student labels her “teenage rebellious phase,” Trainer was quite convinced she would just be wasting the application fee were she to submit an application. She wasn’t even “super sure” she wanted to go to college. But a high-school friend was convinced Trainer would get into a “top school” if she only applied. A bet followed: If Trainer lost, she would have to apply to MIT. Trainer lost — and is glad she did.

    Growing up in Daytona Beach, Florida, good grades were Trainer’s thing. Seeing friends participate in interschool math competitions, Trainer decided she would tag along and soon found she loved them. She remembers being adept at reading the room: If teams were especially struggling over a problem, Trainer figured the answer had to be something easy, like zero or one. “The hardest problems would usually have the most goofball answers,” she laughs.

    Simulating neutron behavior

    As a doctoral student, hard problems in math, specifically computational reactor physics, continue to be Trainer’s forte.

    Her research, under the guidance of Professor Benoit Forget in MIT NSE’s Computational Reactor Physics Group (CRPG), focuses on modeling complicated neutron behavior in reactors. Simulation helps forecast the behavior of reactors before millions of dollars are sunk into developing a potentially uneconomical unit. Using simulations, Trainer can see “where the neutrons are going, how much heat is being produced, and how much power the reactor can generate.” Her work helps form the foundation for the next generation of nuclear power plants.

    To simulate neutron behavior inside a nuclear reactor, you first need to know how neutrons will interact with the various materials in the system. These neutrons can have wildly different energies, making them susceptible to different physical phenomena. Throughout her graduate studies, Trainer has been primarily interested in the physics of slow-moving neutrons and their scattering behavior.

    When a slow neutron scatters off of a material, it can induce or cancel out molecular vibrations between the material’s atoms. The effect that material vibrations can have on neutron energies, and thereby on reactor behavior, has been heavily approximated over the years. Trainer is primarily interested in chipping away at these approximations by creating scattering data for materials that have historically been misrepresented and by exploring new techniques for preparing slow-neutron scattering data.

    Trainer remembers waiting for a simulation to complete in the early days of the Covid-19 pandemic, when she discovered a way to predict neutron behavior with limited input data. Traditionally, “people have to store large tables of what neutrons will do under specific circumstances,” she says. “I’m really happy about it because it’s this really cool method of sampling what your neutron does from very little information,” Trainer says.

    As part of her research, Trainer often works closely with two software packages: OpenMC and NJOY. OpenMC is a Monte Carlo neutron transport simulation code that was developed in the CRPG and is used to simulate neutron behavior in reactor systems. NJOY is a nuclear data processing tool, and is used to create, augment, and prepare material data that is fed into tools like OpenMC. By editing both these codes to her specifications, Trainer is able to observe the effect that “upstream” material data has on the “downstream” reactor calculations. Through this, she hopes to identify additional problems: approximations that could lead to a noticeable misrepresentation of the physics.
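
    As a rough illustration for readers unfamiliar with these tools, the sketch below shows how a toy system might be described through OpenMC’s Python API; the material composition, geometry, and Monte Carlo settings are invented placeholders, not models from Trainer’s research.

    ```python
    import openmc

    # Illustrative fuel material (placeholder composition, not a real reactor design)
    fuel = openmc.Material(name="uo2_fuel")
    fuel.add_nuclide("U235", 0.04)
    fuel.add_nuclide("U238", 0.96)
    fuel.add_element("O", 2.0)
    fuel.set_density("g/cm3", 10.4)

    # A bare sphere of fuel as the entire geometry
    sphere = openmc.Sphere(r=10.0, boundary_type="vacuum")
    cell = openmc.Cell(fill=fuel, region=-sphere)
    geometry = openmc.Geometry(openmc.Universe(cells=[cell]))

    # Monte Carlo settings: how many neutron histories to follow
    settings = openmc.Settings()
    settings.batches = 50
    settings.inactive = 10
    settings.particles = 10_000

    # Write the input files and run (requires OpenMC plus nuclear data libraries)
    openmc.Materials([fuel]).export_to_xml()
    geometry.export_to_xml()
    settings.export_to_xml()
    openmc.run()
    ```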

    A love of geometry and poetry

    Trainer discovered the coolness of science as a child. Her mother, who cares for indoor plants and runs multiple greenhouses, and her father, a blacksmith and farrier who explored materials science through his craft, were self-taught inspirations.

    Trainer’s father urged his daughter to learn and pursue any topics that she found exciting and encouraged her to read poems from “Calvin and Hobbes” out loud when she struggled with a speech impediment in early childhood. Reading the same passages every day helped her memorize them. “The natural manifestation of that extended into [a love of] poetry,” Trainer says.

    A love of poetry, combined with Trainer’s propensity for fun, led her to compose an ode to pi as part of an MIT-sponsored event for alumni. “I was really only in it for the cupcake,” she laughs. (Participants received an indulgent treat).

    Video: “MIT Matters: A Love Poem to Pi”

    Computations and nuclear science

    After being accepted at MIT, Trainer knew she wanted to study in a field that would meet her skills where they were — “my math skills were pretty underdeveloped in the grand scheme of things,” she says. An open-house weekend at MIT, where she met with faculty from the NSE department, and the opportunity to contribute to a discipline working toward clean energy cemented Trainer’s decision to join NSE.

    As a high schooler, Trainer won a scholarship to Embry-Riddle Aeronautical University to learn computer coding and knew computational physics might be more aligned with her interests. After she joined MIT as an undergraduate student in 2014, she realized that the CRPG, with its focus on coding and modeling, might be a good fit. Fortunately, a graduate student from Forget’s team welcomed Trainer’s enthusiasm for research even as an undergraduate first-year. She has stayed with the lab ever since. 

    Research internships at Los Alamos National Laboratory, where NJOY was created, have furthered Trainer’s enthusiasm for modeling and computational physics. She met a Los Alamos scientist after he presented a talk at MIT, and the connection snowballed into a collaboration in which she could work on parts of the NJOY code. “It became a really cool collaboration which led me into a deep dive into physics and data preparation techniques, which was just so fulfilling,” Trainer says. As for what’s next, Trainer was awarded the Rickover fellowship in nuclear engineering by the Department of Energy’s Naval Reactors Division and will join the program in Pittsburgh after she graduates.

    For many years, Trainer’s cats, Jacques and Monster, have been constant companions. “Neutrons, computers, and cats, that’s my personality,” she laughs. Work continues to fuel her passion. To borrow a favorite phrase from Spaceman Spiff, Trainer’s favorite “Calvin” avatar, her approach to research has invariably been: “Another day, another mind-boggling adventure.”

  • Taking a magnifying glass to data center operations

    When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner — and the team is looking for ways to improve.

    “We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC. 

    To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset to the computing community as open source.

    Their goal is to empower computer scientists and data center operators to better understand avenues for data center optimization — an important task as processing needs continue to grow. They also see potential for leveraging AI in the data center itself, by using the data to develop models for predicting failure points, optimizing job scheduling, and improving energy efficiency. While cloud providers are actively working on optimizing their data centers, they do not often make their data or models available for the broader high-performance computing (HPC) community to leverage. The release of this dataset and associated code seeks to fill this gap.

    “Data centers are changing. We have an explosion of hardware platforms, the types of workloads are evolving, and the types of people who are using data centers is changing,” says Vijay Gadepally, a senior researcher at the LLSC. “Until now, there hasn’t been a great way to analyze the impact to data centers. We see this research and dataset as a big step toward coming up with a principled approach to understanding how these variables interact with each other and then applying AI for insights and improvements.”

    Papers describing the dataset and potential applications have been accepted to a number of venues, including the IEEE International Symposium on High-Performance Computer Architecture, the IEEE International Parallel and Distributed Processing Symposium, the Annual Conference of the North American Chapter of the Association for Computational Linguistics, the IEEE High-Performance and Embedded Computing Conference, and International Conference for High Performance Computing, Networking, Storage and Analysis. 

    Workload classification

    TX-GAIA, which ranks among the world’s TOP500 supercomputers, combines traditional computing hardware (central processing units, or CPUs) with nearly 900 graphics processing unit (GPU) accelerators. These NVIDIA GPUs are specialized for deep learning, the class of AI that has given rise to speech recognition and computer vision.

    The dataset covers CPU, GPU, and memory usage by job; scheduling logs; and physical monitoring data. Compared to similar datasets, such as those from Google and Microsoft, the LLSC dataset offers “labeled data, a variety of known AI workloads, and more detailed time series data compared with prior datasets. To our knowledge, it’s one of the most comprehensive and fine-grained datasets available,” Gadepally says. 

    Notably, the team collected time-series data at an unprecedented level of detail: 100-millisecond intervals on every GPU and 10-second intervals on every CPU, as the machines processed more than 3,000 known deep-learning jobs. One of the first goals is to use this labeled dataset to characterize the workloads that different types of deep-learning jobs place on the system. This process would extract features that reveal differences in how the hardware processes natural language models versus image classification or materials design models, for example.   

    The team has now launched the MIT Datacenter Challenge to mobilize this research. The challenge invites researchers to use AI techniques to identify with 95 percent accuracy the type of job that was run, using their labeled time-series data as ground truth.
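
    To make the task concrete, here is a minimal, hypothetical baseline of the kind the challenge invites: summarize each job’s GPU-utilization trace into a few features and fit an off-the-shelf classifier. The synthetic traces below merely stand in for the real labeled time series in the LLSC dataset.

    ```python
    # Hypothetical workload-classification baseline on synthetic stand-in data.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    job_types = ["language_model", "image_classification", "materials_design"]

    def synthetic_trace(job_type, n_samples=600):
        """Fake 100-ms GPU-utilization samples with a different profile per job type."""
        base = {"language_model": 0.9, "image_classification": 0.7, "materials_design": 0.5}[job_type]
        return np.clip(base + 0.1 * rng.standard_normal(n_samples), 0.0, 1.0)

    def features(trace):
        """Collapse one job's time series into a few summary statistics."""
        return [trace.mean(), trace.std(), np.percentile(trace, 95), np.median(np.abs(np.diff(trace)))]

    X, y = [], []
    for label, job_type in enumerate(job_types):
        for _ in range(200):
            X.append(features(synthetic_trace(job_type)))
            y.append(label)

    X_train, X_test, y_train, y_test = train_test_split(np.array(X), np.array(y), random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
    ```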

    Such insights could enable data centers to better match a user’s job request with the hardware best suited for it, potentially conserving energy and improving system performance. Classifying workloads could also allow operators to quickly notice discrepancies resulting from hardware failures, inefficient data access patterns, or unauthorized usage.

    Too many choices

    Today, the LLSC offers tools that let users submit their job and select the processors they want to use, “but it’s a lot of guesswork on the part of users,” Samsi says. “Somebody might want to use the latest GPU, but maybe their computation doesn’t actually need it and they could get just as impressive results on CPUs, or lower-powered machines.”

    Professor Devesh Tiwari at Northeastern University is working with the LLSC team to develop techniques that can help users match their workloads to appropriate hardware. Tiwari explains that the emergence of different types of AI accelerators, GPUs, and CPUs has left users suffering from too many choices. Without the right tools to take advantage of this heterogeneity, they are missing out on the benefits: better performance, lower costs, and greater productivity.

    “We are fixing this very capability gap — making users more productive and helping users do science better and faster without worrying about managing heterogeneous hardware,” says Tiwari. “My PhD student, Baolin Li, is building new capabilities and tools to help HPC users leverage heterogeneity near-optimally without user intervention, using techniques grounded in Bayesian optimization and other learning-based optimization methods. But, this is just the beginning. We are looking into ways to introduce heterogeneity in our data centers in a principled approach to help our users achieve the maximum advantage of heterogeneity autonomously and cost-effectively.”

    Workload classification is the first of many problems to be posed through the Datacenter Challenge. Others include developing AI techniques to predict job failures, conserve energy, or create job scheduling approaches that improve data center cooling efficiencies.

    Energy conservation 

    To mobilize research into greener computing, the team is also planning to release an environmental dataset of TX-GAIA operations, containing rack temperature, power consumption, and other relevant data.

    According to the researchers, huge opportunities exist to improve the power efficiency of HPC systems being used for AI processing. As one example, recent work in the LLSC determined that simple hardware tuning, such as limiting the amount of power an individual GPU can draw, could reduce the energy cost of training an AI model by 20 percent, with only modest increases in computing time. “This reduction translates to approximately an entire week’s worth of household energy for a mere three-hour time increase,” Gadepally says.
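
    As an illustration of the sort of hardware tuning described above, the snippet below reads and lowers one GPU’s power cap through NVIDIA’s NVML Python bindings; the 250-watt value is an arbitrary placeholder rather than the setting used in the LLSC work, and changing the limit typically requires administrator privileges.

    ```python
    # Hypothetical sketch: query and lower one GPU's power cap via the pynvml bindings.
    import pynvml

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        current_mw = pynvml.nvmlDeviceGetPowerManagementLimit(handle)  # milliwatts
        print(f"current power cap: {current_mw / 1000:.0f} W")

        try:
            pynvml.nvmlDeviceSetPowerManagementLimit(handle, 250_000)  # placeholder 250 W cap
            print("power cap lowered to 250 W")
        except pynvml.NVMLError as err:
            print(f"could not change the power cap: {err}")
    finally:
        pynvml.nvmlShutdown()
    ```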

    They have also been developing techniques to predict model accuracy, so that users can quickly terminate experiments that are unlikely to yield meaningful results, saving energy. The Datacenter Challenge will share relevant data to enable researchers to explore other opportunities to conserve energy.

    The team expects that lessons learned from this research can be applied to the thousands of data centers operated by the U.S. Department of Defense. The U.S. Air Force is a sponsor of this work, which is being conducted under the USAF-MIT AI Accelerator.

    Other collaborators include researchers at MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Professor Charles Leiserson’s Supertech Research Group is investigating performance-enhancing techniques for parallel computing, and research scientist Neil Thompson is designing studies on ways to nudge data center users toward climate-friendly behavior.

    Samsi presented this work at the inaugural AI for Datacenter Optimization (ADOPT’22) workshop last spring as part of the IEEE International Parallel and Distributed Processing Symposium. The workshop officially introduced their Datacenter Challenge to the HPC community.

    “We hope this research will allow us and others who run supercomputing centers to be more responsive to user needs while also reducing the energy consumption at the center level,” Samsi says.

  • New J-WAFS-led project combats food insecurity

    Today the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT announced a new research project, supported by Community Jameel, to tackle one of the most urgent crises facing the planet: food insecurity. Approximately 276 million people worldwide are severely food insecure, and more than half a million face famine conditions.

    To better understand and analyze food security, this three-year research project will develop a comprehensive index assessing countries’ food security vulnerability, called the Jameel Index for Food Trade and Vulnerability. Global changes spurred by social and economic transitions, energy and environmental policy, regional geopolitics, conflict, and of course climate change can impact food demand and supply. The Jameel Index will measure countries’ dependence on global food trade and imports and how these regional-scale threats might affect the ability to trade food goods across diverse geographic regions. A main outcome of the research will be a model to project global food demand, supply balance, and bilateral trade under different likely future scenarios, with a focus on climate change. The work will help guide policymakers over the next 25 years, as the global population is expected to grow and the climate crisis is predicted to worsen.

    The work will be the foundational project for the J-WAFS-led Food and Climate Systems Transformation Alliance, or FACT Alliance. Formally launched at the COP26 climate conference last November, the FACT Alliance is a global network of 20 leading research institutions and stakeholder organizations that are driving research and innovation and informing better decision-making for healthy, resilient, equitable, and sustainable food systems in a rapidly changing climate. The initiative is co-directed by Greg Sixt, research manager for climate and food systems at J-WAFS, and Professor Kenneth Strzepek, climate, water, and food specialist at J-WAFS.

    The dire state of our food systems

    The need for this project is evidenced by the hundreds of millions of people around the globe currently experiencing food shortages. While several factors contribute to food insecurity, climate change is one of the most notable. Devastating extreme weather events are increasingly crippling crop and livestock production around the globe. From Southwest Asia to the Arabian Peninsula to the Horn of Africa, communities are migrating in search of food. In the United States, extreme heat and lack of rainfall in the Southwest have drastically lowered Lake Mead’s water levels, restricting water access and drying out farmlands. 

    Social, political, and economic issues also disrupt food systems. The effects of the Covid-19 pandemic, supply chain disruptions, and inflation continue to exacerbate food insecurity. Russia’s invasion of Ukraine is dramatically worsening the situation, disrupting agricultural exports from both Russia and Ukraine — two of the world’s largest producers of wheat, sunflower seed oil, and corn. Other countries like Lebanon, Sri Lanka, and Cuba are confronting food insecurity due to domestic financial crises.

    Few countries are immune to threats to food security from sudden disruptions in food production or trade. When an enormous container ship became lodged in the Suez Canal in March 2021, the vital international trade route was blocked for nearly a week, and the resulting delays in international shipping affected food supplies around the world. These situations demonstrate the importance of food trade in achieving food security: a disaster in one part of the world can drastically affect the availability of food in another. This puts into perspective just how interconnected the earth’s food systems are and how vulnerable they remain to external shocks.

    An index to prepare for the future of food

    Despite the need for more secure food systems, significant knowledge gaps exist when it comes to understanding how different climate scenarios may affect both agricultural productivity and global food supply chains and security. The Global Trade Analysis Project database from Purdue University and the current IMPACT modeling system from the International Food Policy Research Institute (IFPRI) enable assessments of existing conditions but cannot project or model changes in the future.

    In 2021, Strzepek and Sixt developed an initial Food Import Vulnerability Index (FIVI) as part of a regional assessment of the threat of climate change to food security in the Gulf Cooperation Council states and West Asia. FIVI is also limited in that it can only assess current trade conditions and climate change threats to food production. Additionally, FIVI is a national aggregate index and does not address issues of hunger, poverty, or equity that stem from regional variations within a country.

    “Current models are really good at showing global food trade flows, but we don’t have systems for looking at food trade between individual countries and how different food systems stressors such as climate change and conflict disrupt that trade,” says Greg Sixt of J-WAFS and the FACT Alliance. “This timely index will be a valuable tool for policymakers to understand the vulnerabilities to their food security from different shocks in the countries they import their food from. The project will also illustrate the stakeholder-guided, transdisciplinary approach that is central to the FACT Alliance,” Sixt adds.

    Phase 1 of the project will support a collaboration between four FACT Alliance members: MIT J-WAFS, Ethiopian Institute of Agricultural Research, IFPRI (which is also part of the CGIAR network), and the Martin School at the University of Oxford. An external partner, United Arab Emirates University, will also assist with the project work. This first phase will build on Strzepek and Sixt’s previous work on FIVI by developing a comprehensive Global Food System Modeling Framework that takes into consideration climate and global changes projected out to 2050, and assesses their impacts on domestic production, world market prices, and national balance of payments and bilateral trade. The framework will also utilize a mixed-modeling approach that includes the assessment of bilateral trade and macroeconomic data associated with varying agricultural productivity under the different climate and economic policy scenarios. In this way, consistent and harmonized projections of global food demand and supply balance, and bilateral trade under climate and global change can be achieved. 

    “Just like in the global response to Covid-19, using data and modeling are critical to understanding and tackling vulnerabilities in the global supply of food,” says George Richards, director of Community Jameel. “The Jameel Index for Food Trade and Vulnerability will help inform decision-making to manage shocks and long-term disruptions to food systems, with the aim of ensuring food security for all.”

    On a national level, the researchers will enrich the Jameel Index through country-level food security analyses of regions within countries and across various socioeconomic groups, allowing for a better understanding of specific impacts on key populations. The research will present vulnerability scores for a variety of food security metrics for 126 countries. Case studies of food security and food import vulnerability in Ethiopia and Sudan will help to refine the applicability of the Jameel Index with on-the-ground information. The case studies will use an IFPRI-developed tool called the Rural Investment and Policy Analysis model, which allows for analysis of urban and rural populations and different income groups. Local capacity building and stakeholder engagement will be critical to enable the use of the tools developed by this research for national-level planning in priority countries, and ultimately to inform policy.

    Phase 2 of the project will build on phase 1 and the lessons learned from the Ethiopian and Sudanese case studies. It will entail a number of deeper, country-level analyses to assess the role of food imports on future hunger, poverty, and equity across various regional and socioeconomic groups within the modeled countries. This work will link the geospatial national models with the global analysis. A scholarly paper is expected to be submitted to show findings from this work, and a website will be launched so that interested stakeholders and organizations can learn more.

  • Study finds natural sources of air pollution exceed air quality guidelines in many regions

    Alongside climate change, air pollution is one of the biggest environmental threats to human health. Tiny particles known as particulate matter or PM2.5 (named for their diameter of just 2.5 micrometers or less) are a particularly hazardous type of pollutant. These particles are produced from a variety of sources, including wildfires and the burning of fossil fuels, and can enter our bloodstream, travel deep into our lungs, and cause respiratory and cardiovascular damage. Exposure to particulate matter is responsible for millions of premature deaths globally every year.

    In response to the increasing body of evidence on the detrimental effects of PM2.5, the World Health Organization (WHO) recently updated its air quality guidelines, lowering its recommended annual PM2.5 exposure guideline by 50 percent, from 10 micrograms per cubic meter (μg/m³) to 5 μg/m³. These updated guidelines signify an aggressive attempt to promote the regulation and reduction of anthropogenic emissions in order to improve global air quality.

    A new study by researchers in the MIT Department of Civil and Environmental Engineering explores whether the updated air quality guideline of 5 μg/m³ is realistically attainable across different regions of the world, particularly if anthropogenic emissions are aggressively reduced.

    The first question the researchers wanted to investigate was to what degree moving to a no-fossil-fuel future would help different regions meet this new air quality guideline.

    “The answer we found is that eliminating fossil-fuel emissions would improve air quality around the world, but while this would help some regions come into compliance with the WHO guidelines, for many other regions high contributions from natural sources would impede their ability to meet that target,” says senior author Colette Heald, the Germeshausen Professor in the MIT departments of Civil and Environmental Engineering, and Earth, Atmospheric and Planetary Sciences. 

    The study by Heald, Professor Jesse Kroll, and graduate students Sidhant Pai and Therese Carter, published June 6 in the journal Environmental Science and Technology Letters, finds that over 90 percent of the global population is currently exposed to average annual concentrations that are higher than the recommended guideline. The authors go on to demonstrate that over 50 percent of the world’s population would still be exposed to PM2.5 concentrations that exceed the new air quality guidelines, even in the absence of all anthropogenic emissions.

    This is due to the large natural sources of particulate matter — dust, sea salt, and organics from vegetation — that still exist in the atmosphere when anthropogenic emissions are removed from the air. 

    “If you live in parts of India or northern Africa that are exposed to large amounts of fine dust, it can be challenging to reduce PM2.5 exposures below the new guideline,” says Sidhant Pai, co-lead author and graduate student. “This study challenges us to rethink the value of different emissions abatement controls across different regions and suggests the need for a new generation of air quality metrics that can enable targeted decision-making.”

    The researchers conducted a series of model simulations to explore the viability of achieving the updated PM2.5 guidelines worldwide under different emissions reduction scenarios, using 2019 as a representative baseline year. 

    Their model simulations used a suite of different anthropogenic sources that could be turned on and off to study the contribution of a particular source. For instance, the researchers conducted a simulation that turned off all human-based emissions in order to determine the amount of PM2.5 pollution that could be attributed to natural and fire sources. By analyzing the chemical composition of the PM2.5 aerosol in the atmosphere (e.g., dust, sulfate, and black carbon), the researchers were also able to get a more accurate understanding of the most important PM2.5 sources in a particular region. For example, elevated PM2.5 concentrations in the Amazon were shown to predominantly consist of carbon-containing aerosols from sources like deforestation fires. Conversely, nitrogen-containing aerosols were prominent in Northern Europe, with large contributions from vehicles and fertilizer usage. The two regions would thus require very different policies and methods to improve their air quality. 
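
    The bookkeeping behind this on/off approach can be pictured with a toy calculation like the one below; the per-source values are invented placeholders standing in for full chemical-transport-model runs, which in reality are not simply additive because of nonlinear atmospheric chemistry.

    ```python
    # Toy illustration (invented numbers, micrograms per cubic meter): estimate a source's
    # share of PM2.5 by differencing a run with that source switched off against a baseline.
    SOURCES = {
        "fossil_fuel_combustion": 6.2,
        "fires": 3.1,
        "dust": 4.5,
        "sea_salt": 0.8,
        "vegetation_organics": 2.0,
    }

    def simulated_pm25(active_sources):
        """Stand-in for a full simulation run with only the listed sources enabled."""
        return sum(SOURCES[name] for name in active_sources)

    baseline = simulated_pm25(SOURCES)
    natural_and_fires = simulated_pm25(s for s in SOURCES if s != "fossil_fuel_combustion")
    print(f"total PM2.5: {baseline:.1f}")
    print(f"without anthropogenic emissions: {natural_and_fires:.1f}")
    print(f"anthropogenic contribution: {baseline - natural_and_fires:.1f}")
    ```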

    “Analyzing particulate pollution across individual chemical species allows for mitigation and adaptation decisions that are specific to the region, as opposed to a one-size-fits-all approach, which can be challenging to execute without an understanding of the underlying importance of different sources,” says Pai. 

    When the WHO air quality guidelines were last updated in 2005, they had a significant impact on environmental policies. Scientists could look at an area that was not in compliance and suggest high-level solutions to improve the region’s air quality. But as the guidelines have tightened, globally-applicable solutions to manage and improve air quality are no longer as evident. 

    “Another benefit of speciating is that some of the particles have different toxicity properties that are correlated to health outcomes,” says Therese Carter, co-lead author and graduate student. “It’s an important area of research that this work can help motivate. Being able to separate out that piece of the puzzle can provide epidemiologists with more insights on the different toxicity levels and the impact of specific particles on human health.”

    The authors view these new findings as an opportunity to expand and iterate on the current guidelines.  

    “Routine and global measurements of the chemical composition of PM2.5 would give policymakers information on what interventions would most effectively improve air quality in any given location,” says Jesse Kroll, a professor in the MIT departments of Civil and Environmental Engineering and Chemical Engineering. “But it would also provide us with new insights into how different chemical species in PM2.5 affect human health.”

    “I hope that as we learn more about the health impacts of these different particles, our work and that of the broader atmospheric chemistry community can help inform strategies to reduce the pollutants that are most harmful to human health,” adds Heald.

  • How the universe got its magnetic field

    When we look out into space, all of the astrophysical objects that we see are embedded in magnetic fields. This is true not only in the neighborhood of stars and planets, but also in the deep space between galaxies and galactic clusters. These fields are weak — typically much weaker than those of a refrigerator magnet — but they are dynamically significant in the sense that they have profound effects on the dynamics of the universe. Despite decades of intense interest and research, the origin of these cosmic magnetic fields remains one of the most profound mysteries in cosmology.

    In previous research, scientists came to understand how turbulence, the churning motion common to fluids of all types, could amplify preexisting magnetic fields through the so-called dynamo process. But this remarkable discovery just pushed the mystery one step deeper. If a turbulent dynamo could only amplify an existing field, where did the “seed” magnetic field come from in the first place?

    We wouldn’t have a complete and self-consistent answer to the origin of astrophysical magnetic fields until we understood how the seed fields arose. New work carried out by MIT graduate student Muni Zhou, her advisor Nuno Loureiro, a professor of nuclear science and engineering at MIT, and colleagues at Princeton University and the University of Colorado at Boulder provides an answer that shows the basic processes that generate a field from a completely unmagnetized state to the point where it is strong enough for the dynamo mechanism to take over and amplify the field to the magnitudes that we observe.

    Magnetic fields are everywhere

    Naturally occurring magnetic fields are seen everywhere in the universe. They were first observed on Earth thousands of years ago, through their interaction with magnetized minerals like lodestone, and used for navigation long before people had any understanding of their nature or origin. Magnetism on the sun was discovered at the beginning of the 20th century by its effects on the spectrum of light that the sun emitted. Since then, more powerful telescopes looking deep into space have found that such fields are ubiquitous.

    And while scientists have long known how to make and use permanent magnets and electromagnets, which have all sorts of practical applications, the natural origins of magnetic fields in the universe remained a mystery. Recent work has provided part of the answer, but many aspects of this question are still under debate.

    Amplifying magnetic fields — the dynamo effect

    Scientists started thinking about this problem by considering the way that electric and magnetic fields were produced in the laboratory. When conductors, like copper wire, move in magnetic fields, electric fields are created. These fields, or voltages, can then drive electrical currents. This is how the electricity that we use every day is produced. Through this process of induction, large generators or “dynamos” convert mechanical energy into the electromagnetic energy that powers our homes and offices. A key feature of dynamos is that they need magnetic fields in order to work.

    But out in the universe, there are no obvious wires or big steel structures, so how do the fields arise? Progress on this problem began about a century ago as scientists pondered the source of the Earth’s magnetic field. By then, studies of the propagation of seismic waves showed that much of the Earth, below the cooler surface layers of the mantle, was liquid, and that there was a core composed of molten nickel and iron. Researchers theorized that the convective motion of this hot, electrically conductive liquid and the rotation of the Earth combined in some way to generate the Earth’s field.

    Eventually, models emerged that showed how the convective motion could amplify an existing field. This is an example of “self-organization” — a feature often seen in complex dynamical systems — where large-scale structures grow spontaneously from small-scale dynamics. But just like in a power station, you needed a magnetic field to make a magnetic field.

    A similar process is at work all over the universe. However, in stars and galaxies and in the space between them, the electrically conducting fluid is not molten metal, but plasma — a state of matter that exists at extremely high temperatures where the electrons are ripped away from their atoms. On Earth, plasmas can be seen in lightning or neon lights. In such a medium, the dynamo effect can amplify an existing magnetic field, provided it starts at some minimal level.

    Making the first magnetic fields

    Where does this seed field come from? That’s where the recent work of Zhou and her colleagues, published May 5 in PNAS, comes in. Zhou developed the underlying theory and performed numerical simulations on powerful supercomputers that show how the seed field can be produced and what fundamental processes are at work. An important aspect of the plasma that exists between stars and galaxies is that it is extraordinarily diffuse — typically about one particle per cubic meter. That is a very different situation from the interior of stars, where the particle density is about 30 orders of magnitude higher. The low densities mean that the particles in cosmological plasmas never collide, which has important effects on their behavior that had to be included in the model that these researchers were developing.   

    Calculations performed by the MIT researchers followed the dynamics in these plasmas, which developed from well-ordered waves but became turbulent as the amplitude grew and the interactions became strongly nonlinear. By including detailed effects of the plasma dynamics at small scales on macroscopic astrophysical processes, they demonstrated that the first magnetic fields can be spontaneously produced through generic large-scale motions as simple as sheared flows. Just like the terrestrial examples, mechanical energy was converted into magnetic energy.

    An important output of their computation was the amplitude of the expected spontaneously generated magnetic field. What this showed was that the field amplitude could rise from zero to a level where the plasma is “magnetized” — that is, where the plasma dynamics are strongly affected by the presence of the field. At this point, the traditional dynamo mechanism can take over and raise the fields to the levels that are observed. Thus, their work represents a self-consistent model for the generation of magnetic fields at cosmological scale.

    Professor Ellen Zweibel of the University of Wisconsin at Madison notes that “despite decades of remarkable progress in cosmology, the origin of magnetic fields in the universe remains unknown. It is wonderful to see state-of-the-art plasma physics theory and numerical simulation brought to bear on this fundamental problem.”

    Zhou and co-workers will continue to refine their model and study the handoff from the generation of the seed field to the amplification phase of the dynamo. An important part of their future research will be to determine if the process can work on a time scale consistent with astronomical observations. To quote the researchers, “This work provides the first step in the building of a new paradigm for understanding magnetogenesis in the universe.”

    This work was funded by the National Science Foundation CAREER Award and the Future Investigators of NASA Earth and Space Science Technology (FINESST) grant.

  • MIT J-WAFS announces 2022 seed grant recipients

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT has awarded 2022 J-WAFS seed grants to eight MIT principal investigators. The grants support innovative MIT research that has the potential to make a significant impact on water- and food-related challenges.

    The only program at MIT that is dedicated to water- and food-related research, J-WAFS has offered seed grant funding to MIT principal investigators and their teams for the past eight years. The grants provide up to $75,000 per year, overhead-free, for two years to support new, early-stage research in areas such as water and food security, safety, supply, and sustainability. Past projects have spanned many diverse disciplines, including engineering, science, technology, and business innovation, as well as social science and economics, architecture, and urban planning. 

    Seven new projects led by eight researchers will be supported this year. With funding going to four different MIT departments, the projects address a range of challenges by employing advanced materials, technology innovations, and new approaches to resource management. The new projects aim to remove harmful chemicals from water sources, develop drought monitoring systems for farmers, improve management of the shellfish industry, optimize water purification materials, and more.

    “Climate change, the pandemic, and most recently the war in Ukraine have exacerbated and put a spotlight on the serious challenges facing global water and food systems,” says J-WAFS director John H. Lienhard. He adds, “The proposals chosen this year have the potential to create measurable, real-world impacts in both the water and food sectors.”  

    The 2022 J-WAFS seed grant researchers and their projects are:

    Gang Chen, the Carl Richard Soderberg Professor of Power Engineering in MIT’s Department of Mechanical Engineering, is using sunlight to desalinate water. The use of solar energy for desalination is not a new idea, particularly solar thermal evaporation methods. However, the solar thermal evaporation process has an overall low efficiency because it relies on breaking hydrogen bonds among individual water molecules, which is very energy-intensive. Chen and his lab recently discovered a photomolecular effect that dramatically lowers the energy required for desalination. 

    The bonds among water molecules inside a water cluster in liquid water are mostly hydrogen bonds. Chen discovered that a photon with energy larger than the bonding energy between the water cluster and the remaining water liquids can cleave off the water cluster at the water-air interface, colliding with air molecules and disintegrating into 60 or even more individual water molecules. This effect has the potential to significantly boost clean water production via new desalination technology that produces a photomolecular evaporation rate that exceeds pure solar thermal evaporation by at least ten-fold. 

    John E. Fernández is the director of the MIT Environmental Solutions Initiative (ESI) and a professor in the Department of Architecture, and also affiliated with the Department of Urban Studies and Planning. Fernández is working with Scott D. Odell, a postdoc in the ESI, to better understand the impacts of mining and climate change in water-stressed regions of Chile.

    The country of Chile is one of the world’s largest exporters of both agricultural and mineral products; however, little research has been done on climate change effects at the intersection of these two sectors. Fernández and Odell will explore how desalination is being deployed by the mining industry to relieve pressure on continental water supplies in Chile, and with what effect. They will also research how climate change and mining intersect to affect Andean glaciers and agricultural communities dependent upon them. The researchers intend for this work to inform policies to reduce social and environmental harms from mining, desalination, and climate change.

    Ariel L. Furst is the Raymond (1921) and Helen St. Laurent Career Development Professor of Chemical Engineering at MIT. Her 2022 J-WAFS seed grant project seeks to effectively remove dangerous and long-lasting chemicals from water supplies and other environmental areas. 

    Perfluorooctanoic acid (PFOA), a component of Teflon, is a member of a group of chemicals known as per- and polyfluoroalkyl substances (PFAS). These human-made chemicals have been extensively used in consumer products like nonstick cooking pans. Exceptionally high levels of PFOA have been measured in water sources near manufacturing sites, which is problematic as these chemicals do not readily degrade in our bodies or the environment. The majority of humans have detectable levels of PFAS in their blood, which can lead to significant health issues including cancer, liver damage, and thyroid effects, as well as developmental effects in infants. Current remediation methods are limited to inefficient capture and are mostly confined to laboratory settings. Furst’s proposed method utilizes low-energy, scaffolded enzyme materials to move beyond simple capture to degrade these hazardous pollutants.

    Heather J. Kulik is an associate professor in the Department of Chemical Engineering at MIT who is developing novel computational strategies to identify optimal materials for purifying water. Water treatment requires purification by selectively separating small ions from water. However, human-made, scalable materials for water purification and desalination are often not stable in typical operating conditions and lack precision pores for good separation. 

    Metal-organic frameworks (MOFs) are promising materials for water purification because their pores can be tailored to have precise shapes and chemical makeup for selective ion affinity. Yet few MOFs have been assessed for their properties relevant to water purification. Kulik plans to use virtual high-throughput screening accelerated by machine learning models and molecular simulation to accelerate discovery of MOFs. Specifically, Kulik will be looking for MOFs with ultra-stable structures in water that do not break down at certain temperatures. 

    Gregory C. Rutledge is the Lammot du Pont Professor of Chemical Engineering at MIT. He is leading a project that will explore how to better separate oils from water. This is an important problem to solve given that industry-generated oil-contaminated water is a major source of pollution to the environment.

    Emulsified oils are particularly challenging to remove from water due to their small droplet sizes and long settling times. Microfiltration is an attractive technology for the removal of emulsified oils, but its major drawback is fouling, or the accumulation of unwanted material on solid surfaces. Rutledge will examine the mechanism of separation behind liquid-infused membranes (LIMs) in which an infused liquid coats the surface and pores of the membrane, preventing fouling. Robustness of the LIM technology for removal of different types of emulsified oils and oil mixtures will be evaluated.

    César Terrer is an assistant professor in the Department of Civil and Environmental Engineering whose J-WAFS project seeks to answer the question: How can satellite images be used to provide a high-resolution drought monitoring system for farmers?

    Drought is recognized as one of the world’s most pressing issues, with direct impacts on vegetation that threaten water resources and food production globally. However, assessing and monitoring the impact of droughts on vegetation is extremely challenging as plants’ sensitivity to lack of water varies across species and ecosystems. Terrer will leverage a new generation of remote sensing satellites to provide high-resolution assessments of plant water stress at regional to global scales. The aim is to provide a plant drought monitoring product with farmland-specific services for water and socioeconomic management.

    Michael Triantafyllou is the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. He is developing a web-based system for natural resources management that will deploy geospatial analysis, visualization, and reporting to better manage and facilitate aquaculture data.

    By providing value to commercial fisheries’ permit holders who employ significant numbers of people and also to recreational shellfish permit holders who contribute to local economies, the project has attracted support from the Massachusetts Division of Marine Fisheries as well as a number of local resource management departments.

    Massachusetts shell fisheries generated roughly $339 million in 2020, accounting for 17 percent of U.S. East Coast production. Managing such a large industry is a time-consuming process, given there are thousands of acres of coastal areas grouped within over 800 classified shellfish growing areas. Extreme climate events present additional challenges. Triantafyllou’s research will help efforts to enforce environmental regulations, support habitat restoration efforts, and prevent shellfish-related food safety issues.

  • Team creates map for production of eco-friendly metals

    In work that could usher in more efficient, eco-friendly processes for producing important metals like lithium, iron, and cobalt, researchers from MIT and the SLAC National Accelerator Laboratory have mapped what is happening at the atomic level behind a particularly promising approach called metal electrolysis.

    By creating maps for a wide range of metals, they not only determined which metals should be easiest to produce using this approach, but also identified fundamental barriers behind the efficient production of others. As a result, the researchers’ map could become an important design tool for optimizing the production of all these metals.

    The work could also aid the development of metal-air batteries, cousins of the lithium-ion batteries used in today’s electric vehicles.

    Most of the metals key to society today are produced using fossil fuels. These fuels generate the high temperatures necessary to convert the original ore into its purified metal. But that process is a significant source of greenhouse gases — steel alone accounts for some 7 percent of carbon dioxide emissions globally. As a result, researchers from around the world are working to identify more eco-friendly ways for the production of metals.

    One promising approach is metal electrolysis, in which a metal oxide, the ore, is zapped with electricity to create pure metal with oxygen as the byproduct. That is the reaction explored at the atomic level in new research reported in the April 8 issue of the journal Chemistry of Materials.

    Donald Siegel is department chair and professor of mechanical engineering at the University of Texas at Austin. Says Siegel, who was not involved in the Chemistry of Materials study: “This work is an important contribution to improving the efficiency of metal production from metal oxides. It clarifies our understanding of low-carbon electrolysis processes by tracing the underlying thermodynamics back to elementary metal-oxygen interactions. I expect that this work will aid in the creation of design rules that will make these industrially important processes less reliant on fossil fuels.”

    Yang Shao-Horn, the JR East Professor of Engineering in MIT’s Department of Materials Science and Engineering (DMSE) and Department of Mechanical Engineering, is a leader of the current work, with Michal Bajdich of SLAC.

    “Here we aim to establish some basic understanding to predict the efficiency of electrochemical metal production and metal-air batteries from examining computed thermodynamic barriers for the conversion between metal and metal oxides,” says Shao-Horn, who is on the research team for MIT’s new Center for Electrification and Decarbonization of Industry, a winner of the Institute’s first-ever Climate Grand Challenges competition. Shao-Horn is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics.

    In addition to Shao-Horn and Bajdich, other authors of the Chemistry of Materials paper are Jaclyn R. Lunger, first author and a DMSE graduate student; mechanical engineering senior Naomi Lutz; and DMSE graduate student Jiayu Peng.

    Other applications

    The work could also aid in developing metal-air batteries such as lithium-air, aluminum-air, and zinc-air batteries. These cousins of the lithium-ion batteries used in today’s electric vehicles have the potential to electrify aviation because their energy densities are much higher. However, they are not yet on the market due to a variety of problems including inefficiency.

    Charging metal-air batteries also involves electrolysis. As a result, the new atomic-level understanding of these reactions could not only help engineers develop efficient electrochemical routes for metal production, but also design more efficient metal-air batteries.

    Learning from water splitting

    Electrolysis is also used to split water into oxygen and hydrogen, with the hydrogen storing the resulting energy. That hydrogen, in turn, could become an eco-friendly alternative to fossil fuels. Since much more is known about water electrolysis, the focus of Bajdich’s work at SLAC, than about the electrolysis of metal oxides, the team compared the two processes for the first time.

    The result: “Slowly, we uncovered the elementary steps involved in metal electrolysis,” says Bajdich. The work was challenging, says Lunger, because “it was unclear to us what those steps are. We had to figure out how to get from A to B,” or from a metal oxide to metal and oxygen.

    All of the work was conducted with supercomputer simulations. “It’s like a sandbox of atoms, and then we play with them. It’s a little like Legos,” says Bajdich. More specifically, the team explored different scenarios for the electrolysis of several metals. Each involved different catalysts, molecules that boost the speed of a reaction.

    Says Lunger, “To optimize the reaction, you want to find the catalyst that makes it most efficient.” The team’s map is essentially a guide for designing the best catalysts for each different metal.

    What’s next? Lunger noted that the current work focused on the electrolysis of pure metals. “I’m interested in seeing what happens in more complex systems involving multiple metals. Can you make the reaction more efficient if there’s sodium and lithium present, or cadmium and cesium?”

    This work was supported by a U.S. Department of Energy Office of Science Graduate Student Research award. It was also supported by an MIT Energy Initiative fellowship, the Toyota Research Institute through the Accelerated Materials Design and Discovery Program, the Catalysis Science Program of the Department of Energy’s Office of Basic Energy Sciences, and by the Differentiate Program through the U.S. Advanced Research Projects Agency — Energy.

  • Absent legislative victory, the president can still meet US climate goals

    The most recent United Nations climate change report indicates that without significant action to mitigate global warming, the extent and magnitude of climate impacts — from floods to droughts to the spread of disease — could outpace the world’s ability to adapt to them. The latest effort to introduce meaningful climate legislation in the United States Congress, the Build Back Better bill, has stalled. The climate package in that bill — $555 billion in funding for climate resilience and clean energy — aims to reduce U.S. greenhouse gas emissions by about 50 percent below 2005 levels by 2030, the nation’s current Paris Agreement pledge. With prospects of passing a standalone climate package in the Senate far from assured, is there another pathway to fulfilling that pledge?

    Recent detailed legal analysis shows that there is at least one viable option for the United States to achieve the 2030 target without legislative action. Under Section 115 on International Air Pollution of the Clean Air Act, the U.S. Environmental Protection Agency (EPA) could assign emissions targets to the states that collectively meet the national goal. The president could simply issue an executive order to empower the EPA to do just that. But would that be prudent?

    A new study led by researchers at the MIT Joint Program on the Science and Policy of Global Change explores how, under a federally coordinated carbon dioxide emissions cap-and-trade program aligned with the U.S. Paris Agreement pledge and implemented through Section 115 of the Clean Air Act, the EPA might allocate emissions cuts among states. Recognizing that the Biden or any future administration considering this strategy would need to carefully weigh its benefits against its potential political risks, the study highlights the policy’s net economic benefits to the nation.

    The researchers calculate those net benefits by combining the estimated total cost of carbon dioxide emissions reduction under the policy with the corresponding estimated expenditures that would be avoided as a result of the policy’s implementation — expenditures on health care due to particulate air pollution, and on society at large due to climate impacts.
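
    Schematically, that accounting reduces to a single line of arithmetic; the figures below are arbitrary placeholders rather than the study’s estimates.

    ```python
    # Schematic net-benefit bookkeeping, in billions of dollars (illustrative placeholders only).
    abatement_cost = 60.0            # total cost of cutting CO2 emissions under the policy
    avoided_health_spending = 90.0   # health-care spending avoided from lower particulate pollution
    avoided_climate_damages = 80.0   # climate damages avoided, valued via the social cost of carbon

    net_benefit = avoided_health_spending + avoided_climate_damages - abatement_cost
    print(f"net benefit: ${net_benefit:.0f} billion")
    ```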

    Assessing three carbon dioxide emissions allocation strategies (each with legal precedent) for implementing Section 115 to return cap-and-trade program revenue to the states and distribute it to state residents on an equal per-capita basis, the study finds that at the national level, the economic net benefits are substantial, ranging from $70 to $150 billion in 2030. The results appear in the journal Environmental Research Letters.

    “Our findings not only show significant net gains to the U.S. economy under a national emissions policy implemented through the Clean Air Act’s Section 115,” says Mei Yuan, a research scientist at the MIT Joint Program and lead author of the study. “They also show the policy impact on consumer costs may differ across states depending on the choice of allocation strategy.”

    The national price on carbon needed to achieve the policy’s emissions target, as well as the policy’s ultimate cost to consumers, are substantially lower than those found in studies a decade earlier, although in line with other recent studies. The researchers speculate that this is largely due to ongoing expansion of ambitious state policies in the electricity sector and declining renewable energy costs. The policy is also progressive, consistent with earlier studies, in that equal lump-sum distribution of allowance revenue to state residents generally leads to net benefits to lower-income households. Regional disparities in consumer costs can be moderated by the allocation of allowances among states.

    State-by-state emissions estimates for the study are derived from MIT’s U.S. Regional Energy Policy model, with electricity sector detail of the Renewable Energy Development System model developed by the U.S. National Renewable Energy Laboratory; air quality benefits are estimated using U.S. EPA and other models; and the climate benefits estimate is based on the social cost of carbon, the U.S. federal government’s assessment of the economic damages that would result from emitting one additional ton of carbon dioxide into the atmosphere (currently $51/ton, adjusted for inflation). 

    “In addition to illustrating the economic, health, and climate benefits of a Section 115 implementation, our study underscores the advantages of a policy that imposes a uniform carbon price across all economic sectors,” says John Reilly, former co-director of the MIT Joint Program and a study co-author. “A national carbon price would serve as a major incentive for all sectors to decarbonize.”