More stories

  • Study finds natural sources of air pollution exceed air quality guidelines in many regions

    Alongside climate change, air pollution is one of the biggest environmental threats to human health. Tiny particles known as particulate matter or PM2.5 (named for their diameter of just 2.5 micrometers or less) are a particularly hazardous type of pollutant. These particles are produced from a variety of sources, including wildfires and the burning of fossil fuels, and can enter our bloodstream, travel deep into our lungs, and cause respiratory and cardiovascular damage. Exposure to particulate matter is responsible for millions of premature deaths globally every year.

    In response to the growing body of evidence on the detrimental effects of PM2.5, the World Health Organization (WHO) recently updated its air quality guidelines, lowering its recommended annual PM2.5 exposure guideline by 50 percent, from 10 micrograms per cubic meter (μg/m³) to 5 μg/m³. These updated guidelines signify an aggressive attempt to promote the regulation and reduction of anthropogenic emissions in order to improve global air quality.

    A new study by researchers in the MIT Department of Civil and Environmental Engineering explores whether the updated air quality guideline of 5 μg/m³ is realistically attainable across different regions of the world, even if anthropogenic emissions are aggressively reduced.

    The first question the researchers wanted to investigate was to what degree moving to a no-fossil-fuel future would help different regions meet this new air quality guideline.

    “The answer we found is that eliminating fossil-fuel emissions would improve air quality around the world, but while this would help some regions come into compliance with the WHO guidelines, for many other regions high contributions from natural sources would impede their ability to meet that target,” says senior author Colette Heald, the Germeshausen Professor in the MIT departments of Civil and Environmental Engineering, and Earth, Atmospheric and Planetary Sciences. 

    The study by Heald, Professor Jesse Kroll, and graduate students Sidhant Pai and Therese Carter, published June 6 in the journal Environmental Science & Technology Letters, finds that over 90 percent of the global population is currently exposed to average annual concentrations that are higher than the recommended guideline. The authors go on to demonstrate that over 50 percent of the world’s population would still be exposed to PM2.5 concentrations that exceed the new air quality guidelines, even in the absence of all anthropogenic emissions.
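
    Population-exposure figures like these come from overlaying gridded PM2.5 concentrations with gridded population counts. A minimal sketch of that bookkeeping, using invented grid values rather than the study's data:

```python
import numpy as np

def fraction_above(pm25, population, guideline=5.0):
    """Population-weighted fraction of people living where annual-mean
    PM2.5 (ug/m^3) exceeds the guideline."""
    pm25 = np.asarray(pm25, dtype=float)
    population = np.asarray(population, dtype=float)
    exposed = population[pm25 > guideline].sum()
    return exposed / population.sum()

# Invented 2x2 "grids" of concentration and population, for illustration only.
pm25 = np.array([[3.2, 6.1], [12.4, 4.9]])       # ug/m^3
population = np.array([[1e6, 2e6], [4e6, 1e6]])  # people per cell

print(fraction_above(pm25, population))  # 0.75: three-quarters exceed 5 ug/m^3
```

    The real calculation uses global model output on fine grids, but the logic is the same: mask the cells above the threshold and sum the people who live there.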

    This is due to the large natural sources of particulate matter — dust, sea salt, and organics from vegetation — that still exist in the atmosphere when anthropogenic emissions are removed from the air. 

    “If you live in parts of India or northern Africa that are exposed to large amounts of fine dust, it can be challenging to reduce PM2.5 exposures below the new guideline,” says Sidhant Pai, co-lead author and graduate student. “This study challenges us to rethink the value of different emissions abatement controls across different regions and suggests the need for a new generation of air quality metrics that can enable targeted decision-making.”

    The researchers conducted a series of model simulations to explore the viability of achieving the updated PM2.5 guidelines worldwide under different emissions reduction scenarios, using 2019 as a representative baseline year. 

    Their model simulations used a suite of different anthropogenic sources that could be turned on and off to study the contribution of a particular source. For instance, the researchers conducted a simulation that turned off all human-based emissions in order to determine the amount of PM2.5 pollution that could be attributed to natural and fire sources. By analyzing the chemical composition of the PM2.5 aerosol in the atmosphere (e.g., dust, sulfate, and black carbon), the researchers were also able to get a more accurate understanding of the most important PM2.5 sources in a particular region. For example, elevated PM2.5 concentrations in the Amazon were shown to predominantly consist of carbon-containing aerosols from sources like deforestation fires. Conversely, nitrogen-containing aerosols were prominent in Northern Europe, with large contributions from vehicles and fertilizer usage. The two regions would thus require very different policies and methods to improve their air quality. 
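
    The on/off bookkeeping described above can be sketched with a toy stand-in for the model. All source names and numbers here are invented for illustration, and a real chemical-transport model is nonlinear, so zero-out differences are only approximate attributions:

```python
# Invented per-source PM2.5 contributions (ug/m^3) at one hypothetical location.
CONC = {"dust": 22.0, "sea_salt": 3.0, "fossil_fuel": 9.0, "fires": 6.0}

def simulate(active_sources):
    """Toy stand-in for a chemical-transport model run: total PM2.5
    with only the listed emission sources switched on."""
    return sum(CONC[s] for s in active_sources)

baseline = simulate(CONC)

# Zero-out attribution: re-run with one source off; the drop is its contribution.
contributions = {s: baseline - simulate(set(CONC) - {s}) for s in CONC}

# "No anthropogenic emissions" scenario: only the natural sources remain.
natural_only = simulate({"dust", "sea_salt"})

print(contributions["fossil_fuel"])  # 9.0 in this linear toy
print(natural_only)                  # 25.0 -- still above the 5 ug/m^3 guideline
```

    The last line illustrates the study's central finding in miniature: even with every human source switched off, a dust-dominated location can remain far above the guideline.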

    “Analyzing particulate pollution across individual chemical species allows for mitigation and adaptation decisions that are specific to the region, as opposed to a one-size-fits-all approach, which can be challenging to execute without an understanding of the underlying importance of different sources,” says Pai. 

    When the WHO air quality guidelines were last updated in 2005, they had a significant impact on environmental policies. Scientists could look at an area that was not in compliance and suggest high-level solutions to improve the region’s air quality. But as the guidelines have tightened, globally applicable solutions to manage and improve air quality are no longer as evident.

    “Another benefit of speciating is that some of the particles have different toxicity properties that are correlated to health outcomes,” says Therese Carter, co-lead author and graduate student. “It’s an important area of research that this work can help motivate. Being able to separate out that piece of the puzzle can provide epidemiologists with more insights on the different toxicity levels and the impact of specific particles on human health.”

    The authors view these new findings as an opportunity to expand and iterate on the current guidelines.  

    “Routine and global measurements of the chemical composition of PM2.5 would give policymakers information on what interventions would most effectively improve air quality in any given location,” says Jesse Kroll, a professor in the MIT departments of Civil and Environmental Engineering and Chemical Engineering. “But it would also provide us with new insights into how different chemical species in PM2.5 affect human health.”

    “I hope that as we learn more about the health impacts of these different particles, our work and that of the broader atmospheric chemistry community can help inform strategies to reduce the pollutants that are most harmful to human health,” adds Heald.

  • How the universe got its magnetic field

    When we look out into space, all of the astrophysical objects that we see are embedded in magnetic fields. This is true not only in the neighborhood of stars and planets, but also in the deep space between galaxies and galactic clusters. These fields are weak — typically much weaker than those of a refrigerator magnet — but they are dynamically significant in the sense that they have profound effects on the dynamics of the universe. Despite decades of intense interest and research, the origin of these cosmic magnetic fields remains one of the most profound mysteries in cosmology.

    In previous research, scientists came to understand how turbulence, the churning motion common to fluids of all types, could amplify preexisting magnetic fields through the so-called dynamo process. But this remarkable discovery just pushed the mystery one step deeper. If a turbulent dynamo could only amplify an existing field, where did the “seed” magnetic field come from in the first place?

    We wouldn’t have a complete and self-consistent answer to the origin of astrophysical magnetic fields until we understood how the seed fields arose. New work carried out by MIT graduate student Muni Zhou, her advisor Nuno Loureiro, a professor of nuclear science and engineering at MIT, and colleagues at Princeton University and the University of Colorado at Boulder provides an answer that shows the basic processes that generate a field from a completely unmagnetized state to the point where it is strong enough for the dynamo mechanism to take over and amplify the field to the magnitudes that we observe.

    Magnetic fields are everywhere

    Naturally occurring magnetic fields are seen everywhere in the universe. They were first observed on Earth thousands of years ago, through their interaction with magnetized minerals like lodestone, and used for navigation long before people had any understanding of their nature or origin. Magnetism on the sun was discovered at the beginning of the 20th century by its effects on the spectrum of light that the sun emitted. Since then, ever more powerful telescopes looking deep into space have found that such fields are ubiquitous.

    And while scientists had long learned how to make and use permanent magnets and electromagnets, which had all sorts of practical applications, the natural origins of magnetic fields in the universe remained a mystery. Recent work has provided part of the answer, but many aspects of this question are still under debate.

    Amplifying magnetic fields — the dynamo effect

    Scientists started thinking about this problem by considering the way that electric and magnetic fields were produced in the laboratory. When conductors, like copper wire, move in magnetic fields, electric fields are created. These fields, or voltages, can then drive electrical currents. This is how the electricity that we use every day is produced. Through this process of induction, large generators or “dynamos” convert mechanical energy into the electromagnetic energy that powers our homes and offices. A key feature of dynamos is that they need magnetic fields in order to work.

    But out in the universe, there are no obvious wires or big steel structures, so how do the fields arise? Progress on this problem began about a century ago as scientists pondered the source of the Earth’s magnetic field. By then, studies of the propagation of seismic waves showed that much of the Earth, below the cooler surface layers of the mantle, was liquid, and that there was a core composed of molten nickel and iron. Researchers theorized that the convective motion of this hot, electrically conductive liquid and the rotation of the Earth combined in some way to generate the Earth’s field.

    Eventually, models emerged that showed how the convective motion could amplify an existing field. This is an example of “self-organization” — a feature often seen in complex dynamical systems — where large-scale structures grow spontaneously from small-scale dynamics. But just like in a power station, you needed a magnetic field to make a magnetic field.

    A similar process is at work all over the universe. However, in stars and galaxies and in the space between them, the electrically conducting fluid is not molten metal, but plasma — a state of matter that exists at extremely high temperatures where the electrons are ripped away from their atoms. On Earth, plasmas can be seen in lightning or neon lights. In such a medium, the dynamo effect can amplify an existing magnetic field, provided it starts at some minimal level.
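
    The "field begets field" character of the dynamo can be made concrete with a toy exponential-growth model. The growth rate and seed value below are invented purely for illustration; the point is that any nonzero seed is eventually amplified to dynamical significance, while a strictly zero field stays zero:

```python
import math

def dynamo_growth(b0, gamma, t):
    """Kinematic-phase dynamo: field amplitude B(t) = B0 * exp(gamma * t),
    valid until nonlinear saturation (arbitrary units throughout)."""
    return b0 * math.exp(gamma * t)

seed = 1e-20  # a tiny but nonzero seed field

print(dynamo_growth(seed, gamma=1.0, t=50) > 1.0)  # True: ~21 orders of magnitude gained
print(dynamo_growth(0.0, gamma=1.0, t=50))         # 0.0: no seed, no dynamo
```

    This is why the seed-field question matters: the exponential amplifier is powerful, but it multiplies whatever it is given, and anything times zero is zero.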

    Making the first magnetic fields

    Where does this seed field come from? That’s where the recent work of Zhou and her colleagues, published May 5 in PNAS, comes in. Zhou developed the underlying theory and performed numerical simulations on powerful supercomputers that show how the seed field can be produced and what fundamental processes are at work. An important aspect of the plasma that exists between stars and galaxies is that it is extraordinarily diffuse — typically about one particle per cubic meter. That is a very different situation from the interior of stars, where the particle density is about 30 orders of magnitude higher. The low densities mean that the particles in cosmological plasmas never collide, which has important effects on their behavior that had to be included in the model that these researchers were developing.   

    Calculations performed by the MIT researchers followed the dynamics in these plasmas, which developed from well-ordered waves but became turbulent as the amplitude grew and the interactions became strongly nonlinear. By including detailed effects of the plasma dynamics at small scales on macroscopic astrophysical processes, they demonstrated that the first magnetic fields can be spontaneously produced through generic large-scale motions as simple as sheared flows. Just like the terrestrial examples, mechanical energy was converted into magnetic energy.

    An important output of their computation was the amplitude of the expected spontaneously generated magnetic field. What this showed was that the field amplitude could rise from zero to a level where the plasma is “magnetized” — that is, where the plasma dynamics are strongly affected by the presence of the field. At this point, the traditional dynamo mechanism can take over and raise the fields to the levels that are observed. Thus, their work represents a self-consistent model for the generation of magnetic fields at cosmological scale.

    Professor Ellen Zweibel of the University of Wisconsin at Madison notes that “despite decades of remarkable progress in cosmology, the origin of magnetic fields in the universe remains unknown. It is wonderful to see state-of-the-art plasma physics theory and numerical simulation brought to bear on this fundamental problem.”

    Zhou and co-workers will continue to refine their model and study the handoff from the generation of the seed field to the amplification phase of the dynamo. An important part of their future research will be to determine if the process can work on a time scale consistent with astronomical observations. To quote the researchers, “This work provides the first step in the building of a new paradigm for understanding magnetogenesis in the universe.”

    This work was funded by the National Science Foundation CAREER Award and the Future Investigators of NASA Earth and Space Science Technology (FINESST) grant.

  • MIT Climate and Sustainability Consortium announces recipients of inaugural MCSC Seed Awards

    The MIT Climate and Sustainability Consortium (MCSC) has awarded 20 projects a total of $5 million over two years in its first-ever 2022 MCSC Seed Awards program. The winning projects are led by principal investigators across all five of MIT’s schools.

    The goal of the MCSC Seed Awards is to engage MIT researchers and link the economy-wide work of the consortium to ongoing and emerging climate and sustainability efforts across campus. The program offers further opportunity to build networks among the awarded projects to deepen the impact of each and ensure the total is greater than the sum of its parts.

    For example, to drive progress under the awards category Circularity and Materials, the MCSC can facilitate connections between the technologists at MIT who are developing recovery approaches for metals, plastics, and fiber; the urban planners who are uncovering barriers to reuse; and the engineers, who will look for efficiency opportunities in reverse supply chains.

    “The MCSC Seed Awards are designed to complement actions previously outlined in Fast Forward: MIT’s Climate Action Plan for the Decade and, more specifically, the Climate Grand Challenges,” says Anantha P. Chandrakasan, dean of the MIT School of Engineering, Vannevar Bush Professor of Electrical Engineering and Computer Science, and chair of the MIT Climate and Sustainability Consortium. “In collaboration with seed award recipients and MCSC industry members, we are eager to engage in interdisciplinary exploration and propel urgent advancements in climate and sustainability.” 

    By supporting MIT researchers with expertise in economics, infrastructure, community risk assessment, mobility, and alternative fuels, the MCSC will accelerate implementation of cross-disciplinary solutions in the awards category Decarbonized and Resilient Value Chains. Enhancing Natural Carbon Sinks and building connections to local communities will require associations across experts in ecosystem change, biodiversity, improved agricultural practice and engagement with farmers, all of which the consortium can begin to foster through the seed awards.

    “Funding opportunities across campus has been a top priority since launching the MCSC,” says Jeremy Gregory, MCSC executive director. “It is our honor to support innovative teams of MIT researchers through the inaugural 2022 MCSC Seed Awards program.”

    The winning projects are tightly aligned with the MCSC’s areas of focus, which were derived from a year of highly engaged collaborations with MCSC member companies. The projects apply across the members’ climate and sustainability goals.

    The MCSC’s 16 member companies span many industries, and since early 2021, have met with members of the MIT community to define focused problem statements for industry-specific challenges, identify meaningful partnerships and collaborations, and develop clear and scalable priorities. Outcomes from these collaborations laid the foundation for the focus areas, which have shaped the work of the MCSC. Specifically, the MCSC Industry Advisory Board engaged with MIT on key strategic directions, and played a critical role in the MCSC’s series of interactive events. These included virtual workshops hosted last summer, each on a specific topic that allowed companies to work with MIT and each other to align key assumptions, identify blind spots in corporate goal-setting, and leverage synergies between members, across industries. The work continued in follow-up sessions and an annual symposium.

    “We are excited to see how the seed award efforts will help our member companies reach or even exceed their ambitious climate targets, find new cross-sector links among each other, seek opportunities to lead, and ripple key lessons within their industry, while also deepening the Institute’s strong foundation in climate and sustainability research,” says Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering and MCSC co-director.

    As the seed projects take shape, the MCSC will provide ongoing opportunities for awardees to engage with the Industry Advisory Board and technical teams from the MCSC member companies to learn more about the potential for linking efforts to support and accelerate their climate and sustainability goals. Awardees will also have the chance to engage with other members of the MCSC community, including its interdisciplinary Faculty Steering Committee.

    “One of our mantras in the MCSC is to ‘amplify and extend’ existing efforts across campus; we’re always looking for ways to connect the collaborative industry relationships we’re building and the work we’re doing with other efforts on campus,” notes Jeffrey Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems, head of the Department of Materials Science and Engineering, and MCSC co-director. “We feel the urgency as well as the potential, and we don’t want to miss opportunities to do more and go faster.”

    The MCSC Seed Awards complement the Climate Grand Challenges, a new initiative to mobilize the entire MIT research community around developing the bold, interdisciplinary solutions needed to address difficult, unsolved climate problems. The 27 finalist teams addressed four broad research themes, which align with the MCSC’s focus areas. From these finalist teams, five flagship projects were announced in April 2022.

    The parallels between MCSC’s focus areas and the Climate Grand Challenges themes underscore an important connection between the shared long-term research interests of industry and academia. The challenges that some of the world’s largest and most influential companies have identified are complementary to MIT’s ongoing research and innovation — highlighting the tremendous opportunity to develop breakthroughs and scalable solutions quickly and effectively. Special Presidential Envoy for Climate John Kerry underscored the importance of developing these scalable solutions, including critical new technology, during a conversation with MIT President L. Rafael Reif at MIT’s first Climate Grand Challenges showcase event last month.

    Both the MCSC Seed Awards and the Climate Grand Challenges are part of MIT’s larger commitment and initiative to combat climate change; this was underscored in “Fast Forward: MIT’s Climate Action Plan for the Decade,” which the Institute published in May 2021.

    The project titles and research leads for each of the 20 awardees listed below are categorized by MCSC focus area.

    Decarbonized and resilient value chains

    “Collaborative community mapping toolkit for resilience planning,” led by Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab (a research lead on Climate Grand Challenges flagship project) and Nicholas de Monchaux, professor and department head in the Department of Architecture
    “CP4All: Fast and local climate projections with scientific machine learning — towards accessibility for all of humanity,” led by Chris Hill, principal research scientist in the Department of Earth, Atmospheric and Planetary Sciences and Dava Newman, director of the MIT Media Lab and the Apollo Program Professor in the Department of Aeronautics and Astronautics
    “Emissions reductions and productivity in U.S. manufacturing,” led by Mert Demirer, assistant professor of applied economics at the MIT Sloan School of Management and Jing Li, assistant professor and William Barton Rogers Career Development Chair of Energy Economics in the MIT Sloan School of Management
    “Logistics electrification through scalable and inter-operable charging infrastructure: operations, planning, and policy,” led by Alex Jacquillat, the 1942 Career Development Professor and assistant professor of operations research and statistics in the MIT Sloan School of Management
    “Powertrain and system design for LOHC-powered long-haul trucking,” led by William Green, the Hoyt Hottel Professor in Chemical Engineering in the Department of Chemical Engineering and postdoctoral officer, and Wai K. Cheng, professor in the Department of Mechanical Engineering and director of the Sloan Automotive Laboratory
    “Sustainable Separation and Purification of Biochemicals and Biofuels using Membranes,” led by John Lienhard, the Abdul Latif Jameel Professor of Water in the Department of Mechanical Engineering, director of the Abdul Latif Jameel Water and Food Systems Lab, and director of the Rohsenow Kendall Heat Transfer Laboratory; and Nicolas Hadjiconstantinou, professor in the Department of Mechanical Engineering, co-director of the Center for Computational Science and Engineering, associate director of the Center for Exascale Simulation of Materials in Extreme Environments, and graduate officer
    “Toolkit for assessing the vulnerability of industry infrastructure siting to climate change,” led by Michael Howland, assistant professor in the Department of Civil and Environmental Engineering

    Circularity and Materials

    “Colorimetric Sulfidation for Aluminum Recycling,” led by Antoine Allanore, associate professor of metallurgy in the Department of Materials Science and Engineering
    “Double Loop Circularity in Materials Design Demonstrated on Polyurethanes,” led by Brad Olsen, the Alexander and I. Michael Kasser (1960) Professor and graduate admissions co-chair in the Department of Chemical Engineering, and Kristala Prather, the Arthur Dehon Little Professor and department executive officer in the Department of Chemical Engineering
    “Engineering of a microbial consortium to degrade and valorize plastic waste,” led by Otto Cordero, associate professor in the Department of Civil and Environmental Engineering, and Desiree Plata, the Gilbert W. Winslow (1937) Career Development Professor in Civil Engineering and associate professor in the Department of Civil and Environmental Engineering
    “Fruit-peel-inspired, biodegradable packaging platform with multifunctional barrier properties,” led by Kripa Varanasi, professor in the Department of Mechanical Engineering
    “High Throughput Screening of Sustainable Polyesters for Fibers,” led by Gregory Rutledge, the Lammot du Pont Professor in the Department of Chemical Engineering, and Brad Olsen, Alexander and I. Michael Kasser (1960) Professor and graduate admissions co-chair in the Department of Chemical Engineering
    “Short-term and long-term efficiency gains in reverse supply chains,” led by Yossi Sheffi, the Elisha Gray II Professor of Engineering Systems, professor in the Department of Civil and Environmental Engineering, and director of the Center for Transportation and Logistics
    “The costs and benefits of circularity in building construction,” led by Siqi Zheng, the STL Champion Professor of Urban and Real Estate Sustainability at the MIT Center for Real Estate and Department of Urban Studies and Planning, faculty director of the MIT Center for Real Estate, and faculty director for the MIT Sustainable Urbanization Lab; and Randolph Kirchain, principal research scientist and co-director of MIT Concrete Sustainability Hub

    Natural carbon sinks

    “Carbon sequestration through sustainable practices by smallholder farmers,” led by Joann de Zegher, the Maurice F. Strong Career Development Professor and assistant professor of operations management in the MIT Sloan School of Management, and Karen Zheng, the George M. Bunker Professor and associate professor of operations management in the MIT Sloan School of Management
    “Coatings to protect and enhance diverse microbes for improved soil health and crop yields,” led by Ariel Furst, the Raymond A. (1921) And Helen E. St. Laurent Career Development Professor of Chemical Engineering in the Department of Chemical Engineering, and Mary Gehring, associate professor of biology in the Department of Biology, core member of the Whitehead Institute for Biomedical Research, and graduate officer
    “ECO-LENS: Mainstreaming biodiversity data through AI,” led by John Fernández, professor of building technology in the Department of Architecture and director of MIT Environmental Solutions Initiative
    “Growing season length, productivity, and carbon balance of global ecosystems under climate change,” led by Charles Harvey, professor in the Department of Civil and Environmental Engineering, and César Terrer, assistant professor in the Department of Civil and Environmental Engineering

    Social dimensions and adaptation

    “Anthro-engineering decarbonization at the million-person scale,” led by Manduhai Buyandelger, professor in the Anthropology Section, and Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering in the Department of Nuclear Science and Engineering
    “Sustainable solutions for climate change adaptation: weaving traditional ecological knowledge and STEAM,” led by Janelle Knox-Hayes, the Lister Brothers Associate Professor of Economic Geography and Planning and head of the Environmental Policy and Planning Group in the Department of Urban Studies and Planning, and Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab (a research lead on a Climate Grand Challenges flagship project)

  • MIT expands research collaboration with Commonwealth Fusion Systems to build net energy fusion machine, SPARC

    MIT’s Plasma Science and Fusion Center (PSFC) will substantially expand its fusion energy research and education activities under a new five-year agreement with Institute spinout Commonwealth Fusion Systems (CFS).

    “This expanded relationship puts MIT and PSFC in a prime position to be an even stronger academic leader that can help deliver the research and education needs of the burgeoning fusion energy industry, in part by utilizing the world’s first burning plasma and net energy fusion machine, SPARC,” says PSFC director Dennis Whyte. “CFS will build SPARC and develop a commercial fusion product, while MIT PSFC will focus on its core mission of cutting-edge research and education.”

    Commercial fusion energy has the potential to play a significant role in combating climate change, and there is a concurrent increase in interest from the energy sector, governments, and foundations. The new agreement, administered by the MIT Energy Initiative (MITEI), where CFS is a startup member, will help PSFC expand its fusion technology efforts with a wider variety of sponsors. The collaboration enables rapid execution at scale and technology transfer into the commercial sector as soon as possible.

    This new agreement doubles CFS’ financial commitment to PSFC, enabling greater recruitment and support of students, staff, and faculty. “We’ll significantly increase the number of graduate students and postdocs, and, just as important, they will be working on a more diverse set of fusion science and technology topics,” notes Whyte. It extends the collaboration between PSFC and CFS that resulted in numerous advances toward fusion power plants, including last fall’s demonstration of a high-temperature superconducting (HTS) fusion electromagnet with record-setting field strength of 20 tesla.

    The combined magnetic fusion efforts at PSFC will surpass those in place during the operations of the pioneering Alcator C-Mod tokamak device that operated from 1993 to 2016. This increase in activity reflects a moment when multiple fusion energy technologies are seeing rapidly accelerating development worldwide, and the emergence of a new fusion energy industry that would require thousands of trained people.

    MITEI director Robert Armstrong adds, “Our goal from the beginning was to create a membership model that would allow startups who have specific research challenges to leverage the MITEI ecosystem, including MIT faculty, students, and other MITEI members. The team at the PSFC and MITEI have worked seamlessly to support CFS, and we are excited for this next phase of the relationship.”

    PSFC is supporting CFS’ efforts toward realizing the SPARC fusion platform, which facilitates rapid development and refinement of elements (including HTS magnets) needed to build ARC, a compact, modular, high-field fusion power plant that would set the stage for commercial fusion energy production. The concepts originated in Whyte’s nuclear science and engineering class 22.63 (Principles of Fusion Engineering) and have been carried forward by students and PSFC staff, many of whom helped found CFS; the new activity will expand research into advanced technologies for the envisioned pilot plant.

    “This has been an incredibly effective collaboration that has resulted in a major breakthrough for commercial fusion with the successful demonstration of revolutionary fusion magnet technology that will enable the world’s first commercially relevant net energy fusion device, SPARC, currently under construction,” says Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems. “We look forward to this next phase in the collaboration with MIT as we tackle the critical research challenges ahead for the next steps toward fusion power plant development.”

    In the push for commercial fusion energy, the next five years are critical, requiring intensive work on materials longevity, heat transfer, fuel recycling, maintenance, and other crucial aspects of power plant development. It will need innovation from almost every engineering discipline. “Having great teams working now, it will cut the time needed to move from SPARC to ARC, and really unleash the creativity. And the thing MIT does so well is cut across disciplines,” says Whyte.

    “To address the climate crisis, the world needs to deploy existing clean energy solutions as widely and as quickly as possible, while at the same time developing new technologies — and our goal is that those new technologies will include fusion power,” says Maria T. Zuber, MIT’s vice president for research. “To make new climate solutions a reality, we need focused, sustained collaborations like the one between MIT and Commonwealth Fusion Systems. Delivering fusion power onto the grid is a monumental challenge, and the combined capabilities of these two organizations are what the challenge demands.”

    On a strategic level, climate change and the imperative need for widely implementable carbon-free energy have helped orient the PSFC team toward scalability. “Building one or 10 fusion plants doesn’t make a difference — we have to build thousands,” says Whyte. “The design decisions we make will impact the ability to do that down the road. The real enemy here is time, and we want to remove as many impediments as possible and commit to funding a new generation of scientific leaders. Those are critically important in a field with as much interdisciplinary integration as fusion.” More

  • in

    Machine learning, harnessed to extreme computing, aids fusion energy development

    MIT research scientists Pablo Rodriguez-Fernandez and Nathan Howard have just completed one of the most demanding calculations in fusion science — predicting the temperature and density profiles of a magnetically confined plasma via first-principles simulation of plasma turbulence. Solving this problem by brute force is beyond the capabilities of even the most advanced supercomputers. Instead, the researchers used an optimization methodology developed for machine learning to dramatically reduce the CPU time required while maintaining the accuracy of the solution.

    Fusion energy

    Fusion offers the promise of unlimited, carbon-free energy through the same physical process that powers the sun and the stars. It requires heating the fuel to temperatures above 100 million degrees, well above the point where the electrons are stripped from their atoms, creating a form of matter called plasma. On Earth, researchers use strong magnetic fields to isolate and insulate the hot plasma from ordinary matter. The stronger the magnetic field, the better the quality of the insulation that it provides.

    Rodriguez-Fernandez and Howard have focused on predicting the performance expected in the SPARC device, a compact, high-magnetic-field fusion experiment, currently under construction by the MIT spin-out company Commonwealth Fusion Systems (CFS) and researchers from MIT’s Plasma Science and Fusion Center. While the calculation required an extraordinary amount of computer time, over 8 million CPU-hours, what was remarkable was not how much time was used, but how little, given the daunting computational challenge.

    The computational challenge of fusion energy

    Turbulence, which is the mechanism for most of the heat loss in a confined plasma, is one of fusion science’s grand challenges and the greatest problem remaining in classical physics. The equations that govern fusion plasmas are well known, but analytic solutions are not possible in the regimes of interest, where nonlinearities are important and solutions encompass an enormous range of spatial and temporal scales. Scientists resort to solving the equations by numerical simulation on computers. It is no accident that fusion researchers have been pioneers in computational physics for the last 50 years.

    One of the fundamental problems for researchers is reliably predicting plasma temperature and density given only the magnetic field configuration and the externally applied input power. In confinement devices like SPARC, the external power and the heat input from the fusion process are lost through turbulence in the plasma. The turbulence itself is driven by the difference in the extremely high temperature of the plasma core and the relatively cool temperatures of the plasma edge (merely a few million degrees). Predicting the performance of a self-heated fusion plasma therefore requires a calculation of the power balance between the fusion power input and the losses due to turbulence.

    These calculations generally start by assuming plasma temperature and density profiles at a particular location, then computing the heat transported locally by turbulence. However, a useful prediction requires a self-consistent calculation of the profiles across the entire plasma, which includes both the heat input and turbulent losses. Directly solving this problem is beyond the capabilities of any existing computer, so researchers have developed an approach that stitches the profiles together from a series of demanding but tractable local calculations. This method works, but since the heat and particle fluxes depend on multiple parameters, the calculations can be very slow to converge.

    However, techniques emerging from the field of machine learning are well suited to optimizing just such a calculation. Starting with a set of computationally intensive local calculations run with the full-physics, first-principles CGYRO code (provided by a team from General Atomics led by Jeff Candy), Rodriguez-Fernandez and Howard fit a surrogate mathematical model, which was then used to guide an optimized search of the parameter space. The results of the optimization were compared to the exact calculations at each optimum point, and the system was iterated to a desired level of accuracy. The researchers estimate that the technique reduced the number of runs of the CGYRO code by a factor of four.
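    The iterate-and-refit pattern described above can be sketched in a few lines. Everything here is an illustrative assumption, not the researchers' code: a toy one-dimensional "flux mismatch" function stands in for a CGYRO run, and a simple quadratic fit stands in for the actual surrogate model.

```python
import numpy as np

def expensive_flux_mismatch(x):
    """Stand-in for a costly turbulence simulation: the mismatch between
    turbulent heat loss and input power at profile parameter x (toy function)."""
    return (x - 1.7) ** 2 + 0.1 * np.sin(5 * x)

def surrogate_optimize(f, x_init, n_iter=10, tol=1e-3):
    """Fit a cheap quadratic surrogate to all expensive evaluations so far,
    jump to the surrogate's minimum, re-evaluate the expensive function there,
    and repeat until the step size converges."""
    xs = list(x_init)
    ys = [f(x) for x in xs]
    for _ in range(n_iter):
        a, b, c = np.polyfit(xs, ys, 2)   # cheap quadratic surrogate model
        x_new = -b / (2 * a)              # surrogate's predicted optimum
        xs.append(x_new)
        ys.append(f(x_new))               # one new expensive run per iteration
        if abs(xs[-1] - xs[-2]) < tol:
            break
    return xs[-1], len(xs)                # optimum found and total run count

x_opt, n_runs = surrogate_optimize(expensive_flux_mismatch, [0.0, 1.0, 3.0])
print(round(x_opt, 3), n_runs)
```

    The economy is the same one that cut the CGYRO run count: each iteration pays for only one new expensive evaluation, while the surrogate absorbs the cost of exploring the parameter space.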

    New approach increases confidence in predictions

    This work, described in a recent publication in the journal Nuclear Fusion, is the highest-fidelity calculation ever made of the core of a fusion plasma. It refines and confirms predictions made with less demanding models. Professor Jonathan Citrin, of the Eindhoven University of Technology and leader of the fusion modeling group for DIFFER, the Dutch Institute for Fundamental Energy Research, commented: “The work significantly accelerates our capabilities in more routinely performing ultra-high-fidelity tokamak scenario prediction. This algorithm can help provide the ultimate validation test of machine design or scenario optimization carried out with faster, more reduced modeling, greatly increasing our confidence in the outcomes.” 

    In addition to increasing confidence in the fusion performance of the SPARC experiment, this technique provides a roadmap to check and calibrate reduced physics models, which run with a small fraction of the computational power. Such models, cross-checked against the results generated from turbulence simulations, will provide a reliable prediction before each SPARC discharge, helping to guide experimental campaigns and improving the scientific exploitation of the device. It can also be used to tweak and improve even simple data-driven models, which run extremely quickly, allowing researchers to sift through enormous parameter ranges to narrow down possible experiments or possible future machines.

    The research was funded by CFS, with computational support from the National Energy Research Scientific Computing Center, a U.S. Department of Energy Office of Science User Facility. More

  • in

    What choices does the world need to make to keep global warming below 2 C?

    When the 2015 Paris Agreement set a long-term goal of keeping global warming “well below 2 degrees Celsius, compared to pre-industrial levels” to avoid the worst impacts of climate change, it did not specify how its nearly 200 signatory nations could collectively achieve that goal. Each nation was left to its own devices to reduce greenhouse gas emissions in alignment with the 2 C target. Now a new modeling strategy developed at the MIT Joint Program on the Science and Policy of Global Change that explores hundreds of potential future development pathways provides new insights on the energy and technology choices needed for the world to meet that target.

    Described in a study appearing in the journal Earth’s Future, the new strategy combines two well-known computer modeling techniques to scope out the energy and technology choices needed over the coming decades to reduce emissions sufficiently to achieve the Paris goal.

    The first technique, Monte Carlo analysis, quantifies uncertainty levels for dozens of energy and economic indicators including fossil fuel availability, advanced energy technology costs, and population and economic growth; feeds that information into a multi-region, multi-economic-sector model of the world economy that captures the cross-sectoral impacts of energy transitions; and runs that model hundreds of times to estimate the likelihood of different outcomes. The MIT study focuses on projections through the year 2100 of economic growth and emissions for different sectors of the global economy, as well as energy and technology use.
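    As a rough illustration of the Monte Carlo step — with made-up input distributions and a one-line toy "economy model" standing in for the Joint Program's multi-region, multi-sector model — the sample-and-rerun pattern looks like this:

```python
import numpy as np

rng = np.random.default_rng(42)
n_runs = 500  # hundreds of model runs, as in the study

# Illustrative uncertain inputs; the distributions are assumptions, not the study's.
gdp_growth = rng.normal(0.025, 0.01, n_runs)    # annual GDP growth rate
decarb_rate = rng.uniform(0.00, 0.05, n_runs)   # emissions-intensity decline per year
emissions_0 = 36.0                              # global GtCO2/yr today (illustrative)

# Toy stand-in for the world-economy model: emissions grow with GDP and
# shrink with decarbonization, compounded over ~30 years to 2050.
years = 30
emissions_2050 = emissions_0 * (1 + gdp_growth - decarb_rate) ** years

# Estimate the likelihood of an outcome of interest,
# e.g. 2050 emissions falling below today's level.
p_below_today = np.mean(emissions_2050 < emissions_0)
print(round(p_below_today, 2))
```

    Each draw of inputs produces one complete future pathway; the fraction of pathways satisfying a condition estimates its likelihood.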

    The second technique, scenario discovery, uses machine learning tools to screen databases of model simulations in order to identify outcomes of interest and their conditions for occurring. The MIT study applies these tools in a unique way by combining them with the Monte Carlo analysis to explore how different outcomes are related to one another (e.g., do low-emission outcomes necessarily involve large shares of renewable electricity?). This approach can also identify individual scenarios, out of the hundreds explored, that result in specific combinations of outcomes of interest (e.g., scenarios with low emissions, high GDP growth, and limited impact on electricity prices), and also provide insight into the conditions needed for that combination of outcomes.
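    The screening step can be pictured as filtering a table of finished model runs. The column names, thresholds, and random data below are invented for illustration; the study's actual scenario-discovery tools are more sophisticated:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 400  # hundreds of Monte Carlo scenarios

# Toy scenario database; all columns and values are illustrative only.
db = pd.DataFrame({
    "emissions_2050": rng.uniform(5, 40, n),         # GtCO2/yr
    "gdp_growth": rng.uniform(0.01, 0.04, n),        # annual rate
    "elec_price_change": rng.uniform(-0.1, 0.5, n),  # relative to today
    "renewable_share": rng.uniform(0.1, 0.9, n),     # share of electricity
})

# Scenario discovery: select runs with a desired combination of outcomes
# (low emissions, high growth, limited electricity-price impact).
of_interest = db[
    (db.emissions_2050 < 15)
    & (db.gdp_growth > 0.025)
    & (db.elec_price_change < 0.2)
]

# Inspect the conditions under which that combination occurs, e.g. whether
# low emissions necessarily involve a high renewable share.
print(len(of_interest), of_interest.renewable_share.mean().round(2))
```

    Examining the spread of the remaining columns in the filtered set is what reveals whether an outcome of interest has one necessary precondition or many alternative routes.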

    Using this unique approach, the MIT Joint Program researchers find several possible patterns of energy and technology development under a specified long-term climate target or economic outcome.

    “This approach shows that there are many pathways to a successful energy transition that can be a win-win for the environment and economy,” says Jennifer Morris, an MIT Joint Program research scientist and the study’s lead author. “Toward that end, it can be used to guide decision-makers in government and industry to make sound energy and technology choices and avoid biases in perceptions of what ‘needs’ to happen to achieve certain outcomes.”

    For example, while achieving the 2 C goal, the global level of combined wind and solar electricity generation by 2050 could be less than three times or more than 12 times the current level (which is just over 2,000 terawatt hours). These are very different energy pathways, but both can be consistent with the 2 C goal. Similarly, there are many different energy mixes that can be consistent with maintaining high GDP growth in the United States while also achieving the 2 C goal, with different possible roles for renewables, natural gas, carbon capture and storage, and bioenergy. The study finds renewables to be the most robust electricity investment option, with sizable growth projected under each of the long-term temperature targets explored.

    The researchers also find that long-term climate targets have little impact on economic output for most economic sectors through 2050, but do require each sector to significantly accelerate reduction of its greenhouse gas emissions intensity (emissions per unit of economic output) so as to reach near-zero levels by midcentury.

    “Given the range of development pathways that can be consistent with meeting a 2 degrees C goal, policies that target only specific sectors or technologies can unnecessarily narrow the solution space, leading to higher costs,” says former MIT Joint Program Co-Director John Reilly, a co-author of the study. “Our findings suggest that policies designed to encourage a portfolio of technologies and sectoral actions can be a wise strategy that hedges against risks.”

    The research was supported by the U.S. Department of Energy Office of Science. More

  • in

    At Climate Grand Challenges showcase event, an exploration of how to accelerate breakthrough solutions

    On the eve of Earth Day, more than 300 faculty, researchers, students, government officials, and industry leaders gathered in the Samberg Conference Center, along with thousands more who tuned in online, to celebrate MIT’s first-ever Climate Grand Challenges and the five most promising concepts to emerge from the two-year competition.

    The event began with a climate policy conversation between MIT President L. Rafael Reif and Special Presidential Envoy for Climate John Kerry, followed by presentations from each of the winning flagship teams, and concluded with an expert panel that explored pathways for moving from ideas to impact at scale as quickly as possible.

    “In 2020, when we launched the Climate Grand Challenges, we wanted to focus the daring creativity and pioneering expertise of the MIT community on the urgent problem of climate change,” said President Reif in kicking off the event. “Together these flagship projects will define a transformative new research agenda at MIT, one that has the potential to make meaningful contributions to the global climate response.”

    Reif and Kerry discussed multiple aspects of the climate crisis, including mitigation, adaptation, and the policies and strategies that can help the world avert the worst consequences of climate change and make the United States a leader again in bringing technology into commercial use. Referring to the accelerated wartime research effort that helped turn the tide in World War II, which included work conducted at MIT, Kerry said, “We need about five Manhattan Projects, frankly.”

    “People are now sensing a much greater urgency to finding solutions — new technology — and taking to scale some of the old technologies,” Kerry said. “There are things that are happening that I think are exciting, but the problem is it’s not happening fast enough.”

    Strategies for taking technology from the lab to the marketplace were the basis for the final portion of the event. The panel was moderated by Alicia Barton, president and CEO of FirstLight Power, and included Manish Bapna, president and CEO of the Natural Resources Defense Council; Jack Little, CEO and co-founder of MathWorks; Arati Prabhakar, president of Actuate and former head of the Defense Advanced Research Projects Agency; and Katie Rae, president and managing director of The Engine. The discussion touched upon the importance of marshaling the necessary resources and building the cross-sector partnerships required to scale the technologies being developed by the flagship teams and to deliver them to the world in time to make a difference. 

    “MIT doesn’t sit on its hands ever, and innovation is central to its founding,” said Rae. “The students coming out of MIT at every level, along with the professors, have been committed to these challenges for a long time and therefore will have a big impact. These flagships have always been in process, but now we have an extraordinary moment to commercialize these projects.”

    The panelists weighed in on how to change the mindset around finance, policy, business, and community adoption to scale massive shifts in energy generation, transportation, and other major carbon-emitting industries. They stressed the importance of policies that address the economic, equity, and public health impacts of climate change and of reimagining supply chains and manufacturing to grow and distribute these technologies quickly and affordably. 

    “We are embarking on five adventures, but we do not know yet, cannot know yet, where these projects will take us,” said Maria Zuber, MIT’s vice president for research. “These are powerful and promising ideas. But each one will require focused effort, creative and interdisciplinary teamwork, and sustained commitment and support if they are to become part of the climate and energy revolution that the world urgently needs. This work begins now.” 

    Zuber called for investment from philanthropists and financiers, and urged companies, governments, and others to join this all-of-humanity effort. Associate Provost for International Activities Richard Lester echoed this message in closing the event. 

    “Every one of us needs to put our shoulder to the wheel at the points where our leverage is maximized — where we can do what we’re best at,” Lester said. “For MIT, Climate Grand Challenges is one of those maximum leverage points.” More

  • in

    Using plant biology to address climate change

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the fourth in a five-part series highlighting the most promising concepts to emerge from the competition and the interdisciplinary research teams behind them.

    The impact of our changing climate on agriculture and food security — and how contemporary agriculture contributes to climate change — is at the forefront of MIT’s multidisciplinary project “Revolutionizing agriculture with low-emissions, resilient crops.” The project is one of five flagship winners in the Climate Grand Challenges competition, and brings together researchers from the departments of Biology, Biological Engineering, Chemical Engineering, and Civil and Environmental Engineering.

    “Our team’s research seeks to address two connected challenges: first, the need to reduce the greenhouse gas emissions produced by agricultural fertilizer; second, the fact that the yields of many current agricultural crops will decrease, due to the effects of climate change on plant metabolism,” says the project’s faculty lead, Christopher Voigt, the Daniel I.C. Wang Professor in MIT’s Department of Biological Engineering. “We are pursuing six interdisciplinary projects that are each key to our overall goal of developing low-emissions methods for fertilizing plants that are bioengineered to be more resilient and productive in a changing climate.”

    Whitehead Institute members Mary Gehring and Jing-Ke Weng, plant biologists who are also associate professors in MIT’s Department of Biology, will lead two of those projects.

    Promoting crop resilience

    For most of human history, climate change occurred gradually, over hundreds or thousands of years. That pace allowed plants to adapt to variations in temperature, precipitation, and atmospheric composition. However, human-driven climate change has occurred much more quickly, and crop plants have suffered: Crop yields are down in many regions, as is seed protein content in cereal crops.

    “If we want to ensure an abundant supply of nutritious food for the world, we need to develop fundamental mechanisms for bioengineering a wide variety of crop plants that will be both hearty and nutritious in the face of our changing climate,” says Gehring. In her previous work, she has shown that many aspects of plant reproduction and seed development are controlled by epigenetics — that is, by information outside of the DNA sequence. She has been using that knowledge and the research methods she has developed to identify ways to create varieties of seed-producing plants that are more productive and resilient than current food crops.

    But plant biology is complex, and while it is possible to develop plants that integrate robustness-enhancing traits by combining dissimilar parental strains, scientists are still learning how to ensure that the new traits are carried forward from one generation to the next. “Plants that carry the robustness-enhancing traits have ‘hybrid vigor,’ and we believe that the perpetuation of those traits is controlled by epigenetics,” Gehring explains. “Right now, some food crops, like corn, can be engineered to benefit from hybrid vigor, but those traits are not inherited. That’s why farmers growing many of today’s most productive varieties of corn must purchase and plant new batches of seeds each year. Moreover, many important food crops have not yet realized the benefits of hybrid vigor.”

    The project Gehring leads, “Developing Clonal Seed Production to Fix Hybrid Vigor,” aims to enable food crop plants to create seeds that are both more robust and genetically identical to the parent — and thereby able to pass beneficial traits from generation to generation.

    The process of clonal (or asexual) production of seeds that are genetically identical to the maternal parent is called apomixis. Gehring says, “Because apomixis is present in 400 flowering plant species — about 1 percent of flowering plant species — it is probable that genes and signaling pathways necessary for apomixis are already present within crop plants. Our challenge is to tweak those genes and pathways so that the plant switches reproduction from sexual to asexual.”

    The project will leverage the fact that genes and pathways related to autonomous asexual development of the endosperm — a seed’s nutritive tissue — exist in the model plant Arabidopsis thaliana. In previous work on Arabidopsis, Gehring’s lab researched a specific gene that, when misregulated, drives development of an asexual endosperm-like material. “Normally, that seed would not be viable,” she notes. “But we believe that by epigenetic tuning of the expression of additional relevant genes, we will enable the plant to retain that material — and help achieve apomixis.”

    If Gehring and her colleagues succeed in creating a gene-expression “formula” for introducing endosperm apomixis into a wide range of crop plants, they will have made a fundamental and important achievement. Such a method could be applied throughout agriculture to create and perpetuate new crop breeds able to withstand their changing environments while requiring less fertilizer and fewer pesticides.

    Creating “self-fertilizing” crops

    Roughly a quarter of greenhouse gas (GHG) emissions in the United States are a product of agriculture. Fertilizer production and use account for one-third of those emissions and include nitrous oxide, which has a heat-trapping capacity 298 times that of carbon dioxide, according to a 2018 Frontiers in Plant Science study. Most artificial fertilizer production also consumes huge quantities of natural gas and uses minerals mined from nonrenewable resources. After all that, much of the nitrogen fertilizer becomes runoff that pollutes local waterways. For those reasons, this Climate Grand Challenges flagship project aims to greatly reduce the use of human-made fertilizers.
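    Taken together, those round numbers imply that fertilizer alone accounts for roughly 8 percent of U.S. greenhouse gas emissions — a quick back-of-the-envelope check, using only the figures quoted above:

```python
# Back-of-the-envelope arithmetic from the article's round numbers.
agriculture_share = 0.25   # agriculture's share of U.S. GHG emissions
fertilizer_share = 1 / 3   # fertilizer's share of agricultural emissions
n2o_gwp = 298              # heat-trapping capacity of N2O relative to CO2

# Fertilizer's implied share of total U.S. emissions: about 8 percent.
fertilizer_of_total = agriculture_share * fertilizer_share
print(round(fertilizer_of_total, 3))

# Each tonne of avoided N2O counts as ~298 tonnes of CO2-equivalent,
# which is why targeting fertilizer emissions has outsized leverage.
co2e_per_tonne_n2o = 1 * n2o_gwp
```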

    One tantalizing approach is to cultivate cereal crop plants — which account for about 75 percent of global food production — capable of drawing nitrogen from metabolic interactions with bacteria in the soil. Whitehead Institute’s Weng leads an effort to do just that: genetically bioengineer crops such as corn, rice, and wheat to, essentially, create their own fertilizer through a symbiotic relationship with nitrogen-fixing microbes.

    “Legumes such as bean and pea plants can form root nodules through which they receive nitrogen from rhizobia bacteria in exchange for carbon,” Weng explains. “This metabolic exchange means that legumes release far less greenhouse gas — and require far less investment of fossil energy — than do cereal crops, which use a huge portion of the artificially produced nitrogen fertilizers employed today.

    “Our goal is to develop methods for transferring legumes’ ‘self-fertilizing’ capacity to cereal crops,” Weng says. “If we can, we will revolutionize the sustainability of food production.”

    The project — formally entitled “Mimicking legume-rhizobia symbiosis for fertilizer production in cereals” — will be a multistage, five-year effort. It draws on Weng’s extensive studies of metabolic evolution in plants and his identification of molecules involved in formation of the root nodules that permit exchanges between legumes and nitrogen-fixing bacteria. It also leverages his expertise in reconstituting specific signaling and metabolic pathways in plants.

    Weng and his colleagues will begin by deciphering the full spectrum of small-molecule signaling processes that occur between legumes and rhizobium bacteria. Then they will genetically engineer an analogous system in nonlegume crop plants. Next, using state-of-the-art metabolomic methods, they will identify which small molecules excreted from legume roots prompt a nitrogen/carbon exchange from rhizobium bacteria. Finally, the researchers will genetically engineer the biosynthesis of those molecules in the roots of nonlegume plants and observe their effect on the rhizobium bacteria surrounding the roots.

    While the project is complex and technically challenging, its potential is staggering. “Focusing on corn alone, this could reduce the production and use of nitrogen fertilizer by 160,000 tons,” Weng notes. “And it could halve the related emissions of nitrous oxide gas.” More