More stories


    Study finds natural sources of air pollution exceed air quality guidelines in many regions

    Alongside climate change, air pollution is one of the biggest environmental threats to human health. Tiny particles known as particulate matter or PM2.5 (named for their diameter of just 2.5 micrometers or less) are a particularly hazardous type of pollutant. These particles are produced from a variety of sources, including wildfires and the burning of fossil fuels, and can enter our bloodstream, travel deep into our lungs, and cause respiratory and cardiovascular damage. Exposure to particulate matter is responsible for millions of premature deaths globally every year.

    In response to the increasing body of evidence on the detrimental effects of PM2.5, the World Health Organization (WHO) recently updated its air quality guidelines, lowering its recommended annual PM2.5 exposure guideline by 50 percent, from 10 micrograms per cubic meter (μg/m³) to 5 μg/m³. These updated guidelines signify an aggressive attempt to promote the regulation and reduction of anthropogenic emissions in order to improve global air quality.

    A new study by researchers in the MIT Department of Civil and Environmental Engineering explores whether the updated air quality guideline of 5 μg/m³ is realistically attainable across different regions of the world, even if anthropogenic emissions are aggressively reduced.

    The first question the researchers wanted to investigate was to what degree moving to a no-fossil-fuel future would help different regions meet this new air quality guideline.

    “The answer we found is that eliminating fossil-fuel emissions would improve air quality around the world, but while this would help some regions come into compliance with the WHO guidelines, for many other regions high contributions from natural sources would impede their ability to meet that target,” says senior author Colette Heald, the Germeshausen Professor in the MIT departments of Civil and Environmental Engineering, and Earth, Atmospheric and Planetary Sciences. 

    The study by Heald, Professor Jesse Kroll, and graduate students Sidhant Pai and Therese Carter, published June 6 in the journal Environmental Science &amp; Technology Letters, finds that over 90 percent of the global population is currently exposed to average annual concentrations that are higher than the recommended guideline. The authors go on to demonstrate that over 50 percent of the world’s population would still be exposed to PM2.5 concentrations that exceed the new air quality guidelines, even in the absence of all anthropogenic emissions.

    This is due to the large natural sources of particulate matter — dust, sea salt, and organics from vegetation — that still exist in the atmosphere when anthropogenic emissions are removed from the air. 

    “If you live in parts of India or northern Africa that are exposed to large amounts of fine dust, it can be challenging to reduce PM2.5 exposures below the new guideline,” says Sidhant Pai, co-lead author and graduate student. “This study challenges us to rethink the value of different emissions abatement controls across different regions and suggests the need for a new generation of air quality metrics that can enable targeted decision-making.”

    The researchers conducted a series of model simulations to explore the viability of achieving the updated PM2.5 guidelines worldwide under different emissions reduction scenarios, using 2019 as a representative baseline year. 

    Their model simulations used a suite of different anthropogenic sources that could be turned on and off to study the contribution of a particular source. For instance, the researchers conducted a simulation that turned off all human-based emissions in order to determine the amount of PM2.5 pollution that could be attributed to natural and fire sources. By analyzing the chemical composition of the PM2.5 aerosol in the atmosphere (e.g., dust, sulfate, and black carbon), the researchers were also able to get a more accurate understanding of the most important PM2.5 sources in a particular region. For example, elevated PM2.5 concentrations in the Amazon were shown to predominantly consist of carbon-containing aerosols from sources like deforestation fires. Conversely, nitrogen-containing aerosols were prominent in Northern Europe, with large contributions from vehicles and fertilizer usage. The two regions would thus require very different policies and methods to improve their air quality. 
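The paper itself does not publish code, but the turn-off-one-source attribution logic the researchers describe can be sketched in a few lines. All names and concentration values below are invented for illustration; a source's contribution is estimated as the difference between the baseline run and a run with that source's emissions zeroed out.

```python
# Hypothetical sketch of source attribution by differencing model runs.
# Concentrations are annual-mean PM2.5 in ug/m^3 (all numbers made up).

baseline = {"amazon": 18.0, "northern_europe": 12.0}

# Annual-mean PM2.5 from simulations with one emission source turned off.
runs_without_source = {
    "fires":       {"amazon": 6.0,  "northern_europe": 11.5},
    "agriculture": {"amazon": 17.0, "northern_europe": 8.0},
}

def source_contribution(region: str, source: str) -> float:
    """Estimated contribution of `source` to PM2.5 in `region` (ug/m^3)."""
    return baseline[region] - runs_without_source[source][region]

for region in baseline:
    for source in runs_without_source:
        print(f"{region}: {source} contributes "
              f"{source_contribution(region, source):.1f} ug/m^3")
```

In this toy example, fires dominate in the Amazon while agriculture dominates in Northern Europe, mirroring the regional contrast the study describes.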

    “Analyzing particulate pollution across individual chemical species allows for mitigation and adaptation decisions that are specific to the region, as opposed to a one-size-fits-all approach, which can be challenging to execute without an understanding of the underlying importance of different sources,” says Pai. 

    When the WHO air quality guidelines were last updated in 2005, they had a significant impact on environmental policies. Scientists could look at an area that was not in compliance and suggest high-level solutions to improve the region’s air quality. But as the guidelines have tightened, globally applicable solutions to manage and improve air quality are no longer as evident. 

    “Another benefit of speciating is that some of the particles have different toxicity properties that are correlated to health outcomes,” says Therese Carter, co-lead author and graduate student. “It’s an important area of research that this work can help motivate. Being able to separate out that piece of the puzzle can provide epidemiologists with more insights on the different toxicity levels and the impact of specific particles on human health.”

    The authors view these new findings as an opportunity to expand and iterate on the current guidelines.  

    “Routine and global measurements of the chemical composition of PM2.5 would give policymakers information on what interventions would most effectively improve air quality in any given location,” says Jesse Kroll, a professor in the MIT departments of Civil and Environmental Engineering and Chemical Engineering. “But it would also provide us with new insights into how different chemical species in PM2.5 affect human health.”

    “I hope that as we learn more about the health impacts of these different particles, our work and that of the broader atmospheric chemistry community can help inform strategies to reduce the pollutants that are most harmful to human health,” adds Heald.


    Cracking the case of Arctic sea ice breakup

    Despite its below-freezing temperatures, the Arctic is warming twice as fast as the rest of the planet. As Arctic sea ice melts, fewer bright surfaces are available to reflect sunlight back into space. When fractures open in the ice cover, the water underneath gets exposed. Dark, ice-free water absorbs the sun’s energy, heating the ocean and driving further melting — a vicious cycle. This warming in turn melts glacial ice, contributing to rising sea levels.

    Warming climate and rising sea levels endanger the nearly 40 percent of the U.S. population living in coastal areas, the billions of people who depend on the ocean for food and their livelihoods, and species such as polar bears and Arctic foxes. Reduced ice coverage is also making the once-impassable region more accessible, opening up new shipping lanes and ports. Interest in using these emerging trans-Arctic routes for product transit, extraction of natural resources (e.g., oil and gas), and military activity is turning an area traditionally marked by low tension and cooperation into one of global geopolitical competition.

    As the Arctic opens up, predicting when and where the sea ice will fracture becomes increasingly important in strategic decision-making. However, huge gaps exist in our understanding of the physical processes contributing to ice breakup. Researchers at MIT Lincoln Laboratory seek to help close these gaps by turning a data-sparse environment into a data-rich one. They envision deploying a distributed set of unattended sensors across the Arctic that will persistently detect and geolocate ice fracturing events. Concurrently, the network will measure various environmental conditions, including water temperature and salinity, wind speed and direction, and ocean currents at different depths. By correlating these fracturing events and environmental conditions, they hope to discover meaningful insights about what is causing the sea ice to break up. Such insights could help predict the future state of Arctic sea ice to inform climate modeling, climate change planning, and policy decision-making at the highest levels.

    “We’re trying to study the relationship between ice cracking, climate change, and heat flow in the ocean,” says Andrew March, an assistant leader of Lincoln Laboratory’s Advanced Undersea Systems and Technology Group. “Do cracks in the ice cause warm water to rise and more ice to melt? Do undersea currents and waves cause cracking? Does cracking cause undersea waves? These are the types of questions we aim to investigate.”

    Arctic access

    In March 2022, Ben Evans and Dave Whelihan, both researchers in March’s group, traveled for 16 hours across three flights to Prudhoe Bay, located on the North Slope of Alaska. From there, they boarded a small specialized aircraft and flew another 90 minutes to a three-and-a-half-mile-long sheet of ice floating 160 nautical miles offshore in the Arctic Ocean. In the weeks before their arrival, the U.S. Navy’s Arctic Submarine Laboratory had transformed this inhospitable ice floe into a temporary operating base called Ice Camp Queenfish, named after the first Sturgeon-class submarine to operate under the ice and the fourth to reach the North Pole. The ice camp featured a 2,500-foot-long runway, a command center, sleeping quarters to accommodate up to 60 personnel, a dining tent, and an extremely limited internet connection.

    At Queenfish, for the next four days, Evans and Whelihan joined U.S. Navy, Army, Air Force, Marine Corps, and Coast Guard members, and members of the Royal Canadian Air Force and Navy and United Kingdom Royal Navy, who were participating in Ice Exercise (ICEX) 2022. Over the course of about three weeks, more than 200 personnel stationed at Queenfish, Prudhoe Bay, and aboard two U.S. Navy submarines participated in this biennial exercise. The goals of ICEX 2022 were to assess U.S. operational readiness in the Arctic; increase our country’s experience in the region; advance our understanding of the Arctic environment; and continue building relationships with other services, allies, and partner organizations to ensure a free and peaceful Arctic. The infrastructure provided for ICEX also enables scientists to conduct research — either in person or by sending their equipment for exercise organizers to deploy on their behalf — in an environment that would otherwise be extremely difficult and expensive to access.

    In the Arctic, windchill temperatures can plummet to as low as 60 degrees Fahrenheit below zero, cold enough to freeze exposed skin within minutes. Winds and ocean currents can cause the entire camp to drift beyond the reach of nearby emergency rescue aircraft, and the ice can crack at any moment. To ensure the safety of participants, a team of Navy meteorological specialists continually monitors the ever-changing conditions. The original camp location for ICEX 2022 had to be evacuated and relocated after a massive crack formed in the ice, delaying Evans’ and Whelihan’s trip. Even at the newly selected site, a large crack formed behind the camp, and another crack necessitated moving a number of tents.

    “Such cracking events are only going to increase as the climate warms, so it’s more critical now than ever to understand the physical processes behind them,” Whelihan says. “Such an understanding will require building technology that can persist in the environment despite these incredibly harsh conditions. So, it’s a challenge not only from a scientific perspective but also an engineering one.”

    “The weather always gets a vote, dictating what you’re able to do out here,” adds Evans. “The Arctic Submarine Laboratory does a lot of work to construct the camp and make it a safe environment where researchers like us can come to do good science. ICEX is really the only opportunity we have to go onto the sea ice in a place this remote to collect data.”

    A legacy of sea ice experiments

    Though this trip was Whelihan’s and Evans’ first to the Arctic region, staff from the laboratory’s Advanced Undersea Systems and Technology Group have been conducting experiments at ICEX since 2018. However, because of the Arctic’s remote location and extreme conditions, data collection has rarely been continuous over long periods of time or widespread across large areas. The team now hopes to change that by building low-cost, expendable sensing platforms consisting of co-located devices that can be left unattended for automated, persistent, near-real-time monitoring. 

    “The laboratory’s extensive expertise in rapid prototyping, seismo-acoustic signal processing, remote sensing, and oceanography makes us a natural fit to build this sensor network,” says Evans.

    In the months leading up to the Arctic trip, the team collected seismometer data at Firepond, part of the laboratory’s Haystack Observatory site in Westford, Massachusetts. Through this local data collection, they aimed to gain a sense of what anthropogenic (human-induced) noise would look like so they could begin to anticipate the kinds of signatures they might see in the Arctic. They also collected ice melting/fracturing data during a thaw cycle and correlated these data with the weather conditions (air temperature, humidity, and pressure). Through this analysis, they detected an increase in seismic signals as the temperature rose above 32 F — an indication that air temperature and ice cracking may be related.
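The Firepond correlation analysis can be illustrated with a toy calculation: bin seismic event counts by whether the air temperature was above or below freezing, and compare the mean rates. The observations below are invented purely to show the shape of the analysis, not real Firepond data.

```python
# Toy version of the thaw-cycle analysis: compare seismic event rates
# at or below freezing versus above freezing (all numbers invented).
freezing_F = 32.0
observations = [  # (air temperature in deg F, seismic events detected that hour)
    (25.0, 1), (28.0, 0), (30.0, 2), (33.0, 5), (35.0, 7), (38.0, 6),
]

below = [n for t, n in observations if t <= freezing_F]
above = [n for t, n in observations if t > freezing_F]
print(sum(below) / len(below))  # mean events/hour at or below freezing
print(sum(above) / len(above))  # mean events/hour above freezing
```

A markedly higher rate above freezing, as in this toy data, is the kind of signal that would suggest air temperature and ice cracking are related.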

    A sensing network

    At ICEX, the team deployed various commercial off-the-shelf sensors and new sensors developed by the laboratory and University of New Hampshire (UNH) to assess their resiliency in the frigid environment and to collect an initial dataset.

    “One aspect that differentiates these experiments from those of the past is that we concurrently collected seismo-acoustic data and environmental parameters,” says Evans.

    The commercial technologies were seismometers to detect the vibrational energy released when sea ice fractures or collides with other ice floes; a hydrophone (underwater microphone) array to record the acoustic energy created by ice-fracturing events; a sound speed profiler to measure the speed of sound through the water column; and a conductivity, temperature, and depth (CTD) profiler to measure the salinity (related to conductivity), temperature, and pressure (related to depth) throughout the water column. The speed of sound in the ocean primarily depends on these three quantities. 
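The dependence of sound speed on temperature, salinity, and depth is commonly captured by empirical fits; the article does not say which one the team uses, but Medwin's simplified formula is a standard choice and makes the CTD-to-sound-speed connection concrete (valid roughly for 0-35°C, 0-45 ppt, and 0-1000 m, so Arctic sub-zero temperatures sit just outside its nominal range).

```python
# Medwin's (1975) simplified empirical fit for the speed of sound in seawater.
# T: temperature (deg C), S: salinity (ppt), z: depth (m); returns m/s.
def sound_speed(T: float, S: float, z: float) -> float:
    return (1449.2 + 4.6 * T - 0.055 * T**2 + 0.00029 * T**3
            + (1.34 - 0.010 * T) * (S - 35.0) + 0.016 * z)

# Example: near-freezing surface water at a typical Arctic salinity.
print(sound_speed(-1.5, 32.0, 10.0))  # roughly 1438 m/s
```

Note how the temperature terms dominate: a few degrees of warming changes the sound speed by several meters per second, which is why the CTD and sound speed profiler measurements are so closely linked.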

    To precisely measure the temperature across the entire water column at one location, they deployed an array of transistor-based temperature sensors developed by the laboratory’s Advanced Materials and Microsystems Group in collaboration with the Advanced Functional Fabrics of America Manufacturing Innovation Institute. The small temperature sensors run along the length of a thread-like polymer fiber embedded with multiple conductors. This fiber platform, which can support a broad range of sensors, can be unspooled hundreds of feet below the water’s surface to concurrently measure temperature or other water properties — the fiber deployed in the Arctic also contained accelerometers to measure depth — at many points in the water column. Traditionally, temperature profiling has required moving a device up and down through the water column.

    The team also deployed a high-frequency echosounder supplied by Anthony Lyons and Larry Mayer, collaborators at UNH’s Center for Coastal and Ocean Mapping. This active sonar uses acoustic energy to detect internal waves, or waves occurring beneath the ocean’s surface.

    “You may think of the ocean as a homogenous body of water, but it’s not,” Evans explains. “Different currents can exist as you go down in depth, much like how you can get different winds when you go up in altitude. The UNH echosounder allows us to see the different currents in the water column, as well as ice roughness when we turn the sensor to look upward.”

    “The reason we care about currents is that we believe they will tell us something about how warmer water from the Atlantic Ocean is coming into contact with sea ice,” adds Whelihan. “Not only is that water melting ice but it also has lower salt content, resulting in oceanic layers and affecting how long ice lasts and where it lasts.”

    Back home, the team has begun analyzing their data. For the seismic data, this analysis involves distinguishing any ice events from various sources of anthropogenic noise, including generators, snowmobiles, footsteps, and aircraft. Similarly, the researchers know their hydrophone array acoustic data are contaminated by energy from a sound source that another research team participating in ICEX placed in the water. Based on their physics, icequakes — the seismic events that occur when ice cracks — have characteristic signatures that can be used to identify them. One approach is to manually find an icequake and use that signature as a guide for finding other icequakes in the dataset.
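The template-matching approach described above can be sketched with normalized cross-correlation: slide a known icequake signature over the continuous record and flag windows that correlate strongly with it. The signals below are synthetic stand-ins; a real pipeline would use recorded waveforms and a more efficient correlation routine.

```python
# Minimal template-matching sketch: find windows of a seismic record that
# strongly resemble a known "icequake" signature (synthetic data throughout).
import numpy as np

def match_template(signal, template, threshold=0.8):
    """Return start indices where normalized cross-correlation >= threshold."""
    n = len(template)
    t = (template - template.mean()) / (template.std() * n)
    hits = []
    for i in range(len(signal) - n + 1):
        w = signal[i:i + n]
        std = w.std()
        if std == 0:
            continue
        cc = np.sum((w - w.mean()) / std * t)  # Pearson correlation
        if cc >= threshold:
            hits.append(i)
    return hits

rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 6 * np.pi, 50)) * np.hanning(50)  # toy signature
record = rng.normal(0, 0.1, 500)
record[200:250] += template  # bury one event in noise
print(match_template(record, template))  # flags the event near index 200
```

Each manually confirmed icequake adds a template, so the catalog of detectable signatures grows as analysts work through the dataset.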

    From their water column profiling sensors, they identified an interesting evolution in the sound speed profile 30 to 40 meters below the ocean surface, related to a mass of colder water moving in later in the day. The group’s physical oceanographer believes this change in the profile is due to water coming up from the Bering Sea, water that initially comes from the Atlantic Ocean. The UNH-supplied echosounder also generated an interesting signal at a similar depth.

    “Our supposition is that this result has something to do with the large sound speed variation we detected, either directly because of reflections off that layer or because of plankton, which tend to rise on top of that layer,” explains Evans.  

    A future predictive capability

    Going forward, the team will continue mining their collected data and use these data to begin building algorithms capable of automatically detecting and localizing — and ultimately predicting — ice events correlated with changes in environmental conditions. To complement their experimental data, they have initiated conversations with organizations that model the physical behavior of sea ice, including the National Oceanic and Atmospheric Administration and the National Ice Center. Merging the laboratory’s expertise in sensor design and signal processing with their expertise in ice physics would provide a more complete understanding of how the Arctic is changing.

    The laboratory team will also start exploring cost-effective engineering approaches for integrating the sensors into packages hardened for deployment in the harsh environment of the Arctic.

    “Until these sensors are truly unattended, the human factor of usability is front and center,” says Whelihan. “Because it’s so cold, equipment can break accidentally. For example, at ICEX 2022, our waterproof enclosure for the seismometers survived, but the enclosure for its power supply, which was made out of a cheaper plastic, shattered in my hand when I went to pick it up.”

    The sensor packages will not only need to withstand the frigid environment but also be able to “phone home” over some sort of satellite data link and sustain their power. The team plans to investigate whether waste heat from processing can keep the instruments warm and how energy could be harvested from the Arctic environment.

    Before the next ICEX scheduled for 2024, they hope to perform preliminary testing of their sensor packages and concepts in Arctic-like environments. While attending ICEX 2022, they engaged with several other attendees — including the U.S. Navy, Arctic Submarine Laboratory, National Ice Center, and University of Alaska Fairbanks (UAF) — and identified cold room experimentation as one area of potential collaboration. Testing can also be performed at outdoor locations a bit closer to home and more easily accessible, such as the Great Lakes in Michigan and a UAF-maintained site in Barrow, Alaska. In the future, the laboratory team may have an opportunity to accompany U.S. Coast Guard personnel on ice-breaking vessels traveling from Alaska to Greenland. The team is also thinking about possible venues for collecting data far removed from human noise sources.

    “Since I’ve told colleagues, friends, and family I was going to the Arctic, I’ve had a lot of interesting conversations about climate change and what we’re doing there and why we’re doing it,” Whelihan says. “People don’t have an intrinsic, automatic understanding of this environment and its impact because it’s so far removed from us. But the Arctic plays a crucial role in helping to keep the global climate in balance, so it’s imperative we understand the processes leading to sea ice fractures.”

    This work is funded through Lincoln Laboratory’s internally administered R&D portfolio on climate.


    How the universe got its magnetic field

    When we look out into space, all of the astrophysical objects that we see are embedded in magnetic fields. This is true not only in the neighborhood of stars and planets, but also in the deep space between galaxies and galactic clusters. These fields are weak — typically much weaker than that of a refrigerator magnet — but they are dynamically significant: they have profound effects on the dynamics of stars, galaxies, and the gas between them. Despite decades of intense interest and research, the origin of these cosmic magnetic fields remains one of the most profound mysteries in cosmology.

    In previous research, scientists came to understand how turbulence, the churning motion common to fluids of all types, could amplify preexisting magnetic fields through the so-called dynamo process. But this remarkable discovery just pushed the mystery one step deeper. If a turbulent dynamo could only amplify an existing field, where did the “seed” magnetic field come from in the first place?

    We wouldn’t have a complete and self-consistent answer to the origin of astrophysical magnetic fields until we understood how the seed fields arose. New work carried out by MIT graduate student Muni Zhou, her advisor Nuno Loureiro, a professor of nuclear science and engineering at MIT, and colleagues at Princeton University and the University of Colorado at Boulder provides an answer that shows the basic processes that generate a field from a completely unmagnetized state to the point where it is strong enough for the dynamo mechanism to take over and amplify the field to the magnitudes that we observe.

    Magnetic fields are everywhere

    Naturally occurring magnetic fields are seen everywhere in the universe. They were first observed on Earth thousands of years ago, through their interaction with magnetized minerals like lodestone, and used for navigation long before people had any understanding of their nature or origin. Magnetism on the sun was discovered at the beginning of the 20th century by its effects on the spectrum of light that the sun emitted. Since then, increasingly powerful telescopes looking deep into space have found that magnetic fields are ubiquitous.

    And while scientists had long learned how to make and use permanent magnets and electromagnets, which had all sorts of practical applications, the natural origins of magnetic fields in the universe remained a mystery. Recent work has provided part of the answer, but many aspects of this question are still under debate.

    Amplifying magnetic fields — the dynamo effect

    Scientists started thinking about this problem by considering the way that electric and magnetic fields were produced in the laboratory. When conductors, like copper wire, move in magnetic fields, electric fields are created. These fields, or voltages, can then drive electrical currents. This is how the electricity that we use every day is produced. Through this process of induction, large generators or “dynamos” convert mechanical energy into the electromagnetic energy that powers our homes and offices. A key feature of dynamos is that they need magnetic fields in order to work.

    But out in the universe, there are no obvious wires or big steel structures, so how do the fields arise? Progress on this problem began about a century ago as scientists pondered the source of the Earth’s magnetic field. By then, studies of the propagation of seismic waves had shown that beneath the solid mantle the Earth has a liquid outer core composed of molten nickel and iron. Researchers theorized that the convective motion of this hot, electrically conductive liquid and the rotation of the Earth combined in some way to generate the Earth’s field.

    Eventually, models emerged that showed how the convective motion could amplify an existing field. This is an example of “self-organization” — a feature often seen in complex dynamical systems — where large-scale structures grow spontaneously from small-scale dynamics. But just like in a power station, you needed a magnetic field to make a magnetic field.

    A similar process is at work all over the universe. However, in stars and galaxies and in the space between them, the electrically conducting fluid is not molten metal, but plasma — a state of matter that exists at extremely high temperatures where the electrons are ripped away from their atoms. On Earth, plasmas can be seen in lightning or neon lights. In such a medium, the dynamo effect can amplify an existing magnetic field, provided it starts at some minimal level.
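The "you need a field to make a field" point has a precise statement in standard magnetohydrodynamics (a textbook result, not specific to this study): the evolution of the magnetic field B in a conducting fluid with velocity v and magnetic diffusivity η is governed by the induction equation,

```latex
\frac{\partial \mathbf{B}}{\partial t}
  = \nabla \times \left( \mathbf{v} \times \mathbf{B} \right)
  + \eta \nabla^{2} \mathbf{B}
```

The first term on the right describes the stretching and folding of field lines by the flow, which is what a dynamo exploits to amplify the field. But every term is proportional to B, so if B is exactly zero it stays zero: the equation can amplify a field, never create one. That is precisely why a separate seed-field mechanism is needed.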

    Making the first magnetic fields

    Where does this seed field come from? That’s where the recent work of Zhou and her colleagues, published May 5 in PNAS, comes in. Zhou developed the underlying theory and performed numerical simulations on powerful supercomputers that show how the seed field can be produced and what fundamental processes are at work. An important aspect of the plasma that exists between stars and galaxies is that it is extraordinarily diffuse — typically about one particle per cubic meter. That is a very different situation from the interior of stars, where the particle density is about 30 orders of magnitude higher. The low densities mean that the particles in cosmological plasmas never collide, which has important effects on their behavior that had to be included in the model that these researchers were developing.   

    Calculations performed by the MIT researchers followed the dynamics in these plasmas, which developed from well-ordered waves but became turbulent as the amplitude grew and the interactions became strongly nonlinear. By including detailed effects of the plasma dynamics at small scales on macroscopic astrophysical processes, they demonstrated that the first magnetic fields can be spontaneously produced through generic large-scale motions as simple as sheared flows. Just like the terrestrial examples, mechanical energy was converted into magnetic energy.

    An important output of their computation was the amplitude of the expected spontaneously generated magnetic field. What this showed was that the field amplitude could rise from zero to a level where the plasma is “magnetized” — that is, where the plasma dynamics are strongly affected by the presence of the field. At this point, the traditional dynamo mechanism can take over and raise the fields to the levels that are observed. Thus, their work represents a self-consistent model for the generation of magnetic fields at cosmological scale.

    Professor Ellen Zweibel of the University of Wisconsin at Madison notes that “despite decades of remarkable progress in cosmology, the origin of magnetic fields in the universe remains unknown. It is wonderful to see state-of-the-art plasma physics theory and numerical simulation brought to bear on this fundamental problem.”

    Zhou and co-workers will continue to refine their model and study the handoff from the generation of the seed field to the amplification phase of the dynamo. An important part of their future research will be to determine if the process can work on a time scale consistent with astronomical observations. To quote the researchers, “This work provides the first step in the building of a new paradigm for understanding magnetogenesis in the universe.”

    This work was funded by the National Science Foundation CAREER Award and the Future Investigators of NASA Earth and Space Science Technology (FINESST) grant.


    MIT Climate and Sustainability Consortium announces recipients of inaugural MCSC Seed Awards

    The MIT Climate and Sustainability Consortium (MCSC) has awarded 20 projects a total of $5 million over two years in its first-ever 2022 MCSC Seed Awards program. The winning projects are led by principal investigators across all five of MIT’s schools.

    The goal of the MCSC Seed Awards is to engage MIT researchers and link the economy-wide work of the consortium to ongoing and emerging climate and sustainability efforts across campus. The program offers further opportunity to build networks among the awarded projects to deepen the impact of each and ensure the total is greater than the sum of its parts.

    For example, to drive progress under the awards category Circularity and Materials, the MCSC can facilitate connections between the technologists at MIT who are developing recovery approaches for metals, plastics, and fiber; the urban planners who are uncovering barriers to reuse; and the engineers who will look for efficiency opportunities in reverse supply chains.

    “The MCSC Seed Awards are designed to complement actions previously outlined in Fast Forward: MIT’s Climate Action Plan for the Decade and, more specifically, the Climate Grand Challenges,” says Anantha P. Chandrakasan, dean of the MIT School of Engineering, Vannevar Bush Professor of Electrical Engineering and Computer Science, and chair of the MIT Climate and Sustainability Consortium. “In collaboration with seed award recipients and MCSC industry members, we are eager to engage in interdisciplinary exploration and propel urgent advancements in climate and sustainability.” 

    By supporting MIT researchers with expertise in economics, infrastructure, community risk assessment, mobility, and alternative fuels, the MCSC will accelerate implementation of cross-disciplinary solutions in the awards category Decarbonized and Resilient Value Chains. Enhancing Natural Carbon Sinks and building connections to local communities will require collaboration among experts in ecosystem change, biodiversity, improved agricultural practices, and engagement with farmers, all of which the consortium can begin to foster through the seed awards.

    “Funding opportunities across campus have been a top priority since launching the MCSC,” says Jeremy Gregory, MCSC executive director. “It is our honor to support innovative teams of MIT researchers through the inaugural 2022 MCSC Seed Awards program.”

    The winning projects are tightly aligned with the MCSC’s areas of focus, which were derived from a year of highly engaged collaborations with MCSC member companies. The projects apply across the members’ climate and sustainability goals.

    The MCSC’s 16 member companies span many industries, and since early 2021, have met with members of the MIT community to define focused problem statements for industry-specific challenges, identify meaningful partnerships and collaborations, and develop clear and scalable priorities. Outcomes from these collaborations laid the foundation for the focus areas, which have shaped the work of the MCSC. Specifically, the MCSC Industry Advisory Board engaged with MIT on key strategic directions, and played a critical role in the MCSC’s series of interactive events. These included virtual workshops hosted last summer, each on a specific topic that allowed companies to work with MIT and each other to align key assumptions, identify blind spots in corporate goal-setting, and leverage synergies between members, across industries. The work continued in follow-up sessions and an annual symposium.

    “We are excited to see how the seed award efforts will help our member companies reach or even exceed their ambitious climate targets, find new cross-sector links among each other, seek opportunities to lead, and ripple key lessons within their industry, while also deepening the Institute’s strong foundation in climate and sustainability research,” says Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering and MCSC co-director.

    As the seed projects take shape, the MCSC will provide ongoing opportunities for awardees to engage with the Industry Advisory Board and technical teams from the MCSC member companies to learn more about the potential for linking efforts to support and accelerate their climate and sustainability goals. Awardees will also have the chance to engage with other members of the MCSC community, including its interdisciplinary Faculty Steering Committee.

    “One of our mantras in the MCSC is to ‘amplify and extend’ existing efforts across campus; we’re always looking for ways to connect the collaborative industry relationships we’re building and the work we’re doing with other efforts on campus,” notes Jeffrey Grossman, the Morton and Claire Goulder and Family Professor in Environmental Systems, head of the Department of Materials Science and Engineering, and MCSC co-director. “We feel the urgency as well as the potential, and we don’t want to miss opportunities to do more and go faster.”

    The MCSC Seed Awards complement the Climate Grand Challenges, a new initiative to mobilize the entire MIT research community around developing the bold, interdisciplinary solutions needed to address difficult, unsolved climate problems. The 27 finalist teams addressed four broad research themes, which align with the MCSC’s focus areas. From these finalist teams, five flagship projects were announced in April 2022.

    The parallels between MCSC’s focus areas and the Climate Grand Challenges themes underscore an important connection between the shared long-term research interests of industry and academia. The challenges that some of the world’s largest and most influential companies have identified are complementary to MIT’s ongoing research and innovation — highlighting the tremendous opportunity to develop breakthroughs and scalable solutions quickly and effectively. Special Presidential Envoy for Climate John Kerry underscored the importance of developing these scalable solutions, including critical new technology, during a conversation with MIT President L. Rafael Reif at MIT’s first Climate Grand Challenges showcase event last month.

    Both the MCSC Seed Awards and the Climate Grand Challenges are part of MIT’s larger commitment and initiative to combat climate change; this was underscored in “Fast Forward: MIT’s Climate Action Plan for the Decade,” which the Institute published in May 2021.

    The project titles and research leads for each of the 20 awardees listed below are categorized by MCSC focus area.

    Decarbonized and resilient value chains

    “Collaborative community mapping toolkit for resilience planning,” led by Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab (a research lead on Climate Grand Challenges flagship project) and Nicholas de Monchaux, professor and department head in the Department of Architecture
    “CP4All: Fast and local climate projections with scientific machine learning — towards accessibility for all of humanity,” led by Chris Hill, principal research scientist in the Department of Earth, Atmospheric and Planetary Sciences and Dava Newman, director of the MIT Media Lab and the Apollo Program Professor in the Department of Aeronautics and Astronautics
    “Emissions reductions and productivity in U.S. manufacturing,” led by Mert Demirer, assistant professor of applied economics at the MIT Sloan School of Management and Jing Li, assistant professor and William Barton Rogers Career Development Chair of Energy Economics in the MIT Sloan School of Management
    “Logistics electrification through scalable and inter-operable charging infrastructure: operations, planning, and policy,” led by Alex Jacquillat, the 1942 Career Development Professor and assistant professor of operations research and statistics in the MIT Sloan School of Management
    “Powertrain and system design for LOHC-powered long-haul trucking,” led by William Green, the Hoyt Hottel Professor in Chemical Engineering in the Department of Chemical Engineering and postdoctoral officer, and Wai K. Cheng, professor in the Department of Mechanical Engineering and director of the Sloan Automotive Laboratory
    “Sustainable Separation and Purification of Biochemicals and Biofuels using Membranes,” led by John Lienhard, the Abdul Latif Jameel Professor of Water in the Department of Mechanical Engineering, director of the Abdul Latif Jameel Water and Food Systems Lab, and director of the Rohsenow Kendall Heat Transfer Laboratory; and Nicolas Hadjiconstantinou, professor in the Department of Mechanical Engineering, co-director of the Center for Computational Science and Engineering, associate director of the Center for Exascale Simulation of Materials in Extreme Environments, and graduate officer
    “Toolkit for assessing the vulnerability of industry infrastructure siting to climate change,” led by Michael Howland, assistant professor in the Department of Civil and Environmental Engineering

    Circularity and materials

    “Colorimetric Sulfidation for Aluminum Recycling,” led by Antoine Allanore, associate professor of metallurgy in the Department of Materials Science and Engineering
    “Double Loop Circularity in Materials Design Demonstrated on Polyurethanes,” led by Brad Olsen, the Alexander and I. Michael Kasser (1960) Professor and graduate admissions co-chair in the Department of Chemical Engineering, and Kristala Prather, the Arthur Dehon Little Professor and department executive officer in the Department of Chemical Engineering
    “Engineering of a microbial consortium to degrade and valorize plastic waste,” led by Otto Cordero, associate professor in the Department of Civil and Environmental Engineering, and Desiree Plata, the Gilbert W. Winslow (1937) Career Development Professor in Civil Engineering and associate professor in the Department of Civil and Environmental Engineering
    “Fruit-peel-inspired, biodegradable packaging platform with multifunctional barrier properties,” led by Kripa Varanasi, professor in the Department of Mechanical Engineering
    “High Throughput Screening of Sustainable Polyesters for Fibers,” led by Gregory Rutledge, the Lammot du Pont Professor in the Department of Chemical Engineering, and Brad Olsen, Alexander and I. Michael Kasser (1960) Professor and graduate admissions co-chair in the Department of Chemical Engineering
    “Short-term and long-term efficiency gains in reverse supply chains,” led by Yossi Sheffi, the Elisha Gray II Professor of Engineering Systems, professor in the Department of Civil and Environmental Engineering, and director of the Center for Transportation and Logistics
    “The costs and benefits of circularity in building construction,” led by Siqi Zheng, the STL Champion Professor of Urban and Real Estate Sustainability at the MIT Center for Real Estate and Department of Urban Studies and Planning, faculty director of the MIT Center for Real Estate, and faculty director for the MIT Sustainable Urbanization Lab; and Randolph Kirchain, principal research scientist and co-director of MIT Concrete Sustainability Hub

    Natural carbon sinks

    “Carbon sequestration through sustainable practices by smallholder farmers,” led by Joann de Zegher, the Maurice F. Strong Career Development Professor and assistant professor of operations management in the MIT Sloan School of Management, and Karen Zheng, the George M. Bunker Professor and associate professor of operations management in the MIT Sloan School of Management
    “Coatings to protect and enhance diverse microbes for improved soil health and crop yields,” led by Ariel Furst, the Raymond A. (1921) And Helen E. St. Laurent Career Development Professor of Chemical Engineering in the Department of Chemical Engineering, and Mary Gehring, associate professor of biology in the Department of Biology, core member of the Whitehead Institute for Biomedical Research, and graduate officer
    “ECO-LENS: Mainstreaming biodiversity data through AI,” led by John Fernández, professor of building technology in the Department of Architecture and director of MIT Environmental Solutions Initiative
    “Growing season length, productivity, and carbon balance of global ecosystems under climate change,” led by Charles Harvey, professor in the Department of Civil and Environmental Engineering, and César Terrer, assistant professor in the Department of Civil and Environmental Engineering

    Social dimensions and adaptation

    “Anthro-engineering decarbonization at the million-person scale,” led by Manduhai Buyandelger, professor in the Anthropology Section, and Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering in the Department of Nuclear Science and Engineering
    “Sustainable solutions for climate change adaptation: weaving traditional ecological knowledge and STEAM,” led by Janelle Knox-Hayes, the Lister Brothers Associate Professor of Economic Geography and Planning and head of the Environmental Policy and Planning Group in the Department of Urban Studies and Planning, and Miho Mazereeuw, associate professor of architecture and urbanism in the Department of Architecture and director of the Urban Risk Lab (a research lead on a Climate Grand Challenges flagship project)


    MIT J-WAFS announces 2022 seed grant recipients

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT has awarded 2022 J-WAFS seed grants to eight MIT principal investigators. The grants support innovative MIT research with the potential for significant impact on water- and food-related challenges.

    The only program at MIT that is dedicated to water- and food-related research, J-WAFS has offered seed grant funding to MIT principal investigators and their teams for the past eight years. The grants provide up to $75,000 per year, overhead-free, for two years to support new, early-stage research in areas such as water and food security, safety, supply, and sustainability. Past projects have spanned many diverse disciplines, including engineering, science, technology, and business innovation, as well as social science and economics, architecture, and urban planning. 

    Seven new projects led by eight researchers will be supported this year. With funding going to four different MIT departments, the projects address a range of challenges by employing advanced materials, technology innovations, and new approaches to resource management. The new projects aim to remove harmful chemicals from water sources, develop drought monitoring systems for farmers, improve management of the shellfish industry, optimize water purification materials, and more.

    “Climate change, the pandemic, and most recently the war in Ukraine have exacerbated and put a spotlight on the serious challenges facing global water and food systems,” says J-WAFS director John H. Lienhard. He adds, “The proposals chosen this year have the potential to create measurable, real-world impacts in both the water and food sectors.”  

    The 2022 J-WAFS seed grant researchers and their projects are:

    Gang Chen, the Carl Richard Soderberg Professor of Power Engineering in MIT’s Department of Mechanical Engineering, is using sunlight to desalinate water. The use of solar energy for desalination is not a new idea, particularly solar thermal evaporation methods. However, the solar thermal evaporation process has an overall low efficiency because it relies on breaking hydrogen bonds among individual water molecules, which is very energy-intensive. Chen and his lab recently discovered a photomolecular effect that dramatically lowers the energy required for desalination. 

    The bonds among water molecules inside a water cluster in liquid water are mostly hydrogen bonds. Chen discovered that a photon with energy larger than the bonding energy between the water cluster and the remaining liquid water can cleave the cluster off at the water-air interface; the cluster then collides with air molecules and disintegrates into 60 or more individual water molecules. This effect has the potential to significantly boost clean water production via new desalination technology with a photomolecular evaporation rate at least ten times that of pure solar thermal evaporation. 
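A rough back-of-envelope calculation shows why cleaving off whole clusters is so much cheaper than thermal evaporation. This is an illustrative sketch using textbook constants and the article's 60-molecule cluster figure, not the study's own model; the 532 nm wavelength is an assumed example.

```python
# Compare the energy cost per evaporated molecule for thermal evaporation
# (must supply water's latent heat) vs. photomolecular cluster ejection
# (one photon ejects a whole cluster). Illustrative numbers only.
H_VAP = 2.45e6        # J/kg, latent heat of vaporization of water near 25 C
M_H2O = 0.018         # kg/mol, molar mass of water
N_A = 6.022e23        # 1/mol, Avogadro's number
H = 6.626e-34         # J*s, Planck's constant
C = 3.0e8             # m/s, speed of light
EV = 1.602e-19        # J per electronvolt

# Energy to evaporate a single molecule thermally
e_thermal = H_VAP * M_H2O / N_A          # J per molecule, ~0.46 eV
# Energy of one green photon (532 nm, an assumed illustrative wavelength)
e_photon = H * C / 532e-9                # J, ~2.3 eV

cluster_size = 60                        # molecules per ejected cluster (from the article)
e_per_molecule = e_photon / cluster_size # ~0.04 eV/molecule, ~12x below thermal

print(f"thermal: {e_thermal / EV:.2f} eV/molecule")
print(f"photon:  {e_photon / EV:.2f} eV")
print(f"photomolecular: {e_per_molecule / EV:.3f} eV/molecule")
```

On these numbers, a single visible photon carries several times the latent heat of one molecule, so ejecting a many-molecule cluster drives the effective energy cost per molecule an order of magnitude below the thermal route.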

    John E. Fernández is the director of the MIT Environmental Solutions Initiative (ESI) and a professor in the Department of Architecture, and also affiliated with the Department of Urban Studies and Planning. Fernández is working with Scott D. Odell, a postdoc in the ESI, to better understand the impacts of mining and climate change in water-stressed regions of Chile.

    The country of Chile is one of the world’s largest exporters of both agricultural and mineral products; however, little research has been done on climate change effects at the intersection of these two sectors. Fernández and Odell will explore how desalination is being deployed by the mining industry to relieve pressure on continental water supplies in Chile, and with what effect. They will also research how climate change and mining intersect to affect Andean glaciers and agricultural communities dependent upon them. The researchers intend for this work to inform policies to reduce social and environmental harms from mining, desalination, and climate change.

    Ariel L. Furst is the Raymond (1921) and Helen St. Laurent Career Development Professor of Chemical Engineering at MIT. Her 2022 J-WAFS seed grant project seeks to effectively remove dangerous and long-lasting chemicals from water supplies and other environmental areas. 

    Perfluorooctanoic acid (PFOA), a component of Teflon, is a member of a group of chemicals known as per- and polyfluoroalkyl substances (PFAS). These human-made chemicals have been extensively used in consumer products like nonstick cooking pans. Exceptionally high levels of PFOA have been measured in water sources near manufacturing sites, which is problematic as these chemicals do not readily degrade in our bodies or the environment. The majority of humans have detectable levels of PFAS in their blood, which can lead to significant health issues including cancer, liver damage, and thyroid effects, as well as developmental effects in infants. Current remediation methods are limited to inefficient capture and are mostly confined to laboratory settings. Furst’s proposed method utilizes low-energy, scaffolded enzyme materials to move beyond simple capture to degrade these hazardous pollutants.

    Heather J. Kulik is an associate professor in the Department of Chemical Engineering at MIT who is developing novel computational strategies to identify optimal materials for purifying water. Water treatment requires purification by selectively separating small ions from water. However, human-made, scalable materials for water purification and desalination are often not stable in typical operating conditions and lack precision pores for good separation. 

    Metal-organic frameworks (MOFs) are promising materials for water purification because their pores can be tailored to have precise shapes and chemical makeup for selective ion affinity. Yet few MOFs have been assessed for their properties relevant to water purification. Kulik plans to use virtual high-throughput screening accelerated by machine learning models and molecular simulation to accelerate discovery of MOFs. Specifically, Kulik will be looking for MOFs with ultra-stable structures in water that do not break down at certain temperatures. 
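The screening strategy Kulik describes can be sketched in miniature: fit a cheap surrogate model on a small labeled set, then use it to rank a candidate pool far too large to simulate exhaustively. Everything below is synthetic and hypothetical (the descriptors, the "ground truth" stability, the pool size); it only illustrates the screening loop, not Kulik's actual models.

```python
import numpy as np

# Toy sketch of ML-accelerated virtual screening: train a ridge-regression
# surrogate on "simulated" MOF stabilities, then rank a large candidate pool
# so that only the top predictions get full (expensive) simulation.
rng = np.random.default_rng(42)
n_features = 8                      # hypothetical descriptors (pore size, metal, ...)

X_train = rng.random((200, n_features))
# Synthetic "ground truth" stability, dominated by two descriptors plus noise
y_train = 2 * X_train[:, 0] - X_train[:, 1] + 0.05 * rng.standard_normal(200)

# Ridge regression, closed form: w = (X^T X + lam*I)^-1 X^T y
lam = 1e-3
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_features),
                    X_train.T @ y_train)

X_pool = rng.random((10_000, n_features))       # too many to simulate directly
scores = X_pool @ w
shortlist = np.argsort(scores)[::-1][:50]       # only these go to full simulation
print("top predicted stability:", round(float(scores[shortlist[0]]), 2))
```

The payoff is the funnel: the surrogate evaluates 10,000 candidates in microseconds, so the expensive physics-based simulation is reserved for the 50 most promising structures.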

    Gregory C. Rutledge is the Lammot du Pont Professor of Chemical Engineering at MIT. He is leading a project that will explore how to better separate oils from water. This is an important problem to solve given that industry-generated oil-contaminated water is a major source of pollution to the environment.

    Emulsified oils are particularly challenging to remove from water due to their small droplet sizes and long settling times. Microfiltration is an attractive technology for the removal of emulsified oils, but its major drawback is fouling, or the accumulation of unwanted material on solid surfaces. Rutledge will examine the mechanism of separation behind liquid-infused membranes (LIMs) in which an infused liquid coats the surface and pores of the membrane, preventing fouling. Robustness of the LIM technology for removal of different types of emulsified oils and oil mixtures will be evaluated.

    César Terrer is an assistant professor in the Department of Civil and Environmental Engineering whose J-WAFS project seeks to answer the question: How can satellite images be used to provide a high-resolution drought monitoring system for farmers? 

    Drought is recognized as one of the world’s most pressing issues, with direct impacts on vegetation that threaten water resources and food production globally. However, assessing and monitoring the impact of droughts on vegetation is extremely challenging as plants’ sensitivity to lack of water varies across species and ecosystems. Terrer will leverage a new generation of remote sensing satellites to provide high-resolution assessments of plant water stress at regional to global scales. The aim is to provide a plant drought monitoring product with farmland-specific services for water and socioeconomic management.
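A common building block for this kind of satellite-based monitoring is a vegetation index compared against its own historical baseline. The sketch below uses NDVI (a standard remote-sensing index) and synthetic reflectance values; it illustrates the anomaly-detection idea, not Terrer's specific product.

```python
import numpy as np

# Sketch of a per-pixel drought signal: compute NDVI from red and
# near-infrared reflectance, then express the current value as a z-score
# anomaly against a historical baseline for the same pixels.
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, in [-1, 1]."""
    return (nir - red) / (nir + red + 1e-9)

def drought_anomaly(current_ndvi, baseline_ndvi):
    """Z-score of current NDVI vs. historical mean/std; negative = stress."""
    mu = baseline_ndvi.mean(axis=0)
    sigma = baseline_ndvi.std(axis=0) + 1e-9
    return (current_ndvi - mu) / sigma

rng = np.random.default_rng(0)
# Ten years of baseline NDVI for a 4x4-pixel patch (synthetic data)
baseline = rng.normal(0.6, 0.05, size=(10, 4, 4))
# A dry year: weak near-infrared reflectance relative to red
current = ndvi(nir=np.full((4, 4), 0.35), red=np.full((4, 4), 0.25))

z = drought_anomaly(current, baseline)
print("mean anomaly (z-score):", round(float(z.mean()), 1))
```

Because the anomaly is computed per pixel against that pixel's own history, the same threshold can flag stress across species and ecosystems with very different absolute greenness, which is the core difficulty the paragraph above describes.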

    Michael Triantafyllou is the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. He is developing a web-based system for natural resources management that will deploy geospatial analysis, visualization, and reporting to better manage aquaculture data. By providing value to commercial fisheries’ permit holders who employ significant numbers of people and also to recreational shellfish permit holders who contribute to local economies, the project has attracted support from the Massachusetts Division of Marine Fisheries as well as a number of local resource management departments.

    Massachusetts shell fisheries generated roughly $339 million in 2020, accounting for 17 percent of U.S. East Coast production. Managing such a large industry is a time-consuming process, given there are thousands of acres of coastal areas grouped within over 800 classified shellfish growing areas. Extreme climate events present additional challenges. Triantafyllou’s research will help efforts to enforce environmental regulations, support habitat restoration efforts, and prevent shellfish-related food safety issues.


    Energy storage important to creating affordable, reliable, deeply decarbonized electricity systems

    In deeply decarbonized energy systems utilizing high penetrations of variable renewable energy (VRE), energy storage is needed to keep the lights on and the electricity flowing when the sun isn’t shining and the wind isn’t blowing — when generation from these VRE resources is low or demand is high. The MIT Energy Initiative’s Future of Energy Storage study makes clear the need for energy storage and explores pathways using VRE resources and storage to reach decarbonized electricity systems efficiently by 2050.

    “The Future of Energy Storage,” a new multidisciplinary report from the MIT Energy Initiative (MITEI), urges government investment in sophisticated analytical tools for planning, operation, and regulation of electricity systems in order to deploy and use storage efficiently. Because storage technologies will have the ability to substitute for or complement essentially all other elements of a power system, including generation, transmission, and demand response, these tools will be critical to electricity system designers, operators, and regulators in the future. The study also recommends additional support for complementary staffing and upskilling programs at regulatory agencies at the state and federal levels. 


    Why is energy storage so important?

    The MITEI report shows that energy storage makes deep decarbonization of reliable electric power systems affordable. “Fossil fuel power plant operators have traditionally responded to demand for electricity — in any given moment — by adjusting the supply of electricity flowing into the grid,” says MITEI Director Robert Armstrong, the Chevron Professor of Chemical Engineering and chair of the Future of Energy Storage study. “But VRE resources such as wind and solar depend on daily and seasonal variations as well as weather fluctuations; they aren’t always available to be dispatched to follow electricity demand. Our study finds that energy storage can help VRE-dominated electricity systems balance electricity supply and demand while maintaining reliability in a cost-effective manner — that in turn can support the electrification of many end-use activities beyond the electricity sector.”

    The three-year study is designed to help government, industry, and academia chart a path to developing and deploying electrical energy storage technologies as a way of encouraging electrification and decarbonization throughout the economy, while avoiding excessive or inequitable burdens.

    Focusing on three distinct regions of the United States, the study shows the need for a varied approach to energy storage and electricity system design in different parts of the country. Using modeling tools to look out to 2050, the study team also focuses beyond the United States, to emerging market and developing economy (EMDE) countries, particularly as represented by India. The findings highlight the powerful role storage can play in EMDE nations. These countries are expected to see massive growth in electricity demand over the next 30 years, due to rapid overall economic expansion and to increasing adoption of electricity-consuming technologies such as air conditioning. In particular, the study calls attention to the pivotal role battery storage can play in decarbonizing grids in EMDE countries that lack access to low-cost gas and currently rely on coal generation.

    The authors find that investment in VRE combined with storage is favored over new coal generation over the medium and long term in India, although existing coal plants may linger unless forced out by policy measures such as carbon pricing. 

    “Developing countries are a crucial part of the global decarbonization challenge,” says Robert Stoner, the deputy director for science and technology at MITEI and one of the report authors. “Our study shows how they can take advantage of the declining costs of renewables and storage in the coming decades to become climate leaders without sacrificing economic development and modernization.”

    The study examines four kinds of storage technologies: electrochemical, thermal, chemical, and mechanical. Some of these technologies, such as lithium-ion batteries, pumped storage hydro, and some thermal storage options, are proven and available for commercial deployment. The report recommends that the government focus R&D efforts on other storage technologies, which will require further development to be available by 2050 or sooner — among them, projects to advance alternative electrochemical storage technologies that rely on earth-abundant materials. It also suggests government incentives and mechanisms that reward success but don’t interfere with project management. The report calls for the federal government to change some of the rules governing technology demonstration projects to enable more projects on storage. Policies that require cost-sharing in exchange for intellectual property rights, the report argues, discourage the dissemination of knowledge. The report advocates for federal requirements for demonstration projects that share information with other U.S. entities.

    The report says many existing power plants that are being shut down can be converted to useful energy storage facilities by replacing their fossil fuel boilers with thermal storage and new steam generators. This retrofit can be done using commercially available technologies and may be attractive to plant owners and communities — using assets that would otherwise be abandoned as electricity systems decarbonize.  

    The study also looks at hydrogen and concludes that its use for storage will likely depend on the extent to which hydrogen is used in the overall economy. That broad use of hydrogen, the report says, will be driven by future costs of hydrogen production, transportation, and storage — and by the pace of innovation in hydrogen end-use applications. 

    The MITEI study predicts the distribution of hourly wholesale prices or the hourly marginal value of energy will change in deeply decarbonized power systems — with many more hours of very low prices and more hours of high prices compared to today’s wholesale markets. So the report recommends systems adopt retail pricing and retail load management options that reward all consumers for shifting electricity use away from times when high wholesale prices indicate scarcity, to times when low wholesale prices signal abundance. 
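The arithmetic behind that recommendation is simple: the same energy consumed in low-price hours costs a fraction of what it costs at the peak. The hourly prices below are invented for illustration, not from the MITEI study.

```python
# Why price-responsive retail rates matter: shifting the same energy from
# high-price to low-price hours cuts the bill without reducing consumption.
# (Assumes a fully flexible load; prices are illustrative, in $/MWh.)
prices = [120, 110, 95, 20, 5, 8, 15, 90]   # 8 hours of wholesale prices
load_mwh = 4                                 # MWh to serve, at 1 MWh per hour

# Inflexible consumer: energy lands in the four most expensive hours (peak)
inflexible_cost = sum(sorted(prices, reverse=True)[:load_mwh])
# Flexible consumer: the same energy moves into the four cheapest hours
flexible_cost = sum(sorted(prices)[:load_mwh])

print(f"inflexible: ${inflexible_cost}, flexible: ${flexible_cost}")
```

In a deeply decarbonized system with many near-zero-price hours and sharper scarcity peaks, this spread between the two bills widens, which is why the report ties retail pricing reform to storage and VRE deployment.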

    The Future of Energy Storage study is the ninth in MITEI’s “Future of” series, exploring complex and vital issues involving energy and the environment. Previous studies have focused on nuclear power, solar energy, natural gas, geothermal energy, and coal (with capture and sequestration of carbon dioxide emissions), as well as on systems such as the U.S. electric power grid. The Alfred P. Sloan Foundation and the Heising-Simons Foundation provided core funding for MITEI’s Future of Energy Storage study. MITEI members Equinor and Shell provided additional support.


    MIT expands research collaboration with Commonwealth Fusion Systems to build net energy fusion machine, SPARC

    MIT’s Plasma Science and Fusion Center (PSFC) will substantially expand its fusion energy research and education activities under a new five-year agreement with Institute spinout Commonwealth Fusion Systems (CFS).

    “This expanded relationship puts MIT and PSFC in a prime position to be an even stronger academic leader that can help deliver the research and education needs of the burgeoning fusion energy industry, in part by utilizing the world’s first burning plasma and net energy fusion machine, SPARC,” says PSFC director Dennis Whyte. “CFS will build SPARC and develop a commercial fusion product, while MIT PSFC will focus on its core mission of cutting-edge research and education.”

    Commercial fusion energy has the potential to play a significant role in combating climate change, and there is a concurrent increase in interest from the energy sector, governments, and foundations. The new agreement, administered by the MIT Energy Initiative (MITEI), where CFS is a startup member, will help PSFC expand its fusion technology efforts with a wider variety of sponsors. The collaboration enables rapid execution at scale and technology transfer into the commercial sector as soon as possible.

    This new agreement doubles CFS’ financial commitment to PSFC, enabling greater recruitment and support of students, staff, and faculty. “We’ll significantly increase the number of graduate students and postdocs, and just as important they will be working on a more diverse set of fusion science and technology topics,” notes Whyte. It extends the collaboration between PSFC and CFS that resulted in numerous advances toward fusion power plants, including last fall’s demonstration of a high-temperature superconducting (HTS) fusion electromagnet with record-setting field strength of 20 tesla.

    The combined magnetic fusion efforts at PSFC will surpass those in place during the operation of the pioneering Alcator C-Mod tokamak, which ran from 1993 to 2016. This increase in activity reflects a moment when multiple fusion energy technologies are seeing rapidly accelerating development worldwide, and the emergence of a new fusion energy industry that would require thousands of trained people.

    MITEI director Robert Armstrong adds, “Our goal from the beginning was to create a membership model that would allow startups who have specific research challenges to leverage the MITEI ecosystem, including MIT faculty, students, and other MITEI members. The team at the PSFC and MITEI have worked seamlessly to support CFS, and we are excited for this next phase of the relationship.”

    PSFC is supporting CFS’ efforts toward realizing the SPARC fusion platform, which facilitates rapid development and refinement of elements (including HTS magnets) needed to build ARC, a compact, modular, high-field fusion power plant that would set the stage for commercial fusion energy production. The concepts originated in Whyte’s nuclear science and engineering class 22.63 (Principles of Fusion Engineering) and have been carried forward by students and PSFC staff, many of whom helped found CFS; the new activity will expand research into advanced technologies for the envisioned pilot plant.

    “This has been an incredibly effective collaboration that has resulted in a major breakthrough for commercial fusion with the successful demonstration of revolutionary fusion magnet technology that will enable the world’s first commercially relevant net energy fusion device, SPARC, currently under construction,” says Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems. “We look forward to this next phase in the collaboration with MIT as we tackle the critical research challenges ahead for the next steps toward fusion power plant development.”

    In the push for commercial fusion energy, the next five years are critical, requiring intensive work on materials longevity, heat transfer, fuel recycling, maintenance, and other crucial aspects of power plant development. It will need innovation from almost every engineering discipline. “Having great teams working now, it will cut the time needed to move from SPARC to ARC, and really unleash the creativity. And the thing MIT does so well is cut across disciplines,” says Whyte.

    “To address the climate crisis, the world needs to deploy existing clean energy solutions as widely and as quickly as possible, while at the same time developing new technologies — and our goal is that those new technologies will include fusion power,” says Maria T. Zuber, MIT’s vice president for research. “To make new climate solutions a reality, we need focused, sustained collaborations like the one between MIT and Commonwealth Fusion Systems. Delivering fusion power onto the grid is a monumental challenge, and the combined capabilities of these two organizations are what the challenge demands.”

    On a strategic level, climate change and the imperative need for widely implementable carbon-free energy have helped orient the PSFC team toward scalability. “Building one or 10 fusion plants doesn’t make a difference — we have to build thousands,” says Whyte. “The design decisions we make will impact the ability to do that down the road. The real enemy here is time, and we want to remove as many impediments as possible and commit to funding a new generation of scientific leaders. Those are critically important in a field with as much interdisciplinary integration as fusion.”


    Team creates map for production of eco-friendly metals

    In work that could usher in more efficient, eco-friendly processes for producing important metals like lithium, iron, and cobalt, researchers from MIT and the SLAC National Accelerator Laboratory have mapped what is happening at the atomic level behind a particularly promising approach called metal electrolysis.

    By creating maps for a wide range of metals, they not only determined which metals should be easiest to produce using this approach, but also identified fundamental barriers behind the efficient production of others. As a result, the researchers’ map could become an important design tool for optimizing the production of all these metals.

    The work could also aid the development of metal-air batteries, cousins of the lithium-ion batteries used in today’s electric vehicles.

    Most of the metals key to society today are produced using fossil fuels. These fuels generate the high temperatures necessary to convert the original ore into its purified metal. But that process is a significant source of greenhouse gases — steel alone accounts for some 7 percent of carbon dioxide emissions globally. As a result, researchers around the world are working to identify more eco-friendly ways to produce metals.

    One promising approach is metal electrolysis, in which a metal oxide, the ore, is zapped with electricity to create pure metal with oxygen as the byproduct. That is the reaction explored at the atomic level in new research reported in the April 8 issue of the journal Chemistry of Materials.
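The minimum electricity needed for that reaction is set by thermodynamics: the cell voltage must at least offset the Gibbs free energy of forming the oxide. As a rough illustration (not a calculation from the study), the sketch below computes this floor for aluminum oxide using the textbook standard-state value of its free energy of formation:

```python
# Minimal sketch: the thermodynamic decomposition voltage of a metal oxide,
#   M_xO_y -> x M + (y/2) O2,  follows E = -dG_f / (n F).
# The dG value below is a textbook standard-state number, not data from the
# Chemistry of Materials study.

F = 96485.0  # Faraday constant, coulombs per mole of electrons

def decomposition_voltage(dG_formation_kJ: float, n_electrons: int) -> float:
    """Minimum cell voltage to split the oxide at standard conditions.

    dG_formation_kJ: standard Gibbs free energy of formation of the oxide
    (negative); decomposing the oxide costs -dG_formation.
    """
    return -dG_formation_kJ * 1000.0 / (n_electrons * F)

# Al2O3 -> 2 Al + 3/2 O2 transfers 6 electrons (two Al3+ ions, 3 e- each)
print(f"Al2O3: {decomposition_voltage(-1582.3, 6):.2f} V")  # ~2.73 V
```

Real cells need more than this floor; the extra voltage (overpotential) reflects the kinetic barriers the study maps.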

    Donald Siegel is department chair and professor of mechanical engineering at the University of Texas at Austin. Says Siegel, who was not involved in the Chemistry of Materials study: “This work is an important contribution to improving the efficiency of metal production from metal oxides. It clarifies our understanding of low-carbon electrolysis processes by tracing the underlying thermodynamics back to elementary metal-oxygen interactions. I expect that this work will aid in the creation of design rules that will make these industrially important processes less reliant on fossil fuels.”

    Yang Shao-Horn, the JR East Professor of Engineering in MIT’s Department of Materials Science and Engineering (DMSE) and Department of Mechanical Engineering, is a leader of the current work, with Michal Bajdich of SLAC.

    “Here we aim to establish some basic understanding to predict the efficiency of electrochemical metal production and metal-air batteries from examining computed thermodynamic barriers for the conversion between metal and metal oxides,” says Shao-Horn, who is on the research team for MIT’s new Center for Electrification and Decarbonization of Industry, a winner of the Institute’s first-ever Climate Grand Challenges competition. Shao-Horn is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics.

    In addition to Shao-Horn and Bajdich, other authors of the Chemistry of Materials paper are Jaclyn R. Lunger, first author and a DMSE graduate student; mechanical engineering senior Naomi Lutz; and DMSE graduate student Jiayu Peng.

    Other applications

    The work could also aid in developing metal-air batteries such as lithium-air, aluminum-air, and zinc-air batteries. These cousins of the lithium-ion batteries used in today’s electric vehicles have the potential to electrify aviation because their energy densities are much higher. However, they are not yet on the market due to a variety of problems including inefficiency.

    Charging metal-air batteries also involves electrolysis. As a result, the new atomic-level understanding of these reactions could not only help engineers develop efficient electrochemical routes for metal production, but also design more efficient metal-air batteries.

    Learning from water splitting

    Electrolysis is also used to split water into oxygen and hydrogen, with the hydrogen storing the resulting energy. That hydrogen, in turn, could become an eco-friendly alternative to fossil fuels. Since much more is known about water electrolysis, the focus of Bajdich’s work at SLAC, than about the electrolysis of metal oxides, the team compared the two processes for the first time.
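The same thermodynamic bookkeeping applies to water as to metal oxides, which is what makes the comparison natural. A minimal sketch, using the textbook free energy of liquid water (not a figure from the study):

```python
# Water splitting: H2O(l) -> H2 + 1/2 O2 absorbs dG = +237.1 kJ/mol at
# standard conditions and transfers 2 electrons per water molecule.
# The dG value is a textbook number, used here only for illustration.

F = 96485.0           # Faraday constant, C per mol of electrons
dG = 237.1e3          # J/mol, standard Gibbs free energy of splitting H2O(l)
n = 2                 # electrons transferred per formula unit

E_min = dG / (n * F)  # thermodynamic minimum cell voltage
print(f"Water splitting floor: {E_min:.2f} V")  # ~1.23 V
```

The well-known 1.23 V floor for water is the benchmark against which the (generally higher) voltages for metal oxides can be compared.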

    The result: “Slowly, we uncovered the elementary steps involved in metal electrolysis,” says Bajdich. The work was challenging, says Lunger, because “it was unclear to us what those steps are. We had to figure out how to get from A to B,” or from a metal oxide to metal and oxygen.

    All of the work was conducted with supercomputer simulations. “It’s like a sandbox of atoms, and then we play with them. It’s a little like Legos,” says Bajdich. More specifically, the team explored different scenarios for the electrolysis of several metals. Each involved different catalysts, molecules that boost the speed of a reaction.

    Says Lunger, “To optimize the reaction, you want to find the catalyst that makes it most efficient.” The team’s map is essentially a guide for designing the best catalysts for each different metal.
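A toy version of such a map can be built from thermodynamics alone: rank oxides by the voltage needed to decompose them. The free-energy values below are standard textbook numbers, and this sketch deliberately omits the kinetic (catalytic) barriers that the study's actual map captures:

```python
# Hypothetical mini "map": rank oxides by thermodynamic decomposition voltage.
# dG values (kJ/mol) are textbook standard-state numbers, not the study's data;
# real efficiency also depends on kinetic barriers and catalyst choice.

F = 96485.0  # Faraday constant, C per mol of electrons

oxides = {
    # name: (standard dG of formation in kJ/mol, electrons per formula unit)
    "Fe2O3": (-742.2, 6),
    "ZnO":   (-320.5, 2),
    "Al2O3": (-1582.3, 6),
    "Li2O":  (-561.2, 2),
    "MgO":   (-569.3, 2),
}

voltages = {ox: -dG * 1e3 / (n * F) for ox, (dG, n) in oxides.items()}
for ox, v in sorted(voltages.items(), key=lambda kv: kv[1]):
    print(f"{ox}: {v:.2f} V")  # lowest voltage first = easiest to reduce
```

On this crude ranking, iron oxide sits near the bottom and magnesium oxide near the top, consistent with the intuition that some metals are thermodynamically far easier to win from their ores than others.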

    What’s next? Lunger noted that the current work focused on the electrolysis of pure metals. “I’m interested in seeing what happens in more complex systems involving multiple metals. Can you make the reaction more efficient if there’s sodium and lithium present, or cadmium and cesium?”

    This work was supported by a U.S. Department of Energy Office of Science Graduate Student Research award. It was also supported by an MIT Energy Initiative fellowship, the Toyota Research Institute through the Accelerated Materials Design and Discovery Program, the Catalysis Science Program of the Department of Energy’s Office of Basic Energy Sciences, and the DIFFERENTIATE Program of the U.S. Advanced Research Projects Agency-Energy.