More stories

  • Study: Smoke particles from wildfires can erode the ozone layer

    A wildfire can pump smoke up into the stratosphere, where the particles drift for over a year. A new MIT study has found that while suspended there, these particles can trigger chemical reactions that erode the protective ozone layer shielding the Earth from the sun’s damaging ultraviolet radiation.

    The study, which appears today in Nature, focuses on the smoke from the “Black Summer” megafire in eastern Australia, which burned from December 2019 into January 2020. The fires — the country’s most devastating on record — scorched tens of millions of acres and pumped more than 1 million tons of smoke into the atmosphere.

    The MIT team identified a new chemical reaction by which smoke particles from the Australian wildfires made ozone depletion worse. By triggering this reaction, the fires likely contributed to a 3-5 percent depletion of total ozone at mid-latitudes in the Southern Hemisphere, in regions overlying Australia, New Zealand, and parts of Africa and South America.

    The researchers’ model also indicates the fires had an effect in the polar regions, eating away at the edges of the ozone hole over Antarctica. By late 2020, smoke particles from the Australian wildfires widened the Antarctic ozone hole by 2.5 million square kilometers — 10 percent of its area compared to the previous year.

    It’s unclear what long-term effect wildfires will have on ozone recovery. The United Nations recently reported that the ozone hole, and ozone depletion around the world, is on a recovery track, thanks to a sustained international effort to phase out ozone-depleting chemicals. But the MIT study suggests that as long as these chemicals persist in the atmosphere, large fires could spark a reaction that temporarily depletes ozone.

    “The Australian fires of 2020 were really a wake-up call for the science community,” says Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at MIT and a leading climate scientist who first identified the chemicals responsible for the Antarctic ozone hole. “The effect of wildfires was not previously accounted for in [projections of] ozone recovery. And I think that effect may depend on whether fires become more frequent and intense as the planet warms.”

    The study is led by Solomon and MIT research scientist Kane Stone, along with collaborators from the Institute for Environmental and Climate Research in Guangzhou, China; the U.S. National Oceanic and Atmospheric Administration; the U.S. National Center for Atmospheric Research; and Colorado State University.

    Chlorine cascade

    The new study expands on a 2022 discovery by Solomon and her colleagues, in which they first identified a chemical link between wildfires and ozone depletion. The researchers found that chlorine-containing compounds, originally emitted by factories in the form of chlorofluorocarbons (CFCs), could react with the surface of fire aerosols. This interaction, they found, set off a chemical cascade that produced chlorine monoxide — the ultimate ozone-depleting molecule. Their results showed that the Australian wildfires likely depleted ozone through this newly identified chemical reaction.

    “But that didn’t explain all the changes that were observed in the stratosphere,” Solomon says. “There was a whole bunch of chlorine-related chemistry that was totally out of whack.”

    In the new study, the team took a closer look at the composition of molecules in the stratosphere following the Australian wildfires. They combed through three independent sets of satellite data and observed that in the months following the fires, concentrations of hydrochloric acid dropped significantly at mid-latitudes, while chlorine monoxide spiked.

    Hydrochloric acid (HCl) is present in the stratosphere as CFCs break down naturally over time. As long as chlorine is bound in the form of HCl, it doesn’t have a chance to destroy ozone. But if HCl breaks apart, the freed chlorine can react with ozone to form chlorine monoxide — the ultimate ozone-depleting molecule.
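
    For readers who want the chemistry spelled out, the standard textbook form of this catalytic cycle (summarized here for context; the notation is not reproduced from the Nature paper itself) is:

        Cl + O3 → ClO + O2
        ClO + O → Cl + O2
        --------------------
        net: O3 + O → 2 O2

    Each chlorine atom is regenerated at the end of the cycle, which is why a small amount of activated chlorine can destroy a large amount of ozone.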

    In the polar regions, HCl can break apart when it interacts with the surface of cloud particles at frigid temperatures of about 195 kelvins. However, this reaction was not expected to occur at mid-latitudes, where temperatures are much warmer.

    “The fact that HCl at mid-latitudes dropped by this unprecedented amount was to me kind of a danger signal,” Solomon says.

    She wondered: What if HCl could also interact with smoke particles, at warmer temperatures and in a way that released chlorine to destroy ozone? If such a reaction was possible, it would explain the imbalance of molecules and much of the ozone depletion observed following the Australian wildfires.

    Smoky drift

    Solomon and her colleagues dug through the chemical literature to see what sort of organic molecules could react with HCl at warmer temperatures to break it apart.

    “Lo and behold, I learned that HCl is extremely soluble in a whole broad range of organic species,” Solomon says. “It likes to glom on to lots of compounds.”

    The question, then, was whether the Australian wildfires released any of those compounds that could have triggered HCl’s breakup and any subsequent depletion of ozone. When the team looked at the composition of smoke particles in the first days after the fires, the picture was anything but clear.

    “I looked at that stuff and threw up my hands and thought, there’s so much stuff in there, how am I ever going to figure this out?” Solomon recalls. “But then I realized it had actually taken some weeks before you saw the HCl drop, so you really need to look at the data on aged wildfire particles.”

    When the team expanded their search, they found that smoke particles persisted over months, circulating in the stratosphere at mid-latitudes, in the same regions and times when concentrations of HCl dropped.

    “It’s the aged smoke particles that really take up a lot of the HCl,” Solomon says. “And then you get, amazingly, the same reactions that you get in the ozone hole, but over mid-latitudes, at much warmer temperatures.”

    When the team incorporated this new chemical reaction into a model of atmospheric chemistry, and simulated the conditions of the Australian wildfires, they observed a 5 percent depletion of ozone throughout the stratosphere at mid-latitudes, and a 10 percent widening of the ozone hole over Antarctica.

    The reaction with HCl is likely the main pathway by which wildfires can deplete ozone. But Solomon suspects there may be other chlorine-containing compounds drifting in the stratosphere that wildfires could unlock.

    “There’s now sort of a race against time,” Solomon says. “Hopefully, chlorine-containing compounds will have been destroyed before the frequency of fires increases with climate change. This is all the more reason to be vigilant about global warming and these chlorine-containing compounds.”

    This research was supported, in part, by NASA and the U.S. National Science Foundation.

  • Nanotube sensors are capable of detecting and distinguishing gibberellin plant hormones

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their collaborators from Temasek Life Sciences Laboratory have developed the first-ever nanosensor that can detect and distinguish gibberellins (GAs), a class of hormones in plants that are important for growth. The novel nanosensors are nondestructive, unlike conventional collection methods, and have been successfully tested in living plants. Applied in the field for early-stage plant stress monitoring, the sensors could prove transformative for agriculture and plant biotechnology, giving farmers interested in high-tech precision agriculture and crop management a valuable tool to optimize yield.

    The researchers designed near-infrared fluorescent carbon nanotube sensors that are capable of detecting and distinguishing two plant hormones, GA3 and GA4. Belonging to a class of plant hormones known as gibberellins, GA3 and GA4 are diterpenoid phytohormones produced by plants that play an important role in modulating diverse processes involved in plant growth and development. GAs are thought to have been among the driving forces behind the “green revolution” of the 1960s, which is credited with averting famine and saving many lives worldwide. The continued study of gibberellins could lead to further breakthroughs in agricultural science and have implications for food security.

    Climate change and rising sea levels are contaminating farmland with saltwater, raising soil salinity. In turn, high soil salinity is known to negatively regulate GA biosynthesis and promote GA metabolism, resulting in reduced GA content in plants. The new nanosensors developed by the SMART researchers allow for the study of GA dynamics in living plants under salinity stress at a very early stage, potentially enabling farmers to make early interventions once the sensors are applied in the field. This forms the basis of early-stage stress detection.

    Currently, methods to detect GA3 and GA4 typically require mass spectrometry-based analysis, a time-consuming and destructive process. In contrast, the new sensors developed by the researchers are highly selective for the respective GAs and offer real-time, in vivo monitoring of changes in GA levels across a broad range of plant species.

    Described in a paper titled “Near-Infrared Fluorescent Carbon Nanotube Sensors for the Plant Hormone Family Gibberellins” published in the journal Nano Letters, the research represents a breakthrough for early-stage plant stress detection and holds tremendous potential to advance plant biotechnology and agriculture. This paper builds on previous research by the team at SMART DiSTAP on single-walled carbon nanotube-based nanosensors using the corona phase molecular recognition (CoPhMoRe) platform.

    Based on the CoPhMoRe concept introduced by the lab of MIT Professor Michael Strano, the novel sensors are able to detect GA kinetics in the roots of a variety of model and non-model plant species, including Arabidopsis, lettuce, and basil, as well as GA accumulation during lateral root emergence, highlighting the importance of GA in root system architecture. This was made possible by the researchers’ related development of a new coupled Raman/near-infrared fluorimeter that enables self-referencing of nanosensor near-infrared fluorescence with its Raman G-band, a hardware innovation that removes the need for a separate reference nanosensor and greatly simplifies the instrumentation requirements by using a single optical channel to measure hormone concentration.
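
    To make the self-referencing idea concrete, here is a minimal sketch in Python (the function and variable names are hypothetical illustrations, not part of the SMART instrument’s software): the hormone-dependent near-infrared fluorescence is normalized against the Raman G-band measured through the same optical channel, which cancels out variation in how much sensor material is in view.

        def band_intensity(wavelengths, intensities, window):
            # Sum measured intensity inside a spectral window (nm).
            lo, hi = window
            return sum(i for w, i in zip(wavelengths, intensities)
                       if lo <= w <= hi)

        def self_referenced_signal(wavelengths, intensities, nir_window, g_window):
            # Fluorescence in the NIR window responds to hormone binding;
            # the Raman G-band depends only on the amount of nanotube
            # material in the optical path, so the ratio cancels drift in
            # sensor quantity, focus, and excitation power.
            nir = band_intensity(wavelengths, intensities, nir_window)
            g = band_intensity(wavelengths, intensities, g_window)
            return nir / g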

    Using the reversible GA nanosensors, the researchers detected increased endogenous GA levels in mutant plants producing greater amounts of GA20ox1, a key enzyme in GA biosynthesis, as well as decreased GA levels in plants under salinity stress. When exposed to salinity stress, researchers also found that lettuce growth was severely stunted — an indication that only became apparent after 10 days. In contrast, the GA nanosensors reported decreased GA levels after just six hours, demonstrating their efficacy as a much earlier indicator of salinity stress.

    “Our CoPhMoRe technique allows us to create nanoparticles that act like natural antibodies in that they can recognize and lock onto specific molecules. But they tend to be far more stable than alternatives. We have used this method to successfully create nanosensors for plant signals such as hydrogen peroxide and heavy-metal pollutants like arsenic in plants and soil,” says Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT who is co-corresponding author and DiSTAP co-lead principal investigator. “The method works to create sensors for organic molecules like synthetic auxin — an important plant hormone — as we have shown. This latest breakthrough now extends this success to a plant hormone family called gibberellins — an exceedingly difficult one to recognize.”

    Strano adds: “The resulting technology offers a rapid, real-time, and in vivo method to monitor changes in GA levels in virtually any plant, and can replace current sensing methods which are laborious, destructive, species-specific, and much less efficient.”

    Mervin Chun-Yi Ang, associate scientific director at DiSTAP and co-first author of the paper, says, “More than simply a breakthrough in plant stress detection, we have also demonstrated a hardware innovation in the form of a new coupled Raman/NIR fluorimeter that enabled self-referencing of SWNT sensor fluorescence with its Raman G-band, representing a major advance in the translation of our nanosensing tool sets to the field. In the near future, our sensors can be combined with low-cost electronics, portable optodes, or microneedle interfaces for industrial use, transforming how the industry screens for and mitigates plant stress in food crops and potentially improving growth and yield.”

    The new sensors could yet have a variety of industrial applications and use cases. Daisuke Urano, a Temasek Life Sciences Laboratory principal investigator, National University of Singapore (NUS) adjunct assistant professor, and co-corresponding author of the paper, explains, “GAs are known to regulate a wide range of plant development processes, from shoot, root, and flower development, to seed germination and plant stress responses. With the commercialization of GAs, these plant hormones are also sold to growers and farmers as plant growth regulators to promote plant growth and seed germination. Our novel GA nanosensors could be applied in the field for early-stage plant stress monitoring, and also be used by growers and farmers to track the uptake or metabolism of GA in their crops.”

    The design and development of the nanosensors, creation and validation of the coupled Raman/near infrared fluorimeter and related image/data processing algorithms, as well as statistical analysis of readouts from plant sensors for this study were performed by SMART and MIT. The Temasek Life Sciences Laboratory was responsible for the design, execution, and analysis of plant-related studies, including validation of nanosensors in living plants.

    This research was carried out by SMART and supported by the National Research Foundation of Singapore under its Campus for Research Excellence And Technological Enterprise (CREATE) program. The DiSTAP program, led by Strano and Singapore co-lead principal investigator Professor Chua Nam Hai, addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biomaterial technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients. Scientists from MIT, Temasek Life Sciences Laboratory, Nanyang Technological University (NTU), and NUS are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones for novel discovery, deeper understanding, and control of plant biosynthetic pathways in ways not yet possible, especially in the context of green leafy vegetables; leveraging these new techniques to engineer plants with highly desirable properties for global food security, including high-yield density production and drought and pathogen resistance; and applying these technologies to improve urban farming.

    SMART was established by MIT and the National Research Foundation of Singapore in 2007. SMART serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both Singapore and MIT. SMART currently comprises an Innovation Center and five interdisciplinary research groups: Antimicrobial Resistance, Critical Analytics for Manufacturing Personalized-Medicine, DiSTAP, Future Urban Mobility, and Low Energy Electronic Systems.

  • 3 Questions: Antje Danielson on energy education and its role in climate action

    The MIT Energy Initiative (MITEI) leads energy education at MIT, developing and implementing a robust educational toolkit for MIT graduate and undergraduate students, online learners around the world, and high school students who want to contribute to the energy transition. As MITEI’s director of education, Antje Danielson manages a team devoted to training the next generation of energy innovators, entrepreneurs, and policymakers. Here, she discusses new initiatives in MITEI’s education program and how they are preparing students to take an active role in climate action.

    Q: What role are MITEI’s education efforts playing in climate action initiatives at MIT, and what more could we be doing?

    A: This is a big question. The carbon emissions from energy are such an important factor in climate mitigation; therefore, what we do in energy education is practically synonymous with climate education. This is well illustrated in a 2018 Nature Energy paper by Fuso Nerini, which shows that affordable, clean energy is related to many of the United Nations Sustainable Development Goals (SDGs) — not just SDG 7, which specifically calls for “affordable, reliable, sustainable, and modern energy for all” by 2030. There are 17 SDGs containing 169 targets, of which 113 (about 67 percent) require actions to be taken concerning energy systems.

    Now, can we equate education with action? The answer is yes, but only if it is done correctly. From the behavioral change literature, we know that knowledge alone is not enough to change behavior. So, one important part of our education program is practice and experience through research, internships, stakeholder engagement, and other avenues. At a minimum, education must give the learner the knowledge, skills, and courage to be ready to jump into action, but ideally, practice is a part of the offering. We also want our learners to go out into the world and share what they know and do. If done right, education is an energy transition accelerator.

    At MITEI, our learners are not just MIT students. We are creating online offerings based on residential MIT courses to train global professionals, policymakers, and students in research methods and tools to support and accelerate the energy transition. These are free and open to learners worldwide. We have five courses available now, with more to come.

    Our latest program is a collaboration with MIT’s Center for Energy and Environmental Policy Research (CEEPR): Climate Action through Education, or CATE. This is a teach-the-teacher program for high school curriculum and is a part of the MIT Climate Action Plan. The aim is to develop interdisciplinary, solutions-focused climate change curricula for U.S. high school teachers with components in history/social science, English/language arts, math, science, and computer science.

    We are rapidly expanding our programming. In the online space, for our global learners, we are bundling courses for professional development certificates; for our undergraduates, we are redesigning the energy studies minor to reflect what we have learned over the past 12 years; and for our graduate students, we are adding a new program that allows them to garner industry experience related to the energy transition. Meanwhile, CATE is creating a support network for the teachers who adopt the curriculum. We are also working on creating an energy and climate alliance with other universities around the world.

    On the Institute level, I am a member of the Climate Education Working Group, a subgroup of the Climate Nucleus, where we discuss and will soon recommend further climate action the Institute can take. Stay tuned for that.

    Q: You mentioned that you are leading an effort to create a consortium of energy and climate education programs at universities around the world. How does this effort fit into MITEI’s educational mission?

    A: Yes, we are currently calling it the “Energy and Climate Education Alliance.” The background to this is that the problem we are facing — transitioning the entire global energy system from high carbon emissions to low, no, and negative carbon emissions — is global, huge, and urgent. Following the proverb that many hands make light work, we believe this very complex task will be accomplished more quickly with more participants. There is, of course, more to this as well. The complexity of the problem is such that (1) MIT doesn’t have all the expertise needed to accomplish the educational needs of the climate and energy crisis, (2) there is a definite local and regional component to capacity building, and (3) collaborations with universities around the world will make our mission-driven work more efficient. Finally, these collaborations will be advantageous for our students as they will be able to learn from real-world case studies that are not U.S.-based and maybe even visit other universities abroad, do internships, and engage in collaborative research projects. Also, students from those universities will be able to come here and experience MIT’s unique intellectual environment.

    Right now, we are very much in the beginning stages of creating the alliance. We have signed a collaboration agreement with the Technical University of Berlin, Germany, and are engaged in talks with other European and Southeast Asian universities. Some of the collaborations we are envisioning relate to course development, student exchange, collaborative research, and course promotion. We are very excited about this collaboration. It fits well into MIT’s ambition to take climate action outside of the university, while still staying within our educational mission.

    Q: It is clear to me from this conversation that MITEI’s education program is undertaking a number of initiatives to prepare MIT students and interested learners outside of the Institute to take an active role in climate action. But, the reality is that despite our rapidly changing climate and the immediate need to decarbonize our global economy, climate denialism and a lack of climate and energy understanding persist in the greater global population. What do you think must be done, and what can MITEI do, to increase climate and energy literacy broadly?

    A: I think the basic problem is not necessarily a lack of understanding but an abundance of competing issues that people are dealing with every day. Poverty, personal health, unemployment, inflation, pandemics, housing, wars — all are very immediate problems people have. And climate change is perceived to be in the future.

    The United States is a very bottom-up country, where corporations offer what people buy, and politicians advocate for what voters want and what money buys. Of course, this is overly simplified, but as long as we don’t come up with mechanisms to achieve a monumental shift in consumer and voter behavior, we are up against these immediate pressures. However, we are seeing some movement in this area due to rising gas and heating oil prices and the many natural disasters we are encountering now. People are starting to understand that climate change will hit their pocketbook, whether or not we have a carbon tax. The recent Florida hurricane damage, wildfires in the west, extreme summer temperatures, frequent droughts, increasing numbers of poisonous and disease-carrying insects — they all illustrate the relationship between climate change, health, and financial damage. Fewer and fewer people will be able to deny the existence of climate change because they will either be directly affected or know someone who is.

    The question is one of speed and scale. The more we can help to make the connections even more visible and understood, the faster we get to the general acceptance that this is real. Research projects like CEEPR’s Roosevelt Project, which develops action plans to help communities deal with industrial upheaval in the context of the energy transition, are contributing to this effect, as are studies related to climate change and national security. This is a fast-moving world, and our research findings need to be translated as we speak. A real problem in education is that we have the tendency to teach the tried and true. Our education programs have to become much nimbler, which means curricula have to be updated frequently, and that is expensive. And of course, the speed and magnitude of our efforts are dependent on the funding we can attract, and fundraising for education is more difficult than fundraising for research.

    However, let me pivot: You alluded to the fact that this is a global problem. The immediate pressures of poverty and hunger are a matter of survival in many parts of the world, and when it comes to surviving another day, who cares if climate change will render your fields unproductive in 20 years? Or if the weather turns your homeland into a lake, will you think about lobbying your government to reduce carbon emissions, or will you ask for help to rebuild your existence? On the flip side, politicians and government authorities in those areas have to deal with extremely complex situations, balancing local needs with global demands. We should learn from them. What we need is to listen. What do these areas of the world need most, and how can climate action be included in the calculations? The Global Commission to End Energy Poverty, a collaboration between MITEI and the Rockefeller Foundation to bring electricity to the billion people across the globe who currently live without it, is a good example of what we are already doing. Both our online education program and the Energy and Climate Education Alliance aim to go in this direction.

    The struggle and challenge to solve climate change can be pretty depressing, and there are many days when I feel despondent about the speed and progress we are making in saving the future of humanity. But, the prospect of contributing to such a large mission, even if the education team can only nudge us a tiny bit away from the business-as-usual scenario, is exciting. In particular, working on an issue like this at MIT is amazing. So much is happening here, and there don’t seem to be intellectual limits; in fact, thinking big is encouraged. It is very refreshing when one has encountered the old “you can’t do this” too often in the past. I want our students to take this attitude with them and go out there and think big.

  • Creating the steps to make organizational sustainability work

    Sustainability is a hot topic. Companies throw around their carbon or recycling initiatives, and competing executives feel the need to follow suit. But aside from the external pressure, there are also bottom-line benefits. Becoming more efficient can save money. Creating a new product might make money; customers care about a company’s practices and will spend their money based on that.

    The work is in getting there, because becoming sustainable can seem simple: Establish a goal for five years down the road, and everything will fall into place — but it’s easy for things to get upended. “There is so much confusion and noise in this space,” says Jason Jay, senior lecturer and director of the Sustainability Initiative at MIT’s Sloan School of Management.

    His work is to help companies break through the confusion and figure out what they want to actually do, not merely what sounds good. It means doing research and listening to science. Mostly, it requires discipline, and because something new — be it a product, process or technology — is being asked for, it also takes ambition. “It’s a tricky dance,” he says, but one that can result in “doing well and doing good at the same time.”

    It’s about taking steps

    Three steps, to be exact. The first, which is the crux, Jay says, is for a company to focus on a small set of issues that it can take the lead on. It sounds obvious, but it’s often missed. The problem is that companies will do one of two things. They’ll take an outside-in approach in which they end up listening to too many stakeholders, “get pulled in a million different directions,” and try to solve all of society’s problems, which means solving none of them, he says.

    Or they’ll go inside-out and have one executive in charge of sustainability who will do some internal research and come up with an initiative. It might be a good idea, but it doesn’t take into account how it will affect the facilities, supply chains, and the people who work with them. And without that consideration, “It’s going to be very difficult to get the necessary traction inside the company,” Jay says.

    What’s needed is a combination of the two — outside perspectives coupled with insider knowledge — in order to find an initiative that resonates for that company. It starts with looking at what the company already does. That might show where it’s making a negative impact and, in turn, where it could make a positive one. It also involves the C-suite executives asking themselves, “What do we want this company to stand for?” and then, “What do I want my legacy to be?”

    Still, it can be hard to envision what change can look like or what actions might have an impact. Jay says this is where a simulation tool like En-ROADS, developed by MIT Sloan and Climate Interactive, can help explore scenarios.

    But it’s ultimately about making a commitment and allowing an iterative process to play out. A company then discovers its true focus might be something less flashy. Early on, for example, Nike found that a huge source of its greenhouse gas emissions was the sulfur hexafluoride gas in the Nike Air bladder. When the company re-engineered it, it ended up with inert nitrogen and a stronger material that was aesthetically cool and lightweight for the athlete. That didn’t come in one brainstorming meeting. It meant doing research and looking at what the science says is possible. It’s not quick, but it also shouldn’t be, if the goal is to take real, measurable action.

    “Cheap talk leads to cheap things,” Jay says. 

    The next two

    Deciding what matters is key, but nothing materializes without establishing concrete goals. This is where a company “shows the world you’re serious.” But it’s a place where companies slip up. They either set weak goals, ones they know they can easily reach, so there’s no challenge, no accomplishment, “no stretch,” Jay says. Or they set goals that are too ambitious and/or aren’t backed by science. It could be, “We’re going to be net zero by 2050,” but how exactly is never answered.

    Jay says it’s about finding the sweet spot: a reasonable number of goals — say, two to four — that feel like a reach, yet possible. When that balance is right, it becomes a self-fulfilling prophecy. People stay motivated because they experience progress. But if it’s off, it won’t happen.

    “You need that optimal creative tension,” he says.

    And then there’s the third step. Companies need to find partners to make their sustainability programs succeed. It’s the one part that’s most overlooked because executives continually believe that they can do it alone. But they can’t, because big initiatives require help and expertise outside of a company’s realm.

    Maersk, the global shipping company, has a goal of replacing fossil fuel with green fuels for ocean freight, Jay says. It discovered that green ammonia could make that happen, and it was Yara, a fertilizer company, that best understood ammonia production. But a partner could also be a startup that’s working on a promising technology. Sometimes, as with moving to electric cars, what’s needed are political partners to enact policy and offer tax breaks and incentives. And it might be that the answer is collaborating with activists who have been pushing a company to change its ways.

    “There are strange bedfellows all around,” Jay says.

    Know how to tap the brake

    All the steps circle back to the essential point that becoming sustainable takes a committed investment of time, money, and patience. Starting small helps, especially in a corporate culture that tends to move slowly. Jay says there’s nothing wrong with going from zero projects to one, even if it’s a small one in a specific department. It allows people to become accustomed to the idea of change. It also lets the company establish a framework, analyze results, and build momentum, making it easier to ramp up.

    The patience part can be hard since there’s a rightful sense of urgency involved. Companies want to show that they’re doing something, and want to affect climate change sooner rather than later. But Jay likens it to building a skyscraper. The desire is to get it up fast, but if the foundation is shaky, everything will crumble.

    “What we’re trying to do is strengthen that foundation so it can reach the height we need,” he says.

  • Aviva Intveld named 2023 Gates Cambridge Scholar

    MIT senior Aviva Intveld has won the prestigious Gates Cambridge Scholarship, which offers students an opportunity to pursue graduate study in the field of their choice at Cambridge University in the U.K. Intveld will join the other 23 U.S. citizens selected for the 2023 class of scholars.

    Intveld, from Los Angeles, is majoring in earth, atmospheric, and planetary sciences, and minoring in materials science and engineering with concentrations in geology, geochemistry, and archaeology. Her research interests span the intersections among those fields to better understand how the natural environments of the past have shaped human movement and decision-making.

    At Cambridge, Intveld will undertake a research MPhil in earth sciences at the Godwin Lab for Paleoclimate Research, where she will investigate the impact of past climate on the ancient Maya in northwest Yucatán via cave sediment records. She hopes to pursue an impact-oriented research career in paleoclimate and paleoenvironment reconstruction and ultimately apply the lessons learned from her research to inform modern climate policy. She is particularly passionate about sustainable mining of energy-critical elements and addressing climate change inequality in her home state of California.

    Intveld’s work at Cambridge will build upon her extensive research experience at MIT. She currently works in the McGee Lab reconstructing the Late Pleistocene-Early Holocene paleoclimate of northeastern Mexico to provide a climatic background to the first peopling of the Americas. Previously, she explored the influence of mountain plate tectonics on biodiversity in the Perron Lab. During a summer research position at the University of Haifa in Israel, she analyzed the microfossil assemblage of an offshore sediment core for paleo-coastal reconstruction.

    Last summer, Intveld interned at the National Oceanic and Atmospheric Administration in Homer, Alaska, to identify geologic controls on regional groundwater chemistry. She has also interned with the World Wildlife Fund and with the Natural History Museum of Los Angeles. During the spring semester of her junior year, Intveld studied abroad through MISTI at Imperial College London’s Royal School of Mines and completed geology field work in Sardinia, Italy.

    Intveld has been a strong presence on MIT’s campus, serving as the undergraduate representative on the EAPS Diversity, Equity, and Inclusion Committee. She leads tours for the MIT List Visual Arts Center, is a member of and associate advisor for the Terrascope Learning Community, and is a participant in the Addir Interfaith Dialogue Fellowship.

    Intveld was advised in her application by Kim Benard, associate dean of the Distinguished Fellowships team in Career Advising and Professional Development, who says, “Aviva’s work is at a fascinating crossroads of archeology, geology, and sustainability. She has already done extraordinary work, and this opportunity will prepare her even more to be influential in the fight for climate mitigation.”

    Established by the Bill and Melinda Gates Foundation in 2000, the Gates Cambridge Scholarship provides full funding for talented students from outside the United Kingdom to pursue postgraduate study in any subject at Cambridge University. Since the program’s inception in 2001, there have been 33 Gates Cambridge Scholars from MIT.

  • Improving health outcomes by targeting climate and air pollution simultaneously

    Climate policies are typically designed to reduce greenhouse gas emissions that result from human activities and drive climate change. The largest source of these emissions is the combustion of fossil fuels, which increases atmospheric concentrations of ozone, fine particulate matter (PM2.5) and other air pollutants that pose public health risks. While climate policies may result in lower concentrations of health-damaging air pollutants as a “co-benefit” of reducing greenhouse gas emissions-intensive activities, they are most effective at improving health outcomes when deployed in tandem with geographically targeted air-quality regulations.

    Yet the computer models typically used to assess the likely air quality/health impacts of proposed climate/air-quality policy combinations come with drawbacks for decision-makers. Atmospheric chemistry/climate models can produce high-resolution results, but they are expensive and time-consuming to run. Integrated assessment models require far less time and money, but they produce results only at global and regional scales, rendering them insufficiently precise for accurate assessments of air quality/health impacts at the subnational level.

    To overcome these drawbacks, a team of researchers at MIT and the University of California at Davis has developed a climate/air-quality policy assessment tool that is both computationally efficient and location-specific. Described in a new study in the journal ACS Environmental Au, the tool could enable users to obtain rapid estimates of combined policy impacts on air quality/health at more than 1,500 locations around the globe — estimates precise enough to reveal the equity implications of proposed policy combinations within a particular region.

    “The modeling approach described in this study may ultimately allow decision-makers to assess the efficacy of multiple combinations of climate and air-quality policies in reducing the health impacts of air pollution, and to design more effective policies,” says Sebastian Eastham, the study’s lead author and a principal research scientist at the MIT Joint Program on the Science and Policy of Global Change. “It may also be used to determine if a given policy combination would result in equitable health outcomes across a geographical area of interest.”
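
    To give a flavor of how a reduced-form tool like this can return answers in seconds (a simplified sketch; the data structures and numbers below are invented for illustration, not taken from the ACS Environmental Au study), the expensive atmospheric modeling is done once, up front, to produce per-location sensitivities, and policy assessment then reduces to fast arithmetic:

        # Hypothetical per-location sensitivities, precomputed from a
        # detailed chemistry/climate model: attributable deaths per year
        # per kiloton of pollutant emitted.
        SENSITIVITY = {
            "region_a": {"pm25_precursor": 0.8, "ozone_precursor": 0.2},
            "region_b": {"pm25_precursor": 1.5, "ozone_precursor": 0.4},
        }

        def health_impact(emissions_change_kt):
            # emissions_change_kt: {pollutant: change in kilotons/yr}
            # under a proposed policy combination (negative = reduction).
            # Returns the estimated change in annual deaths per location.
            return {
                loc: sum(coef * emissions_change_kt.get(p, 0.0)
                         for p, coef in coefs.items())
                for loc, coefs in SENSITIVITY.items()
            }

        # Example: a policy cutting PM2.5 precursors by 10 kt/yr everywhere.
        print(health_impact({"pm25_precursor": -10.0}))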

    To demonstrate the efficiency and accuracy of their policy assessment tool, the researchers showed that outcomes projected by the tool within seconds were consistent with region-specific results from detailed chemistry/climate models that took days or even months to run. While continuing to refine and develop their approaches, they are now working to embed the new tool into integrated assessment models for direct use by policymakers.

    “As decision-makers implement climate policies in the context of other sustainability challenges like air pollution, efficient modeling tools are important for assessment — and new computational techniques allow us to build faster and more accurate tools to provide credible, relevant information to a broader range of users,” says Noelle Selin, a professor at MIT’s Institute for Data, Systems, and Society and Department of Earth, Atmospheric and Planetary Sciences, and supervising author of the study. “We are looking forward to further developing such approaches, and to working with stakeholders to ensure that they provide timely, targeted and useful assessments.”

    The study was funded, in part, by the U.S. Environmental Protection Agency and the Biogen Foundation.

  • Study: Carbon-neutral pavements are possible by 2050, but rapid policy and industry action are needed

    Almost 2.8 million lane-miles, or about 4.6 million lane-kilometers, of roads in the United States are paved.

    Roads and streets form the backbone of our built environment. They take us to work or school, take goods to their destinations, and much more.

    However, a new study by MIT Concrete Sustainability Hub (CSHub) researchers shows that the annual greenhouse gas (GHG) emissions of all construction materials used in the U.S. pavement network are 11.9 to 13.3 megatons. This is equivalent to the emissions of a gasoline-powered passenger vehicle driving about 30 billion miles in a year.
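
    The vehicle-miles equivalence is easy to sanity-check (a back-of-the-envelope sketch; the roughly 400 grams of CO2 per mile for a typical gasoline passenger car is an assumed emission factor, not a figure from the study):

        # Back-of-envelope check of the driving-distance equivalence.
        emissions_mt = (11.9 + 13.3) / 2   # midpoint of range, megatons/yr
        grams_per_mile = 400.0             # assumed per-vehicle emission factor
        miles = emissions_mt * 1e12 / grams_per_mile
        print(f"~{miles / 1e9:.1f} billion miles")   # ~31.5 billion, close to
                                                     # the ~30 billion quoted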

    As roads are built, repaved, and expanded, new approaches and thoughtful material choices are necessary to dampen their carbon footprint. 

    The CSHub researchers found that, by 2050, mixtures for pavements can be made carbon-neutral if industry and governmental actors help to apply a range of solutions — like carbon capture — to reduce, avoid, and neutralize embodied impacts. (A neutralization solution is any compensation mechanism in a product’s value chain that permanently removes the global warming impact of remaining emissions after avoidance and reduction measures have been applied.) Furthermore, nearly half of pavement-related greenhouse gas (GHG) savings can be achieved in the short term at negative or nearly net-zero cost.

    The research team, led by Hessam AzariJafari, MIT CSHub’s deputy director, closed gaps in our understanding of the impacts of pavement decisions by developing a dynamic model quantifying the embodied impact of future pavement materials demand for the U.S. road network.

    The team first split the U.S. road network into 10-mile (about 16 kilometer) segments, forecasting the condition and performance of each. They then developed a pavement management system model to create benchmarks helping to understand the current level of emissions and the efficacy of different decarbonization strategies. 

    This model considered factors such as annual traffic volume and surface conditions, budget constraints, regional variation in pavement treatment choices, and pavement deterioration. The researchers also used a life-cycle assessment to calculate annual state-level emissions from acquiring pavement construction materials, considering future energy supply and materials procurement.
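
    In spirit, the accounting reduces to summing material demand times an emission factor over every segment, treatment, and year (a deliberately simplified sketch; the segment attributes and emission factors below are invented for illustration and are not the CSHub model):

        from dataclasses import dataclass

        @dataclass
        class Segment:
            length_mi: float
            treatment: str   # e.g., "asphalt_overlay", "concrete_recon"

        # Invented embodied-emission factors: tons CO2e per lane-mile
        # of each treatment in a given analysis year.
        EMISSION_FACTOR = {
            "asphalt_overlay": 35.0,
            "concrete_recon": 120.0,
            "do_nothing": 0.0,
        }

        def network_emissions(segments):
            # Total embodied GHG emissions (tons CO2e) for one year.
            return sum(EMISSION_FACTOR[s.treatment] * s.length_mi
                       for s in segments)

        network = [Segment(10, "asphalt_overlay"), Segment(10, "do_nothing"),
                   Segment(10, "concrete_recon")]
        print(network_emissions(network))   # 1550.0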

    The team considered three scenarios for the U.S. pavement network: A business-as-usual scenario in which technology remains static, a projected improvement scenario aligned with stated industry and national goals, and an ambitious improvement scenario that intensifies or accelerates projected strategies to achieve carbon neutrality. 

    If no steps are taken to decarbonize pavement mixtures, the team projected that GHG emissions of construction materials used in the U.S. pavement network would increase by 19.5 percent by 2050. Under the projected scenario, there was an estimated 38 percent embodied impact reduction for concrete and 14 percent embodied impact reduction for asphalt by 2050.

    The keys to making the pavement network carbon neutral by 2050 lie in multiple places. Fully renewable energy sources should be used for pavement materials production, transportation, and other processes. The federal government must contribute to the development of these low-carbon energy sources and carbon capture technologies, as it would be nearly impossible to achieve carbon neutrality for pavements without them. 

    Additionally, increasing pavements’ recycled content and improving their design and production efficiency can lower GHG emissions to an extent. Still, neutralization is needed to achieve carbon neutrality.

    Making the right pavement construction and repair choices would also contribute to the carbon neutrality of the network. For instance, concrete pavements can offer GHG savings across the whole life cycle as they are stiffer and stay smoother for longer, meaning they require less maintenance and have a lesser impact on the fuel efficiency of vehicles. 

    Concrete pavements have other use-phase benefits, including a cooling effect through an intrinsically high albedo, meaning they reflect more sunlight than regular pavements. They can therefore help combat extreme heat and beneficially affect the earth’s energy balance through negative radiative forcing, making albedo a potential neutralization mechanism.

    At the same time, a mix of fixes, including using concrete and asphalt in different contexts and proportions, could produce significant GHG savings for the pavement network; decision-makers must consider scenarios on a case-by-case basis to identify optimal solutions. 

    In addition, it may appear as though the GHG emissions of materials used in local roads are dwarfed by the emissions of interstate highway materials. However, the study found that the two road types have a similar impact. In fact, all road types contribute heavily to the total GHG emissions of pavement materials in general. Therefore, stakeholders at the federal, state, and local levels must be involved if our roads are to become carbon neutral. 

    The path to pavement network carbon-neutrality is, therefore, somewhat of a winding road. It demands regionally specific policies and widespread investment to help implement decarbonization solutions, just as renewable energy initiatives have been supported. Providing subsidies and covering the costs of premiums, too, are vital to avoid shifts in the market that would derail environmental savings.

    When planning for these shifts, we must recall that pavements have impacts not just in their production, but across their entire life cycle. As pavements are used, maintained, and eventually decommissioned, they have significant impacts on the surrounding environment.

    If we are to meet climate goals such as the Paris Agreement, which demands that we reach carbon-neutrality by 2050 to avoid the worst impacts of climate change, we — as well as industry and governmental stakeholders — must come together to take a hard look at the roads we use every day and work to reduce their life cycle emissions. 

    The study was published in the International Journal of Life Cycle Assessment. In addition to AzariJafari, the authors include Fengdi Guo of the MIT Department of Civil and Environmental Engineering; Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium; and Randolph Kirchain, director of the MIT CSHub.

  • Using combustion to make better batteries

    For more than a century, much of the world has run on the combustion of fossil fuels. Now, to avert the threat of climate change, the energy system is changing. Notably, solar and wind systems are replacing fossil fuel combustion for generating electricity and heat, and batteries are replacing the internal combustion engine for powering vehicles. As the energy transition progresses, researchers worldwide are tackling the many challenges that arise.

    Sili Deng has spent her career thinking about combustion. Now an assistant professor in the MIT Department of Mechanical Engineering and the Class of 1954 Career Development Professor, Deng leads a group that, among other things, develops theoretical models to help understand and control combustion systems to make them more efficient and to control the formation of emissions, including particles of soot.

    “So we thought, given our background in combustion, what’s the best way we can contribute to the energy transition?” says Deng. In considering the possibilities, she notes that combustion refers only to the process — not to what’s burning. “While we generally think of fossil fuels when we think of combustion, the term ‘combustion’ encompasses many high-temperature chemical reactions that involve oxygen and typically emit light and large amounts of heat,” she says.

    Given that definition, she saw another role for the expertise she and her team have developed: They could explore the use of combustion to make materials for the energy transition. Under carefully controlled conditions, flames can be used to produce not polluting soot but valuable materials, including some that are critical in the manufacture of lithium-ion batteries.

    Improving the lithium-ion battery by lowering costs

    The demand for lithium-ion batteries is projected to skyrocket in the coming decades. Batteries will be needed to power the growing fleet of electric cars and to store the electricity produced by solar and wind systems so it can be delivered later when those sources aren’t generating. Some experts project that the global demand for lithium-ion batteries may increase tenfold or more in the next decade.

    Given such projections, many researchers are looking for ways to improve the lithium-ion battery technology. Deng and her group aren’t materials scientists, so they don’t focus on making new and better battery chemistries. Instead, their goal is to find a way to lower the high cost of making all of those batteries. And much of the cost of making a lithium-ion battery can be traced to the manufacture of materials used to make one of its two electrodes — the cathode.

    The MIT researchers began their search for cost savings by considering the methods now used to produce cathode materials. The raw materials are typically salts of several metals, including lithium, which provides ions — the electrically charged particles that move when the battery is charged and discharged. The processing technology aims to produce tiny particles, each one made up of a mixture of those ingredients, with the atoms arranged in the specific crystalline structure that will deliver the best performance in the finished battery.

    For the past several decades, companies have manufactured those cathode materials using a two-stage process called coprecipitation. In the first stage, the metal salts — excluding the lithium — are dissolved in water and thoroughly mixed inside a chemical reactor. Chemicals are added to change the acidity (the pH) of the mixture, and particles made up of the combined salts precipitate out of the solution. The particles are then removed, dried, ground up, and put through a sieve.

    A change in pH won’t cause lithium to precipitate, so it is added in the second stage. Solid lithium is ground together with the particles from the first stage until lithium atoms permeate the particles. The resulting material is then heated, or “annealed,” to ensure complete mixing and to achieve the targeted crystalline structure. Finally, the particles go through a “deagglomerator” that separates any particles that have joined together, and the cathode material emerges.

    Coprecipitation produces the needed materials, but the process is time-consuming. The first stage takes about 10 hours, and the second stage requires about 13 hours of annealing at a relatively low temperature (750 degrees Celsius). In addition, to prevent cracking during annealing, the temperature is gradually “ramped” up and down, which takes another 11 hours. The process is thus not only time-consuming but also energy-intensive and costly.

    For the past two years, Deng and her group have been exploring better ways to make the cathode material. “Combustion is very effective at oxidizing things, and the materials for lithium-ion batteries are generally mixtures of metal oxides,” says Deng. That being the case, they thought this could be an opportunity to use a combustion-based process called flame synthesis.

    A new way of making a high-performance cathode material

    The first task for Deng and her team — mechanical engineering postdoc Jianan Zhang, Valerie L. Muldoon ’20, SM ’22, and current graduate students Maanasa Bhat and Chuwei Zhang — was to choose a target material for their study. They decided to focus on a mixture of metal oxides consisting of nickel, cobalt, and manganese plus lithium. Known as “NCM811,” this material is widely used and has been shown to produce cathodes for batteries that deliver high performance; in an electric vehicle, that means a long driving range, rapid discharge and recharge, and a long lifetime. To better define their target, the researchers examined the literature to determine the composition and crystalline structure of NCM811 that has been shown to deliver the best performance as a cathode material.

    They then considered three possible approaches to improving on the coprecipitation process for synthesizing NCM811: They could simplify the system (to cut capital costs), speed up the process, or cut the energy required.

    “Our first thought was, what if we can mix together all of the substances — including the lithium — at the beginning?” says Deng. “Then we would not need to have the two stages” — a clear simplification over coprecipitation.

    Introducing FASP

    One process widely used in the chemical and other industries to fabricate nanoparticles is a type of flame synthesis called flame-assisted spray pyrolysis, or FASP. Deng’s concept for using FASP to make their targeted cathode powders proceeds as follows.

    The precursor materials — the metal salts (including the lithium) — are mixed with water, and the resulting solution is sprayed as fine droplets by an atomizer into a combustion chamber. There, a flame of burning methane heats up the mixture. The water evaporates, leaving the precursor materials to decompose, oxidize, and solidify to form the powder product. A cyclone then separates particles of different sizes, and a baghouse filters out those that aren’t useful. The collected particles would then be annealed and deagglomerated.

    To investigate and optimize this concept, the researchers developed a lab-scale FASP setup consisting of a homemade ultrasonic nebulizer, a preheating section, a burner, a filter, and a vacuum pump that withdraws the powders that form. Using that system, they could control the details of the heating process: The preheating section replicates conditions as the material first enters the combustion chamber, and the burner replicates conditions as it passes the flame. That setup allowed the team to explore operating conditions that would give the best results.

    Their experiments showed marked benefits over coprecipitation. The nebulizer breaks up the liquid solution into fine droplets, ensuring atomic-level mixing. The water simply evaporates, so there’s no need to change the pH or to separate the solids from a liquid. As Deng notes, “You just let the gas go, and you’re left with the particles, which is what you want.” With lithium included at the outset, there’s no need for mixing solids with solids, which is neither efficient nor effective.

    They could even control the structure, or “morphology,” of the particles that formed. In one series of experiments, they tried exposing the incoming spray to different rates of temperature change over time. They found that the temperature “history” has a direct impact on morphology. With no preheating, the particles burst apart; and with rapid preheating, the particles were hollow. The best outcomes came when they used preheating temperatures ranging from 175 to 225 degrees Celsius. Experiments with coin-cell batteries (laboratory devices used for testing battery materials) confirmed that by adjusting the preheating temperature, they could achieve a particle morphology that would optimize the performance of their materials.

    Best of all, the particles formed in seconds. Assuming the time needed for conventional annealing and deagglomerating, the new setup could synthesize the finished cathode material in half the total time needed for coprecipitation. Moreover, the first stage of the coprecipitation system is replaced by a far simpler setup — a savings in capital costs.

    “We were very happy,” says Deng. “But then we thought, if we’ve changed the precursor side so the lithium is mixed well with the salts, do we need to have the same process for the second stage? Maybe not!”

    Improving the second stage

    The key time- and energy-consuming step in the second stage is the annealing. In today’s coprecipitation process, the strategy is to anneal at a low temperature for a long time, giving the operator time to manipulate and control the process. But running a furnace for some 20 hours — even at a low temperature — consumes a lot of energy.

    Based on their studies thus far, Deng thought, “What if we slightly increase the temperature but reduce the annealing time by orders of magnitude? Then we could cut energy consumption, and we might still achieve the desired crystal structure.”

    However, experiments at slightly elevated temperatures and short treatment times didn’t bring the results they had hoped for. In transmission electron microscope (TEM) images, the particles that formed had clouds of light-looking nanoscale particles attached to their surfaces. When the researchers performed the same experiments without adding the lithium, those nanoparticles didn’t appear. Based on that and other tests, they concluded that the nanoparticles were pure lithium. So, it seemed like long-duration annealing would be needed to ensure that the lithium made its way inside the particles.

    But they then came up with a different solution to the lithium-distribution problem. They added a small amount — just 1 percent by weight — of an inexpensive compound called urea to their mixture. In TEM images of the particles formed, the “undesirable nanoparticles were largely gone,” says Deng.

    Experiments in laboratory coin cells showed that the addition of urea significantly altered the response to changes in the annealing temperature. When the urea was absent, raising the annealing temperature led to a dramatic decline in performance of the cathode material that formed. But with the urea present, the performance of the material that formed was unaffected by any temperature change.

    That result meant that — as long as the urea was added with the other precursors — they could push up the temperature, shrink the annealing time, and omit the gradual ramp-up and cool-down process. Further imaging studies confirmed that their approach yields the desired crystal structure and the homogeneous elemental distribution of the cobalt, nickel, manganese, and lithium within the particles. Moreover, in tests of various performance measures, their materials did as well as materials produced by coprecipitation or by other methods using long-time heat treatment. Indeed, the performance was comparable to that of commercial batteries with cathodes made of NCM811.

    So now the long and expensive second stage required in standard coprecipitation can be replaced by just 20 minutes of annealing at about 870 degrees Celsius plus 20 minutes of cooling at room temperature.
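
    A quick tally of the durations quoted in this article shows how dramatically the time budget shrinks (a sketch based only on the process times stated above; deagglomeration, which both routes require, is left out of both totals):

        # Process-time comparison using the durations quoted above.
        coprecipitation_h = 10 + 13 + 11   # stage 1 + low-T anneal + ramping
        fasp_h = (20 + 20) / 60            # short anneal + cool-down, in hours
        # Particle formation in flame synthesis takes seconds, so it is
        # negligible on this scale.
        print(f"coprecipitation: ~{coprecipitation_h} h")
        print(f"flame synthesis route: ~{fasp_h:.1f} h")
        print(f"speedup: ~{coprecipitation_h / fasp_h:.0f}x")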

    Theory, continuing work, and planning for scale-up

    While experimental evidence supports their approach, Deng and her group are now working to understand why it works. “Getting the underlying physics right will help us design the process to control the morphology and to scale up the process,” says Deng. And they have a hypothesis for why the lithium nanoparticles in their flame synthesis process end up on the surfaces of the larger particles — and why the presence of urea solves that problem.

    According to their theory, without the added urea, the metal and lithium atoms are initially well-mixed within the droplet. But as heating progresses, the lithium diffuses to the surface and ends up as nanoparticles attached to the solidified particle. As a result, a long annealing process is needed to move the lithium in among the other atoms.

    When the urea is present, it starts out mixed with the lithium and other atoms inside the droplet. As temperatures rise, the urea decomposes, forming bubbles. As heating progresses, the bubbles burst, increasing circulation, which keeps the lithium from diffusing to the surface. The lithium ends up uniformly distributed, so the final heat treatment can be very short.

    The researchers are now designing a system to suspend a droplet of their mixture so they can observe the circulation inside it, with and without the urea present. They’re also developing experiments to examine how droplets vaporize, employing tools and methods they have used in the past to study how hydrocarbons vaporize inside internal combustion engines.

    They also have ideas about how to streamline and scale up their process. In coprecipitation, the first stage takes 10 to 20 hours, so one batch at a time moves on to the second stage to be annealed. In contrast, the novel FASP process generates particles in 20 minutes or less — a rate that’s consistent with continuous processing. In their design for an “integrated synthesis system,” the particles coming out of the baghouse are deposited on a belt that carries them for 10 or 20 minutes through a furnace. A deagglomerator then breaks any attached particles apart, and the cathode powder emerges, ready to be fabricated into a high-performance cathode for a lithium-ion battery. The cathode powders for high-performance lithium-ion batteries would thus be manufactured at unprecedented speed, low cost, and low energy use.

    Deng notes that every component in their integrated system is already used in industry, generally at a large scale and high flow-through rate. “That’s why we see great potential for our technology to be commercialized and scaled up,” she says. “Where our expertise comes into play is in designing the combustion chamber to control the temperature and heating rate so as to produce particles with the desired morphology.” And while a detailed economic analysis has yet to be performed, it seems clear that their technique will be faster, the equipment simpler, and the energy use lower than other methods of manufacturing cathode materials for lithium-ion batteries — potentially a major contribution to the ongoing energy transition.

    This research was supported by the MIT Department of Mechanical Engineering.

    This article appears in the Winter 2023 issue of Energy Futures, the magazine of the MIT Energy Initiative.