More stories

  • Getting the carbon out of India’s heavy industries

    The world’s third largest carbon emitter after China and the United States, India ranks seventh in a major climate risk index. Unless India, along with the nearly 200 other signatory nations of the Paris Agreement, takes aggressive action to keep global warming well below 2 degrees Celsius relative to preindustrial levels, physical and financial losses from floods, droughts, and cyclones could become more severe than they are today. So, too, could health impacts associated with the hazardous air pollution levels now affecting more than 90 percent of its population.  

    To address both climate and air pollution risks and meet its population’s escalating demand for energy, India will need to dramatically decarbonize its energy system in the coming decades. To that end, its initial Paris Agreement climate policy pledge calls for a reduction in carbon dioxide intensity of GDP by 33-35 percent by 2030 from 2005 levels, and an increase in non-fossil-fuel-based power to about 40 percent of cumulative installed capacity in 2030. At the COP26 international climate change conference, India announced more aggressive targets, including the goal of achieving net-zero emissions by 2070.

    Meeting its climate targets will require emissions reductions in every economic sector, including those where emissions are particularly difficult to abate. In such sectors, which involve energy-intensive industrial processes (production of iron and steel; nonferrous metals such as copper, aluminum, and zinc; cement; and chemicals), decarbonization options are limited and more expensive than in other sectors. Whereas replacing coal and natural gas with solar and wind could lower carbon dioxide emissions in electric power generation and transportation, no easy substitutes can be deployed in many heavy industrial processes that release CO2 into the air as a byproduct.

    However, other methods could be used to lower the emissions associated with these processes, which draw upon roughly 50 percent of India’s natural gas, 25 percent of its coal, and 20 percent of its oil. Evaluating the potential effectiveness of such methods in the next 30 years, a new study in the journal Energy Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change is the first to explicitly explore emissions-reduction pathways for India’s hard-to-abate sectors.

    Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model, the study assesses existing emissions levels in these sectors and projects how much they can be reduced by 2030 and 2050 under different policy scenarios. Aimed at decarbonizing industrial processes, the scenarios include the use of subsidies to increase electricity use, incentives to replace coal with natural gas, measures to improve industrial resource efficiency, policies to put a price on carbon, carbon capture and storage (CCS) technology, and hydrogen in steel production.

    The researchers find that India’s 2030 Paris Agreement pledge may still drive up fossil fuel use and associated greenhouse gas emissions, with projected carbon dioxide emissions from hard-to-abate sectors rising by about 2.6 times from 2020 to 2050. But scenarios that also promote electrification, natural gas support, and resource efficiency in hard-to-abate sectors can lower their CO2 emissions by 15-20 percent.

    While appearing to move the needle in the right direction, those reductions are ultimately canceled out by increased demand for the products that emerge from these sectors. So what’s the best path forward?

    The researchers conclude that only the incentive of carbon pricing or the advance of disruptive technology can move hard-to-abate sector emissions below their current levels. To achieve significant emissions reductions, they maintain, the price of carbon must be high enough to make CCS economically viable. In that case, reductions of 80 percent below current levels could be achieved by 2050.
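    To put those scenario figures side by side, here is a minimal back-of-the-envelope sketch in Python. The 2020 baseline is an arbitrary index (an assumption, not a number from the study); only the relative changes quoted above are taken from the article.

        # Back-of-the-envelope comparison of the scenario outcomes described above.
        # The 2020 baseline is an arbitrary index (assumption); only the relative
        # changes (2.6x growth, 15-20 percent cut, 80 percent cut) come from the article.

        baseline_2020 = 100.0                            # index value, 2020 = 100 (placeholder)

        reference_2050 = baseline_2020 * 2.6             # reference pathway: ~2.6x growth by 2050
        efficiency_2050 = reference_2050 * (1 - 0.175)   # electrification/gas/efficiency: ~15-20% lower
        ccs_2050 = baseline_2020 * (1 - 0.80)            # high carbon price + CCS: ~80% below current levels

        for name, value in [("reference pathway", reference_2050),
                            ("electrification + efficiency", efficiency_2050),
                            ("carbon price + CCS", ccs_2050)]:
            print(f"{name:>30}: {value:6.1f} (index, 2020 = 100)")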

    “Absent major support from the government, India will be unable to reduce carbon emissions in its hard-to-abate sectors in alignment with its climate targets,” says MIT Joint Program deputy director Sergey Paltsev, the study’s lead author. “A comprehensive government policy could provide robust incentives for the private sector in India and generate favorable conditions for foreign investments and technology advances. We encourage decision-makers to use our findings to design efficient pathways to reduce emissions in those sectors, and thereby help lower India’s climate and air pollution-related health risks.”

  • Better living through multicellular life cycles

    Cooperation is a core part of life for many organisms, ranging from microbes to complex multicellular life. It emerges when individuals share resources or partition a task in such a way that each derives a greater benefit when acting together than they could on their own. For example, birds and fish flock to evade predators, slime mold swarms to hunt for food and reproduce, and bacteria form biofilms to resist stress.

    Individuals must live in the same “neighborhood” to cooperate. For bacteria, this neighborhood can be as small as tens of microns. But in environments like the ocean, it’s rare for cells with the same genetic makeup to co-occur in the same neighborhood on their own. And this necessity poses a puzzle to scientists: In environments where survival hinges on cooperation, how do bacteria build their neighborhood?

    To study this problem, MIT professor Otto X. Cordero and colleagues took inspiration from nature: They developed a model system around a common coastal seawater bacterium that requires cooperation to eat sugars from brown algae. In the system, single cells were initially suspended in seawater too far away from other cells to cooperate. To share resources and grow, the cells had to find a mechanism of creating a neighborhood. “Surprisingly, each cell was able to divide and create its own neighborhood of clones by forming tightly packed clusters,” says Cordero, associate professor in the Department of Civil and Environmental Engineering.

    A new paper, published today in Current Biology, demonstrates how an algae-eating bacterium solves the engineering challenge of creating local cell density starting from a single-celled state.

    “A key discovery was the importance of phenotypic heterogeneity in supporting this surprising mechanism of clonal cooperation,” says Cordero, lead author of the new paper.

    Using a combination of microscopy, transcriptomics, and labeling experiments to profile a cellular metabolic state, the researchers found that cells phenotypically differentiate into a sticky “shell” population and a motile, carbon-storing “core.” The researchers propose that shell cells create the cellular neighborhood needed to sustain cooperation while core cells accumulate stores of carbon that support further clonal reproduction when the multicellular structure ruptures.

    This work addresses a key piece in the bigger challenge of understanding the bacterial processes that shape our earth, such as the cycling of carbon from dead organic matter back into food webs and the atmosphere. “Bacteria are fundamentally single cells, but often what they accomplish in nature is done through cooperation. We have much to uncover about what bacteria can accomplish together and how that differs from their capacity as individuals,” adds Cordero.

    Co-authors include Julia Schwartzman and Ali Ebrahimi, former postdocs in the Cordero Lab. Other co-authors are Gray Chadwick, a former graduate student at Caltech; Yuya Sato, a senior researcher at Japan’s National Institute of Advanced Industrial Science and Technology; Benjamin Roller, a current postdoc at the University of Vienna; and Victoria Orphan of Caltech.

    Funding was provided by the Simons Foundation. Individual authors received support from the Swiss National Science Foundation, Japan Society for the Promotion of Science, the U.S. National Science Foundation, the Kavli Institute of Theoretical Physics, and the National Institutes of Health.

  • Kerry Emanuel: A climate scientist and meteorologist in the eye of the storm

    Kerry Emanuel once joked that whenever he retired, he would start a “hurricane safari” so other people could experience what it’s like to fly into the eye of a hurricane.

    “All of a sudden, the turbulence stops, the sun comes out, bright sunshine, and it’s amazingly calm. And you’re in this grand stadium [of clouds miles high],” he says. “It’s quite an experience.”

    While the hurricane safari is unlikely to come to fruition — “You can’t just conjure up a hurricane,” he explains — Emanuel, a world-leading expert on links between hurricanes and climate change, is retiring from teaching in the Department of Earth, Atmospheric and Planetary Sciences (EAPS) at MIT after a more than 40-year career.

    Best known for his foundational contributions to the science of tropical cyclones, climate, and links between them, Emanuel has also been a prominent voice in public debates on climate change, and what we should do about it.

    “Kerry has had an enormous effect on the world through the students and junior scientists he has trained,” says William Boos PhD ’08, an atmospheric scientist at the University of California at Berkeley. “He’s a brilliant enough scientist and theoretician that he didn’t need any of us to accomplish what he has, but he genuinely cares about educating new generations of scientists and helping to launch their careers.”

    In recognition of Emanuel’s teaching career and contributions to science, a symposium was held in his honor at MIT on June 21 and 22, organized by several of his former students and collaborators, including Boos. Research presented at the symposium focused on the many fields influenced by Emanuel’s more than 200 published research papers — on everything from forecasting the risks posed by tropical cyclones to understanding how rainfall is produced by continent-sized patterns of atmospheric circulation.

    Emanuel’s career observing perturbations of Earth’s atmosphere started earlier than he can remember. “According to my older brother, from the age of 2, I would crawl to the window whenever there was a thunderstorm,” he says. At first, those were the rolling thunderheads of the Midwest where he grew up, then it was the edges of hurricanes during a few teenage years in Florida. Eventually, he would find himself watching from the very eye of the storm, both physically and mathematically.

    Emanuel attended MIT both as an undergraduate studying Earth and planetary sciences, and for his PhD in meteorology, writing a dissertation on thunderstorms that form ahead of cold fronts. Within the department, he worked with some of the central figures of modern meteorology such as Jule Charney, Fred Sanders, and Edward Lorenz — the founder of chaos theory.

    After receiving his PhD in 1978, Emanuel joined the faculty of the University of California at Los Angeles. During this period, he also took a semester sabbatical to film the wind speeds of tornadoes in Texas and Oklahoma. After three years, he returned to MIT and joined the Department of Meteorology in 1981. Two years later, the department merged with Earth and Planetary Sciences to form EAPS as it is known today, and where Emanuel has remained ever since.

    At MIT, he shifted scales. The thunderstorms and tornadoes that had been the focus of Emanuel’s research up to then were local atmospheric phenomena, or “mesoscale” in the language of meteorologists. The larger “synoptic scale” storms that are hurricanes blew into Emanuel’s research when, as a young faculty member, he was asked to teach a class in tropical meteorology; in prepping for the class, Emanuel found that his notes on hurricanes from graduate school no longer made sense.

    “I realized I didn’t understand them because they couldn’t have been correct,” he says. “And so I set out to try to find a much better theoretical formulation for hurricanes.”

    He soon made two important contributions. In 1986, his paper “An Air-Sea Interaction Theory for Tropical Cyclones. Part I: Steady-State Maintenance” developed a new theory for upper limits of hurricane intensity given atmospheric conditions. This work in turn led to even larger-scale questions to address. “That upper bound had to be dependent on climate, and it was likely to go up if we were to warm the climate,” Emanuel says — a phenomenon he explored in another paper, “The Dependence of Hurricane Intensity on Climate,” which showed how warming sea surface temperatures and changing atmospheric conditions from a warming climate would make hurricanes more destructive.

    “In my view, this is among the most remarkable achievements in theoretical geophysics,” says Adam Sobel PhD ’98, an atmospheric scientist at Columbia University who got to know Emanuel after he graduated and became interested in tropical meteorology. “From first principles, using only pencil-and-paper analysis and physical reasoning, he derives a quantitative bound on hurricane intensity that has held up well over decades of comparison to observations” and underpins current methods of predicting hurricane intensity and how it changes with climate.
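    For readers curious what such a bound looks like, the potential-intensity relation at the heart of this theory is commonly written as V_max^2 ~ (C_k/C_D) * [(T_s - T_o)/T_o] * (k_0* - k). The short sketch below evaluates it with representative tropical values; the exchange-coefficient ratio, temperatures, and humidities are illustrative assumptions, not figures from Emanuel’s papers.

        import numpy as np

        # Illustrative evaluation of the potential-intensity bound described above:
        #   V_max^2 ~ (Ck/Cd) * (Ts - To)/To * (k0* - k),
        # where the last factor is the enthalpy disequilibrium between the sea surface
        # and the boundary-layer air. All numbers are representative values, chosen
        # only for illustration.

        Ck_over_Cd = 0.9       # ratio of enthalpy to drag exchange coefficients (assumed)
        Ts = 300.0             # sea surface temperature, K (about 27 C)
        To = 200.0             # outflow temperature near the tropopause, K
        Lv = 2.5e6             # latent heat of vaporization, J/kg

        # Assuming surface air at the sea surface temperature, the enthalpy difference
        # reduces to Lv times the specific-humidity deficit of the boundary layer.
        q_sat_surface = 0.022  # saturation specific humidity at the sea surface (assumed)
        q_boundary = 0.017     # actual specific humidity of boundary-layer air (assumed)
        dk = Lv * (q_sat_surface - q_boundary)

        v_max = np.sqrt(Ck_over_Cd * (Ts - To) / To * dk)
        print(f"potential intensity ~ {v_max:.0f} m/s")   # roughly 75 m/s for these inputs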

    This and diverse subsequent work led to numerous honors, including membership in the American Philosophical Society, the National Academy of Sciences, and the American Academy of Arts and Sciences.

    Emanuel’s research was never confined to academic circles, however; when politicians and industry leaders voiced loud opposition to the idea that human-caused climate change posed a threat, he spoke up.

    “I felt kind of a duty to try to counter that,” says Emanuel. “I thought it was an interesting challenge to see if you could go out and convince what some people call climate deniers, skeptics, that this was a serious risk and we had to treat it as such.”

    In addition to many public lectures and media appearances discussing climate change, Emanuel penned a book for general audiences titled “What We Know About Climate Change,” as well as a widely read primer on climate change and risk assessment designed to influence business leaders.

    “Kerry has an unmatched physical understanding of tropical climate phenomena,” says Emanuel’s colleague, Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies at EAPS. “But he’s also a great communicator and has generously given his time to public outreach. His book ‘What We Know About Climate Change’ is a beautiful piece of work that is readily understandable and has captivated many a non-expert reader.”

    Along with a number of other prominent climate scientists, Emanuel also began advocating for expanding nuclear power as the most rapid path to decarbonizing the world’s energy systems.

    “I think the impediment to nuclear is largely irrational in the United States,” he says. “So, I’ve been trying to fight that just like I’ve been trying to fight climate denial.”

    One lesson Emanuel has taken from his public work on climate change is that skeptical audiences often respond better to issues framed in positive terms than to doom and gloom; he’s found emphasizing the potential benefits rather than the sacrifices involved in the energy transition can engage otherwise wary audiences.

    “It’s really not opposition to science, per se,” he says. “It’s fear of the societal changes they think are required to do something about it.”

    He has also worked to raise awareness about how insurance companies significantly underestimate climate risks in their policies, in particular by basing hurricane risk on unreliable historical data. One recent practical result has been a project by the First Street Foundation to assess the true flood risk of every property in the United States using hurricane models Emanuel developed.

    “I think it’s transformative,” Emanuel says of the project with First Street. “That may prove to be the most substantive research I’ve done.”

    Though Emanuel is retiring from teaching, he has no plans to stop working. “When I say ‘retire’ it’s in quotes,” he says. In 2011, Emanuel and Professor of Geophysics Daniel Rothman founded the Lorenz Center, a climate research center at MIT in honor of Emanuel’s mentor and friend Edward Lorenz. Emanuel will continue to participate in work at the center, which aims to counter what Emanuel describes as a trend away from “curiosity-driven” work in climate science.

    “Even if there were no such thing as global warming, [climate science] would still be a really, really exciting field,” says Emanuel. “There’s so much to understand about climate, about the climates of the past, about the climates of other planets.”

    In addition to work with the Lorenz Center, he’s become interested once again in tornadoes and severe local storms, and understanding whether climate also controls such local phenomena. He’s also involved in two of MIT’s Climate Grand Challenges projects focused on translating climate hazards to explicit financial and health risks — what will bring the dangers of climate change home to people, he says, is for the public to understand more concrete risks, like agricultural failure, water shortages, electricity shortages, and severe weather events. Capturing that will drive the next few years of his work.

    “I’m going to be stepping up research in some respects,” he says, now living full-time at his home in Maine.

    Of course, “retiring” does mean a bit more free time for new pursuits, like learning a language or an instrument, and “rediscovering the art of sailing,” says Emanuel. He’s looking forward to those days on the water, whatever storms are to come.

  • Making hydrogen power a reality

    For decades, government and industry have looked to hydrogen as a potentially game-changing tool in the quest for clean energy. As far back as the early days of the Clinton administration, energy sector observers and public policy experts have extolled the virtues of hydrogen — to the point that some people have joked that hydrogen is the energy of the future, “and always will be.”

    Even as wind and solar power have become commonplace in recent years, hydrogen has been held back by high costs and other challenges. But the fuel may finally be poised to have its moment. At the MIT Energy Initiative Spring Symposium — entitled “Hydrogen’s role in a decarbonized energy system” — experts discussed hydrogen production routes, hydrogen consumption markets, the path to a robust hydrogen infrastructure, and policy changes needed to achieve a “hydrogen future.”

    During one panel, “Options for producing low-carbon hydrogen at scale,” four experts laid out existing and planned efforts to leverage hydrogen for decarbonization. 

    “The race is on”

    Huyen N. Dinh, a senior scientist and group manager at the National Renewable Energy Laboratory (NREL), is the director of HydroGEN, a consortium of several U.S. Department of Energy (DOE) national laboratories that accelerates research and development of innovative and advanced water splitting materials and technologies for clean, sustainable, and low-cost hydrogen production.

    For the past 14 years, Dinh has worked on fuel cells and hydrogen production for NREL. “We think that the 2020s is the decade of hydrogen,” she said. Dinh believes that the energy carrier is poised to come into its own over the next few years, pointing to several domestic and international activities surrounding the fuel and citing a Hydrogen Council report that projected the future impacts of hydrogen — including 30 million jobs and $2.5 trillion in global revenue by 2050.

    “Now is the time for hydrogen, and the global race is on,” she said.

    Dinh also explained the parameters of the Hydrogen Shot — the first of the DOE’s “Energy Earthshots” aimed at accelerating breakthroughs for affordable and reliable clean energy solutions. Hydrogen fuel currently costs around $5 per kilogram to produce, and the Hydrogen Shot’s stated goal is to bring that down by 80 percent to $1 per kilogram within a decade.

    The Hydrogen Shot will be supported by $9.5 billion in funding from last year’s bipartisan infrastructure law for at least four clean hydrogen hubs located in different parts of the United States, as well as for extensive research and development, manufacturing, and recycling. Still, Dinh noted that it took more than 40 years for solar and wind power to become cost-competitive, and industry, government, national lab, and academic leaders now hope to achieve similar reductions in hydrogen fuel costs over a much shorter time frame. In the near term, she said, stakeholders will need to improve the efficiency, durability, and affordability of hydrogen production through electrolysis, which uses electricity to split water, powered by today’s renewable and nuclear sources. Over the long term, the focus may shift to splitting water more directly through heat or solar energy, she said.
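    As a rough illustration of why electrolyzer efficiency and electricity price dominate that cost target, the sketch below estimates the electricity cost alone of a kilogram of electrolytic hydrogen. The heating value is a standard engineering figure; the efficiency and electricity price are assumptions chosen for illustration, not numbers cited at the symposium.

        # Electricity cost alone of one kilogram of electrolytic hydrogen, ignoring
        # capital and operating costs. The heating value is a standard figure; the
        # efficiency and electricity price below are illustrative assumptions.

        H2_LHV_KWH_PER_KG = 33.3   # lower heating value of hydrogen, ~33.3 kWh per kg

        def electricity_cost_per_kg(price_usd_per_kwh, electrolyzer_efficiency):
            """Electricity cost (USD) to produce 1 kg of hydrogen by electrolysis."""
            energy_needed_kwh = H2_LHV_KWH_PER_KG / electrolyzer_efficiency
            return energy_needed_kwh * price_usd_per_kwh

        # Example: a 70 percent efficient electrolyzer (LHV basis) buying $0.03/kWh power
        cost = electricity_cost_per_kg(0.03, 0.70)
        print(f"${cost:.2f} per kg from electricity alone")   # ~$1.43/kg, before capital costs

    Even with fairly cheap power, the electricity bill alone lands above the $1-per-kilogram target, which is one reason efficiency, durability, and capital costs all need to improve together.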

    “The time frame is short, the competition is intense, and a coordinated effort is critical for domestic competitiveness,” Dinh said.

    Hydrogen across continents

    Wambui Mutoru, principal engineer for international commercial development, exploration, and production international at the Norwegian global energy company Equinor, said that hydrogen is an important component in the company’s ambitions to be carbon-neutral by 2050. The company, in collaboration with partners, has several hydrogen projects in the works, and Mutoru laid out the company’s Hydrogen to Humber project in Northern England. Currently, the Humber region emits more carbon dioxide than any other industrial cluster in the United Kingdom — 50 percent more, in fact, than the next-largest carbon emitter.   

    “The ambition here is for us to deploy the world’s first at-scale hydrogen value chain to decarbonize the Humber industrial cluster,” Mutoru said.

    The project consists of three components: a clean hydrogen production facility, an onshore hydrogen and carbon dioxide transmission network, and offshore carbon dioxide transportation and storage operations. Mutoru highlighted the importance of carbon capture and storage in hydrogen production. Equinor, she said, has captured and sequestered carbon offshore for more than 25 years, storing more than 25 million tons of carbon dioxide during that time.

    Mutoru also touched on Equinor’s efforts to build a decarbonized energy hub in the Appalachian region of the United States, covering territory in Ohio, West Virginia, and Pennsylvania. By 2040, she said, the company’s ambition is to produce about 1.5 million tons of clean hydrogen per year in the region — roughly equivalent to 6.8 gigawatts of electricity — while also storing 30 million tons of carbon dioxide.

    Mutoru acknowledged that the biggest challenge facing potential hydrogen producers is the current lack of viable business models. “Resolving that challenge requires cross-industry collaboration, and supportive policy frameworks so that the market for hydrogen can be built and sustained over the long term,” she said.

    Confronting barriers

    Gretchen Baier, executive external strategy and communications leader for Dow, noted that the company already produces hydrogen in multiple ways. For one, Dow operates the world’s largest ethane cracker, in Texas. An ethane cracker heats ethane to break apart molecular bonds to form ethylene, with hydrogen one of the byproducts of the process. Baier also showed a slide of the 1891 patent for the electrolysis of brine water, which likewise produces hydrogen. The company still engages in this practice, but it does not have an effective way of using the resulting hydrogen as fuel for its own operations.

    “Just take a moment to think about that,” Baier said. “We’ve been talking about hydrogen production and the cost of it, and this is basically free hydrogen. And it’s still too much of a barrier to somewhat recycle that and use it for ourselves. The environment is clearly changing, and we do have plans for that, but I think that kind of sets some of the challenges that face industry here.”

    However, Baier said, hydrogen is expected to play a significant role in Dow’s future as the company attempts to decarbonize by 2050. The company, she said, plans to optimize hydrogen allocation and production, retrofit turbines for hydrogen fueling, and purchase clean hydrogen. By 2040, Dow expects more than 60 percent of its sites to be hydrogen-ready.

    Baier noted that hydrogen fuel is not a “panacea,” but rather one among many potential contributors as industry attempts to reduce or eliminate carbon emissions in the coming decades. “Hydrogen has an important role, but it’s not the only answer,” she said.

    “This is real”

    Colleen Wright is vice president of corporate strategy for Constellation, which recently separated from Exelon Corporation. (Exelon now owns the former company’s regulated utilities, such as Commonwealth Edison and Baltimore Gas and Electric, while Constellation owns the competitive generation and supply portions of the business.) Wright stressed the advantages of nuclear power in hydrogen production, which she said include superior economics, low barriers to implementation, and scalability.

    “A quarter of emissions in the world are currently from hard-to-decarbonize sectors — the industrial sector, steel making, heavy-duty transportation, aviation,” she said. “These are really challenging decarbonization sectors, and as we continue to expand and electrify, we’re going to need more supply. We’re also going to need to produce clean hydrogen using emissions-free power.”

    “The scale of nuclear power plants is uniquely suited to be able to scale hydrogen production,” Wright added. She mentioned Constellation’s Nine Mile Point site in the State of New York, which received a DOE grant for a pilot program that will see a proton exchange membrane electrolyzer installed at the site.

    “We’re very excited to see hydrogen go from a [research and development] conversation to a commercial conversation,” she said. “We’ve been calling it a little bit of a ‘middle-school dance.’ Everybody is standing around the circle, waiting to see who’s willing to put something at stake. But this is real. We’re not dancing around the edges. There are a lot of people who are big players, who are willing to put skin in the game today.”

  • Evan Leppink: Seeking a way to better stabilize the fusion environment

    “Fusion energy was always one of those kind-of sci-fi technologies that you read about,” says nuclear science and engineering PhD candidate Evan Leppink. He’s recalling the time before fusion became a part of his daily hands-on experience at MIT’s Plasma Science and Fusion Center, where he is studying a unique way to drive current in a tokamak plasma using radiofrequency (RF) waves. 

    Now, an award from the U.S. Department of Energy’s (DOE) Office of Science Graduate Student Research (SCGSR) Program will support his work with a 12-month residency at the DIII-D National Fusion Facility in San Diego, California.

    Like all tokamaks, DIII-D generates hot plasma inside a doughnut-shaped vacuum chamber wrapped with magnets. Because plasma will follow magnetic field lines, tokamaks are able to contain the turbulent plasma fuel as it gets hotter and denser, keeping it away from the edges of the chamber where it could damage the wall materials. A key part of the tokamak concept is that part of the magnetic field is created by electrical currents in the plasma itself, which helps to confine and stabilize the configuration. Researchers often launch high-power RF waves into tokamaks to drive that current.

    Leppink will be contributing to research, led by his MIT advisor Steve Wukitch, that pursues launching RF waves in DIII-D using a unique compact antenna placed on the tokamak center column. Typically, antennas are placed inside the tokamak on the outer edge of the doughnut, farthest from the central hole (or column), primarily because access and installation are easier there. This is known as the “low-field side,” because the magnetic field is lower there than at the central column, the “high-field side.” This MIT-led experiment, for the first time, will mount an antenna on the high-field side. There is some theoretical evidence that placing the wave launcher there could improve power penetration and current drive efficiency. And because the plasma environment is less harsh on this side, the antenna will survive longer, a factor important for any future power-producing tokamak.

    Leppink’s work on DIII-D focuses specifically on measuring the density of plasmas generated in the tokamak, for which he developed a “reflectometer.” This small antenna launches microwaves into the plasma, which reflect back to the antenna to be measured. The time that it takes for these microwaves to traverse the plasma provides information about the plasma density, allowing researchers to build up detailed density profiles, data critical for injecting RF power into the plasma.
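    The underlying physics is the standard microwave-reflectometry relation: an ordinary-mode wave of frequency f reflects at the layer where the local plasma frequency equals f, which corresponds to a particular electron density. The sketch below evaluates that cutoff density for a few illustrative frequencies; these are generic numbers, not the specifications of Leppink’s instrument.

        import numpy as np
        from scipy.constants import elementary_charge, electron_mass, epsilon_0

        # Principle behind microwave reflectometry: an ordinary-mode wave of frequency f
        # reflects where the local plasma frequency equals f, i.e. at the cutoff density
        #   n_e = epsilon_0 * m_e * (2*pi*f)^2 / e^2.
        # Sweeping f probes successively denser layers; the measured delays are then
        # inverted into a density profile. The frequencies below are illustrative only.

        def cutoff_density(freq_hz):
            """Electron density (m^-3) at which an O-mode wave of this frequency reflects."""
            return epsilon_0 * electron_mass * (2 * np.pi * freq_hz) ** 2 / elementary_charge ** 2

        for f_ghz in (20, 40, 60):
            print(f"{f_ghz} GHz reflects at n_e ~ {cutoff_density(f_ghz * 1e9):.1e} m^-3")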

    “Research shows that when we try to inject these waves into the plasma to drive the current, they can lose power as they travel through the edge region of the tokamak, and can even have problems entering the core of the plasma, where we would most like to direct them,” says Leppink. “My diagnostic will measure that edge region on the high-field side near the launcher in great detail, which provides us a way to directly verify calculations or compare actual results with simulation results.”

    Although focused on his own research, Leppink has excelled at priming other students for success in their studies and research. In 2021 he received the NSE Outstanding Teaching Assistant and Mentorship Award.

    “The highlights of TA’ing for me were the times when I could watch students go from struggling with a difficult topic to fully understanding it, often with just a nudge in the right direction and then allowing them to follow their own intuition the rest of the way,” he says.

    The right direction for Leppink points toward San Diego and RF current drive experiments on DIII-D. He is grateful for the support from the SCGSR, a program created to prepare graduate students like him for science, technology, engineering, or mathematics careers important to the DOE Office of Science mission. It provides graduate thesis research opportunities through extended residency at DOE national laboratories. He has already made several trips to DIII-D, in part to install his reflectometer, and has been impressed with the size of the operation.

    “It takes a little while to kind of compartmentalize everything and say, ‘OK, well, here’s my part of the machine. This is what I’m doing.’ It can definitely be overwhelming at times. But I’m blessed to be able to work on what has been the workhorse tokamak of the United States for the past few decades.”

  • Could used beer yeast be the solution to heavy metal contamination in water?

    A new analysis by researchers at MIT’s Center for Bits and Atoms (CBA) has found that inactive yeast could be effective as an inexpensive, abundant, and simple material for removing lead contamination from drinking water supplies. The study shows that this approach can be efficient and economic, even down to part-per-billion levels of contamination. Serious damage to human health is known to occur even at these low levels.

    The method is so efficient that the team has calculated that waste yeast discarded from a single brewery in Boston would be enough to treat the city’s entire water supply. Such a fully sustainable system would not only purify the water but also divert what would otherwise be a waste stream needing disposal.

    The findings are detailed today in the journal Communications Earth & Environment, in a paper by MIT Research Scientist Patritsia Stathatou; Brown University postdoc and MIT Visiting Scholar Christos Athanasiou; MIT Professor Neil Gershenfeld, the director of CBA; and nine others at MIT, Brown, Wellesley College, Nanyang Technological University, and the National Technical University of Athens.

    Lead and other heavy metals in water are a significant global problem that continues to grow because of electronic waste and discharges from mining operations. In the U.S. alone, more than 12,000 miles of waterways are impacted by acidic mine drainage rich in heavy metals, the country’s leading source of water pollution. And unlike organic pollutants, most of which can eventually be broken down, heavy metals don’t biodegrade; they persist indefinitely and bioaccumulate. They are also either impossible or very expensive to remove completely by conventional methods such as chemical precipitation or membrane filtration.

    Lead is highly toxic, even at tiny concentrations, especially affecting children as they grow. The European Union has reduced its standard for allowable lead in drinking water from 10 parts per billion to 5 parts per billion. In the U.S., the Environmental Protection Agency has declared that no level at all in water supplies is safe. And average levels in bodies of surface water globally are 10 times higher than they were 50 years ago, ranging from 10 parts per billion in Europe to hundreds of parts per billion in South America.

    “We don’t just need to minimize the existence of lead; we need to eliminate it in drinking water,” says Stathatou. “And the fact is that the conventional treatment processes are not doing this effectively when the initial concentrations they have to remove are low, in the parts-per-billion scale and below. They either fail to completely remove these trace amounts, or in order to do so they consume a lot of energy and they produce toxic byproducts.”

    The solution studied by the MIT team is not a new one — a process called biosorption, in which inactive biological material is used to remove heavy metals from water, has been known for a few decades. But the process has been studied and characterized only at much higher concentrations, at more than one part-per-million levels. “Our study demonstrates that the process can indeed work efficiently at the much lower concentrations of typical real-world water supplies, and investigates in detail the mechanisms involved in the process,” Athanasiou says.

    The team studied the use of a type of yeast widely used in brewing and in industrial processes, called S. cerevisiae, on pure water spiked with trace amounts of lead. They demonstrated that a single gram of the inactive, dried yeast cells can remove up to 12 milligrams of lead in aqueous solutions with initial lead concentrations below 1 part per million. They also showed that the process is very rapid, taking less than five minutes to complete.

    Because the yeast cells used in the process are inactive and desiccated, they require no particular care, unlike processes that rely on living biomass, which must be supplied with nutrients and sunlight to stay active. What’s more, yeast is already abundantly available as a waste product of beer brewing and of various other fermentation-based industrial processes.

    Stathatou has estimated that to clean a water supply for a city the size of Boston, which uses about 200 million gallons a day, would require about 20 tons of yeast per day, or about 7,000 tons per year. By comparison, one single brewery, the Boston Beer Company, generates 20,000 tons a year of surplus yeast that is no longer useful for fermentation.
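    A quick arithmetic check of those scale-up numbers, in Python: the daily water volume and yeast tonnage come from the article, while the assumed lead concentration (10 parts per billion) and the resulting implied uptake are illustrative assumptions only.

        # Quick check of the scale-up arithmetic. The water volume and yeast tonnage are
        # the article's figures; the 10 ppb lead level and the implied uptake are
        # illustrative assumptions.

        GALLON_L = 3.785
        water_per_day_L = 200e6 * GALLON_L      # ~200 million gallons/day for Boston
        yeast_per_day_g = 20 * 1e6              # ~20 metric tons/day

        print(f"yeast per year: ~{20 * 365:,.0f} tons")   # ~7,300 tons vs ~7,000 quoted,
                                                          # well under ~20,000 tons of brewery surplus

        # If the water carried 10 ppb (0.01 mg/L) of lead, the implied effective uptake is:
        lead_removed_mg = water_per_day_L * 0.01
        print(f"implied uptake: {lead_removed_mg / yeast_per_day_g:.2f} mg of lead per g of yeast")
        # Well below the up-to-12 mg/g measured in the lab, leaving ample margin for the
        # lower effective capacities expected at trace (ppb-level) concentrations.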

    The researchers also performed a series of tests to determine that the yeast cells are responsible for biosorption. Athanasiou says that “exploring biosorption mechanisms at such challenging concentrations is a tough problem. We were the first to use a mechanics perspective to unravel biosorption mechanisms, and we discovered that the mechanical properties of the yeast cells change significantly after lead uptake. This provides fundamentally new insights for the process.”

    Devising a practical system for processing the water and retrieving the yeast, which could then be separated from the lead for reuse, is the next stage of the team’s research, they say.

    “To scale up the process and actually put it in place, you need to embed these cells in a kind of filter, and this is the work that’s currently ongoing,” Stathatou says. They are also looking at ways of recovering both the cells and the lead. “We need to conduct further experiments, but there is the option to get both back,” she says.

    The same material can potentially be used to remove other heavy metals, such as cadmium and copper, but that will require further research to quantify the effective rates for those processes, the researchers say.

    “This research revealed a very promising, inexpensive, and environmentally friendly solution for lead removal,” says Sivan Zamir, vice president of Xylem Innovation Labs, a water technology research firm, who was not associated with this research. “It also deepened our understanding of the biosorption process, paving the way for the development of materials tailored to removal of other heavy metals.”

    The team also included Marios Tsezos at the National Technical University of Athens, in Greece; John Gross at Wellesley College; Camron Blackburn, Filippos Tourlomousis, and Andreas Mershin at MIT’s CBA; Brian Sheldon, Nitin Padture, and Eric Darling at Brown University; and Huajian Gao at Brown University and Nanyang Technological University, in Singapore.

  • Study finds natural sources of air pollution exceed air quality guidelines in many regions

    Alongside climate change, air pollution is one of the biggest environmental threats to human health. Tiny particles known as particulate matter or PM2.5 (named for their diameter of just 2.5 micrometers or less) are a particularly hazardous type of pollutant. These particles are produced from a variety of sources, including wildfires and the burning of fossil fuels, and can enter our bloodstream, travel deep into our lungs, and cause respiratory and cardiovascular damage. Exposure to particulate matter is responsible for millions of premature deaths globally every year.

    In response to the increasing body of evidence on the detrimental effects of PM2.5, the World Health Organization (WHO) recently updated its air quality guidelines, lowering its recommended annual PM2.5 exposure guideline by 50 percent, from 10 micrograms per cubic meter (μg/m3) to 5 μg/m3. These updated guidelines signify an aggressive attempt to promote the regulation and reduction of anthropogenic emissions in order to improve global air quality.

    A new study by researchers in the MIT Department of Civil and Environmental Engineering explores whether the updated air quality guideline of 5 μg/m3 is realistically attainable across different regions of the world, particularly if anthropogenic emissions are aggressively reduced.

    The first question the researchers wanted to investigate was to what degree moving to a no-fossil-fuel future would help different regions meet this new air quality guideline.

    “The answer we found is that eliminating fossil-fuel emissions would improve air quality around the world, but while this would help some regions come into compliance with the WHO guidelines, for many other regions high contributions from natural sources would impede their ability to meet that target,” says senior author Colette Heald, the Germeshausen Professor in the MIT departments of Civil and Environmental Engineering, and Earth, Atmospheric and Planetary Sciences. 

    The study by Heald, Professor Jesse Kroll, and graduate students Sidhant Pai and Therese Carter, published June 6 in the journal Environmental Science and Technology Letters, finds that over 90 percent of the global population is currently exposed to average annual concentrations that are higher than the recommended guideline. The authors go on to demonstrate that over 50 percent of the world’s population would still be exposed to PM2.5 concentrations that exceed the new air quality guidelines, even in the absence of all anthropogenic emissions.

    This is due to the large natural sources of particulate matter — dust, sea salt, and organics from vegetation — that still exist in the atmosphere when anthropogenic emissions are removed from the air. 

    “If you live in parts of India or northern Africa that are exposed to large amounts of fine dust, it can be challenging to reduce PM2.5 exposures below the new guideline,” says Sidhant Pai, co-lead author and graduate student. “This study challenges us to rethink the value of different emissions abatement controls across different regions and suggests the need for a new generation of air quality metrics that can enable targeted decision-making.”

    The researchers conducted a series of model simulations to explore the viability of achieving the updated PM2.5 guidelines worldwide under different emissions reduction scenarios, using 2019 as a representative baseline year. 

    Their model simulations used a suite of different anthropogenic sources that could be turned on and off to study the contribution of a particular source. For instance, the researchers conducted a simulation that turned off all human-based emissions in order to determine the amount of PM2.5 pollution that could be attributed to natural and fire sources. By analyzing the chemical composition of the PM2.5 aerosol in the atmosphere (e.g., dust, sulfate, and black carbon), the researchers were also able to get a more accurate understanding of the most important PM2.5 sources in a particular region. For example, elevated PM2.5 concentrations in the Amazon were shown to predominantly consist of carbon-containing aerosols from sources like deforestation fires. Conversely, nitrogen-containing aerosols were prominent in Northern Europe, with large contributions from vehicles and fertilizer usage. The two regions would thus require very different policies and methods to improve their air quality. 
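    The source-toggling logic can be illustrated with a toy calculation like the one below. The per-source concentrations are invented placeholders for two hypothetical regions, not output from the study’s simulations; only the 5 μg/m3 guideline is real.

        # Toy version of the source-toggling analysis described above. The per-source
        # concentrations are invented placeholders for two hypothetical regions; only
        # the 5 ug/m3 WHO guideline is real.

        WHO_GUIDELINE = 5.0  # micrograms per cubic meter, annual mean

        regions = {
            "dust-dominated region": {"dust": 18.0, "sea salt": 1.0, "fires": 2.0,
                                      "fossil fuel": 9.0, "agriculture": 3.0},
            "industrialized region": {"dust": 1.0, "sea salt": 1.5, "fires": 0.5,
                                      "fossil fuel": 12.0, "agriculture": 6.0},
        }
        ANTHROPOGENIC = {"fossil fuel", "agriculture"}  # sources "turned off" in the experiment

        for name, sources in regions.items():
            total = sum(sources.values())
            natural_only = sum(v for k, v in sources.items() if k not in ANTHROPOGENIC)
            print(f"{name}: total {total:.1f}, natural-only {natural_only:.1f} ug/m3, "
                  f"meets guideline without human emissions: {natural_only <= WHO_GUIDELINE}")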

    “Analyzing particulate pollution across individual chemical species allows for mitigation and adaptation decisions that are specific to the region, as opposed to a one-size-fits-all approach, which can be challenging to execute without an understanding of the underlying importance of different sources,” says Pai. 

    When the WHO air quality guidelines were last updated in 2005, they had a significant impact on environmental policies. Scientists could look at an area that was not in compliance and suggest high-level solutions to improve the region’s air quality. But as the guidelines have tightened, globally-applicable solutions to manage and improve air quality are no longer as evident. 

    “Another benefit of speciating is that some of the particles have different toxicity properties that are correlated to health outcomes,” says Therese Carter, co-lead author and graduate student. “It’s an important area of research that this work can help motivate. Being able to separate out that piece of the puzzle can provide epidemiologists with more insights on the different toxicity levels and the impact of specific particles on human health.”

    The authors view these new findings as an opportunity to expand and iterate on the current guidelines.  

    “Routine and global measurements of the chemical composition of PM2.5 would give policymakers information on what interventions would most effectively improve air quality in any given location,” says Jesse Kroll, a professor in the MIT departments of Civil and Environmental Engineering and Chemical Engineering. “But it would also provide us with new insights into how different chemical species in PM2.5 affect human health.”

    “I hope that as we learn more about the health impacts of these different particles, our work and that of the broader atmospheric chemistry community can help inform strategies to reduce the pollutants that are most harmful to human health,” adds Heald.

  • Cracking the case of Arctic sea ice breakup

    Despite its below-freezing temperatures, the Arctic is warming twice as fast as the rest of the planet. As Arctic sea ice melts, fewer bright surfaces are available to reflect sunlight back into space. When fractures open in the ice cover, the water underneath gets exposed. Dark, ice-free water absorbs the sun’s energy, heating the ocean and driving further melting — a vicious cycle. This warming in turn melts glacial ice, contributing to rising sea levels.

    Warming climate and rising sea levels endanger the nearly 40 percent of the U.S. population living in coastal areas, the billions of people who depend on the ocean for food and their livelihoods, and species such as polar bears and Arctic foxes. Reduced ice coverage is also making the once-impassable region more accessible, opening up new shipping lanes and ports. Interest in using these emerging trans-Arctic routes for product transit, extraction of natural resources (e.g., oil and gas), and military activity is turning an area traditionally marked by low tension and cooperation into one of global geopolitical competition.

    As the Arctic opens up, predicting when and where the sea ice will fracture becomes increasingly important in strategic decision-making. However, huge gaps exist in our understanding of the physical processes contributing to ice breakup. Researchers at MIT Lincoln Laboratory seek to help close these gaps by turning a data-sparse environment into a data-rich one. They envision deploying a distributed set of unattended sensors across the Arctic that will persistently detect and geolocate ice fracturing events. Concurrently, the network will measure various environmental conditions, including water temperature and salinity, wind speed and direction, and ocean currents at different depths. By correlating these fracturing events and environmental conditions, they hope to discover meaningful insights about what is causing the sea ice to break up. Such insights could help predict the future state of Arctic sea ice to inform climate modeling, climate change planning, and policy decision-making at the highest levels.

    “We’re trying to study the relationship between ice cracking, climate change, and heat flow in the ocean,” says Andrew March, an assistant leader of Lincoln Laboratory’s Advanced Undersea Systems and Technology Group. “Do cracks in the ice cause warm water to rise and more ice to melt? Do undersea currents and waves cause cracking? Does cracking cause undersea waves? These are the types of questions we aim to investigate.”

    Arctic access

    In March 2022, Ben Evans and Dave Whelihan, both researchers in March’s group, traveled for 16 hours across three flights to Prudhoe Bay, located on the North Slope of Alaska. From there, they boarded a small specialized aircraft and flew another 90 minutes to a three-and-a-half-mile-long sheet of ice floating 160 nautical miles offshore in the Arctic Ocean. In the weeks before their arrival, the U.S. Navy’s Arctic Submarine Laboratory had transformed this inhospitable ice floe into a temporary operating base called Ice Camp Queenfish, named after the first Sturgeon-class submarine to operate under the ice and the fourth to reach the North Pole. The ice camp featured a 2,500-foot-long runway, a command center, sleeping quarters to accommodate up to 60 personnel, a dining tent, and an extremely limited internet connection.

    At Queenfish, for the next four days, Evans and Whelihan joined U.S. Navy, Army, Air Force, Marine Corps, and Coast Guard members, and members of the Royal Canadian Air Force and Navy and United Kingdom Royal Navy, who were participating in Ice Exercise (ICEX) 2022. Over the course of about three weeks, more than 200 personnel stationed at Queenfish, Prudhoe Bay, and aboard two U.S. Navy submarines participated in this biennial exercise. The goals of ICEX 2022 were to assess U.S. operational readiness in the Arctic; increase our country’s experience in the region; advance our understanding of the Arctic environment; and continue building relationships with other services, allies, and partner organizations to ensure a free and peaceful Arctic. The infrastructure provided for ICEX concurrently enables scientists to conduct research in an environment — either in person or by sending their research equipment for exercise organizers to deploy on their behalf — that would be otherwise extremely difficult and expensive to access.

    In the Arctic, windchill temperatures can plummet to as low as 60 degrees Fahrenheit below zero, cold enough to freeze exposed skin within minutes. Winds and ocean currents can carry the entire camp beyond the reach of nearby emergency rescue aircraft, and the ice can crack at any moment. To ensure the safety of participants, a team of Navy meteorological specialists continually monitors the ever-changing conditions. The original camp location for ICEX 2022 had to be evacuated and relocated after a massive crack formed in the ice, delaying Evans’ and Whelihan’s trip. Even the newly selected site had a large crack form behind the camp and another crack that necessitated moving a number of tents.

    “Such cracking events are only going to increase as the climate warms, so it’s more critical now than ever to understand the physical processes behind them,” Whelihan says. “Such an understanding will require building technology that can persist in the environment despite these incredibly harsh conditions. So, it’s a challenge not only from a scientific perspective but also an engineering one.”

    “The weather always gets a vote, dictating what you’re able to do out here,” adds Evans. “The Arctic Submarine Laboratory does a lot of work to construct the camp and make it a safe environment where researchers like us can come to do good science. ICEX is really the only opportunity we have to go onto the sea ice in a place this remote to collect data.”

    A legacy of sea ice experiments

    Though this trip was Whelihan’s and Evans’ first to the Arctic region, staff from the laboratory’s Advanced Undersea Systems and Technology Group have been conducting experiments at ICEX since 2018. However, because of the Arctic’s remote location and extreme conditions, data collection has rarely been continuous over long periods of time or widespread across large areas. The team now hopes to change that by building low-cost, expendable sensing platforms consisting of co-located devices that can be left unattended for automated, persistent, near-real-time monitoring. 

    “The laboratory’s extensive expertise in rapid prototyping, seismo-acoustic signal processing, remote sensing, and oceanography makes us a natural fit to build this sensor network,” says Evans.

    In the months leading up to the Arctic trip, the team collected seismometer data at Firepond, part of the laboratory’s Haystack Observatory site in Westford, Massachusetts. Through this local data collection, they aimed to gain a sense of what anthropogenic (human-induced) noise would look like so they could begin to anticipate the kinds of signatures they might see in the Arctic. They also collected ice melting/fracturing data during a thaw cycle and correlated these data with the weather conditions (air temperature, humidity, and pressure). Through this analysis, they detected an increase in seismic signals as the temperature rose above 32 F — an indication that air temperature and ice cracking may be related.

    A sensing network

    At ICEX, the team deployed various commercial off-the-shelf sensors and new sensors developed by the laboratory and University of New Hampshire (UNH) to assess their resiliency in the frigid environment and to collect an initial dataset.

    “One aspect that differentiates these experiments from those of the past is that we concurrently collected seismo-acoustic data and environmental parameters,” says Evans.

    The commercial technologies were seismometers to detect the vibrational energy released when sea ice fractures or collides with other ice floes; a hydrophone (underwater microphone) array to record the acoustic energy created by ice-fracturing events; a sound speed profiler to measure the speed of sound through the water column; and a conductivity, temperature, and depth (CTD) profiler to measure the salinity (related to conductivity), temperature, and pressure (related to depth) throughout the water column. The speed of sound in the ocean primarily depends on these three quantities. 
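    To see how those three quantities set the sound speed, here is a minimal sketch using Medwin’s simple empirical approximation; the example values are illustrative, and the laboratory’s actual processing is not described in the article and may use more accurate formulations.

        # Sound speed in seawater from temperature, salinity, and depth, using Medwin's
        # simple empirical approximation (roughly valid for 0-35 C, 0-45 ppt, 0-1000 m;
        # the first example extrapolates slightly below 0 C). Example values are
        # illustrative only.

        def sound_speed_medwin(T_c, S_ppt, z_m):
            """Approximate sound speed (m/s) given temperature (C), salinity (ppt), depth (m)."""
            return (1449.2 + 4.6 * T_c - 0.055 * T_c**2 + 0.00029 * T_c**3
                    + (1.34 - 0.010 * T_c) * (S_ppt - 35.0) + 0.016 * z_m)

        print(sound_speed_medwin(-1.8, 32.0, 5.0))   # near-freezing, fresher surface water: ~1437 m/s
        print(sound_speed_medwin(0.5, 34.5, 100.0))  # warmer, saltier water at depth: ~1452 m/s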

    To precisely measure the temperature across the entire water column at one location, they deployed an array of transistor-based temperature sensors developed by the laboratory’s Advanced Materials and Microsystems Group in collaboration with the Advanced Functional Fabrics of America Manufacturing Innovation Institute. The small temperature sensors run along the length of a thread-like polymer fiber embedded with multiple conductors. This fiber platform, which can support a broad range of sensors, can be unspooled hundreds of feet below the water’s surface to concurrently measure temperature or other water properties — the fiber deployed in the Arctic also contained accelerometers to measure depth — at many points in the water column. Traditionally, temperature profiling has required moving a device up and down through the water column.

    The team also deployed a high-frequency echosounder supplied by Anthony Lyons and Larry Mayer, collaborators at UNH’s Center for Coastal and Ocean Mapping. This active sonar uses acoustic energy to detect internal waves, or waves occurring beneath the ocean’s surface.

    “You may think of the ocean as a homogenous body of water, but it’s not,” Evans explains. “Different currents can exist as you go down in depth, much like how you can get different winds when you go up in altitude. The UNH echosounder allows us to see the different currents in the water column, as well as ice roughness when we turn the sensor to look upward.”

    “The reason we care about currents is that we believe they will tell us something about how warmer water from the Atlantic Ocean is coming into contact with sea ice,” adds Whelihan. “Not only is that water melting ice but it also has lower salt content, resulting in oceanic layers and affecting how long ice lasts and where it lasts.”

    Back home, the team has begun analyzing their data. For the seismic data, this analysis involves distinguishing any ice events from various sources of anthropogenic noise, including generators, snowmobiles, footsteps, and aircraft. Similarly, the researchers know their hydrophone array acoustic data are contaminated by energy from a sound source that another research team participating in ICEX placed in the water. Based on their physics, icequakes — the seismic events that occur when ice cracks — have characteristic signatures that can be used to identify them. One approach is to manually find an icequake and use that signature as a guide for finding other icequakes in the dataset.
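    That template-style approach can be sketched as a normalized cross-correlation search, as below. The synthetic waveform, sample rate, and detection threshold are all assumptions for illustration; this is not the laboratory’s actual detection pipeline.

        import numpy as np

        # Sketch of the template-matching idea mentioned above: take one hand-picked
        # icequake waveform and slide it along the continuous record, flagging windows
        # whose normalized cross-correlation exceeds a threshold. Everything here is
        # synthetic.

        rng = np.random.default_rng(0)
        fs = 100                                          # sample rate, Hz (assumed)
        record = 0.1 * rng.standard_normal(fs * 600)      # 10 minutes of background noise

        t = np.arange(0, 1, 1 / fs)
        template = np.sin(2 * np.pi * 8 * t) * np.exp(-5 * t)   # 1-s decaying 8 Hz "icequake"
        for onset_s in (120, 305, 471):                   # bury three copies in the noise
            record[onset_s * fs: onset_s * fs + template.size] += template

        def normalized_xcorr(trace, templ):
            """Normalized cross-correlation of templ against every window of trace."""
            n = templ.size
            tt = (templ - templ.mean()) / templ.std()
            out = np.empty(trace.size - n + 1)
            for i in range(out.size):
                w = trace[i:i + n]
                out[i] = np.dot((w - w.mean()) / (w.std() + 1e-12), tt) / n
            return out

        cc = normalized_xcorr(record, template)
        hits = np.flatnonzero(cc > 0.6)                   # 0.6 threshold is an assumption
        onsets = hits[np.insert(np.diff(hits) > fs, 0, True)] / fs   # collapse nearby samples
        print("candidate icequake onsets (s):", onsets)   # should recover ~120, 305, 471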

    From their water column profiling sensors, they identified an interesting evolution in the sound speed profile 30 to 40 meters below the ocean surface, related to a mass of colder water moving in later in the day. The group’s physical oceanographer believes this change in the profile is due to water coming up from the Bering Sea, water that initially comes from the Atlantic Ocean. The UNH-supplied echosounder also generated an interesting signal at a similar depth.

    “Our supposition is that this result has something to do with the large sound speed variation we detected, either directly because of reflections off that layer or because of plankton, which tend to rise on top of that layer,” explains Evans.  

    A future predictive capability

    Going forward, the team will continue mining their collected data and use these data to begin building algorithms capable of automatically detecting and localizing — and ultimately predicting — ice events correlated with changes in environmental conditions. To complement their experimental data, they have initiated conversations with organizations that model the physical behavior of sea ice, including the National Oceanic and Atmospheric Administration and the National Ice Center. Merging the laboratory’s expertise in sensor design and signal processing with their expertise in ice physics would provide a more complete understanding of how the Arctic is changing.

    The laboratory team will also start exploring cost-effective engineering approaches for integrating the sensors into packages hardened for deployment in the harsh environment of the Arctic.

    “Until these sensors are truly unattended, the human factor of usability is front and center,” says Whelihan. “Because it’s so cold, equipment can break accidentally. For example, at ICEX 2022, our waterproof enclosure for the seismometers survived, but the enclosure for its power supply, which was made out of a cheaper plastic, shattered in my hand when I went to pick it up.”

    The sensor packages will not only need to withstand the frigid environment but also be able to “phone home” over some sort of satellite data link and sustain their power. The team plans to investigate whether waste heat from processing can keep the instruments warm and how energy could be harvested from the Arctic environment.

    Before the next ICEX scheduled for 2024, they hope to perform preliminary testing of their sensor packages and concepts in Arctic-like environments. While attending ICEX 2022, they engaged with several other attendees — including the U.S. Navy, Arctic Submarine Laboratory, National Ice Center, and University of Alaska Fairbanks (UAF) — and identified cold room experimentation as one area of potential collaboration. Testing can also be performed at outdoor locations a bit closer to home and more easily accessible, such as the Great Lakes in Michigan and a UAF-maintained site in Barrow, Alaska. In the future, the laboratory team may have an opportunity to accompany U.S. Coast Guard personnel on ice-breaking vessels traveling from Alaska to Greenland. The team is also thinking about possible venues for collecting data far removed from human noise sources.

    “Since I’ve told colleagues, friends, and family I was going to the Arctic, I’ve had a lot of interesting conversations about climate change and what we’re doing there and why we’re doing it,” Whelihan says. “People don’t have an intrinsic, automatic understanding of this environment and its impact because it’s so far removed from us. But the Arctic plays a crucial role in helping to keep the global climate in balance, so it’s imperative we understand the processes leading to sea ice fractures.”

    This work is funded through Lincoln Laboratory’s internally administered R&D portfolio on climate.