More stories

  • In a surprising finding, light can make water evaporate without heat

    Evaporation is happening all around us all the time, from the sweat cooling our bodies to the dew burning off in the morning sun. But science’s understanding of this ubiquitous process may have been missing a piece all this time.

    In recent years, some researchers have been puzzled upon finding that water in their experiments, which was held in a sponge-like material known as a hydrogel, was evaporating at a higher rate than could be explained by the amount of heat, or thermal energy, that the water was receiving. And the excess has been significant — a doubling, or even a tripling or more, of the theoretical maximum rate.

    After carrying out a series of new experiments and simulations, and reexamining some of the results from various groups that claimed to have exceeded the thermal limit, a team of researchers at MIT has reached a startling conclusion: Under certain conditions, at the interface where water meets air, light can directly bring about evaporation without the need for heat, and it actually does so even more efficiently than heat. In these experiments, the water was held in a hydrogel material, but the researchers suggest that the phenomenon may occur under other conditions as well.

    The findings are published this week in a paper in PNAS, by MIT postdoc Yaodong Tu, professor of mechanical engineering Gang Chen, and four others.

    The phenomenon might play a role in the formation and evolution of fog and clouds, and thus would be important to incorporate into climate models to improve their accuracy, the researchers say. It might also play an important part in industrial processes such as solar-powered desalination of water, perhaps eliminating the step of first converting sunlight to heat.

    The new findings come as a surprise because water itself does not absorb light to any significant degree. That’s why you can see clearly through many feet of clean water to the surface below. So, when the team initially began exploring the process of solar evaporation for desalination, they first put particles of a black, light-absorbing material in a container of water to help convert the sunlight to heat.

    Then, the team came across the work of another group that had achieved an evaporation rate double the thermal limit — which is the highest possible amount of evaporation that can take place for a given input of heat, based on basic physical principles such as the conservation of energy. It was in these experiments that the water was bound up in a hydrogel. Although they were initially skeptical, Chen and Tu started their own experiments with hydrogels, including a piece of the material from the other group. “We tested it under our solar simulator, and it worked,” confirming the unusually high evaporation rate, Chen says. “So, we believed them now.” Chen and Tu then began making and testing their own hydrogels.
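
    The thermal limit the groups were comparing against follows from simple energy bookkeeping: if every joule of incoming heat went into vaporizing water, the evaporation rate could be at most the heat flux divided by water's latent heat of vaporization. A back-of-the-envelope sketch, assuming one-sun illumination of roughly 1,000 watts per square meter and a latent heat of about 2.45 megajoules per kilogram (illustrative round numbers, not the team's measurements):

```python
# Thermal limit on evaporation: rate <= heat flux / latent heat.
SOLAR_FLUX = 1000.0    # W/m^2, roughly one sun (assumed value)
LATENT_HEAT = 2.45e6   # J/kg, latent heat of vaporization of water

def thermal_limit_kg_per_m2_hr(flux_w_per_m2: float) -> float:
    """Maximum evaporation rate for a given heat input, in kg/m^2/hr."""
    return flux_w_per_m2 / LATENT_HEAT * 3600.0

limit = thermal_limit_kg_per_m2_hr(SOLAR_FLUX)
print(f"Thermal limit under one sun: {limit:.2f} kg/m^2/hr")  # ~1.47

# A hydrogel evaporating 3.0 kg/m^2/hr under the same flux would
# exceed this ceiling roughly twofold -- the kind of excess the
# puzzled researchers were reporting.
print(f"Excess factor: {3.0 / limit:.1f}x")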

    They began to suspect that the excess evaporation was being caused by the light itself — that photons of light were actually knocking bundles of water molecules loose from the water’s surface. This effect would only take place right at the boundary layer between water and air, at the surface of the hydrogel material — and perhaps also on the sea surface or the surfaces of droplets in clouds or fog.

    In the lab, they monitored the surface of a hydrogel, a JELL-O-like matrix consisting mostly of water bound by a sponge-like lattice of thin membranes. They measured its responses to simulated sunlight with precisely controlled wavelengths.

    The researchers subjected the water surface to different colors of light in sequence and measured the evaporation rate. They did this by placing a container of water-laden hydrogel on a scale and directly measuring the amount of mass lost to evaporation, as well as monitoring the temperature above the hydrogel surface. The lights were shielded to prevent them from introducing extra heat. The researchers found that the effect varied with color and peaked at a particular wavelength of green light. Such a color dependence has no relation to heat, and so supports the idea that it is the light itself that is causing at least some of the evaporation.
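
    Extracting an evaporation rate from scale readings is itself simple bookkeeping: mass lost over a timed interval, normalized by the exposed surface area. A minimal sketch with made-up numbers (the sample size and readings below are illustrative, not the team's data):

```python
def evaporation_rate(mass_start_g: float, mass_end_g: float,
                     area_cm2: float, hours: float) -> float:
    """Evaporation rate in kg per square meter per hour."""
    mass_lost_kg = (mass_start_g - mass_end_g) / 1000.0
    area_m2 = area_cm2 / 1e4
    return mass_lost_kg / area_m2 / hours

# e.g. a 4 cm x 4 cm hydrogel sample losing 0.8 g in 30 minutes:
rate = evaporation_rate(52.0, 51.2, 16.0, 0.5)
print(f"{rate:.1f} kg/m^2/hr")
```

    Repeating this measurement under each illumination wavelength, with the temperature logged to rule out heating, is what reveals the color dependence described above.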

    The puffs of white condensation on glass are water being evaporated from a hydrogel using green light, without heat. Image: Courtesy of the researchers

    The researchers tried to duplicate the observed evaporation rate with the same setup, but using electricity to heat the material and no light. Even though the thermal input was the same as in the other test, the amount of water that evaporated never exceeded the thermal limit. With the simulated sunlight on, however, the evaporation rate did exceed the limit, confirming that light was the cause of the extra evaporation.

    Though water itself does not absorb much light, and neither does the hydrogel material itself, when the two combine they become strong absorbers, Chen says. That allows the material to harness the energy of the solar photons efficiently and exceed the thermal limit, without the need for any dark dyes for absorption.

    Having discovered this effect, which they have dubbed the photomolecular effect, the researchers are now working on how to apply it to real-world needs. They have a grant from the Abdul Latif Jameel Water and Food Systems Lab to study the use of this phenomenon to improve the efficiency of solar-powered desalination systems, and a Bose Grant to explore the phenomenon’s effects on climate change modeling.

    Tu explains that in standard desalination processes, “it normally has two steps: First we evaporate the water into vapor, and then we need to condense the vapor to liquify it into fresh water.” With this discovery, he says, potentially “we can achieve high efficiency on the evaporation side.” The process also could turn out to have applications in processes that require drying a material.

    Chen says that in principle, he thinks it may be possible to increase the limit of water produced by solar desalination, which is currently 1.5 kilograms per square meter, by as much as three- or fourfold using this light-based approach. “This could potentially really lead to cheap desalination,” he says.

    Tu adds that this phenomenon could potentially also be leveraged in evaporative cooling processes, using the phase change to provide a highly efficient solar cooling system.

    Meanwhile, the researchers are also working closely with other groups who are attempting to replicate the findings, hoping to overcome skepticism that has faced the unexpected findings and the hypothesis being advanced to explain them.

    The research team also included Jiawei Zhou, Shaoting Lin, Mohammed Alshrah, and Xuanhe Zhao, all in MIT’s Department of Mechanical Engineering.

  • Engineers develop an efficient process to make fuel from carbon dioxide

    The search is on worldwide to find ways to extract carbon dioxide from the air or from power plant exhaust and then make it into something useful. One of the more promising ideas is to make it into a stable fuel that can replace fossil fuels in some applications. But most such conversion processes have had problems with low carbon efficiency, or they produce fuels that can be hard to handle, toxic, or flammable.

    Now, researchers at MIT and Harvard University have developed an efficient process that can convert carbon dioxide into formate, a liquid or solid material that can be used like hydrogen or methanol to power a fuel cell and generate electricity. Potassium or sodium formate, already produced at industrial scales and commonly used as a de-icer for roads and sidewalks, is nontoxic, nonflammable, easy to store and transport, and can remain stable in ordinary steel tanks to be used months, or even years, after its production.

    The new process, developed by MIT doctoral students Zhen Zhang, Zhichu Ren, and Alexander H. Quinn; Harvard University doctoral student Dawei Xi; and MIT Professor Ju Li, is described this week in an open-access paper in Cell Reports Physical Science. The whole process — including capture and electrochemical conversion of the gas to a solid formate powder, which is then used in a fuel cell to produce electricity — was demonstrated at a small, laboratory scale. However, the researchers expect it to be scalable so that it could provide emissions-free heat and power to individual homes and even be used in industrial or grid-scale applications.

    Other approaches to converting carbon dioxide into fuel, Li explains, usually involve a two-stage process: First the gas is chemically captured and turned into a solid form, calcium carbonate; then that material is heated to drive off the carbon dioxide and convert it to a fuel feedstock such as carbon monoxide. That second step has very low efficiency, typically converting less than 20 percent of the gaseous carbon dioxide into the desired product, Li says.

    By contrast, the new process achieves a conversion of well over 90 percent and eliminates the need for the inefficient heating step by first converting the carbon dioxide into an intermediate form, liquid metal bicarbonate. That liquid is then electrochemically converted into liquid potassium or sodium formate in an electrolyzer that uses low-carbon electricity, such as nuclear, wind, or solar power. The highly concentrated liquid potassium or sodium formate solution produced can then be dried, for example by solar evaporation, to produce a solid powder that is highly stable and can be stored in ordinary steel tanks for years or even decades, Li says.

    Several steps of optimization developed by the team made all the difference in changing an inefficient chemical-conversion process into a practical solution, says Li, who holds joint appointments in the departments of Nuclear Science and Engineering and of Materials Science and Engineering.

    The process of carbon capture and conversion involves first an alkaline solution-based capture that concentrates carbon dioxide, either from concentrated streams such as from power plant emissions or from very low-concentration sources, even open air, into the form of a liquid metal-bicarbonate solution. Then, through the use of a cation-exchange membrane electrolyzer, this bicarbonate is electrochemically converted into solid formate crystals with a carbon efficiency of greater than 96 percent, as confirmed in the team’s lab-scale experiments.

    These crystals have an indefinite shelf life, remaining so stable that they could be stored for years, or even decades, with little or no loss. By comparison, even the best available practical hydrogen storage tanks allow the gas to leak out at a rate of about 1 percent per day, precluding any uses that would require year-long storage, Li says. Methanol, another widely explored alternative for converting carbon dioxide into a fuel usable in fuel cells, is a toxic substance that cannot easily be adapted to use in situations where leakage could pose a health hazard. Formate, on the other hand, is widely used and considered benign, according to national safety standards.
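
    The contrast with hydrogen storage comes down to compound decay: a tank losing about 1 percent of its contents per day retains only a few percent after a year. A quick illustration of that geometric falloff (the 1-percent-per-day figure is the article's; the rest is arithmetic):

```python
# Why ~1% per day of hydrogen leakage rules out seasonal storage:
# the remaining fraction decays geometrically with time.
LEAK_PER_DAY = 0.01  # leak rate for the best practical H2 tanks

def fraction_remaining(days: int, leak: float = LEAK_PER_DAY) -> float:
    """Fraction of stored gas left after `days` of steady leakage."""
    return (1.0 - leak) ** days

print(f"After 30 days:  {fraction_remaining(30):.0%}")   # ~74%
print(f"After 365 days: {fraction_remaining(365):.1%}")  # ~2.6%
```

    Formate's indefinite shelf life sidesteps this decay entirely, which is what makes year-scale or decade-scale storage plausible.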

    Several improvements account for the greatly improved efficiency of this process. First, a careful design of the membrane materials and their configuration overcomes a problem that previous attempts at such a system have encountered, where a buildup of certain chemical byproducts changes the pH, causing the system to steadily lose efficiency over time. “Traditionally, it is difficult to achieve long-term, stable, continuous conversion of the feedstocks,” Zhang says. “The key to our system is to achieve a pH balance for steady-state conversion.”

    To achieve that, the researchers carried out thermodynamic modeling to design the new process so that it is chemically balanced and the pH remains at a steady state with no shift in acidity over time. It can therefore continue operating efficiently over long periods. In their tests, the system ran for over 200 hours with no significant decrease in output. The whole process can be done at ambient temperatures and relatively low pressures (about five times atmospheric pressure).

    Another issue was that unwanted side reactions produced chemical products that were not useful. The team prevented these side reactions by introducing an extra “buffer” layer of bicarbonate-enriched fiberglass wool that blocked them.

    The team also built a fuel cell specifically optimized for the use of this formate fuel to produce electricity. The stored formate particles are simply dissolved in water and pumped into the fuel cell as needed. Although the solid fuel is much heavier than pure hydrogen, when the weight and volume of the high-pressure gas tanks needed to store hydrogen is considered, the end result is an electricity output near parity for a given storage volume, Li says.

    The formate fuel can potentially be adapted for anything from home-sized units to large scale industrial uses or grid-scale storage systems, the researchers say. Initial household applications might involve an electrolyzer unit about the size of a refrigerator to capture and convert the carbon dioxide into formate, which could be stored in an underground or rooftop tank. Then, when needed, the powdered solid would be mixed with water and fed into a fuel cell to provide power and heat. “This is for community or household demonstrations,” Zhang says, “but we believe that also in the future it may be good for factories or the grid.”

    “The formate economy is an intriguing concept because metal formate salts are very benign and stable, and a compelling energy carrier,” says Ted Sargent, a professor of chemistry and of electrical and computer engineering at Northwestern University, who was not associated with this work. “The authors have demonstrated enhanced efficiency in liquid-to-liquid conversion from bicarbonate feedstock to formate, and have demonstrated these fuels can be used later to produce electricity,” he says.

    The work was supported by the U.S. Department of Energy Office of Science.

  • MIT design would harness 40 percent of the sun’s heat to produce clean hydrogen fuel

    MIT engineers aim to produce totally green, carbon-free hydrogen fuel with a new, train-like system of reactors that is driven solely by the sun.

    In a study appearing today in Solar Energy Journal, the engineers lay out the conceptual design for a system that can efficiently produce “solar thermochemical hydrogen.” The system harnesses the sun’s heat to directly split water and generate hydrogen — a clean fuel that can power long-distance trucks, ships, and planes while emitting no greenhouse gases.

    Today, hydrogen is largely produced through processes that involve natural gas and other fossil fuels, making the otherwise green fuel more of a “grey” energy source when considered from the start of its production to its end use. In contrast, solar thermochemical hydrogen, or STCH, offers a totally emissions-free alternative, as it relies entirely on renewable solar energy to drive hydrogen production. But existing STCH designs have limited efficiency: only about 7 percent of incoming sunlight is used to make hydrogen, so the results so far have been low-yield and high-cost.

    In a big step toward realizing solar-made fuels, the MIT team estimates its new design could harness up to 40 percent of the sun’s heat to generate that much more hydrogen. The increase in efficiency could drive down the system’s overall cost, making STCH a potentially scalable, affordable option to help decarbonize the transportation industry.
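
    To get a feel for what a jump from 7 to 40 percent means in fuel terms, one can divide the solar energy collected per square meter by hydrogen's heating value. A rough sketch, assuming a one-sun flux of 1,000 W/m², eight collection hours per day, and hydrogen's higher heating value of about 142 MJ/kg (all three are illustrative assumptions, not figures from the study):

```python
# Rough sense of scale for solar-to-hydrogen conversion efficiency.
SOLAR_FLUX = 1000.0   # W/m^2, one sun (assumed)
HOURS = 8.0           # collection hours per day (assumed)
HHV_H2 = 142e6        # J/kg, higher heating value of hydrogen

def h2_per_m2_day(efficiency: float) -> float:
    """Hydrogen produced per square meter per day, in kg."""
    energy_in = SOLAR_FLUX * HOURS * 3600.0   # J collected per m^2
    return efficiency * energy_in / HHV_H2

print(f"At  7% efficiency: {h2_per_m2_day(0.07)*1000:.0f} g/m^2/day")
print(f"At 40% efficiency: {h2_per_m2_day(0.40)*1000:.0f} g/m^2/day")
```

    Under these assumed numbers, the efficiency gain alone multiplies the hydrogen yield per unit of collector area nearly sixfold.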

    “We’re thinking of hydrogen as the fuel of the future, and there’s a need to generate it cheaply and at scale,” says the study’s lead author, Ahmed Ghoniem, the Ronald C. Crane Professor of Mechanical Engineering at MIT. “We’re trying to achieve the Department of Energy’s goal, which is to make green hydrogen by 2030, at $1 per kilogram. To improve the economics, we have to improve the efficiency and make sure most of the solar energy we collect is used in the production of hydrogen.”

    Ghoniem’s study co-authors are Aniket Patankar, first author and MIT postdoc; Harry Tuller, MIT professor of materials science and engineering; Xiao-Yu Wu of the University of Waterloo; and Wonjae Choi at Ewha Womans University in South Korea.

    Solar stations

    Similar to other proposed designs, the MIT system would be paired with an existing source of solar heat, such as a concentrated solar power (CSP) plant — a circular array of hundreds of mirrors that collect and reflect sunlight to a central receiving tower. An STCH system then absorbs the receiver’s heat and directs it to split water and produce hydrogen. This process is very different from electrolysis, which uses electricity instead of heat to split water.

    At the heart of a conceptual STCH system is a two-step thermochemical reaction. In the first step, water in the form of steam is exposed to a metal. This causes the metal to grab oxygen from steam, leaving hydrogen behind. This metal “oxidation” is similar to the rusting of iron in the presence of water, but it occurs much faster. Once hydrogen is separated, the oxidized (or rusted) metal is reheated in a vacuum, which acts to reverse the rusting process and regenerate the metal. With the oxygen removed, the metal can be cooled and exposed to steam again to produce more hydrogen. This process can be repeated hundreds of times.

    The MIT system is designed to optimize this process. The system as a whole resembles a train of box-shaped reactors running on a circular track. In practice, this track would be set around a solar thermal source, such as a CSP tower. Each reactor in the train would house the metal that undergoes the redox, or reversible rusting, process.

    Each reactor would first pass through a hot station, where it would be exposed to the sun’s heat at temperatures of up to 1,500 degrees Celsius. This extreme heat would effectively pull oxygen out of a reactor’s metal. That metal would then be in a “reduced” state — ready to grab oxygen from steam. For this to happen, the reactor would move to a cooler station at temperatures around 1,000 C, where it would be exposed to steam to produce hydrogen.

    Rust and rails

    Other similar STCH concepts have run up against a common obstacle: what to do with the heat released by the reduced reactor as it is cooled. Without recovering and reusing this heat, the system’s efficiency is too low to be practical.

    A second challenge has to do with creating an energy-efficient vacuum where the metal can de-rust. Some prototypes generate a vacuum using mechanical pumps, but these pumps are too energy-intensive and costly for large-scale hydrogen production.

    To address these challenges, the MIT design incorporates several energy-saving workarounds. To recover most of the heat that would otherwise escape from the system, reactors on opposite sides of the circular track are allowed to exchange heat through thermal radiation; hot reactors get cooled while cool reactors get heated. This keeps the heat within the system. The researchers also added a second set of reactors that would circle around the first train, moving in the opposite direction. This outer train of reactors would operate at generally cooler temperatures and would be used to evacuate oxygen from the hotter inner train, without the need for energy-consuming mechanical pumps.

    These outer reactors would carry a second type of metal that can also easily oxidize. As they circle around, the outer reactors would absorb oxygen from the inner reactors, effectively de-rusting the original metal without having to use energy-intensive vacuum pumps. Both reactor trains would run continuously and would generate separate streams of pure hydrogen and oxygen.

    The researchers carried out detailed simulations of the conceptual design, and found that it would significantly boost the efficiency of solar thermochemical hydrogen production, from 7 percent, as previous designs have demonstrated, to 40 percent.

    “We have to think of every bit of energy in the system, and how to use it, to minimize the cost,” Ghoniem says. “And with this design, we found that everything can be powered by heat coming from the sun. It is able to use 40 percent of the sun’s heat to produce hydrogen.”

    “If this can be realized, it could drastically change our energy future — namely, enabling hydrogen production, 24/7,” says Christopher Muhich, an assistant professor of chemical engineering at Arizona State University, who was not involved in the research. “The ability to make hydrogen is the linchpin to producing liquid fuels from sunlight.”

    In the next year, the team will be building a prototype of the system that they plan to test in concentrated solar power facilities at laboratories of the Department of Energy, which is currently funding the project.

    “When fully implemented, this system would be housed in a little building in the middle of a solar field,” Patankar explains. “Inside the building, there could be one or more trains each having about 50 reactors. And we think this could be a modular system, where you can add reactors to a conveyor belt, to scale up hydrogen production.”

    This work was supported by the Centers for Mechanical Engineering Research and Education at MIT and SUSTech.

  • Desalination system could produce freshwater that is cheaper than tap water

    Engineers at MIT and in China are aiming to turn seawater into drinking water with a completely passive device that is inspired by the ocean, and powered by the sun.

    In a paper appearing today in the journal Joule, the team outlines the design for a new solar desalination system that takes in saltwater and heats it with natural sunlight.

    The configuration of the device allows water to circulate in swirling eddies, in a manner similar to the much larger “thermohaline” circulation of the ocean. This circulation, combined with the sun’s heat, drives water to evaporate, leaving salt behind. The resulting water vapor can then be condensed and collected as pure, drinkable water. In the meantime, the leftover salt continues to circulate through and out of the device, rather than accumulating and clogging the system.

    The new system has a higher water-production rate and a higher salt-rejection rate than all other passive solar desalination concepts currently being tested.

    The researchers estimate that if the system is scaled up to the size of a small suitcase, it could produce about 4 to 6 liters of drinking water per hour and last several years before requiring replacement parts. At this scale and performance, the system could produce drinking water more cheaply than tap water.

    “For the first time, it is possible for water, produced by sunlight, to be even cheaper than tap water,” says Lenan Zhang, a research scientist in MIT’s Device Research Laboratory.

    The team envisions that a scaled-up device could passively produce enough drinking water to meet the daily requirements of a small family. The system could also supply off-grid coastal communities where seawater is easily accessible.

    Zhang’s study co-authors include MIT graduate student Yang Zhong and Evelyn Wang, the Ford Professor of Engineering, along with Jintong Gao, Jinfang You, Zhanyu Ye, Ruzhu Wang, and Zhenyuan Xu of Shanghai Jiao Tong University in China.

    A powerful convection

    The team’s new system improves on their previous design — a similar concept of multiple layers, called stages. Each stage contained an evaporator and a condenser that used heat from the sun to passively separate salt from incoming water. That design, which the team tested on the roof of an MIT building, efficiently converted the sun’s energy to evaporate water, which was then condensed into drinkable water. But the salt that was left over quickly accumulated as crystals that clogged the system after a few days. In a real-world setting, a user would have to replace stages on a frequent basis, which would significantly increase the system’s overall cost.

    In a follow-up effort, they devised a solution with a similar layered configuration, this time with an added feature that helped to circulate the incoming water as well as any leftover salt. While this design prevented salt from settling and accumulating on the device, it desalinated water at a relatively low rate.

    In the latest iteration, the team believes it has landed on a design that achieves both a high water-production rate and high salt rejection, meaning that the system can quickly and reliably produce drinking water for an extended period. The key to their new design is a combination of their two previous concepts: a multistage system of evaporators and condensers that is also configured to boost the circulation of water — and salt — within each stage.

    “We introduce now an even more powerful convection, that is similar to what we typically see in the ocean, at kilometer-long scales,” Xu says.

    The small circulations generated in the team’s new system are similar to the “thermohaline” convection in the ocean — a phenomenon that drives the movement of water around the world, based on differences in sea temperature (“thermo”) and salinity (“haline”).

    “When seawater is exposed to air, sunlight drives water to evaporate. Once water leaves the surface, salt remains. And the higher the salt concentration, the denser the liquid, and this heavier water wants to flow downward,” Zhang explains. “By mimicking this kilometer-wide phenomenon in a small box, we can take advantage of this feature to reject salt.”

    Tapping out

    The heart of the team’s new design is a single stage that resembles a thin box, topped with a dark material that efficiently absorbs the heat of the sun. Inside, the box is separated into a top and bottom section. Water can flow through the top half, where the ceiling is lined with an evaporator layer that uses the sun’s heat to warm up and evaporate any water in direct contact. The water vapor is then funneled to the bottom half of the box, where a condensing layer air-cools the vapor into salt-free, drinkable liquid. The researchers set the entire box at a tilt within a larger, empty vessel, then attached a tube from the top half of the box down through the bottom of the vessel, and floated the vessel in saltwater.

    In this configuration, water can naturally push up through the tube and into the box, where the tilt of the box, combined with the thermal energy from the sun, induces the water to swirl as it flows through. The small eddies help to bring water in contact with the upper evaporating layer while keeping salt circulating, rather than settling and clogging.

    The team built several prototypes, with one, three, and 10 stages, and tested their performance in water of varying salinity, including natural seawater and water that was seven times saltier.

    From these tests, the researchers calculated that if each stage were scaled up to a square meter, it would produce up to 5 liters of drinking water per hour, and that the system could desalinate water without accumulating salt for several years. Given this extended lifetime, and the fact that the system is entirely passive, requiring no electricity to run, the team estimates that the overall cost of running the system would be cheaper than what it costs to produce tap water in the United States.
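
    The multistage gain can be pictured with a crude latent-heat-recycling model: each stage reuses a fraction of the heat released when the stage above it condenses vapor, so total output is the single-stage thermal limit times a geometric series. The one-sun flux and the 70 percent recovery fraction below are illustrative assumptions, not the team's parameters:

```python
# Crude model of multistage solar evaporation with latent-heat reuse.
SOLAR_FLUX = 1000.0    # W/m^2, one sun (assumed)
LATENT_HEAT = 2.45e6   # J/kg, latent heat of vaporization of water

def production_kg_per_m2_hr(stages: int, recovery: float = 0.7) -> float:
    """Total evaporation across stages: the single-stage thermal limit
    times a geometric gain from recycling condensation heat."""
    gain = sum(recovery ** n for n in range(stages))
    return SOLAR_FLUX / LATENT_HEAT * 3600.0 * gain

for n in (1, 3, 10):
    print(f"{n:2d} stage(s): {production_kg_per_m2_hr(n):.1f} kg/m^2/hr")
```

    With these assumed numbers, ten stages land near 5 kg/m²/hr — the same order as the team's square-meter estimate — while a single stage stays pinned at the roughly 1.5 kg/m²/hr thermal limit.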

    “We show that this device is capable of achieving a long lifetime,” Zhong says. “That means that, for the first time, it is possible for drinking water produced by sunlight to be cheaper than tap water. This opens up the possibility for solar desalination to address real-world problems.”

    “This is a very innovative approach that effectively mitigates key challenges in the field of desalination,” says Guihua Yu, who develops sustainable water and energy storage systems at the University of Texas at Austin, and was not involved in the research. “The design is particularly beneficial for regions struggling with high-salinity water. Its modular design makes it highly suitable for household water production, allowing for scalability and adaptability to meet individual needs.”

    The research at Shanghai Jiao Tong University was supported by the Natural Science Foundation of China.

  • Improving US air quality, equitably

    Decarbonization of national economies will be key to achieving global net-zero emissions by 2050, a major stepping stone to the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius (and ideally 1.5 C), and thereby averting the worst consequences of climate change. Toward that end, the United States has pledged to reduce its greenhouse gas emissions by 50-52 percent from 2005 levels by 2030, backed by its implementation of the 2022 Inflation Reduction Act. This strategy is consistent with a 50-percent reduction in carbon dioxide (CO2) by the end of the decade.

    If U.S. federal carbon policy is successful, the nation’s overall air quality will also improve. Cutting CO2 emissions reduces atmospheric concentrations of air pollutants that lead to the formation of fine particulate matter (PM2.5), which causes more than 200,000 premature deaths in the United States each year. But an average nationwide improvement in air quality will not be felt equally; air pollution exposure disproportionately harms people of color and lower-income populations.

    How effective are current federal decarbonization policies in reducing U.S. racial and economic disparities in PM2.5 exposure, and what changes will be needed to improve their performance? To answer that question, researchers at MIT and Stanford University recently evaluated a range of policies which, like current U.S. federal carbon policies, reduce economy-wide CO2 emissions by 40-60 percent from 2005 levels by 2030. Their findings appear in an open-access article in the journal Nature Communications.

    First, they show that a carbon-pricing policy, while effective in reducing PM2.5 exposure for all racial/ethnic groups, does not significantly mitigate relative disparities in exposure. On average, the white population undergoes far less exposure than Black, Hispanic, and Asian populations. This policy does little to reduce exposure disparities because the CO2 emissions reductions that it achieves primarily occur in the coal-fired electricity sector. Other sectors, such as industry and heavy-duty diesel transportation, contribute far more PM2.5-related emissions.

    The researchers then examine thousands of different reduction options through an optimization approach to identify whether any possible combination of carbon dioxide reductions in the range of 40-60 percent can mitigate disparities. They find that no policy scenario aligned with current U.S. carbon dioxide emissions targets is likely to significantly reduce current PM2.5 exposure disparities.

    “Policies that address only about 50 percent of CO2 emissions leave many polluting sources in place, and those that prioritize reductions for minorities tend to benefit the entire population,” says Noelle Selin, supervising author of the study and a professor at MIT’s Institute for Data, Systems and Society and Department of Earth, Atmospheric and Planetary Sciences. “This means that a large range of policies that reduce CO2 can improve air quality overall, but can’t address long-standing inequities in air pollution exposure.”

    So if climate policy alone cannot adequately achieve equitable air quality results, what viable options remain? The researchers suggest that more ambitious carbon policies could narrow racial and economic PM2.5 exposure disparities in the long term, but not within the next decade. To make a near-term difference, they recommend interventions designed to reduce PM2.5 emissions resulting from non-CO2 sources, ideally at the economic sector or community level.

    “Achieving improved PM2.5 exposure for populations that are disproportionately exposed across the United States will require thinking that goes beyond current CO2 policy strategies, most likely involving large-scale structural changes,” says Selin. “This could involve changes in local and regional transportation and housing planning, together with accelerated efforts towards decarbonization.”

  • Ancient Amazonians intentionally created fertile “dark earth”

    The Amazon river basin is known for its immense and lush tropical forests, so one might assume that the Amazon’s land is equally rich. In fact, the soils underlying the forested vegetation, particularly in the hilly uplands, are surprisingly infertile. Much of the Amazon’s soil is acidic and low in nutrients, making it notoriously difficult to farm.

    But over the years, archaeologists have unearthed mysteriously dark, fertile patches of ancient soil at hundreds of sites across the Amazon. This “dark earth” has been found in and around human settlements dating back hundreds to thousands of years. And it has been a matter of some debate whether the super-rich soil was purposefully created or a coincidental byproduct of these ancient cultures.

    Now, a study led by researchers at MIT and the University of Florida, together with collaborators in Brazil, aims to settle the debate over dark earth’s origins. The team has pieced together results from soil analyses, ethnographic observations, and interviews with modern Indigenous communities to show that dark earth was intentionally produced by ancient Amazonians as a way to improve the soil and sustain large and complex societies.

    “If you want to have large settlements, you need a nutritional base. But the soil in the Amazon is extensively leached of nutrients, and naturally poor for growing most crops,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “We argue here that people played a role in creating dark earth, and intentionally modified the ancient environment to make it a better place for human populations.”

    And as it turns out, dark earth contains huge amounts of stored carbon. As generations worked the soil, for instance by enriching it with scraps of food, charcoal, and waste, the earth accumulated the carbon-rich detritus and kept it locked up for hundreds to thousands of years. By purposely producing dark earth, then, early Amazonians may have also unintentionally created a powerful, carbon-sequestering soil.

    “The ancient Amazonians put a lot of carbon in the soil, and a lot of that is still there today,” says co-author Samuel Goldberg, who performed the data analysis as a graduate student at MIT and is now an assistant professor at the University of Miami. “That’s exactly what we want for climate change mitigation efforts. Maybe we could adapt some of their Indigenous strategies on a larger scale, to lock up carbon in soil, in ways that we now know would stay there for a long time.”

    The team’s study appears today in Science Advances. Other authors include former MIT postdoc and lead author Morgan Schmidt, anthropologist Michael Heckenberger of the University of Florida, and collaborators from multiple institutions across Brazil.

    Modern intent

    In their current study, the team synthesized observations and data that Schmidt, Heckenberger, and others had previously gathered while working with Indigenous communities in the Amazon since the early 2000s, along with new data collected in 2018-19. The scientists focused their fieldwork in the Kuikuro Indigenous Territory in the Upper Xingu River basin in the southeastern Amazon. This region is home to modern Kuikuro villages as well as archaeological sites where the ancestors of the Kuikuro are thought to have lived. Over multiple visits to the region, Schmidt, then a graduate student at the University of Florida, was struck by the darker soil around some archaeological sites.

    “When I saw this dark earth and how fertile it was, and started digging into what was known about it, I found it was a mysterious thing — no one really knew where it came from,” he says.

    Schmidt and his colleagues began making observations of the modern Kuikuro’s practices of managing the soil. These practices include generating “middens” — piles of waste and food scraps, similar to compost heaps, that are maintained in certain locations around the center of a village. After some time, these waste piles decompose and mix with the soil to form a dark and fertile earth that residents then use to plant crops. The researchers also observed that Kuikuro farmers spread organic waste and ash on outlying fields, which also generates dark earth where they can then grow more crops.

    “We saw activities they did to modify the soil and increase the elements, like spreading ash on the ground, or spreading charcoal around the base of the tree, which were obviously intentional actions,” Schmidt says.

    In addition to these observations, they also conducted interviews with villagers to document the Kuikuro’s beliefs and practices relating to dark earth. In some of these interviews, villagers referred to dark earth as “eegepe,” and described their daily practices in creating and cultivating the rich soil to improve its agricultural potential.

    Based on these observations and interviews with the Kuikuro, it was clear that Indigenous communities today intentionally produce dark earth, through their practices to improve the soil. But could the dark earth found in nearby archaeological sites have been made through similar intentional practices?

    A bridge in soil

    In search of a connection, Schmidt joined Perron’s group as a postdoc at MIT. Together, he, Perron, and Goldberg carried out a meticulous analysis of soils in both archaeological and modern sites in the Upper Xingu region. They discovered similarities in dark earth’s spatial structure: Deposits of dark earth were found in a radial pattern, concentrating mostly in the center of both modern and ancient settlements, and stretching, like spokes of a wheel, out to the edges. Modern and ancient dark earth was also similar in composition, and was enriched in the same elements, such as carbon, phosphorus, and other nutrients.

    “These are all the elements that are in humans, animals, and plants, and they’re the ones that reduce the aluminum toxicity in soil, which is a notorious problem in the Amazon,” Schmidt says. “All these elements make the soil better for plant growth.”

    “The key bridge between the modern and ancient times is the soil,” Goldberg adds. “Because we see this correspondence between the two time periods, we can infer that these practices that we can observe and ask people about today, were also happening in the past.”

    In other words, the team was able to show for the first time that ancient Amazonians intentionally worked the soil, likely through practices similar to today’s, in order to grow enough crops to sustain large communities.

    Going a step further, the team calculated the amount of carbon in ancient dark earth. They combined their measurements of soil samples with maps of where dark earth has been found across several ancient settlements. Their estimates revealed that each ancient village contains several thousand tons of carbon that has been sequestered in the soil for hundreds of years as a result of Indigenous human activities.
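    As a rough guide to how such a stock estimate is assembled, the carbon mass scales as area × depth × soil bulk density × carbon mass fraction. A sketch with placeholder values — none of these numbers are the study's measurements:

```python
# Order-of-magnitude sketch of a per-village soil-carbon estimate.
# All input values below are hypothetical placeholders.

def soil_carbon_tons(area_ha: float, depth_m: float,
                     bulk_density_t_per_m3: float,
                     carbon_fraction: float) -> float:
    """Carbon stock = soil volume x bulk density x carbon mass fraction."""
    volume_m3 = area_ha * 10_000 * depth_m  # 1 ha = 10,000 m^2
    soil_mass_t = volume_m3 * bulk_density_t_per_m3
    return soil_mass_t * carbon_fraction

# e.g. 20 ha of dark earth, 0.5 m deep, bulk density 1.2 t/m^3,
# 3% carbon by mass -> a few thousand tons of carbon per village
print(soil_carbon_tons(area_ha=20, depth_m=0.5,
                       bulk_density_t_per_m3=1.2, carbon_fraction=0.03))
```

    With these illustrative inputs the estimate lands in the low thousands of tons, the same order of magnitude the study reports per village.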

    As the team concludes in their paper, “modern sustainable agriculture and climate change mitigation efforts, inspired by the persistent fertility of ancient dark earth, can draw on traditional methods practiced to this day by Indigenous Amazonians.”

    This research at MIT was supported, in part, by the Abdul Latif Jameel Water and Food Systems Lab and the Department of the Air Force Artificial Intelligence Accelerator. Field research was supported by grants to the University of Florida from the National Science Foundation, the Wenner-Gren Foundation and the William Talbott Hillman Foundation, and was sponsored in Brazil by the Museu Goeldi and Museu Nacional.

  • How to tackle the global deforestation crisis

    Imagine if France, Germany, and Spain were completely blanketed in forests — and then all those trees were quickly chopped down. That’s nearly the amount of deforestation that occurred globally between 2001 and 2020, with profound consequences.

    Deforestation is a major contributor to climate change, producing between 6 and 17 percent of global greenhouse gas emissions, according to a 2009 study. Meanwhile, because trees also absorb carbon dioxide, removing it from the atmosphere, they help keep the Earth cooler. And climate change aside, forests protect biodiversity.

    “Climate change and biodiversity make this a global problem, not a local problem,” says MIT economist Ben Olken. “Deciding to cut down trees or not has huge implications for the world.”

    But deforestation is often financially profitable, so it continues at a rapid rate. Researchers can now measure this trend closely: In the last quarter-century, satellite-based technology has led to a paradigm change in charting deforestation. New deforestation datasets based on the Landsat satellites, for instance, track forest change since 2000 at 30-meter resolution, while many other products now offer frequent imaging at fine resolution.
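    At 30-meter resolution, each Landsat pixel covers 900 square meters, so pixel counts of detected forest loss convert directly to area. A minimal sketch — the conversion is generic, and the pixel counts below are made up for illustration:

```python
# Converting Landsat-style "forest loss" pixel counts to area.
# The 30 m resolution comes from the article; the counts are examples.

PIXEL_SIZE_M = 30  # Landsat pixel edge length, in meters

def pixels_to_hectares(n_pixels: int, pixel_size_m: float = PIXEL_SIZE_M) -> float:
    """Convert a count of loss pixels to hectares (1 ha = 10,000 m^2)."""
    return n_pixels * pixel_size_m ** 2 / 10_000

# One 30 m x 30 m pixel covers 0.09 ha, so ~11 pixels make a hectare.
print(pixels_to_hectares(1))          # 0.09
print(pixels_to_hectares(1_000_000))  # 90000.0 ha for a million loss pixels
```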

    “Part of this revolution in measurement is accuracy, and the other part is coverage,” says Clare Balboni, an assistant professor of economics at the London School of Economics (LSE). “On-site observation is very expensive and logistically challenging, and you’re talking about case studies. These satellite-based data sets just open up opportunities to see deforestation at scale, systematically, across the globe.”

    Balboni and Olken have now helped write a new paper providing a road map for thinking about this crisis. The open-access article, “The Economics of Tropical Deforestation,” appears this month in the Annual Review of Economics. The co-authors are Balboni, a former MIT faculty member; Aaron Berman, a PhD candidate in MIT’s Department of Economics; Robin Burgess, an LSE professor; and Olken, MIT’s Jane Berkowitz Carlton and Dennis William Carlton Professor of Microeconomics. Balboni and Olken have also conducted primary research in this area, along with Burgess.

    So, how can the world tackle deforestation? It starts with understanding the problem.

    Replacing forests with farms

    Several decades ago, some thinkers, including the famous MIT economist Paul Samuelson in the 1970s, built models to study forests as a renewable resource; Samuelson calculated the “maximum sustained yield” at which a forest could be cleared while being regrown. These frameworks were designed to think about tree farms or the U.S. national forest system, where a fraction of trees would be cut each year, and then new trees would be grown over time to take their place.

    But deforestation today, particularly in tropical areas, often looks very different, and forest regeneration is not common.

    Indeed, as Balboni and Olken emphasize, deforestation is now rampant partly because the profits from chopping down trees come not just from timber, but from replacing forests with agriculture. In Brazil, deforestation has increased along with agricultural prices; in Indonesia, clearing trees accelerated as the global price of palm oil went up, leading companies to replace forests with oil palm plantations.

    All this tree-clearing creates a familiar situation: The globally shared costs of climate change from deforestation are “externalities,” as economists say, imposed on everyone else by the people removing forest land. It is akin to a company that pollutes a river, degrading the water quality for residents downstream.

    “Economics has changed the way it thinks about this over the last 50 years, and two things are central,” Olken says. “The relevance of global externalities is very important, and the conceptualization of alternate land uses is very important.” This also means traditional forest-management guidance about regrowth is not enough. With the economic dynamics in mind, which policies might work, and why?

    The search for solutions

    As Balboni and Olken note, economists often recommend “Pigouvian” taxes (named after the British economist Arthur Pigou) in these cases, levied against people imposing externalities on others. And yet, it can be hard to identify who is doing the deforesting.

    Instead of taxing people for clearing forests, governments can pay people to keep forests intact. The UN uses Payments for Environmental Services (PES) as part of its REDD+ (Reducing Emissions from Deforestation and forest Degradation) program. However, it is similarly tough to identify the optimal landowners to subsidize, and these payments may not match the quick cash-in of deforestation. A 2017 study in Uganda showed PES reduced deforestation somewhat; a 2022 study in Indonesia found no reduction; another 2022 study, in Brazil, showed again that some forest protection resulted.

    “There’s mixed evidence from many of these [studies],” Balboni says. These policies, she notes, must reach people who would otherwise clear forests, and a key question is, “How can we assess their success compared to what would have happened anyway?”
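    The targeting problem Balboni describes is often called "additionality": a flat payment compensates every enrolled landowner, but it only changes the behavior of those who would otherwise have cleared. A toy calculation, with entirely hypothetical numbers, makes the cost-effectiveness issue concrete:

```python
# Toy illustration of the additionality problem in payments for
# environmental services. All profit and payment figures are invented.

def pes_outcomes(clearing_profits, payment):
    """Return (hectares saved, total cost) for a flat per-hectare payment.

    clearing_profits: profit each owner would earn by clearing one hectare.
    Owners with profit <= 0 would never clear (the payment buys nothing),
    and owners with profit above the payment clear anyway.
    """
    saved = sum(1 for p in clearing_profits if 0 < p <= payment)
    cost = payment * len(clearing_profits)  # every enrolled owner is paid
    return saved, cost

owners = [-10, -5, 20, 40, 90, 150]  # two of six would never clear anyway
saved, cost = pes_outcomes(owners, payment=50)
print(saved, cost)  # 2 hectares saved, at a total cost of 300
```

    The payment deters only the two owners whose clearing profit falls below it, while all six collect — which is why measuring what "would have happened anyway" is central to evaluating these programs.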

    Some places have tried cash transfer programs for larger populations. In Indonesia, a 2020 study found such subsidies reduced deforestation near villages by 30 percent. But in Mexico, a similar program meant more people could afford milk and meat, again creating demand for more agriculture and thus leading to more forest-clearing.

    At this point, it might seem that laws simply banning deforestation in key areas would work best — indeed, about 16 percent of the world’s land overall is protected in some way. Yet the dynamics of protection are tricky. Even with protected areas in place, there is still “leakage” of deforestation into other regions. 

    Still more approaches exist, including “nonstate agreements,” such as the Amazon Soy Moratorium in Brazil, in which grain traders pledged not to buy soy from deforested lands, and reduced deforestation without “leakage.”

    Also, intriguingly, a 2008 policy change in the Brazilian Amazon made agricultural credit harder to obtain by requiring recipients to comply with environmental and land registration rules. The result? Deforestation dropped by up to 60 percent over nearly a decade. 

    Politics and pulp

    Overall, Balboni and Olken observe, beyond “externalities,” two major challenges exist. One, it is often unclear who holds property rights in forests. In these circumstances, deforestation seems to increase. Two, deforestation is subject to political battles.

    For instance, as economist Bard Harstad of Stanford University has observed, environmental lobbying is asymmetric. Balboni and Olken write: “The conservationist lobby must pay the government in perpetuity … while the deforestation-oriented lobby need pay only once to deforest in the present.” And political instability leads to more deforestation because “the current administration places lower value on future conservation payments.”
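    That asymmetry can be read as simple discounting arithmetic: a perpetual stream of conservation payments has present value c/r at discount rate r, and political instability acts like a higher effective r. A sketch with illustrative numbers, not taken from Harstad's model:

```python
# Back-of-the-envelope reading of the lobbying asymmetry: conserving
# requires paying c every year forever, deforesting pays off once.
# The payment and discount-rate figures are illustrative only.

def pv_conservation(c: float, r: float) -> float:
    """Present value of paying c per year in perpetuity at discount rate r."""
    return c / r

# With a stable government (r = 5%), a 10-per-year conservation stream
# is worth 200 today; under instability the effective discount rate
# rises (say to 20%) and the same stream is worth only 50 -- so a
# one-time deforestation payoff wins more easily.
print(pv_conservation(10, 0.05))  # 200.0
print(pv_conservation(10, 0.20))  # 50.0
```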

    Even so, national political measures can work. In the Amazon from 2001 to 2005, Brazilian deforestation rates were three to four times higher than on similar land across the border, but that imbalance vanished once the country passed conservation measures in 2006. However, deforestation ramped up again after a 2014 change in government. Looking at particular monitoring approaches, a study of Brazil’s satellite-based Real-Time System for Detection of Deforestation (DETER), launched in 2004, suggests that a 50 percent annual increase in its use in municipalities created a 25 percent reduction in deforestation from 2006 to 2016.

    How precisely politics matters may depend on the context. In a 2021 paper, Balboni and Olken (with three colleagues) found that deforestation actually decreased around elections in Indonesia. Conversely, in Brazil, one study found that deforestation rates were 8 to 10 percent higher where mayors were running for re-election between 2002 and 2012, suggesting incumbents had deforestation industry support.

    “The research there is aiming to understand what the political economy drivers are,” Olken says, “with the idea that if you understand those things, reform in those countries is more likely.”

    Looking ahead, Balboni and Olken also suggest that new research estimating the value of keeping forest land intact could influence public debates. And while many scholars have studied deforestation in Brazil and Indonesia, fewer have examined the Democratic Republic of Congo, another deforestation leader, and sub-Saharan Africa.

    Deforestation is an ongoing crisis. But thanks to satellites and many recent studies, experts know vastly more about the problem than they did a decade or two ago, and with an economics toolkit, can evaluate the incentives and dynamics at play.

    “To the extent that there’s ambiguity across different contexts with different findings, part of the point of our review piece is to draw out common themes — the important considerations in determining which policy levers can [work] in different circumstances,” Balboni says. “That’s a fast-evolving area. We don’t have all the answers, but part of the process is bringing together growing evidence about [everything] that affects how successful those choices can be.”

  • Pixel-by-pixel analysis yields insights into lithium-ion batteries

    By mining data from X-ray images, researchers at MIT, Stanford University, SLAC National Accelerator Laboratory, and the Toyota Research Institute have made significant new discoveries about the reactivity of lithium iron phosphate, a material used in batteries for electric cars and in other rechargeable batteries.

    The new technique has revealed several phenomena that were previously impossible to see, including variations in the rate of lithium intercalation reactions in different regions of a lithium iron phosphate nanoparticle.

    The paper’s most significant practical finding — that these variations in reaction rate are correlated with differences in the thickness of the carbon coating on the surface of the particles — could lead to improvements in the efficiency of charging and discharging such batteries.

    “What we learned from this study is that it’s the interfaces that really control the dynamics of the battery, especially in today’s modern batteries made from nanoparticles of the active material. That means that our focus should really be on engineering that interface,” says Martin Bazant, the E.G. Roos Professor of Chemical Engineering and a professor of mathematics at MIT, who is the senior author of the study.

    This approach to discovering the physics behind complex patterns in images could also be used to gain insights into many other materials, not only other types of batteries but also biological systems, such as dividing cells in a developing embryo.

    “What I find most exciting about this work is the ability to take images of a system that’s undergoing the formation of some pattern, and learning the principles that govern that,” Bazant says.

    Hongbo Zhao PhD ’21, a former MIT graduate student who is now a postdoc at Princeton University, is the lead author of the new study, which appears today in Nature. Other authors include Richard Braatz, the Edwin R. Gilliland Professor of Chemical Engineering at MIT; William Chueh, an associate professor of materials science and engineering at Stanford and director of the SLAC-Stanford Battery Center; and Brian Storey, senior director of Energy and Materials at the Toyota Research Institute.

    “Until now, we could make these beautiful X-ray movies of battery nanoparticles at work, but it was challenging to measure and understand subtle details of how they function because the movies were so information-rich,” Chueh says. “By applying image learning to these nanoscale movies, we can extract insights that were not previously possible.”

    Modeling reaction rates

    Lithium iron phosphate battery electrodes are made of many tiny particles of lithium iron phosphate, surrounded by an electrolyte solution. A typical particle is about 1 micron in diameter and about 100 nanometers thick. When the battery discharges, lithium ions flow from the electrolyte solution into the material by an electrochemical reaction known as ion intercalation. When the battery charges, the intercalation reaction is reversed, and ions flow in the opposite direction.

    “Lithium iron phosphate (LFP) is an important battery material due to low cost, a good safety record, and its use of abundant elements,” Storey says. “We are seeing an increased use of LFP in the EV market, so the timing of this study could not be better.”

    Before the current study, Bazant had done a great deal of theoretical modeling of patterns formed by lithium-ion intercalation. Lithium iron phosphate prefers to exist in one of two stable phases: either full of lithium ions or empty. Since 2005, Bazant has been working on mathematical models of this phenomenon, known as phase separation, which generates distinctive patterns of lithium-ion flow driven by intercalation reactions. In 2015, while on sabbatical at Stanford, he began working with Chueh to try to interpret images of lithium iron phosphate particles from scanning transmission X-ray microscopy.

    Using this type of microscopy, the researchers can obtain images that reveal the concentration of lithium ions, pixel-by-pixel, at every point in the particle. They can scan the particles several times as the particles charge or discharge, allowing them to create movies of how lithium ions flow in and out of the particles.

    In 2017, Bazant and his colleagues at SLAC received funding from the Toyota Research Institute to pursue further studies using this approach, along with other battery-related research projects.

    By analyzing X-ray images of 63 lithium iron phosphate particles as they charged and discharged, the researchers found that the movement of lithium ions within the material closely matched the computer simulations that Bazant had created earlier. Using all 180,000 pixels as measurements, the researchers trained the computational model to produce equations that accurately describe the nonequilibrium thermodynamics and reaction kinetics of the battery material.
    In each pair of images, the actual particles are on the left and the simulations on the right. (Image courtesy of the researchers.)

    “Every little pixel in there is jumping from full to empty, full to empty. And we’re mapping that whole process, using our equations to understand how that’s happening,” Bazant says.

    The researchers also found that the patterns of lithium-ion flow that they observed could reveal spatial variations in the rate at which lithium ions are absorbed at each location on the particle surface.

    “It was a real surprise to us that we could learn the heterogeneities in the system — in this case, the variations in surface reaction rate — simply by looking at the images,” Bazant says. “There are regions that seem to be fast and others that seem to be slow.”
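    A toy version of this "rates from images" idea — far simpler than the authors' actual approach, which fits full reaction-kinetics models to the movies — assumes each pixel's lithium concentration fills as c(t) = 1 - exp(-k t) and inverts that law to recover a per-pixel rate k from a later frame:

```python
# Toy per-pixel rate recovery (not the authors' method). Each pixel is
# assumed to relax exponentially toward full lithiation; given one later
# frame, the rate constant k follows from inverting c(t) = 1 - exp(-k t).
import math

def recover_rate(c_t: float, t: float) -> float:
    """Invert c(t) = 1 - exp(-k t) to get k, given concentration at time t."""
    return -math.log(1.0 - c_t) / t

# Simulate a "fast" and a "slow" pixel, then recover their rates.
true_rates = {"fast": 2.0, "slow": 0.5}
t = 1.0
frames = {name: 1.0 - math.exp(-k * t) for name, k in true_rates.items()}
estimates = {name: recover_rate(c, t) for name, c in frames.items()}
print(estimates)  # ~{'fast': 2.0, 'slow': 0.5}
```

    Mapping such recovered rates across a particle surface gives a picture of exactly the kind of spatial heterogeneity — fast regions and slow regions — that the study inferred from the images.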

    Furthermore, the researchers showed that these differences in reaction rate were correlated with the thickness of the carbon coating on the surface of the lithium iron phosphate particles. That carbon coating is applied to lithium iron phosphate to help it conduct electricity — otherwise the material would conduct too slowly to be useful as a battery.

    “We discovered at the nano scale that variation of the carbon coating thickness directly controls the rate, which is something you could never figure out if you didn’t have all of this modeling and image analysis,” Bazant says.

    The findings also offer quantitative support for a hypothesis Bazant formulated several years ago: that the performance of lithium iron phosphate electrodes is limited primarily by the rate of coupled ion-electron transfer at the interface between the solid particle and the carbon coating, rather than the rate of lithium-ion diffusion in the solid.

    Optimized materials

    The results from this study suggest that optimizing the thickness of the carbon layer on the electrode surface could help researchers to design batteries that would work more efficiently, the researchers say.

    “This is the first study that’s been able to directly attribute a property of the battery material with a physical property of the coating,” Bazant says. “The focus for optimizing and designing batteries should be on controlling reaction kinetics at the interface of the electrolyte and electrode.”

    “This publication is the culmination of six years of dedication and collaboration,” Storey says. “This technique allows us to unlock the inner workings of the battery in a way not previously possible. Our next goal is to improve battery design by applying this new understanding.”  

    In addition to using this type of analysis on other battery materials, Bazant anticipates that it could be useful for studying pattern formation in other chemical and biological systems.

    This work was supported by the Toyota Research Institute through the Accelerated Materials Design and Discovery program.