More stories

  • Engineers find a new way to convert carbon dioxide into useful products

    MIT chemical engineers have devised an efficient way to convert carbon dioxide to carbon monoxide, a chemical precursor that can be used to generate useful compounds such as ethanol and other fuels.

    If scaled up for industrial use, this process could help to remove carbon dioxide from power plants and other sources, reducing the amount of greenhouse gases that are released into the atmosphere.

    “This would allow you to take carbon dioxide from emissions or dissolved in the ocean, and convert it into profitable chemicals. It’s really a path forward for decarbonization because we can take CO2, which is a greenhouse gas, and turn it into things that are useful for chemical manufacture,” says Ariel Furst, the Paul M. Cook Career Development Assistant Professor of Chemical Engineering and the senior author of the study.

    The new approach uses electricity to perform the chemical conversion, with help from a catalyst that is tethered to the electrode surface by strands of DNA. This DNA acts like Velcro to keep all the reaction components in close proximity, making the reaction much more efficient than if all the components were floating in solution.

    Furst has started a company called Helix Carbon to further develop the technology. Former MIT postdoc Gang Fan is the lead author of the paper, which appears in the Journal of the American Chemical Society Au. Other authors include Nathan Corbin PhD ’21, Minju Chung PhD ’23, former MIT postdocs Thomas Gill and Amruta Karbelkar, and Evan Moore ’23.

    Breaking down CO2

    Converting carbon dioxide into useful products requires first turning it into carbon monoxide. One way to do this is with electricity, but the amount of energy required makes that type of electrolysis prohibitively expensive.

    To try to bring down those costs, researchers have tried using electrocatalysts, which can speed up the reaction and reduce the amount of energy that needs to be added to the system. One type of catalyst used for this reaction is a class of molecules known as porphyrins, which contain metals such as iron or cobalt and are similar in structure to the heme molecules that carry oxygen in blood. 

    During this type of electrochemical reaction, carbon dioxide is dissolved in water within an electrochemical device, which contains an electrode that drives the reaction. The catalysts are also suspended in the solution. However, this setup isn’t very efficient because the carbon dioxide and the catalysts need to encounter each other at the electrode surface, which doesn’t happen very often.

    To make the reaction occur more frequently, which would boost the efficiency of the electrochemical conversion, Furst began working on ways to attach the catalysts to the surface of the electrode. DNA seemed to be the ideal choice for this application.

    “DNA is relatively inexpensive, you can modify it chemically, and you can control the interaction between two strands by changing the sequences,” she says. “It’s like a sequence-specific Velcro that has very strong but reversible interactions that you can control.”

    To attach single strands of DNA to a carbon electrode, the researchers used two “chemical handles,” one on the DNA and one on the electrode. These handles can be snapped together, forming a permanent bond. A complementary DNA sequence is then attached to the porphyrin catalyst, so that when the catalyst is added to the solution, it will bind reversibly to the DNA that’s already attached to the electrode — just like Velcro.
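
    For readers unfamiliar with the term, “complementary” here means the catalyst-bound strand is the reverse complement of the electrode-bound strand, so the two hybridize spontaneously. The snippet below illustrates the idea with a generic example sequence; it is not one of the sequences used in the study.

        # What "complementary" means for the two DNA strands (generic example,
        # not the sequences from the study): each base pairs with its partner
        # (A-T, G-C), and the strands run antiparallel, so the partner strand
        # is the reverse complement.
        PAIR = {"A": "T", "T": "A", "G": "C", "C": "G"}

        def reverse_complement(seq):
            """Return the strand that hybridizes to `seq` (read 5' to 3')."""
            return "".join(PAIR[base] for base in reversed(seq.upper()))

        electrode_strand = "ATGCGTTACG"   # hypothetical anchor strand
        catalyst_strand = reverse_complement(electrode_strand)
        print(catalyst_strand)            # -> CGTAACGCAT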

    Once this system is set up, the researchers apply a potential (or bias) to the electrode, and the catalyst uses this energy to convert carbon dioxide in the solution into carbon monoxide. The reaction also generates a small amount of hydrogen gas, from the water. After the catalysts wear out, they can be released from the surface by heating the system to break the reversible bonds between the two DNA strands, and replaced with new ones.

    An efficient reaction

    Using this approach, the researchers were able to boost the Faradaic efficiency of the reaction to 100 percent, meaning that essentially all of the charge passed through the system drives the intended chemical reaction, with none lost to side reactions. When the catalysts are not tethered by DNA, the Faradaic efficiency is only about 40 percent.
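
    To see what that figure means in practice, Faradaic efficiency compares the charge accounted for by the desired product with the total charge passed through the cell. The short sketch below is an illustrative calculation with invented numbers, not code or data from the study.

        # Illustrative Faradaic-efficiency calculation (invented numbers, not
        # data from the study): the fraction of total charge passed that ends
        # up in the desired product, here CO from CO2 reduction.
        F = 96485.0           # Faraday constant, coulombs per mole of electrons
        ELECTRONS_PER_CO = 2  # CO2 + 2 e- + 2 H+ -> CO + H2O

        def faradaic_efficiency(moles_co, total_charge_coulombs):
            """Charge used to make CO divided by total charge passed."""
            charge_to_co = moles_co * ELECTRONS_PER_CO * F
            return charge_to_co / total_charge_coulombs

        # Example: 5.0e-5 mol of CO after passing 10 C of charge gives
        # 5.0e-5 * 2 * 96485 / 10 ≈ 0.96, i.e. about 96 percent.
        print(faradaic_efficiency(5.0e-5, 10.0))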

    This technology could be scaled up for industrial use fairly easily, Furst says, because the carbon electrodes the researchers used are much less expensive than conventional metal electrodes. The catalysts are also inexpensive, as they don’t contain any precious metals, and only a small concentration of the catalyst is needed on the electrode surface.

    By swapping in different catalysts, the researchers plan to try making other products such as methanol and ethanol using this approach. Helix Carbon, the company started by Furst, is also working on further developing the technology for potential commercial use.

    The research was funded by the U.S. Army Research Office, the CIFAR Azrieli Global Scholars Program, the MIT Energy Initiative, and the MIT Deshpande Center.

  • MIT-derived algorithm helps forecast the frequency of extreme weather

    To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston.

    To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

    “If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

    Sapsis and his colleagues have now developed a method to “correct” the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “nudges” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme.

    This animation shows the evolution of storms around the northern hemisphere, as a result of a high-resolution storm model, combined with the MIT team’s corrected global climate model. The simulation improves the modeling of extreme values for wind, temperature, and humidity, which typically have significant errors in coarse scale models. Credit: Courtesy of Ruby Leung and Shixuan Zhang, PNNL

    Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

    “Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” Sapsis says. “If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

    The team’s results appear today in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

    Over the hood

    Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100 kilometers or so.

    “It’s a very heavy computation requiring supercomputers,” Sapsis notes. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometer or less.”

    To improve the resolution of these coarse climate models, scientists typically have gone under the hood to try to fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

    “People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” Sapsis explains. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

    The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation toward something that more closely represents real-world conditions. The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.
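
    One rough way to picture this kind of output correction is the toy sketch below. It is not the team’s algorithm: a linear least-squares map and synthetic data stand in for the machine-learning scheme and the reanalysis data described in the article.

        # Toy output-correction ("nudging") sketch, not the team's method:
        # learn a map from coarse-model states to observed states, then apply
        # it to a new, uncorrected simulation. All data here are synthetic.
        import numpy as np

        rng = np.random.default_rng(0)
        n_samples, n_features = 500, 64   # e.g., snapshots x coarse grid cells

        # "Training" pairs: coarse-model output vs. matching observations.
        model_states = rng.normal(size=(n_samples, n_features))
        true_map = np.eye(n_features) + 0.1 * rng.normal(size=(n_features, n_features))
        observed_states = model_states @ true_map + 0.01 * rng.normal(size=(n_samples, n_features))

        # Fit the correction operator by least squares (a stand-in for the ML model).
        correction, *_ = np.linalg.lstsq(model_states, observed_states, rcond=None)

        # Apply the learned correction to a new, uncorrected simulation snapshot.
        new_simulation = rng.normal(size=(1, n_features))
        truth = new_simulation @ true_map
        print("error before correction:", float(np.sqrt(np.mean((truth - new_simulation) ** 2))))
        print("error after correction: ", float(np.sqrt(np.mean((truth - new_simulation @ correction) ** 2))))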

    “What we’re doing is trying to correct dynamics, as in how an extreme weather feature, such as the windspeeds during a Hurricane Sandy event, will look like in the coarse model, versus in reality,” Sapsis says. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”

    Climate correction

    As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the U.S. Department of Energy that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learned dynamical associations between the measured weather features and the E3SM model. They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which were not used for training.

    “We’re not talking about huge differences in absolute terms,” Sapsis says. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

    When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

    “We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” Sapsis says. “Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analyzing future climate scenarios.”

    “The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study. “It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

    This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.

  • Artificial reef designed by MIT engineers could protect marine life, reduce storm damage

    The beautiful, gnarled, nooked-and-crannied reefs that surround tropical islands serve as a marine refuge and natural buffer against stormy seas. But as the effects of climate change bleach and break down coral reefs around the world, and extreme weather events become more common, coastal communities are left increasingly vulnerable to frequent flooding and erosion.

    An MIT team is now hoping to fortify coastlines with “architected” reefs — sustainable, offshore structures engineered to mimic the wave-buffering effects of natural reefs while also providing pockets for fish and other marine life.

    The team’s reef design centers on a cylindrical structure surrounded by four rudder-like slats. The engineers found that when this structure stands up against a wave, it efficiently breaks the wave into turbulent jets that ultimately dissipate most of the wave’s total energy. The team has calculated that the new design could dissipate as much wave energy as existing artificial reefs while using 10 times less material.

    The researchers plan to fabricate each cylindrical structure from sustainable cement, which they would mold in a pattern of “voxels” that could be automatically assembled, and would provide pockets for fish to explore and other marine life to settle in. The cylinders could be connected to form a long, semipermeable wall, which the engineers could erect along a coastline, about half a mile from shore. Based on the team’s initial experiments with lab-scale prototypes, the architected reef could reduce the energy of incoming waves by more than 95 percent.

    “This would be like a long wave-breaker,” says Michael Triantafyllou, the Henry L. and Grace Doherty Professor in Ocean Science and Engineering in the Department of Mechanical Engineering. “If waves are 6 meters high coming toward this reef structure, they would be ultimately less than a meter high on the other side. So, this kills the impact of the waves, which could prevent erosion and flooding.”
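
    As a back-of-the-envelope check (standard linear wave theory, not a calculation from the paper), the energy a wave carries scales with the square of its height, so the transmitted height falls with the square root of the remaining energy:

        E \propto H^{2} \quad\Longrightarrow\quad \frac{H_{\text{out}}}{H_{\text{in}}} = \sqrt{\frac{E_{\text{out}}}{E_{\text{in}}}}

    Dissipating 95 percent of the energy leaves about 22 percent of the height, so a 6-meter wave becomes roughly 1.3 meters; knocking it down below 1 meter, as in the quote above, corresponds to removing roughly 97 percent or more of the incoming energy.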

    Details of the architected reef design are reported today in a study appearing in the open-access journal PNAS Nexus. Triantafyllou’s MIT co-authors are Edvard Ronglan SM ’23; graduate students Alfonso Parra Rubio, Jose del Aguila Ferrandis, and Erik Strand; research scientists Patricia Maria Stathatou and Carolina Bastidas; and Professor Neil Gershenfeld, director of the Center for Bits and Atoms; along with Alexis Oliveira Da Silva at the Polytechnic Institute of Paris, Dixia Fan of Westlake University, and Jeffrey Gair Jr. of Scinetics, Inc.

    Leveraging turbulence

    Some regions have already erected artificial reefs to protect their coastlines from encroaching storms. These structures are typically sunken ships, retired oil and gas platforms, and even assembled configurations of concrete, metal, tires, and stones. However, there’s variability in the types of artificial reefs that are currently in place, and no standard for engineering such structures. What’s more, the designs that are deployed tend to have a low wave dissipation per unit volume of material used. That is, it takes a huge amount of material to break enough wave energy to adequately protect coastal communities.

    The MIT team instead looked for ways to engineer an artificial reef that would efficiently dissipate wave energy with less material, while also providing a refuge for fish living along any vulnerable coast.

    “Remember, natural coral reefs are only found in tropical waters,” says Triantafyllou, who is director of the MIT Sea Grant. “We cannot have these reefs, for instance, in Massachusetts. But architected reefs don’t depend on temperature, so they can be placed in any water, to protect more coastal areas.”

    MIT researchers test the wave-breaking performance of two artificial reef structures in the MIT Towing Tank. Credit: Courtesy of the researchers

    The new effort is the result of a collaboration between researchers in MIT Sea Grant, who developed the reef structure’s hydrodynamic design, and researchers at the Center for Bits and Atoms (CBA), who worked to make the structure modular and easy to fabricate on location. The team’s architected reef design grew out of two seemingly unrelated problems. CBA researchers were developing ultralight cellular structures for the aerospace industry, while Sea Grant researchers were assessing the performance of blowout preventers in offshore oil structures — cylindrical valves that are used to seal off oil and gas wells and prevent them from leaking.

    The team’s tests showed that the structure’s cylindrical arrangement generated a high amount of drag. In other words, the structure appeared to be especially efficient in dissipating high-force flows of oil and gas. They wondered: Could the same arrangement dissipate another type of flow, in ocean waves?

    The researchers began to play with the general structure in simulations of water flow, tweaking its dimensions and adding certain elements to see whether and how waves changed as they crashed against each simulated design. This iterative process ultimately landed on an optimized geometry: a vertical cylinder flanked by four long slats, each attached to the cylinder in a way that leaves space for water to flow through the resulting structure. They found this setup essentially breaks up any incoming wave energy, causing parts of the wave-induced flow to spiral to the sides rather than crashing ahead.

    “We’re leveraging this turbulence and these powerful jets to ultimately dissipate wave energy,” Ferrandis says.

    Standing up to storms

    Once the researchers identified an optimal wave-dissipating structure, they fabricated a laboratory-scale version of an architected reef made from a series of the cylindrical structures, which they 3D-printed from plastic. Each test cylinder measured about 1 foot wide and 4 feet tall. They assembled a number of cylinders, each spaced about a foot apart, to form a fence-like structure, which they then lowered into a wave tank at MIT. They then generated waves of various heights and measured them before and after passing through the architected reef.

    “We saw the waves reduce substantially, as the reef destroyed their energy,” Triantafyllou says.

    The team has also looked into making the structures more porous, and friendly to fish. They found that, rather than making each structure from a solid slab of plastic, they could use a more affordable and sustainable type of cement.

    “We’ve worked with biologists to test the cement we intend to use, and it’s benign to fish, and ready to go,” he adds.

    They identified an ideal pattern of “voxels,” or microstructures, that cement could be molded into, in order to fabricate the reefs while creating pockets in which fish could live. This voxel geometry resembles individual egg cartons, stacked end to end, and appears to not affect the structure’s overall wave-dissipating power.

    “These voxels still maintain a big drag while allowing fish to move inside,” Ferrandis says.

    The team is currently fabricating cement voxel structures and assembling them into a lab-scale architected reef, which they will test under various wave conditions. They envision that the voxel design could be modular, and scalable to any desired size, and easy to transport and install in various offshore locations. “Now we’re simulating actual sea patterns, and testing how these models will perform when we eventually have to deploy them,” says Anjali Sinha, a graduate student at MIT who recently joined the group.

    Going forward, the team hopes to work with beach towns in Massachusetts to test the structures on a pilot scale.

    “These test structures would not be small,” Triantafyllou emphasizes. “They would be about a mile long, and about 5 meters tall, and would cost something like 6 million dollars per mile. So it’s not cheap. But it could prevent billions of dollars in storm damage. And with climate change, protecting the coasts will become a big issue.”

    This work was funded, in part, by the U.S. Defense Advanced Research Projects Agency.

  • A new way to quantify climate change impacts: “Outdoor days”

    For most people, reading about the difference between a global average temperature rise of 1.5 C versus 2 C doesn’t conjure up a clear image of how their daily lives will actually be affected. So, researchers at MIT have come up with a different way of measuring and describing what global climate change patterns, in specific regions around the world, will mean for people’s daily activities and their quality of life.

    The new measure, called “outdoor days,” describes the number of days per year that outdoor temperatures are neither too hot nor too cold for people to go about normal outdoor activities, whether work or leisure, in reasonable comfort. Describing the impact of rising temperatures in those terms reveals some significant global disparities, the researchers say.

    The findings are described in a research paper written by MIT professor of civil and environmental engineering Elfatih Eltahir and postdocs Yeon-Woo Choi and Muhammad Khalifa, and published in the Journal of Climate.

    Eltahir says he got the idea for this new system during his hourlong daily walks in the Boston area. “That’s how I interface with the temperature every day,” he says. He found that there have been more winter days recently when he could walk comfortably than in past years. Originally from Sudan, he says that when he returned there for visits, the opposite was the case: In winter, the weather tends to be relatively comfortable, but the number of these clement winter days has been declining. “There are fewer days that are really suitable for outdoor activity,” Eltahir says.

    Rather than predefine what constitutes an acceptable outdoor day, Eltahir and his co-authors created a website where users can set their own definition of the highest and lowest temperatures they consider comfortable for their outside activities, then click on a country within a world map, or a state within the U.S., and get a forecast of how the number of days meeting those criteria will change between now and the end of this century. The website is freely available for anyone to use.
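
    The underlying tally is simple in spirit: count the days in a year whose temperature falls inside the user’s comfort band. The sketch below is an illustrative stand-in, with invented thresholds and a synthetic temperature series, not the team’s code or data.

        # Illustrative "outdoor days" tally (invented thresholds, synthetic
        # temperatures): count the days whose temperature falls inside a
        # user-chosen comfort band.
        import numpy as np

        def count_outdoor_days(daily_temps_c, t_min=10.0, t_max=25.0):
            """Number of days whose temperature lies within [t_min, t_max] degrees C."""
            temps = np.asarray(daily_temps_c)
            return int(np.sum((temps >= t_min) & (temps <= t_max)))

        # Synthetic year of daily mean temperatures for a mid-latitude site.
        days = np.arange(365)
        daily_temps = 12.0 + 14.0 * np.sin(2 * np.pi * (days - 100) / 365)
        print(count_outdoor_days(daily_temps, t_min=10.0, t_max=25.0))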

    “This is actually a new feature that’s quite innovative,” he says. “We don’t tell people what an outdoor day should be; we let the user define an outdoor day. Hence, we invite them to participate in defining how future climate change will impact their quality of life, and hopefully, this will facilitate deeper understanding of how climate change will impact individuals directly.”

    After deciding that this was a way of looking at the issue of climate change that might be useful, Eltahir says, “we started looking at the data on this, and we made several discoveries that I think are pretty significant.”

    First of all, there will be winners and losers, and the losers tend to be concentrated in the global south. “In the North, in a place like Russia or Canada, you gain a significant number of outdoor days. And when you go south to places like Bangladesh or Sudan, it’s bad news. You get significantly fewer outdoor days. It is very striking.”

    To derive the data, the software developed by the team uses all of the available climate models, about 50 of them, and provides output showing all of those projections on a single graph to make clear the range of possibilities, as well as the average forecast.

    When we think of climate change, Eltahir says, we tend to look at maps that show that virtually everywhere, temperatures will rise. “But if you think in terms of outdoor days, you see that the world is not flat. The North is gaining; the South is losing.”

    While North-South disparity in exposure and vulnerability has been broadly recognized in the past, he says, this way of quantifying the effects on the hazard (change in weather patterns) helps to bring home how strong the uneven risks from climate change on quality of life will be. “When you look at places like Bangladesh, Colombia, Ivory Coast, Sudan, Indonesia — they are all losing outdoor days.”

    The same kind of disparity shows up in Europe, he says. The effects are already being felt, and are showing up in travel patterns: “There is a shift to people spending time in northern European states. They go to Sweden and places like that instead of the Mediterranean, which is showing a significant drop,” he says.

    Placing this kind of detailed and localized information at people’s fingertips, he says, “I think brings the issue of communication of climate change to a different level.” With this tool, instead of looking at global averages, “we are saying according to your own definition of what a pleasant day is, [this is] how climate change is going to impact you, your activities.”

    And, he adds, “hopefully that will help society make decisions about what to do with this global challenge.”

    The project received support from the MIT Climate Grand Challenges project “Jameel Observatory – Climate Resilience Early Warning System Network,” as well as from the Abdul Latif Jameel Water and Food Systems Lab.

  • Lessons from Fukushima: Prepare for the unlikely

    When a devastating earthquake and tsunami overwhelmed the protective systems at the Fukushima Dai’ichi nuclear power plant complex in Japan in March 2011, it triggered a sequence of events leading to one of the worst releases of radioactive materials in the world to date. Although nuclear energy is having a revival as a low-emissions energy source to mitigate climate change, the Fukushima accident is still cited as a reason for hesitancy in adopting it.

    A new study synthesizes information from multidisciplinary sources to understand how the Fukushima Dai’ichi disaster unfolded, and points to the importance of mitigation measures and last lines of defense — even against accidents considered highly unlikely. These procedures have received relatively little attention, but they are critical in determining how severe the consequences of a reactor failure will be, the researchers say.

    The researchers note that their synthesis is one of the few attempts to look at data across disciplinary boundaries, including: the physics and engineering of what took place within the plant’s systems, the plant operators’ actions throughout the emergency, actions by emergency responders, the meteorology of radionuclide releases and transport, and the environmental and health consequences documented since the event.

    The study appears in the journal iScience, in an open-access paper by postdoc Ali Ayoub and Professor Haruko Wainwright at MIT, along with others in Switzerland, Japan, and New Mexico.

    Since 2013, Wainwright has been leading the research to integrate all the radiation monitoring data in the Fukushima region into a unified set of maps. “I was staring at the contamination map for nearly 10 years, wondering what created the main plume extending in the northwest direction, but I could not find exact information,” Wainwright says. “Our study is unique because we started from the consequence, the contamination map, and tried to identify the key factors for the consequence. Other people study the Fukushima accident from the root cause, the tsunami.”

    One thing they found was that while all the operating reactors, units 1, 2, and 3, suffered core meltdowns as a result of the failure of emergency cooling systems, units 1 and 3 — although they did experience hydrogen explosions — did not release as much radiation to the environment because their venting systems essentially worked to relieve pressure inside the containment vessels as intended. But the same system in unit 2 failed badly.

    “People think that the hydrogen explosion or the core meltdown were the worst things, or the major driver of the radiological consequences of the accident,” Wainwright says, “but our analysis found that’s not the case.” Much more significant in terms of the radiological release was the failure of the one venting mechanism.

    “There is a pressure-release mechanism that goes through water where a lot of the radionuclides get filtered out,” she explains. That system was effective in units 1 and 3, filtering out more than 90 percent of the radioactive elements before the gas was vented. However, “in unit 2, that pressure release mechanism got stuck, and the operators could not manually open it.” A hydrogen explosion in unit 1 had damaged the pressure relief mechanism of unit 2. This led to a breach of the containment structure and direct, unfiltered venting to the atmosphere, which, according to the new study, was what produced the greatest amount of contamination from the whole weeks-long event.

    Another factor was the timing of the attempt to vent the pressure buildup in the reactor. Guidelines at the time, and to this day in many reactors, specified that no venting should take place until the pressure inside the reactor containment vessel reached a specified threshold, with no regard to the wind directions at the time. In the case of Fukushima, an earlier venting could have dramatically reduced the impact: Much of the release happened when winds were blowing directly inland, but earlier the wind had been blowing offshore.

    “That pressure-release mechanism has not been a major focus of the engineering community,” she says. While there is appropriate attention to measures that prevent a core meltdown in the first place, “this sort of last line of defense has not been the main focus and should get more attention.”

    Wainwright says the study also underlines several successes in the management of the Fukushima accident. Many of the safety systems did work as they were designed. For example, even though the oldest reactor, unit 1, suffered the greatest internal damage, it released little radioactive material. Most people were able to evacuate from the 20-kilometer (12-mile) zone before the largest release happened. The mitigation measures were “somewhat successful,” Wainwright says. But there was tremendous confusion and anger during and after the accident because there were no preparations in place for such an event.

    Much work has focused on ways to prevent the kind of accidents that happened at Fukushima — for example, in the U.S. reactor operators can deploy portable backup power supplies to maintain proper reactor cooling at any reactor site. But the ongoing situation at the Zaporizhzhia nuclear complex in Ukraine, where nuclear safety is challenged by acts of war, demonstrates that despite engineers’ and operators’ best efforts to prevent it, “the totally unexpected could still happen,” Wainwright says.

    “The big-picture message is that we should have equal attention to both prevention and mitigation of accidents,” she says. “This is the essence of resilience, and it applies beyond nuclear power plants to all essential infrastructure of a functioning society, for example, the electric grid, the food and water supply, the transportation sector, etc.”

    One thing the researchers recommend is that in designing evacuation protocols, planners should make more effort to learn from much more frequent disasters such as wildfires and hurricanes. “We think getting more interdisciplinary, transdisciplinary knowledge from other kinds of disasters would be essential,” she says. Most of the emergency response strategies presently in place, she says, were designed in the 1980s and ’90s, and need to be modernized. “Consequences can be mitigated. A nuclear accident does not have to be a catastrophe, as is often portrayed in popular culture,” Wainwright says.

    The research team included Giovanni Sansavini at ETH Zurich in Switzerland; Randall Gauntt at Sandia National Laboratories in New Mexico; and Kimiaki Saito at the Japan Atomic Energy Agency.

  • Future nuclear power reactors could rely on molten salts — but what about corrosion?

    Most discussions of how to avert climate change focus on solar and wind generation as key to the transition to a future carbon-free power system. But Michael Short, the Class of ’42 Associate Professor of Nuclear Science and Engineering at MIT and associate director of the MIT Plasma Science and Fusion Center (PSFC), is impatient with such talk. “We can say we should have only wind and solar someday. But we don’t have the luxury of ‘someday’ anymore, so we can’t ignore other helpful ways to combat climate change,” he says. “To me, it’s an ‘all-hands-on-deck’ thing. Solar and wind are clearly a big part of the solution. But I think that nuclear power also has a critical role to play.”

    For decades, researchers have been working on designs for both fission and fusion nuclear reactors using molten salts as fuels or coolants. While those designs promise significant safety and performance advantages, there’s a catch: Molten salt and the impurities within it often corrode metals, ultimately causing them to crack, weaken, and fail. Inside a reactor, key metal components will be exposed not only to molten salt but also simultaneously to radiation, which generally has a detrimental effect on materials, making them more brittle and prone to failure. Will irradiation make metal components inside a molten salt-cooled nuclear reactor corrode even more quickly?

    Short and Weiyue Zhou PhD ’21, a postdoc in the PSFC, have been investigating that question for eight years. Their recent experimental findings show that certain alloys will corrode more slowly when they’re irradiated — and identifying them among all the available commercial alloys can be straightforward.

    The first challenge — building a test facility

    When Short and Zhou began investigating the effect of radiation on corrosion, practically no reliable facilities existed to look at the two effects at once. The standard approach was to examine such mechanisms in sequence: first corrode, then irradiate, then examine the impact on the material. That approach greatly simplifies the task for the researchers, but with a major trade-off. “In a reactor, everything is going to be happening at the same time,” says Short. “If you separate the two processes, you’re not simulating a reactor; you’re doing some other experiment that’s not as relevant.”

    So, Short and Zhou took on the challenge of designing and building an experimental setup that could do both at once. Short credits a team at the University of Michigan for paving the way by designing a device that could accomplish that feat in water, rather than molten salts. Even so, Zhou notes, it took them three years to come up with a device that would work with molten salts. Both researchers recall failure after failure, but the persistent Zhou ultimately tried a totally new design, and it worked. Short adds that it also took them three years to precisely replicate the salt mixture used by industry — another factor critical to getting a meaningful result. The hardest part was achieving and verifying the required purity by removing critical impurities such as moisture, oxygen, and certain other metals.

    As they were developing and testing their setup, Short and Zhou obtained initial results showing that proton irradiation did not always accelerate corrosion but sometimes actually decelerated it. They and others had hypothesized that possibility, but even so, they were surprised. “We thought we must be doing something wrong,” recalls Short. “Maybe we mixed up the samples or something.” But they subsequently made similar observations for a variety of conditions, increasing their confidence that their initial observations were not outliers.

    The successful setup

    Central to their approach is the use of accelerated protons to mimic the impact of the neutrons inside a nuclear reactor. Generating neutrons would be both impractical and prohibitively expensive, and the neutrons would make everything highly radioactive, posing health risks and requiring very long times for an irradiated sample to cool down enough to be examined. Using protons would enable Short and Zhou to examine radiation-altered corrosion both rapidly and safely.

    Key to their experimental setup is a test chamber that they attach to a proton accelerator. To prepare the test chamber for an experiment, they place inside it a thin disc of the metal alloy being tested on top of a pellet of salt. During the test, the entire foil disc is exposed to a bath of molten salt. At the same time, a beam of protons bombards the sample from the side opposite the salt pellet, but the proton beam is restricted to a circle in the middle of the foil sample. “No one can argue with our results then,” says Short. “In a single experiment, the whole sample is subjected to corrosion, and only a circle in the center of the sample is simultaneously irradiated by protons. We can see the curvature of the proton beam outline in our results, so we know which region is which.”

    The results with that arrangement were unchanged from the initial results. They confirmed the researchers’ preliminary findings, supporting their controversial hypothesis that rather than accelerating corrosion, radiation would actually decelerate corrosion in some materials under some conditions. Fortunately, they just happen to be the same conditions that will be experienced by metals in molten salt-cooled reactors.

    Why is that outcome controversial? A closeup look at the corrosion process will explain. When salt corrodes metal, the salt finds atomic-level openings in the solid, seeps in, and dissolves salt-soluble atoms, pulling them out and leaving a gap in the material — a spot where the material is now weak. “Radiation adds energy to atoms, causing them to be ballistically knocked out of their positions and move very fast,” explains Short. So, it makes sense that irradiating a material would cause atoms to move into the salt more quickly, increasing the rate of corrosion. Yet in some of their tests, the researchers found the opposite to be true.

    Experiments with “model” alloys

    The researchers’ first experiments in their novel setup involved “model” alloys consisting of nickel and chromium, a simple combination that would give them a first look at the corrosion process in action. In addition, they added europium fluoride to the salt, a compound known to speed up corrosion. In our everyday world, we often think of corrosion as taking years or decades, but in the more extreme conditions of a molten salt reactor it can noticeably occur in just hours. The researchers used the europium fluoride to speed up corrosion even more without changing the corrosion process. This allowed for more rapid determination of which materials, under which conditions, experienced more or less corrosion with simultaneous proton irradiation.

    The use of protons to emulate neutron damage to materials meant that the experimental setup had to be carefully designed and the operating conditions carefully selected and controlled. Protons are hydrogen atoms with an electrical charge, and under some conditions the hydrogen could chemically react with atoms in the sample foil, altering the corrosion response, or with ions in the salt, making the salt more corrosive. Therefore, the proton beam had to penetrate the foil sample but then stop in the salt as soon as possible. Under these conditions, the researchers found they could deliver a relatively uniform dose of radiation inside the foil layer while also minimizing chemical reactions in both the foil and the salt.

    Tests showed that a proton beam accelerated to 3 million electron-volts combined with a foil sample between 25 and 30 microns thick would work well for their nickel-chromium alloys. The temperature and duration of the exposure could be adjusted based on the corrosion susceptibility of the specific materials being tested.

    Optical images of samples examined after tests with the model alloys showed a clear boundary between the area that was exposed only to the molten salt and the area that was also exposed to the proton beam. Electron microscope images focusing on that boundary showed that the area that had been exposed only to the molten salt included dark patches where the molten salt had penetrated all the way through the foil, while the area that had also been exposed to the proton beam showed almost no such dark patches.

    To confirm that the dark patches were due to corrosion, the researchers cut through the foil sample to create cross sections. In them, they could see tunnels that the salt had dug into the sample. “For regions not under radiation, we see that the salt tunnels link the one side of the sample to the other side,” says Zhou. “For regions under radiation, we see that the salt tunnels stop more or less halfway and rarely reach the other side. So we verified that they didn’t penetrate the whole way.”

    The results “exceeded our wildest expectations,” says Short. “In every test we ran, the application of radiation slowed corrosion by a factor of two to three times.”

    More experiments, more insights

    In subsequent tests, the researchers more closely replicated commercially available molten salt by omitting the additive (europium fluoride) that they had used to speed up corrosion, and they tweaked the temperature for even more realistic conditions. “In carefully monitored tests, we found that by raising the temperature by 100 degrees Celsius, we could get corrosion to happen about 1,000 times faster than it would in a reactor,” says Short.

    Images from experiments with the nickel-chromium alloy plus the molten salt without the corrosive additive yielded further insights. Electron microscope images of the side of the foil sample facing the molten salt showed that in sections only exposed to the molten salt, the corrosion is clearly focused on the weakest part of the structure — the boundaries between the grains in the metal. In sections that were exposed to both the molten salt and the proton beam, the corrosion isn’t limited to the grain boundaries but is more spread out over the surface. Experimental results showed that these cracks are shallower and less likely to cause a key component to break.

    Short explains the observations. Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are areas — called grain boundaries — where the atoms don’t line up as well. In the corrosion-only images, dark lines track the grain boundaries. Molten salt has seeped into the grain boundaries and pulled out salt-soluble atoms. In the corrosion-plus-irradiation images, the damage is more general. It’s not only the grain boundaries that get attacked but also regions within the grains.

    So, when the material is irradiated, the molten salt also removes material from within the grains. Over time, more material comes out of the grains themselves than from the spaces between them. The removal isn’t focused on the grain boundaries; it’s spread out over the whole surface. As a result, any cracks that form are shallower and more spread out, and the material is less likely to fail.

    Testing commercial alloys

    The experiments described thus far involved model alloys — simple combinations of elements that are good for studying science but would never be used in a reactor. In the next series of experiments, the researchers focused on three commercially available alloys that are composed of nickel, chromium, iron, molybdenum, and other elements in various combinations.

    Results from the experiments with the commercial alloys showed a consistent pattern — one that confirmed an idea that the researchers had going in: the higher the concentration of salt-soluble elements in the alloy, the worse the radiation-induced corrosion damage. Radiation will increase the rate at which salt-soluble atoms such as chromium leave the grain boundaries, hastening the corrosion process. However, if more elements that are not salt-soluble, such as nickel, are present, those atoms will go into the salt more slowly. Over time, they’ll accumulate at the grain boundary and form a protective coating that blocks the grain boundary — a “self-healing mechanism that decelerates the rate of corrosion,” say the researchers.

    Thus, if an alloy consists mostly of atoms that don’t dissolve in molten salt, irradiation will cause them to form a protective coating that slows the corrosion process. But if an alloy consists mostly of atoms that dissolve in molten salt, irradiation will make them dissolve faster, speeding up corrosion. As Short summarizes, “In terms of corrosion, irradiation makes a good alloy better and a bad alloy worse.”

    Real-world relevance plus practical guidelines

    Short and Zhou find their results encouraging. In a nuclear reactor made of “good” alloys, the slowdown in corrosion will probably be even more pronounced than what they observed in their proton-based experiments because the neutrons that inflict the damage won’t chemically react with the salt to make it more corrosive. As a result, reactor designers could push the envelope more in their operating conditions, allowing them to get more power out of the same nuclear plant without compromising on safety.

    However, the researchers stress that there’s much work to be done. Many more projects are needed to explore and understand the exact corrosion mechanism in specific alloys under different irradiation conditions. In addition, their findings need to be replicated by groups at other institutions using their own facilities. “What needs to happen now is for other labs to build their own facilities and start verifying whether they get the same results as we did,” says Short. To that end, Short and Zhou have made the details of their experimental setup and all of their data freely available online. “We’ve also been actively communicating with researchers at other institutions who have contacted us,” adds Zhou. “When they’re planning to visit, we offer to show them demonstration experiments while they’re here.”

    But already their findings provide practical guidance for other researchers and equipment designers. For example, the standard way to quantify corrosion damage is by “mass loss,” a measure of how much weight the material has lost. But Short and Zhou consider mass loss a flawed measure of corrosion in molten salts. “If you’re a nuclear plant operator, you usually care whether your structural components are going to break,” says Short. “Our experiments show that radiation can change how deep the cracks are, when all other things are held constant. The deeper the cracks, the more likely a structural component is to break, leading to a reactor failure.”

    In addition, the researchers offer a simple rule for identifying good metal alloys for structural components in molten salt reactors. Manufacturers provide extensive lists of available alloys with different compositions, microstructures, and additives. Faced with a list of options for critical structures, the designer of a new nuclear fission or fusion reactor can simply examine the composition of each alloy being offered. The one with the highest content of corrosion-resistant elements such as nickel will be the best choice. Inside a nuclear reactor, that alloy should respond to a bombardment of radiation not by corroding more rapidly but by forming a protective layer that helps block the corrosion process. “That may seem like a trivial result, but the exact threshold where radiation decelerates corrosion depends on the salt chemistry, the density of neutrons in the reactor, their energies, and a few other factors,” says Short. “Therefore, the complete guidelines are a bit more complicated. But they’re presented in a straightforward way that users can understand and utilize to make a good choice for the molten salt–based reactor they’re designing.”
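
    To make that rule of thumb concrete, the sketch below ranks a few hypothetical alloy compositions by their content of elements assumed to resist dissolution in molten salt. The alloys, compositions, element groupings, and output are invented for illustration; they are not the guidelines or data from the study.

        # Toy screening of candidate alloys by the rule of thumb described
        # above: favor compositions richer in elements that resist dissolution
        # in molten salt (e.g., nickel) over salt-soluble ones (e.g., chromium).
        SALT_RESISTANT = {"Ni", "Mo", "W"}   # assumed resistant elements
        SALT_SOLUBLE = {"Cr", "Fe", "Mn"}    # assumed salt-soluble elements

        candidate_alloys = {                  # weight fractions (hypothetical)
            "Alloy A": {"Ni": 0.72, "Cr": 0.16, "Fe": 0.08, "Mo": 0.04},
            "Alloy B": {"Ni": 0.58, "Cr": 0.22, "Fe": 0.18, "Mo": 0.02},
            "Alloy C": {"Ni": 0.45, "Cr": 0.30, "Fe": 0.25},
        }

        def resistant_fraction(composition):
            """Total weight fraction of elements assumed to resist dissolution."""
            return sum(f for el, f in composition.items() if el in SALT_RESISTANT)

        # Rank candidates from highest to lowest content of resistant elements.
        for name, comp in sorted(candidate_alloys.items(),
                                 key=lambda kv: resistant_fraction(kv[1]),
                                 reverse=True):
            print(f"{name}: resistant fraction {resistant_fraction(comp):.2f}")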

    This research was funded, in part, by Eni S.p.A. through the MIT Plasma Science and Fusion Center’s Laboratory for Innovative Fusion Technologies. Earlier work was funded, in part, by the Transatomic Power Corporation and by the U.S. Department of Energy Nuclear Energy University Program. Equipment development and testing was supported by the Transatomic Power Corporation.

    This article appears in the Winter 2024 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Optimizing nuclear fuels for next-generation reactors

    In 2010, when Ericmoore Jossou was attending college in northern Nigeria, the lights would flicker in and out all day, sometimes lasting only for a couple of hours at a time. The frustrating experience reaffirmed Jossou’s realization that the country’s sporadic energy supply was a problem. It was the beginning of his path toward nuclear engineering.

    Because of the energy crisis, “I told myself I was going to find myself in a career that allows me to develop energy technologies that can easily be scaled to meet the energy needs of the world, including my own country,” says Jossou, an assistant professor in a shared position between the departments of Nuclear Science and Engineering (NSE), where he is the John Clark Hardwick (1986) Professor, and of Electrical Engineering and Computer Science.

    Today, Jossou uses computer simulations for rational materials design: the AI-aided, purposeful development of cladding materials and fuels for next-generation nuclear reactors. As one of the shared faculty hires between the MIT Schwarzman College of Computing and departments across MIT, his appointment recognizes his commitment to computing for climate and the environment.

    A well-rounded education in Nigeria

    Growing up in Lagos, Jossou knew education was about more than just bookish knowledge, so he was eager to travel and experience other cultures. He would start in his own backyard by traveling across the Niger river and enrolling in Ahmadu Bello University in northern Nigeria. Moving from the south was a cultural education with a different language and different foods. It was here that Jossou got to try and love tuwo shinkafa, a northern Nigerian rice-based specialty, for the first time.

    After his undergraduate studies, armed with a bachelor’s degree in chemistry, Jossou was among a small cohort selected for a specialty master’s training program funded by the World Bank Institute and African Development Bank. The program at the African University of Science and Technology in Abuja, Nigeria, is a pan-African venture dedicated to nurturing homegrown science talent on the continent. Visiting professors from around the world taught intensive three-week courses, an experience which felt like drinking from a fire hose. The program widened Jossou’s views and he set his sights on a doctoral program with an emphasis on clean energy systems.

    A pivot to nuclear science

    While in Nigeria, Jossou learned of Professor Jerzy Szpunar at the University of Saskatchewan in Canada, who was looking for a student researcher to explore fuels and alloys for nuclear reactors. Before then, Jossou was lukewarm on nuclear energy, but the research sounded fascinating. The Fukushima, Japan, incident was still fresh in the rearview mirror, and Jossou remembered his early determination to address his own country’s energy crisis. He was sold on the idea and graduated with a doctoral degree from the University of Saskatchewan on an international dean’s scholarship.

    Jossou’s postdoctoral work included a brief stint at Brookhaven National Laboratory as a staff scientist. He leaped at the opportunity to join MIT NSE as a way of realizing his research interest and teaching future engineers. “I would really like to conduct cutting-edge research in nuclear materials design and to pass on my knowledge to the next generation of scientists and engineers and there’s no better place to do that than at MIT,” Jossou says.

    Merging material science and computational modeling

    Jossou’s doctoral work on designing nuclear fuels for next-generation reactors forms the basis of research his lab is pursuing at MIT NSE. Nuclear reactors that were built in the 1950s and ’60s are getting a makeover in terms of improved accident tolerance. Reactors are not confined to one kind, either: We have micro reactors and are now considering ones using metallic nuclear fuels, Jossou points out. The diversity of options is enough to keep researchers busy testing materials fit for cladding, the lining that prevents corrosion of the fuel and release of radioactive fission products into the surrounding reactor coolant.

    The team is also investigating fuels that improve burn-up efficiencies, so they can last longer in the reactor. An intriguing approach has been to immobilize the gas bubbles that arise from the fission process, so they don’t grow and degrade the fuel.

    Since joining MIT in July 2023, Jossou has been setting up a lab that optimizes the composition of accident-tolerant nuclear fuels. He is leaning on his materials science background and looping computer simulations and artificial intelligence into the mix.

    Computer simulations allow the researchers to narrow down the potential field of candidates, optimized for specific parameters, so they can synthesize only the most promising candidates in the lab. And AI’s predictive capabilities guide researchers on which materials composition to consider next. “We no longer depend on serendipity to choose our materials, our lab is based on rational materials design,” Jossou says, “we can rapidly design advanced nuclear fuels.”

    Advancing energy causes in Africa

    Now that he is at MIT, Jossou admits the view from the outside is different. He now harbors a different perspective on what Africa needs to address some of its challenges. “The starting point to solve our problems is not money; it needs to start with ideas,” he says, “we need to find highly skilled people who can actually solve problems.” That job involves adding economic value to the rich arrays of raw materials that the continent is blessed with. It frustrates Jossou that Niger, a country rich in raw material for uranium, has no nuclear reactors of its own. It ships most of its ore to France. “The path forward is to find a way to refine these materials in Africa and to be able to power the industries on that continent as well,” Jossou says.

    Jossou is determined to do his part to eliminate these roadblocks.

    Anchored in mentorship, Jossou’s solution aims to train talent from Africa in his own lab. He has applied for an MIT Global Experiences MISTI grant to facilitate travel and research studies for Ghanaian scientists. “The goal is to conduct research in our facility and perhaps add value to indigenous materials,” Jossou says.

    Adding value has been a consistent theme of Jossou’s career. He remembers wanting to become a neurosurgeon after reading “Gifted Hands,” moved by the personal story of the author, Ben Carson. As Jossou grew older, however, he realized that becoming a doctor wasn’t necessarily what he wanted. Instead, he was looking to add value. “What I wanted was really to take on a career that allows me to solve a societal problem.” The societal problem of clean and safe energy for all is precisely what Jossou is working on today.

  • Study finds lands used for grazing can worsen or help climate change

    When it comes to global climate change, livestock grazing can be either a blessing or a curse, according to a new study, which offers clues on how to tell the difference.

    If managed properly, the study shows, grazing can actually increase the amount of carbon from the air that gets stored in the ground and sequestered for the long run. But if there is too much grazing, soil erosion can result, and the net effect is to cause more carbon losses, so that the land becomes a net carbon source, instead of a carbon sink. And the study found that the latter is far more common around the world today.

    The new work, published today in the journal Nature Climate Change, provides ways to determine the tipping point between the two for grazing lands in a given climate zone and soil type. It also provides an estimate of the total amount of carbon that has been lost over past decades due to livestock grazing, and of how much could be removed from the atmosphere if optimized grazing management were implemented. The study was carried out by Cesar Terrer, an assistant professor of civil and environmental engineering at MIT; Shuai Ren, a PhD student at the Chinese Academy of Sciences whose thesis is co-supervised by Terrer; and four others.

    “This has been a matter of debate in the scientific literature for a long time,” Terrer says. “In general experiments, grazing decreases soil carbon stocks, but surprisingly, sometimes grazing increases soil carbon stocks, which is why it’s been puzzling.”

    What happens, he explains, is that “grazing could stimulate vegetation growth through easing resource constraints such as light and nutrients, thereby increasing root carbon inputs to soils, where carbon can stay for centuries or millennia.”

    But that only works up to a certain point, the team found after a careful analysis of 1,473 soil carbon observations from different grazing studies from many locations around the world. “When you cross a threshold in grazing intensity, or the amount of animals grazing there, that is when you start to see sort of a tipping point — a strong decrease in the amount of carbon in the soil,” Terrer explains.

    That loss is thought to be primarily from increased soil erosion on the denuded land. And with that erosion, Terrer says, “basically you lose a lot of the carbon that you have been locking in for centuries.”

    The various studies the team compiled, although they differed somewhat, essentially used a similar methodology: fence off a portion of land so that livestock can’t access it, then after some time take soil samples from within the fenced-off area and from comparable nearby areas that have been grazed, and compare their carbon content.
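
    As a rough illustration of that comparison, each paired measurement is often summarized as a log response ratio between the grazed plot and the fenced-off exclosure. The snippet below uses invented site names and numbers, not the authors’ data or code.

    ```python
    # Illustrative grazed-vs-exclosure comparison with invented numbers.
    # A negative log response ratio means grazing reduced soil carbon at that site.
    import math

    sites = [
        # (site, soil carbon in exclosure, soil carbon in grazed plot), in kg C per m^2
        ("site_A", 6.2, 5.1),
        ("site_B", 4.8, 5.0),
        ("site_C", 7.5, 4.9),
    ]

    for name, c_fenced, c_grazed in sites:
        lrr = math.log(c_grazed / c_fenced)
        print(f"{name}: log response ratio = {lrr:+.3f}")
    ```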

    “Along with the data on soil carbon for the control and grazed plots,” he says, “we also collected a bunch of other information, such as the mean annual temperature of the site, mean annual precipitation, plant biomass, and properties of the soil, like pH and nitrogen content. And then, of course, we estimate the grazing intensity — aboveground biomass consumed, because that turns out to be the key parameter.”  

    With artificial intelligence models, the authors quantified the importance of each of these parameters — grazing intensity, temperature, precipitation, and soil properties — in modulating the sign (positive or negative) and magnitude of the impact of grazing on soil carbon stocks. “Interestingly, we found soil carbon stocks increase and then decrease with grazing intensity, rather than the expected linear response,” says Ren.
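
    A minimal sketch of that kind of driver-importance analysis is shown below, using a random-forest regressor on synthetic site data. The variable names, values, and model choice are assumptions made for illustration, not the study’s actual pipeline.

    ```python
    # Illustrative driver-importance analysis on synthetic site data (not the study's model).
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n = 500

    # Hypothetical site covariates.
    df = pd.DataFrame({
        "grazing_intensity":  rng.uniform(0, 1, n),        # fraction of aboveground biomass consumed
        "mean_annual_temp":   rng.uniform(-5, 30, n),       # deg C
        "mean_annual_precip": rng.uniform(100, 2000, n),    # mm
        "soil_ph":            rng.uniform(4.5, 8.5, n),
    })

    # Synthetic response: soil carbon rises with light grazing, then falls past a threshold,
    # modulated by temperature (hotter sites lose more carbon).
    g, t = df["grazing_intensity"], df["mean_annual_temp"]
    delta_soc = 0.5 * g - 2.0 * np.maximum(g - 0.4, 0) - 0.02 * t * g + rng.normal(0, 0.05, n)

    model = RandomForestRegressor(n_estimators=300, random_state=0).fit(df, delta_soc)
    for name, imp in sorted(zip(df.columns, model.feature_importances_), key=lambda p: -p[1]):
        print(f"{name:>20s}: importance = {imp:.2f}")
    ```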

    Having developed the model through AI methods and validated it, including by comparing its predictions with those based on underlying physical principles, the researchers could then apply the model to estimate both past and future effects. “In this case,” Terrer says, “we use the model to quantify the historical losses in soil carbon stocks from grazing. And we found that 46 petagrams [billion metric tons] of soil carbon, down to a depth of one meter, have been lost in the last few decades due to grazing.”

    By way of comparison, the total amount of carbon emitted per year from burning all fossil fuels is about 10 petagrams, so the loss from grazing equals more than four years’ worth of the world’s fossil emissions combined.
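
    That comparison is straightforward arithmetic, taking the roughly 10 petagrams per year figure at face value:

    ```python
    # Back-of-the-envelope check of the comparison above.
    soil_carbon_lost_pg = 46        # petagrams of carbon lost from grazed soils (top 1 m)
    fossil_carbon_per_year_pg = 10  # approximate annual fossil-fuel carbon emissions
    print(f"{soil_carbon_lost_pg / fossil_carbon_per_year_pg:.1f} years of fossil emissions")  # ~4.6
    ```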

    What they found was “an overall decline in soil carbon stocks, but with a lot of variability,” Terrer says. The analysis showed that the interplay between grazing intensity and environmental conditions such as temperature could explain the variability, with higher grazing intensity and hotter climates resulting in greater carbon loss. “This means that policy-makers should take into account local abiotic and biotic factors to manage rangelands efficiently,” Ren notes. “By ignoring such complex interactions, we found that using IPCC [Intergovernmental Panel on Climate Change] guidelines would underestimate grazing-induced soil carbon loss by a factor of three globally.”

    Using an approach that incorporates local environmental conditions, the team produced global, high-resolution maps of optimal grazing intensity and the threshold of intensity at which carbon starts to decrease very rapidly. These maps are expected to serve as important benchmarks for evaluating existing grazing practices and provide guidance to local farmers on how to effectively manage their grazing lands.

    Then, using that map, the team estimated how much carbon could be captured if all grazing lands were limited to their optimum grazing intensity. Currently, the authors found, about 20 percent of all pasturelands have crossed the thresholds, leading to severe carbon losses. However, they found that under the optimal levels, global grazing lands would sequester 63 petagrams of carbon. “It is amazing,” Ren says. “This value is roughly equivalent to a 30-year carbon accumulation from global natural forest regrowth.”

    That would be no simple task, of course. To achieve optimal levels, the team found that approximately 75 percent of all grazing areas would need to reduce grazing intensity. And if the world is to seriously reduce the amount of grazing overall, “you have to reduce the amount of meat that’s available for people,” Terrer says.

    “Another option is to move cattle around,” he says, “from areas that are more severely affected by grazing intensity, to areas that are less affected. Those rotations have been suggested as an opportunity to avoid the more drastic declines in carbon stocks without necessarily reducing the availability of meat.”

    This study didn’t delve into these social and economic implications, Terrer says. “Our role is to just point out what would be the opportunity here. It shows that shifts in diets can be a powerful way to mitigate climate change.”

    “This is a rigorous and careful analysis that provides our best look to date at soil carbon changes due to livestock grazing practiced worldwide,” says Ben Bond-Lamberty, a terrestrial ecosystem research scientist at Pacific Northwest National Laboratory, who was not associated with this work. “The authors’ analysis gives us a unique estimate of soil carbon losses due to grazing and, intriguingly, where and how the process might be reversed.”

    He adds: “One intriguing aspect to this work is the discrepancies between its results and the guidelines currently used by the IPCC — guidelines that affect countries’ commitments, carbon-market pricing, and policies.” However, he says, “As the authors note, the amount of carbon historically grazed soils might be able to take up is small relative to ongoing human emissions. But every little bit helps!”

    “Improved management of working lands can be a powerful tool to combat climate change,” says Jonathan Sanderman, carbon program director of the Woodwell Climate Research Center in Falmouth, Massachusetts, who was not associated with this work. He adds, “This work demonstrates that while, historically, grazing has been a large contributor to climate change, there is significant potential to decrease the climate impact of livestock by optimizing grazing intensity to rebuild lost soil carbon.”

    Terrer states that for now, “we have started a new study, to evaluate the consequences of shifts in diets for carbon stocks. I think that’s the million-dollar question: How much carbon could you sequester, compared to business as usual, if diets shift to more vegan or vegetarian?” The answers will not be simple, because a shift to more vegetable-based diets would require more cropland, which can also have different environmental impacts. Pastures take more land than crops, but produce different kinds of emissions. “What’s the overall impact for climate change? That is the question we’re interested in,” he says.

    The research team included Juan Li, Yingfao Cao, Sheshan Yang, and Dan Liu, all with the Chinese Academy of Sciences. The work was supported by the Second Tibetan Plateau Scientific Expedition and Research Program, and the Science and Technology Major Project of Tibetan Autonomous Region of China. More