More stories

  •

    Improving predictions of sea level rise for the next century

    When we think of climate change, one of the most dramatic images that comes to mind is the loss of glacial ice. As the Earth warms, these enormous rivers of ice become casualties of rising temperatures. But as ice sheets retreat, they also become an important contributor to one of the more dangerous outcomes of climate change: sea-level rise. At MIT, an interdisciplinary team of scientists is determined to improve sea level rise predictions for the next century, in part by taking a closer look at the physics of ice sheets.

    Last month, two research proposals on the topic, led by Brent Minchew, the Cecil and Ida Green Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), were announced as finalists in the MIT Climate Grand Challenges initiative. Launched in July 2020, Climate Grand Challenges fielded almost 100 project proposals from collaborators across the Institute who heeded the bold charge: to develop research and innovations that will deliver game-changing advances in the world’s efforts to address the climate challenge.

    As finalists, Minchew and his collaborators from the departments of Urban Studies and Planning, Economics, Civil and Environmental Engineering, the Haystack Observatory, and external partners, received $100,000 to develop their research plans. A subset of the 27 proposals tapped as finalists will be announced next month, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

    One goal of both Minchew proposals is to more fully understand the fundamental processes that govern rapid changes in glacial ice, and to use that understanding to build next-generation models that better predict ice sheet behavior as the sheets respond to, and influence, climate change.

    “We need to develop more accurate and computationally efficient models that provide testable projections of sea-level rise over the coming decades. To do so quickly, we want to make better and more frequent observations and learn the physics of ice sheets from these data,” says Minchew. “For example, how much stress do you have to apply to ice before it breaks?”

    Currently, Minchew’s Glacier Dynamics and Remote Sensing group uses satellites to observe the ice sheets on Greenland and Antarctica primarily with interferometric synthetic aperture radar (InSAR). But the data are often collected over long intervals of time, which only gives them “before and after” snapshots of big events. By taking more frequent measurements on shorter time scales, such as hours or days, they can get a more detailed picture of what is happening in the ice.
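As a rough illustration of the InSAR principle underlying these observations, line-of-sight ground displacement can be recovered from the interferometric phase change between two radar acquisitions. The sketch below assumes a C-band radar wavelength (about 5.6 cm, typical of satellites such as Sentinel-1); the wavelength value is an assumption for illustration, not a detail from the article.

```python
import math

# Assumed C-band radar wavelength in meters (illustrative, ~Sentinel-1).
WAVELENGTH_M = 0.056

def los_displacement(phase_change_rad: float) -> float:
    """Line-of-sight ground displacement from unwrapped interferometric phase.

    One full 2*pi phase cycle corresponds to half a wavelength of motion,
    because the radar signal travels to the ground and back.
    """
    return phase_change_rad * WAVELENGTH_M / (4 * math.pi)

# One full fringe (2*pi radians) maps to about 2.8 cm of motion
# toward the radar.
print(los_displacement(2 * math.pi))  # ~0.028 (meters)
```

Collecting such phase maps hours or days apart, rather than weeks apart, is what turns "before and after" snapshots into a time series of ice motion.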

    “Many of the key unknowns in our projections of what ice sheets are going to look like in the future, and how they’re going to evolve, involve the dynamics of glaciers, or our understanding of how the flow speed and the resistances to flow are related,” says Minchew.

    At the heart of the two proposals is the creation of SACOS, the Stratospheric Airborne Climate Observatory System. The group envisions developing solar-powered drones that can fly in the stratosphere for months at a time, taking more frequent measurements using a new lightweight, low-power radar and other high-resolution instrumentation. They also propose air-dropping sensors directly onto the ice, equipped with seismometers and GPS trackers to measure high-frequency vibrations in the ice and pinpoint the motions of its flow.

    How glaciers contribute to sea level rise

    Current climate models predict an increase in sea levels over the next century, but by just how much is still unclear. Estimates range anywhere from 20 centimeters to two meters, which is a large difference when it comes to enacting policy or mitigation. Minchew points out that response measures will differ depending on which end of that range the rise falls toward. If it’s closer to 20 centimeters, coastal barriers can be built to protect low-lying areas. But with higher surges, such measures become too expensive and inefficient to be viable, as entire portions of cities and millions of people would have to be relocated.

    “If we’re looking at a future where we could get more than a meter of sea level rise by the end of the century, then we need to know about that sooner rather than later so that we can start to plan and to do our best to prepare for that scenario,” he says.

    There are two ways glaciers and ice sheets contribute to rising sea levels: direct melting of the ice and accelerated transport of ice to the oceans. In Antarctica, warming waters melt the margins of the ice sheets, which tends to reduce the resistive stresses and allow ice to flow more quickly to the ocean. This thinning can also cause the ice shelves to be more prone to fracture, facilitating the calving of icebergs — events which sometimes cause even further acceleration of ice flow.

    Using data collected by SACOS, Minchew and his group can better understand what material properties in the ice allow for fracturing and calving of icebergs, and build a more complete picture of how ice sheets respond to climate forces. 

    “What I want is to reduce and quantify the uncertainties in projections of sea level rise out to the year 2100,” he says.

    From that more complete picture, the team — which also includes economists, engineers, and urban planning specialists — can work on developing predictive models and methods to help communities and governments estimate the costs associated with sea level rise, develop sound infrastructure strategies, and spur engineering innovation.

    Understanding glacier dynamics

    More frequent radar measurements and the collection of higher-resolution seismic and GPS data will allow Minchew and the team to develop a better understanding of the broad category of glacier dynamics — including calving, an important process in setting the rate of sea level rise that is currently not well understood.

    “Some of what we’re doing is quite similar to what seismologists do,” he says. “They measure seismic waves following an earthquake, or a volcanic eruption, or things of this nature and use those observations to better understand the mechanisms that govern these phenomena.”

    Air-droppable sensors will help them collect information about ice sheet movement, but this method comes with drawbacks — like installation and maintenance, which are difficult to manage out on a massive ice sheet that is moving and melting. Also, each instrument can only take measurements at a single location. Minchew equates it to a bobber in water: All it can tell you is how the bobber moves as the waves disturb it.

    But by also taking continuous radar measurements from the air, Minchew’s team can collect observations both in space and in time. Instead of just watching the bobber in the water, they can effectively make a movie of the waves propagating out, as well as visualize processes like iceberg calving happening in multiple dimensions.

    Once the bobbers are in place and the movies recorded, the next step is developing machine learning algorithms to help analyze all the new data being collected. While this data-driven kind of discovery has been a hot topic in other fields, this is the first time it has been applied to glacier research.

    “We’ve developed this new methodology to ingest this huge amount of data,” he says, “and from that create an entirely new way of analyzing the system to answer these fundamental and critically important questions.”

  •

    How molecular biology could reduce global food insecurity

    Staple crops like rice, maize, and wheat feed over half of the global population, but they are increasingly vulnerable to severe environmental risks. The effects of climate change, including changing temperatures, rainfall variability, shifting patterns of agricultural pests and diseases, and saltwater intrusion from sea-level rise, all contribute to decreased crop yields. As these effects continue to worsen, there will be less food available for a rapidly growing population. 

    Mary Gehring, associate professor of biology and a member of the Whitehead Institute for Biomedical Research, is growing increasingly concerned about the potentially catastrophic impacts of climate change and has resolved to do something about it.

    The Gehring Lab’s primary research focus is plant epigenetics: the heritable information that influences plant cellular function but is not encoded in the DNA sequence itself. This research is adding to our fundamental understanding of plant biology and could have agricultural applications in the future. “I’ve been working with seeds for many years,” says Gehring. “Understanding how seeds work is going to be critical to agriculture and food security.”

    Laying the foundation

    Gehring is using her expertise to help crops develop climate resilience through a 2021 seed grant from MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Her research is aimed at discovering how we can accelerate the production of genetic diversity to generate plant populations that are better suited to challenging environmental conditions.

    Genetic variation gives rise to phenotypic variations that can help plants adapt to a wider range of climates. Traits such as flood resistance and salt tolerance will become more important as the effects of climate change are realized. However, many important plant species do not appear to have much standing genetic variation, which could become an issue if farmers need to breed their crops quickly to adapt to a changing climate. 

    In researching a nutritious crop that has little genetic variation, Gehring came across the pigeon pea, a species she had never worked with before. Pigeon peas are a legume eaten in Asia, Africa, and Latin America. They have some of the highest levels of protein in a seed, so eating more pigeon peas could decrease our dependence on meat, which has numerous negative environmental impacts. Pigeon peas also have a positive impact on the environment; as perennial plants, they live for three to five years and sequester carbon for longer periods of time. They can also help with soil restoration. “Legumes are very interesting because they’re nitrogen-fixers, so they create symbioses with microbes in the soil and fix nitrogen, which can renew soils,” says Gehring. Furthermore, pigeon peas are known to be drought-resistant, so they will likely become more attractive as many farmers transition away from water-intensive crops.

    Developing a strategy

    Using the pigeon pea plant, Gehring began to explore a universal technology that would increase the amount of genetic diversity in plants. One method her research group chose is to enhance transposable element proliferation. Genomes contain protein-coding genes, but a large fraction of many genomes consists of transposable elements; about 45 percent of the human genome is made up of them, Gehring notes. The primary function of transposable elements is to make more copies of themselves. Since organisms do not need an unlimited number of these copies, there are systems in place to “silence” the elements and prevent further copying.

    Gehring is trying to reverse that silencing so that the transposable elements can move freely throughout the genome, which could create genetic variation by creating mutations or altering the promoter of a gene — that is, what controls a certain gene’s expression. Scientists have traditionally initiated mutagenesis by using a chemical that changes single base pairs in DNA, or by using X-rays, which can cause very large chromosome breaks. Gehring’s research team is attempting to induce transposable element proliferation by treatment with a suite of chemicals that inhibit transposable element silencing. The goal is to impact multiple sites in the genome simultaneously. “This is unexplored territory where you’re changing 50 genes at a time, or 100, rather than just one,” she explains. “It’s a fairly risky project, but sometimes you have to be ambitious and take risks.”

    Looking forward

    Less than one year after receiving the J-WAFS seed grant, the research project is still in its early stages. Despite various restrictions due to the ongoing pandemic, the Gehring Lab is now generating data on the Arabidopsis plant that will be applied to pigeon pea plants. However, Gehring expects it will take a good amount of time to complete this research phase, considering the pigeon pea plants can take upward of 100 days just to flower. While it might take time, this technology could help crops withstand the effects of climate change, ultimately contributing to J-WAFS’ goal of finding solutions to food system challenges.

    “Climate change is not something any of us can ignore. … If one of us has the ability to address it, even in a very small way, that’s important to try to pursue,” Gehring remarks. “It’s part of our responsibility as scientists to take what knowledge we have and try to apply it to these sorts of problems.”

  •

    New program bolsters innovation in next-generation artificial intelligence hardware

    The MIT AI Hardware Program is a new academia-industry collaboration aimed at defining and developing translational technologies in hardware and software for the AI and quantum age. A collaboration between the MIT School of Engineering and the MIT Schwarzman College of Computing, involving the Microsystems Technology Laboratories and programs and units in the college, the cross-disciplinary effort aims to innovate technologies that will deliver more energy-efficient systems for cloud and edge computing.

    “A sharp focus on AI hardware manufacturing, research, and design is critical to meet the demands of the world’s evolving devices, architectures, and systems,” says Anantha Chandrakasan, dean of the MIT School of Engineering and Vannevar Bush Professor of Electrical Engineering and Computer Science. “Knowledge-sharing between industry and academia is imperative to the future of high-performance computing.”

    Based on use-inspired research involving materials, devices, circuits, algorithms, and software, the MIT AI Hardware Program convenes researchers from MIT and industry to facilitate the transition of fundamental knowledge to real-world technological solutions. The program spans materials and devices, as well as architecture and algorithms enabling energy-efficient and sustainable high-performance computing.

    “As AI systems become more sophisticated, new solutions are sorely needed to enable more advanced applications and deliver greater performance,” says Daniel Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our aim is to devise real-world technological solutions and lead the development of technologies for AI in hardware and software.”

    The inaugural members of the program are companies from a wide range of industries including chip-making, semiconductor manufacturing equipment, AI and computing services, and information systems R&D organizations. The companies represent a diverse ecosystem, both nationally and internationally, and will work with MIT faculty and students to help shape a vibrant future for our planet through cutting-edge AI hardware research.

    The five inaugural members of the MIT AI Hardware Program are:  

    Amazon, a global technology company whose hardware inventions include the Kindle, Amazon Echo, Fire TV, and Astro; 
    Analog Devices, a global leader in the design and manufacturing of analog, mixed signal, and DSP integrated circuits; 
    ASML, an innovation leader in the semiconductor industry, providing chipmakers with hardware, software, and services to mass produce patterns on silicon through lithography; 
    NTT Research, a subsidiary of NTT that conducts fundamental research to upgrade reality in game-changing ways that improve lives and brighten our global future; and 
    TSMC, the world’s leading dedicated semiconductor foundry.

    The MIT AI Hardware Program will create a roadmap of transformative AI hardware technologies. Leveraging MIT.nano, the most advanced university nanofabrication facility anywhere, the program will foster a unique environment for AI hardware research.  

    “We are all in awe at the seemingly superhuman capabilities of today’s AI systems. But this comes at a rapidly increasing and unsustainable energy cost,” says Jesús del Alamo, the Donner Professor in MIT’s Department of Electrical Engineering and Computer Science. “Continued progress in AI will require new and vastly more energy-efficient systems. This, in turn, will demand innovations across the entire abstraction stack, from materials and devices to systems and software. The program is in a unique position to contribute to this quest.”

    The program will prioritize the following topics:

    analog neural networks;
    new roadmap CMOS designs;
    heterogeneous integration for AI systems;
    monolithic 3D AI systems;
    analog nonvolatile memory devices;
    software-hardware co-design;
    intelligence at the edge;
    intelligent sensors;
    energy-efficient AI;
    intelligent internet of things (IIoT);
    neuromorphic computing;
    AI edge security;
    quantum AI;
    wireless technologies;
    hybrid-cloud computing; and
    high-performance computation.

    “We live in an era where paradigm-shifting discoveries in hardware, systems, communications, and computing have become mandatory to find sustainable solutions — solutions that we are proud to give to the world and generations to come,” says Aude Oliva, senior research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and director of strategic industry engagement in the MIT Schwarzman College of Computing.

    The new program is co-led by Jesús del Alamo and Aude Oliva, and Anantha Chandrakasan serves as chair.

  •

    Q&A: Climate Grand Challenges finalists on new pathways to decarbonizing industry

    Note: This is the third article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalist teams, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    The industrial sector is the backbone of today’s global economy, yet its activities are among the most energy-intensive and the toughest to decarbonize. Efforts to reach net-zero targets and avert runaway climate change will not succeed without new solutions for replacing sources of carbon emissions with low-carbon alternatives and developing scalable nonemitting applications of hydrocarbons.

    In conversations prepared for MIT News, faculty from three of the teams with projects in the competition’s “Decarbonizing complex industries and processes” category discuss strategies for achieving impact in hard-to-abate sectors, from long-distance transportation and building construction to textile manufacturing and chemical refining. The other Climate Grand Challenges research themes include using data and science to forecast climate-related risk, building equity and fairness into climate solutions, and removing, managing, and storing greenhouse gases. The following responses have been edited for length and clarity.

    Moving toward an all-carbon material approach to building

    Faced with the prospect of building stock doubling globally by 2050, there is a great need for sustainable alternatives to conventional mineral- and metal-based construction materials. Mark Goulthorpe, associate professor in the Department of Architecture, explains the methods behind Carbon >Building, an initiative to develop energy-efficient building materials by reorienting hydrocarbons from current use as fuels to environmentally benign products, creating an entirely new genre of lightweight, all-carbon buildings that could actually drive decarbonization.

    Q: What are all-carbon buildings and how can they help mitigate climate change?

    A: Instead of burning hydrocarbons as fuel, which releases carbon dioxide and other greenhouse gases that contribute to atmospheric pollution, we seek to pioneer a process that uses carbon materially to build at macro scale. New forms of carbon — carbon nanotube, carbon foam, etc. — offer salient properties for building that might effectively displace the current material paradigm. Only hydrocarbons offer sufficient scale to beat out the billion-ton mineral and metal markets, and their perilous impact. Carbon nanotube from methane pyrolysis is of special interest, as it offers hydrogen as a byproduct.

    Q: How will society benefit from the widespread use of all-carbon buildings?

    A: We anticipate reducing costs and timelines in carbon composite buildings, while increasing quality, longevity, and performance, and diminishing environmental impact. Affordability of buildings is a growing problem in all global markets as the cost of labor and logistics in multimaterial assemblies creates a burden that is very detrimental to economic growth and results in overcrowding and urban blight.

    Alleviating these challenges would have huge societal benefits, especially for those in lower income brackets who cannot afford housing, but the biggest benefit would be in drastically reducing the environmental footprint of typical buildings, which account for nearly 40 percent of global energy consumption.

    An all-carbon building sector will not only reduce hydrocarbon extraction, but can produce higher value materials for building. We are looking to rethink the building industry by greatly streamlining global production and learning from the low-labor methods pioneered by composite manufacturing such as wind turbine blades, which are quick and cheap to produce. This technology can improve the sustainability and affordability of buildings — and holds the promise of faster, cheaper, greener, and more resilient modes of dwelling.

    Emissions reduction through innovation in the textile industry

    Collectively, the textile industry is responsible for over 4 billion metric tons of carbon dioxide equivalent per year, or 5 to 10 percent of global greenhouse gas emissions — more than aviation and maritime shipping combined. And the problem is only getting worse with the industry’s rapid growth. Under the current trajectory, consumption is projected to increase 30 percent by 2030, reaching 102 million tons. A diverse group of faculty and researchers led by Gregory Rutledge, the Lammot du Pont Professor in the Department of Chemical Engineering, and Yuly Fuentes-Medel, project manager for fiber technologies and research advisor to the MIT Innovation Initiative, is developing groundbreaking innovations to reshape how textiles are selected, sourced, designed, manufactured, and used, and to create the structural changes required for sustained reductions in emissions by this industry.

    Q: Why has the textile industry been difficult to decarbonize?

    A: The industry currently operates under a linear model that relies heavily on virgin feedstock, at roughly 97 percent, yet recycles or downcycles less than 15 percent. Furthermore, recent trends in “fast fashion” have led to massive underutilization of apparel, such that products are discarded on average after only seven to 10 uses. In an industry with high volume and low margins, replacement technologies must achieve emissions reduction at scale while maintaining performance and economic efficiency.

    There are also technical barriers to adopting circular business models, from the challenge of dealing with products comprising fiber blends and chemical additives to the low maturity of recycling technologies. The environmental impacts of textiles and apparel have been estimated using life cycle analysis, and industry-standard indexes are under development to assess sustainability throughout the life cycle of a product, but information and tools are needed to model how new solutions will alter those impacts and include the consumer as an active player to keep our planet safe. This project seeks to deliver both the new solutions and the tools to evaluate their potential for impact.

    Q: Describe the five components of your program. What is the anticipated timeline for implementing these solutions?

    A: Our plan comprises five programmatic sections, which include (1) enabling a paradigm shift to sustainable materials using nontraditional, carbon-negative polymers derived from biomass and additives that facilitate recycling; (2) rethinking manufacturing with processes to structure fibers and fabrics for performance, waste reduction, and increased material efficiency; (3) designing textiles for value by developing products that are customized, adaptable, and multifunctional, and that interact with their environment to reduce energy consumption; (4) exploring consumer behavior change through human interventions that reduce emissions by encouraging the adoption of new technologies, increased utilization of products, and circularity; and (5) establishing carbon transparency with systems-level analyses that measure the impact of these strategies and guide decision making.

    We have proposed a five-year timeline with annual targets for each project. Conservatively, we estimate our program could reduce greenhouse gas emissions in the industry by 25 percent by 2030, with further significant reductions to follow.

    Tough-to-decarbonize transportation

    Airplanes, transoceanic ships, and freight trucks are critical to transporting people and delivering goods, and they are the cornerstone of global commerce, manufacturing, and tourism. But these vehicles also emit 3.7 billion tons of carbon dioxide annually and, left unchecked, they could take up a quarter of the remaining carbon budget by 2050. William Green, the Hoyt C. Hottel Professor in the Department of Chemical Engineering, co-leads a multidisciplinary team with Steven Barrett, professor of aeronautics and astronautics and director of the MIT Laboratory for Aviation and the Environment, that is working to identify and advance economically viable technologies and policies for decarbonizing heavy-duty trucking, shipping, and aviation. The Tough to Decarbonize Transportation research program aims to design and optimize fuel chemistry and production, vehicles, operations, and policies to chart the course to net-zero emissions by midcentury.

    Q: What are the highest priority focus areas of your research program?

    A: Hydrocarbon fuels made from biomass are the least expensive option, but it seems impractical, and probably damaging to the environment, to harvest the huge amount of biomass that would be needed to meet the massive and growing energy demands from these sectors using today’s biomass-to-fuel technology. We are exploring strategies to increase the amount of useful fuel made per ton of biomass harvested, other methods to make low-climate-impact hydrocarbon fuels, such as from carbon dioxide, and ways to make fuels that do not contain carbon at all, such as with hydrogen, ammonia, and other hydrogen carriers.

    These latter zero-carbon options free us from the need for biomass or to capture gigatons of carbon dioxide, so they could be a very good long-term solution, but they would require changing the vehicles significantly, and the construction of new refueling infrastructure, with high capital costs.

    Q: What are the scientific, technological, and regulatory barriers to scaling and implementing potential solutions?

    A: Reimagining an aviation, trucking, and shipping sector that connects the world and increases equity without creating more environmental damage is challenging because these vehicles must operate disconnected from the electrical grid and have energy requirements that cannot be met by batteries alone. Some of the concepts do not even exist in prototype yet, and none of the appealing options have been implemented at anywhere near the scale required.

    In most cases, we do not know the best way to make the fuel, and for new fuels the vehicles and refueling systems all need to be developed. Also, new fuels, or large-scale use of biomass, will introduce new environmental problems that need to be carefully considered, to ensure that decarbonization solutions do not introduce big new problems.

    Perhaps most difficult are the policy, economic, and equity issues. A new long-haul transportation system will be expensive, and everyone will be affected by the increased cost of shipping freight. To have the desired climate impact, the transport system must change in almost every country. During the transition period, we will need both the existing vehicle and fuel system to keep running smoothly, even as a new low-greenhouse system is introduced. We will also examine what policies could make that work and how we can get countries around the world to agree to implement them.

  •

    A better way to separate gases

    Industrial processes for chemical separations, including natural gas purification and the production of oxygen and nitrogen for medical or industrial uses, are collectively responsible for about 15 percent of the world’s energy use. They also contribute a corresponding amount to the world’s greenhouse gas emissions. Now, researchers at MIT and Stanford University have developed a new kind of membrane for carrying out these separation processes with roughly 1/10 the energy use and emissions.

    Using membranes for separation of chemicals is known to be much more efficient than processes such as distillation or absorption, but there has always been a tradeoff between permeability — how fast gases can penetrate through the material — and selectivity — the ability to let the desired molecules pass through while blocking all others. The new family of membrane materials, based on “hydrocarbon ladder” polymers, overcomes that tradeoff, providing both high permeability and extremely good selectivity, the researchers say.

    The findings are reported today in the journal Science, in a paper by Yan Xia, an associate professor of chemistry at Stanford; Zachary Smith, an assistant professor of chemical engineering at MIT; Ingo Pinnau, a professor at King Abdullah University of Science and Technology; and five others.

    Gas separation is an important and widespread industrial process whose uses include removing impurities and undesired compounds from natural gas or biogas, separating oxygen and nitrogen from air for medical and industrial purposes, separating carbon dioxide from other gases for carbon capture, and producing hydrogen for use as a carbon-free transportation fuel. The new ladder polymer membranes show promise for drastically improving the performance of such separation processes. For example, in separating carbon dioxide from methane, these new membranes have five times the selectivity and 100 times the permeability of existing cellulosic membranes for that purpose. Similarly, they are 100 times more permeable and three times as selective for separating hydrogen gas from methane.
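To make these comparisons concrete: ideal selectivity is simply the ratio of two gases' permeabilities through the same membrane. The sketch below applies the factors quoted above for CO2/CH4 separation; the baseline permeability numbers are placeholders for illustration, not measured values from the paper.

```python
def ideal_selectivity(perm_fast: float, perm_slow: float) -> float:
    """Ideal selectivity: ratio of the more-permeable gas's permeability
    to the less-permeable gas's, through the same membrane."""
    return perm_fast / perm_slow

# Placeholder permeabilities (in barrer) for a cellulosic CO2/CH4 membrane.
# These baseline values are illustrative only.
cellulosic = {"CO2": 10.0, "CH4": 0.5}
base_selectivity = ideal_selectivity(cellulosic["CO2"], cellulosic["CH4"])

# Apply the factors reported for the new ladder-polymer membranes:
# ~100x the CO2 permeability and ~5x the CO2/CH4 selectivity.
ladder_co2 = 100 * cellulosic["CO2"]
ladder_selectivity = 5 * base_selectivity
ladder_ch4 = ladder_co2 / ladder_selectivity

print(base_selectivity)    # 20.0
print(ladder_selectivity)  # 100.0
```

The point of the tradeoff the researchers describe is that, for conventional polymers, raising permeability usually lowers this selectivity ratio; the ladder polymers improve both at once.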

    The new type of polymers, developed over the last several years by the Xia lab, are referred to as ladder polymers because they are formed from double strands connected by rung-like bonds, and these linkages provide a high degree of rigidity and stability to the polymer material. These ladder polymers are synthesized via an efficient and selective chemistry the Xia lab developed called CANAL, an acronym for catalytic arene-norbornene annulation, which stitches readily available chemicals into ladder structures with hundreds or even thousands of rungs. The polymers are synthesized in a solution, where they form rigid and kinked ribbon-like strands that can easily be made into a thin sheet with sub-nanometer-scale pores by using industrially available polymer casting processes. The sizes of the resulting pores can be tuned through the choice of the specific hydrocarbon starting compounds. “This chemistry and choice of chemical building blocks allowed us to make very rigid ladder polymers with different configurations,” Xia says.

    To apply the CANAL polymers as selective membranes, the collaboration made use of Xia’s expertise in polymers and Smith’s specialization in membrane research. Holden Lai, a former Stanford doctoral student, carried out much of the development and exploration of how their structures impact gas permeation properties. “It took us eight years from developing the new chemistry to finding the right polymer structures that bestow the high separation performance,” Xia says.

    The Xia lab spent the past several years varying the structures of CANAL polymers to understand how their structures affect their separation performance. Surprisingly, they found that adding additional kinks to their original CANAL polymers significantly improved the mechanical robustness of their membranes and boosted their selectivity for molecules of similar sizes, such as oxygen and nitrogen gases, without losing permeability of the more permeable gas. The selectivity actually improves as the material ages. The combination of high selectivity and high permeability makes these materials outperform all other polymer materials in many gas separations, the researchers say.

    Today, 15 percent of global energy use goes into chemical separations, and these separation processes are “often based on century-old technologies,” Smith says. “They work well, but they have an enormous carbon footprint and consume massive amounts of energy. The key challenge today is trying to replace these nonsustainable processes.” Most of these processes require high temperatures for boiling and reboiling solutions, and these often are the hardest processes to electrify, he adds.

    For the separation of oxygen and nitrogen from air, the two molecules only differ in size by about 0.18 angstroms (ten-billionths of a meter), he says. To make a filter capable of separating them efficiently “is incredibly difficult to do without decreasing throughput.” But the new ladder polymers, when manufactured into membranes, produce tiny pores that achieve high selectivity, he says. In some cases, 10 oxygen molecules permeate for every nitrogen molecule, despite the razor-thin sieve needed to access this type of size selectivity. These new membrane materials have “the highest combination of permeability and selectivity of all known polymeric materials for many applications,” Smith says.
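The 0.18-angstrom figure matches the difference between the commonly cited kinetic diameters of oxygen (about 3.46 angstroms) and nitrogen (about 3.64 angstroms); these are textbook values from standard gas tables, not numbers taken from the paper:

```python
# Commonly cited kinetic diameters, in angstroms (1 angstrom = 1e-10 m).
# Their difference is the margin a size-selective pore has to resolve.
O2_KINETIC_DIAMETER = 3.46
N2_KINETIC_DIAMETER = 3.64

size_gap = round(N2_KINETIC_DIAMETER - O2_KINETIC_DIAMETER, 2)
print(size_gap)  # 0.18
```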

    “Because CANAL polymers are strong and ductile, and because they are soluble in certain solvents, they could be scaled for industrial deployment within a few years,” he adds. An MIT spinoff company called Osmoses, led by authors of this study, recently won the MIT $100K entrepreneurship competition and has been partly funded by The Engine to commercialize the technology.

    There are a variety of potential applications for these materials in the chemical processing industry, Smith says, including the separation of carbon dioxide from other gas mixtures as a form of emissions reduction. Another possibility is the purification of biogas fuel made from agricultural waste products in order to provide carbon-free transportation fuel. Hydrogen separation for producing a fuel or a chemical feedstock, could also be carried out efficiently, helping with the transition to a hydrogen-based economy.

    The close-knit team of researchers is continuing to refine the process to facilitate the development from laboratory to industrial scale, and to better understand the details on how the macromolecular structures and packing result in the ultrahigh selectivity. Smith says he expects this platform technology to play a role in multiple decarbonization pathways, starting with hydrogen separation and carbon capture, because there is such a pressing need for these technologies in order to transition to a carbon-free economy.

    “These are impressive new structures that have outstanding gas separation performance,” says Ryan Lively, an associate professor of chemical and biomolecular engineering at Georgia Tech, who was not involved in this work. “Importantly, this performance is improved during membrane aging and when the membranes are challenged with concentrated gas mixtures. … If they can scale these materials and fabricate membrane modules, there is significant potential practical impact.”

    The research team also included Jun Myun Ahn and Ashley Robinson at Stanford, Francesco Benedetti at MIT, now the chief executive officer at Osmoses, and Yingge Wang at King Abdullah University of Science and Technology in Saudi Arabia. The work was supported by the Stanford Natural Gas Initiative, the Sloan Research Fellowship, the U.S. Department of Energy Office of Basic Energy Sciences, and the National Science Foundation.


    Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

    This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

    Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas, including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

    In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

    Directed evolution of biological carbon fixation

    Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, requiring a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob hundreds of millions of subsistence farmers of their livelihoods, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan professor of neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, that aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

    Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

    A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

    Q: What partners will you need to accelerate the development of your solutions?

    A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

    Strategies to reduce atmospheric methane

    One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

    Q: What is the problem you are trying to solve and why is it a “grand challenge”?

    A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

    Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

    A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

    Deploying versatile carbon capture technologies and storage at scale

    There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

    Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

    A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

    New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector toward hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as studies of the human impacts of storage, including how public engagement and education can alter attitudes toward greater acceptance of carbon dioxide geologic storage.

    Q: What are the expected impacts of your proposed solution, both positive and negative?

    A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows for developing nations, which may need to adopt renewables over longer timescales, to see equitable economic development while avoiding the most harmful climate impacts. And, CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

    The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Despite CCS providing a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help.


    Microbes and minerals may have set off Earth’s oxygenation

    For the first 2 billion years of Earth’s history, there was barely any oxygen in the air. While some microbes were photosynthesizing by the latter part of this period, oxygen had not yet accumulated at levels that would impact the global biosphere.

    But somewhere around 2.3 billion years ago, this stable, low-oxygen equilibrium shifted, and oxygen began building up in the atmosphere, eventually reaching the life-sustaining levels we breathe today. This rapid infusion is known as the Great Oxygenation Event, or GOE. What triggered the event and pulled the planet out of its low-oxygen funk is one of the great mysteries of science.

    A new hypothesis, proposed by MIT scientists, suggests that oxygen finally started accumulating in the atmosphere thanks to interactions between certain marine microbes and minerals in ocean sediments. These interactions helped prevent oxygen from being consumed, setting off a self-amplifying process where more and more oxygen was made available to accumulate in the atmosphere.

    The scientists have laid out their hypothesis using mathematical and evolutionary analyses, showing that there were indeed microbes that existed before the GOE and evolved the ability to interact with sediment in the way that the researchers have proposed.

    Their study, appearing today in Nature Communications, is the first to connect the co-evolution of microbes and minerals to Earth’s oxygenation.

    “Probably the most important biogeochemical change in the history of the planet was oxygenation of the atmosphere,” says study author Daniel Rothman, professor of geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS). “We show how the interactions of microbes, minerals, and the geochemical environment acted in concert to increase oxygen in the atmosphere.”

    The study’s co-authors include lead author Haitao Shang, a former MIT graduate student, and Gregory Fournier, associate professor of geobiology in EAPS.

    A step up

    Today’s oxygen levels in the atmosphere are a stable balance between processes that produce oxygen and those that consume it. Prior to the GOE, the atmosphere maintained a different kind of equilibrium, with producers and consumers of oxygen in balance, but in a way that didn’t leave much extra oxygen for the atmosphere.

    What could have pushed the planet out of one stable, oxygen-deficient state to another stable, oxygen-rich state?

    “If you look at Earth’s history, it appears there were two jumps, where you went from a steady state of low oxygen to a steady state of much higher oxygen, once in the Paleoproterozoic, once in the Neoproterozoic,” Fournier notes. “These jumps couldn’t have been because of a gradual increase in excess oxygen. There had to have been some feedback loop that caused this step-change in stability.”

    He and his colleagues wondered whether such a positive feedback loop could have come from a process in the ocean that made some organic carbon unavailable to its consumers. Organic carbon is mainly consumed through oxidation, usually accompanied by the consumption of oxygen — a process by which microbes in the ocean use oxygen to break down organic matter, such as detritus that has settled in sediment. The team wondered: Could there have been some process by which the presence of oxygen stimulated its further accumulation?

    Shang and Rothman worked out a mathematical model that made the following prediction: If microbes possessed the ability to only partially oxidize organic matter, the partially-oxidized matter, or “POOM,” would effectively become “sticky,” and chemically bind to minerals in sediment in a way that would protect the material from further oxidation. The oxygen that would otherwise have been consumed to fully degrade the material would instead be free to build up in the atmosphere. This process, they found, could serve as a positive feedback, providing a natural pump to push the atmosphere into a new, high-oxygen equilibrium.
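As a toy illustration of how a feedback of this kind can produce two stable states (this is not the authors' model; the equation and parameters are invented purely for demonstration), consider a variable standing in for atmospheric oxygen with a self-amplifying production term and linear consumption:

```python
# Toy bistable dynamics: dx/dt = a + b*x^2/(1 + x^2) - c*x.
# The x^2/(1 + x^2) term mimics a positive feedback (more oxygen means
# more POOM protection, which frees still more oxygen); c*x is consumption.
# All parameters are illustrative, not fitted to any geochemical data.
def step(x, a=0.05, b=1.0, c=0.55, dt=0.01):
    return x + dt * (a + b * x**2 / (1 + x**2) - c * x)

def settle(x0, n=200_000):
    """Integrate forward (simple Euler) until the state stops moving."""
    x = x0
    for _ in range(n):
        x = step(x)
    return x

low = settle(0.01)   # starting near zero lands in a low-oxygen steady state
high = settle(3.0)   # starting high lands in a distinct high-oxygen steady state
print(round(low, 2), round(high, 2))
```

The same consumption law admits two attractors; which one the system reaches depends only on its starting point, the hallmark of the step-change behavior Fournier describes.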

    “That led us to ask, is there a microbial metabolism out there that produced POOM?” Fournier says.

    In the genes

    To answer this, the team searched through the scientific literature and identified a group of microbes that partially oxidizes organic matter in the deep ocean today. These microbes belong to the bacterial group SAR202, and their partial oxidation is carried out through an enzyme, Baeyer-Villiger monooxygenase, or BVMO.

    The team carried out a phylogenetic analysis to see how far back the microbe, and the gene for the enzyme, could be traced. They found that the bacteria did indeed have ancestors dating back before the GOE, and that the gene for the enzyme could be traced across various microbial species, as far back as pre-GOE times.

    What’s more, they found that the gene’s diversification, or the number of species that acquired the gene, increased significantly during times when the atmosphere experienced spikes in oxygenation, including once during the GOE in the Paleoproterozoic, and again in the Neoproterozoic.

    “We found some temporal correlations between diversification of POOM-producing genes, and the oxygen levels in the atmosphere,” Shang says. “That supports our overall theory.”

    To confirm this hypothesis will require far more follow-up, from experiments in the lab to surveys in the field, and everything in between. With their new study, the team has introduced a new suspect in the age-old case of what oxygenated Earth’s atmosphere.

    “Proposing a novel method, and showing evidence for its plausibility, is the first but important step,” Fournier says. “We’ve identified this as a theory worthy of study.”

    This work was supported in part by the mTerra Catalyst Fund and the National Science Foundation.


    How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels currently is estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.

    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the drop-off in energy output from the panels happens steeply at the very beginning of the process of dust accumulation and can easily reach a 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power, for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
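The revenue figure is consistent with a back-of-the-envelope estimate. The capacity factor and electricity price below are illustrative assumptions (they are not given in the article), chosen as typical values for utility-scale solar:

```python
# Back-of-envelope check of the ~$200,000/year loss for a 1 percent output
# drop at a 150 MW plant. Capacity factor and price are assumed, not quoted.
capacity_mw = 150
loss_fraction = 0.01
capacity_factor = 0.25    # assumed typical for utility-scale solar
price_per_kwh = 0.06      # assumed wholesale price, $/kWh
hours_per_year = 8760

lost_kwh = capacity_mw * 1000 * loss_fraction * capacity_factor * hours_per_year
lost_revenue = lost_kwh * price_per_kwh
print(round(lost_revenue))  # about $197,000, in line with the quoted $200,000
```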

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop electrostatics-based solutions, but these have relied on a layer called an electrodynamic screen, using interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful on a place like Mars, he says, where moisture is not an issue, even in desert environments on Earth this can be a serious problem.

    The new system they developed requires only an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge, applied to a transparent conductive layer just a few nanometers thick deposited on the glass covering of the solar panel, then repels the particles. By calculating the right voltage to apply, the researchers found a range sufficient to overcome the pull of gravity and adhesion forces and cause the dust to lift away.
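The lift-off condition can be sketched as a simple force balance: the electrostatic force on a charged particle must exceed its weight plus adhesion. Every number below is an illustrative assumption (the particle size, charge, and adhesion force are not taken from the paper):

```python
import math

# Sketch of the lift-off condition: electrostatic force q*E must exceed
# the particle's weight plus surface adhesion. All values are illustrative.
radius = 10e-6      # particle radius, m (assumed ~10-micron dust grain)
density = 2650.0    # kg/m^3, typical of silica sand
g = 9.81            # m/s^2
charge = 1e-14      # coulombs, assumed charge imparted by the electrode
adhesion = 5e-10    # newtons, assumed van der Waals adhesion

mass = (4.0 / 3.0) * math.pi * radius**3 * density
weight = mass * g

# Minimum field strength so that charge * E >= weight + adhesion.
e_min = (weight + adhesion) / charge
print(f"{e_min:.2e} V/m")
```

Under these assumed numbers the threshold field comes out in the tens of kilovolts per meter, which is why the applied voltage has to be calculated rather than guessed: too low and adhesion wins, needlessly high and the system wastes energy.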

    Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by Italian energy firm Eni S.p.A. through the MIT Energy Initiative.