More stories

  •

    Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

    This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

    Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas, including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

    In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

    Directed evolution of biological carbon fixation

    Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, forcing a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob hundreds of millions of subsistence farmers of their livelihoods, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan Professor of Neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, that aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

    Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

    A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

    Q: What partners will you need to accelerate the development of your solutions?

    A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

    Strategies to reduce atmospheric methane

    One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

    Q: What is the problem you are trying to solve and why is it a “grand challenge”?

    A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

    Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

    A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

    Deploying versatile carbon capture technologies and storage at scale

    There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

    Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

    A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

    New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector toward hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as research into the human impacts of storage, including how public engagement and education can build greater acceptance of carbon dioxide geologic storage.

    Q: What are the expected impacts of your proposed solution, both positive and negative?

    A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows developing nations, which may need to adopt renewables over longer timescales, to pursue equitable economic development while avoiding the most harmful climate impacts. And CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

    The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Although CCS provides a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help.

  •

    Microbes and minerals may have set off Earth’s oxygenation

    For the first 2 billion years of Earth’s history, there was barely any oxygen in the air. While some microbes were photosynthesizing by the latter part of this period, oxygen had not yet accumulated at levels that would impact the global biosphere.

    But somewhere around 2.3 billion years ago, this stable, low-oxygen equilibrium shifted, and oxygen began building up in the atmosphere, eventually reaching the life-sustaining levels we breathe today. This rapid infusion is known as the Great Oxygenation Event, or GOE. What triggered the event and pulled the planet out of its low-oxygen funk is one of the great mysteries of science.

    A new hypothesis, proposed by MIT scientists, suggests that oxygen finally started accumulating in the atmosphere thanks to interactions between certain marine microbes and minerals in ocean sediments. These interactions helped prevent oxygen from being consumed, setting off a self-amplifying process where more and more oxygen was made available to accumulate in the atmosphere.

    The scientists have laid out their hypothesis using mathematical and evolutionary analyses, showing that there were indeed microbes that existed before the GOE and evolved the ability to interact with sediment in the way that the researchers have proposed.

    Their study, appearing today in Nature Communications, is the first to connect the co-evolution of microbes and minerals to Earth’s oxygenation.

    “Probably the most important biogeochemical change in the history of the planet was oxygenation of the atmosphere,” says study author Daniel Rothman, professor of geophysics in MIT’s Department of Earth, Atmospheric, and Planetary Sciences (EAPS). “We show how the interactions of microbes, minerals, and the geochemical environment acted in concert to increase oxygen in the atmosphere.”

    The study’s co-authors include lead author Haitao Shang, a former MIT graduate student, and Gregory Fournier, associate professor of geobiology in EAPS.

    A step up

    Today’s oxygen levels in the atmosphere are a stable balance between processes that produce oxygen and those that consume it. Prior to the GOE, the atmosphere maintained a different kind of equilibrium, with producers and consumers of oxygen in balance, but in a way that didn’t leave much extra oxygen for the atmosphere.

    What could have pushed the planet out of one stable, oxygen-deficient state to another stable, oxygen-rich state?

    “If you look at Earth’s history, it appears there were two jumps, where you went from a steady state of low oxygen to a steady state of much higher oxygen, once in the Paleoproterozoic, once in the Neoproterozoic,” Fournier notes. “These jumps couldn’t have been because of a gradual increase in excess oxygen. There had to have been some feedback loop that caused this step-change in stability.”

    He and his colleagues wondered whether such a positive feedback loop could have come from a process in the ocean that made some organic carbon unavailable to its consumers. Organic carbon is mainly consumed through oxidation, usually accompanied by the consumption of oxygen — a process by which microbes in the ocean use oxygen to break down organic matter, such as detritus that has settled in sediment. The team wondered: Could there have been some process by which the presence of oxygen stimulated its further accumulation?

    Shang and Rothman worked out a mathematical model that made the following prediction: If microbes possessed the ability to only partially oxidize organic matter, the partially-oxidized matter, or “POOM,” would effectively become “sticky,” and chemically bind to minerals in sediment in a way that would protect the material from further oxidation. The oxygen that would otherwise have been consumed to fully degrade the material would instead be free to build up in the atmosphere. This process, they found, could serve as a positive feedback, providing a natural pump to push the atmosphere into a new, high-oxygen equilibrium.
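    The published model is more detailed, but the logic of such a feedback can be sketched with a one-variable toy system (entirely illustrative; the variable names, functional form, and parameter values below are our assumptions, not the authors' equations). Oxygen is produced at a basal rate and consumed linearly, while a POOM-like protection term lets rising oxygen shield more organic carbon from oxidation, freeing yet more oxygen. The result is two stable steady states, mirroring the low- and high-oxygen equilibria described above:

```python
# Toy bistable feedback model (illustrative only, not the study's equations).
# dO/dt = basal production + oxygen-stimulated "protection" term - linear loss.
def dO_dt(O, basal=0.05, feedback=1.0, K=1.0, loss=0.5):
    # feedback * O^2 / (K^2 + O^2): a saturating positive-feedback term,
    # standing in for POOM protecting organic carbon as oxygen rises.
    return basal + feedback * O**2 / (K**2 + O**2) - loss * O

def integrate(O0, dt=0.01, steps=50_000):
    """Simple forward-Euler integration to the steady state."""
    O = O0
    for _ in range(steps):
        O += dt * dO_dt(O)
    return O

low = integrate(0.1)   # starts below the unstable threshold -> low-O state
high = integrate(0.8)  # starts above it -> high-O state
print(low, high)
```

    Starting on either side of the unstable middle fixed point, the system settles into two very different oxygen levels, which is the step-change behavior the feedback is meant to explain.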

    “That led us to ask, is there a microbial metabolism out there that produced POOM?” Fournier says.

    In the genes

    To answer this, the team searched through the scientific literature and identified a group of microbes that partially oxidizes organic matter in the deep ocean today. These microbes belong to the bacterial group SAR202, and their partial oxidation is carried out through an enzyme, Baeyer-Villiger monooxygenase, or BVMO.

    The team carried out a phylogenetic analysis to see how far back the microbe, and the gene for the enzyme, could be traced. They found that the bacteria did indeed have ancestors dating back before the GOE, and that the gene for the enzyme could be traced across various microbial species, as far back as pre-GOE times.

    What’s more, they found that the gene’s diversification, or the number of species that acquired the gene, increased significantly during times when the atmosphere experienced spikes in oxygenation, including once during the GOE in the Paleoproterozoic era, and again in the Neoproterozoic.

    “We found some temporal correlations between diversification of POOM-producing genes, and the oxygen levels in the atmosphere,” Shang says. “That supports our overall theory.”

    To confirm this hypothesis will require far more follow-up, from experiments in the lab to surveys in the field, and everything in between. With their new study, the team has introduced a new suspect in the age-old case of what oxygenated Earth’s atmosphere.

    “Proposing a novel method, and showing evidence for its plausibility, is the first but important step,” Fournier says. “We’ve identified this as a theory worthy of study.”

    This work was supported in part by the mTerra Catalyst Fund and the National Science Foundation.

  •

    How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels is currently estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.

    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the dropoff of energy output from the panels happens steeply at the very beginning of the process of dust accumulation and can easily reach 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power, for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
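    The $200,000 figure follows from straightforward arithmetic once a capacity factor and an electricity price are assumed (both numbers below are our illustrative assumptions, not values from the paper):

```python
# Annual revenue lost to a 1 percent output reduction at a 150 MW solar plant.
# Capacity factor and electricity price are assumed for illustration.
capacity_mw = 150
capacity_factor = 0.25   # assumed fraction of nameplate output achieved
price_per_mwh = 61.0     # assumed average electricity price, $/MWh
loss_fraction = 0.01     # 1 percent output reduction from dust

annual_mwh = capacity_mw * capacity_factor * 8760  # 8760 hours per year
lost_revenue = annual_mwh * price_per_mwh * loss_fraction
print(f"${lost_revenue:,.0f}")  # on the order of $200,000 per year
```

    With these assumptions the plant earns roughly $20 million per year, so even a 1 percent dust penalty is a six-figure loss.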

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop electrostatics-based solutions, but these have relied on a layer called an electrodynamic screen, using interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful in a place like Mars, he says, where moisture is not an issue, even in desert environments on Earth moisture can be a serious problem.

    The new system they developed requires only an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge applied to a transparent conductive layer, just a few nanometers thick, deposited on the glass covering of the solar panel then repels the particles. By calculating the right voltage to apply, the researchers were able to find a voltage range sufficient to overcome the pull of gravity and adhesion forces, and cause the dust to lift away.
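    The liftoff condition is at heart a force balance: the electrostatic force on a charged particle must exceed gravity plus adhesion. A back-of-the-envelope sketch makes the scales concrete (every number below — particle size, density, acquired charge, adhesion force — is an assumption for illustration, not a value from the paper):

```python
import math

# Back-of-the-envelope liftoff threshold for a charged dust particle.
# The particle lifts off when q*E > m*g + F_adhesion.
g = 9.81                 # gravitational acceleration, m/s^2
radius = 10e-6           # 10-micron dust particle (assumed)
density = 2650.0         # quartz-like dust density, kg/m^3 (assumed)
charge = 1e-14           # charge acquired from the passing electrode, C (assumed)
adhesion = 5e-9          # surface adhesion force, N (assumed)

mass = density * (4 / 3) * math.pi * radius**3
E_threshold = (mass * g + adhesion) / charge  # field strength needed, V/m
print(f"{E_threshold:.2e} V/m")
```

    With these assumed values, adhesion dominates gravity for small particles, which is why the applied field, not the particle's weight, sets the design requirement.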

    Using specially prepared laboratory samples of dust with a range of particle sizes, experiments proved that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by Italian energy firm Eni S.p.A. through the MIT Energy Initiative.

  •

    Study: Ice flow is more sensitive to stress than previously thought

    The rate of glacier ice flow is more sensitive to stress than previously calculated, according to a new study by MIT researchers that upends a decades-old equation used to describe ice flow.

    Stress in this case refers to the forces acting on Antarctic glaciers, which are primarily influenced by gravity that drags the ice down toward lower elevations. Viscous glacier ice flows “really similarly to honey,” explains Joanna Millstein, a PhD student in the Glacier Dynamics and Remote Sensing Group and lead author of the study. “If you squeeze honey in the center of a piece of toast, and it piles up there before oozing outward, that’s the exact same motion that’s happening for ice.”

    The revision to the equation proposed by Millstein and her colleagues should improve models for making predictions about the ice flow of glaciers. This could help glaciologists predict how Antarctic ice flow might contribute to future sea level rise, although Millstein said the equation change is unlikely to raise estimates of sea level rise beyond the maximum levels already predicted under climate change models.

    “Almost all our uncertainties about sea level rise coming from Antarctica have to do with the physics of ice flow, though, so this will hopefully be a constraint on that uncertainty,” she says.

    Other authors on the paper, published in Nature Communications Earth and Environment, include Brent Minchew, the Cecil and Ida Green Career Development Professor in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, and Samuel Pegler, a university academic fellow at the University of Leeds.

    Benefits of big data

    The equation in question, called Glen’s Flow Law, is the most widely used equation to describe viscous ice flow. It was developed in 1958 by British scientist J.W. Glen, one of the few glaciologists working on the physics of ice flow in the 1950s, according to Millstein.

    With relatively few scientists working in the field, along with the remoteness and inaccessibility of most large glacier ice sheets, there were few attempts to calibrate Glen’s Flow Law outside the lab until recently. In the recent study, Millstein and her colleagues took advantage of a new wealth of satellite imagery over Antarctic ice shelves, the floating extensions of the continent’s ice sheet, to revise the stress exponent of the flow law.
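    Glen's Flow Law itself is compact: strain rate scales as stress raised to an exponent n, conventionally taken as 3. A small sketch shows why revising the exponent upward matters so much for sensitivity (the power-law form is standard; the prefactor and stresses below are illustrative, not calibrated values):

```python
# Glen's Flow Law: strain_rate = A * stress**n, where A bundles temperature,
# grain size, impurities, etc., and n is the stress exponent (classically 3).
def glen_strain_rate(stress, A=1.0, n=3):
    return A * stress**n

# Doubling the stress multiplies the flow rate by 2**n, so a higher exponent
# makes ice flow far more sensitive to the same change in stress:
print(glen_strain_rate(2) / glen_strain_rate(1))            # n=3 -> 8x
print(glen_strain_rate(2, n=4) / glen_strain_rate(1, n=4))  # n=4 -> 16x
```

    The same doubling of stress produces twice as large a speedup under the higher exponent, which is the sense in which the revised law implies ice shelves respond more strongly to changing forces.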

    “In 2002, this major ice shelf [Larsen B] collapsed in Antarctica, and all we have from that collapse is two satellite images that are a month apart,” she says. “Now, over that same area we can get [imagery] every six days.”

    The new analysis shows that “the ice flow in the most dynamic, fastest-changing regions of Antarctica — the ice shelves, which basically hold back and hug the interior of the continental ice — is more sensitive to stress than commonly assumed,” Millstein says. She’s optimistic that the growing record of satellite data will help capture rapid changes on Antarctica in the future, providing insights into the underlying physical processes of glaciers.   

    But stress isn’t the only thing that affects ice flow, the researchers note. Other parts of the flow law equation represent differences in temperature, ice grain size and orientation, and impurities and water contained in the ice — all of which can alter flow velocity. Factors like temperature could be especially important in understanding how ice flow impacts sea level rise in the future, Millstein says.

    Cracking under strain

    Millstein and colleagues are also studying the mechanics of ice sheet collapse, which involves different physical models than those used to understand the ice flow problem. “The cracking and breaking of ice is what we’re working on now, using strain rate observations,” Millstein says.

    The researchers use InSAR, radar images of the Earth’s surface collected by satellites, to observe deformations of the ice sheets that can be used to make precise measurements of strain. By observing areas of ice with high strain rates, they hope to better understand the rate at which crevasses and rifts propagate to trigger collapse.

    The research was supported by the National Science Foundation.

  •

    Using soap to remove micropollutants from water

    Imagine millions of soapy sponges the size of human cells that can clean water by soaking up contaminants. This simplified picture describes technology that MIT chemical engineers have recently developed to remove micropollutants from water — a concerning, worldwide problem.

    Patrick S. Doyle, the Robert T. Haslam Professor of Chemical Engineering, PhD student Devashish Pratap Gokhale, and undergraduate Ian Chen recently published their research on micropollutant removal in the journal ACS Applied Polymer Materials. The work is funded by MIT’s Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).

    In spite of their low concentrations (about 0.01–100 micrograms per liter), micropollutants can be hazardous to the ecosystem and to human health. They come from a variety of sources and have been detected in almost all bodies of water, says Gokhale. Pharmaceuticals passing through people and animals, for example, can end up as micropollutants in the water supply. Others, like the endocrine disruptor bisphenol A (BPA), can leach from plastics during industrial manufacturing. Pesticides, dyes, petrochemicals, and per- and polyfluoroalkyl substances, more commonly known as PFAS, are also examples of micropollutants, as are some heavy metals like lead and arsenic. All of these can be toxic to humans and animals over time, potentially causing cancer, organ damage, developmental defects, or other adverse effects.

    Micropollutants are numerous, but since their collective mass is small, they are difficult to remove from water. Currently, the most common practice for removing micropollutants from water is activated carbon adsorption. In this process, water passes through a carbon filter, removing only 30 percent of micropollutants. Activated carbon requires high temperatures to produce and regenerate, demanding specialized equipment and consuming large amounts of energy. Reverse osmosis can also be used to remove micropollutants from water; however, “it doesn’t lead to good elimination of this class of molecules, because of both their concentration and their molecular structure,” explains Doyle.

    Inspired by soap

    When devising their solution for how to remove micropollutants from water, the MIT researchers were inspired by a common household cleaning supply — soap. Soap cleans everything from our hands and bodies to dirty dishes to clothes, so perhaps the chemistry of soap could also be applied to sanitizing water. Soap has molecules called surfactants which have both hydrophobic (water-hating) and hydrophilic (water-loving) components. When water comes in contact with soap, the hydrophobic parts of the surfactant stick together, assembling into spherical structures called micelles with the hydrophobic portions of the molecules in the interior. The hydrophobic micelle cores trap and help carry away oily substances like dirt. 

    Doyle’s lab synthesized micelle-laden hydrogel particles to essentially cleanse water. Gokhale explains that they used microfluidics which “involve processing fluids on very small, micron-like scales” to generate uniform polymeric hydrogel particles continuously and reproducibly. These hydrogels, which are porous and absorbent, incorporate a surfactant, a photoinitiator (a molecule that creates reactive species), and a cross-linking agent known as PEGDA. The surfactant assembles into micelles that are chemically bonded to the hydrogel using ultraviolet light. When water flows through this micro-particle system, micropollutants latch onto the micelles and separate from the water. The physical interaction used in the system is strong enough to pull micropollutants from water, but weak enough that the hydrogel particles can be separated from the micropollutants, restabilized, and reused. Lab testing shows that both the speed and extent of pollutant removal increase when the amount of surfactant incorporated into the hydrogels is increased.

    “We’ve shown that in terms of rate of pullout, which is what really matters when you scale this up for industrial use, that with our initial format, we can already outperform the activated carbon,” says Doyle. “We can actually regenerate these particles very easily at room temperature. Nearly 10 regeneration cycles with minimal change in performance,” he adds.

    Regeneration of the particles occurs by soaking them in 90 percent ethanol, whereby “all the pollutants just come out of the particles and back into the ethanol,” says Gokhale. Ethanol is biosafe at low concentrations, inexpensive, and combustible, allowing for safe and economically feasible disposal. The recycling of the hydrogel particles makes this technology sustainable, a large advantage over activated carbon. The hydrogels can also be tuned to target any hydrophobic micropollutant, making this system a novel, flexible approach to water purification.

    Scaling up

    The team experimented in the lab using 2-naphthol, an organic pollutant of concern that is known to be difficult to remove with conventional water filtration methods. They hope to continue testing with real water samples.

    “Right now, we spike one micropollutant into pure lab water. We’d like to get water samples from the natural environment, that we can study and look at experimentally,” says Doyle. 

    By using microfluidics to increase particle production, Doyle and his lab hope to make household-scale filters to be tested with real wastewater. They then anticipate scaling up to municipal water treatment or even industrial wastewater treatment. 

    The lab recently filed an international patent application for their hydrogel technology that uses immobilized micelles. They plan to continue this work by experimenting with different kinds of hydrogels for the removal of heavy metal contaminants like lead from water. 

    Societal impacts

    Funded by an ongoing 2019 J-WAFS seed grant, this research has the potential to improve the speed, precision, efficiency, and environmental sustainability of water purification systems across the world.

    “I always wanted to do work which had a social impact, and I was also always interested in water, because I think it’s really cool,” says Gokhale. He notes, “it’s really interesting how water sort of fits into different kinds of fields … we have to consider the cultures of peoples, how we’re going to use this, and then just the equity of these water processes.” Originally from India, Gokhale says he’s seen places that have barely any water at all and others that have floods year after year. “There’s a lot of interesting work to be done, and I think it’s work in this area that’s really going to impact a lot of people’s lives in years to come,” Gokhale says.

    Doyle adds, “water is the most important thing, perhaps for the next decades to come, so it’s very fulfilling to work on something that is so important to the whole world.”

    Toward batteries that pack twice as much energy per pound

    In the endless quest to pack more energy into batteries without increasing their weight or volume, one especially promising technology is the solid-state battery. In these batteries, the usual liquid electrolyte that carries charges back and forth between the electrodes is replaced with a solid electrolyte layer. Such batteries could potentially not only deliver twice as much energy for their size but also virtually eliminate the fire hazard associated with today’s lithium-ion batteries.

    But one thing has held back solid-state batteries: Instabilities at the boundary between the solid electrolyte layer and the two electrodes on either side can dramatically shorten the lifetime of such batteries. Some studies have used special coatings to improve the bonding between the layers, but this adds the expense of extra coating steps in the fabrication process. Now, a team of researchers at MIT and Brookhaven National Laboratory has come up with a way of achieving results that equal or surpass the durability of the coated surfaces, but with no need for any coatings.

    The new method simply requires eliminating any carbon dioxide present during a critical manufacturing step called sintering, in which the battery materials are heated to create bonding between the cathode and electrolyte layers, which are made of ceramic compounds. Even though the amount of carbon dioxide present in air is vanishingly small, measured in parts per million, its effects turn out to be dramatic and detrimental. Carrying out the sintering step in pure oxygen creates bonds that match the performance of the best coated surfaces, without the extra cost of the coating, the researchers say.

    The findings are reported in the journal Advanced Energy Materials, in a paper by MIT doctoral student Younggyu Kim, professor of nuclear science and engineering and of materials science and engineering Bilge Yildiz, and Iradikanari Waluyo and Adrian Hunt at Brookhaven National Laboratory.

    “Solid-state batteries have been desirable for different reasons for a long time,” Yildiz says. “The key motivating points for solid batteries are they are safer and have higher energy density,” but they have been held back from large scale commercialization by two factors, she says: the lower conductivity of the solid electrolyte, and the interface instability issues.

    The conductivity issue has been effectively tackled, and reasonably high-conductivity materials have already been demonstrated, according to Yildiz. But overcoming the instabilities that arise at the interface has been far more challenging. These instabilities can occur during both the manufacturing and the electrochemical operation of such batteries, but for now the researchers have focused on the manufacturing, and specifically the sintering process.

    Sintering is needed because if the ceramic layers are simply pressed onto each other, the contact between them is far from ideal: there are far too many gaps, and the electrical resistance across the interface is high. Sintering, which is usually done at temperatures of 1,000 degrees Celsius or above for ceramic materials, causes atoms from each material to migrate into the other to form bonds. The team’s experiments showed that at temperatures anywhere above a few hundred degrees, detrimental reactions take place that increase the resistance at the interface — but only if carbon dioxide is present, even in tiny amounts. They demonstrated that avoiding carbon dioxide, and in particular maintaining a pure oxygen atmosphere during sintering, could create very good bonding at temperatures up to 700 degrees, with none of the detrimental compounds formed.

    The performance of the cathode-electrolyte interface made using this method, Yildiz says, was “comparable to the best interface resistances we have seen in the literature,” but those were all achieved using the extra step of applying coatings. “We are finding that you can avoid that additional fabrication step, which is typically expensive.”

    The potential gains in energy density that solid-state batteries provide come from the fact that they enable the use of pure lithium metal as one of the electrodes, which is much lighter than the lithium-infused graphite electrodes currently used.

    The team is now studying how these bonds hold up over the long run during battery cycling. Meanwhile, the new findings could potentially be applied rapidly to battery production, she says. “What we are proposing is a relatively simple process in the fabrication of the cells. It doesn’t add much energy penalty to the fabrication. So, we believe that it can be adopted relatively easily into the fabrication process,” and the added costs, they have calculated, should be negligible.

    Large companies such as Toyota are already at work commercializing early versions of solid-state lithium-ion batteries, and these new findings could quickly help such companies improve the economics and durability of the technology.

    The research was supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies. The team used facilities supported by the National Science Foundation and facilities at Brookhaven National Laboratory supported by the Department of Energy.

    MIT Center for Real Estate launches the Asia Real Estate Initiative

    To appreciate the explosive urbanization taking place in Asia, consider this analogy: Every 40 days, a city the equivalent size of Boston is built in Asia. Of the $24.7 trillion in real estate investment opportunities predicted by 2030 in emerging cities, $17.8 trillion (72 percent) will be in Asia. While this growth is exciting to the real estate industry, it also brings attendant social and environmental issues.

    To promote a sustainable and innovative approach to this growth, leadership at the MIT Center for Real Estate (MIT CRE) recently established the Asia Real Estate Initiative (AREI), which aims to become a platform for industry leaders, entrepreneurs, and the academic community to find solutions to the practical concerns of real estate development across these countries.

    “Behind the creation of this initiative is the understanding that Asia is a living lab for the study of future global urban development,” says Hashim Sarkis, dean of the MIT School of Architecture and Planning.

    An investment in cities of the future

    One of the areas in AREI’s scope of focus is connecting sustainability and technology in real estate.

    “We believe the real estate sector should work cooperatively with the energy, science, and technology sectors to solve the climate challenges,” says Richard Lester, the Institute’s associate provost for international activities. “AREI will engage academics and industry leaders, nongovernment organizations, and civic leaders globally and in Asia, to advance sharing knowledge and research.”

    In its effort to understand how trends and new technologies will impact the future of real estate, AREI has received initial support from a prominent alumnus of MIT CRE who wishes to remain anonymous. The gift will support a cohort of researchers working on innovative technologies applicable to advancing real estate sustainability goals, with a special focus on the global and Asia markets. The call for applications is already under way, with AREI seeking to collaborate with scholars who have backgrounds in economics, finance, urban planning, technology, engineering, and other disciplines.

    “The research on real estate sustainability and technology could transform this industry and help invent global real estate of the future,” says Professor Siqi Zheng, faculty director of MIT CRE and AREI faculty chair. “The pairing of real estate and technology often leads to innovative and differential real estate development strategies such as buildings that are green, smart, and healthy.”

    The initiative arrives at a key time to make a significant impact and cement a leadership role in real estate development across Asia. With nearly 40 years of pioneering research in the field, MIT CRE is positioned to help the industry increase its efficiency and social responsibility. Zheng, an established scholar with expertise in urban growth in fast-urbanizing regions, is the former president of the Asia Real Estate Society and sits on the board of the American Real Estate and Urban Economics Association. Her research has been supported by international institutions including the World Bank, the Asian Development Bank, and the Lincoln Institute of Land Policy.

    “The researchers in AREI are now working on three interrelated themes: the future of real estate and live-work-play dynamics; connecting sustainability and technology in real estate; and innovations in real estate finance and business,” says Zheng.

    The first theme has already yielded a book — “Toward Urban Economic Vibrancy: Patterns and Practices in Asia’s New Cities” — recently published by SA+P Press.

    Engaging thought leaders and global stakeholders

    AREI also plans to collaborate with counterparts in Asia to contribute to research, education, and industry dialogue to meet the challenges of sustainable city-making across the continent and identify areas for innovation. Traditionally, real estate has been a very local business with a lengthy value chain, according to Zhengzhen Tan, director of AREI. Most developers focused their career on one particular product type in one particular regional market. AREI is working to change that dynamic.

    “We want to create a cross-border dialogue within Asia and among Asia, North America, and European leaders to exchange knowledge and practices,” says Tan. “The real estate industry’s learning costs are very high compared to other sectors. Collective learning will reduce the cost of failure and have a significant impact on these global issues.”

    The 2021 United Nations Climate Change Conference in Glasgow shed additional light on environmental commitments being made by governments in Asia. With real estate representing 40 percent of global greenhouse gas emissions, the Asian real estate market is undergoing an urgent transformation to deliver on this commitment.

    “One of the most pressing calls is to get to net-zero emissions for real estate development and operation,” says Tan. “Real estate investors and developers are making short- and long-term choices that are locking in environmental footprints for the ‘decisive decade.’ We hope to inspire developers and investors to think differently and get out of their comfort zone.”

    New maps show airplane contrails over the U.S. dropped steeply in 2020

    As Covid-19’s initial wave crested around the world, travel restrictions and a drop in passengers led to a record number of grounded flights in 2020. The air travel reduction cleared the skies of not just jets but also the fluffy white contrails they produce high in the atmosphere.

    MIT engineers have mapped the contrails that were generated over the United States in 2020, and compared the results to prepandemic years. They found that on any given day in 2018, and again in 2019, contrails covered a total area equal to Massachusetts and Connecticut combined. In 2020, this contrail coverage shrank by about 20 percent, mirroring a similar drop in U.S. flights.  
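
The “Massachusetts and Connecticut combined” comparison can be sanity-checked against commonly cited total state areas (the figures below are approximate and include water area):

```python
# Approximate total areas (land + water) of the two states, in km^2
MASSACHUSETTS_KM2 = 27_336
CONNECTICUT_KM2 = 14_357

combined = MASSACHUSETTS_KM2 + CONNECTICUT_KM2
print(f"combined area: {combined:,} km^2")  # ~41,700 km^2
```

This is consistent with the roughly 43,000 square kilometers of average daily contrail coverage the researchers report later in the article.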

    While 2020’s contrail dip may not be surprising, the findings are proof that the team’s mapping technique works. Their study marks the first time researchers have captured the fine and ephemeral details of contrails over a large continental scale.

    Now, the researchers are applying the technique to predict where in the atmosphere contrails are likely to form. The cloud-like formations are known to play a significant role in aviation-related global warming. The team is working with major airlines to forecast regions in the atmosphere where contrails may form, and to reroute planes around these regions to minimize contrail production.

    “This kind of technology can help divert planes to prevent contrails, in real time,” says Steven Barrett, professor and associate head of MIT’s Department of Aeronautics and Astronautics. “There’s an unusual opportunity to halve aviation’s climate impact by eliminating most of the contrails produced today.”

    Barrett and his colleagues have published their results today in the journal Environmental Research Letters. His co-authors at MIT include graduate student Vincent Meijer, former graduate student Luke Kulik, research scientists Sebastian Eastham, Florian Allroggen, and Raymond Speth, and LIDS Director and professor Sertac Karaman.

    Trail training

    About half of the aviation industry’s contribution to global warming comes directly from planes’ carbon dioxide emissions. The other half is thought to be a consequence of their contrails. The signature white tails are produced when a plane’s hot, humid exhaust mixes with cool, humid air high in the atmosphere. Emitted in thin lines, contrails quickly spread out and can act as blankets that trap the Earth’s outgoing heat.

    While a single contrail may not have much of a warming effect, taken together contrails have a significant impact. But the estimates of this effect are uncertain and based on computer modeling as well as limited satellite data. What’s more, traditional computer vision algorithms that analyze contrail data have a hard time discerning the wispy tails from natural clouds.

    To precisely pick out and track contrails over a large scale, the MIT team looked to images taken by NASA’s GOES-16, a geostationary satellite that hovers over the same swath of the Earth, including the United States, taking continuous, high-resolution images.

    The team first obtained about 100 images taken by the satellite, and trained a set of people to interpret remote sensing data and label each image’s pixel as either part of a contrail or not. They used this labeled dataset to train a computer-vision algorithm to discern a contrail from a cloud or other image feature.
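
As a toy illustration of this labeling-and-training step, the sketch below fits a logistic-regression classifier on synthetic two-feature “pixels.” The features, their distributions, and the model are hypothetical stand-ins: the actual study used human-labeled GOES-16 imagery and a far more capable computer-vision algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic per-pixel features standing in for satellite channels
# (hypothetical): feature 0 ~ brightness-temperature difference,
# feature 1 ~ local "linearity" of the bright feature.
n = 500
contrail = rng.normal(loc=[2.0, 1.5], scale=0.5, size=(n, 2))
cloud = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(n, 2))
X = np.vstack([contrail, cloud])
y = np.concatenate([np.ones(n), np.zeros(n)])

# Logistic regression by plain gradient descent: the simplest possible
# stand-in for a learned contrail-vs-cloud pixel classifier.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(contrail)
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

pred = (1.0 / (1.0 + np.exp(-(X @ w + b)))) > 0.5
accuracy = np.mean(pred == y)
print(f"training accuracy: {accuracy:.3f}")
```

Because the two synthetic classes are well separated, even this minimal model classifies nearly all pixels correctly; the hard part in practice, as the article notes, is distinguishing wispy contrails from natural cirrus.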

    The researchers then ran the algorithm on about 100,000 satellite images, amounting to nearly 6 trillion pixels, each pixel representing an area of about 2 square kilometers. The images covered the contiguous U.S., along with parts of Canada and Mexico, and were taken about every 15 minutes, between Jan. 1, 2018, and Dec. 31, 2020.
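
The per-pixel arithmetic that follows from the classification can be sketched as below. The helper function is hypothetical, and whether the study takes a daily union of the roughly 15-minute snapshots or some other aggregate is an assumption here; only the ~2-square-kilometer pixel size comes from the article.

```python
import numpy as np

PIXEL_AREA_KM2 = 2.0  # each satellite pixel in the study covers ~2 km^2

def daily_coverage_km2(masks):
    """Total contrail area for one day, given per-snapshot boolean masks
    (True = pixel classified as contrail).

    Illustrative aggregation choice: union the snapshots so that each
    pixel is counted at most once per day.
    """
    day_mask = np.logical_or.reduce(masks)
    return day_mask.sum() * PIXEL_AREA_KM2

# Toy example: two 4x4 snapshots with overlapping contrail pixels
a = np.zeros((4, 4), dtype=bool); a[0, :3] = True
b = np.zeros((4, 4), dtype=bool); b[0, 1:4] = True
print(daily_coverage_km2([a, b]))  # 4 distinct pixels -> 8.0 km^2
```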

    The algorithm automatically classified each pixel as either a contrail or not a contrail, and generated daily maps of contrails over the United States. These maps mirrored the major flight paths of most U.S. airlines, with some notable differences. For instance, contrail “holes” appeared around major airports, which reflects the fact that planes landing and taking off around airports are generally not high enough in the atmosphere for contrails to form.

    “The algorithm knows nothing about where planes fly, and yet when processing the satellite imagery, it resulted in recognizable flight routes,” Barrett says. “That’s one piece of evidence that says this method really does capture contrails over a large scale.”

    Cloudy patterns

    Based on the algorithm’s maps, the researchers calculated the total area covered each day by contrails in the U.S. On an average day in 2018 and in 2019, U.S. contrails took up about 43,000 square kilometers. This coverage dropped by 20 percent in March of 2020 as the pandemic set in. From then on, contrails slowly reappeared as air travel resumed through the year.

    The team also observed daily and seasonal patterns. In general, contrails appeared to peak in the morning and decline in the afternoon. This may be a training artifact: As natural cirrus clouds are more likely to form in the afternoon, the algorithm may have trouble discerning contrails amid the clouds later in the day. But it might also be an important indication about when contrails form most. Contrails also peaked in late winter and early spring, when more of the air is naturally colder and more conducive for contrail formation.

    The team has now adapted the technique to predict where contrails are likely to form in real time. Avoiding these regions, Barrett says, could take a significant, almost immediate chunk out of aviation’s global warming contribution.  

    “Most measures to make aviation sustainable take a long time,” Barrett says. “(Contrail avoidance) could be accomplished in a few years, because it requires small changes to how aircraft are flown, with existing airplanes and observational technology. It’s a near-term way of reducing aviation’s warming by about half.”

    The team is now working toward this objective of large-scale contrail avoidance using real-time satellite observations.

    This research was supported in part by NASA and the MIT Environmental Solutions Initiative.