More stories

    Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

    This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

    Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas, including removing greenhouse gases from the atmosphere and identifying effective, economical solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

    In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

    Directed evolution of biological carbon fixation

    Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, driving a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob hundreds of millions of subsistence farmers of their livelihoods, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan Professor of Neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, which aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

    Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

    A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

    Q: What partners will you need to accelerate the development of your solutions?

    A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial production, and in carbon capture and storage.

    Strategies to reduce atmospheric methane

    One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

    Q: What is the problem you are trying to solve and why is it a “grand challenge”?

    A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

    Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

    A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

    Deploying versatile carbon capture technologies and storage at scale

    There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

    Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

    A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

    New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector toward hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as studies of the human impacts of storage, including how public engagement and education can alter attitudes toward greater acceptance of geologic storage of carbon dioxide.

    Q: What are the expected impacts of your proposed solution, both positive and negative?

    A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method of addressing emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows developing nations, which may need to adopt renewables over longer timescales, to pursue equitable economic development while avoiding the most harmful climate impacts. And CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

    The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Although CCS provides a well-understood pathway for removing carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and we welcome those who can help.

    Building communities, founding a startup with people in mind

    MIT postdoc Francesco Benedetti admits he wasn’t always a star student. But the people he met along his educational journey inspired him to strive, which led him to conduct research at MIT, launch a startup, and even lead the team that won the 2021 MIT $100K Entrepreneurship Competition. Now he is determined to make sure his company, Osmoses, succeeds in boosting the energy efficiency of traditional and renewable natural gas processing, hydrogen production, and carbon capture — thus helping to address climate change.

    “I can’t be grateful enough to MIT for bringing together a community of people who want to change the world,” Benedetti says. “Now we have a technology that can solve one of the big problems of our society.”

    Benedetti and his team have developed an innovative way to separate molecules using a membrane fine enough to extract impurities such as carbon dioxide or hydrogen sulfide from raw natural gas to obtain higher-quality fuel, fulfilling a crucial need in the energy industry. “Natural gas now provides about 40 percent of the energy used to power homes and industry in the United States,” Benedetti says. Using his team’s technology to upgrade natural gas more efficiently could reduce emissions of greenhouse gases while saving enough energy to power the equivalent of 7 million additional U.S. homes for a year, he adds.

    The MIT community

    Benedetti first came to MIT in 2017 as a visiting student from the University of Bologna in Italy, where he was working on membranes for gas separation for his PhD in chemical engineering. Having completed a master’s thesis on water desalination at the University of Texas (UT) at Austin, he connected with UT alumnus Zachary P. Smith, the Robert N. Noyce Career Development Professor of Chemical Engineering at MIT, and the two discovered they shared a vision. “We found ourselves very much aligned on the need for new technology in industry to lower the energy consumption of separating components,” Benedetti says.

    Although Benedetti had always been interested in making a positive impact on the world, particularly the environment, he says it was his university studies that first sparked his interest in more efficient separation technologies. “When you study chemical engineering, you understand hundreds of ways the field can have a positive impact in the world. But we learn very early that 15 percent of the world’s energy is wasted because of inefficient chemical separation — because we still rely on centuries-old technology,” he says. Most separation processes still use heat or toxic solvents to separate components, he explains.

    Still, Benedetti says, his main drive comes from the joy of working with terrific mentors and colleagues. “It’s the people I’ve met that really inspired me to tackle the biggest challenges and find that intrinsic motivation,” he says.

    To help build his community at MIT and provide support for international students, Benedetti co-founded the MIT Visiting Student Association (VISTA) in September 2017. By February 2018, the organization had hundreds of members and official Institute recognition. In May 2018, the group won two Institute awards, including the Golden Beaver Award for enhancing the campus environment. “VISTA gave me a sense of belonging; I loved it,” Benedetti says.

    Membrane technology

    Benedetti also published two papers on membrane research during his stint as a visiting student at MIT, so he was delighted to return in 2019 for postdoctoral work through the MIT Energy Initiative, where he was a 2019-20 ExxonMobil-MIT Energy Fellow. “I came back because the research was extremely exciting, but also because I got extremely passionate about the energy I found on campus and with the people,” he says.

    Returning to MIT enabled Benedetti to continue his work with Smith and Holden Lai, both of whom helped co-found Osmoses. Lai, a recent Stanford PhD in chemistry who was also a visiting student at MIT in 2018, is now the chief technology officer at Osmoses. Co-founder Katherine Mizrahi Rodriguez ’17, an MIT PhD candidate, joined the team more recently.

    Together, the Osmoses team has developed polymer membranes with microporosities capable of filtering gases by separating out molecules that differ by as little as a fraction of an angstrom — a unit of length equal to one hundred-millionth of a centimeter. “We can get up to five times higher selectivity than commercially available technology for methane upgrading, and we have observed this while operating the membranes in industrially relevant environments,” Benedetti says.
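
    As a rough illustration of what a selectivity figure like that means: membrane selectivity is conventionally expressed as the ratio of the permeabilities of the two gases being separated. The sketch below uses that textbook definition; the permeability values and the incumbent selectivity in the comment are hypothetical placeholders, not Osmoses data.

        # Minimal sketch of ideal membrane selectivity, defined as the ratio of
        # the fast gas's permeability to the slow gas's. The permeability
        # values below are hypothetical placeholders, not Osmoses measurements.

        def ideal_selectivity(perm_fast: float, perm_slow: float) -> float:
            """Ideal selectivity alpha = P_fast / P_slow (permeabilities in barrer)."""
            return perm_fast / perm_slow

        p_co2 = 1000.0  # assumed CO2 permeability, barrer
        p_ch4 = 25.0    # assumed CH4 permeability, barrer

        print(f"CO2/CH4 ideal selectivity: {ideal_selectivity(p_co2, p_ch4):.0f}")
        # "Five times higher selectivity" than a hypothetical incumbent with
        # alpha = 8 would correspond to alpha = 40 under this definition.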

    Today, methane upgrading — removing carbon dioxide (CO2) from raw natural gas to obtain a higher-grade fuel — is often accomplished using amine absorption, a process that uses toxic solvents to capture CO2 and burns methane to fuel the regeneration of those solvents for reuse. Using Osmoses’ filters would eliminate the need for such solvents while reducing CO2 emissions by up to 16 million metric tons per year in the United States alone, Benedetti says.

    The technology has a wide range of applications — in oxygen and nitrogen generation, hydrogen purification, and carbon capture, for example — but Osmoses plans to start with the $5 billion market for natural gas upgrading because the need to bring innovation and sustainability to that space is urgent, says Benedetti, who received guidance in bringing technology to market from MIT’s Deshpande Center for Technological Innovation. The Osmoses team has also received support from the MIT Sandbox Innovation Fund Program.

    The next step for the startup is to build an industrial-scale prototype, and Benedetti says the company got a huge boost toward that goal in May when it won the MIT $100K Entrepreneurship Competition, a student-run contest that has launched more than 160 companies since it began in 1990. Ninety teams began the competition by pitching their startup ideas; 20 received mentorship and development funding; then eight finalists presented business plans to compete for the $100,000 prize. “Because of this, we’re getting a lot of interest from venture capital firms, investors, companies, corporate funds, et cetera, that want to partner with us or to use our product,” he says. In June, the Osmoses team received a two-year Activate Fellowship, which will support moving its research to market; in October, it won the Northeast Regional and Carbon Sequestration Prizes at the Cleantech Open Accelerator; and in November, the team closed a $3 million pre-seed round of financing.

    FAIL!

    Naturally, Benedetti hopes Osmoses is on the path to success, but he wants everyone to know that there is no shame in failures that come from best efforts. He admits it took him three years longer than usual to finish his undergraduate and master’s degrees, and he says, “I have experienced the pressure you feel when society judges you like a book by its cover and how much a lack of inspired leaders and a supportive environment can kill creativity and the will to try.”

    That’s why in 2018 he, along with other MIT students and VISTA members, started FAIL!–Inspiring Resilience, an organization that provides a platform for sharing unfiltered stories and the lessons leaders have gleaned from failure. “We wanted to help de-stigmatize failure, appreciate vulnerabilities, and inspire humble leadership, eventually creating better communities,” Benedetti says. “If we can make failures, big and small, less intimidating and all-consuming, individuals with great potential will be more willing to take risks, think outside the box, and try things that may push new boundaries. In this way, more breakthrough discoveries are likely to follow, without compromising anyone’s mental health.”

    Benedetti says he will strive to create a supportive culture at Osmoses, because people are central to success. “What drives me every day is the people. I would have no story without the people around me,” he says. “The moment you lose touch with people, you lose the opportunity to create something special.”

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

    How to clean solar panels without water

    Solar power is expected to reach 10 percent of global power generation by the year 2030, and much of that is likely to be located in desert areas, where sunlight is abundant. But the accumulation of dust on solar panels or mirrors is already a significant issue — it can reduce the output of photovoltaic panels by as much as 30 percent in just one month — so regular cleaning is essential for such installations.

    But cleaning solar panels is currently estimated to use about 10 billion gallons of water per year — enough to supply drinking water for up to 2 million people. Attempts at waterless cleaning are labor-intensive and tend to cause irreversible scratching of the surfaces, which also reduces efficiency. Now, a team of researchers at MIT has devised a way of automatically cleaning solar panels, or the mirrors of solar thermal plants, in a waterless, no-contact system that could significantly reduce the dust problem, they say.

    The new system uses electrostatic repulsion to cause dust particles to detach and virtually leap off the panel’s surface, without the need for water or brushes. To activate the system, a simple electrode passes just above the solar panel’s surface, imparting an electrical charge to the dust particles, which are then repelled by a charge applied to the panel itself. The system can be operated automatically using a simple electric motor and guide rails along the side of the panel. The research is described today in the journal Science Advances, in a paper by MIT graduate student Sreedath Panat and professor of mechanical engineering Kripa Varanasi.

    Despite concerted efforts worldwide to develop ever more efficient solar panels, Varanasi says, “a mundane problem like dust can actually put a serious dent in the whole thing.” Lab tests conducted by Panat and Varanasi showed that the drop-off in energy output from the panels happens steeply at the very beginning of the process of dust accumulation and can easily reach a 30 percent reduction after just one month without cleaning. Even a 1 percent reduction in power for a 150-megawatt solar installation, they calculated, could result in a $200,000 loss in annual revenue. The researchers say that globally, a 3 to 4 percent reduction in power output from solar plants would amount to a loss of between $3.3 billion and $5.5 billion.
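
    That revenue figure can be sanity-checked with back-of-the-envelope arithmetic. The capacity factor and wholesale electricity price below are illustrative assumptions on my part, not values from the paper, but with plausible choices the arithmetic lands close to the quoted number.

        # Back-of-the-envelope check of the quoted revenue loss. The capacity
        # factor and electricity price are illustrative assumptions, not
        # values taken from the Science Advances paper.

        plant_mw = 150.0        # installation size quoted in the article
        power_loss = 0.01       # 1 percent output reduction from dust
        capacity_factor = 0.25  # assumed average output vs. nameplate
        price_per_mwh = 60.0    # assumed wholesale price, $/MWh
        hours_per_year = 8760

        lost_mwh = plant_mw * power_loss * capacity_factor * hours_per_year
        lost_revenue = lost_mwh * price_per_mwh
        print(f"Lost energy: {lost_mwh:,.0f} MWh/yr")
        print(f"Lost revenue: ${lost_revenue:,.0f}/yr")  # ~$197,000, near the quoted $200,000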

    “There is so much work going on in solar materials,” Varanasi says. “They’re pushing the boundaries, trying to gain a few percent here and there in improving the efficiency, and here you have something that can obliterate all of that right away.”

    Many of the largest solar power installations in the world, including ones in China, India, the U.A.E., and the U.S., are located in desert regions. The water used for cleaning these solar panels using pressurized water jets has to be trucked in from a distance, and it has to be very pure to avoid leaving behind deposits on the surfaces. Dry scrubbing is sometimes used but is less effective at cleaning the surfaces and can cause permanent scratching that also reduces light transmission.

    Water cleaning makes up about 10 percent of the operating costs of solar installations. The new system could potentially reduce these costs while improving the overall power output by allowing for more frequent automated cleanings, the researchers say.

    “The water footprint of the solar industry is mind boggling,” Varanasi says, and it will be increasing as these installations continue to expand worldwide. “So, the industry has to be very careful and thoughtful about how to make this a sustainable solution.”

    Other groups have tried to develop electrostatics-based solutions, but these have relied on a layer called an electrodynamic screen, using interdigitated electrodes. These screens can have defects that allow moisture in and cause them to fail, Varanasi says. While they might be useful in a place like Mars, he says, where moisture is not an issue, even in desert environments on Earth this can be a serious problem.

    The new system they developed requires only an electrode, which can be a simple metal bar, to pass over the panel, producing an electric field that imparts a charge to the dust particles as it goes. An opposite charge applied to a transparent conductive layer, just a few nanometers thick, deposited on the glass covering of the solar panel then repels the particles. By calculating the right voltage to apply, the researchers found a range sufficient to overcome the pull of gravity and adhesion forces and cause the dust to lift away.
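
    The underlying condition is a simple force balance: a particle lifts off when the electrostatic force exceeds gravity plus adhesion. The sketch below illustrates that balance; the particle size, density, adhesion multiplier, and the assumption that the particle charges to the Pauthenier saturation limit for a conductive sphere are all illustrative guesses, not numbers from the paper.

        import math

        # Rough force-balance sketch for electrostatic dust lift-off: the
        # particle detaches when the Coulomb force q*E exceeds gravity plus
        # adhesion. All numbers below are illustrative assumptions.

        g = 9.81              # m/s^2
        rho = 2650.0          # kg/m^3, assumed quartz-like dust density
        d = 30e-6             # m, assumed particle diameter
        eps0 = 8.854e-12      # F/m, vacuum permittivity

        r = d / 2
        mass = rho * (4 / 3) * math.pi * r**3
        f_gravity = mass * g
        f_adhesion = 5 * f_gravity  # assumed: adhesion dominates at this size

        # Assume the particle reaches the Pauthenier saturation charge for a
        # conductive sphere in field E: q_sat = 12*pi*eps0*r^2*E, so the lift
        # condition q_sat*E >= F gives E = sqrt(F / (12*pi*eps0*r^2)).
        def lift_field(f_required: float) -> float:
            return math.sqrt(f_required / (12 * math.pi * eps0 * r**2))

        E_needed = lift_field(f_gravity + f_adhesion)
        print(f"Particle mass: {mass:.2e} kg")
        print(f"Field needed to lift: {E_needed / 1e6:.2f} MV/m")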

    Using specially prepared laboratory samples of dust with a range of particle sizes, the researchers demonstrated that the process works effectively on a laboratory-scale test installation, Panat says. The tests showed that humidity in the air provided a thin coating of water on the particles, which turned out to be crucial to making the effect work. “We performed experiments at varying humidities from 5 percent to 95 percent,” Panat says. “As long as the ambient humidity is greater than 30 percent, you can remove almost all of the particles from the surface, but as humidity decreases, it becomes harder.”

    Varanasi says that “the good news is that when you get to 30 percent humidity, most deserts actually fall in this regime.” And even those that are typically drier than that tend to have higher humidity in the early morning hours, leading to dew formation, so the cleaning could be timed accordingly.

    “Moreover, unlike some of the prior work on electrodynamic screens, which actually do not work at high or even moderate humidity, our system can work at humidity even as high as 95 percent, indefinitely,” Panat says.

    In practice, at scale, each solar panel could be fitted with railings on each side, with an electrode spanning across the panel. A small electric motor, perhaps using a tiny portion of the output from the panel itself, would drive a belt system to move the electrode from one end of the panel to the other, causing all the dust to fall away. The whole process could be automated or controlled remotely. Alternatively, thin strips of conductive transparent material could be permanently arranged above the panel, eliminating the need for moving parts.

    By eliminating the dependency on trucked-in water, by eliminating the buildup of dust that can contain corrosive compounds, and by lowering the overall operational costs, such systems have the potential to significantly improve the overall efficiency and reliability of solar installations, Varanasi says.

    The research was supported by the Italian energy firm Eni S.p.A. through the MIT Energy Initiative.

    Q&A: Randolph Kirchain on how cool pavements can mitigate climate change

    As cities search for climate change solutions, many have turned to one burgeoning technology: cool pavements. By reflecting a greater proportion of solar radiation, cool pavements can offer an array of climate change mitigation benefits, from direct radiative forcing to reduced building energy demand.

    Yet, scientists from the MIT Concrete Sustainability Hub (CSHub) have found that cool pavements are not just a summertime solution. Here, Randolph Kirchain, a principal research scientist at CSHub, discusses how implementing cool pavements can offer myriad greenhouse gas reductions in cities — some of which occur even in the winter.

    Q: What exactly are cool pavements? 

    A: There are two ways to make a cool pavement: changing the pavement formulation to make the pavement porous like a sponge (a so-called “pervious pavement”), or paving with reflective materials. The latter method has been applied extensively because it can be easily adopted on the current road network with different traffic volumes while sustaining — and sometimes improving — the road longevity. To the average observer, surface reflectivity usually corresponds to the color of a pavement — the lighter, the more reflective. 

    We can quantify this surface reflectivity through a measurement called albedo, which refers to the percentage of light a surface reflects. Typically, a reflective pavement has an albedo of 0.3 or higher, meaning that it reflects 30 percent of the light it receives.

    To attain this reflectivity, there are a number of techniques at our disposal. The most common approach is to simply paint a brighter coating atop existing pavements. But it’s also possible to pave with materials that possess naturally greater reflectivity, such as concrete or lighter-colored binders and aggregates.

    Q: How can cool pavements mitigate climate change?

    A: Cool pavements generate several, often unexpected, effects. The most widely known is a reduction in surface and local air temperatures. This occurs because cool pavements absorb less radiation and, consequently, emit less of that radiation as heat. In the summer, this means they can lower urban air temperatures by several degrees Fahrenheit.

    By changing air temperatures or reflecting light into adjacent structures, cool pavements can also alter the need for heating and cooling in those structures, which can change their energy demand and, therefore, mitigate the climate change impacts associated with building energy demand.

    However, depending on how densely the neighborhood is built, a proportion of the radiation cool pavements reflect doesn’t strike buildings; instead, it travels back into the atmosphere and out into space. This process, called radiative forcing, shifts the Earth’s energy balance and effectively offsets some of the radiation trapped by greenhouse gases (GHGs).
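
    The scale of this effect can be sketched with simple arithmetic. In the sketch below, the annual-mean insolation and the fraction of reflected light that escapes to space are illustrative assumptions, not CSHub model outputs; only the albedo definition comes from the discussion above.

        # Simple arithmetic sketch of the radiative effect of brightening a
        # pavement. Insolation and escape fraction are assumed values.

        insolation = 200.0     # W/m^2, assumed annual-mean solar flux on the surface
        albedo_old = 0.10      # assumed value for aged asphalt
        albedo_new = 0.30      # a reflective pavement, per the definition above
        escape_fraction = 0.5  # assumed share of reflected light reaching space
                               # (lower in dense neighborhoods, where buildings
                               # intercept the reflected light)

        extra_reflected = (albedo_new - albedo_old) * insolation
        offset_to_space = extra_reflected * escape_fraction
        print(f"Extra reflected flux: {extra_reflected:.0f} W per m^2 of pavement")
        print(f"Effective offset to space: {offset_to_space:.0f} W/m^2")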

    Perhaps the least-known impact of cool pavements is on vehicle fuel consumption. Certain cool pavements, namely concrete, possess a combination of structural properties and longevity that can minimize the excess fuel consumption of vehicles caused by road quality. Over the lifetime of a pavement, these fuel savings can add up — often offsetting the higher initial footprint of paving with more durable materials.

    Q: With these impacts in mind, how do the effects of cool pavements vary seasonally and by location?

    A: Many view cool pavements as a solution to summer heat. But research has shown that they can offer climate change benefits throughout the year.

    On roads with high traffic volumes, the most prominent climate change benefit of cool pavements is not their reflectivity but their impact on vehicle fuel consumption. As such, cool pavement alternatives that minimize fuel consumption can continue to cut GHG emissions in winter, assuming traffic is constant.

    Even in winter, pavement reflectivity still contributes greatly to the climate change mitigation benefits of cool pavements. We found that roughly a third of the annual CO2-equivalent emissions reductions from the radiative forcing effects of cool pavements occurred in the fall and winter.

    It’s important to note, too, that the direction — not just the magnitude — of cool pavement impacts varies seasonally. The most prominent seasonal variation is the change in building energy demand. As they lower air temperatures, cool pavements can lessen the demand for cooling in buildings in the summer, while, conversely, they can cause buildings to consume more energy and generate more emissions due to heating in the winter.

    Interestingly, the radiation reflected by cool pavements can also strike adjacent buildings, heating them up. In the summer, this can increase building energy demand significantly, yet in the winter it can also warm structures and reduce their need for heating. In that sense, cool pavements can warm — as well as cool — their surroundings, depending on the building insolation [solar exposure] systems and neighborhood density.

    Q: How can cities manage these many impacts?

    A: As you can imagine, such different and often competing impacts can complicate the implementation of cool pavements. In some contexts, for instance, a cool pavement might even generate more emissions over its life than a conventional pavement — despite lowering air temperatures.

    To ensure that the lowest-emitting pavement is selected, then, cities should use a life-cycle perspective that considers all potential impacts. When they do, research has shown that they can reap sizeable benefits. The city of Phoenix, for instance, could see its projected emissions fall by as much as 6 percent, while Boston would experience a reduction of up to 3 percent.

    These benefits don’t just demonstrate the potential of cool pavements: they also reflect the outsized impact of pavements on our built environment and, moreover, our climate. As cities move to fight climate change, they should know that one of their most extensive assets also presents an opportunity for greater sustainability.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

    Toward batteries that pack twice as much energy per pound

    In the endless quest to pack more energy into batteries without increasing their weight or volume, one especially promising technology is the solid-state battery. In these batteries, the usual liquid electrolyte that carries charges back and forth between the electrodes is replaced with a solid electrolyte layer. Such batteries could potentially not only deliver twice as much energy for their size but also virtually eliminate the fire hazard associated with today’s lithium-ion batteries.

    But one thing has held back solid-state batteries: Instabilities at the boundary between the solid electrolyte layer and the two electrodes on either side can dramatically shorten the lifetime of such batteries. Some studies have used special coatings to improve the bonding between the layers, but this adds the expense of extra coating steps in the fabrication process. Now, a team of researchers at MIT and Brookhaven National Laboratory has come up with a way of achieving results that equal or surpass the durability of the coated surfaces, but with no need for any coatings.

    The new method simply requires eliminating any carbon dioxide present during a critical manufacturing step, called sintering, where the battery materials are heated to create bonding between the cathode and electrolyte layers, which are made of ceramic compounds. Even though the amount of carbon dioxide present in air is vanishingly small, measured in parts per million, its effects turn out to be dramatic and detrimental. Carrying out the sintering step in pure oxygen creates bonds that match the performance of the best coated surfaces, without the extra cost of the coating, the researchers say.

    The findings are reported in the journal Advanced Energy Materials, in a paper by MIT doctoral student Younggyu Kim, professor of nuclear science and engineering and of materials science and engineering Bilge Yildiz, and Iradikanari Waluyo and Adrian Hunt at Brookhaven National Laboratory.

    “Solid-state batteries have been desirable for different reasons for a long time,” Yildiz says. “The key motivating points for solid batteries are they are safer and have higher energy density,” but they have been held back from large-scale commercialization by two factors, she says: the lower conductivity of the solid electrolyte, and the interface instability issues.

    The conductivity issue has been effectively tackled, and reasonably high-conductivity materials have already been demonstrated, according to Yildiz. But overcoming the instabilities that arise at the interface has been far more challenging. These instabilities can occur during both the manufacturing and the electrochemical operation of such batteries, but for now the researchers have focused on the manufacturing, and specifically the sintering process.

    Sintering is needed because if the ceramic layers are simply pressed onto each other, the contact between them is far from ideal: there are far too many gaps, and the electrical resistance across the interface is high. Sintering, which is usually done at temperatures of 1,000 degrees Celsius or above for ceramic materials, causes atoms from each material to migrate into the other to form bonds. The team’s experiments showed that at temperatures anywhere above a few hundred degrees, detrimental reactions take place that increase the resistance at the interface — but only if carbon dioxide is present, even in tiny amounts. They demonstrated that avoiding carbon dioxide, and in particular maintaining a pure oxygen atmosphere during sintering, could create very good bonding at temperatures up to 700 degrees, with none of the detrimental compounds formed.

    The performance of the cathode-electrolyte interface made using this method, Yildiz says, was “comparable to the best interface resistances we have seen in the literature,” but those were all achieved using the extra step of applying coatings. “We are finding that you can avoid that additional fabrication step, which is typically expensive.”

    The potential gains in energy density that solid-state batteries provide come from the fact that they enable the use of pure lithium metal as one of the electrodes, which is much lighter than the currently used electrodes made of lithium-infused graphite.

    The team is now studying the next part of the performance of such batteries, which is how these bonds hold up over the long run during battery cycling. Meanwhile, the new findings could potentially be applied rapidly to battery production, she says. “What we are proposing is a relatively simple process in the fabrication of the cells. It doesn’t add much energy penalty to the fabrication. So, we believe that it can be adopted relatively easily into the fabrication process,” and the added costs, they have calculated, should be negligible.

    Large companies such as Toyota are already at work commercializing early versions of solid-state lithium-ion batteries, and these new findings could quickly help such companies improve the economics and durability of the technology.

    The research was supported by the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies. The team used facilities supported by the National Science Foundation and facilities at Brookhaven National Laboratory supported by the Department of Energy.

    Q&A: Climate Grand Challenges finalists on building equity and fairness into climate solutions

    Note: This is the first in a four-part interview series that will highlight the work of the Climate Grand Challenges finalists, ahead of the April announcement of several multiyear, flagship projects.

    The finalists in MIT’s first-ever Climate Grand Challenges competition each received $100,000 to develop bold, interdisciplinary research and innovation plans designed to attack some of the world’s most difficult and unresolved climate problems. The 27 teams are addressing four Grand Challenge problem areas: building equity and fairness into climate solutions; decarbonizing complex industries and processes; removing, managing, and storing greenhouse gases; and using data and science for improved climate risk forecasting.  

    In a conversation prepared for MIT News, faculty from three of the teams in the competition’s “Building equity and fairness into climate solutions” category share their thoughts on the need for inclusive solutions that prioritize disadvantaged and vulnerable populations, and discuss how they are working to accelerate their research to achieve the greatest impact. The following responses have been edited for length and clarity.

    The Equitable Resilience Framework

    Any effort to solve the most complex global climate problems must recognize the unequal burdens borne by different groups, communities, and societies — and should be equitable as well as effective. Janelle Knox-Hayes, associate professor in the Department of Urban Studies and Planning, leads a team that is developing processes and practices for equitable resilience, starting with a local pilot project in Boston over the next five years and extending to other cities and regions of the country. The Equitable Resilience Framework (ERF) is designed to create long-term economic, social, and environmental transformations by increasing the capacity of interconnected systems and communities to respond to a broad range of climate-related events. 

    Q: What is the problem you are trying to solve?

    A: Inequity is one of the severe impacts of climate change and resonates in both mitigation and adaptation efforts. It is important for climate strategies to address challenges of inequity and, if possible, to design strategies that enhance justice, equity, and inclusion, while also enhancing the efficacy of mitigation and adaptation efforts. Our framework offers a blueprint for how communities, cities, and regions can begin to undertake this work.

    Q: What are the most significant barriers that have impacted progress to date?

    A: There is considerable inertia in policymaking. Climate change requires a rethinking, not only of directives but of the pathways and techniques of policymaking. This is an obstacle and part of the reason our project was designed to scale up from local pilot projects. Another consideration is that the private sector can be more adaptive and nimble in its adoption of creative techniques. Working with the MIT Climate and Sustainability Consortium, we may find ways to modify the ERF to help companies address similar internal adaptation and resilience challenges.

    Protecting and enhancing natural carbon sinks

    Deforestation and forest degradation of strategic ecosystems in the Amazon, Central Africa, and Southeast Asia continue to reduce the capacity to capture and store carbon through natural systems and threaten even the most aggressive decarbonization plans. John Fernandez, professor in the Department of Architecture and director of the Environmental Solutions Initiative, reflects on his work with Daniela Rus, professor of electrical engineering and computer science and director of the Computer Science and Artificial Intelligence Laboratory, and Joann de Zegher, assistant professor of operations management at MIT Sloan, to protect tropical forests by deploying a three-part solution that integrates targeted technology breakthroughs, deep community engagement, and innovative bioeconomic opportunities.

    Q: Why is the problem you seek to address a “grand challenge”?

    A: We are trying to bring the latest technology to monitoring, assessing, and protecting tropical forests, as well as other carbon-rich and highly biodiverse ecosystems. This is a grand challenge because natural sinks around the world are threatening to release enormous quantities of stored carbon that could lead to runaway global warming. When combined with deep community engagement, particularly with indigenous and afro-descendant communities, this integrated approach promises to deliver substantially enhanced efficacy in conservation coupled to robust and sustainable local development.

    Q: What is known about this problem and what questions remain unanswered?

    A: Satellites, drones, and other technologies are acquiring more data about natural carbon sinks than ever before. The problem is well-described in certain locations such as the eastern Amazon, which has shifted from a net carbon sink to a net carbon emitter. It is also well-known that indigenous peoples are the most effective stewards of the ecosystems that store the greatest amounts of carbon. One of the key questions that remains to be answered is identifying the bioeconomy opportunities inherent in the natural wealth of tropical forests and other vital ecosystems, opportunities that are important to sustained protection and conservation.

    Reducing group-based disparities in climate adaptation

    Race, ethnicity, caste, religion, and nationality are often linked to vulnerability to the adverse effects of climate change; left unchecked, these disparities threaten to exacerbate long-standing inequities. A team led by Evan Lieberman, professor of political science and director of the MIT Global Diversity Lab and MIT International Science and Technology Initiatives, Danielle Wood, assistant professor in the Program in Media Arts and Sciences and the Department of Aeronautics and Astronautics, and Siqi Zheng, professor of urban and real estate sustainability in the Center for Real Estate and the Department of Urban Studies and Planning, is seeking to reduce ethnic and racial group-based disparities in the capacity of urban communities to adapt to the changing climate. Working with partners in nine coastal cities, they will measure the distribution of climate-related burdens and resiliency through satellites, a custom mobile app, and natural language processing of social media, to help design and test communication campaigns that provide accurate information about risks and remediation to impacted groups.

    Q: How has this problem evolved?

    A: Group-based disparities continue to intensify within and across countries, owing in part to some randomness in the location of adverse climate events, as well as deep legacies of unequal human development. In turn, economically and politically privileged groups routinely hoard resources for adaptation. In a few cases — notably the United States, Brazil, and with respect to climate-related migrancy, in South Asia — there has been a great deal of research documenting the extent of such disparities. However, we lack common metrics, and for the most part, such disparities are only understood where key actors have politicized the underlying problems. In much of the world, relatively vulnerable and excluded groups may not even be fully aware of the nature of the challenges they face or the resources they require.

    Q: Who will benefit most from your research? 

    A: The greatest beneficiaries will be members of those vulnerable groups who lack the resources and infrastructure to withstand adverse climate shocks. We believe that it will be important to develop solutions such that relatively privileged groups do not perceive them as punitive or zero-sum, but rather as long-term solutions for collective benefit that are both sound and just.

    Can the world meet global climate targets without coordinated global action?

    Like many of its predecessors, the 2021 United Nations Climate Change Conference (COP26) in Glasgow, Scotland, concluded with bold promises on international climate action aimed at keeping global warming well below 2 degrees Celsius, but few concrete plans to ensure that those promises will be kept. While it’s not too late for the Paris Agreement’s nearly 200 signatory nations to take concerted action to cap global warming at 2 C — if not 1.5 C — there is simply no guarantee that they will do so. If they fail, how much warming is the Earth likely to see in the 21st century and beyond?

    A new study by researchers at the MIT Joint Program on the Science and Policy of Global Change and the Shell Scenarios Team projects that without a globally coordinated mitigation effort to reduce greenhouse gas emissions, the planet’s average surface temperature will reach 2.8 C, much higher than the “well below 2 C” level to which the Paris Agreement aspires, but a lot lower than what many widely used “business-as-usual” scenarios project.  

    Recognizing the limitations of such scenarios, which generally assume that historical trends in energy technology choices and climate policy inaction will persist for decades to come, the researchers have designed a “Growing Pressures” scenario that accounts for mounting social, technological, business, and political pressures that are driving a transition away from fossil-fuel use and toward a low-carbon future. Such pressures have already begun to expand low-carbon technology and policy options, which, in turn, have escalated demand to utilize those options — a trend that’s expected to self-reinforce. Under this scenario, an array of future actions and policies cause renewable energy and energy storage costs to decline; fossil fuels to be phased out; electrification to proliferate; and emissions from agriculture and industry to be sharply reduced.

    Incorporating these growing pressures in the MIT Joint Program’s integrated model of Earth and human systems, the study’s co-authors project future energy use, greenhouse gas emissions, and global average surface temperatures in a world that fails to implement coordinated, global climate mitigation policies, and instead pursues piecemeal actions at mostly local and national levels.

    “Few, if any, previous studies explore scenarios of how piecemeal climate policies might plausibly unfold into the future and impact global temperature,” says MIT Joint Program research scientist Jennifer Morris, the study’s lead author. “We offer such a scenario, considering a future in which the increasingly visible impacts of climate change drive growing pressure from voters, shareholders, consumers, and investors, which in turn drives piecemeal action by governments and businesses that steer investments away from fossil fuels and toward low-carbon alternatives.”

    In the study’s central case (representing the mid-range climate response to greenhouse gas emissions), fossil fuels persist in the global energy mix through 2060 and then slowly decline toward zero by 2130; global carbon dioxide emissions reach near-zero levels by 2130 (total greenhouse gas emissions decline to near-zero by 2150); and global surface temperatures stabilize at 2.8 C by 2150, 2.5 C lower than a widely used “business-as-usual” projection. The results appear in the journal Environmental Economics and Policy Studies.

    Such a transition could bring the global energy system to near-zero emissions, but more aggressive climate action would be needed to keep global temperatures well below 2 C in alignment with the Paris Agreement.

    “While we fully support the need to decarbonize as fast as possible, it is critical to assess realistic alternative scenarios of world development,” says Joint Program Deputy Director Sergey Paltsev, a co-author of the study. “We investigate plausible actions that could bring society closer to the long-term goals of the Paris Agreement. To actually meet those goals will require an accelerated transition away from fossil energy through a combination of R&D, technology deployment, infrastructure development, policy incentives, and business practices.”

    The study was funded by government, foundation, and industrial sponsors of the MIT Joint Program, including Shell International Ltd.

    Using artificial intelligence to find anomalies hiding in massive datasets

    Identifying a malfunction in the nation’s power grid can be like trying to find a needle in an enormous haystack. Hundreds of thousands of interrelated sensors spread across the U.S. capture data on electric current, voltage, and other critical information in real time, often taking multiple recordings per second.

    Researchers at the MIT-IBM Watson AI Lab have devised a computationally efficient method that can automatically pinpoint anomalies in those data streams in real time. They demonstrated that their artificial intelligence method, which learns to model the interconnectedness of the power grid, is much better at detecting these glitches than some other popular techniques.

    Because the machine-learning model they developed does not require annotated data on power grid anomalies for training, it would be easier to apply in real-world situations where high-quality, labeled datasets are often hard to come by. The model is also flexible and can be applied to other situations where a vast number of interconnected sensors collect and report data, like traffic monitoring systems. It could, for example, identify traffic bottlenecks or reveal how traffic jams cascade.

    “In the case of a power grid, people have tried to capture the data using statistics and then define detection rules with domain knowledge to say that, for example, if the voltage surges by a certain percentage, then the grid operator should be alerted. Such rule-based systems, even empowered by statistical data analysis, require a lot of labor and expertise. We show that we can automate this process and also learn patterns from the data using advanced machine-learning techniques,” says senior author Jie Chen, a research staff member and manager of the MIT-IBM Watson AI Lab.

    The co-author is Enyan Dai, an MIT-IBM Watson AI Lab intern and graduate student at the Pennsylvania State University. This research will be presented at the International Conference on Learning Representations.

    Probing probabilities

    The researchers began by defining an anomaly as an event that has a low probability of occurring, like a sudden spike in voltage. They treat the power grid data as a probability distribution, so if they can estimate the probability densities, they can identify the low-density values in the dataset. The data points that are least likely to occur correspond to anomalies.

    Estimating those probabilities is no easy task, especially since each sample captures multiple time series, and each time series is a set of multidimensional data points recorded over time. Plus, the sensors that capture all that data are conditional on one another, meaning they are connected in a certain configuration and one sensor can sometimes impact others.

    To learn the complex conditional probability distribution of the data, the researchers used a special type of deep-learning model called a normalizing flow, which is particularly effective at estimating the probability density of a sample.

    They augmented that normalizing flow model using a type of graph, known as a Bayesian network, which can learn the complex, causal relationship structure between different sensors. This graph structure enables the researchers to see patterns in the data and estimate anomalies more accurately, Chen explains.

    “The sensors are interacting with each other, and they have causal relationships and depend on each other. So, we have to be able to inject this dependency information into the way that we compute the probabilities,” he says.

    This Bayesian network factorizes, or breaks down, the joint probability of the multiple time series data into less complex, conditional probabilities that are much easier to parameterize, learn, and evaluate. This allows the researchers to estimate the likelihood of observing certain sensor readings, and to identify those readings that have a low probability of occurring, meaning they are anomalies.

    Their method is especially powerful because this complex graph structure does not need to be defined in advance — the model can learn the graph on its own, in an unsupervised manner.
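
    To make the factorization idea concrete, here is a toy sketch in Python. It stands in for the paper’s approach with deliberate simplifications: the graph is fixed by hand rather than learned, and each conditional is a linear-Gaussian model rather than a normalizing flow. It illustrates only the core recipe of factorized density estimation plus low-likelihood flagging, not the authors’ model.

        import numpy as np

        # Toy sketch: model the joint density of three sensors as
        # p(x1) * p(x2|x1) * p(x3|x1,x2) with linear-Gaussian conditionals,
        # then flag low-log-likelihood readings as anomalies. The graph is
        # fixed here, whereas the paper's model learns it from data.

        rng = np.random.default_rng(0)

        # Simulated "normal" training data: x2 and x3 depend on their parents.
        n = 5000
        x1 = rng.normal(0.0, 1.0, n)
        x2 = 0.8 * x1 + rng.normal(0.0, 0.3, n)
        x3 = 0.5 * x1 - 0.4 * x2 + rng.normal(0.0, 0.2, n)
        X = np.column_stack([x1, x2, x3])

        def fit_conditional(y, parents):
            """Fit y ~ N(w . parents + b, sigma^2) by least squares."""
            A = np.column_stack([parents, np.ones(len(y))])
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            sigma = (y - A @ coef).std()
            return coef, sigma

        def gauss_logpdf(y, mean, sigma):
            return -0.5 * np.log(2 * np.pi * sigma**2) - (y - mean) ** 2 / (2 * sigma**2)

        mu1, s1 = X[:, 0].mean(), X[:, 0].std()
        c2, s2 = fit_conditional(X[:, 1], X[:, [0]])
        c3, s3 = fit_conditional(X[:, 2], X[:, [0, 1]])

        def log_density(x):
            """log p(x1) + log p(x2|x1) + log p(x3|x1,x2) for each row of x."""
            ones = np.ones((len(x), 1))
            lp = gauss_logpdf(x[:, 0], mu1, s1)
            lp = lp + gauss_logpdf(x[:, 1], np.hstack([x[:, [0]], ones]) @ c2, s2)
            lp = lp + gauss_logpdf(x[:, 2], np.hstack([x[:, [0, 1]], ones]) @ c3, s3)
            return lp

        # Flag anything rarer than the 0.1st percentile of the training data.
        threshold = np.quantile(log_density(X), 0.001)
        test = np.array([[0.1, 0.1, 0.0],     # consistent with learned structure
                         [0.1, 3.0, 0.0]])    # x2 wildly inconsistent with x1
        print(log_density(test) < threshold)  # expected: [False  True]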

    A powerful technique

    They tested this framework by seeing how well it could identify anomalies in power grid data, traffic data, and water system data. The datasets they used for testing contained anomalies that had been identified by humans, so the researchers were able to compare the anomalies their model identified with real glitches in each system.

    Their model outperformed all the baselines by detecting a higher percentage of true anomalies in each dataset.

    “For the baselines, a lot of them don’t incorporate graph structure. That perfectly corroborates our hypothesis. Figuring out the dependency relationships between the different nodes in the graph is definitely helping us,” Chen says.

    Their methodology is also flexible. Armed with a large, unlabeled dataset, they can tune the model to make effective anomaly predictions in other situations, like traffic patterns.

    Once the model is deployed, it would continue to learn from a steady stream of new sensor data, adapting to possible drift of the data distribution and maintaining accuracy over time, says Chen.

    Though this particular project is close to its end, he looks forward to applying the lessons he learned to other areas of deep-learning research, particularly on graphs.

    Chen and his colleagues could use this approach to develop models that map other complex, conditional relationships. They also want to explore how they can efficiently learn these models when the graphs become enormous, perhaps with millions or billions of interconnected nodes. And rather than finding anomalies, they could also use this approach to improve the accuracy of forecasts based on datasets or streamline other classification techniques.

    This work was funded by the MIT-IBM Watson AI Lab and the U.S. Department of Energy.