More stories

  • Q&A: Climate Grand Challenges finalists on using data and science to forecast climate-related risk

    Note: This is the final article in a four-part interview series featuring the work of the 27 MIT Climate Grand Challenges finalist teams, which received a total of $2.7 million in startup funding to advance their projects. This month, the Institute will name a subset of the finalists as multiyear flagship projects.

    Advances in computation, artificial intelligence, robotics, and data science are enabling a new generation of observational tools and scientific modeling with the potential to produce timely, reliable, and quantitative analysis of future climate risks at a local scale. These projections can increase the accuracy and efficacy of early warning systems, improve emergency planning, and provide actionable information for climate mitigation and adaptation efforts, as human actions continue to change planetary conditions.

    In conversations prepared for MIT News, faculty from four Climate Grand Challenges teams with projects in the competition’s “Using data and science to forecast climate-related risk” category describe the promising new technologies that can help scientists understand the Earth’s climate system on a finer scale than ever before. (The other Climate Grand Challenges research themes include building equity and fairness into climate solutions; removing, managing, and storing greenhouse gases; and decarbonizing complex industries and processes.) The following responses have been edited for length and clarity.

    An observational system that can initiate a climate risk forecasting revolution

    Despite recent technological advances and massive volumes of data, climate forecasts remain highly uncertain. Gaps in observational capabilities create substantial challenges to predicting extreme weather events and establishing effective mitigation and adaptation strategies. R. John Hansman, the T. Wilson Professor of Aeronautics and Astronautics and director of the MIT International Center for Air Transportation, discusses the Stratospheric Airborne Climate Observatory System (SACOS) being developed together with Brent Minchew, the Cecil and Ida Green Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), and a team that includes researchers from MIT Lincoln Laboratory and Harvard University.

    Q: How does SACOS reduce uncertainty in climate risk forecasting?

    A: There is a critical need for higher spatial and temporal resolution observations of the climate system than are currently available through remote (satellite or airborne) and surface (in-situ) sensing. We are developing an ensemble of high-endurance, solar-powered aircraft with instrument systems capable of performing months-long climate observing missions that satellites or aircraft alone cannot fulfill. Summer months are ideal for SACOS operations, as many key climate phenomena are active and short night periods reduce the battery mass, vehicle size, and technical risks. These observations hold the potential to inform and predict, allowing emergency planners, policymakers, and the rest of society to better prepare for the changes to come.

    Q: Describe the types of observing missions where SACOS could provide critical improvements.

    A: The demise of the Antarctic Ice Sheet, which is leading to rising sea levels around the world and threatening the displacement of millions of people, is one example. Current sea level forecasts struggle to account for giant fissures that create massive icebergs and cause the Antarctic Ice Sheet to flow more rapidly into the ocean. SACOS can track these fissures to accurately forecast ice slippage and give impacted populations enough time to prepare or evacuate. Elsewhere, widespread droughts cause rampant wildfires and water shortages. SACOS has the ability to monitor soil moisture and humidity in critically dry regions to identify where and when wildfires and droughts are imminent. SACOS also offers the most effective method to measure, track, and predict local ozone depletion over North America, which has resulted in increasingly severe summer thunderstorms.

    Quantifying and managing the risks of sea-level rise

    Prevailing estimates of sea-level rise range from approximately 20 centimeters to 2 meters by the end of the century, with the associated costs on the order of trillions of dollars. The instability of certain portions of the world’s ice sheets creates vast uncertainties, complicating how the world prepares for and responds to these potential changes. EAPS Professor Brent Minchew is leading another Climate Grand Challenges finalist team working on an integrated, multidisciplinary effort to improve the scientific understanding of sea-level rise and provide actionable information and tools to manage the risks it poses.

    Q: What have been the most significant challenges to understanding the potential rates of sea-level rise?

    A: West Antarctica is one of the most remote, inaccessible, and hostile places on Earth — to people and equipment. Thus, opportunities to observe the collapse of the West Antarctic Ice Sheet, which contains enough ice to raise global sea levels by about 3 meters, are limited, and current observations are crudely resolved. It is essential that we understand how the floating edges of the ice sheets, often called ice shelves, fracture and collapse, because they provide critical forces that govern the rate of ice mass loss and can stabilize the West Antarctic Ice Sheet.

    Q: How will your project advance what is currently known about sea-level rise?

    A: We aim to advance global-scale projections of sea-level rise through novel observational technologies and computational models of ice sheet change and to link those predictions to region- to neighborhood-scale estimates of costs and adaptation strategies. To do this, we propose two novel instruments: a first-of-its-kind drone that can fly for months at a time over Antarctica making continuous observations of critical areas and an airdropped seismometer and GPS bundle that can be deployed to vulnerable and hard-to-reach areas of the ice sheet. This technology will provide greater data quality and density and will observe the ice sheet at frequencies that are currently inaccessible — elements that are essential for understanding the physics governing the evolution of the ice sheet and sea-level rise.

    Changing flood risk for coastal communities in the developing world

    Globally, more than 600 million people live in low-elevation coastal areas that face an increasing risk of flooding from sea-level rise. This includes two-thirds of cities with populations of more than 5 million and regions that conduct the vast majority of global trade. Dara Entekhabi, the Bacardi and Stockholm Water Foundations Professor in the Department of Civil and Environmental Engineering and professor in the Department of Earth, Atmospheric, and Planetary Sciences, outlines an interdisciplinary partnership that leverages data and technology to guide short-term responses and chart long-term adaptation pathways, together with Miho Mazereeuw, associate professor of architecture and urbanism and director of the Urban Risk Lab in the School of Architecture and Planning, and Danielle Wood, assistant professor in the Program in Media Arts and Sciences and the Department of Aeronautics and Astronautics.

    Q: What is the key problem this program seeks to address?

    A: The accumulated heating of the Earth system due to fossil fuel burning is largely absorbed by the oceans, and the stored heat expands the ocean volume, leading to an increased base height for tides. When the high tides inundate a city, the condition is referred to as “sunny day” flooding, but the saline waters corrode infrastructure and wreak havoc on daily routines. The danger ahead for many coastal cities in the developing world is the combination of increasing high-tide intrusions and heavy precipitation storm events.

    Q: How will your proposed solutions impact flood risk management?

    A: We are producing detailed risk maps for coastal cities in developing countries using newly available, very high-resolution remote-sensing data from spaceborne instruments, as well as historical tide records and regional storm characteristics. Using these datasets, we aim to produce street-by-street risk maps that give local decision-makers and stakeholders a way to estimate present and future flood risks. With a model of future tides and probabilistic precipitation events, we can forecast inundation from a flooding event, decadal changes under various climate-change and sea-level rise projections, and the increase in the likelihood of sunny-day flooding. Working closely with local partners, we will develop toolkits to explore short-term emergency response, as well as long-term mitigation and adaptation techniques, in six pilot locations in South and Southeast Asia, Africa, and South America.
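
    To make the forecasting idea concrete, here is a minimal Monte Carlo sketch of how a tide model and probabilistic precipitation might be combined to estimate flood days for a single street segment. It is illustrative only; the elevations, tide statistics, and storm probabilities below are invented placeholders, not the team's data or model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical inputs for one street segment (all values are placeholders).
    ground_elevation_m = 1.8   # street elevation above current mean sea level
    sea_level_rise_m = 0.3     # assumed rise under one climate scenario
    n_days = 365 * 10_000      # Monte Carlo sample of daily conditions

    # Daily high tide: variability around the mean from tidal cycles and storms.
    high_tide_m = rng.normal(0.9, 0.25, n_days) + sea_level_rise_m

    # Rain-driven surcharge: most days none, occasionally a heavy storm.
    storm = rng.random(n_days) < 0.05
    rain_surcharge_m = np.where(storm, rng.exponential(0.2, n_days), 0.0)

    water_level_m = high_tide_m + rain_surcharge_m
    flood_days_per_year = 365 * np.mean(water_level_m > ground_elevation_m)
    print(f"expected flood days per year: {flood_days_per_year:.1f}")
    ```

    Repeating this calculation segment by segment, under different sea-level rise scenarios, is one way a street-by-street risk map of the kind described above could be assembled.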

    Ocean vital signs

    On average, every person on Earth generates fossil fuel emissions equivalent to an 8-pound bag of carbon every day. Much of this is absorbed by the ocean, but there is wide variability in the estimates of oceanic absorption, which translates into differences of trillions of dollars in the required cost of mitigation. In the Department of Earth, Atmospheric and Planetary Sciences, Christopher Hill, a principal research engineer specializing in Earth and planetary computational science, works with Ryan Woosley, a principal research scientist focusing on the carbon cycle and ocean acidification. Hill explains that they hope to use artificial intelligence and machine learning to help resolve this uncertainty.
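
    That figure is easy to sanity-check with round numbers. The sketch below assumes roughly 36 gigatons of fossil CO2 emissions per year and 8 billion people; both are assumptions used here for illustration, not the researchers' inputs.

    ```python
    # Back-of-envelope check of the "8-pound bag of carbon" figure, using
    # round numbers that are assumptions here, not the researchers' inputs.
    global_co2_tonnes_per_year = 36e9   # ~36 Gt of fossil CO2 per year
    carbon_fraction = 12 / 44           # mass of carbon per mass of CO2
    population = 8e9
    kg_per_lb = 0.4536

    kg_carbon_per_person_per_day = (
        global_co2_tonnes_per_year * 1000 * carbon_fraction / population / 365
    )
    print(kg_carbon_per_person_per_day / kg_per_lb)  # about 7.4 lb/day
    ```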

    Q: What is the current state of knowledge on air-sea interactions?

    A: Obtaining specific, accurate field measurements of critical physical, chemical, and biological exchanges between the ocean and the planet has historically entailed expensive science missions with large ship-based infrastructure that leave gaps in real-time data about significant ocean climate processes. Recent advances in highly scalable in-situ autonomous observing and navigation combined with airborne, remote sensing, and machine learning innovations have the potential to transform data gathering, provide more accurate information, and address fundamental scientific questions around air-sea interaction.

    Q: How will your approach accelerate real-time, autonomous surface ocean observing from an experimental research endeavor to a permanent and impactful solution?

    A: Our project seeks to demonstrate how a scalable surface ocean observing network can be launched and operated, and to illustrate how this can reduce uncertainties in estimates of air-sea carbon dioxide exchange. With an initial high-impact goal of substantially reducing the vast uncertainties that plague our understanding of ocean uptake of carbon dioxide, we will gather critical measurements for improving extended weather and climate forecast models and reducing climate impact uncertainty. The results have the potential to more accurately identify trillions of dollars’ worth of economic activity.

  • Improving predictions of sea level rise for the next century

    When we think of climate change, one of the most dramatic images that comes to mind is the loss of glacial ice. As the Earth warms, these enormous rivers of ice become a casualty of the rising temperatures. But, as ice sheets retreat, they also become an important contributor to one of the more dangerous outcomes of climate change: sea-level rise. At MIT, an interdisciplinary team of scientists is determined to improve sea level rise predictions for the next century, in part by taking a closer look at the physics of ice sheets.

    Last month, two research proposals on the topic, led by Brent Minchew, the Cecil and Ida Green Career Development Professor in the Department of Earth, Atmospheric and Planetary Sciences (EAPS), were announced as finalists in the MIT Climate Grand Challenges initiative. Launched in July 2020, Climate Grand Challenges fielded almost 100 project proposals from collaborators across the Institute who heeded the bold charge: to develop research and innovations that will deliver game-changing advances in the world’s efforts to address the climate challenge.

    As finalists, Minchew and his collaborators from the departments of Urban Studies and Planning, Economics, and Civil and Environmental Engineering, as well as the Haystack Observatory and external partners, received $100,000 to develop their research plans. A subset of the 27 proposals tapped as finalists will be announced next month, making up a portfolio of multiyear “flagship” projects receiving additional funding and support.

    One goal of both Minchew proposals is to more fully understand the most fundamental processes that govern rapid changes in glacial ice, and to use that understanding to build next-generation models that are more predictive of ice sheet behavior as the sheets respond to, and influence, climate change.

    “We need to develop more accurate and computationally efficient models that provide testable projections of sea-level rise over the coming decades. To do so quickly, we want to make better and more frequent observations and learn the physics of ice sheets from these data,” says Minchew. “For example, how much stress do you have to apply to ice before it breaks?”

    Currently, Minchew’s Glacier Dynamics and Remote Sensing group uses satellites to observe the ice sheets on Greenland and Antarctica primarily with interferometric synthetic aperture radar (InSAR). But the data are often collected over long intervals of time, which only gives them “before and after” snapshots of big events. By taking more frequent measurements on shorter time scales, such as hours or days, they can get a more detailed picture of what is happening in the ice.

    “Many of the key unknowns in our projections of what ice sheets are going to look like in the future, and how they’re going to evolve, involve the dynamics of glaciers, or our understanding of how the flow speed and the resistances to flow are related,” says Minchew.

    At the heart of the two proposals is the creation of SACOS, the Stratospheric Airborne Climate Observatory System. The group envisions developing solar-powered drones that can fly in the stratosphere for months at a time, taking more frequent measurements using a new lightweight, low-power radar and other high-resolution instrumentation. They also propose air-dropping sensors directly onto the ice, equipped with seismometers and GPS trackers to measure high-frequency vibrations in the ice and pinpoint the motions of its flow.

    How glaciers contribute to sea level rise

    Current climate models predict an increase in sea levels over the next century, but by just how much is still unclear. Estimates are anywhere from 20 centimeters to two meters, which is a large difference when it comes to enacting policy or mitigation. Minchew points out that response measures will be different, depending on which end of the scale it falls toward. If it’s closer to 20 centimeters, coastal barriers can be built to protect low-level areas. But with higher surges, such measures become too expensive and inefficient to be viable, as entire portions of cities and millions of people would have to be relocated.

    “If we’re looking at a future where we could get more than a meter of sea level rise by the end of the century, then we need to know about that sooner rather than later so that we can start to plan and to do our best to prepare for that scenario,” he says.

    There are two ways glaciers and ice sheets contribute to rising sea levels: direct melting of the ice and accelerated transport of ice to the oceans. In Antarctica, warming waters melt the margins of the ice sheets, which tends to reduce the resistive stresses and allow ice to flow more quickly to the ocean. This thinning can also cause the ice shelves to be more prone to fracture, facilitating the calving of icebergs — events which sometimes cause even further acceleration of ice flow.

    Using data collected by SACOS, Minchew and his group can better understand what material properties in the ice allow for fracturing and calving of icebergs, and build a more complete picture of how ice sheets respond to climate forces. 

    “What I want is to reduce and quantify the uncertainties in projections of sea level rise out to the year 2100,” he says.

    From that more complete picture, the team — which also includes economists, engineers, and urban planning specialists — can work on developing predictive models and methods to help communities and governments estimate the costs associated with sea level rise, develop sound infrastructure strategies, and spur engineering innovation.

    Understanding glacier dynamics

    More frequent radar measurements and the collection of higher-resolution seismic and GPS data will allow Minchew and the team to develop a better understanding of the broad category of glacier dynamics — including calving, an important process in setting the rate of sea level rise that is currently not well understood.

    “Some of what we’re doing is quite similar to what seismologists do,” he says. “They measure seismic waves following an earthquake, or a volcanic eruption, or things of this nature and use those observations to better understand the mechanisms that govern these phenomena.”

    Air-droppable sensors will help them collect information about ice sheet movement, but this method comes with drawbacks — like installation and maintenance, which are difficult out on a massive ice sheet that is moving and melting. Also, each instrument can only take measurements at a single location. Minchew equates it to a bobber in water: All it can tell you is how the bobber moves as the waves disturb it.

    But by also taking continuous radar measurements from the air, Minchew’s team can collect observations both in space and in time. Instead of just watching the bobber in the water, they can effectively make a movie of the waves propagating out, as well as visualize processes like iceberg calving happening in multiple dimensions.

    Once the bobbers are in place and the movies recorded, the next step is developing machine learning algorithms to help analyze all the new data being collected. While this data-driven kind of discovery has been a hot topic in other fields, this is the first time it has been applied to glacier research.

    “We’ve developed this new methodology to ingest this huge amount of data,” he says, “and from that create an entirely new way of analyzing the system to answer these fundamental and critically important questions.”

  • Q&A: Climate Grand Challenges finalists on accelerating reductions in global greenhouse gas emissions

    This is the second article in a four-part interview series highlighting the work of the 27 MIT Climate Grand Challenges finalists, which received a total of $2.7 million in startup funding to advance their projects. In April, the Institute will name a subset of the finalists as multiyear flagship projects.

    Last month, the Intergovernmental Panel on Climate Change (IPCC), an expert body of the United Nations representing 195 governments, released its latest scientific report on the growing threats posed by climate change, and called for drastic reductions in greenhouse gas emissions to avert the most catastrophic outcomes for humanity and natural ecosystems.

    Bringing the global economy to net-zero carbon dioxide emissions by midcentury is complex and demands new ideas and novel approaches. The first-ever MIT Climate Grand Challenges competition focuses on four problem areas, including removing greenhouse gases from the atmosphere and identifying effective, economic solutions for managing and storing these gases. The other Climate Grand Challenges research themes address using data and science to forecast climate-related risk, decarbonizing complex industries and processes, and building equity and fairness into climate solutions.

    In the following conversations prepared for MIT News, faculty from three of the teams working to solve “Removing, managing, and storing greenhouse gases” explain how they are drawing upon geological, biological, chemical, and oceanic processes to develop game-changing techniques for carbon removal, management, and storage. Their responses have been edited for length and clarity.

    Directed evolution of biological carbon fixation

    Agricultural demand is estimated to increase by 50 percent in the coming decades, while climate change is simultaneously projected to drastically reduce crop yield and predictability, requiring a dramatic acceleration of land clearing. Without immediate intervention, this will have dire impacts on wild habitat, rob hundreds of millions of subsistence farmers of their livelihoods, and create hundreds of gigatons of new emissions. Matthew Shoulders, associate professor in the Department of Chemistry, talks about the working group he is leading in partnership with Ed Boyden, the Y. Eva Tan Professor of Neurotechnology and Howard Hughes Medical Institute investigator at the McGovern Institute for Brain Research, that aims to massively reduce carbon emissions from agriculture by relieving core biochemical bottlenecks in the photosynthetic process using the most sophisticated synthetic biology available to science.

    Q: Describe the two pathways you have identified for improving agricultural productivity and climate resiliency.

    A: First, cyanobacteria grow millions of times faster than plants and dozens of times faster than microalgae. Engineering these cyanobacteria as a source of key food products using synthetic biology will enable food production using less land, in a fundamentally more climate-resilient manner. Second, carbon fixation, or the process by which carbon dioxide is incorporated into organic compounds, is the rate-limiting step of photosynthesis and becomes even less efficient under rising temperatures. Enhancements to Rubisco, the enzyme mediating this central process, will both improve crop yields and provide climate resilience to crops needed by 2050. Our team, led by Robbie Wilson and Max Schubert, has created new directed evolution methods tailored for both strategies, and we have already uncovered promising early results. Applying directed evolution to photosynthesis, carbon fixation, and food production has the potential to usher in a second green revolution.

    Q: What partners will you need to accelerate the development of your solutions?

    A: We have already partnered with leading agriculture institutes with deep experience in plant transformation and field trial capacity, enabling the integration of our improved carbon-dioxide-fixing enzymes into a wide range of crop plants. At the deployment stage, we will be positioned to partner with multiple industry groups to achieve improved agriculture at scale. Partnerships with major seed companies around the world will be key to leverage distribution channels in manufacturing supply chains and networks of farmers, agronomists, and licensed retailers. Support from local governments will also be critical where subsidies for seeds are necessary for farmers to earn a living, such as smallholder and subsistence farming communities. Additionally, our research provides an accessible platform that is capable of enabling and enhancing carbon dioxide sequestration in diverse organisms, extending our sphere of partnership to a wide range of companies interested in industrial microbial applications, including algal and cyanobacterial, and in carbon capture and storage.

    Strategies to reduce atmospheric methane

    One of the most potent greenhouse gases, methane is emitted by a range of human activities and natural processes that include agriculture and waste management, fossil fuel production, and changing land use practices — with no single dominant source. Together with a diverse group of faculty and researchers from the schools of Humanities, Arts, and Social Sciences; Architecture and Planning; Engineering; and Science; plus the MIT Schwarzman College of Computing, Desiree Plata, associate professor in the Department of Civil and Environmental Engineering, is spearheading the MIT Methane Network, an integrated approach to formulating scalable new technologies, business models, and policy solutions for driving down levels of atmospheric methane.

    Q: What is the problem you are trying to solve and why is it a “grand challenge”?

    A: Removing methane from the atmosphere, or stopping it from getting there in the first place, could change the rates of global warming in our lifetimes, saving as much as half a degree of warming by 2050. Methane sources are distributed in space and time and tend to be very dilute, making the removal of methane a challenge that pushes the boundaries of contemporary science and engineering capabilities. Because the primary sources of atmospheric methane are linked to our economy and culture — from clearing wetlands for cultivation to natural gas extraction and dairy and meat production — the social and economic implications of a fundamentally changed methane management system are far-reaching. Nevertheless, these problems are tractable and could significantly reduce the effects of climate change in the near term.

    Q: What is known about the rapid rise in atmospheric methane and what questions remain unanswered?

    A: Tracking atmospheric methane is a challenge in and of itself, but it has become clear that emissions are large, accelerated by human activity, and cause damage right away. While some progress has been made in satellite-based measurements of methane emissions, there is a need to translate that data into actionable solutions. Several key questions remain around improving sensor accuracy and sensor network design to optimize placement, improve response time, and stop leaks with autonomous controls on the ground. Additional questions involve deploying low-level methane oxidation systems and novel catalytic materials at coal mines, dairy barns, and other enriched sources; evaluating the policy strategies and the socioeconomic impacts of new technologies with an eye toward decarbonization pathways; and scaling technology with viable business models that stimulate the economy while reducing greenhouse gas emissions.

    Deploying versatile carbon capture technologies and storage at scale

    There is growing consensus that simply capturing current carbon dioxide emissions is no longer sufficient — it is equally important to target distributed sources such as the oceans and air where carbon dioxide has accumulated from past emissions. Betar Gallant, the American Bureau of Shipping Career Development Associate Professor of Mechanical Engineering, discusses her work with Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in the Department of Earth, Atmospheric and Planetary Sciences, and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering and director of the School of Chemical Engineering Practice, to dramatically advance the portfolio of technologies available for carbon capture and permanent storage at scale. (A team led by Assistant Professor Matěj Peč of EAPS is also addressing carbon capture and storage.)

    Q: Carbon capture and storage processes have been around for several decades. What advances are you seeking to make through this project?

    A: Today’s capture paradigms are costly, inefficient, and complex. We seek to address this challenge by developing a new generation of capture technologies that operate using renewable energy inputs, are sufficiently versatile to accommodate emerging industrial demands, are adaptive and responsive to varied societal needs, and can be readily deployed to a wider landscape.

    New approaches will require the redesign of the entire capture process, necessitating basic science and engineering efforts that are broadly interdisciplinary in nature. At the same time, incumbent technologies have been optimized largely for integration with coal- or natural gas-burning power plants. Future applications must shift away from legacy emitters in the power sector toward hard-to-mitigate sectors such as cement, iron and steel, chemical, and hydrogen production. It will become equally important to develop and optimize systems targeted for much lower concentrations of carbon dioxide, such as in oceans or air. Our effort will expand basic science studies as well as studies of the human impacts of storage, including how public engagement and education can alter attitudes toward greater acceptance of carbon dioxide geologic storage.

    Q: What are the expected impacts of your proposed solution, both positive and negative?

    A: Renewable energy cannot be deployed rapidly enough everywhere, nor can it supplant all emissions sources, nor can it account for past emissions. Carbon capture and storage (CCS) provides a demonstrated method to address emissions that will undoubtedly occur before the transition to low-carbon energy is completed. CCS can succeed even if other strategies fail. It also allows developing nations, which may need to adopt renewables over longer timescales, to see equitable economic development while avoiding the most harmful climate impacts. And CCS enables the future viability of many core industries and transportation modes, many of which do not have clear alternatives before 2050, let alone 2040 or 2030.

    The perceived risks of potential leakage and earthquakes associated with geologic storage can be minimized by choosing suitable geologic formations for storage. Despite CCS providing a well-understood pathway for removing enough of the carbon dioxide already emitted into the atmosphere, some environmentalists vigorously oppose it, fearing that CCS rewards oil companies and disincentivizes the transition away from fossil fuels. We believe that it is more important to keep in mind the necessity of meeting key climate targets for the sake of the planet, and welcome those who can help.

  • Q&A: Randolph Kirchain on how cool pavements can mitigate climate change

    As cities search for climate change solutions, many have turned to one burgeoning technology: cool pavements. By reflecting a greater proportion of solar radiation, cool pavements can offer an array of climate change mitigation benefits, from direct radiative forcing to reduced building energy demand.

    Yet, scientists from the MIT Concrete Sustainability Hub (CSHub) have found that cool pavements are not just a summertime solution. Here, Randolph Kirchain, a principal research scientist at CSHub, discusses how implementing cool pavements can offer myriad greenhouse gas reductions in cities — some of which occur even in the winter.

    Q: What exactly are cool pavements? 

    A: There are two ways to make a cool pavement: changing the pavement formulation to make the pavement porous like a sponge (a so-called “pervious pavement”), or paving with reflective materials. The latter method has been applied extensively because it can be easily adopted on the current road network across different traffic volumes while sustaining — and sometimes improving — road longevity. To the average observer, surface reflectivity usually corresponds to the color of a pavement — the lighter, the more reflective.

    We can quantify this surface reflectivity through a measurement called albedo, which refers to the percentage of light a surface reflects. Typically, a reflective pavement has an albedo of 0.3 or higher, meaning that it reflects 30 percent of the light it receives.
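
    As a quick illustration of what albedo means in energy terms, the sketch below compares the solar power reflected by an aged asphalt surface and a cool pavement. The irradiance and albedo values are typical assumed figures, not CSHub measurements.

    ```python
    # What albedo means in energy terms: reflected vs. absorbed solar power
    # for two surfaces. Irradiance and albedo values are typical assumptions,
    # not CSHub measurements.
    peak_solar_irradiance = 1000.0  # W/m^2, clear summer midday

    for surface, albedo in [("aged asphalt", 0.10), ("cool pavement", 0.30)]:
        reflected = albedo * peak_solar_irradiance
        absorbed = (1.0 - albedo) * peak_solar_irradiance
        print(f"{surface}: reflects {reflected:.0f} W/m^2, absorbs {absorbed:.0f} W/m^2")
    ```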

    To attain this reflectivity, there are a number of techniques at our disposal. The most common approach is to simply paint a brighter coating atop existing pavements. But it’s also possible to pave with materials that possess naturally greater reflectivity, such as concrete or lighter-colored binders and aggregates.

    Q: How can cool pavements mitigate climate change?

    A: Cool pavements generate several, often unexpected, effects. The most widely known is a reduction in surface and local air temperatures. This occurs because cool pavements absorb less radiation and, consequently, emit less of that radiation as heat. In the summer, this means they can lower urban air temperatures by several degrees Fahrenheit.

    By changing air temperatures or reflecting light into adjacent structures, cool pavements can also alter the need for heating and cooling in those structures, which can change their energy demand and, therefore, mitigate the climate change impacts associated with building energy demand.

    However, depending on how densely the neighborhood is built, a proportion of the radiation cool pavements reflect doesn’t strike buildings; instead, it travels back into the atmosphere and out into space. This process, called a radiative forcing, shifts the Earth’s energy balance and effectively offsets some of the radiation trapped by greenhouse gases (GHGs).

    Perhaps the least-known impact of cool pavements is on vehicle fuel consumption. Certain cool pavements, namely concrete, possess a combination of structural properties and longevity that can minimize the excess fuel consumption of vehicles caused by road quality. Over the lifetime of a pavement, these fuel savings can add up — often offsetting the higher initial footprint of paving with more durable materials.

    Q: With these impacts in mind, how do the effects of cool pavements vary seasonally and by location?

    A: Many view cool pavements as a solution to summer heat. But research has shown that they can offer climate change benefits throughout the year.

    On high-volume roads, the most prominent climate change benefit of cool pavements is not their reflectivity but their impact on vehicle fuel consumption. As such, cool pavement alternatives that minimize fuel consumption can continue to cut GHG emissions in winter, assuming traffic is constant.

    Even in winter, pavement reflectivity still contributes greatly to the climate change mitigation benefits of cool pavements. We found that roughly a third of the annual CO2-equivalent emissions reductions from the radiative forcing effects of cool pavements occurred in the fall and winter.

    It’s important to note, too, that the direction — not just the magnitude — of cool pavement impacts varies seasonally. The most prominent seasonal variation is the change in building energy demand. As they lower air temperatures, cool pavements can lessen the demand for cooling in buildings in the summer, while, conversely, they can cause buildings to consume more energy and generate more emissions due to heating in the winter.

    Interestingly, the radiation reflected by cool pavements can also strike adjacent buildings, heating them up. In the summer, this can increase building energy demand significantly, yet in the winter it can also warm structures and reduce their need for heating. In that sense, cool pavements can warm — as well as cool — their surroundings, depending on building insolation [solar exposure] and neighborhood density.

    Q: How can cities manage these many impacts?

    A: As you can imagine, such different and often competing impacts can complicate the implementation of cool pavements. In some contexts, for instance, a cool pavement might even generate more emissions over its life than a conventional pavement — despite lowering air temperatures.

    To ensure that the lowest-emitting pavement is selected, then, cities should use a life-cycle perspective that considers all potential impacts. When they do, research has shown that they can reap sizeable benefits. The city of Phoenix, for instance, could see its projected emissions fall by as much as 6 percent, while Boston would experience a reduction of up to 3 percent.
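
    A life-cycle comparison ultimately reduces to summing signed impact terms across the categories discussed above. The sketch below shows only the shape of that accounting; the numbers are placeholders, not the CSHub results behind the Phoenix and Boston figures.

    ```python
    # Shape of a life-cycle tally for one paving option. Categories follow the
    # interview; the numbers are placeholders (tons CO2e over the analysis
    # period; negative values are avoided emissions), not CSHub results.
    impacts = {
        "embodied (materials and construction)": +120.0,
        "radiative forcing offset from albedo": -40.0,
        "net building energy demand change": -10.0,
        "avoided excess vehicle fuel use": -55.0,
    }

    for category, value in impacts.items():
        print(f"{category:<40} {value:+.0f}")
    print(f"{'net life-cycle impact':<40} {sum(impacts.values()):+.0f}")
    ```

    The point of the exercise is that the sign of the net can flip with context, which is why a pavement that cools the air can still be the higher-emitting choice in some neighborhoods.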

    These benefits don’t just demonstrate the potential of cool pavements: they also reflect the outsized impact of pavements on our built environment and, moreover, our climate. As cities move to fight climate change, they should know that one of their most extensive assets also presents an opportunity for greater sustainability.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • Using nature’s structures in wooden buildings

    Concern about climate change has focused significant attention on the buildings sector, in particular on the extraction and processing of construction materials. The concrete and steel industries together are responsible for as much as 15 percent of global carbon dioxide emissions. In contrast, wood provides a natural form of carbon sequestration, so there’s a move to use timber instead. Indeed, some countries are calling for public buildings to be made at least partly from timber, and large-scale timber buildings have been appearing around the world.

    Observing those trends, Caitlin Mueller ’07, SM ’14, PhD ’14, an associate professor of architecture and of civil and environmental engineering in the Building Technology Program at MIT, sees an opportunity for further sustainability gains. As the timber industry seeks to produce wooden replacements for traditional concrete and steel elements, the focus is on harvesting the straight sections of trees. Irregular sections such as knots and forks are turned into pellets and burned, or ground up to make garden mulch, which will decompose within a few years; both approaches release the carbon trapped in the wood to the atmosphere.

    For the past four years, Mueller and her Digital Structures research group have been developing a strategy for “upcycling” those waste materials by using them in construction — not as cladding or finishes aimed at improving appearance, but as structural components. “The greatest value you can give to a material is to give it a load-bearing role in a structure,” she says. But when builders use virgin materials, those structural components are the most emissions-intensive parts of buildings due to their large volume of high-strength materials. Using upcycled materials in place of those high-carbon systems is therefore especially impactful in reducing emissions.

    Mueller and her team focus on tree forks — that is, spots where the trunk or branch of a tree divides in two, forming a Y-shaped piece. In architectural drawings, there are many similar Y-shaped nodes where straight elements come together. In such cases, those units must be strong enough to support critical loads.

    “Tree forks are naturally engineered structural connections that work as cantilevers in trees, which means that they have the potential to transfer force very efficiently thanks to their internal fiber structure,” says Mueller. “If you take a tree fork and slice it down the middle, you see an unbelievable network of fibers that are intertwining to create these often three-dimensional load transfer points in a tree. We’re starting to do the same thing using 3D printing, but we’re nowhere near what nature does in terms of complex fiber orientation and geometry.”

    She and her team have developed a five-step “design-to-fabrication workflow” that combines natural structures such as tree forks with the digital and computational tools now used in architectural design. While there’s long been a “craft” movement to use natural wood in railings and decorative features, the use of computational tools makes it possible to use wood in structural roles — without excessive cutting, which is costly and may compromise the natural geometry and internal grain structure of the wood.

    Given the wide use of digital tools by today’s architects, Mueller believes that her approach is “at least potentially scalable and potentially achievable within our industrialized materials processing systems.” In addition, by combining tree forks with digital design tools, the novel approach can also support the trend among architects to explore new forms. “Many iconic buildings built in the past two decades have unexpected shapes,” says Mueller. “Tree branches have a very specific geometry that sometimes lends itself to an irregular or nonstandard architectural form — driven not by some arbitrary algorithm but by the material itself.”

    Step 0: Find a source, set goals

    Before starting their design-to-fabrication process, the researchers needed to locate a source of tree forks. Mueller found help in the Urban Forestry Division of the City of Somerville, Massachusetts, which maintains a digital inventory of more than 2,000 street trees — including more than 20 species — and records information about the location, approximate trunk diameter, and condition of each tree.

    With permission from the forestry division, the team was on hand in 2018 when a large group of trees was cut down near the site of the new Somerville High School. Among the heavy equipment on site was a chipper, poised to turn all the waste wood into mulch. Instead, the workers obligingly put the waste wood into the researchers’ truck to be brought to MIT.

    In their project, the MIT team sought not only to upcycle that waste material but also to use it to create a structure that would be valued by the public. “Where I live, the city has had to take down a lot of trees due to damage from an invasive species of beetle,” Mueller explains. “People get really upset — understandably. Trees are an important part of the urban fabric, providing shade and beauty.” She and her team hoped to reduce that animosity by “reinstalling the removed trees in the form of a new functional structure that would recreate the atmosphere and spatial experience previously provided by the felled trees.”

    With their source and goals identified, the researchers were ready to demonstrate the five steps in their design-to-fabrication workflow for making spatial structures using an inventory of tree forks.

    Step 1: Create a digital material library

    The first task was to turn their collection of tree forks into a digital library. They began by cutting off excess material to produce isolated tree forks. They then created a 3D scan of each fork. Mueller notes that as a result of recent progress in photogrammetry (measuring objects using photographs) and 3D scanning, they could create high-resolution digital representations of the individual tree forks with relatively inexpensive equipment, even using apps that run on a typical smartphone.

    In the digital library, each fork is represented by a “skeletonized” version showing three straight bars coming together at a point. The relative geometry and orientation of the branches are of particular interest because they determine the internal fiber orientation that gives the component its strength.

    Step 2: Find the best match between the initial design and the material library

    Like a tree, a typical architectural design is filled with Y-shaped nodes where three straight elements meet up to support a critical load. The goal was therefore to match the tree forks in the material library with the nodes in a sample architectural design.

    First, the researchers developed a “mismatch metric” for quantifying how well the geometries of a particular tree fork aligned with a given design node. “We’re trying to line up the straight elements in the structure with where the branches originally were in the tree,” explains Mueller. “That gives us the optimal orientation for load transfer and maximizes use of the inherent strength of the wood fiber.” The poorer the alignment, the higher the mismatch metric.

    The goal was to get the best overall distribution of all the tree forks among the nodes in the target design. Therefore, the researchers needed to try different fork-to-node distributions and, for each distribution, add up the individual fork-to-node mismatch errors to generate an overall, or global, matching score. The distribution with the best matching score would produce the most structurally efficient use of the total tree fork inventory.

    Since performing that process manually would take far too long to be practical, they turned to the “Hungarian algorithm,” a technique developed in 1955 for solving such problems. “The brilliance of the algorithm is solving that [matching] problem very quickly,” Mueller says. She notes that it’s a very general-use algorithm. “It’s used for things like marriage match-making. It can be used any time you have two collections of things that you’re trying to find unique matches between. So, we definitely didn’t invent the algorithm, but we were the first to identify that it could be used for this problem.”
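
    The sketch below illustrates the idea under simplifying assumptions: each fork and each node is reduced to three branch direction vectors, the fork-to-node mismatch is taken to be the smallest total angle between matched branches (a stand-in for the team's actual metric, which is not detailed here), and SciPy's linear_sum_assignment solves the global assignment.

    ```python
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def unit(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    def mismatch(fork_dirs, node_dirs):
        # Illustrative stand-in for the researchers' mismatch metric: the
        # smallest total angle (radians) between the three fork branches and
        # the three members meeting at the node, over all pairings.
        angles = np.array([[np.arccos(np.clip(np.dot(unit(f), unit(n)), -1.0, 1.0))
                            for n in node_dirs] for f in fork_dirs])
        rows, cols = linear_sum_assignment(angles)
        return angles[rows, cols].sum()

    # Tiny hypothetical inventory: each fork and node is three direction vectors.
    forks = [
        [(0, 0, 1), (1, 0, 1), (-1, 0, 1)],
        [(0, 0, 1), (0, 1, 1), (0, -1, 1)],
    ]
    nodes = [
        [(0, 0, 1), (0.9, 0, 1.1), (-1.1, 0, 0.9)],
        [(0, 0, 1), (0, 1.2, 1), (0, -0.8, 1)],
    ]

    # Global step: cost[i][j] is the mismatch of fork i placed at node j; the
    # assignment solver returns the distribution with the best total score.
    cost = np.array([[mismatch(f, n) for n in nodes] for f in forks])
    fork_idx, node_idx = linear_sum_assignment(cost)
    print(list(zip(fork_idx, node_idx)), "total mismatch:", cost[fork_idx, node_idx].sum())
    ```

    Because the assignment step runs in a fraction of a second at this scale, it can be re-run every time the design geometry changes, which is what makes the interactive morphing described in Step 3 below practical.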

    The researchers performed repeated tests to show possible distributions of the tree forks in their inventory and found that the matching score improved as the number of forks available in the material library increased — up to a point. In general, the researchers concluded that the mismatch score was lowest, and thus best, when there were about three times as many forks in the material library as there were nodes in the target design.

    Step 3: Balance designer intention with structural performance

    The next step in the process was to incorporate the intention or preference of the designer. To permit that flexibility, each design includes a limited number of critical parameters, such as bar length and bending strain. Using those parameters, the designer can manually change the overall shape, or geometry, of the design or can use an algorithm that automatically changes, or “morphs,” the geometry. And every time the design geometry changes, the Hungarian algorithm recalculates the optimal fork-to-node matching.

    “Because the Hungarian algorithm is extremely fast, all the morphing and the design updating can be really fluid,” notes Mueller. In addition, any change to a new geometry is followed by a structural analysis that checks the deflections, strain energy, and other performance measures of the structure. On occasion, the automatically generated design that yields the best matching score may deviate far from the designer’s initial intention. In such cases, an alternative solution can be found that satisfactorily balances the design intention with a low matching score.

    Step 4: Automatically generate the machine code for fast cutting

    When the structural geometry and distribution of tree forks have been finalized, it’s time to think about actually building the structure. To simplify assembly and maintenance, the researchers prepare the tree forks by recutting their end faces to better match adjoining straight timbers and cutting off any remaining bark to reduce susceptibility to rot and fire.

    To guide that process, they developed a custom algorithm that automatically computes the cuts needed to make a given tree fork fit into its assigned node and to strip off the bark. The goal is to remove as little material as possible but also to avoid a complex, time-consuming machining process. “If we make too few cuts, we’ll cut off too much of the critical structural material. But we don’t want to make a million tiny cuts because it will take forever,” Mueller explains.

    The team uses facilities at the Autodesk Boston Technology Center Build Space, where the robots are far larger than any at MIT and the processing is all automated. To prepare each tree fork, they mount it on a robotic arm that pushes the joint through a traditional band saw in different orientations, guided by computer-generated instructions. The robot also mills all the holes for the structural connections. “That’s helpful because it ensures that everything is aligned the way you expect it to be,” says Mueller.

    Step 5: Assemble the available forks and linear elements to build the structure

    The final step is to assemble the structure. The tree-fork-based joints are all irregular, and combining them with the precut, straight wooden elements could be difficult. However, they’re all labeled. “All the information for the geometry is embedded in the joint, so the assembly process is really low-tech,” says Mueller. “It’s like a child’s toy set. You just follow the instructions on the joints to put all the pieces together.”

    They installed their final structure temporarily on the MIT campus, but Mueller notes that it was only a portion of the structure they plan to eventually build. “It had 12 nodes that we designed and fabricated using our process,” she says, adding that the team’s work was “a little interrupted by the pandemic.” As activity on campus resumes, the researchers plan to finish designing and building the complete structure, which will include about 40 nodes and will be installed as an outdoor pavilion on the site of the felled trees in Somerville.

    In addition, they will continue their research. Plans include working with larger material libraries, some with multibranch forks, and replacing their 3D-scanning technique with computerized tomography scanning technologies that can automatically generate a detailed geometric representation of a tree fork, including its precise fiber orientation and density. And in a parallel project, they’ve been exploring using their process with other sources of materials, with one case study focusing on using material from a demolished wood-framed house to construct more than a dozen geodesic domes.

    To Mueller, the work to date already provides new guidance for the architectural design process. With digital tools, it has become easy for architects to analyze the embodied carbon or future energy use of a design option. “Now we have a new metric of performance: How well am I using available resources?” she says. “With the Hungarian algorithm, we can compute that metric basically in real time, so we can work rapidly and creatively with that as another input to the design process.”

    This research was supported by MIT’s School of Architecture and Planning via the HASS Award.

    This article appears in the Autumn 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • MIT ReACT welcomes first Afghan cohort to its largest-yet certificate program

    Through the championing support of the faculty and leadership of the MIT Afghan Working Group convened last September by Provost Martin Schmidt and chaired by Associate Provost for International Activities Richard Lester, MIT has come together to support displaced Afghan learners and scholars in a time of crisis. The MIT Refugee Action Hub (ReACT) has opened opportunities for 25 talented Afghan learners to participate in the hub’s certificate program in computer and data science (CDS), now in its fourth year, welcoming its largest and most diverse cohort to date — 136 learners from 29 countries.

    “Even in the face of extreme disruption, education and scholarship must continue, and MIT is committed to providing resources and safe forums for displaced scholars,” says Lester. “We greatly appreciate MIT ReACT’s work to create learning opportunities for Afghan students whose lives have been upended by the crisis in their homeland.”

    Currently, more than 3.5 million Afghans are internally displaced, while 2.5 million are registered refugees residing in other parts of the world. With millions in Afghanistan facing famine, poverty, and civil unrest in what has become the world’s largest humanitarian crisis, the United Nations predicts the number of Afghans forced to flee their homes will continue to rise. 

    “Forced displacement is on the rise, fueled not only by constant political, economic, and social turmoil worldwide, but also by the ongoing climate change crisis, which threatens costly disruptions to society and has the potential to create unprecedented displacement internationally,” says associate professor of civil and environmental engineering and ReACT faculty founder Admir Masic. During the orientation for the new CDS cohort in January, Masic emphasized the great need for educational programs like ReACT’s that address the specific challenges refugees and displaced learners face.

    A former Bosnian refugee, Masic spent his teenage years in Croatia, where educational opportunities were limited for young people with refugee status. His experience motivated him to found ReACT, which launched in 2017. Housed within Open Learning, ReACT is an MIT-wide effort to deliver global education and professional development programs to underserved communities, including refugees and migrants. ReACT’s signature program, CDS, is a yearlong online program that combines MITx courses in programming and data science, personal and professional development workshops including MIT Bootcamps, and opportunities for practical experience.

    ReACT’s group of 25 learners from Afghanistan, 52 percent of whom are women, joins the larger CDS cohort in the program. They will receive support from their new colleagues as well as members of ReACT’s mentor and alumni network. While the majority of the group are residing around the world, including in Europe, North America, and neighboring countries, several still remain in Afghanistan. With the support of the Afghan Working Group, ReACT is working to connect with communities from the region to provide safe and inclusive learning environments for the cohort.

    Building community and confidence

    Selected from more than 1,000 applicants, the new CDS cohort reflected on their personal and professional goals during a weeklong orientation.

    “I am here because I want to change my career and learn basics in this field to then obtain networks that I wouldn’t have got if it weren’t for this program,” said Samiullah Ajmal, who is joining the program from Afghanistan.

    Interactive workshops on topics such as leadership development and virtual networking rounded out the week’s events. Members of ReACT’s greater community — which has grown in recent years to include a network of external collaborators including nonprofits, philanthropic supporters, universities, and alumni — helped facilitate these workshops and other orientation activities.

    For instance, Na’amal, a social enterprise that connects refugees to remote work opportunities, introduced the CDS learners to strategies for making career connections remotely. “We build confidence while doing,” says Susan Mulholland, a leadership and development coach with Na’amal who led the networking workshop.

    Along with the CDS program’s cohort-based model, ReACT also uses platforms that encourage regular communication between participants and with the larger ReACT network — making connections a critical component of the program.

    “I not only want to meet new people and make connections for my professional career, but I also want to test my communication and social skills,” says Pablo Andrés Uribe, a learner who lives in Colombia, describing ReACT’s emphasis on community-building. 

    Over the last two years, ReACT has expanded its geographic presence, growing from a hub in Jordan into a robust global community of many hubs, including in Colombia and Uganda. These regional sites connect talented refugees and displaced learners to internships and employment, startup networks and accelerators, and pathways to formal undergraduate and graduate education.

    This expansion is thanks to generous internal support from the MIT Office of the Provost and Associate Provost Richard Lester, and external support from organizations including the Western Union Foundation. ReACT will build new hubs this year in Greece, Uruguay, and Afghanistan, as a result of gifts from the Hatsopoulos family and the Pfeffer family.

    Holding space to learn from each other

    In addition to establishing new global hubs, ReACT plans to expand its network of internship and experiential learning opportunities, increasing outreach to new collaborators such as nongovernmental organizations (NGOs), companies, and universities. Jointly with Na’amal and Paper Airplanes, a nonprofit that connects conflict-affected individuals with personal language tutors, ReACT will host the first Migration Summit. Scheduled for April 2022, the month-long global convening invites a broad range of participants, including displaced learners, universities, companies, nonprofits and NGOs, social enterprises, foundations, philanthropists, researchers, policymakers, employers, and governments, to address the key challenges and opportunities for refugee and migrant communities. The theme of the summit is “Education and Workforce Development in Displacement.”

    “The MIT Migration Summit offers a platform to discuss how new educational models, such as those employed in ReACT, can help solve emerging challenges in providing quality education and career opportunities to forcibly displaced and marginalized people around the world,” says Masic. 

    A key goal of the convening is to center the voices of those most directly impacted by displacement, such as ReACT’s learners from Afghanistan and elsewhere, in solution-making.

    3 Questions: What a single car can say about traffic

    Vehicle traffic has long defied precise description. Once measured roughly through visual inspection and traffic cameras, traffic is now being quantified far more precisely by smartphone crowdsourcing tools. This popular method, however, also presents a problem: accurate measurements require a lot of data and users.

    Meshkat Botshekan, an MIT PhD student in civil and environmental engineering and research assistant at the MIT Concrete Sustainability Hub, has sought to expand on crowdsourcing methods by looking into the physics of traffic. During his time as a doctoral candidate, he has helped develop Carbin, a smartphone-based roadway crowdsourcing tool created by MIT CSHub and the University of Massachusetts Dartmouth, and used its data to offer more insight into the physics of traffic — from the formation of traffic jams to the inference of traffic phase and driving behavior. Here, he explains how recent findings can allow smartphones to infer traffic properties from the measurements of a single vehicle.  

    Q: Numerous navigation apps already measure traffic. Why do we need alternatives?

    A: Traffic characteristics have always been tough to measure. In the past, visual inspection and cameras were used to produce traffic metrics. So, there’s no denying that today’s navigation apps offer a superior alternative. Yet even these modern tools have gaps.

    Chief among them is their dependence on spatially distributed user counts: Essentially, these apps tally up their users on road segments to estimate the density of traffic. While this approach may seem adequate, it is vulnerable to manipulation, as demonstrated in some viral videos, and it requires immense quantities of data for reliable estimates. Processing these data is so time- and resource-intensive that, despite their availability, they can’t be used to quantify traffic effectively across a whole road network. As a result, this immense quantity of traffic data isn’t actually optimal for traffic management.
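    To see why that approach is so data-hungry, consider a minimal sketch of the tallying it relies on; the segment IDs, lengths, and user pings below are hypothetical illustration data, not anything from a real app:

        # Minimal sketch of the naive crowdsourced approach: tally active app
        # users per road segment and divide by segment length. All data below
        # are hypothetical illustration values.
        from collections import Counter

        segment_length_km = {"A": 1.2, "B": 0.8}           # assumed segment lengths
        user_pings = ["A", "A", "B", "A", "B", "A", "A"]   # one ping per active user

        counts = Counter(user_pings)
        for seg, n in counts.items():
            print(f"segment {seg}: ~{n / segment_length_km[seg]:.1f} vehicles/km observed")

        # The estimate is only as good as the user coverage: segments with few
        # or no active users are effectively invisible, which is the data-hunger
        # problem described above.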

    Q: How could new technologies improve how we measure traffic?

    A: New alternatives have the potential to offer two improvements over existing methods: First, they can extrapolate far more about traffic with far fewer data. Second, they can cost a fraction of the price while offering a far simpler method of data collection. Just like Waze and Google Maps, they rely on crowdsourcing data from users. Yet, they are grounded in the incorporation of high-level statistical physics into data analysis.

    For instance, the Carbin app, which we are developing in collaboration with UMass Dartmouth, applies principles of statistical physics to existing traffic models to entirely forgo the need for user counts. Instead, it can infer traffic density and driver behavior using the input of a smartphone mounted in a single vehicle.

    The method at the heart of the app, which was published last fall in Physical Review E, treats vehicles like particles in a many-body system. Just as the ergodic theorem of statistical physics lets us understand a closed many-body system by observing a single particle, we can characterize traffic through the fluctuations in speed and position of a single vehicle traveling along a road. As a result, we can infer the behavior and density of traffic on that segment of road.
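    As a rough illustration of the single-vehicle idea, the sketch below inverts a simple Greenshields fundamental diagram to turn one vehicle’s speed trace into a density estimate and a heuristic phase label. This is not the published Carbin algorithm; the free-flow speed, jam density, and fluctuation threshold are all assumed values.

        # Illustrative sketch of a single-vehicle traffic estimator. It assumes
        # a Greenshields fundamental diagram, v(k) = v_f * (1 - k / k_jam), and
        # inverts the vehicle's time-averaged speed to estimate density. All
        # parameter values are assumptions.
        import numpy as np

        V_FREE = 30.0   # assumed free-flow speed, m/s (about 108 km/h)
        K_JAM = 0.12    # assumed jam density, vehicles per meter per lane

        def estimate_traffic_state(speeds_mps):
            """Estimate density and phase from one vehicle's speed time series."""
            v_mean = float(np.mean(speeds_mps))
            v_std = float(np.std(speeds_mps))

            # Invert Greenshields: k = k_jam * (1 - v / v_f), clipped to the
            # physically meaningful range.
            density = K_JAM * max(1.0 - v_mean / V_FREE, 0.0)

            # Heuristic phase label: large speed fluctuations relative to the
            # mean speed indicate stop-and-go (congested) flow.
            phase = "congested" if v_std > 0.3 * max(v_mean, 1e-6) else "free flow"
            return density, phase

        # Example: a synthetic 10-minute speed trace sampled at 1 Hz.
        rng = np.random.default_rng(0)
        trace = np.clip(12 + 6 * np.sin(np.linspace(0, 20, 600))
                        + rng.normal(0, 2, 600), 0, None)
        density, phase = estimate_traffic_state(trace)
        print(f"estimated density: {density:.3f} veh/m, phase: {phase}")

    The published inference is considerably richer than this toy model, but the core move is the same: one vehicle’s fluctuations stand in for the whole flow.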

    Because far less data is required, this method is faster and makes data management far simpler. But most importantly, it also has the potential to make traffic data less expensive and more accessible to those who need it.

    Q: Who are some of the parties that would benefit from new technologies?

    A: More accessible and sophisticated traffic data would benefit more than just drivers seeking smoother, faster routes. It would also enable state and city departments of transportation (DOTs) to make local and collective interventions that advance the critical transportation objectives of equity, safety, and sustainability.

    As a safety solution, new data collection technologies could pinpoint dangerous driving conditions on a much finer scale to inform improved traffic calming measures. And since socially vulnerable communities experience traffic violence disproportionately, these interventions would have the added benefit of addressing pressing equity concerns. 

    There would also be an environmental benefit. DOTs could mitigate vehicle emissions by identifying minute deviations in traffic flow. This would present them with more opportunities to mitigate the idling and congestion that generate excess fuel consumption.  

    As we’ve seen, these three challenges have become increasingly acute, especially in urban areas. Yet, the data needed to address them exists already — and is being gathered by smartphones and telematics devices all over the world. So, to ensure a safer, more sustainable road network, it will be crucial to incorporate these data collection methods into our decision-making.

    A dirt cheap solution? Common clay materials may help curb methane emissions

    Methane is a far more potent greenhouse gas than carbon dioxide, and it has a pronounced effect within the first two decades of its presence in the atmosphere. At the recent international climate negotiations in Glasgow, abatement of methane emissions was identified as a major priority in attempts to curb global climate change quickly.

    Now, a team of researchers at MIT has come up with a promising approach to controlling methane emissions and removing the gas from the air, using an inexpensive and abundant type of clay called zeolite. The findings are described in the journal ACS Environment Au, in a paper by doctoral student Rebecca Brenneis, Associate Professor Desiree Plata, and two others.

    Although many people associate atmospheric methane with drilling and fracking for oil and natural gas, those sources only account for about 18 percent of global methane emissions, Plata says. The vast majority of emitted methane comes from such sources as slash-and-burn agriculture, dairy farming, coal and ore mining, wetlands, and melting permafrost. “A lot of the methane that comes into the atmosphere is from distributed and diffuse sources, so we started to think about how you could take that out of the atmosphere,” she says.

    The answer the researchers found was something dirt cheap — in fact, a special kind of “dirt,” or clay. They used zeolite clays, a material so inexpensive that it is currently used to make cat litter. Treating the zeolite with a small amount of copper, the team found, makes the material very effective at absorbing methane from the air, even at extremely low concentrations.

    The system is simple in concept, though much work remains on the engineering details. In their lab tests, tiny particles of the copper-enhanced zeolite material, similar to cat litter, were packed into a reaction tube, which was then heated from the outside as the stream of gas, with methane levels ranging from just 2 parts per million up to 2 percent concentration, flowed through the tube. That range covers everything that might exist in the atmosphere, down to subflammable levels that cannot be burned or flared directly.

    The process has several advantages over other approaches to removing methane from air, Plata says. Other methods tend to use expensive catalysts such as platinum or palladium, require high temperatures of at least 600 degrees Celsius, and tend to require complex cycling between methane-rich and oxygen-rich streams, making the devices both more complicated and more risky, as methane and oxygen are highly combustible on their own and in combination.

    “The 600 degrees where they run these reactors makes it almost dangerous to be around the methane,” as well as the pure oxygen, Brenneis says. “They’re solving the problem by just creating a situation where there’s going to be an explosion.” Other engineering complications also arise from the high operating temperatures. Unsurprisingly, such systems have not found much use.

    As for the new process, “I think we’re still surprised at how well it works,” says Plata, who is the Gilbert W. Winslow Associate Professor of Civil and Environmental Engineering. The process seems to have its peak effectiveness at about 300 degrees Celsius, which requires far less energy for heating than other methane capture processes. It also can work at concentrations of methane lower than other methods can address, even small fractions of 1 percent, which most methods cannot remove, and does so in air rather than pure oxygen, a major advantage for real-world deployment.

    The method converts the methane into carbon dioxide. That might sound like a bad thing, given the worldwide efforts to combat carbon dioxide emissions. “A lot of people hear ‘carbon dioxide’ and they panic; they say ‘that’s bad,’” Plata says. But she points out that carbon dioxide is much less impactful in the atmosphere than methane, which is about 80 times stronger as a greenhouse gas over the first 20 years, and about 25 times stronger for the first century. This effect arises from the fact that methane turns into carbon dioxide naturally over time in the atmosphere. By accelerating that process, this method would drastically reduce the near-term climate impact, she says. And even converting half of the atmosphere’s methane to carbon dioxide would increase levels of the latter by less than 1 part per million (about 0.2 percent of today’s atmospheric carbon dioxide) while saving about 16 percent of total radiative warming.
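    Those numbers are easy to sanity-check. A back-of-envelope sketch, assuming present-day concentrations of roughly 1.9 parts per million of methane and 415 parts per million of carbon dioxide (both assumed values, not figures from the paper), reproduces the quantities quoted above:

        # Back-of-envelope check, using assumed present-day concentrations.
        ch4_ppm = 1.9     # assumed atmospheric methane, parts per million
        co2_ppm = 415.0   # assumed atmospheric carbon dioxide, parts per million

        converted = 0.5 * ch4_ppm        # half the methane, oxidized 1:1 to CO2
        added_fraction = converted / co2_ppm

        print(f"CO2 added: {converted:.2f} ppm ({added_fraction:.1%} of today's CO2)")
        # Prints roughly 0.95 ppm and 0.2 percent, matching the figures above.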

    The ideal location for such systems, the team concluded, would be in places where there is a relatively concentrated source of methane, such as dairy barns and coal mines. These sources already tend to have powerful air-handling systems in place, since a buildup of methane can be a fire, health, and explosion hazard. To surmount the outstanding engineering details, the team has just been awarded a $2 million grant from the U.S. Department of Energy to continue to develop specific equipment for methane removal in these types of locations.

    “The key advantage of mining air is that we move a lot of it,” she says. “You have to pull fresh air in to enable miners to breathe, and to reduce explosion risks from enriched methane pockets. So, the volumes of air that are moved in mines are enormous.” The concentration of methane is too low to ignite, but it’s in the catalysts’ sweet spot, she says.

    Adapting the technology to specific sites should be relatively straightforward. The lab setup the team used in their tests consisted of “only a few components, and the technology you would put in a cow barn could be pretty simple as well,” Plata says. However, large volumes of gas do not flow that easily through clay, so the next phase of the research will focus on ways of structuring the clay material in a multiscale, hierarchical configuration that will aid air flow.

    “We need new technologies for oxidizing methane at concentrations below those used in flares and thermal oxidizers,” says Rob Jackson, a professor of earth systems science at Stanford University, who was not involved in this work. “There isn’t a cost-effective technology today for oxidizing methane at concentrations below about 2,000 parts per million.”

    Jackson adds, “Many questions remain for scaling this and all similar work: How quickly will the catalyst foul under field conditions? Can we get the required temperatures closer to ambient conditions? How scaleable will such technologies be when processing large volumes of air?”

    One potential major advantage of the new system is that the chemical process involved releases heat. By catalytically oxidizing the methane, in effect the process is a flame-free form of combustion. If the methane concentration is above 0.5 percent, the heat released is greater than the heat used to get the process started, and this heat could be used to generate electricity.
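    An illustrative energy balance shows where such a breakeven concentration can come from. The sketch below compares methane’s combustion heat against the cost of warming incoming air to roughly 300 degrees Celsius; the heat capacity, heating value, and the 50 percent heat-recovery effectiveness are all assumed values, not figures from the team’s analysis:

        # Rough, illustrative energy balance: does methane's combustion heat
        # exceed the heat needed to warm incoming air to ~300 C? All values
        # below are assumptions, not the team's numbers.
        CP_AIR = 29.0e-3   # molar heat capacity of air, kJ/(mol*K), assumed constant
        DELTA_T = 280.0    # warming from ~20 C ambient to ~300 C, in kelvin
        LHV_CH4 = 802.0    # lower heating value of methane, kJ/mol
        RECOVERY = 0.5     # assumed heat-exchanger effectiveness

        def net_heat_per_mol_air(ch4_fraction):
            heat_in = (1.0 - RECOVERY) * CP_AIR * DELTA_T  # heating cost after recovery
            heat_out = ch4_fraction * LHV_CH4              # combustion heat released
            return heat_out - heat_in

        for x in (0.002, 0.005, 0.02):
            print(f"CH4 at {x:.1%}: net {net_heat_per_mol_air(x):+.2f} kJ per mol of air")
        # Under these assumptions the balance turns positive near 0.5 percent
        # methane, consistent with the threshold quoted above.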

    The team’s calculations show that “at coal mines, you could potentially generate enough heat to generate electricity at the power plant scale, which is remarkable because it means that the device could pay for itself,” Plata says. “Most air-capture solutions cost a lot of money and would never be profitable. Our technology may one day be a counterexample.”

    Using the new grant money, she says, “over the next 18 months we’re aiming to demonstrate a proof of concept that this can work in the field,” where conditions can be more challenging than in the lab. Ultimately, they hope to be able to make devices that would be compatible with existing air-handling systems and could simply be an extra component added in place. “The coal mining application is meant to be at a stage that you could hand to a commercial builder or user three years from now,” Plata says.

    In addition to Plata and Brenneis, the team included Yale University PhD student Eric Johnson and former MIT postdoc Wenbo Shi. The work was supported by the Gerstner Philanthropies, Vanguard Charitable Trust, the Betty Moore Inventor Fellows Program, and MIT’s Research Support Committee.