More stories

  • Coupling power and hydrogen sector pathways to benefit decarbonization

    Governments and companies worldwide are increasing their investments in hydrogen research and development, indicating a growing recognition that hydrogen could play a significant role in meeting global energy system decarbonization goals. Since hydrogen is light, energy-dense, storable, and produces no direct carbon dioxide emissions at the point of use, this versatile energy carrier has the potential to be harnessed in a variety of ways in a future clean energy system.

    Often considered in the context of grid-scale energy storage, hydrogen has garnered renewed interest, in part due to expectations that our future electric grid will be dominated by variable renewable energy (VRE) sources such as wind and solar, as well as decreasing costs for water electrolyzers — both of which could make clean, “green” hydrogen more cost-competitive with fossil-fuel-based production. But hydrogen’s versatility as a clean energy fuel also makes it an attractive option to meet energy demand and to open pathways for decarbonization in hard-to-abate sectors where direct electrification is difficult, such as transportation, buildings, and industry.

    “We’ve seen a lot of progress and analysis around pathways to decarbonize electricity, but we may not be able to electrify all end uses. This means that just decarbonizing electricity supply is not sufficient, and we must develop other decarbonization strategies as well,” says Dharik Mallapragada, a research scientist at the MIT Energy Initiative (MITEI). “Hydrogen is an interesting energy carrier to explore, but understanding the role for hydrogen requires us to study the interactions between the electricity system and a future hydrogen supply chain.”

    In a recent paper, researchers from MIT and Shell present a framework to systematically study the role and impact of hydrogen-based technology pathways in a future low-carbon, integrated energy system, taking into account interactions with the electric grid and the spatio-temporal variations in energy demand and supply. The developed framework co-optimizes infrastructure investment and operation across the electricity and hydrogen supply chain under various emissions price scenarios. When applied to a Northeast U.S. case study, the researchers find this approach results in substantial benefits — in terms of costs and emissions reduction — as it takes advantage of hydrogen’s potential to provide the electricity system with a large flexible load when produced through electrolysis, while also enabling decarbonization of difficult-to-electrify, end-use sectors.

    The research team includes Mallapragada; Guannan He, a postdoc at MITEI; Abhishek Bose, a graduate research assistant at MITEI; Clara Heuberger-Austin, a researcher at Shell; and Emre Gençer, a research scientist at MITEI. Their findings are published in the journal Energy & Environmental Science.

    Cross-sector modeling

    “We need a cross-sector framework to analyze each energy carrier’s economics and role across multiple systems if we are to really understand the cost/benefits of direct electrification or other decarbonization strategies,” says He.

    To do that analysis, the team developed the Decision Optimization of Low-carbon Power-HYdrogen Network (DOLPHYN) model, which allows the user to study the role of hydrogen in low-carbon energy systems, the effects of coupling the power and hydrogen sectors, and the trade-offs between various technology options across both supply chains — spanning production, transport, storage, and end use, and their impact on decarbonization goals.

    “We are seeing great interest from industry and government, because they are all asking questions about where to invest their money and how to prioritize their decarbonization strategies,” says Gençer. Heuberger-Austin adds, “Being able to assess the system-level interactions between electricity and the emerging hydrogen economy is of paramount importance to drive technology development and support strategic value chain decisions. The DOLPHYN model can be instrumental in tackling those kinds of questions.”

    For a predefined set of electricity and hydrogen demand scenarios, the model determines the least-cost technology mix across the power and hydrogen sectors while adhering to a variety of operation and policy constraints. The model can incorporate a range of technology options — from VRE generation to carbon capture and storage (CCS) used with both power and hydrogen generation to trucks and pipelines used for hydrogen transport. With its flexible structure, the model can be readily adapted to represent emerging technology options and evaluate their long-term value to the energy system.

    As an important addition, the model takes into account process-level carbon emissions by allowing the user to add a cost penalty on emissions in both sectors. “If you have a limited emissions budget, we are able to explore the question of where to prioritize the limited emissions to get the best bang for your buck in terms of decarbonization,” says Mallapragada.
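
    The emissions-price logic can be illustrated with a toy dispatch sketch. All of the costs, emissions factors, and the electrolyzer's power use below are invented for illustration only; the actual DOLPHYN model co-optimizes investment and operation in far greater detail:

```python
def least_cost_mix(h2_demand, power_demand, co2_price):
    """Pick the cheapest hydrogen and power pathways once an emissions
    price is folded into each technology's cost (toy numbers, not data)."""
    # Hypothetical per-unit costs ($) and emissions (t CO2 per unit).
    tech = {
        "smr_h2":     {"cost": 1.5,  "co2": 0.010},  # per kg H2 (gas reforming)
        "electro_h2": {"cost": 0.5,  "co2": 0.0},    # per kg H2 (plus power use)
        "gas_power":  {"cost": 50.0, "co2": 0.40},   # per MWh
        "wind_power": {"cost": 40.0, "co2": 0.0},    # per MWh
    }

    def eff(name):  # cost with the emissions penalty folded in
        t = tech[name]
        return t["cost"] + co2_price * t["co2"]

    power_src = min(("gas_power", "wind_power"), key=eff)
    # Electrolysis consumes electricity (assumed 0.05 MWh per kg H2), so its
    # delivered cost includes the marginal cost of that extra generation.
    electro_total = eff("electro_h2") + 0.05 * eff(power_src)
    h2_src = "electro_h2" if electro_total < eff("smr_h2") else "smr_h2"

    extra_load = 0.05 * h2_demand if h2_src == "electro_h2" else 0.0
    return {
        "h2_source": h2_src,
        "power_source": power_src,
        "power_generated": power_demand + extra_load,
    }
```

    Even this cartoon shows the coupling the paper studies: raising the emissions price flips hydrogen production from gas reforming to electrolysis, which in turn adds a flexible electric load that the power system must serve.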

    Insights from a case study

    To test their model, the researchers investigated the Northeast U.S. energy system under a variety of demand, technology, and carbon price scenarios. While their major conclusions can be generalized for other regions, the Northeast proved to be a particularly interesting case study. This region has current legislation and regulatory support for renewable generation, as well as increasing emission-reduction targets, a number of which are quite stringent. It also has a high demand for energy for heating — a sector that is difficult to electrify and could particularly benefit from hydrogen and from coupling the power and hydrogen systems.

    The researchers find that when combining the power and hydrogen sectors through electrolysis or hydrogen-based power generation, there is more operational flexibility to support VRE integration in the power sector and a reduced need for alternative grid-balancing supply-side resources such as battery storage or dispatchable gas generation, which in turn reduces the overall system cost. This increased VRE penetration also leads to a reduction in emissions compared to scenarios without sector-coupling. “The flexibility that electricity-based hydrogen production provides in terms of balancing the grid is as important as the hydrogen it is going to produce for decarbonizing other end uses,” says Mallapragada. They found this type of grid interaction to be more favorable than conventional hydrogen-based electricity storage, which can incur additional capital costs and efficiency losses when converting hydrogen back to power. This suggests that the role of hydrogen in the grid could be more beneficial as a source of flexible demand than as storage.

    The researchers’ multi-sector modeling approach also highlighted that CCS is more cost-effective when deployed in the hydrogen supply chain than in the power sector. They note that, counter to this observation, current plans call for six times more CCS projects in the power sector than in hydrogen production by the end of the decade — a fact that emphasizes the need for more cross-sectoral modeling when planning future energy systems.

    In this study, the researchers tested the robustness of their conclusions against a number of factors, such as how the inclusion of non-combustion greenhouse gas emissions (including methane emissions) from natural gas used in power and hydrogen production impacts the model outcomes. They find that including the upstream emissions footprint of natural gas within the model boundary does not diminish the value of sector coupling with regard to VRE integration and cost savings for decarbonization; in fact, the value actually grows because of the increased emphasis on electricity-based hydrogen production over natural gas-based pathways.

    “You cannot achieve climate targets unless you take a holistic approach,” says Gençer. “This is a systems problem. There are sectors that you cannot decarbonize with electrification, and there are other sectors that you cannot decarbonize without carbon capture, and if you think about everything together, there is a synergistic solution that significantly minimizes the infrastructure costs.”

    This research was supported, in part, by Shell Global Solutions International B.V. in Amsterdam, the Netherlands, and MITEI’s Low-Carbon Energy Centers for Electric Power Systems and Carbon Capture, Utilization, and Storage.

  • How marsh grass protects shorelines

    Marsh plants, which are ubiquitous along the world’s shorelines, can play a major role in mitigating the damage to coastlines as sea levels rise and storm surges increase. Now, a new MIT study provides greater detail about how these protective benefits work under real-world conditions shaped by waves and currents.

    The study combined laboratory experiments using simulated plants in a large wave tank along with mathematical modeling. It appears in the journal Physical Review Fluids, in a paper by former MIT visiting doctoral student Xiaoxia Zhang, now a postdoc at Dalian University of Technology, and professor of civil and environmental engineering Heidi Nepf.

    It’s already clear that coastal marsh plants provide significant protection from surges and devastating storms. For example, it has been estimated that the damage caused by Hurricane Sandy was reduced by $625 million thanks to the damping of wave energy provided by extensive areas of marsh along the affected coasts. But the new MIT analysis incorporates details of plant morphology, such as the number and spacing of flexible leaves versus stiffer stems, and the complex interactions of currents and waves that may be coming from different directions.

    This level of detail could enable coastal restoration planners to determine the area of marsh needed to mitigate expected amounts of storm surge or sea-level rise, and to decide which types of plants to introduce to maximize protection.

    “When you go to a marsh, you often will see that the plants are arranged in zones,” says Nepf, who is the Donald and Martha Harleman Professor of Civil and Environmental Engineering. “Along the edge, you tend to have plants that are more flexible, because they are using their flexibility to reduce the wave forces they feel. In the next zone, the plants are a little more rigid and have a few more leaves.”

    As the zones progress, the plants become stiffer, leafier, and more effective at absorbing wave energy thanks to their greater leaf area. The new modeling done in this research, which incorporated work with simulated plants in the 24-meter-long wave tank at MIT’s Parsons Lab, can enable coastal planners to take these kinds of details into account when planning protection, mitigation, or restoration projects.

    “If you put the stiffest plants at the edge, they might not survive, because they’re feeling very high wave forces. By describing why Mother Nature organizes plants in this way, we can hopefully design a more sustainable restoration,” Nepf says.

    Once established, the marsh plants provide a positive feedback cycle that helps to not only stabilize but also build up these delicate coastal lands, Zhang says. “After a few years, the marsh grasses start to trap and hold the sediment, and the elevation gets higher and higher, which might keep up with sea level rise,” she says.

    Awareness of the protective effects of marshland has been growing, Nepf says. For example, the Netherlands has been restoring lost marshland outside the dikes that surround much of the nation’s agricultural land, finding that the marsh can protect the dikes from erosion; the marsh and dikes work together much more effectively than the dikes alone at preventing flooding.

    But most such efforts so far have been largely empirical, trial-and-error plans, Nepf says. Now, planners could take advantage of this modeling to determine just how much marshland, with which types of plants, would be needed to provide the desired level of protection.

    It also provides a more quantitative way to estimate the value provided by marshes, she says. “It could allow you to more accurately say, ‘40 meters of marsh will reduce waves this much and therefore will reduce overtopping of your levee by this much.’ Someone could use that to say, ‘I’m going to save this much money over the next 10 years if I reduce flooding by maintaining this marsh.’ It might help generate some political motivation for restoration efforts.”
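
    The kind of back-of-the-envelope estimate Nepf describes can be illustrated with an exponential damping law, a common empirical form for wave attenuation through vegetation. The decay coefficient below is a stand-in, not a value from the study:

```python
import math

def wave_height_in_marsh(H0, distance, k=0.02):
    """Wave height after traveling `distance` meters into a marsh, using
    the common empirical form H(x) = H0 * exp(-k * x). The damping
    coefficient k (per meter) is a placeholder, not from the paper."""
    return H0 * math.exp(-k * distance)

def marsh_width_for_target(H0, H_target, k=0.02):
    """Marsh width needed to damp incident waves from H0 down to H_target
    (the inverse of the decay law above)."""
    return math.log(H0 / H_target) / k
```

    In practice, the study's contribution is that a coefficient like k is no longer a guess: it can be tied to measurable plant properties such as stem stiffness, leaf count, and the coexisting current.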

    Nepf herself is already trying to get some of these findings included in coastal planning processes. She serves on a practitioner panel led by Chris Esposito of the Water Institute of the Gulf, which serves the storm-battered Louisiana coastline. “We’d like to get this work into the coastal simulations that are used for large-scale restoration and coastal planning,” she says.

    “Understanding the wave damping process in real vegetation wetlands is of critical value, as it is needed in the assessment of the coastal defense value of these wetlands,” says Zhan Hu, an associate professor of marine sciences at Sun Yat-sen University, who was not associated with this work. “The challenge, however, lies in the quantitative representation of the wave damping process, in which many factors are at play, such as plant flexibility, morphology, and coexisting currents.”

    The new study, Hu says, “neatly combines experimental findings and analytical modeling to reveal the impact of each factor in the wave damping process. … Overall, this work is a solid step forward toward a more accurate assessment of wave damping capacity of real coastal wetlands, which is needed for science-based design and management of nature-based coastal protection.”

    The work was partly supported by the National Science Foundation and the China Scholarship Council.

  • New “risk triage” platform pinpoints compounding threats to US infrastructure

    Over a 36-hour period in August, Hurricane Henri delivered record rainfall in New York City, where an aging storm-sewer system was not built to handle the deluge, resulting in street flooding. Meanwhile, an ongoing drought in California continued to overburden aquifers and extend statewide water restrictions. As climate change amplifies the frequency and intensity of extreme events in the United States and around the world, and the populations and economies they threaten grow and change, there is a critical need to make infrastructure more resilient. But how can this be done in a timely, cost-effective way?

    An emerging discipline called multi-sector dynamics (MSD) offers a promising solution. MSD homes in on compounding risks and potential tipping points across interconnected natural and human systems. Tipping points occur when these systems can no longer sustain multiple, co-evolving stresses, such as extreme events, population growth, land degradation, drinkable water shortages, air pollution, aging infrastructure, and increased human demands. MSD researchers use observations and computer models to identify key precursory indicators of such tipping points, providing decision-makers with critical information that can be applied to mitigate risks and boost resilience in infrastructure and managed resources.

    At MIT, the Joint Program on the Science and Policy of Global Change has since 2018 been developing MSD expertise and modeling tools and using them to explore compounding risks and potential tipping points in selected regions of the United States. In a two-hour webinar on Sept. 15, MIT Joint Program researchers presented an overview of the program’s MSD research tool set and its applications.  

    MSD and the risk triage platform

    “Multi-sector dynamics explores interactions and interdependencies among human and natural systems, and how these systems may adapt, interact, and co-evolve in response to short-term shocks and long-term influences and stresses,” says MIT Joint Program Deputy Director C. Adam Schlosser, noting that such analysis can reveal and quantify potential risks that would likely evade detection in siloed investigations. “These systems can experience cascading effects or failures after crossing tipping points. The real question is not just where these tipping points are in each system, but how they manifest and interact across all systems.”

    To address that question, the program’s MSD researchers have developed the MIT Socio-Environmental Triage (MST) platform, now publicly available for the first time. Focused on the continental United States, the first version of the platform analyzes present-day risks related to water, land, climate, the economy, energy, demographics, health, and infrastructure, and where these compound to create risk hot spots. It’s essentially a screening-level visualization tool that allows users to examine risks, identify hot spots when combining risks, and make decisions about how to deploy more in-depth analysis to solve complex problems at regional and local levels. For example, MST can identify hot spots for combined flood and poverty risks in the lower Mississippi River basin, and thereby alert decision-makers as to where more concentrated flood-control resources are needed.
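
    The hot-spot screening step can be sketched as normalizing each risk layer and flagging the regions whose combined score crosses a threshold. The scoring scheme here is assumed for illustration and is far simpler than the MST platform itself:

```python
def normalize(values):
    """Rescale a risk layer to [0, 1] so different units are comparable."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def hot_spots(risk_layers, threshold=0.7):
    """Average several normalized risk layers (e.g., flood and poverty,
    one value per region) into a composite index and return the indices
    of regions at or above the threshold."""
    norm = [normalize(layer) for layer in risk_layers]
    n = len(risk_layers[0])
    composite = [sum(layer[i] for layer in norm) / len(norm)
                 for i in range(n)]
    return [i for i, c in enumerate(composite) if c >= threshold]
```

    A region scoring high on flood risk alone may not surface; it is the regions where several layers peak together, as in the lower Mississippi example, that the screening flags for deeper analysis.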

    Successive versions of the platform will incorporate projections based on the MIT Joint Program’s Integrated Global System Modeling (IGSM) framework of how different systems and stressors may co-evolve into the future and thereby change the risk landscape. This enhanced capability could help uncover cost-effective pathways for mitigating and adapting to a wide range of environmental and economic risks.  

    MSD applications

    Five webinar presentations explored how MIT Joint Program researchers are applying the program’s risk triage platform and other MSD modeling tools to identify potential tipping points and risks in five key domains: water quality, land use, economics and energy, health, and infrastructure. 

    Joint Program Principal Research Scientist Xiang Gao described her efforts to apply a high-resolution U.S. water-quality model to calculate a location-specific, water-quality index over more than 2,000 river basins in the country. By accounting for interactions among climate, agriculture, and socioeconomic systems, various water-quality measures can be obtained ranging from nitrate and phosphate levels to phytoplankton concentrations. This modeling approach advances a unique capability to identify potential water-quality risk hot spots for freshwater resources.

    Joint Program Research Scientist Angelo Gurgel discussed his MSD-based analysis of how climate change, population growth, changing diets, crop-yield improvements and other forces that drive land-use change at the global level may ultimately impact how land is used in the United States. Drawing upon national observational data and the IGSM framework, the analysis shows that while current U.S. land-use trends are projected to persist or intensify between now and 2050, there is no evidence of any concerning tipping points arising throughout this period.  

    MIT Joint Program Research Scientist Jennifer Morris presented several examples of how the risk triage platform can be used to combine existing U.S. datasets and the IGSM framework to assess energy and economic risks at the regional level. For example, by aggregating separate data streams on fossil-fuel employment and poverty, one can target selected counties for clean energy job training programs as the nation moves toward a low-carbon future. 

    “Our modeling and risk triage frameworks can provide pictures of current and projected future economic and energy landscapes,” says Morris. “They can also highlight interactions among different human, built, and natural systems, including compounding risks that occur in the same location.”  

    MIT Joint Program research affiliate Sebastian Eastham, a research scientist at the MIT Laboratory for Aviation and the Environment, described an MSD approach to the study of air pollution and public health. Linking the IGSM with an atmospheric chemistry model, Eastham ultimately aims to better understand where the greatest health risks are in the United States and how they may compound throughout this century under different policy scenarios. Using the risk triage tool to combine current risk metrics for air quality and poverty in a selected county based on current population and air-quality data, he showed how one can rapidly identify cardiovascular and other air-pollution-induced disease risk hot spots.

    Finally, MIT Joint Program research affiliate Alyssa McCluskey, a lecturer at the University of Colorado at Boulder, showed how the risk triage tool can be used to pinpoint potential risks to roadways, waterways, and power distribution lines from flooding, extreme temperatures, population growth, and other stressors. In addition, McCluskey described how transportation and energy infrastructure development and expansion can threaten critical wildlife habitats.

    Enabling comprehensive, location-specific analyses of risks and hot spots within and among multiple domains, the Joint Program’s MSD modeling tools can be used to inform policymaking and investment from the municipal to the global level.

    “MSD takes on the challenge of linking human, natural, and infrastructure systems in order to inform risk analysis and decision-making,” says Schlosser. “Through our risk triage platform and other MSD models, we plan to assess important interactions and tipping points, and to provide foresight that supports action toward a sustainable, resilient, and prosperous world.”

    This research is funded by the U.S. Department of Energy’s Office of Science as an ongoing project.

  • For campus “porosity hunters,” climate resilience is the goal

    At MIT, it’s not uncommon to see groups navigating campus with smartphones and measuring devices in hand, using the Institute as a test bed for research. During one week this summer, more than a dozen students, researchers, and faculty, plus an altimeter, could be seen doing just that as they traveled across MIT to measure the points of entry into campus buildings — including windows, doors, and vents — known as a building’s porosity.

    Why measure campus building porosity?

    The group was part of the MIT Porosity Hunt, a citizen-science effort that is using the MIT campus as a place to test emerging methodologies, instruments, and data collection processes to better understand the potential impact of a changing climate — and specifically storm scenarios resulting from it — on infrastructure. The hunt is a collaborative effort between the Urban Risk Lab, led by director and associate professor of architecture and urbanism Miho Mazereeuw, and the Office of Sustainability (MITOS), aimed at supporting an MIT that is resilient to the impacts of climate change, including flooding and extreme heat events. Working over three days, members of the hunt catalogued openings in dozens of buildings across campus to better support flood mapping and resiliency planning at MIT.

    For Mazereeuw, the data collection project lies at the nexus of her work with the Urban Risk Lab and as a member of MIT’s Climate Resiliency Committee. While the lab’s mission is to “develop methods, prototypes, and technologies to embed risk reduction and preparedness into the design of cities and regions to increase resilience,” the Climate Resiliency Committee — made up of faculty, staff, and researchers — is focused on assessing, planning, and operationalizing a climate-resilient MIT. The work of both the lab and the committee is embedded in the recently released MIT Climate Resiliency Dashboard, a visualization tool that allows users to understand potential flooding impacts of a number of storm scenarios and drive decision-making.

    While the debut of the tool signaled a big advancement in resiliency planning at MIT, some, including Mazereeuw, saw an opportunity for enhancement. In working with Ken Strzepek, a MITOS Faculty Fellow and research scientist at the MIT Center for Global Change Science who was also an integral part of this work, Mazereeuw says she was surprised to learn that even the most sophisticated flood modeling treats buildings as solid blocks. With all buildings being treated the same, despite varying porosity, the dashboard is limited in some flood scenario analysis. To address this, Mazereeuw and others got to work to fill in that additional layer of data, with the citizen science efforts a key factor of that work. “Understanding the porosity of the building is important to understanding how much water actually goes in the building in these scenarios,” she explains.

    Though surveyors are often used to collect and map this type of information, Mazereeuw wanted to leverage the MIT community in order to collect data quickly while engaging students, faculty, and researchers as resiliency stewards for the campus. “It’s important for projects like this to encourage awareness,” she explains. “Generally, when something fails, we notice it, but otherwise we don’t. With climate change bringing on more uncertainty in the scale and intensity of events, we need everyone to be more aware and help us understand things like vulnerabilities.”

    To do this, MITOS and the Urban Risk Lab reached out to more than a dozen students, who were joined by faculty, staff, and researchers, to map porosity of 31 campus buildings connected by basements. The buildings were chosen based on this connectivity, understanding that water that reaches one basement could potentially flow to another.

    Urban Risk Lab research scientists Aditya Barve and Mayank Ojha aided the group’s efforts by creating a mapping app and chatbot to support consistency in reporting and ease of use. Each team member used the app to find buildings where porosity points needed to be mapped. As teams arrived at the building exteriors, they entered their location in the app, which then triggered the Facebook and LINE-powered chatbot on their phone. There, students were guided through measuring the opening, adjusting for elevation to correlate to the City of Cambridge base datum, and, based on observable features, noting the materials and quality of the opening on a one-through-three scale. Over just three days, the team, which included Mazereeuw herself, mapped 1,030 porosity points that will aid in resiliency planning and preparation on campus in a number of ways.
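
    The elevation-adjustment step the app walks surveyors through can be sketched as follows. The field names and survey workflow here are hypothetical; the actual app ties each measurement to the City of Cambridge base datum:

```python
def sill_elevation(ground_elev, sill_height):
    """Absolute elevation of an opening's sill: the surveyed ground
    elevation at the wall (relative to the base datum) plus the measured
    height of the opening above local ground."""
    return ground_elev + sill_height

def openings_admitting_water(openings, flood_elev):
    """Return the ids of openings whose sill sits below a given flood
    surface elevation -- i.e., the points where water would enter."""
    return [o["id"] for o in openings
            if sill_elevation(o["ground_elev"], o["sill_height"]) < flood_elev]
```

    Expressing every opening in datum-referenced elevations is what lets a flood model compare a single modeled water surface against a thousand field measurements at once.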

    “The goal is to understand various heights for flood waters around porous spots on campus,” says Mazereeuw. “But the impact can be different depending on the space. We hope this data can inform safety as well as understanding potential damage to research or disruption to campus operations from future storms.”

    The porosity data collection is complete for this round — future hunts will likely be conducted to confirm and converge data — but one team member’s work continues at the basement level of MIT. Katarina Boukin, a PhD student in civil and environmental engineering and PhD student fellow with MITOS, has been focused on methods of collecting data beneath buildings at MIT to understand how they would be impacted if flood water were to enter. “We have a number of connected basements on campus, and if one of them floods, potentially all of them do,” explains Boukin. “By looking at absolute elevation and porosity, we’re connecting the outside to the inside and tracking how much and where water may flow.” With the added data from the Porosity Hunt, a complete picture of vulnerabilities and resiliency opportunities can be shared.

    Synthesizing much of this data is where Eva Then ’21 comes in. Then was among the students who worked to capture data points over the three days and is now working in ArcGIS — an online mapping software that also powers the Climate Resiliency Dashboard — to process and visualize the data collected. Once completed, the data will be incorporated into the campus flood model to increase the accuracy of projections on the Climate Resiliency Dashboard. “Over the next decades, the model will serve as an adaptive planning tool to make campus safe and resilient amid growing climate risks,” Then says.

    For Mazereeuw, the Porosity Hunt and the data collected additionally serve as a study in scalability, providing valuable insight into how similar research efforts inspired by the MIT test bed approach could be undertaken and inform policy beyond MIT. She also hopes it will inspire students to launch their own hunts in the future, becoming resiliency stewards for their campus and dorms. “Going through measuring and documenting turns on and shows a new set of goggles — you see campus and buildings in a slightly different way,” she says. “Having people look carefully and document change is a powerful tool in climate and resiliency planning.”

    Mazereeuw also notes that recent devastating flooding events across the country, including those resulting from Hurricane Ida, have put a special focus on this work. “The loss of life that occurred in that storm, including those who died as waters flooded their basement homes, underscores the urgency of this type of research, planning, and readiness.”

  • Making roadway spending more sustainable

    The share of federal spending on infrastructure has reached an all-time low, falling from 30 percent in 1960 to just 12 percent in 2018.

    While the nation’s ailing infrastructure will require more funding to reach its full potential, recent MIT research finds that more sustainable and higher performing roads are still possible even with today’s limited budgets.

    The research, conducted by a team of current and former MIT Concrete Sustainability Hub (MIT CSHub) scientists and published in Transportation Research Part D, finds that a set of innovative planning strategies could improve the environmental and performance outcomes of pavement networks even if budgets don’t increase.

    The paper presents a novel budget allocation tool and pairs it with three innovative strategies for managing pavement networks: a mix of paving materials, a mix of short- and long-term paving actions, and a long evaluation period for those actions.

    This novel approach offers numerous benefits. When applied to a 30-year case study of the Iowa U.S. Route network, the MIT CSHub model and management strategies cut emissions by 20 percent while sustaining current levels of road quality. Achieving this with a conventional planning approach would require the state to spend 32 percent more than it does today. The key to its success is the consideration of a fundamental — but fraught — aspect of pavement asset management: uncertainty.

    Predicting unpredictability

    The average road must last many years and support the traffic of thousands — if not millions — of vehicles. Over that time, a lot can change. Material prices may fluctuate, budgets may tighten, and traffic levels may intensify. Climate (and climate change), too, can hasten unexpected repairs.

    Managing these uncertainties effectively means looking long into the future and anticipating possible changes.

    “Capturing the impacts of uncertainty is essential for making effective paving decisions,” explains Fengdi Guo, the paper’s lead author and a departing CSHub research assistant.

    “Yet, measuring and relating these uncertainties to outcomes is also computationally intensive and expensive. Consequently, many DOTs [departments of transportation] are forced to simplify their analysis to plan maintenance — often resulting in suboptimal spending and outcomes.”

    To give DOTs accessible tools to factor uncertainties into their planning, CSHub researchers have developed a streamlined planning approach. It offers greater specificity and is paired with several new pavement management strategies.

    The planning approach, known as Probabilistic Treatment Path Dependence (PTPD), is based on machine learning and was devised by Guo.

    “Our PTPD model is composed of four steps,” he explains. “These steps are, in order, pavement damage prediction; treatment cost prediction; budget allocation; and pavement network condition evaluation.”

    The model begins by investigating every segment in an entire pavement network and predicting future possibilities for pavement deterioration, cost, and traffic.

    “We [then] run thousands of simulations for each segment in the network to determine the likely cost and performance outcomes for each initial and subsequent sequence, or ‘path,’ of treatment actions,” says Guo. “The treatment paths with the best cost and performance outcomes are selected for each segment, and then across the network.”
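    In rough terms, the per-segment step Guo describes can be sketched as a Monte Carlo comparison of candidate treatment paths. The path names, cost ranges, and uniform-noise cost model below are invented for illustration and are not the CSHub model’s actual inputs:

    ```python
    import random

    # Minimal sketch: simulate many uncertain lifetime-cost outcomes for each
    # candidate treatment path on one segment, then keep the path with the
    # lowest expected cost. All figures here are illustrative assumptions.

    random.seed(0)

    # Candidate treatment paths with (low, high) bounds on lifetime cost ($M).
    TREATMENT_PATHS = {
        "short_term_overlays": (8.0, 14.0),
        "one_time_reconstruction": (9.0, 13.0),
        "preservation_schedule": (6.0, 10.0),
    }

    def simulate_cost(low, high):
        """One draw of uncertain lifetime cost (uniform model, for illustration)."""
        return random.uniform(low, high)

    def best_path(paths, n_sims=5000):
        """Average n_sims cost draws per path; return (best path, expected costs)."""
        expected = {
            name: sum(simulate_cost(lo, hi) for _ in range(n_sims)) / n_sims
            for name, (lo, hi) in paths.items()
        }
        return min(expected, key=expected.get), expected

    choice, costs = best_path(TREATMENT_PATHS)
    print(choice, {name: round(c, 2) for name, c in costs.items()})
    ```

    The real model additionally scores performance outcomes and optimizes across the whole network, but the core idea — rank treatment paths by their simulated distributions rather than single-point estimates — is the same.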

    The PTPD model seeks to minimize costs not only to agencies but also to users — in this case, drivers. These user costs come primarily in the form of excess fuel consumption due to poor road quality.

    “One improvement in our analysis is the incorporation of electric vehicle uptake into our cost and environmental impact predictions,” says Randolph Kirchain, a principal research scientist at the MIT CSHub and MIT Materials Research Laboratory (MRL) and one of the paper’s co-authors. “Since the vehicle fleet will change over the next several decades due to electric vehicle adoption, we made sure to consider how these changes might impact our predictions of excess energy consumption.”

    After developing the PTPD model, Guo wanted to see how the efficacy of various pavement management strategies might differ. To do this, he developed a sophisticated deterioration prediction model.

    A novel aspect of this deterioration model is its treatment of multiple deterioration metrics at once. Using a multi-output neural network, a tool of artificial intelligence, the model can predict several forms of pavement deterioration simultaneously, thereby accounting for the correlations among them.
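    The idea of predicting correlated metrics jointly can be sketched with a small multi-output network: a shared hidden layer feeds two output heads, so what the model learns about one form of deterioration informs the other. The synthetic data, architecture, and training loop below are illustrative assumptions, not the CSHub model:

    ```python
    import numpy as np

    # Sketch of a multi-output predictor: segment features (age, traffic) map
    # to two correlated deterioration metrics (say, roughness and cracking)
    # through one shared hidden layer. All data here are synthetic.

    rng = np.random.default_rng(0)

    # 500 segments, 2 features, 2 correlated targets driven by a shared latent.
    X = rng.uniform(0, 1, size=(500, 2))            # [age, traffic], scaled
    shared = X @ np.array([1.5, 2.0])               # latent deterioration driver
    Y = np.column_stack([shared + 0.05 * rng.standard_normal(500),
                         0.8 * shared + 0.05 * rng.standard_normal(500)])

    # One hidden layer, two outputs, trained with plain full-batch gradient descent.
    W1 = rng.standard_normal((2, 8)) * 0.5
    b1 = np.zeros(8)
    W2 = rng.standard_normal((8, 2)) * 0.5
    b2 = np.zeros(2)
    lr = 0.1

    for _ in range(5000):
        H = np.tanh(X @ W1 + b1)      # shared hidden representation
        pred = H @ W2 + b2            # both metrics predicted together
        err = pred - Y
        # Backpropagate the mean-squared-error gradient.
        gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
        dH = (err @ W2.T) * (1 - H**2)
        gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

    mse = float(((np.tanh(X @ W1 + b1) @ W2 + b2 - Y) ** 2).mean())
    print(f"final training MSE: {mse:.4f}")
    ```

    Because both outputs share the hidden layer, the network implicitly exploits the correlation between the two metrics — the property the CSHub researchers highlight.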

    The MIT team selected two key metrics to compare the effectiveness of various treatment paths: pavement quality and greenhouse gas emissions. These metrics were then calculated for all pavement segments in the Iowa network.

    Improvement through variation

     The MIT model can help DOTs make better decisions, but that decision-making is ultimately constrained by the potential options considered.

    Guo and his colleagues, therefore, sought to expand current decision-making paradigms by exploring a broad set of network management strategies and evaluating them with their PTPD approach. Based on that evaluation, the team discovered that networks had the best outcomes when the management strategy includes using a mix of paving materials, a variety of long- and short-term paving repair actions (treatments), and longer time periods on which to base paving decisions.

    They then compared this proposed approach with a baseline management approach that reflects current, widespread practices: the use of solely asphalt materials, short-term treatments, and a five-year period for evaluating the outcomes of paving actions.

    With these two approaches established, the team used them to plan 30 years of maintenance across the Iowa U.S. Route network. They then measured the subsequent road quality and emissions.

    Their case study found that the MIT approach offered substantial benefits. Pavement-related greenhouse gas emissions would fall by around 20 percent across the network over the whole period. Pavement performance improved as well. To achieve the same level of road quality as the MIT approach, the baseline approach would need a 32 percent greater budget.

    “It’s worth noting,” says Guo, “that since conventional practices employ less effective allocation tools, the difference between them and the CSHub approach should be even larger in practice.”

    Much of the improvement derived from the precision of the CSHub planning model. But the three treatment strategies also play a key role.

    “We’ve found that a mix of asphalt and concrete paving materials allows DOTs to not only find materials best-suited to certain projects, but also mitigates the risk of material price volatility over time,” says Kirchain.

    It’s a similar story with a mix of paving actions. Employing a mix of short- and long-term fixes gives DOTs the flexibility to choose the right action for the right project.

    The final strategy, a long-term evaluation period, enables DOTs to see the entire scope of their choices. If the ramifications of a decision are predicted over only five years, many long-term implications won’t be considered. Expanding the window for planning, then, can introduce beneficial, long-term options.

    It’s not surprising that paving decisions are daunting to make; their impacts on the environment, driver safety, and budget levels are long-lasting. But rather than simplify this fraught process, the CSHub method aims to reflect its complexity. The result is an approach that provides DOTs with the tools to do more with less.

    This research was supported through the MIT Concrete Sustainability Hub by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.


    Predicting building emissions across the US

    The United States is entering a building boom. Between 2017 and 2050, it will build the equivalent of New York City 20 times over. Yet, to meet climate targets, the nation must also significantly reduce the greenhouse gas (GHG) emissions of its buildings, which comprise 27 percent of the nation’s total emissions.

    A team of current and former MIT Concrete Sustainability Hub (CSHub) researchers is addressing these conflicting demands with the aim of giving policymakers the tools and information to act. They have detailed the results of their collaboration in a recent paper in the journal Applied Energy that projects emissions for all buildings across the United States under two GHG reduction scenarios.

    Their paper found that “embodied” emissions — those from materials production and construction — would represent around a quarter of emissions between 2016 and 2050 despite extensive construction.

    Further, many regions would have varying priorities for GHG reductions; some, like the West, would benefit most from reductions to embodied emissions, while others, like parts of the Midwest, would see the greatest payoff from interventions to emissions from energy consumption. If these regional priorities were addressed aggressively, building sector emissions could be reduced by around 30 percent between 2016 and 2050.

    Quantifying contradictions

    Modern buildings are far more complex — and efficient — than their predecessors. Due to new technologies and more stringent building codes, they can offer lower energy consumption and operational emissions. And yet, more-efficient materials and improved construction standards can also generate greater embodied emissions.

    Concrete, in many ways, epitomizes this tradeoff. Though its durability can minimize energy-intensive repairs over a building’s operational life, the scale of its production means that it contributes to a large proportion of the embodied impacts in the building sector.

    As such, the team centered GHG reductions for concrete in its analysis.

    “We took a bottom-up approach, developing reference designs based on a set of residential and commercial building models,” explains Ehsan Vahidi, an assistant professor at the University of Nevada at Reno and a former CSHub postdoc. “These designs were differentiated by roof and slab insulation, HVAC efficiency, and construction materials — chiefly concrete and wood.”

    After measuring the operational and embodied GHG emissions for each reference design, the team scaled up their results to the county level and then national level based on building stock forecasts. This allowed them to estimate the emissions of the entire building sector between 2016 and 2050.
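    The scale-up step described here — per-design emissions multiplied by forecast building stock, summed from the county level to a national total — can be sketched as follows. Every design name, per-building figure, and stock count is invented for illustration:

    ```python
    # Hypothetical per-building annual emissions (t CO2e/yr) for reference designs,
    # keyed by (building type, structural material).
    reference_designs = {
        ("residential", "wood"): 6.0,
        ("residential", "concrete"): 5.5,
        ("commercial", "concrete"): 42.0,
    }

    # Forecast stock by county: {county: {design_key: building count}}.
    county_stock = {
        "county_A": {("residential", "wood"): 12_000,
                     ("commercial", "concrete"): 300},
        "county_B": {("residential", "concrete"): 8_000,
                     ("commercial", "concrete"): 150},
    }

    def county_emissions(stock, designs):
        """Sum per-design emissions times building count for one county."""
        return sum(designs[key] * count for key, count in stock.items())

    national = sum(county_emissions(stock, reference_designs)
                   for stock in county_stock.values())
    print(f"national total: {national:,.0f} t CO2e/yr")
    ```

    The actual study differentiates designs further (by climate zone, insulation, HVAC efficiency) and runs the aggregation year by year to 2050, but the bottom-up arithmetic follows this pattern.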

    To understand how various interventions could cut GHG emissions, researchers ran two different scenarios — a “projected” and an “ambitious” scenario — through their framework.

    The projected scenario corresponded to current trends. It assumed grid decarbonization would follow Energy Information Administration predictions; the widespread adoption of new energy codes; efficiency improvement of lighting and appliances; and, for concrete, the implementation of 50 percent low-carbon cements and binders in all new concrete construction and the adoption of full carbon capture, storage, and utilization (CCUS) of all cement and concrete emissions.

    “Our ambitious scenario was intended to reflect a future where more aggressive actions are taken to reduce GHG emissions and achieve the targets,” says Vahidi. “Therefore, the ambitious scenario took these same strategies [of the projected scenario] but featured more aggressive targets for their implementation.”

    For instance, it assumed a 33 percent reduction in grid emissions by 2050 and moved the projected deadlines for lighting and appliances and thermal insulation forward by five and 10 years, respectively. Concrete decarbonization occurred far more quickly as well.

    Reductions and variations

    The extensive growth forecast for the U.S. building sector will inevitably generate a sizable number of emissions. But how much can this figure be minimized?

    Without the implementation of any GHG reduction strategies, the team found that the building sector would emit 62 gigatons CO2 equivalent between 2016 and 2050. That’s comparable to the emissions generated from 156 trillion passenger vehicle miles traveled.

    But both GHG reduction scenarios could cut the emissions from this unmitigated, business-as-usual scenario significantly.

    Under the projected scenario, emissions would fall to 45 gigatons CO2 equivalent — a 27 percent decrease over the analysis period. The ambitious scenario would offer a further 6 percent reduction over the projected scenario, reaching 40 gigatons CO2 equivalent — like removing around 55 trillion passenger vehicle miles from the road over the period.
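    As a back-of-envelope consistency check (not from the paper), assume the article's vehicle-mile comparisons use a single implied per-mile emission factor derived from the unmitigated figures. The quoted equivalences then line up:

    ```python
    # 62 Gt CO2e over the period is equated with 156 trillion vehicle miles,
    # implying roughly 400 g CO2e per mile — a plausible passenger-car figure.
    GT = 1e15  # grams per gigaton

    factor = 62 * GT / 156e12                 # ≈ 397 g CO2e per vehicle mile
    miles_saved = (62 - 40) * GT / factor     # ambitious vs. unmitigated scenario
    print(f"{factor:.0f} g/mile; about {miles_saved / 1e12:.0f} trillion miles avoided")
    ```

    The 22-gigaton gap between the unmitigated and ambitious scenarios works out to about 55 trillion vehicle miles, matching the figure quoted above.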

    “In both scenarios, the largest contributor to reductions was the greening of the energy grid,” notes Vahidi. “Other notable opportunities for reductions were from increasing the efficiency of lighting, HVAC, and appliances. Combined, these four attributes accounted for 85 percent of the emissions over the analysis period. Improvements to them offered the greatest potential emissions reductions.”

    The remaining attributes, such as thermal insulation and low-carbon concrete, had a smaller impact on emissions and, consequently, offered smaller reduction opportunities. That’s because these two attributes were only applied to new construction in the analysis, which was outnumbered by existing structures throughout the period.

    The disparities in impact between strategies aimed at new and existing structures underscore a broader finding: Despite extensive construction over the period, embodied emissions would comprise just 23 percent of cumulative emissions between 2016 and 2050, with the remainder coming primarily from operation.  

    “This is a consequence of existing structures far outnumbering new structures,” explains Jasmina Burek, a CSHub postdoc and an incoming assistant professor at the University of Massachusetts Lowell. “The operational emissions generated by all new and existing structures between 2016 and 2050 will always greatly exceed the embodied emissions of new structures at any given time, even as buildings become more efficient and the grid gets greener.”

    Yet the emissions reductions from both scenarios were not distributed evenly across the entire country. The team identified several regional variations that could have implications for how policymakers must act to reduce building sector emissions.

    “We found that western regions in the United States would see the greatest reduction opportunities from interventions to residential emissions, which would constitute 90 percent of the region’s total emissions over the analysis period,” says Vahidi.

    The predominance of residential emissions stems from the region’s ongoing population surge and its subsequent growth in housing stock. Proposed solutions would include CCUS and low-carbon binders for concrete production, and improvements to energy codes aimed at residential buildings.

    As with the West, ideal solutions for the Southeast would include CCUS, low-carbon binders, and improved energy codes.

    “In the case of Southeastern regions, interventions should equally target commercial and residential buildings, which we found were split more evenly among the building stock,” explains Burek. “Due to the stringent energy codes in both regions, interventions to operational emissions were less impactful than those to embodied emissions.”

    Much of the Midwest saw the inverse outcome. Its energy mix remains one of the most carbon-intensive in the nation and improvements to energy efficiency and the grid would have a large payoff — particularly in Missouri, Kansas, and Colorado.

    New England and California would see the smallest reductions. As their already-strict energy codes would limit further operational reductions, opportunities to reduce embodied emissions would be the most impactful.

    This tremendous regional variation uncovered by the MIT team is in many ways a reflection of the great demographic and geographic diversity of the nation as a whole. And there are still further variables to consider.

    In addition to GHG emissions, future research could consider other environmental impacts, like water consumption and air quality. Other mitigation strategies to consider include longer building lifespans, retrofitting, rooftop solar, and recycling and reuse.

    In this sense, their findings represent the lower bounds of what is possible in the building sector. And even if further improvements are ultimately possible, they’ve shown that regional variation will invariably inform those environmental impact reductions.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.


    MIT appoints members of new faculty committee to drive climate action plan

    In May, responding to the world’s accelerating climate crisis, MIT issued an ambitious new plan, “Fast Forward: MIT’s Climate Action Plan for the Decade.” The plan outlines a broad array of new and expanded initiatives across campus to build on the Institute’s longstanding climate work.

    Now, to unite these varied climate efforts, maximize their impact, and identify new ways for MIT to contribute climate solutions, the Institute has appointed more than a dozen faculty members to a new committee established by the Fast Forward plan, named the Climate Nucleus.

    The committee includes leaders of a number of climate- and energy-focused departments, labs, and centers that have significant responsibilities under the plan. Its membership spans all five schools and the MIT Schwarzman College of Computing. Professors Noelle Selin and Anne White have agreed to co-chair the Climate Nucleus for a term of three years.

    “I am thrilled and grateful that Noelle and Anne have agreed to step up to this important task,” says Maria T. Zuber, MIT’s vice president for research. “Under their leadership, I’m confident that the Climate Nucleus will bring new ideas and new energy to making the strategy laid out in the climate action plan a reality.”

    The Climate Nucleus has broad responsibility for the management and implementation of the Fast Forward plan across its five areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.

    Over the next few years, the nucleus will aim to advance MIT’s contribution to a two-track approach to decarbonizing the global economy, an approach described in the Fast Forward plan. First, humanity must go as far and as fast as it can to reduce greenhouse gas emissions using existing tools and methods. Second, societies need to invest in, invent, and deploy new tools — and promote new institutions and policies — to get the global economy to net-zero emissions by mid-century.

    The co-chairs of the nucleus bring significant climate and energy expertise, along with deep knowledge of the MIT community, to their task.

    Selin is a professor with joint appointments in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences. She is also the director of the Technology and Policy Program. She began at MIT in 2007 as a postdoc with the Center for Global Change Science and the Joint Program on the Science and Policy of Global Change. Her research uses modeling to inform decision-making on air pollution, climate change, and hazardous substances.

    “Climate change affects everything we do at MIT. For the new climate action plan to be effective, the Climate Nucleus will need to engage the entire MIT community and beyond, including policymakers as well as people and communities most affected by climate change,” says Selin. “I look forward to helping to guide this effort.”

    White is the School of Engineering’s Distinguished Professor of Engineering and the head of the Department of Nuclear Science and Engineering. She joined the MIT faculty in 2009 and has also served as the associate director of MIT’s Plasma Science and Fusion Center. Her research focuses on assessing and refining the mathematical models used in the design of fusion energy devices, such as tokamaks, which hold promise for delivering limitless zero-carbon energy.

    “The latest IPCC report underscores the fact that we have no time to lose in decarbonizing the global economy quickly. This is a problem that demands we use every tool in our toolbox — and develop new ones — and we’re committed to doing that,” says White, referring to an August 2021 report from the Intergovernmental Panel on Climate Change, a UN climate science body, that found that climate change has already affected every region on Earth and is intensifying. “We must train future technical and policy leaders, expand opportunities for students to work on climate problems, and weave sustainability into every one of MIT’s activities. I am honored to be a part of helping foster this Institute-wide collaboration.”

    A first order of business for the Climate Nucleus will be standing up three working groups to address specific aspects of climate action at MIT: climate education, climate policy, and MIT’s own carbon footprint. The working groups will be responsible for making progress on their particular areas of focus under the plan and will make recommendations to the nucleus on ways of increasing MIT’s effectiveness and impact. The working groups will also include student, staff, and alumni members, so that the entire MIT community has the opportunity to contribute to the plan’s implementation.  

    The nucleus, in turn, will report and make regular recommendations to the Climate Steering Committee, a senior-level team consisting of Zuber; Richard Lester, the associate provost for international activities; Glen Shor, the executive vice president and treasurer; and the deans of the five schools and the MIT Schwarzman College of Computing. The new plan created the Climate Steering Committee to ensure that climate efforts will receive both the high-level attention and the resources needed to succeed.

    Together the new committees and working groups are meant to form a robust new infrastructure for uniting and coordinating MIT’s climate action efforts in order to maximize their impact. They replace the Climate Action Advisory Committee, which was created in 2016 following the release of MIT’s first climate action plan.

    In addition to Selin and White, the members of the Climate Nucleus are:

    Bob Armstrong, professor in the Department of Chemical Engineering and director of the MIT Energy Initiative;
    Dara Entekhabi, professor in the departments of Civil and Environmental Engineering and Earth, Atmospheric and Planetary Sciences;
    John Fernández, professor in the Department of Architecture and director of the Environmental Solutions Initiative;
    Stefan Helmreich, professor in the Department of Anthropology;
    Christopher Knittel, professor in the MIT Sloan School of Management and director of the Center for Energy and Environmental Policy Research;
    John Lienhard, professor in the Department of Mechanical Engineering and director of the Abdul Latif Jameel Water and Food Systems Lab;
    Julie Newman, director of the Office of Sustainability and lecturer in the Department of Urban Studies and Planning;
    Elsa Olivetti, professor in the Department of Materials Science and Engineering and co-director of the Climate and Sustainability Consortium;
    Christoph Reinhart, professor in the Department of Architecture and director of the Building Technology Program;
    John Sterman, professor in the MIT Sloan School of Management and director of the Sloan Sustainability Initiative;
    Rob van der Hilst, professor and head of the Department of Earth, Atmospheric and Planetary Sciences; and
    Chris Zegras, professor and head of the Department of Urban Studies and Planning.


    Concrete’s role in reducing building and pavement emissions

    Encountering concrete is a common, even routine, occurrence. And that’s exactly what makes concrete exceptional.

    As the most consumed material after water, concrete is indispensable to the many essential systems — from roads to buildings — in which it is used.

    But due to its extensive use, concrete production also contributes to around 1 percent of emissions in the United States and remains one of several carbon-intensive industries globally. Tackling climate change, then, will mean reducing the environmental impacts of concrete, even as its use continues to increase.

    In a new paper in the Proceedings of the National Academy of Sciences, a team of current and former researchers at the MIT Concrete Sustainability Hub (CSHub) outlines how this can be achieved.

    They present an extensive life-cycle assessment of the building and pavements sectors that estimates how greenhouse gas (GHG) reduction strategies — including those for concrete and cement — could minimize the cumulative emissions of each sector and how those reductions would compare to national GHG reduction targets. 

    The team found that, if reduction strategies were implemented, the emissions for pavements and buildings between 2016 and 2050 could fall by up to 65 percent and 57 percent, respectively, even if concrete use accelerated greatly over that period. These are close to U.S. reduction targets set as part of the Paris Climate Accords. The solutions considered would also enable concrete production for both sectors to attain carbon neutrality by 2050.

    Despite continued grid decarbonization and increases in fuel efficiency, they found that the vast majority of the GHG emissions from new buildings and pavements during this period would derive from operational energy consumption rather than so-called embodied emissions — emissions from materials production and construction.

    Sources and solutions

    The consumption of concrete, due to its versatility, durability, constructability, and role in economic development, has been projected to increase around the world.

    While it is essential to consider the embodied impacts of ongoing concrete production, it is equally essential to place these initial impacts in the context of the material’s life cycle.

    Due to concrete’s unique attributes, it can influence the long-term sustainability performance of the systems in which it is used. Concrete pavements, for instance, can reduce vehicle fuel consumption, while concrete structures can endure hazards without needing energy- and materials-intensive repairs.

    Concrete’s impacts, then, are as complex as the material itself — a carefully proportioned mixture of cement powder, water, sand, and aggregates. Untangling concrete’s contribution to the operational and embodied impacts of buildings and pavements is essential for planning GHG reductions in both sectors.

    Set of scenarios

    In their paper, CSHub researchers forecast the potential greenhouse gas emissions from the building and pavements sectors as numerous emissions reduction strategies were introduced between 2016 and 2050.

    Since both of these sectors are immense and rapidly evolving, modeling them required an intricate framework.

    “We don’t have details on every building and pavement in the United States,” explains Randolph Kirchain, a research scientist at the Materials Research Laboratory and co-director of CSHub.

    “As such, we began by developing reference designs, which are intended to be representative of current and future buildings and pavements. These were adapted to be appropriate for 14 different climate zones in the United States and then distributed across the U.S. based on data from the U.S. Census and the Federal Highway Administration.”

    To reflect the complexity of these systems, the models had to be as high-resolution as possible.

    “In the pavements sector, we collected the current stock of the U.S. network based on high-precision 10-mile segments, along with the surface conditions, traffic, thickness, lane width, and number of lanes for each segment,” says Hessam AzariJafari, a postdoc at CSHub and a co-author on the paper.

    “To model future paving actions over the analysis period, we assumed four climate conditions; four road types; asphalt, concrete, and composite pavement structures; as well as major, minor, and reconstruction paving actions specified for each climate condition.”

    Using this framework, they analyzed a “projected” and an “ambitious” scenario of reduction strategies and system attributes for buildings and pavements over the 34-year analysis period. The scenarios were defined by the timing and intensity of GHG reduction strategies.

    As its name might suggest, the projected scenario reflected current trends. For the building sector, solutions encompassed expected grid decarbonization and improvements to building codes and energy efficiency that are currently being implemented across the country. For pavements, the sole projected solution was improvements to vehicle fuel economy. That’s because as vehicle efficiency continues to increase, excess vehicle emissions due to poor road quality will also decrease.

    The projected scenarios for both buildings and pavements featured the gradual introduction of low-carbon concrete strategies, such as recycled content, carbon capture in cement production, and the use of captured carbon to produce aggregates and cure concrete.

    “In the ambitious scenario,” explains Kirchain, “we went beyond projected trends and explored reasonable changes that exceed current policies and [industry] commitments.”

    Here, the building sector strategies were the same, but implemented more aggressively. The pavements sector also abided by more aggressive targets and incorporated several novel strategies, including investing more to yield smoother roads, selectively applying concrete overlays to produce stiffer pavements, and introducing more reflective pavements — which can change the Earth’s energy balance by sending more energy out of the atmosphere.

    Results

    As the grid becomes greener and new homes and buildings become more efficient, many experts have predicted that the operational impacts of new construction projects will shrink in comparison to their embodied emissions.

    “What our life-cycle assessment found,” says Jeremy Gregory, executive director of the MIT Climate and Sustainability Consortium and the lead author on the paper, “is that [this prediction] isn’t necessarily the case.”

    “Instead, we found that more than 80 percent of the total emissions from new buildings and pavements between 2016 and 2050 would derive from their operation.”

    In fact, the study found that operations will create the majority of emissions through 2050 unless all energy sources — electrical and thermal — are carbon-neutral by 2040. This suggests that ambitious interventions to the electricity grid and other sources of operational emissions can have the greatest impact.

    Their predictions for emissions reductions generated additional insights.  

    For the building sector, they found that the projected scenario would lead to a reduction of 49 percent compared to 2016 levels, and that the ambitious scenario provided a 57 percent reduction.

    As most buildings during the analysis period were existing rather than new, energy consumption dominated emissions in both scenarios. Consequently, decarbonizing the electricity grid and improving the efficiency of appliances and lighting led to the greatest improvements for buildings, they found.

    In contrast to the building sector, the pavements scenarios had a sizeable gulf between outcomes: the projected scenario led to only a 14 percent reduction while the ambitious scenario had a 65 percent reduction — enough to meet U.S. Paris Accord targets for that sector. This gulf derives from the lack of GHG reduction strategies being pursued under current projections.

    “The gap between the pavement scenarios shows that we need to be more proactive in managing the GHG impacts from pavements,” explains Kirchain. “There is tremendous potential, but seeing those gains requires action now.”

    These gains from both ambitious scenarios could occur even as concrete use tripled over the analysis period in comparison to the projected scenarios — a reflection of not only concrete’s growing demand but its potential role in decarbonizing both sectors.

    Though only one of their reduction scenarios (the ambitious pavement scenario) met the Paris Accord targets, that doesn’t preclude the achievement of those targets: many other opportunities exist.

    “In this study, we focused on mainly embodied reductions for concrete,” explains Gregory. “But other construction materials could receive similar treatment.

    “Further reductions could also come from retrofitting existing buildings and by designing structures with durability, hazard resilience, and adaptability in mind in order to minimize the need for reconstruction.”

    This study answers a paradox in the field of sustainability. For the world to become more equitable, more development is necessary. And yet, that very same development may portend greater emissions.

    The MIT team found that isn’t necessarily the case. Even as America continues to use more concrete, the benefits of the material itself and the interventions made to it can make climate targets more achievable.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.