    For campus “porosity hunters,” climate resilience is the goal

    At MIT, it’s not uncommon to see groups navigating campus with smartphones and measuring devices in hand, using the Institute as a test bed for research. During one week this summer more than a dozen students, researchers, and faculty, plus an altimeter, could be seen doing just that as they traveled across MIT to measure the points of entry into campus buildings — including windows, doors, and vents — known as a building’s porosity.

    Why measure campus building porosity?

    The group was part of the MIT Porosity Hunt, a citizen-science effort that is using the MIT campus as a place to test emerging methodologies, instruments, and data collection processes to better understand the potential impact of a changing climate — and specifically storm scenarios resulting from it — on infrastructure. The hunt is a collaborative effort between the Urban Risk Lab, led by director and associate professor of architecture and urbanism Miho Mazereeuw, and the Office of Sustainability (MITOS), aimed at supporting an MIT that is resilient to the impacts of climate change, including flooding and extreme heat events. Working over three days, members of the hunt catalogued openings in dozens of buildings across campus to better support flood mapping and resiliency planning at MIT.

    For Mazereeuw, the data collection project lies at the nexus of her work with the Urban Risk Lab and as a member of MIT’s Climate Resiliency Committee. While the lab’s mission is to “develop methods, prototypes, and technologies to embed risk reduction and preparedness into the design of cities and regions to increase resilience,” the Climate Resiliency Committee — made up of faculty, staff, and researchers — is focused on assessing, planning, and operationalizing a climate-resilient MIT. The work of both the lab and the committee is embedded in the recently released MIT Climate Resiliency Dashboard, a visualization tool that allows users to understand potential flooding impacts of a number of storm scenarios and drive decision-making.

    While the debut of the tool signaled a big advancement in resiliency planning at MIT, some, including Mazereeuw, saw an opportunity for enhancement. In working with Ken Strzepek, a MITOS Faculty Fellow and research scientist at the MIT Center for Global Change Science who was also an integral part of this work, Mazereeuw says she was surprised to learn that even the most sophisticated flood modeling treats buildings as solid blocks. With all buildings being treated the same, despite varying porosity, the dashboard is limited in some flood scenario analysis. To address this, Mazereeuw and others got to work to fill in that additional layer of data, with the citizen science efforts a key factor of that work. “Understanding the porosity of the building is important to understanding how much water actually goes in the building in these scenarios,” she explains.

    Though surveyors are often used to collect and map this type of information, Mazereeuw wanted to leverage the MIT community in order to collect data quickly while engaging students, faculty, and researchers as resiliency stewards for the campus. “It’s important for projects like this to encourage awareness,” she explains. “Generally, when something fails, we notice it, but otherwise we don’t. With climate change bringing on more uncertainty in the scale and intensity of events, we need everyone to be more aware and help us understand things like vulnerabilities.”

    To do this, MITOS and the Urban Risk Lab reached out to more than a dozen students, who were joined by faculty, staff, and researchers, to map porosity of 31 campus buildings connected by basements. The buildings were chosen based on this connectivity, understanding that water that reaches one basement could potentially flow to another.

    Urban Risk Lab research scientists Aditya Barve and Mayank Ojha aided the group’s efforts by creating a mapping app and chatbot to support consistency in reporting and ease of use. Each team member used the app to find buildings where porosity points needed to be mapped. As teams arrived at the building exteriors, they entered their location in the app, which then triggered the Facebook and LINE-powered chatbot on their phone. There, students were guided through measuring the opening, adjusting for elevation to correlate to the City of Cambridge base datum, and, based on observable features, noting the materials and quality of the opening on a one-through-three scale. Over just three days, the team, which included Mazereeuw herself, mapped 1,030 porosity points that will aid in resiliency planning and preparation on campus in a number of ways.
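Each catalogued opening reduces to a small structured record: a location, an opening type, measurements, an elevation tied to the common city datum, and a one-through-three quality rating. A minimal sketch of such a record, with field names that are illustrative rather than the app's actual schema:

```python
from dataclasses import dataclass

@dataclass
class PorosityPoint:
    """One catalogued opening (window, door, or vent) on a building exterior."""
    building: str             # MIT building number, e.g. "W20"
    opening_type: str         # "window" | "door" | "vent"
    width_m: float            # measured opening width
    sill_height_m: float      # height of the opening's lowest edge above local grade
    grade_elevation_m: float  # local ground elevation relative to the Cambridge base datum
    quality: int              # observed condition on the hunt's 1-3 scale

    @property
    def threshold_elevation_m(self) -> float:
        """Elevation at which flood water begins entering this opening,
        expressed against the common city datum so points are comparable."""
        return self.grade_elevation_m + self.sill_height_m

def inundated(points, flood_elevation_m):
    """Screen a flood elevation (in datum terms) against every mapped point."""
    return [p for p in points if p.threshold_elevation_m < flood_elevation_m]
```

Normalizing every measurement to the same base datum is what makes the 1,030 points directly comparable in a single flood model.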

    “The goal is to understand various heights for flood waters around porous spots on campus,” says Mazereeuw. “But the impact can be different depending on the space. We hope this data can inform safety as well as understanding potential damage to research or disruption to campus operations from future storms.”

    The porosity data collection is complete for this round — future hunts will likely be conducted to confirm and converge data — but one team member’s work continues at the basement level of MIT. Katarina Boukin, a PhD student in civil and environmental engineering and PhD student fellow with MITOS, has been focused on methods of collecting data beneath buildings at MIT to understand how they would be impacted if flood water were to enter. “We have a number of connected basements on campus, and if one of them floods, potentially all of them do,” explains Boukin. “By looking at absolute elevation and porosity, we’re connecting the outside to the inside and tracking how much and where water may flow.” With the added data from the Porosity Hunt, a complete picture of vulnerabilities and resiliency opportunities can be shared.
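The connected-basement risk Boukin describes is, at heart, a reachability question on a graph: if one basement floods, which others can water reach? A toy sketch (building names and connections are invented):

```python
from collections import defaultdict, deque

# Hypothetical connectivity: an undirected edge means water entering one
# basement can flow into the other.
connections = [("Bldg 1", "Bldg 3"), ("Bldg 3", "Bldg 5"), ("Bldg 7", "Bldg 9")]

graph = defaultdict(set)
for a, b in connections:
    graph[a].add(b)
    graph[b].add(a)

def at_risk(start):
    """All basements reachable from a flooded one (breadth-first search)."""
    seen, queue = {start}, deque([start])
    while queue:
        for nxt in graph[queue.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen
```

A flood entering "Bldg 1" in this sketch would put "Bldg 3" and "Bldg 5" at risk as well, which is why the hunt targeted the 31 buildings linked by shared basements rather than sampling campus at random.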

    Synthesizing much of this data is where Eva Then ’21 comes in. Then was among the students who worked to capture data points over the three days and is now working in ArcGIS — an online mapping software that also powers the Climate Resiliency Dashboard — to process and visualize the data collected. Once completed, the data will be incorporated into the campus flood model to increase the accuracy of projections on the Climate Resiliency Dashboard. “Over the next decades, the model will serve as an adaptive planning tool to make campus safe and resilient amid growing climate risks,” Then says.

    For Mazereeuw, the Porosity Hunt and the data collected additionally serve as a study in scalability, providing valuable insight into how similar research efforts inspired by the MIT test bed approach could be undertaken and inform policy beyond MIT. She also hopes it will inspire students to launch their own hunts in the future, becoming resiliency stewards for their campus and dorms. “Going through measuring and documenting turns on and shows a new set of goggles — you see campus and buildings in a slightly different way,” she says. “Having people look carefully and document change is a powerful tool in climate and resiliency planning.”

    Mazereeuw also notes that recent devastating flooding events across the country, including those resulting from Hurricane Ida, have put a special focus on this work. “The loss of life that occurred in that storm, including those who died as waters flooded their basement homes, underscores the urgency of this type of research, planning, and readiness.”

    The language of change

    Ryan Conti came to MIT hoping to find a way to do good things in the world. Now a junior, his path is pointing toward a career in climate science, and he is preparing by majoring in both math and computer science and by minoring in philosophy.

    Language for catalyzing change

    Philosophy matters to Conti not only because he is interested in ethics — questions of right and wrong — but because he believes the philosophy of language can illuminate how humans communicate, including the factors that contribute to miscommunication. “I care a lot about climate change, so I want to do scientific work on it, but I also want to help work on policy — which means conveying arguments well and convincing people so that change can occur,” he says.

    Conti says a key reason he came to MIT is that the Institute has such a strong School of Humanities, Arts, and Social Sciences (MIT SHASS). “One of the big factors in my choosing MIT is that the humanities departments here are really, really good,” says Conti, who was named a 2021 Burchard Scholar in honor of his excellence in the Institute’s humanistic fields. “I was considering literature, writing, philosophy, linguistics, all of that.”

    Revitalizing endangered indigenous languages

    Within MIT SHASS, Conti has focused academically on the philosophy of language, and he is also personally pursuing another linguistic passion — the preservation and revitalization of endangered indigenous languages. Raised in Plano, Texas, Conti is a citizen of the Chickasaw Nation, which today has fewer than 50 first-language speakers left.

    “I’ve been studying the language on my own. It’s something I really care about a lot, the entire endeavor of language revitalization,” says Conti, who credits his maternal grandmother with instilling his appreciation for his heritage. “She would always tell me that I should be proud of it,” he says. “As I got older and understood the history of things, the precarious nature of our language, I got more invested.” Conti says working to revitalize the Chickasaw language “could be one of the most important things I do with my life.”

    Already, MIT has given him an opportunity — through the MIT Solve initiative — to participate in a website project for speakers of Makah, an endangered indigenous language of the Pacific Northwest. “The thrust at a high level is trying to use AI [artificial intelligence] to develop speech-to-text software for languages in the Wakashan language family,” he says. The project taught him a lot about natural language processing and automatic speech recognition, he adds, although his website design was not chosen for implementation.

    Glacier dynamics, algorithms — and Quizbowl!

    MIT has also given Conti some experience on the front lines of climate change. Through the Undergraduate Research Opportunities Program, he has been working in MIT’s Glacier Dynamics and Remote Sensing Group, developing machine learning algorithms to improve iceberg detection using satellite imagery. After graduation, Conti plans to pursue a PhD in climate science, perhaps continuing to work in glaciology.

    He also hopes to participate in a Chickasaw program that pairs students with native speakers to become fluent. He says he sees some natural overlap between his two passions. “Issues of indigenous sovereignty and language preservation are inherently linked with climate change, because the effects of climate change fall unequally on poor communities, which are oftentimes indigenous communities,” he says.

    For the moment, however, those plans still lie at least two years in the future. In the meantime, Conti is having fun serving as vice president of the MIT Quizbowl Team, an academic quiz team that competes across the region and often participates in national tournaments. What are Conti’s competition specialties? Literature and philosophy.

    Story prepared by MIT SHASS Communications
    Editor and Designer: Emily Hiestand, Communications Director
    Senior Writer: Kathryn O’Neill, Associate News Manager

    Making roadway spending more sustainable

    The share of federal spending on infrastructure has reached an all-time low, falling from 30 percent in 1960 to just 12 percent in 2018.

    While the nation’s ailing infrastructure will require more funding to reach its full potential, recent MIT research finds that more sustainable and higher performing roads are still possible even with today’s limited budgets.

    The research, conducted by a team of current and former MIT Concrete Sustainability Hub (MIT CSHub) scientists and published in Transportation Research D, finds that a set of innovative planning strategies could improve pavement network environmental and performance outcomes even if budgets don’t increase.

    The paper presents a novel budget allocation tool and pairs it with three innovative strategies for managing pavement networks: a mix of paving materials, a mix of short- and long-term paving actions, and a long evaluation period for those actions.

    This novel approach offers numerous benefits. When applied to a 30-year case study of the Iowa U.S. Route network, the MIT CSHub model and management strategies cut emissions by 20 percent while sustaining current levels of road quality. Achieving this with a conventional planning approach would require the state to spend 32 percent more than it does today. The key to its success is the consideration of a fundamental — but fraught — aspect of pavement asset management: uncertainty.

    Predicting unpredictability

    The average road must last many years and support the traffic of thousands — if not millions — of vehicles. Over that time, a lot can change. Material prices may fluctuate, budgets may tighten, and traffic levels may intensify. Climate (and climate change), too, can hasten unexpected repairs.

    Managing these uncertainties effectively means looking long into the future and anticipating possible changes.

    “Capturing the impacts of uncertainty is essential for making effective paving decisions,” explains Fengdi Guo, the paper’s lead author and a departing CSHub research assistant.

    “Yet, measuring and relating these uncertainties to outcomes is also computationally intensive and expensive. Consequently, many DOTs [departments of transportation] are forced to simplify their analysis to plan maintenance — often resulting in suboptimal spending and outcomes.”

    To give DOTs accessible tools to factor uncertainties into their planning, CSHub researchers have developed a streamlined planning approach. It offers greater specificity and is paired with several new pavement management strategies.

    The planning approach, known as Probabilistic Treatment Path Dependence (PTPD), is based on machine learning and was devised by Guo.

    “Our PTPD model is composed of four steps,” he explains. “These steps are, in order, pavement damage prediction; treatment cost prediction; budget allocation; and pavement network condition evaluation.”

    The model begins by investigating every segment in an entire pavement network and predicting future possibilities for pavement deterioration, cost, and traffic.

    “We [then] run thousands of simulations for each segment in the network to determine the likely cost and performance outcomes for each initial and subsequent sequence, or ‘path,’ of treatment actions,” says Guo. “The treatment paths with the best cost and performance outcomes are selected for each segment, and then across the network.”
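The select-by-simulation idea Guo describes can be sketched in a few lines. This is not the CSHub's PTPD implementation; the treatment paths, costs, condition scores, and uncertainty ranges below are invented purely to illustrate picking a path by its expected cost and performance trade-off:

```python
import random

random.seed(0)

def simulate(path, n=1000):
    """Run many draws of uncertain cost and condition for one treatment path."""
    costs, conditions = [], []
    for _ in range(n):
        # Material price volatility perturbs each scheduled treatment's cost.
        cost = sum(base * random.uniform(0.8, 1.3) for base in path["costs"])
        # Performance outcomes are uncertain as well.
        condition = path["condition"] * random.uniform(0.9, 1.1)
        costs.append(cost)
        conditions.append(condition)
    return sum(costs) / n, sum(conditions) / n

paths = [
    {"name": "short-term fixes", "costs": [40, 40, 40], "condition": 0.70},
    {"name": "one long-term reconstruction", "costs": [100], "condition": 0.85},
]

def score(path, weight=100):
    """Lower is better: expected cost minus a credit for expected condition."""
    mean_cost, mean_condition = simulate(path)
    return mean_cost - weight * mean_condition

best = min(paths, key=score)
```

In the real model this comparison runs for every segment in the network, and the winning paths are then reconciled against the network-wide budget.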

    The PTPD model not only seeks to minimize costs to agencies but also to users — in this case, drivers. These user costs can come primarily in the form of excess fuel consumption due to poor road quality.

    “One improvement in our analysis is the incorporation of electric vehicle uptake into our cost and environmental impact predictions,” says Randolph Kirchain, a principal research scientist at MIT CSHub and the MIT Materials Research Laboratory (MRL) and one of the paper’s co-authors. “Since the vehicle fleet will change over the next several decades due to electric vehicle adoption, we made sure to consider how these changes might impact our predictions of excess energy consumption.”

    After developing the PTPD model, Guo wanted to see how the efficacy of various pavement management strategies might differ. To do this, he developed a sophisticated deterioration prediction model.

    A novel aspect of this deterioration model is its treatment of multiple deterioration metrics at once. Using a multi-output neural network, a tool of artificial intelligence, the model can predict several forms of pavement deterioration simultaneously, thereby accounting for the correlations among them.
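The structural idea behind a multi-output network is a shared representation feeding several output heads, so related metrics are predicted jointly rather than by independent models. A minimal sketch (the layer sizes are arbitrary and the weights are random rather than trained; the paper's actual architecture is not described here):

```python
import numpy as np

rng = np.random.default_rng(42)

# One shared hidden layer feeding two output columns (e.g. two deterioration
# metrics), so both predictions draw on the same learned features.
n_features, n_hidden, n_outputs = 6, 16, 2

W1 = rng.normal(size=(n_features, n_hidden)) * 0.3
b1 = np.zeros(n_hidden)
W2 = rng.normal(size=(n_hidden, n_outputs)) * 0.3
b2 = np.zeros(n_outputs)

def predict(x):
    """Forward pass: x is (n_segments, n_features) of pavement attributes."""
    h = np.tanh(x @ W1 + b1)   # shared representation
    return h @ W2 + b2         # one column per deterioration metric

segments = rng.normal(size=(5, n_features))  # 5 hypothetical pavement segments
preds = predict(segments)                    # shape (5, 2)
```

Because both outputs are computed from the same hidden layer, training such a network pushes it to encode whatever pavement features the metrics have in common, which is how the correlations between deterioration forms get captured.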

    The MIT team selected two key metrics to compare the effectiveness of various treatment paths: pavement quality and greenhouse gas emissions. These metrics were then calculated for all pavement segments in the Iowa network.

    Improvement through variation

     The MIT model can help DOTs make better decisions, but that decision-making is ultimately constrained by the potential options considered.

    Guo and his colleagues, therefore, sought to expand current decision-making paradigms by exploring a broad set of network management strategies and evaluating them with their PTPD approach. Based on that evaluation, the team discovered that networks had the best outcomes when the management strategy includes using a mix of paving materials, a variety of long- and short-term paving repair actions (treatments), and longer time periods on which to base paving decisions.

    They then compared this proposed approach with a baseline management approach that reflects current, widespread practices: the use of solely asphalt materials, short-term treatments, and a five-year period for evaluating the outcomes of paving actions.

    With these two approaches established, the team used them to plan 30 years of maintenance across the Iowa U.S. Route network. They then measured the subsequent road quality and emissions.

    Their case study found that the MIT approach offered substantial benefits. Pavement-related greenhouse gas emissions would fall by around 20 percent across the network over the whole period. Pavement performance improved as well. To achieve the same level of road quality as the MIT approach, the baseline approach would need a 32 percent greater budget.

    “It’s worth noting,” says Guo, “that since conventional practices employ less effective allocation tools, the difference between them and the CSHub approach should be even larger in practice.”

    Much of the improvement derived from the precision of the CSHub planning model. But the three treatment strategies also play a key role.

    “We’ve found that a mix of asphalt and concrete paving materials allows DOTs to not only find materials best-suited to certain projects, but also mitigates the risk of material price volatility over time,” says Kirchain.

    It’s a similar story with a mix of paving actions. Employing a mix of short- and long-term fixes gives DOTs the flexibility to choose the right action for the right project.

    The final strategy, a long-term evaluation period, enables DOTs to see the entire scope of their choices. If the ramifications of a decision are predicted over only five years, many long-term implications won’t be considered. Expanding the window for planning, then, can introduce beneficial, long-term options.
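The horizon effect is easy to see with a toy life-cycle comparison. With invented numbers, a cheap short-lived treatment looks best over five years but loses over thirty, because it must be reapplied repeatedly:

```python
import math

def cumulative_cost(initial_cost, lifetime_years, horizon_years):
    """Total spend if a treatment is reapplied each time it wears out
    within the evaluation window."""
    applications = math.ceil(horizon_years / lifetime_years)
    return applications * initial_cost

# Illustrative numbers only: a cheap short-lived fix vs. a costly durable one.
short_5 = cumulative_cost(30, 5, 5)     # 5-year window:  30
long_5 = cumulative_cost(100, 30, 5)    # 5-year window: 100 (looks worse)
short_30 = cumulative_cost(30, 5, 30)   # 30-year window: 180
long_30 = cumulative_cost(100, 30, 30)  # 30-year window: 100 (now wins)
```

A five-year window would never surface the durable option's advantage; only the longer evaluation period makes it visible.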

    It’s not surprising that paving decisions are daunting to make; their impacts on the environment, driver safety, and budget levels are long-lasting. But rather than simplify this fraught process, the CSHub method aims to reflect its complexity. The result is an approach that provides DOTs with the tools to do more with less.

    This research was supported through the MIT Concrete Sustainability Hub by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

    Predicting building emissions across the US

    The United States is entering a building boom. Between 2017 and 2050, it will build the equivalent of New York City 20 times over. Yet, to meet climate targets, the nation must also significantly reduce the greenhouse gas (GHG) emissions of its buildings, which comprise 27 percent of the nation’s total emissions.

    A team of current and former MIT Concrete Sustainability Hub (CSHub) researchers is addressing these conflicting demands with the aim of giving policymakers the tools and information to act. They have detailed the results of their collaboration in a recent paper in the journal Applied Energy that projects emissions for all buildings across the United States under two GHG reduction scenarios.

    Their paper found that “embodied” emissions — those from materials production and construction — would represent around a quarter of emissions between 2016 and 2050 despite extensive construction.

    Further, many regions would have varying priorities for GHG reductions; some, like the West, would benefit most from reductions to embodied emissions, while others, like parts of the Midwest, would see the greatest payoff from interventions to emissions from energy consumption. If these regional priorities were addressed aggressively, building sector emissions could be reduced by around 30 percent between 2016 and 2050.

    Quantifying contradictions

    Modern buildings are far more complex — and efficient — than their predecessors. Due to new technologies and more stringent building codes, they can offer lower energy consumption and operational emissions. And yet, more-efficient materials and improved construction standards can also generate greater embodied emissions.

    Concrete, in many ways, epitomizes this tradeoff. Though its durability can minimize energy-intensive repairs over a building’s operational life, the scale of its production means that it contributes to a large proportion of the embodied impacts in the building sector.

    As such, the team centered GHG reductions for concrete in its analysis.

    “We took a bottom-up approach, developing reference designs based on a set of residential and commercial building models,” explains Ehsan Vahidi, an assistant professor at the University of Nevada at Reno and a former CSHub postdoc. “These designs were differentiated by roof and slab insulation, HVAC efficiency, and construction materials — chiefly concrete and wood.”

    After measuring the operational and embodied GHG emissions for each reference design, the team scaled up their results to the county level and then national level based on building stock forecasts. This allowed them to estimate the emissions of the entire building sector between 2016 and 2050.
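The scaling step can be sketched as a simple bottom-up aggregation: per-design emission intensities multiplied by forecast stock counts and summed county by county. All numbers below are made up for illustration and are not the paper's data:

```python
# Hypothetical per-design intensities, in tonnes CO2e: a one-time embodied
# term plus an annual operational term.
designs = {
    "residential_concrete": {"embodied_t": 35.0, "operational_t_per_yr": 4.0},
    "commercial_wood":      {"embodied_t": 60.0, "operational_t_per_yr": 9.0},
}

# Forecast counts of new buildings by design, per county (invented).
county_stock = {
    "Middlesex": {"residential_concrete": 1200, "commercial_wood": 150},
    "Suffolk":   {"residential_concrete": 800,  "commercial_wood": 300},
}

def county_emissions(stock, years=34):  # e.g. the 2016-2050 window
    total = 0.0
    for design, count in stock.items():
        d = designs[design]
        total += count * (d["embodied_t"] + years * d["operational_t_per_yr"])
    return total

national = sum(county_emissions(s) for s in county_stock.values())
```

Keeping the aggregation at the county level is what later lets the analysis surface the regional differences discussed below, rather than only a single national total.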

    To understand how various interventions could cut GHG emissions, researchers ran two different scenarios — a “projected” and an “ambitious” scenario — through their framework.

    The projected scenario corresponded to current trends. It assumed that grid decarbonization would follow Energy Information Administration predictions, and it included the widespread adoption of new energy codes; efficiency improvements in lighting and appliances; and, for concrete, the implementation of 50 percent low-carbon cements and binders in all new concrete construction along with the adoption of full carbon capture, storage, and utilization (CCUS) for all cement and concrete emissions.

    “Our ambitious scenario was intended to reflect a future where more aggressive actions are taken to reduce GHG emissions and achieve the targets,” says Vahidi. “Therefore, the ambitious scenario took these same strategies [of the projected scenario] but featured more aggressive targets for their implementation.”

    For instance, it assumed a 33 percent reduction in grid emissions by 2050 and moved the projected deadlines for lighting and appliances and thermal insulation forward by five and 10 years, respectively. Concrete decarbonization occurred far more quickly as well.

    Reductions and variations

    The extensive growth forecast for the U.S. building sector will inevitably generate a sizable number of emissions. But how much can this figure be minimized?

    Without the implementation of any GHG reduction strategies, the team found that the building sector would emit 62 gigatons CO2 equivalent between 2016 and 2050. That’s comparable to the emissions generated from 156 trillion passenger vehicle miles traveled.

    But both GHG reduction scenarios could cut the emissions from this unmitigated, business-as-usual scenario significantly.

    Under the projected scenario, emissions would fall to 45 gigatons CO2 equivalent — a 27 percent decrease over the analysis period. The ambitious scenario would offer a further 6 percent reduction over the projected scenario, reaching 40 gigatons CO2 equivalent — like removing around 55 trillion passenger vehicle miles from the road over the period.
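The quoted equivalences hang together arithmetically: the 62-gigaton baseline paired with 156 trillion vehicle miles implies an emission factor, and applying that factor to the ambitious scenario's savings recovers the roughly 55 trillion miles cited:

```python
# Cross-checking the article's figures.
baseline_gt, baseline_miles_tn = 62, 156
gt_per_trillion_miles = baseline_gt / baseline_miles_tn  # ~0.40 Gt per trillion miles

projected_gt, ambitious_gt = 45, 40

# (62 - 45) / 62: the quoted ~27 percent cut under the projected scenario.
projected_cut = (baseline_gt - projected_gt) / baseline_gt

# The ambitious scenario's total savings, converted back to vehicle miles.
ambitious_miles_tn = (baseline_gt - ambitious_gt) / gt_per_trillion_miles  # ~55
```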

    “In both scenarios, the largest contributor to reductions was the greening of the energy grid,” notes Vahidi. “Other notable opportunities for reductions were from increasing the efficiency of lighting, HVAC, and appliances. Combined, these four attributes contributed to 85 percent of the emissions over the analysis period. Improvements to them offered the greatest potential emissions reductions.”

    The remaining attributes, such as thermal insulation and low-carbon concrete, had a smaller impact on emissions and, consequently, offered smaller reduction opportunities. That’s because these two attributes were only applied to new construction in the analysis, which was outnumbered by existing structures throughout the period.

    The disparities in impact between strategies aimed at new and existing structures underscore a broader finding: Despite extensive construction over the period, embodied emissions would comprise just 23 percent of cumulative emissions between 2016 and 2050, with the remainder coming primarily from operation.  

    “This is a consequence of existing structures far outnumbering new structures,” explains Jasmina Burek, a CSHub postdoc and an incoming assistant professor at the University of Massachusetts Lowell. “The operational emissions generated by all new and existing structures between 2016 and 2050 will always greatly exceed the embodied emissions of new structures at any given time, even as buildings become more efficient and the grid gets greener.”

    Yet the emissions reductions from both scenarios were not distributed evenly across the entire country. The team identified several regional variations that could have implications for how policymakers must act to reduce building sector emissions.

    “We found that western regions in the United States would see the greatest reduction opportunities from interventions to residential emissions, which would constitute 90 percent of the region’s total emissions over the analysis period,” says Vahidi.

    The predominance of residential emissions stems from the region’s ongoing population surge and its subsequent growth in housing stock. Proposed solutions would include CCUS and low-carbon binders for concrete production, and improvements to energy codes aimed at residential buildings.

    As with the West, ideal solutions for the Southeast would include CCUS, low-carbon binders, and improved energy codes.

    “In the case of Southeastern regions, interventions should equally target commercial and residential buildings, which we found were split more evenly among the building stock,” explains Burek. “Due to the stringent energy codes in both regions, interventions to operational emissions were less impactful than those to embodied emissions.”

    Much of the Midwest saw the inverse outcome. Its energy mix remains one of the most carbon-intensive in the nation and improvements to energy efficiency and the grid would have a large payoff — particularly in Missouri, Kansas, and Colorado.

    New England and California would see the smallest reductions. As their already-strict energy codes would limit further operational reductions, opportunities to reduce embodied emissions would be the most impactful.

    This tremendous regional variation uncovered by the MIT team is in many ways a reflection of the great demographic and geographic diversity of the nation as a whole. And there are still further variables to consider.

    In addition to GHG emissions, future research could consider other environmental impacts, like water consumption and air quality. Other mitigation strategies to consider include longer building lifespans, retrofitting, rooftop solar, and recycling and reuse.

    In this sense, their findings represent the lower bounds of what is possible in the building sector. And even if further improvements are ultimately possible, they’ve shown that regional variation will invariably inform those environmental impact reductions.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

    Research collaboration puts climate-resilient crops in sight

    Any houseplant owner knows that changes in the amount of water or sunlight a plant receives can put it under immense stress. A dying plant brings certain disappointment to anyone with a green thumb. 

    But for farmers who make their living by successfully growing plants, and whose crops may nourish hundreds or thousands of people, the devastation of failing flora is that much greater. As climate change is poised to cause increasingly unpredictable weather patterns globally, crops may be subject to more extreme environmental conditions like droughts, fluctuating temperatures, floods, and wildfire. 

    Climate scientists and food systems researchers worry about the stress climate change may put on crops, and on global food security. In an ambitious interdisciplinary project funded by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), David Des Marais, the Gale Assistant Professor in the Department of Civil and Environmental Engineering at MIT, and Caroline Uhler, an associate professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, are investigating how plant genes communicate with one another under stress. Their research results can be used to breed plants more resilient to climate change.

    Crops in trouble

    Governing plants’ responses to environmental stress are gene regulatory networks, or GRNs, which guide the development and behaviors of living things. A GRN may be composed of thousands of genes and proteins that all communicate with one another. GRNs help a particular cell, tissue, or organism respond to environmental changes by signaling certain genes to turn their expression on or off.

    Even seemingly minor or short-term changes in weather patterns can have large effects on crop yield and food security. An environmental trigger, like a lack of water during a crucial phase of plant development, can turn a gene on or off, and is likely to affect many others in the GRN. For example, without water, a gene enabling photosynthesis may switch off. This can create a domino effect, where the genes that rely on those regulating photosynthesis are silenced, and the cycle continues. As a result, when photosynthesis is halted, the plant may experience other detrimental side effects, like no longer being able to reproduce or defend against pathogens. The chain reaction could even kill a plant before it has the chance to be revived by a big rain.
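The domino effect described above can be mimicked with a toy Boolean network, where each gene is on only if all the genes it depends on are on. The gene names and dependencies here are invented for illustration, not drawn from a real GRN:

```python
# Downstream gene -> list of genes it requires to stay on.
dependencies = {
    "photosynthesis":   ["water_signal"],
    "sugar_transport":  ["photosynthesis"],
    "reproduction":     ["sugar_transport"],
    "pathogen_defense": ["sugar_transport"],
}

def network_state(water_available: bool) -> dict:
    """Evaluate genes in dependency order; any gene with an off input is silenced."""
    state = {"water_signal": water_available}
    for gene in ["photosynthesis", "sugar_transport",
                 "reproduction", "pathogen_defense"]:
        state[gene] = all(state[g] for g in dependencies[gene])
    return state

drought = network_state(water_available=False)
# Switching off the water signal silences photosynthesis, and the silencing
# cascades down to reproduction and pathogen defense.
```

Real GRNs are vastly larger and their regulatory links are probabilistic and bidirectional, which is precisely why the causal-inference algorithms Uhler describes below are needed to untangle them.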

    Des Marais says he wishes there was a way to stop those genes from completely shutting off in such a situation. To do that, scientists would need to better understand how exactly gene networks respond to different environmental triggers. Bringing light to this molecular process is exactly what he aims to do in this collaborative research effort.

    Solving complex problems across disciplines

    Despite their crucial importance, GRNs are difficult to study because of how complex and interconnected they are. Usually, to understand how a particular gene is affecting others, biologists must silence one gene and see how the others in the network respond. 

    For years, scientists have sought an algorithm that could synthesize the massive amount of information contained in GRNs to “identify correct regulatory relationships among genes,” according to a 2019 article in the Encyclopedia of Bioinformatics and Computational Biology.

    “A GRN can be seen as a large causal network, and understanding the effects that silencing one gene has on all other genes requires understanding the causal relationships among the genes,” says Uhler. “These are exactly the kinds of algorithms my group develops.”
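The idea of silencing a gene and tracing the causal consequences can be made concrete with a tiny linear structural equation model. This is a minimal sketch under stated assumptions (a three-gene chain with made-up edge weights), not the group's actual algorithm, which handles far more general networks.

```python
import numpy as np

# Minimal linear structural equation model over a three-gene chain
# g1 -> g2 -> g3, with hypothetical edge weights w12 and w23.
rng = np.random.default_rng(0)
w12, w23 = 2.0, -1.5

def simulate(n, do_g1=None):
    """Draw n samples; optionally intervene by clamping g1,
    analogous to experimentally silencing that gene."""
    g1 = np.full(n, do_g1) if do_g1 is not None else rng.normal(1.0, 0.1, n)
    g2 = w12 * g1 + rng.normal(0, 0.1, n)
    g3 = w23 * g2 + rng.normal(0, 0.1, n)
    return g1, g2, g3

# Observationally, g3 centers near w23 * w12 * 1.0 = -3.0.
_, _, g3_obs = simulate(10_000)
# Under the intervention do(g1 = 0), the effect propagates and g3 centers near 0.
_, _, g3_do = simulate(10_000, do_g1=0.0)
```

Knowing the causal structure is what lets us predict the interventional mean without running the knockout experiment; inferring that structure from data is the hard problem the quote refers to.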

    Des Marais and Uhler’s project aims to unravel these complex communication networks and discover how to breed crops that are more resilient to the increased droughts, flooding, and erratic weather patterns that climate change is already causing globally.

    Climate change is not the only pressure: by 2050, the world will demand 70 percent more food to feed a booming population. “Food systems challenges cannot be addressed individually in disciplinary or topic area silos,” says Greg Sixt, J-WAFS’ research manager for climate and food systems. “They must be addressed in a systems context that reflects the interconnected nature of the food system.”

    Des Marais’ background is in biology, and Uhler’s in statistics. “Dave’s project with Caroline was essentially experimental,” says Renee J. Robins, J-WAFS’ executive director. “This kind of exploratory research is exactly what the J-WAFS seed grant program is for.”

    Getting inside gene regulatory networks

    Des Marais and Uhler’s work begins in a windowless basement on MIT’s campus, where 300 genetically identical Brachypodium distachyon plants grow in large, temperature-controlled chambers. The plant, which contains more than 30,000 genes, is a good model for studying important cereal crops like wheat, barley, maize, and millet. For three weeks, all plants receive the same temperature, humidity, light, and water. Then, half are slowly tapered off water, simulating drought-like conditions.

    Six days into the forced drought, the plants are clearly suffering. Des Marais’ PhD student Jie Yun takes tissues from 50 hydrated and 50 dry plants, freezes them in liquid nitrogen to immediately halt metabolic activity, grinds them up into a fine powder, and chemically separates the genetic material. The genes from all 100 samples are then sequenced at a lab across the street.

    The team is left with a spreadsheet listing the 30,000 genes found in each of the 100 plants at the moment they were frozen, and how many copies there were. Uhler’s PhD student Anastasiya Belyaeva inputs the massive spreadsheet into the computer program she developed and runs her novel algorithm. Within a few hours, the group can see which genes were most active in one condition over another, how the genes were communicating, and which were causing changes in others. 
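A first step in an analysis like this is comparing per-gene expression between the two conditions. The following is a hypothetical sketch with simulated count data (five genes rather than 30,000, and a simple log fold-change rather than the group's causal algorithm), just to show the shape of the comparison.

```python
import numpy as np

# Simulated counts matrix of shape (genes, samples): rows are genes,
# columns are plants. All values here are invented for illustration.
rng = np.random.default_rng(1)
n_genes = 5
counts_wet = rng.poisson(50, size=(n_genes, 50))   # 50 hydrated plants
counts_dry = rng.poisson(50, size=(n_genes, 50))   # 50 droughted plants
counts_dry[2] *= 4                                 # pretend gene 2 is induced by drought

# Log2 fold change of mean expression, dry vs. wet (+1 avoids log of zero).
log2_fc = np.log2((counts_dry.mean(axis=1) + 1) / (counts_wet.mean(axis=1) + 1))
most_induced = int(np.argmax(log2_fc))  # gene index with largest drought response
```

A ranking like `log2_fc` says which genes were most active in one condition over another; working out which genes were *causing* changes in others is the harder step the novel algorithm addresses.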

    The methodology captures important subtleties that could allow researchers to eventually alter gene pathways and breed more resilient crops. “When you expose a plant to drought stress, it’s not like there’s some canonical response,” Des Marais says. “There’s lots of things going on. It’s turning this physiologic process up, this one down, this one didn’t exist before, and now suddenly is turned on.” 

    In addition to Des Marais and Uhler’s research, J-WAFS has funded projects in food and water from researchers in 29 departments across all five MIT schools as well as the MIT Schwarzman College of Computing. J-WAFS seed grants typically fund seven to eight new projects every year.

    “The grants are really aimed at catalyzing new ideas, providing the sort of support [for MIT researchers] to be pushing boundaries, and also bringing in faculty who may have some interesting ideas that they haven’t yet applied to water or food concerns,” Robins says. “It’s an avenue for researchers all over the Institute to apply their ideas to water and food.”

    Alison Gold is a student in MIT’s Graduate Program in Science Writing.


    MIT appoints members of new faculty committee to drive climate action plan

    In May, responding to the world’s accelerating climate crisis, MIT issued an ambitious new plan, “Fast Forward: MIT’s Climate Action Plan for the Decade.” The plan outlines a broad array of new and expanded initiatives across campus to build on the Institute’s longstanding climate work.

    Now, to unite these varied climate efforts, maximize their impact, and identify new ways for MIT to contribute climate solutions, the Institute has appointed more than a dozen faculty members to a new committee established by the Fast Forward plan, named the Climate Nucleus.

    The committee includes leaders of a number of climate- and energy-focused departments, labs, and centers that have significant responsibilities under the plan. Its membership spans all five schools and the MIT Schwarzman College of Computing. Professors Noelle Selin and Anne White have agreed to co-chair the Climate Nucleus for a term of three years.

    “I am thrilled and grateful that Noelle and Anne have agreed to step up to this important task,” says Maria T. Zuber, MIT’s vice president for research. “Under their leadership, I’m confident that the Climate Nucleus will bring new ideas and new energy to making the strategy laid out in the climate action plan a reality.”

    The Climate Nucleus has broad responsibility for the management and implementation of the Fast Forward plan across its five areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.

    Over the next few years, the nucleus will aim to advance MIT’s contribution to a two-track approach to decarbonizing the global economy, an approach described in the Fast Forward plan. First, humanity must go as far and as fast as it can to reduce greenhouse gas emissions using existing tools and methods. Second, societies need to invest in, invent, and deploy new tools — and promote new institutions and policies — to get the global economy to net-zero emissions by mid-century.

    The co-chairs of the nucleus bring significant climate and energy expertise, along with deep knowledge of the MIT community, to their task.

    Selin is a professor with joint appointments in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences. She is also the director of the Technology and Policy Program. She began at MIT in 2007 as a postdoc with the Center for Global Change Science and the Joint Program on the Science and Policy of Global Change. Her research uses modeling to inform decision-making on air pollution, climate change, and hazardous substances.

    “Climate change affects everything we do at MIT. For the new climate action plan to be effective, the Climate Nucleus will need to engage the entire MIT community and beyond, including policymakers as well as people and communities most affected by climate change,” says Selin. “I look forward to helping to guide this effort.”

    White is the School of Engineering’s Distinguished Professor of Engineering and the head of the Department of Nuclear Science and Engineering. She joined the MIT faculty in 2009 and has also served as the associate director of MIT’s Plasma Science and Fusion Center. Her research focuses on assessing and refining the mathematical models used in the design of fusion energy devices, such as tokamaks, which hold promise for delivering limitless zero-carbon energy.

    “The latest IPCC report underscores the fact that we have no time to lose in decarbonizing the global economy quickly. This is a problem that demands we use every tool in our toolbox — and develop new ones — and we’re committed to doing that,” says White, referring to an August 2021 report from the Intergovernmental Panel on Climate Change, a UN climate science body, that found that climate change has already affected every region on Earth and is intensifying. “We must train future technical and policy leaders, expand opportunities for students to work on climate problems, and weave sustainability into every one of MIT’s activities. I am honored to be a part of helping foster this Institute-wide collaboration.”

    A first order of business for the Climate Nucleus will be standing up three working groups to address specific aspects of climate action at MIT: climate education, climate policy, and MIT’s own carbon footprint. The working groups will be responsible for making progress on their particular areas of focus under the plan and will make recommendations to the nucleus on ways of increasing MIT’s effectiveness and impact. The working groups will also include student, staff, and alumni members, so that the entire MIT community has the opportunity to contribute to the plan’s implementation.  

    The nucleus, in turn, will report and make regular recommendations to the Climate Steering Committee, a senior-level team consisting of Zuber; Richard Lester, the associate provost for international activities; Glen Shor, the executive vice president and treasurer; and the deans of the five schools and the MIT Schwarzman College of Computing. The new plan created the Climate Steering Committee to ensure that climate efforts will receive both the high-level attention and the resources needed to succeed.

    Together the new committees and working groups are meant to form a robust new infrastructure for uniting and coordinating MIT’s climate action efforts in order to maximize their impact. They replace the Climate Action Advisory Committee, which was created in 2016 following the release of MIT’s first climate action plan.

    In addition to Selin and White, the members of the Climate Nucleus are:

    Bob Armstrong, professor in the Department of Chemical Engineering and director of the MIT Energy Initiative;
    Dara Entekhabi, professor in the departments of Civil and Environmental Engineering and Earth, Atmospheric and Planetary Sciences;
    John Fernández, professor in the Department of Architecture and director of the Environmental Solutions Initiative;
    Stefan Helmreich, professor in the Department of Anthropology;
    Christopher Knittel, professor in the MIT Sloan School of Management and director of the Center for Energy and Environmental Policy Research;
    John Lienhard, professor in the Department of Mechanical Engineering and director of the Abdul Latif Jameel Water and Food Systems Lab;
    Julie Newman, director of the Office of Sustainability and lecturer in the Department of Urban Studies and Planning;
    Elsa Olivetti, professor in the Department of Materials Science and Engineering and co-director of the Climate and Sustainability Consortium;
    Christoph Reinhart, professor in the Department of Architecture and director of the Building Technology Program;
    John Sterman, professor in the MIT Sloan School of Management and director of the Sloan Sustainability Initiative;
    Rob van der Hilst, professor and head of the Department of Earth, Atmospheric and Planetary Sciences; and
    Chris Zegras, professor and head of the Department of Urban Studies and Planning.


    Concrete’s role in reducing building and pavement emissions

    Encountering concrete is a common, even routine, occurrence. And that’s exactly what makes concrete exceptional.

    As the most consumed material after water, concrete is indispensable to the many essential systems — from roads to buildings — in which it is used.

    But due to its extensive use, concrete production also contributes to around 1 percent of emissions in the United States and remains one of several carbon-intensive industries globally. Tackling climate change, then, will mean reducing the environmental impacts of concrete, even as its use continues to increase.

    In a new paper in the Proceedings of the National Academy of Sciences, a team of current and former researchers at the MIT Concrete Sustainability Hub (CSHub) outlines how this can be achieved.

    They present an extensive life-cycle assessment of the building and pavements sectors that estimates how greenhouse gas (GHG) reduction strategies — including those for concrete and cement — could minimize the cumulative emissions of each sector and how those reductions would compare to national GHG reduction targets. 

    The team found that, if reduction strategies were implemented, the emissions for pavements and buildings between 2016 and 2050 could fall by up to 65 percent and 57 percent, respectively, even if concrete use accelerated greatly over that period. These are close to U.S. reduction targets set as part of the Paris Climate Accords. The solutions considered would also enable concrete production for both sectors to attain carbon neutrality by 2050.

    Despite continued grid decarbonization and increases in fuel efficiency, they found that the vast majority of the GHG emissions from new buildings and pavements during this period would derive from operational energy consumption rather than so-called embodied emissions — emissions from materials production and construction.

    Sources and solutions

    The consumption of concrete, due to its versatility, durability, constructability, and role in economic development, has been projected to increase around the world.

    While it is essential to consider the embodied impacts of ongoing concrete production, it is equally essential to place these initial impacts in the context of the material’s life cycle.

    Due to concrete’s unique attributes, it can influence the long-term sustainability performance of the systems in which it is used. Concrete pavements, for instance, can reduce vehicle fuel consumption, while concrete structures can endure hazards without needing energy- and materials-intensive repairs.

    Concrete’s impacts, then, are as complex as the material itself — a carefully proportioned mixture of cement powder, water, sand, and aggregates. Untangling concrete’s contribution to the operational and embodied impacts of buildings and pavements is essential for planning GHG reductions in both sectors.

    Set of scenarios

    In their paper, CSHub researchers forecast the potential greenhouse gas emissions from the building and pavements sectors as numerous emissions reduction strategies were introduced between 2016 and 2050.

    Since both of these sectors are immense and rapidly evolving, modeling them required an intricate framework.

    “We don’t have details on every building and pavement in the United States,” explains Randolph Kirchain, a research scientist at the Materials Research Laboratory and co-director of CSHub.

    “As such, we began by developing reference designs, which are intended to be representative of current and future buildings and pavements. These were adapted to be appropriate for 14 different climate zones in the United States and then distributed across the U.S. based on data from the U.S. Census and the Federal Highway Administration.”

    To reflect the complexity of these systems, their models had to be as high-resolution as possible.

    “In the pavements sector, we collected the current stock of the U.S. network based on high-precision 10-mile segments, along with the surface conditions, traffic, thickness, lane width, and number of lanes for each segment,” says Hessam AzariJafari, a postdoc at CSHub and a co-author on the paper.

    “To model future paving actions over the analysis period, we assumed four climate conditions; four road types; asphalt, concrete, and composite pavement structures; as well as major, minor, and reconstruction paving actions specified for each climate condition.”

    Using this framework, they analyzed a “projected” and an “ambitious” scenario of reduction strategies and system attributes for buildings and pavements over the 34-year analysis period. The scenarios were defined by the timing and intensity of GHG reduction strategies.

    As its name might suggest, the projected scenario reflected current trends. For the building sector, solutions encompassed expected grid decarbonization and improvements to building codes and energy efficiency that are currently being implemented across the country. For pavements, the sole projected solution was improvements to vehicle fuel economy. That’s because as vehicle efficiency continues to increase, excess vehicle emissions due to poor road quality will also decrease.

    Both the projected scenarios for buildings and pavements featured the gradual introduction of low-carbon concrete strategies, such as recycled content, carbon capture in cement production, and the use of captured carbon to produce aggregates and cure concrete.

    “In the ambitious scenario,” explains Kirchain, “we went beyond projected trends and explored reasonable changes that exceed current policies and [industry] commitments.”

    Here, the building sector strategies were the same, but implemented more aggressively. The pavements sector also abided by more aggressive targets and incorporated several novel strategies, including investing more to yield smoother roads, selectively applying concrete overlays to produce stiffer pavements, and introducing more reflective pavements — which can change the Earth’s energy balance by sending more energy out of the atmosphere.

    Results

    As the grid becomes greener and new homes and buildings become more efficient, many experts have predicted that the operational impacts of new construction projects will shrink in comparison to their embodied emissions.

    “What our life-cycle assessment found,” says Jeremy Gregory, the executive director of the MIT Climate Consortium and the lead author on the paper, “is that [this prediction] isn’t necessarily the case.”

    “Instead, we found that more than 80 percent of the total emissions from new buildings and pavements between 2016 and 2050 would derive from their operation.”

    In fact, the study found that operations will create the majority of emissions through 2050 unless all energy sources — electrical and thermal — are carbon-neutral by 2040. This suggests that ambitious interventions to the electricity grid and other sources of operational emissions can have the greatest impact.

    Their predictions for emissions reductions generated additional insights.  

    For the building sector, they found that the projected scenario would lead to a reduction of 49 percent compared to 2016 levels, and that the ambitious scenario provided a 57 percent reduction.

    As most buildings during the analysis period were existing rather than new, energy consumption dominated emissions in both scenarios. Consequently, decarbonizing the electricity grid and improving the efficiency of appliances and lighting led to the greatest improvements for buildings, they found.

    In contrast to the building sector, the pavements scenarios had a sizeable gulf between outcomes: the projected scenario led to only a 14 percent reduction while the ambitious scenario had a 65 percent reduction — enough to meet U.S. Paris Accord targets for that sector. This gulf derives from the lack of GHG reduction strategies being pursued under current projections.

    “The gap between the pavement scenarios shows that we need to be more proactive in managing the GHG impacts from pavements,” explains Kirchain. “There is tremendous potential, but seeing those gains requires action now.”

    These gains from both ambitious scenarios could occur even as concrete use tripled over the analysis period in comparison to the projected scenarios — a reflection of not only concrete’s growing demand but its potential role in decarbonizing both sectors.

    Though only one of their reduction scenarios (the ambitious pavement scenario) met the Paris Accord targets, that doesn’t preclude the achievement of those targets: many other opportunities exist.

    “In this study, we focused on mainly embodied reductions for concrete,” explains Gregory. “But other construction materials could receive similar treatment.

    “Further reductions could also come from retrofitting existing buildings and by designing structures with durability, hazard resilience, and adaptability in mind in order to minimize the need for reconstruction.”

    This study answers a paradox in the field of sustainability. For the world to become more equitable, more development is necessary. And yet, that very same development may portend greater emissions.

    The MIT team found that isn’t necessarily the case. Even as America continues to use more concrete, the benefits of the material itself and the interventions made to it can make climate targets more achievable.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.


    3 Questions: Daniel Cohn on the benefits of high-efficiency, flexible-fuel engines for heavy-duty trucking

    The California Air Resources Board has adopted a regulation that requires truck and engine manufacturers to reduce the nitrogen oxide (NOx) emissions from new heavy-duty trucks by 90 percent starting in 2027. NOx from heavy-duty trucks is one of the main sources of air pollution, creating smog and threatening respiratory health. This regulation requires the largest air pollution cuts in California in more than a decade. How can manufacturers achieve this aggressive goal efficiently and affordably?

    Daniel Cohn, a research scientist at the MIT Energy Initiative, and Leslie Bromberg, a principal research scientist at the MIT Plasma Science and Fusion Center, have been working on a high-efficiency, gasoline-ethanol engine that is cleaner and more cost-effective than existing diesel engine technologies. Here, Cohn explains the flexible-fuel engine approach and why it may be the most realistic solution — in the near term — to help California meet its stringent vehicle emission reduction goals. The research was sponsored by the Arthur Samberg MIT Energy Innovation fund.

    Q. How does your high-efficiency, flexible-fuel gasoline engine technology work?

    A. Our goal is to provide an affordable solution for heavy-duty vehicle (HDV) engines to emit low levels of nitrogen oxide (NOx) emissions that would meet California’s NOx regulations, while also quick-starting gasoline-consumption reductions in a substantial fraction of the HDV fleet.

    Presently, large trucks and other HDVs generally use diesel engines. The main reason for this is because of their high efficiency, which reduces fuel cost — a key factor for commercial trucks (especially long-haul trucks) because of the large number of miles that are driven. However, the NOx emissions from these diesel-powered vehicles are around 10 times greater than those from spark-ignition engines powered by gasoline or ethanol.

    Spark-ignition gasoline engines are primarily used in cars and light trucks (light-duty vehicles), which employ a three-way catalyst exhaust treatment system (generally referred to as a catalytic converter) that reduces vehicle NOx emissions by at least 98 percent and at a modest cost. The use of this highly effective exhaust treatment system is enabled by the capability of spark-ignition engines to be operated at a stoichiometric air/fuel ratio (where the amount of air matches what is needed for complete combustion of the fuel).
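The stoichiometric air/fuel ratio mentioned above follows directly from combustion chemistry. As a back-of-the-envelope sketch, octane stands in for gasoline below; real gasoline is a blend, which is why the commonly quoted figure is about 14.7:1 rather than the pure-octane value.

```python
# Back-of-the-envelope stoichiometric air/fuel ratios (mass basis).
M_AIR = 28.97         # mean molar mass of air, g/mol
O2_FRACTION = 0.2095  # mole fraction of O2 in dry air

def stoich_afr(fuel_molar_mass, o2_moles_per_mole_fuel):
    """Mass of air required per unit mass of fuel for complete combustion."""
    air_moles = o2_moles_per_mole_fuel / O2_FRACTION
    return air_moles * M_AIR / fuel_molar_mass

# C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O (octane as a gasoline surrogate)
afr_octane = stoich_afr(114.23, 12.5)   # roughly 15:1
# C2H5OH + 3 O2 -> 2 CO2 + 3 H2O
afr_ethanol = stoich_afr(46.07, 3.0)    # roughly 9:1
```

Because ethanol carries oxygen in the fuel molecule itself, its stoichiometric ratio is much lower than gasoline's; in both cases the engine controller holds the mixture at this ratio so the three-way catalyst can operate effectively.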

    Diesel engines do not operate with stoichiometric air/fuel ratios, making it much more difficult to reduce NOx emissions. Their state-of-the-art exhaust treatment system is much more complex and expensive than catalytic converters, and even with it, vehicles produce NOx emissions around 10 times higher than spark-ignition engine vehicles. Consequently, it is very challenging for diesel engines to further reduce their NOx emissions to meet the new California regulations.

    Our approach uses spark-ignition engines that can be powered by gasoline, ethanol, or mixtures of gasoline and ethanol as a substitute for diesel engines in HDVs. Gasoline has the attractive feature of being widely available and having a comparable or lower cost than diesel fuel. In addition, presently available ethanol in the U.S. produces up to 40 percent less greenhouse gas (GHG) emissions than diesel fuel or gasoline and has a widely available distribution system.

    To make gasoline- and/or ethanol-powered spark-ignition engine HDVs attractive for widespread HDV applications, we developed ways to make spark-ignition engines more efficient, so their fuel costs are more palatable to owners of heavy-duty trucks. Our approach provides diesel-like high efficiency and high power in gasoline-powered engines by using various methods to prevent engine knock (unwanted self-ignition that can damage the engine) in spark-ignition gasoline engines. This enables greater levels of turbocharging and use of higher engine compression ratios. These features provide high efficiency, comparable to that provided by diesel engines. Plus, when the engine is powered by ethanol, the required knock resistance is provided by the intrinsic high knock resistance of the fuel itself. 

    Q. What are the major challenges to implementing your technology in California?

    A. California has always been the pioneer in air pollutant control, with states such as Washington, Oregon, and New York often following suit. As the most populous state, California has a lot of sway — it’s a trendsetter. What happens in California has an impact on the rest of the United States.

    A. The main challenge to implementation of our technology is the argument that a better internal combustion engine technology is not needed because battery-powered HDVs — particularly long-haul trucks — can play the required role in reducing NOx and GHG emissions by 2035. We think that substantial market penetration of battery electric vehicles (BEVs) in this vehicle sector will take a considerably longer time. In contrast to light-duty vehicles, there has been very little penetration of battery power into the HDV fleet, especially in long-haul trucks, which are the largest users of diesel fuel. One reason for this is that long-haul trucks using battery power face the challenge of reduced cargo capability due to substantial battery weight. Another challenge is the substantially longer charging time for BEVs compared to that of most present HDVs.

    Hydrogen-powered trucks using fuel cells have also been proposed as an alternative to BEV trucks, which might limit interest in adopting improved internal combustion engines. However, hydrogen-powered trucks face the formidable challenges of producing zero GHG hydrogen at affordable cost, as well as the cost of storage and transportation of hydrogen. At present the high purity hydrogen needed for fuel cells is generally very expensive.

    Q. How does your idea compare overall to battery-powered and hydrogen-powered HDVs? And how will you persuade people that it is an attractive pathway to follow?

    A. Our design uses existing propulsion systems and can operate on existing liquid fuels, and for these reasons, in the near term, it will be economically attractive to the operators of long-haul trucks. In fact, it can even be a lower-cost option than diesel power because of the significantly less-expensive exhaust treatment and smaller-size engines for the same power and torque. This economic attractiveness could enable the large-scale market penetration that is needed to have a substantial impact on reducing air pollution. By contrast, we think it could take at least 20 years longer for BEVs or hydrogen-powered vehicles to reach the same level of market penetration.

    Our approach also uses existing corn-based ethanol, which can provide a greater near-term GHG reduction benefit than battery- or hydrogen-powered long-haul trucks. While the GHG reduction from using existing ethanol would initially be in the 20 percent to 40 percent range, the scale at which the market is penetrated in the near-term could be much greater than for BEV or hydrogen-powered vehicle technology. The overall impact in reducing GHGs could be considerably greater.

    Moreover, we see a migration path beyond 2030 where further reductions in GHG emissions from corn ethanol can be possible through carbon capture and sequestration of the carbon dioxide (CO2) that is produced during ethanol production. In this case, overall CO2 reductions could potentially be 80 percent or more. Technologies for producing ethanol (and methanol, another alcohol fuel) from waste at attractive costs are emerging, and can provide fuel with zero or negative GHG emissions. One pathway for providing a negative GHG impact is through finding alternatives to landfilling for waste disposal, as this method leads to potent methane GHG emissions. A negative GHG impact could also be obtained by converting biomass waste into clean fuel, since the biomass waste can be carbon neutral and CO2 from the production of the clean fuel can be captured and sequestered.

    In addition, our flex-fuel engine technology may be synergistically used as a range extender in plug-in hybrid HDVs, which use limited battery capacity and thereby avoid the reduced cargo capability and fueling disadvantages of long-haul trucks powered by batteries alone.

    With the growing threats from air pollution and global warming, our HDV solution is an increasingly important option for near-term reduction of air pollution and offers a faster start in reducing heavy-duty fleet GHG emissions. It also provides an attractive migration path for longer-term, larger GHG reductions from the HDV sector.