More stories

  • New “risk triage” platform pinpoints compounding threats to US infrastructure

    Over a 36-hour period in August, Hurricane Henri delivered record rainfall in New York City, where an aging storm-sewer system was not built to handle the deluge, resulting in street flooding. Meanwhile, an ongoing drought in California continued to overburden aquifers and extend statewide water restrictions. As climate change amplifies the frequency and intensity of extreme events in the United States and around the world, and the populations and economies they threaten grow and change, there is a critical need to make infrastructure more resilient. But how can this be done in a timely, cost-effective way?

    An emerging discipline called multi-sector dynamics (MSD) offers a promising solution. MSD homes in on compounding risks and potential tipping points across interconnected natural and human systems. Tipping points occur when these systems can no longer sustain multiple, co-evolving stresses, such as extreme events, population growth, land degradation, drinking water shortages, air pollution, aging infrastructure, and increased human demands. MSD researchers use observations and computer models to identify key precursory indicators of such tipping points, providing decision-makers with critical information that can be applied to mitigate risks and boost resilience in infrastructure and managed resources.

    At MIT, the Joint Program on the Science and Policy of Global Change has since 2018 been developing MSD expertise and modeling tools and using them to explore compounding risks and potential tipping points in selected regions of the United States. In a two-hour webinar on Sept. 15, MIT Joint Program researchers presented an overview of the program’s MSD research tool set and its applications.  

    MSD and the risk triage platform

    “Multi-sector dynamics explores interactions and interdependencies among human and natural systems, and how these systems may adapt, interact, and co-evolve in response to short-term shocks and long-term influences and stresses,” says MIT Joint Program Deputy Director C. Adam Schlosser, noting that such analysis can reveal and quantify potential risks that would likely evade detection in siloed investigations. “These systems can experience cascading effects or failures after crossing tipping points. The real question is not just where these tipping points are in each system, but how they manifest and interact across all systems.”

    To address that question, the program’s MSD researchers have developed the MIT Socio-Environmental Triage (MST) platform, now publicly available for the first time. Focused on the continental United States, the first version of the platform analyzes present-day risks related to water, land, climate, the economy, energy, demographics, health, and infrastructure, and where these compound to create risk hot spots. It’s essentially a screening-level visualization tool that allows users to examine risks, identify hot spots when combining risks, and make decisions about how to deploy more in-depth analysis to solve complex problems at regional and local levels. For example, MST can identify hot spots for combined flood and poverty risks in the lower Mississippi River basin, and thereby alert decision-makers as to where more concentrated flood-control resources are needed.
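
    The screening logic described here can be sketched in a few lines: put separate risk layers on a common scale, then flag grid cells where several of them are high at once. In the sketch below, the layer names, the random data, and the 80th-percentile cutoff are illustrative assumptions, not the MST platform's actual data or method.

    ```python
    # Illustrative sketch of screening-level "risk triage": overlay two gridded
    # indicators and flag cells where risks compound. The layer names, random
    # data, and the 80th-percentile cutoff are assumptions for illustration,
    # not the MST platform's actual data or methodology.
    import numpy as np

    rng = np.random.default_rng(0)
    flood_risk = rng.random((50, 50))    # stand-in for a flood-hazard layer
    poverty_rate = rng.random((50, 50))  # stand-in for a socioeconomic layer

    def percentile_rank(layer):
        """Rescale a layer to [0, 1] by rank, so layers with unlike units are comparable."""
        flat = layer.ravel()
        ranks = flat.argsort().argsort() / (flat.size - 1)
        return ranks.reshape(layer.shape)

    flood_score = percentile_rank(flood_risk)
    poverty_score = percentile_rank(poverty_rate)

    # A cell is a compound-risk "hot spot" if both indicators sit in their top 20%.
    hot_spots = (flood_score > 0.8) & (poverty_score > 0.8)
    print(f"{hot_spots.sum()} of {hot_spots.size} grid cells flagged as hot spots")
    ```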

    Successive versions of the platform will incorporate projections based on the MIT Joint Program’s Integrated Global System Modeling (IGSM) framework of how different systems and stressors may co-evolve into the future and thereby change the risk landscape. This enhanced capability could help uncover cost-effective pathways for mitigating and adapting to a wide range of environmental and economic risks.  

    MSD applications

    Five webinar presentations explored how MIT Joint Program researchers are applying the program’s risk triage platform and other MSD modeling tools to identify potential tipping points and risks in five key domains: water quality, land use, economics and energy, health, and infrastructure. 

    Joint Program Principal Research Scientist Xiang Gao described her efforts to apply a high-resolution U.S. water-quality model to calculate a location-specific water-quality index over more than 2,000 river basins in the country. By accounting for interactions among climate, agriculture, and socioeconomic systems, the model can produce various water-quality measures, ranging from nitrate and phosphate levels to phytoplankton concentrations. This modeling approach provides a unique capability to identify potential water-quality risk hot spots for freshwater resources.

    Joint Program Research Scientist Angelo Gurgel discussed his MSD-based analysis of how climate change, population growth, changing diets, crop-yield improvements and other forces that drive land-use change at the global level may ultimately impact how land is used in the United States. Drawing upon national observational data and the IGSM framework, the analysis shows that while current U.S. land-use trends are projected to persist or intensify between now and 2050, there is no evidence of any concerning tipping points arising throughout this period.  

    MIT Joint Program Research Scientist Jennifer Morris presented several examples of how the risk triage platform can be used to combine existing U.S. datasets and the IGSM framework to assess energy and economic risks at the regional level. For example, by aggregating separate data streams on fossil-fuel employment and poverty, one can target selected counties for clean energy job training programs as the nation moves toward a low-carbon future. 

    “Our modeling and risk triage frameworks can provide pictures of current and projected future economic and energy landscapes,” says Morris. “They can also highlight interactions among different human, built, and natural systems, including compounding risks that occur in the same location.”  

    MIT Joint Program research affiliate Sebastian Eastham, a research scientist at the MIT Laboratory for Aviation and the Environment, described an MSD approach to the study of air pollution and public health. Linking the IGSM with an atmospheric chemistry model, Eastham ultimately aims to better understand where the greatest health risks are in the United States and how they may compound throughout this century under different policy scenarios. Using the risk triage tool to combine current risk metrics for air quality and poverty in a selected county based on current population and air-quality data, he showed how one can rapidly identify cardiovascular and other air-pollution-induced disease risk hot spots.

    Finally, MIT Joint Program research affiliate Alyssa McCluskey, a lecturer at the University of Colorado at Boulder, showed how the risk triage tool can be used to pinpoint potential risks to roadways, waterways, and power distribution lines from flooding, extreme temperatures, population growth, and other stressors. In addition, McCluskey described how transportation and energy infrastructure development and expansion can threaten critical wildlife habitats.

    Enabling comprehensive, location-specific analyses of risks and hot spots within and among multiple domains, the Joint Program’s MSD modeling tools can be used to inform policymaking and investment from the municipal to the global level.

    “MSD takes on the challenge of linking human, natural, and infrastructure systems in order to inform risk analysis and decision-making,” says Schlosser. “Through our risk triage platform and other MSD models, we plan to assess important interactions and tipping points, and to provide foresight that supports action toward a sustainable, resilient, and prosperous world.”

    This research is funded by the U.S. Department of Energy’s Office of Science as an ongoing project.

  • For campus “porosity hunters,” climate resilience is the goal

    At MIT, it’s not uncommon to see groups navigating campus with smartphones and measuring devices in hand, using the Institute as a test bed for research. During one week this summer more than a dozen students, researchers, and faculty, plus an altimeter, could be seen doing just that as they traveled across MIT to measure the points of entry into campus buildings — including windows, doors, and vents — known as a building’s porosity.

    Why measure campus building porosity?

    The group was part of the MIT Porosity Hunt, a citizen-science effort that is using the MIT campus as a place to test emerging methodologies, instruments, and data collection processes to better understand the potential impact of a changing climate — and specifically storm scenarios resulting from it — on infrastructure. The hunt is a collaborative effort between the Urban Risk Lab, led by director and associate professor of architecture and urbanism Miho Mazereeuw, and the Office of Sustainability (MITOS), aimed at supporting an MIT that is resilient to the impacts of climate change, including flooding and extreme heat events. Working over three days, members of the hunt catalogued openings in dozens of buildings across campus to better support flood mapping and resiliency planning at MIT.

    For Mazereeuw, the data collection project lies at the nexus of her work with the Urban Risk Lab and as a member of MIT’s Climate Resiliency Committee. While the lab’s mission is to “develop methods, prototypes, and technologies to embed risk reduction and preparedness into the design of cities and regions to increase resilience,” the Climate Resiliency Committee — made up of faculty, staff, and researchers — is focused on assessing, planning, and operationalizing a climate-resilient MIT. The work of both the lab and the committee is embedded in the recently released MIT Climate Resiliency Dashboard, a visualization tool that allows users to understand potential flooding impacts of a number of storm scenarios and drive decision-making.

    While the debut of the tool signaled a big advancement in resiliency planning at MIT, some, including Mazereeuw, saw an opportunity for enhancement. In working with Ken Strzepek, a MITOS Faculty Fellow and research scientist at the MIT Center for Global Change Science who was also an integral part of this work, Mazereeuw says she was surprised to learn that even the most sophisticated flood modeling treats buildings as solid blocks. Because all buildings are treated the same, despite varying porosity, the dashboard is limited in some flood-scenario analyses. To address this, Mazereeuw and others got to work filling in that additional layer of data, with the citizen-science effort a key part of that work. “Understanding the porosity of the building is important to understanding how much water actually goes in the building in these scenarios,” she explains.

    Though surveyors are often used to collect and map this type of information, Mazereeuw wanted to leverage the MIT community in order to collect data quickly while engaging students, faculty, and researchers as resiliency stewards for the campus. “It’s important for projects like this to encourage awareness,” she explains. “Generally, when something fails, we notice it, but otherwise we don’t. With climate change bringing on more uncertainty in the scale and intensity of events, we need everyone to be more aware and help us understand things like vulnerabilities.”

    To do this, MITOS and the Urban Risk Lab reached out to more than a dozen students, who were joined by faculty, staff, and researchers, to map porosity of 31 campus buildings connected by basements. The buildings were chosen based on this connectivity, understanding that water that reaches one basement could potentially flow to another.

    Urban Risk Lab research scientists Aditya Barve and Mayank Ojha aided the group’s efforts by creating a mapping app and chatbot to support consistency in reporting and ease of use. Each team member used the app to find buildings where porosity points needed to be mapped. As teams arrived at the building exteriors, they entered their location in the app, which then triggered the Facebook and LINE-powered chatbot on their phone. There, students were guided through measuring the opening, adjusting for elevation to correlate to the City of Cambridge base datum, and, based on observable features, noting the materials and quality of the opening on a one-through-three scale. Over just three days, the team, which included Mazereeuw herself, mapped 1,030 porosity points that will aid in resiliency planning and preparation on campus in a number of ways.
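
    Conceptually, each porosity point collected this way boils down to a small structured record. The sketch below shows one plausible shape for such a record, with the datum adjustment and a one-to-three condition score; the field names and example values are assumptions for illustration, not the Urban Risk Lab app's actual schema.

    ```python
    # Minimal sketch of a porosity-point record like those collected in the hunt.
    # Field names, the datum-adjustment arithmetic, and the example values are
    # assumptions for illustration; the Urban Risk Lab app's actual schema may differ.
    from dataclasses import dataclass

    @dataclass
    class PorosityPoint:
        building: str
        opening_type: str          # "door", "window", or "vent"
        width_m: float
        height_m: float
        sill_height_m: float       # height of the opening's bottom above local ground
        ground_elevation_m: float  # ground elevation relative to the City of Cambridge base datum
        condition: int             # observed quality of the opening, 1 (poor) to 3 (good)

        def flood_entry_elevation(self) -> float:
            """Water elevation (relative to the base datum) at which this opening admits water."""
            return self.ground_elevation_m + self.sill_height_m

        def area_m2(self) -> float:
            return self.width_m * self.height_m

    point = PorosityPoint("Building 14", "window", 1.2, 0.9, 0.4, 3.1, 2)
    print(point.flood_entry_elevation(), point.area_m2())
    ```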

    “The goal is to understand various heights for flood waters around porous spots on campus,” says Mazereeuw. “But the impact can be different depending on the space. We hope this data can inform safety as well as understanding potential damage to research or disruption to campus operations from future storms.”

    The porosity data collection is complete for this round — future hunts will likely be conducted to confirm and converge data — but one team member’s work continues at the basement level of MIT. Katarina Boukin, a PhD student in civil and environmental engineering and PhD student fellow with MITOS, has been focused on methods of collecting data beneath buildings at MIT to understand how they would be impacted if flood water were to enter. “We have a number of connected basements on campus, and if one of them floods, potentially all of them do,” explains Boukin. “By looking at absolute elevation and porosity, we’re connecting the outside to the inside and tracking how much and where water may flow.” With the added data from the Porosity Hunt, a complete picture of vulnerabilities and resiliency opportunities can be shared.

    Synthesizing much of this data is where Eva Then ’21 comes in. Then was among the students who worked to capture data points over the three days and is now working in ArcGIS — online mapping software that also powers the Climate Resiliency Dashboard — to process and visualize the data collected. Once that processing is complete, the data will be incorporated into the campus flood model to increase the accuracy of projections on the Climate Resiliency Dashboard. “Over the next decades, the model will serve as an adaptive planning tool to make campus safe and resilient amid growing climate risks,” Then says.

    For Mazereeuw, the Porosity Hunt and the data collected additionally serve as a study in scalability, providing valuable insight into how similar research efforts inspired by the MIT test bed approach could be undertaken and inform policy beyond MIT. She also hopes it will inspire students to launch their own hunts in the future, becoming resiliency stewards for their campus and dorms. “Going through measuring and documenting turns on and shows a new set of goggles — you see campus and buildings in a slightly different way,” she says. “Having people look carefully and document change is a powerful tool in climate and resiliency planning.”

    Mazereeuw also notes that recent devastating flooding events across the country, including those resulting from Hurricane Ida, have put a special focus on this work. “The loss of life that occurred in that storm, including those who died as waters flooded their basement homes, underscores the urgency of this type of research, planning, and readiness.”

  • Zeroing in on the origins of Earth’s “single most important evolutionary innovation”

    Some time in Earth’s early history, the planet took a turn toward habitability when a group of enterprising microbes known as cyanobacteria evolved oxygenic photosynthesis — the ability to turn light and water into energy, releasing oxygen in the process.

    This evolutionary moment made it possible for oxygen to eventually accumulate in the atmosphere and oceans, setting off a domino effect of diversification and shaping the uniquely habitable planet we know today.  

    Now, MIT scientists have a precise estimate for when cyanobacteria, and oxygenic photosynthesis, first originated. Their results appear today in the Proceedings of the Royal Society B.

    They developed a new gene-analyzing technique that shows that all the species of cyanobacteria living today can be traced back to a common ancestor that evolved around 2.9 billion years ago. They also found that the ancestors of cyanobacteria branched off from other bacteria around 3.4 billion years ago, with oxygenic photosynthesis likely evolving during the intervening half-billion years, during the Archean Eon.

    Interestingly, this estimate places the appearance of oxygenic photosynthesis at least 400 million years before the Great Oxidation Event, a period in which the Earth’s atmosphere and oceans first experienced a rise in oxygen. This suggests that cyanobacteria may have evolved the ability to produce oxygen early on, but that it took a while for this oxygen to really take hold in the environment.

    “In evolution, things always start small,” says lead author Greg Fournier, associate professor of geobiology in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “Even though there’s evidence for early oxygenic photosynthesis — which is the single most important and really amazing evolutionary innovation on Earth — it still took hundreds of millions of years for it to take off.”

    Fournier’s MIT co-authors include Kelsey Moore, Luiz Thiberio Rangel, Jack Payette, Lily Momper, and Tanja Bosak.

    Slow fuse, or wildfire?

    Estimates for the origin of oxygenic photosynthesis vary widely, along with the methods to trace its evolution.

    For instance, scientists can use geochemical tools to look for traces of oxidized elements in ancient rocks. These methods have found hints that oxygen was present as early as 3.5 billion years ago — a sign that oxygenic photosynthesis may have been the source, although other sources are also possible.

    Researchers have also used molecular clock dating, which uses the genetic sequences of microbes today to trace back changes in genes through evolutionary history. Based on these sequences, researchers then use models to estimate the rate at which genetic changes occur, to trace when groups of organisms first evolved. But molecular clock dating is limited by the quality of ancient fossils and by the chosen rate model, which can produce different age estimates depending on the rate that is assumed.

    Fournier says different age estimates can imply conflicting evolutionary narratives. For instance, some analyses suggest oxygenic photosynthesis evolved very early on and progressed “like a slow fuse,” while others indicate it appeared much later and then “took off like wildfire” to trigger the Great Oxidation Event and the accumulation of oxygen in the biosphere.

    “In order for us to understand the history of habitability on Earth, it’s important for us to distinguish between these hypotheses,” he says.

    Horizontal genes

    To precisely date the origin of cyanobacteria and oxygenic photosynthesis, Fournier and his colleagues paired molecular clock dating with horizontal gene transfer — an independent method that doesn’t rely entirely on fossils or rate assumptions.

    Normally, an organism inherits a gene “vertically,” when it is passed down from the organism’s parent. In rare instances, a gene can also jump from one species to another, distantly related species. For instance, one cell may eat another, and in the process incorporate some new genes into its genome.

    When such a horizontal gene transfer history is found, it’s clear that the group of organisms that acquired the gene is evolutionarily younger than the group from which the gene originated. Fournier reasoned that such instances could be used to determine the relative ages between certain bacterial groups. The ages for these groups could then be compared with the ages that various molecular clock models predict. The model that comes closest would likely be the most accurate, and could then be used to precisely estimate the age of other bacterial species — specifically, cyanobacteria.

    Following this reasoning, the team looked for instances of horizontal gene transfer across the genomes of thousands of bacterial species, including cyanobacteria. They also used new cultures of modern cyanobacteria, collected by Bosak and Moore, to apply fossil cyanobacteria more precisely as calibration points. In the end, they identified 34 clear instances of horizontal gene transfer. They then found that one out of six molecular clock models consistently matched the relative ages identified in the team’s horizontal gene transfer analysis.
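
    The model-selection step described above can be made concrete with a toy example: each horizontal gene transfer implies that the recipient group is younger than the donor group, and the clock model that satisfies the most of these relative-age constraints wins. The clade names, ages, and events below are placeholders, not the study's data.

    ```python
    # Sketch of using horizontal gene transfers (HGTs) as relative-age constraints
    # to score molecular clock models. Each HGT implies the recipient clade is
    # younger than the donor clade. The clade names, ages, and events below are
    # placeholders, not the study's actual data.

    # (donor_clade, recipient_clade) pairs inferred from gene histories
    hgt_events = [("cladeA", "cladeB"), ("cladeC", "cladeB"), ("cladeA", "cladeD")]

    # Ages (billions of years) that two hypothetical clock models assign to each clade
    model_ages = {
        "model_1": {"cladeA": 3.1, "cladeB": 2.6, "cladeC": 2.9, "cladeD": 2.2},
        "model_2": {"cladeA": 2.4, "cladeB": 2.8, "cladeC": 2.9, "cladeD": 2.2},
    }

    def constraints_satisfied(ages, events):
        """Count HGT events whose donor clade is dated older than its recipient clade."""
        return sum(ages[donor] > ages[recipient] for donor, recipient in events)

    scores = {name: constraints_satisfied(ages, hgt_events) for name, ages in model_ages.items()}
    best = max(scores, key=scores.get)
    print(scores, "-> most consistent:", best)
    ```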

    Fournier ran this model to estimate the age of the “crown” group of cyanobacteria, which encompasses all the species living today and known to exhibit oxygenic photosynthesis. They found that, during the Archean eon, the crown group originated around 2.9 billion years ago, while cyanobacteria as a whole branched off from other bacteria around 3.4 billion years ago. This strongly suggests that oxygenic photosynthesis was already happening 500 million years before the Great Oxidation Event (GOE), and that cyanobacteria were producing oxygen for quite a long time before it accumulated in the atmosphere.

    The analysis also revealed that, shortly before the GOE, around 2.4 billion years ago, cyanobacteria experienced a burst of diversification. This implies that a rapid expansion of cyanobacteria may have tipped the Earth into the GOE and launched oxygen into the atmosphere.

    Fournier plans to apply this horizontal gene transfer approach beyond cyanobacteria to pin down the origins of other elusive species.

    “This work shows that molecular clocks incorporating horizontal gene transfers (HGTs) promise to reliably provide the ages of groups across the entire tree of life, even for ancient microbes that have left no fossil record … something that was previously impossible,” Fournier says. 

    This research was supported, in part, by the Simons Foundation and the National Science Foundation.

  • Making roadway spending more sustainable

    The share of federal spending on infrastructure has reached an all-time low, falling from 30 percent in 1960 to just 12 percent in 2018.

    While the nation’s ailing infrastructure will require more funding to reach its full potential, recent MIT research finds that more sustainable and higher performing roads are still possible even with today’s limited budgets.

    The research, conducted by a team of current and former MIT Concrete Sustainability Hub (MIT CSHub) scientists and published in Transportation Research Part D, finds that a set of innovative planning strategies could improve the environmental and performance outcomes of pavement networks even if budgets don’t increase.

    The paper presents a novel budget allocation tool and pairs it with three innovative strategies for managing pavement networks: a mix of paving materials, a mix of short- and long-term paving actions, and a long evaluation period for those actions.

    This novel approach offers numerous benefits. When applied to a 30-year case study of the Iowa U.S. Route network, the MIT CSHub model and management strategies cut emissions by 20 percent while sustaining current levels of road quality. Achieving this with a conventional planning approach would require the state to spend 32 percent more than it does today. The key to its success is the consideration of a fundamental — but fraught — aspect of pavement asset management: uncertainty.

    Predicting unpredictability

    The average road must last many years and support the traffic of thousands — if not millions — of vehicles. Over that time, a lot can change. Material prices may fluctuate, budgets may tighten, and traffic levels may intensify. Climate (and climate change), too, can hasten unexpected repairs.

    Managing these uncertainties effectively means looking long into the future and anticipating possible changes.

    “Capturing the impacts of uncertainty is essential for making effective paving decisions,” explains Fengdi Guo, the paper’s lead author and a departing CSHub research assistant.

    “Yet, measuring and relating these uncertainties to outcomes is also computationally intensive and expensive. Consequently, many DOTs [departments of transportation] are forced to simplify their analysis to plan maintenance — often resulting in suboptimal spending and outcomes.”

    To give DOTs accessible tools to factor uncertainties into their planning, CSHub researchers have developed a streamlined planning approach. It offers greater specificity and is paired with several new pavement management strategies.

    The planning approach, known as Probabilistic Treatment Path Dependence (PTPD), is based on machine learning and was devised by Guo.

    “Our PTPD model is composed of four steps,” he explains. “These steps are, in order, pavement damage prediction; treatment cost prediction; budget allocation; and pavement network condition evaluation.”

    The model begins by investigating every segment in an entire pavement network and predicting future possibilities for pavement deterioration, cost, and traffic.

    “We [then] run thousands of simulations for each segment in the network to determine the likely cost and performance outcomes for each initial and subsequent sequence, or ‘path,’ of treatment actions,” says Guo. “The treatment paths with the best cost and performance outcomes are selected for each segment, and then across the network.”
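
    A toy version of that selection step, simulating uncertain cost and condition outcomes for a few candidate treatment paths and keeping the one with the best expected result, is sketched below. The candidate paths, probability distributions, and weighting are invented stand-ins, not the PTPD model itself.

    ```python
    # Toy Monte Carlo sketch of choosing a treatment "path" for one pavement segment:
    # simulate uncertain cost and end-of-period condition for each candidate path and
    # keep the path with the best expected combined outcome. The candidate paths,
    # distributions, and weighting are invented stand-ins, not the PTPD model.
    import random

    random.seed(0)

    candidate_paths = {
        "thin_overlays_every_8yr": {"mean_cost": 1.0, "cost_sd": 0.25, "mean_condition": 0.70, "cond_sd": 0.10},
        "major_rehab_then_minor":  {"mean_cost": 1.4, "cost_sd": 0.35, "mean_condition": 0.85, "cond_sd": 0.08},
        "reconstruct_long_life":   {"mean_cost": 1.9, "cost_sd": 0.50, "mean_condition": 0.95, "cond_sd": 0.05},
    }

    def expected_objective(path, n_sims=5000, condition_weight=2.0):
        """Average (cost minus weighted condition) over simulated futures; lower is better."""
        total = 0.0
        for _ in range(n_sims):
            cost = random.gauss(path["mean_cost"], path["cost_sd"])
            condition = random.gauss(path["mean_condition"], path["cond_sd"])
            total += cost - condition_weight * condition
        return total / n_sims

    scores = {name: expected_objective(p) for name, p in candidate_paths.items()}
    best = min(scores, key=scores.get)
    print(scores, "-> selected path:", best)
    ```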

    The PTPD model seeks to minimize costs not only to agencies but also to users — in this case, drivers. These user costs come primarily in the form of excess fuel consumption due to poor road quality.

    “One improvement in our analysis is the incorporation of electric vehicle uptake into our cost and environmental impact predictions,” says Randolph Kirchain, a principal research scientist at MIT CSHub and MIT Materials Research Laboratory (MRL) and one of the paper’s co-authors. “Since the vehicle fleet will change over the next several decades due to electric vehicle adoption, we made sure to consider how these changes might impact our predictions of excess energy consumption.”

    After developing the PTPD model, Guo wanted to see how the efficacy of various pavement management strategies might differ. To do this, he developed a sophisticated deterioration prediction model.

    A novel aspect of this deterioration model is its treatment of multiple deterioration metrics simultaneously. Using a multi-output neural network, a tool of artificial intelligence, the model can predict several forms of pavement deterioration at once, thereby accounting for the correlations among them.
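
    As a rough illustration of the idea (not the CSHub model itself), a single off-the-shelf multi-output regressor can be trained to predict several correlated deterioration metrics at once; the features, target metrics, and synthetic data below are assumptions made for the sake of the example.

    ```python
    # Sketch of a multi-output neural network that predicts several deterioration
    # metrics at once, so their correlations are learned jointly. Features, targets,
    # and the synthetic data are illustrative; the CSHub model is more sophisticated.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical inputs: pavement age (yr), annual traffic (millions of ESALs), thickness (mm)
    X = np.column_stack([rng.uniform(0, 30, n), rng.uniform(0.1, 5, n), rng.uniform(100, 300, n)])

    # Hypothetical correlated outputs: roughness, rutting depth, percent cracking
    roughness = 1.0 + 0.08 * X[:, 0] + 0.3 * X[:, 1] - 0.002 * X[:, 2] + rng.normal(0, 0.2, n)
    rutting = 0.5 + 0.05 * X[:, 0] + 0.2 * X[:, 1] + 0.3 * roughness + rng.normal(0, 0.1, n)
    cracking = 2.0 + 0.6 * X[:, 0] + 0.5 * roughness + rng.normal(0, 1.0, n)
    Y = np.column_stack([roughness, rutting, cracking])

    # One network predicts all three metrics together (multi-output regression).
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0))
    model.fit(X, Y)

    print(model.predict([[15.0, 2.0, 200.0]]))  # [roughness, rutting, cracking] for one segment
    ```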

    The MIT team selected two key metrics to compare the effectiveness of various treatment paths: pavement quality and greenhouse gas emissions. These metrics were then calculated for all pavement segments in the Iowa network.

    Improvement through variation

     The MIT model can help DOTs make better decisions, but that decision-making is ultimately constrained by the potential options considered.

    Guo and his colleagues, therefore, sought to expand current decision-making paradigms by exploring a broad set of network management strategies and evaluating them with their PTPD approach. Based on that evaluation, the team discovered that networks had the best outcomes when the management strategy includes using a mix of paving materials, a variety of long- and short-term paving repair actions (treatments), and longer time periods on which to base paving decisions.

    They then compared this proposed approach with a baseline management approach that reflects current, widespread practices: the use of solely asphalt materials, short-term treatments, and a five-year period for evaluating the outcomes of paving actions.

    With these two approaches established, the team used them to plan 30 years of maintenance across the Iowa U.S. Route network. They then measured the subsequent road quality and emissions.

    Their case study found that the MIT approach offered substantial benefits. Pavement-related greenhouse gas emissions would fall by around 20 percent across the network over the whole period. Pavement performance improved as well. To achieve the same level of road quality as the MIT approach, the baseline approach would need a 32 percent greater budget.

    “It’s worth noting,” says Guo, “that since conventional practices employ less effective allocation tools, the difference between them and the CSHub approach should be even larger in practice.”

    Much of the improvement derived from the precision of the CSHub planning model. But the three treatment strategies also play a key role.

    “We’ve found that a mix of asphalt and concrete paving materials allows DOTs to not only find materials best-suited to certain projects, but also mitigates the risk of material price volatility over time,” says Kirchain.

    It’s a similar story with a mix of paving actions. Employing a mix of short- and long-term fixes gives DOTs the flexibility to choose the right action for the right project.

    The final strategy, a long-term evaluation period, enables DOTs to see the entire scope of their choices. If the ramifications of a decision are predicted over only five years, many long-term implications won’t be considered. Expanding the window for planning, then, can introduce beneficial, long-term options.

    It’s not surprising that paving decisions are daunting to make; their impacts on the environment, driver safety, and budget levels are long-lasting. But rather than simplify this fraught process, the CSHub method aims to reflect its complexity. The result is an approach that provides DOTs with the tools to do more with less.

    This research was supported through the MIT Concrete Sustainability Hub by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • A new method for removing lead from drinking water

    Engineers at MIT have developed a new approach to removing lead or other heavy-metal contaminants from water, in a process that they say is far more energy-efficient than any other currently used system, though there are others under development that come close. Ultimately, it might be used to treat lead-contaminated water supplies at the home level, or to treat contaminated water from some chemical or industrial processes.

    The new system is the latest in a series of applications based on initial findings six years ago by members of the same research team, initially developed for desalination of seawater or brackish water, and later adapted for removing radioactive compounds from the cooling water of nuclear power plants. The new version is the first such method that might be applicable for treating household water supplies, as well as industrial uses.

    The findings are published today in the journal Environmental Science and Technology – Water, in a paper by MIT graduate students Huanhuan Tian, Mohammad Alkhadra, and Kameron Conforti, and professor of chemical engineering Martin Bazant.

    “It’s notoriously difficult to remove toxic heavy metal that’s persistent and present in a lot of different water sources,” Alkhadra says. “Obviously there are competing methods today that do this function, so it’s a matter of which method can do it at lower cost and more reliably.”

    The biggest challenge in trying to remove lead is that it is generally present in such tiny concentrations, vastly exceeded by other elements or compounds. For example, sodium is typically present in drinking water at a concentration of tens of parts per million, whereas lead can be highly toxic at just a few parts per billion. Most existing processes, such as reverse osmosis or distillation, remove everything at once, Alkhadra explains. This not only takes much more energy than would be needed for a selective removal, but it’s counterproductive since small amounts of elements such as sodium and magnesium are actually essential for healthy drinking water.

    The new approach is to use a process called shock electrodialysis, in which an electric field is used to produce a shockwave inside a pipe carrying the contaminated water. The shockwave separates the liquid into two streams, selectively pulling certain electrically charged atoms, or ions, toward one side of the flow by tuning the properties of the shockwave to match the target ions, while leaving a stream of relatively pure water on the other side. The stream containing the concentrated lead ions can then be easily separated out using a mechanical barrier in the pipe.

    In principle, “this makes the process much cheaper,” Bazant says, “because the electrical energy that you’re putting in to do the separation is really going after the high-value target, which is the lead. You’re not wasting a lot of energy removing the sodium.” Because the lead is present at such low concentration, “there’s not a lot of current involved in removing those ions, so this can be a very cost-effective way.”
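
    A back-of-envelope Faraday's-law estimate (not from the paper) illustrates the point: the charge needed to move an ionic species scales with its concentration, so lead at parts-per-billion levels demands orders of magnitude less charge than sodium at parts-per-million levels.

    ```python
    # Back-of-envelope illustration (not from the paper) of why selective removal is
    # cheap: the charge needed to transport an ion scales with its concentration, and
    # lead at parts-per-billion levels needs far less charge than sodium at
    # parts-per-million levels. Example concentrations are assumed, not measured.
    FARADAY = 96485.0  # coulombs per mole of elementary charge

    def coulombs_per_liter(conc_mg_per_L, molar_mass_g, charge):
        """Charge required to transport all of one ionic species out of a liter of water."""
        moles = conc_mg_per_L / 1000.0 / molar_mass_g
        return moles * charge * FARADAY

    lead = coulombs_per_liter(0.015, 207.2, 2)   # ~15 ppb Pb(2+), a hazardous level
    sodium = coulombs_per_liter(30.0, 22.99, 1)  # ~30 ppm Na(+), typical drinking water

    print(f"Pb: {lead:.3f} C/L, Na: {sodium:.1f} C/L, ratio ~{sodium / lead:.0f}x")
    ```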

    The process still has its limitations, as it has only been demonstrated at small laboratory scale and at quite slow flow rates. Scaling up the process to make it practical for in-home use will require further research, and larger-scale industrial uses will take even longer. But it could be practical within a few years for some home-based systems, Bazant says.

    For example, a home whose water supply is heavily contaminated with lead might have a system in the cellar that slowly processes a stream of water, filling a tank with lead-free water to be used for drinking and cooking, while leaving most of the water untreated for uses like toilet flushing or watering the lawn. Such uses might be appropriate as an interim measure for places like Flint, Michigan, where the water, mostly contaminated by the distribution pipes, will take many years to remediate through pipe replacements.

    The process could also be adapted for some industrial uses such as cleaning water produced in mining or drilling operations, so that the treated water can be safely disposed of or reused. And in some cases, this could also provide a way of recovering metals that contaminate water but could actually be a valuable product if they were separated out; for example, some such minerals could be used to process semiconductors or pharmaceuticals or other high-tech products, the researchers say.

    Direct comparisons of the economics of such a system versus existing methods are difficult, Bazant says, because in filtration systems, for example, the costs are mainly for replacing the filter materials, which quickly clog up and become unusable, whereas in this system the costs are mostly for the ongoing energy input, which is very small. At this point, the shock electrodialysis system has been operated for several weeks, but it’s too soon to estimate the real-world longevity of such a system, he says.

    Developing the process into a scalable commercial product will take some time, but “we have shown how this could be done, from a technical standpoint,” Bazant says. “The main issue would be on the economic side,” he adds. That includes figuring out the most appropriate applications and developing specific configurations that would meet those uses. “We do have a reasonable idea of how to scale this up. So it’s a question of having the resources,” which might be a role for a startup company rather than an academic research lab, he adds.

    “I think this is an exciting result,” he says, “because it shows that we really can address this important application” of cleaning the lead from drinking water. For example, he says, there are places now that perform desalination of seawater using reverse osmosis, but they have to run this expensive process twice in a row, first to get the salt out, and then again to remove the low-level but highly toxic contaminants like lead. This new process might be used instead of the second round of reverse osmosis, at a far lower expenditure of energy.

    The research received support from a MathWorks Engineering Fellowship and a fellowship awarded by MIT’s Abdul Latif Jameel Water and Food Systems Lab, funded by Xylem, Inc.

  • Study: Global cancer risk from burning organic matter comes from unregulated chemicals

    Whenever organic matter is burned, such as in a wildfire, a power plant, a car’s exhaust, or in daily cooking, the combustion releases polycyclic aromatic hydrocarbons (PAHs) — a class of pollutants that is known to cause lung cancer.

    There are more than 100 known types of PAH compounds emitted daily into the atmosphere. Regulators, however, have historically relied on measurements of a single compound, benzo(a)pyrene, to gauge a community’s risk of developing cancer from PAH exposure. Now MIT scientists have found that benzo(a)pyrene may be a poor indicator of this type of cancer risk.

    In a modeling study appearing today in the journal GeoHealth, the team reports that benzo(a)pyrene plays a small part — about 11 percent — in the global risk of developing PAH-associated cancer. Instead, 89 percent of that cancer risk comes from other PAH compounds, many of which are not directly regulated.

    Interestingly, about 17 percent of PAH-associated cancer risk comes from “degradation products” — chemicals that are formed when emitted PAHs react in the atmosphere. Many of these degradation products can in fact be more toxic than the emitted PAH from which they formed.

    The team hopes the results will encourage scientists and regulators to look beyond benzo(a)pyrene, to consider a broader class of PAHs when assessing a community’s cancer risk.

    “Most of the regulatory science and standards for PAHs are based on benzo(a)pyrene levels. But that is a big blind spot that could lead you down a very wrong path in terms of assessing whether cancer risk is improving or not, and whether it’s relatively worse in one place than another,” says study author Noelle Selin, a professor in MIT’s Institute for Data, Systems and Society, and the Department of Earth, Atmospheric and Planetary Sciences.

    Selin’s MIT co-authors include Jesse Kroll, Amy Hrdina, Ishwar Kohale, Forest White, and Bevin Engelward, and Jamie Kelly (who is now at University College London). Peter Ivatt and Mathew Evans at the University of York are also co-authors.

    Chemical pixels

    Benzo(a)pyrene has historically been the poster chemical for PAH exposure. The compound’s indicator status is largely based on early toxicology studies. But recent research suggests the chemical may not be the PAH representative that regulators have long relied upon.   

    “There has been a bit of evidence suggesting benzo(a)pyrene may not be very important, but this was from just a few field studies,” says Kelly, a former postdoc in Selin’s group and the study’s lead author.

    Kelly and his colleagues instead took a systematic approach to evaluate benzo(a)pyrene’s suitability as a PAH indicator. The team began by using GEOS-Chem, a global, three-dimensional chemical transport model that breaks the world into individual grid boxes and simulates within each box the reactions and concentrations of chemicals in the atmosphere.

    They extended this model to include chemical descriptions of how various PAH compounds, including benzo(a)pyrene, would react in the atmosphere. The team then plugged in recent data from emissions inventories and meteorological observations, and ran the model forward to simulate the concentrations of various PAH chemicals around the world over time.

    Risky reactions

    In their simulations, the researchers started with 16 relatively well-studied PAH chemicals, including benzo(a)pyrene, and traced the concentrations of these chemicals, plus the concentration of their degradation products over two generations, or chemical transformations. In total, the team evaluated 48 PAH species.

    They then compared these concentrations with actual concentrations of the same chemicals, recorded by monitoring stations around the world. This comparison was close enough to show that the model’s concentration predictions were realistic.

    Then within each model’s grid box, the researchers related the concentration of each PAH chemical to its associated cancer risk; to do this, they had to develop a new method based on previous studies in the literature to avoid double-counting risk from the different chemicals. Finally, they overlaid population density maps to predict the number of cancer cases globally, based on the concentration and toxicity of a specific PAH chemical in each location.

    Dividing the cancer cases by population produced the cancer risk associated with that chemical. In this way, the team calculated the cancer risk for each of the 48 compounds, then determined each chemical’s individual contribution to the total risk.
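
    In outline, that attribution step looks like the sketch below: concentration times a unit toxicity factor gives individual risk, multiplying by population gives cases, and each compound's share of total cases is its contribution. The compounds, unit-risk values, concentrations, and populations here are invented, and the study's handling of double-counting is more careful than this.

    ```python
    # Sketch of attributing PAH-associated cancer risk to individual compounds:
    # risk per person = concentration x unit risk factor; cases = risk x population;
    # a compound's contribution = its cases / total cases. The compound list, unit
    # risk values, concentrations, and populations are invented for illustration,
    # and the study handles double-counting across compounds more carefully.

    grid_cells = [
        # (population, {compound: concentration in ng/m^3})
        (5_000_000, {"benzo(a)pyrene": 0.4, "other_PAH": 3.0, "degradation_product": 0.05}),
        (1_000_000, {"benzo(a)pyrene": 0.1, "other_PAH": 1.2, "degradation_product": 0.02}),
    ]

    # Hypothetical lifetime unit risks per (ng/m^3) of exposure
    unit_risk = {"benzo(a)pyrene": 8.7e-5, "other_PAH": 2.0e-5, "degradation_product": 5.0e-4}

    cases = {compound: 0.0 for compound in unit_risk}
    for population, concentrations in grid_cells:
        for compound, conc in concentrations.items():
            cases[compound] += conc * unit_risk[compound] * population

    total = sum(cases.values())
    for compound, n in cases.items():
        print(f"{compound}: {100 * n / total:.1f}% of modeled PAH cancer cases")
    ```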

    This analysis revealed that benzo(a)pyrene had a surprisingly small contribution, of about 11 percent, to the overall risk of developing cancer from PAH exposure globally. Eighty-nine percent of cancer risk came from other chemicals. And 17 percent of this risk arose from degradation products.

    “We see places where you can find concentrations of benzo(a)pyrene are lower, but the risk is higher because of these degradation products,” Selin says. “These products can be orders of magnitude more toxic, so the fact that they’re at tiny concentrations doesn’t mean you can write them off.”

    When the researchers compared calculated PAH-associated cancer risks around the world, they found significant differences depending on whether that risk calculation was based solely on concentrations of benzo(a)pyrene or on a region’s broader mix of PAH compounds.

    “If you use the old method, you would find the lifetime cancer risk is 3.5 times higher in Hong Kong versus southern India, but taking into account the differences in PAH mixtures, you get a difference of 12 times,” Kelly says. “So, there’s a big difference in the relative cancer risk between the two places. And we think it’s important to expand the group of compounds that regulators are thinking about, beyond just a single chemical.”

    The team’s study “provides an excellent contribution to better understanding these ubiquitous pollutants,” says Elisabeth Galarneau, an air quality expert and PhD research scientist in Canada’s Department of the Environment. “It will be interesting to see how these results compare to work being done elsewhere … to pin down which (compounds) need to be tracked and considered for the protection of human and environmental health.”

    This research was conducted in MIT’s Superfund Research Center and is supported in part by the National Institute of Environmental Health Sciences Superfund Basic Research Program, and the National Institutes of Health.

  • Research collaboration puts climate-resilient crops in sight

    Any houseplant owner knows that changes in the amount of water or sunlight a plant receives can put it under immense stress. A dying plant brings certain disappointment to anyone with a green thumb. 

    But for farmers who make their living by successfully growing plants, and whose crops may nourish hundreds or thousands of people, the devastation of failing flora is that much greater. As climate change is poised to cause increasingly unpredictable weather patterns globally, crops may be subject to more extreme environmental conditions like droughts, fluctuating temperatures, floods, and wildfire. 

    Climate scientists and food systems researchers worry about the stress climate change may put on crops, and on global food security. In an ambitious interdisciplinary project funded by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), David Des Marais, the Gale Assistant Professor in the Department of Civil and Environmental Engineering at MIT, and Caroline Uhler, an associate professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, are investigating how plant genes communicate with one another under stress. Their research results can be used to breed plants more resilient to climate change.

    Crops in trouble

    Governing plants’ responses to environmental stress are gene regulatory networks, or GRNs, which guide the development and behaviors of living things. A GRN may comprise thousands of genes and proteins that all communicate with one another. GRNs help a particular cell, tissue, or organism respond to environmental changes by signaling certain genes to turn their expression on or off.

    Even seemingly minor or short-term changes in weather patterns can have large effects on crop yield and food security. An environmental trigger, like a lack of water during a crucial phase of plant development, can turn a gene on or off, and is likely to affect many others in the GRN. For example, without water, a gene enabling photosynthesis may switch off. This can create a domino effect, where the genes that rely on those regulating photosynthesis are silenced, and the cycle continues. As a result, when photosynthesis is halted, the plant may experience other detrimental side effects, like no longer being able to reproduce or defend against pathogens. The chain reaction could even kill a plant before it has the chance to be revived by a big rain.
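
    Computationally, the cascade described above is just reachability in a directed graph: silencing one gene puts every downstream gene at risk. A toy sketch, with made-up gene names and dependencies, follows.

    ```python
    # Toy sketch of the cascading effect described above: if a gene switches off,
    # every gene that depends on it (directly or indirectly) can be silenced too.
    # The gene names and dependency edges are made up for illustration.
    from collections import deque

    # regulator -> genes whose expression depends on it
    grn = {
        "photosynthesis_gene": ["sugar_transport_gene", "growth_gene"],
        "sugar_transport_gene": ["reproduction_gene"],
        "growth_gene": ["defense_gene"],
        "reproduction_gene": [],
        "defense_gene": [],
    }

    def downstream_of(gene, network):
        """Return every gene reachable from `gene`, i.e. at risk if `gene` shuts off."""
        affected, queue = set(), deque([gene])
        while queue:
            current = queue.popleft()
            for child in network.get(current, []):
                if child not in affected:
                    affected.add(child)
                    queue.append(child)
        return affected

    print(downstream_of("photosynthesis_gene", grn))
    ```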

    Des Marais says he wishes there was a way to stop those genes from completely shutting off in such a situation. To do that, scientists would need to better understand how exactly gene networks respond to different environmental triggers. Bringing light to this molecular process is exactly what he aims to do in this collaborative research effort.

    Solving complex problems across disciplines

    Despite their crucial importance, GRNs are difficult to study because of how complex and interconnected they are. Usually, to understand how a particular gene is affecting others, biologists must silence one gene and see how the others in the network respond. 

    For years, scientists have aspired to an algorithm that could synthesize the massive amount of information contained in GRNs to “identify correct regulatory relationships among genes,” according to a 2019 article in the Encyclopedia of Bioinformatics and Computational Biology. 

    “A GRN can be seen as a large causal network, and understanding the effects that silencing one gene has on all other genes requires understanding the causal relationships among the genes,” says Uhler. “These are exactly the kinds of algorithms my group develops.”

    Des Marais and Uhler’s project aims to unravel these complex communication networks and discover how to breed crops that are more resilient to the increased droughts, flooding, and erratic weather patterns that climate change is already causing globally.

    On top of the stresses from climate change, the world will demand 70 percent more food by 2050 to feed a booming population. “Food systems challenges cannot be addressed individually in disciplinary or topic area silos,” says Greg Sixt, J-WAFS’ research manager for climate and food systems. “They must be addressed in a systems context that reflects the interconnected nature of the food system.”

    Des Marais’ background is in biology, and Uhler’s in statistics. “Dave’s project with Caroline was essentially experimental,” says Renee J. Robins, J-WAFS’ executive director. “This kind of exploratory research is exactly what the J-WAFS seed grant program is for.”

    Getting inside gene regulatory networks

    Des Marais and Uhler’s work begins in a windowless basement on MIT’s campus, where 300 genetically identical Brachypodium distachyon plants grow in large, temperature-controlled chambers. The plant, which contains more than 30,000 genes, is a good model for studying important cereal crops like wheat, barley, maize, and millet. For three weeks, all plants receive the same temperature, humidity, light, and water. Then, half are slowly tapered off water, simulating drought-like conditions.

    Six days into the forced drought, the plants are clearly suffering. Des Marais’ PhD student Jie Yun takes tissues from 50 hydrated and 50 dry plants, freezes them in liquid nitrogen to immediately halt metabolic activity, grinds them up into a fine powder, and chemically separates the genetic material. The genes from all 100 samples are then sequenced at a lab across the street.

    The team is left with a spreadsheet listing the 30,000 genes found in each of the 100 plants at the moment they were frozen, and how many copies there were. Uhler’s PhD student Anastasiya Belyaeva inputs the massive spreadsheet into the computer program she developed and runs her novel algorithm. Within a few hours, the group can see which genes were most active in one condition over another, how the genes were communicating, and which were causing changes in others. 
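
    A first-pass version of finding which genes were most active in one condition over another is a per-gene comparison of the two groups of samples, sketched below on synthetic counts; the causal analysis in Belyaeva's algorithm goes well beyond this kind of comparison.

    ```python
    # Minimal sketch of the first question asked of the count table: which genes are
    # more active in droughted plants than in watered ones? A log2 fold change on
    # synthetic counts stands in here; the group's causal algorithm goes well beyond
    # this per-gene comparison. Gene counts and effect sizes are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n_genes = 30_000
    watered = rng.poisson(lam=100, size=(50, n_genes))           # 50 hydrated plants
    dry = rng.poisson(lam=100, size=(50, n_genes)).astype(float)  # 50 droughted plants
    dry[:, :200] *= 3                                             # pretend 200 genes respond to drought

    log2_fold_change = np.log2((dry.mean(axis=0) + 1) / (watered.mean(axis=0) + 1))
    responsive = np.flatnonzero(np.abs(log2_fold_change) > 1)     # |fold change| greater than 2x
    print(f"{responsive.size} genes differ more than two-fold between conditions")
    ```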

    The methodology captures important subtleties that could allow researchers to eventually alter gene pathways and breed more resilient crops. “When you expose a plant to drought stress, it’s not like there’s some canonical response,” Des Marais says. “There’s lots of things going on. It’s turning this physiologic process up, this one down, this one didn’t exist before, and now suddenly is turned on.” 

    In addition to Des Marais and Uhler’s research, J-WAFS has funded projects in food and water from researchers in 29 departments across all five MIT schools as well as the MIT Schwarzman College of Computing. J-WAFS seed grants typically fund seven to eight new projects every year.

    “The grants are really aimed at catalyzing new ideas, providing the sort of support [for MIT researchers] to be pushing boundaries, and also bringing in faculty who may have some interesting ideas that they haven’t yet applied to water or food concerns,” Robins says. “It’s an avenue for researchers all over the Institute to apply their ideas to water and food.”

    Alison Gold is a student in MIT’s Graduate Program in Science Writing.

  • Concrete’s role in reducing building and pavement emissions

    Encountering concrete is a common, even routine, occurrence. And that’s exactly what makes concrete exceptional.

    As the most consumed material after water, concrete is indispensable to the many essential systems — from roads to buildings — in which it is used.

    But due to its extensive use, concrete production also contributes to around 1 percent of emissions in the United States and remains one of several carbon-intensive industries globally. Tackling climate change, then, will mean reducing the environmental impacts of concrete, even as its use continues to increase.

    In a new paper in the Proceedings of the National Academy of Sciences, a team of current and former researchers at the MIT Concrete Sustainability Hub (CSHub) outlines how this can be achieved.

    They present an extensive life-cycle assessment of the building and pavements sectors that estimates how greenhouse gas (GHG) reduction strategies — including those for concrete and cement — could minimize the cumulative emissions of each sector and how those reductions would compare to national GHG reduction targets. 

    The team found that, if reduction strategies were implemented, the emissions for pavements and buildings between 2016 and 2050 could fall by up to 65 percent and 57 percent, respectively, even if concrete use accelerated greatly over that period. These are close to U.S. reduction targets set as part of the Paris Climate Accords. The solutions considered would also enable concrete production for both sectors to attain carbon neutrality by 2050.

    Despite continued grid decarbonization and increases in fuel efficiency, they found that the vast majority of the GHG emissions from new buildings and pavements during this period would derive from operational energy consumption rather than so-called embodied emissions — emissions from materials production and construction.
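
    The accounting behind such cumulative figures can be sketched simply: sum each year's embodied emissions and operational emissions from 2016 to 2050, with operational intensity declining as the grid decarbonizes. The numbers below are invented placeholders, not the paper's inventory, and serve only to show the bookkeeping.

    ```python
    # Sketch of the cumulative accounting behind figures like these: sum each year's
    # embodied emissions (materials and construction) and operational emissions
    # (energy use, shrinking as the grid decarbonizes) from 2016 to 2050. All numbers
    # are invented placeholders, not the paper's inventory.

    def cumulative_emissions(annual_embodied, annual_operational_2016, grid_decarb_rate):
        """Total 2016-2050 emissions for a stock of new construction (arbitrary units)."""
        total_embodied = total_operational = 0.0
        for year in range(2016, 2051):
            factor = (1 - grid_decarb_rate) ** (year - 2016)  # operational intensity decays
            total_embodied += annual_embodied
            total_operational += annual_operational_2016 * factor
        return total_embodied, total_operational

    for name, decarb in [("projected", 0.02), ("ambitious", 0.06)]:
        embodied, operational = cumulative_emissions(10.0, 100.0, decarb)
        share = operational / (embodied + operational)
        print(f"{name}: operational share of cumulative emissions = {share:.0%}")
    ```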

    Sources and solutions

    The consumption of concrete, due to its versatility, durability, constructability, and role in economic development, has been projected to increase around the world.

    While it is essential to consider the embodied impacts of ongoing concrete production, it is equally essential to place these initial impacts in the context of the material’s life cycle.

    Due to concrete’s unique attributes, it can influence the long-term sustainability performance of the systems in which it is used. Concrete pavements, for instance, can reduce vehicle fuel consumption, while concrete structures can endure hazards without needing energy- and materials-intensive repairs.

    Concrete’s impacts, then, are as complex as the material itself — a carefully proportioned mixture of cement powder, water, sand, and aggregates. Untangling concrete’s contribution to the operational and embodied impacts of buildings and pavements is essential for planning GHG reductions in both sectors.

    Set of scenarios

    In their paper, CSHub researchers forecast the potential greenhouse gas emissions from the building and pavements sectors as numerous emissions reduction strategies were introduced between 2016 and 2050.

    Since both of these sectors are immense and rapidly evolving, modeling them required an intricate framework.

    “We don’t have details on every building and pavement in the United States,” explains Randolph Kirchain, a research scientist at the Materials Research Laboratory and co-director of CSHub.

    “As such, we began by developing reference designs, which are intended to be representative of current and future buildings and pavements. These were adapted to be appropriate for 14 different climate zones in the United States and then distributed across the U.S. based on data from the U.S. Census and the Federal Highway Administration.”

    To reflect the complexity of these systems, their models had to have the highest resolutions possible.

    “In the pavements sector, we collected the current stock of the U.S. network based on high-precision 10-mile segments, along with the surface conditions, traffic, thickness, lane width, and number of lanes for each segment,” says Hessam AzariJafari, a postdoc at CSHub and a co-author on the paper.

    “To model future paving actions over the analysis period, we assumed four climate conditions; four road types; asphalt, concrete, and composite pavement structures; as well as major, minor, and reconstruction paving actions specified for each climate condition.”

    Using this framework, they analyzed a “projected” and an “ambitious” scenario of reduction strategies and system attributes for buildings and pavements over the 34-year analysis period. The scenarios were defined by the timing and intensity of GHG reduction strategies.

    As its name might suggest, the projected scenario reflected current trends. For the building sector, solutions encompassed expected grid decarbonization and improvements to building codes and energy efficiency that are currently being implemented across the country. For pavements, the sole projected solution was improvements to vehicle fuel economy. That’s because as vehicle efficiency continues to increase, excess vehicle emissions due to poor road quality will also decrease.

    Both the projected scenarios for buildings and pavements featured the gradual introduction of low-carbon concrete strategies, such as recycled content, carbon capture in cement production, and the use of captured carbon to produce aggregates and cure concrete.

    “In the ambitious scenario,” explains Kirchain, “we went beyond projected trends and explored reasonable changes that exceed current policies and [industry] commitments.”

    Here, the building sector strategies were the same, but implemented more aggressively. The pavements sector also abided by more aggressive targets and incorporated several novel strategies, including investing more to yield smoother roads, selectively applying concrete overlays to produce stiffer pavements, and introducing more reflective pavements — which can change the Earth’s energy balance by sending more energy out of the atmosphere.

    Results

    As the grid becomes greener and new homes and buildings become more efficient, many experts have predicted that the operational impacts of new construction projects will shrink in comparison to their embodied emissions.

    “What our life-cycle assessment found,” says Jeremy Gregory, the executive director of the MIT Climate Consortium and the lead author on the paper, “is that [this prediction] isn’t necessarily the case.”

    “Instead, we found that more than 80 percent of the total emissions from new buildings and pavements between 2016 and 2050 would derive from their operation.”

    In fact, the study found that operations will create the majority of emissions through 2050 unless all energy sources — electrical and thermal — are carbon-neutral by 2040. This suggests that ambitious interventions to the electricity grid and other sources of operational emissions can have the greatest impact.

    Their predictions for emissions reductions generated additional insights.  

    For the building sector, they found that the projected scenario would lead to a reduction of 49 percent compared to 2016 levels, and that the ambitious scenario provided a 57 percent reduction.

    As most buildings during the analysis period were existing rather than new, energy consumption dominated emissions in both scenarios. Consequently, decarbonizing the electricity grid and improving the efficiency of appliances and lighting led to the greatest improvements for buildings, they found.

    In contrast to the building sector, the pavements scenarios had a sizeable gulf between outcomes: the projected scenario led to only a 14 percent reduction while the ambitious scenario had a 65 percent reduction — enough to meet U.S. Paris Accord targets for that sector. This gulf derives from the lack of GHG reduction strategies being pursued under current projections.

    “The gap between the pavement scenarios shows that we need to be more proactive in managing the GHG impacts from pavements,” explains Kirchain. “There is tremendous potential, but seeing those gains requires action now.”

    These gains from both ambitious scenarios could occur even as concrete use tripled over the analysis period in comparison to the projected scenarios — a reflection of not only concrete’s growing demand but its potential role in decarbonizing both sectors.

    Though only one of their reduction scenarios (the ambitious pavement scenario) met the Paris Accord targets, that doesn’t preclude the achievement of those targets: many other opportunities exist.

    “In this study, we focused on mainly embodied reductions for concrete,” explains Gregory. “But other construction materials could receive similar treatment.

    “Further reductions could also come from retrofitting existing buildings and by designing structures with durability, hazard resilience, and adaptability in mind in order to minimize the need for reconstruction.”

    This study answers a paradox in the field of sustainability. For the world to become more equitable, more development is necessary. And yet, that very same development may portend greater emissions.

    The MIT team found that isn’t necessarily the case. Even as America continues to use more concrete, the benefits of the material itself and the interventions made to it can make climate targets more achievable.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.