More stories

  • Making roadway spending more sustainable

    The share of federal spending on infrastructure has reached an all-time low, falling from 30 percent in 1960 to just 12 percent in 2018.

    While the nation’s ailing infrastructure will require more funding to reach its full potential, recent MIT research finds that more sustainable and higher performing roads are still possible even with today’s limited budgets.

    The research, conducted by a team of current and former MIT Concrete Sustainability Hub (MIT CSHub) scientists and published in Transportation Research D, finds that a set of innovative planning strategies could improve pavement network environmental and performance outcomes even if budgets don’t increase.

    The paper presents a novel budget allocation tool and pairs it with three innovative strategies for managing pavement networks: a mix of paving materials, a mix of short- and long-term paving actions, and a long evaluation period for those actions.

    This novel approach offers numerous benefits. When applied to a 30-year case study of the Iowa U.S. Route network, the MIT CSHub model and management strategies cut emissions by 20 percent while sustaining current levels of road quality. Achieving this with a conventional planning approach would require the state to spend 32 percent more than it does today. The key to its success is the consideration of a fundamental — but fraught — aspect of pavement asset management: uncertainty.

    Predicting unpredictability

    The average road must last many years and support the traffic of thousands — if not millions — of vehicles. Over that time, a lot can change. Material prices may fluctuate, budgets may tighten, and traffic levels may intensify. Climate (and climate change), too, can hasten the need for unexpected repairs.

    Managing these uncertainties effectively means looking long into the future and anticipating possible changes.

    “Capturing the impacts of uncertainty is essential for making effective paving decisions,” explains Fengdi Guo, the paper’s lead author and a departing CSHub research assistant.

    “Yet, measuring and relating these uncertainties to outcomes is also computationally intensive and expensive. Consequently, many DOTs [departments of transportation] are forced to simplify their analysis to plan maintenance — often resulting in suboptimal spending and outcomes.”

    To give DOTs accessible tools to factor uncertainties into their planning, CSHub researchers have developed a streamlined planning approach. It offers greater specificity and is paired with several new pavement management strategies.

    The planning approach, known as Probabilistic Treatment Path Dependence (PTPD), is based on machine learning and was devised by Guo.

    “Our PTPD model is composed of four steps,” he explains. “These steps are, in order, pavement damage prediction; treatment cost prediction; budget allocation; and pavement network condition evaluation.”

    The model begins by investigating every segment in an entire pavement network and predicting future possibilities for pavement deterioration, cost, and traffic.

    “We [then] run thousands of simulations for each segment in the network to determine the likely cost and performance outcomes for each initial and subsequent sequence, or ‘path,’ of treatment actions,” says Guo. “The treatment paths with the best cost and performance outcomes are selected for each segment, and then across the network.”
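
    As a rough illustration of that selection step (the paper's code is not reproduced here), the Python sketch below runs Monte Carlo simulations over two hypothetical treatment paths for a single segment and picks the one with the lowest expected cost. The path definitions, cost figures, and uncertainty ranges are invented placeholders, not CSHub values.

    ```python
    # Illustrative sketch only -- not the CSHub PTPD code. The treatment paths,
    # costs, and uncertainty ranges below are hypothetical placeholders.
    import random

    TREATMENT_PATHS = {
        "thin_asphalt_overlays": [("thin_overlay", 5), ("thin_overlay", 5), ("reconstruct", 20)],
        "concrete_overlay_long_life": [("concrete_overlay", 20), ("minor_repair", 10)],
    }

    def expected_path_cost(path, n_sims=1000):
        """Monte Carlo estimate of agency plus user cost for one treatment path."""
        totals = []
        for _ in range(n_sims):
            total = 0.0
            for action, service_life in path:
                unit_cost = random.gauss(100.0, 15.0)   # uncertain material price
                traffic = random.gauss(1.0, 0.2)        # uncertain traffic growth
                agency_cost = unit_cost * (30.0 / service_life)
                user_cost = 8.0 * max(traffic, 0.0)     # excess fuel from roughness
                total += agency_cost + user_cost
            totals.append(total)
        return sum(totals) / len(totals)

    def best_path_for_segment():
        """Pick the treatment path with the lowest expected cost for one segment."""
        scores = {name: expected_path_cost(p) for name, p in TREATMENT_PATHS.items()}
        return min(scores, key=scores.get), scores

    if __name__ == "__main__":
        best, scores = best_path_for_segment()
        print("Expected costs:", scores)
        print("Selected path: ", best)
    ```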

    The PTPD model seeks to minimize costs not only to agencies but also to users — in this case, drivers. These user costs come primarily in the form of excess fuel consumption due to poor road quality.

    “One improvement in our analysis is the incorporation of electric vehicle uptake into our cost and environmental impact predictions,” says Randolph Kirchain, a principal research scientist at the MIT CSHub and the MIT Materials Research Laboratory (MRL) and one of the paper’s co-authors. “Since the vehicle fleet will change over the next several decades due to electric vehicle adoption, we made sure to consider how these changes might impact our predictions of excess energy consumption.”

    After developing the PTPD model, Guo wanted to see how the efficacy of various pavement management strategies might differ. To do this, he developed a sophisticated deterioration prediction model.

    A novel aspect of this deterioration model is its treatment of multiple deterioration metrics at once. Using a multi-output neural network, a tool of artificial intelligence, the model can predict several forms of pavement deterioration simultaneously, thereby accounting for the correlations among them.
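
    The article does not describe the network's architecture, so the following is only a generic sketch of multi-output regression in Python: a single model with several outputs learns correlated deterioration metrics jointly. The input features, the two metrics (roughness and cracking), and the synthetic data are illustrative assumptions, not the CSHub model.

    ```python
    # Generic multi-output regression sketch -- not the CSHub deterioration model.
    # Features, targets, and data are synthetic, for illustration only.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)
    n = 500

    # Hypothetical inputs: pavement age (years), traffic load factor, thickness (mm)
    X = np.column_stack([
        rng.uniform(0, 30, n),
        rng.uniform(0.5, 2.0, n),
        rng.uniform(150, 350, n),
    ])

    # Two correlated deterioration metrics (e.g., roughness and cracking),
    # generated here only so the example runs end to end.
    roughness = 0.05 * X[:, 0] * X[:, 1] + rng.normal(0, 0.1, n)
    cracking = 0.8 * roughness + 0.01 * X[:, 0] + rng.normal(0, 0.05, n)
    y = np.column_stack([roughness, cracking])

    # One network with two output units learns both metrics at once,
    # which lets it exploit the correlation between them.
    model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
    model.fit(X, y)

    print(model.predict(X[:3]))  # predicted [roughness, cracking] for three segments
    ```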

    The MIT team selected two key metrics to compare the effectiveness of various treatment paths: pavement quality and greenhouse gas emissions. These metrics were then calculated for all pavement segments in the Iowa network.

    Improvement through variation

     The MIT model can help DOTs make better decisions, but that decision-making is ultimately constrained by the potential options considered.

    Guo and his colleagues, therefore, sought to expand current decision-making paradigms by exploring a broad set of network management strategies and evaluating them with their PTPD approach. Based on that evaluation, the team discovered that networks had the best outcomes when the management strategy includes using a mix of paving materials, a variety of long- and short-term paving repair actions (treatments), and longer time periods on which to base paving decisions.

    They then compared this proposed approach with a baseline management approach that reflects current, widespread practices: the use of solely asphalt materials, short-term treatments, and a five-year period for evaluating the outcomes of paving actions.

    With these two approaches established, the team used them to plan 30 years of maintenance across the Iowa U.S. Route network. They then measured the subsequent road quality and emissions.

    Their case study found that the MIT approach offered substantial benefits. Pavement-related greenhouse gas emissions would fall by around 20 percent across the network over the whole period. Pavement performance improved as well. To achieve the same level of road quality as the MIT approach, the baseline approach would need a 32 percent greater budget.

    “It’s worth noting,” says Guo, “that since conventional practices employ less effective allocation tools, the difference between them and the CSHub approach should be even larger in practice.”

    Much of the improvement derived from the precision of the CSHub planning model. But the three treatment strategies also play a key role.

    “We’ve found that a mix of asphalt and concrete paving materials allows DOTs to not only find materials best-suited to certain projects, but also mitigates the risk of material price volatility over time,” says Kirchain.

    It’s a similar story with a mix of paving actions. Employing a mix of short- and long-term fixes gives DOTs the flexibility to choose the right action for the right project.

    The final strategy, a long-term evaluation period, enables DOTs to see the entire scope of their choices. If the ramifications of a decision are predicted over only five years, many long-term implications won’t be considered. Expanding the window for planning, then, can introduce beneficial, long-term options.

    It’s not surprising that paving decisions are daunting to make; their impacts on the environment, driver safety, and budget levels are long-lasting. But rather than simplify this fraught process, the CSHub method aims to reflect its complexity. The result is an approach that provides DOTs with the tools to do more with less.

    This research was supported through the MIT Concrete Sustainability Hub by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • A new method for removing lead from drinking water

    Engineers at MIT have developed a new approach to removing lead or other heavy-metal contaminants from water, in a process that they say is far more energy-efficient than any other currently used system, though there are others under development that come close. Ultimately, it might be used to treat lead-contaminated water supplies at the home level, or to treat contaminated water from some chemical or industrial processes.

    The new system is the latest in a series of applications based on initial findings six years ago by members of the same research team, initially developed for desalination of seawater or brackish water, and later adapted for removing radioactive compounds from the cooling water of nuclear power plants. The new version is the first such method that might be applicable for treating household water supplies, as well as industrial uses.

    The findings are published today in the journal Environmental Science and Technology – Water, in a paper by MIT graduate students Huanhuan Tian, Mohammad Alkhadra, and Kameron Conforti, and professor of chemical engineering Martin Bazant.

    “It’s notoriously difficult to remove toxic heavy metal that’s persistent and present in a lot of different water sources,” Alkhadra says. “Obviously there are competing methods today that do this function, so it’s a matter of which method can do it at lower cost and more reliably.”

    The biggest challenge in trying to remove lead is that it is generally present in such tiny concentrations, vastly exceeded by other elements or compounds. For example, sodium is typically present in drinking water at a concentration of tens of parts per million, whereas lead can be highly toxic at just a few parts per billion. Most existing processes, such as reverse osmosis or distillation, remove everything at once, Alkhadra explains. This not only takes much more energy than would be needed for a selective removal, but it’s counterproductive since small amounts of elements such as sodium and magnesium are actually essential for healthy drinking water.

    The new approach is to use a process called shock electrodialysis, in which an electric field is used to produce a shockwave inside a pipe carrying the contaminated water. The shockwave separates the liquid into two streams, selectively pulling certain electrically charged atoms, or ions, toward one side of the flow by tuning the properties of the shockwave to match the target ions, while leaving a stream of relatively pure water on the other side. The stream containing the concentrated lead ions can then be easily separated out using a mechanical barrier in the pipe.

    In principle, “this makes the process much cheaper,” Bazant says, “because the electrical energy that you’re putting in to do the separation is really going after the high-value target, which is the lead. You’re not wasting a lot of energy removing the sodium.” Because the lead is present at such low concentration, “there’s not a lot of current involved in removing those ions, so this can be a very cost-effective way.”
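
    A back-of-the-envelope estimate (not from the paper) makes the point concrete. By Faraday's law, the minimum charge that must be moved scales with the molar concentration of the target ion, so removing lead at parts-per-billion levels takes a tiny fraction of the current that would be needed to move sodium at parts-per-million levels. The concentrations and flow rate below are assumed for illustration.

    ```python
    # Back-of-the-envelope estimate (not from the paper): the minimum ionic
    # current needed to pull all of a given ion out of a 1 L/min stream.
    FARADAY = 96485.0  # coulombs per mole of charge

    def min_current_amps(conc_mg_per_L, molar_mass_g, charge, flow_L_per_min=1.0):
        moles_per_s = (conc_mg_per_L / 1000.0 / molar_mass_g) * (flow_L_per_min / 60.0)
        return charge * FARADAY * moles_per_s

    # Assumed concentrations: lead at 15 ppb (0.015 mg/L), sodium at 20 ppm.
    lead_current = min_current_amps(0.015, 207.2, charge=2)
    sodium_current = min_current_amps(20.0, 22.99, charge=1)

    print(f"Lead:   {lead_current * 1000:.3f} mA")    # a fraction of a milliamp
    print(f"Sodium: {sodium_current * 1000:.0f} mA")  # thousands of times more
    ```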

    The process still has its limitations, as it has only been demonstrated at small laboratory scale and at quite slow flow rates. Scaling up the process to make it practical for in-home use will require further research, and larger-scale industrial uses will take even longer. But it could be practical within a few years for some home-based systems, Bazant says.

    For example, a home whose water supply is heavily contaminated with lead might have a system in the cellar that slowly processes a stream of water, filling a tank with lead-free water to be used for drinking and cooking, while leaving most of the water untreated for uses like toilet flushing or watering the lawn. Such uses might be appropriate as an interim measure for places like Flint, Michigan, where the water, mostly contaminated by the distribution pipes, will take many years to remediate through pipe replacements.

    The process could also be adapted for some industrial uses such as cleaning water produced in mining or drilling operations, so that the treated water can be safely disposed of or reused. And in some cases, this could also provide a way of recovering metals that contaminate water but could actually be a valuable product if they were separated out; for example, some such minerals could be used to process semiconductors or pharmaceuticals or other high-tech products, the researchers say.

    Direct comparisons of the economics of such a system versus existing methods are difficult, Bazant says, because in filtration systems, for example, the costs are mainly for replacing the filter materials, which quickly clog up and become unusable, whereas in this system the costs are mostly for the ongoing energy input, which is very small. At this point, the shock electrodialysis system has been operated for several weeks, but it’s too soon to estimate the real-world longevity of such a system, he says.

    Developing the process into a scalable commercial product will take some time, but “we have shown how this could be done, from a technical standpoint,” Bazant says. “The main issue would be on the economic side,” he adds. That includes figuring out the most appropriate applications and developing specific configurations that would meet those uses. “We do have a reasonable idea of how to scale this up. So it’s a question of having the resources,” which might be a role for a startup company rather than an academic research lab, he adds.

    “I think this is an exciting result,” he says, “because it shows that we really can address this important application” of cleaning the lead from drinking water. For example, he says, there are places now that perform desalination of seawater using reverse osmosis, but they have to run this expensive process twice in a row, first to get the salt out, and then again to remove the low-level but highly toxic contaminants like lead. This new process might be used instead of the second round of reverse osmosis, at a far lower expenditure of energy.

    The research received support from a MathWorks Engineering Fellowship and a fellowship awarded by MIT’s Abdul Latif Jameel Water and Food Systems Lab, funded by Xylem, Inc.

  • Study: Global cancer risk from burning organic matter comes from unregulated chemicals

    Whenever organic matter is burned, such as in a wildfire, a power plant, a car’s exhaust, or in daily cooking, the combustion releases polycyclic aromatic hydrocarbons (PAHs) — a class of pollutants that is known to cause lung cancer.

    There are more than 100 known types of PAH compounds emitted daily into the atmosphere. Regulators, however, have historically relied on measurements of a single compound, benzo(a)pyrene, to gauge a community’s risk of developing cancer from PAH exposure. Now MIT scientists have found that benzo(a)pyrene may be a poor indicator of this type of cancer risk.

    In a modeling study appearing today in the journal GeoHealth, the team reports that benzo(a)pyrene plays a small part — about 11 percent — in the global risk of developing PAH-associated cancer. Instead, 89 percent of that cancer risk comes from other PAH compounds, many of which are not directly regulated.

    Interestingly, about 17 percent of PAH-associated cancer risk comes from “degradation products” — chemicals that are formed when emitted PAHs react in the atmosphere. Many of these degradation products can in fact be more toxic than the emitted PAH from which they formed.

    The team hopes the results will encourage scientists and regulators to look beyond benzo(a)pyrene, to consider a broader class of PAHs when assessing a community’s cancer risk.

    “Most of the regulatory science and standards for PAHs are based on benzo(a)pyrene levels. But that is a big blind spot that could lead you down a very wrong path in terms of assessing whether cancer risk is improving or not, and whether it’s relatively worse in one place than another,” says study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society, and the Department of Earth, Atmospheric and Planetary Sciences.

    Selin’s MIT co-authors include Jesse Kroll, Amy Hrdina, Ishwar Kohale, Forest White, and Bevin Engelward, and Jamie Kelly (who is now at University College London). Peter Ivatt and Mathew Evans at the University of York are also co-authors.

    Chemical pixels

    Benzo(a)pyrene has historically been the poster chemical for PAH exposure. The compound’s indicator status is largely based on early toxicology studies. But recent research suggests the chemical may not be the PAH representative that regulators have long relied upon.   

    “There has been a bit of evidence suggesting benzo(a)pyrene may not be very important, but this was from just a few field studies,” says Kelly, a former postdoc in Selin’s group and the study’s lead author.

    Kelly and his colleagues instead took a systematic approach to evaluate benzo(a)pyrene’s suitability as a PAH indicator. The team began by using GEOS-Chem, a global, three-dimensional chemical transport model that breaks the world into individual grid boxes and simulates within each box the reactions and concentrations of chemicals in the atmosphere.

    They extended this model to include chemical descriptions of how various PAH compounds, including benzo(a)pyrene, would react in the atmosphere. The team then plugged in recent data from emissions inventories and meteorological observations, and ran the model forward to simulate the concentrations of various PAH chemicals around the world over time.
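
    GEOS-Chem itself is a large research code, but the bookkeeping a chemical transport model performs can be sketched in a few lines: a row of grid boxes with emissions, first-order chemical loss, and transport to neighboring boxes. The grid, rates, and time steps below are toy values for illustration, not GEOS-Chem or the study's configuration.

    ```python
    # Conceptual toy model -- this is NOT GEOS-Chem. A 1-D row of grid boxes with
    # emission, first-order chemical loss, and simple downwind transport.
    import numpy as np

    n_boxes, n_steps, dt = 20, 500, 600.0   # 20 boxes, 600-second steps
    conc = np.zeros(n_boxes)                # PAH concentration per box (arbitrary units)
    emission = np.zeros(n_boxes)
    emission[3] = 1e-3                      # a single source in box 3 (assumed)
    k_loss = 2e-5                           # first-order loss rate, 1/s (assumed)
    wind_frac = 0.1                         # fraction advected downwind per step

    for _ in range(n_steps):
        conc += emission * dt               # add emissions
        conc *= np.exp(-k_loss * dt)        # chemical degradation in the atmosphere
        moved = wind_frac * conc
        conc -= moved
        conc[1:] += moved[:-1]              # advect downwind; box 0 receives no inflow

    print(np.round(conc, 3))                # concentrations after roughly 3.5 days
    ```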

    Risky reactions

    In their simulations, the researchers started with 16 relatively well-studied PAH chemicals, including benzo(a)pyrene, and traced the concentrations of these chemicals, plus the concentrations of their degradation products, over two generations, or chemical transformations. In total, the team evaluated 48 PAH species.

    They then compared these concentrations with actual concentrations of the same chemicals, recorded by monitoring stations around the world. The agreement was close enough to show that the model’s concentration predictions were realistic.

    Then, within each grid box of the model, the researchers related the concentration of each PAH chemical to its associated cancer risk; to do this, they had to develop a new method, based on previous studies in the literature, to avoid double-counting risk from the different chemicals. Finally, they overlaid population density maps to predict the number of cancer cases globally, based on the concentration and toxicity of a specific PAH chemical in each location.

    Dividing the cancer cases by population produced the cancer risk associated with that chemical. In this way, the team calculated the cancer risk for each of the 48 compounds, then determined each chemical’s individual contribution to the total risk.
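
    The arithmetic of that final step can be shown with a toy example. The unit-risk factors, concentrations, and population below are invented numbers rather than the study's data, and the correction for double-counting is omitted.

    ```python
    # Toy illustration of the risk bookkeeping described above -- not the study's
    # data or method. All numbers are invented for the example.
    concentrations = {            # ng per cubic meter in one hypothetical grid box
        "benzo(a)pyrene": 0.10,
        "other_parent_PAHs": 0.60,
        "degradation_products": 0.05,
    }
    unit_risk = {                 # lifetime cancer risk per (ng per cubic meter), assumed
        "benzo(a)pyrene": 8.7e-5,
        "other_parent_PAHs": 6.0e-4,
        "degradation_products": 2.0e-3,   # some products are far more toxic
    }
    population = 1_000_000

    cases = {c: concentrations[c] * unit_risk[c] * population for c in concentrations}
    total = sum(cases.values())

    for chem, n in cases.items():
        print(f"{chem:22s} {n:8.1f} cases  ({100 * n / total:4.1f}% of total risk)")
    print(f"Per-person lifetime risk: {total / population:.2e}")
    ```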

    This analysis revealed that benzo(a)pyrene had a surprisingly small contribution, of about 11 percent, to the overall risk of developing cancer from PAH exposure globally. Eighty-nine percent of cancer risk came from other chemicals. And 17 percent of this risk arose from degradation products.

    “We see places where you can find concentrations of benzo(a)pyrene are lower, but the risk is higher because of these degradation products,” Selin says. “These products can be orders of magnitude more toxic, so the fact that they’re at tiny concentrations doesn’t mean you can write them off.”

    When the researchers compared calculated PAH-associated cancer risks around the world, they found significant differences depending on whether that risk calculation was based solely on concentrations of benzo(a)pyrene or on a region’s broader mix of PAH compounds.

    “If you use the old method, you would find the lifetime cancer risk is 3.5 times higher in Hong Kong versus southern India, but taking into account the differences in PAH mixtures, you get a difference of 12 times,” Kelly says. “So, there’s a big difference in the relative cancer risk between the two places. And we think it’s important to expand the group of compounds that regulators are thinking about, beyond just a single chemical.”

    The team’s study “provides an excellent contribution to better understanding these ubiquitous pollutants,” says Elisabeth Galarneau, an air quality expert and PhD research scientist in Canada’s Department of the Environment. “It will be interesting to see how these results compare to work being done elsewhere … to pin down which (compounds) need to be tracked and considered for the protection of human and environmental health.”

    This research was conducted in MIT’s Superfund Research Center and is supported in part by the National Institute of Environmental Health Sciences Superfund Basic Research Program, and the National Institutes of Health.

  • Research collaboration puts climate-resilient crops in sight

    Any houseplant owner knows that changes in the amount of water or sunlight a plant receives can put it under immense stress. A dying plant brings certain disappointment to anyone with a green thumb. 

    But for farmers who make their living by successfully growing plants, and whose crops may nourish hundreds or thousands of people, the devastation of failing flora is that much greater. As climate change is poised to cause increasingly unpredictable weather patterns globally, crops may be subject to more extreme environmental conditions like droughts, fluctuating temperatures, floods, and wildfire. 

    Climate scientists and food systems researchers worry about the stress climate change may put on crops, and on global food security. In an ambitious interdisciplinary project funded by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), David Des Marais, the Gale Assistant Professor in the Department of Civil and Environmental Engineering at MIT, and Caroline Uhler, an associate professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, are investigating how plant genes communicate with one another under stress. Their research results can be used to breed plants more resilient to climate change.

    Crops in trouble

    Governing plants’ responses to environmental stress are gene regulatory networks, or GRNs, which guide the development and behaviors of living things. A GRN may comprise thousands of genes and proteins that all communicate with one another. GRNs help a particular cell, tissue, or organism respond to environmental changes by signaling certain genes to turn their expression on or off.

    Even seemingly minor or short-term changes in weather patterns can have large effects on crop yield and food security. An environmental trigger, like a lack of water during a crucial phase of plant development, can turn a gene on or off, and is likely to affect many others in the GRN. For example, without water, a gene enabling photosynthesis may switch off. This can create a domino effect, where the genes that rely on those regulating photosynthesis are silenced, and the cycle continues. As a result, when photosynthesis is halted, the plant may experience other detrimental side effects, like no longer being able to reproduce or defend against pathogens. The chain reaction could even kill a plant before it has the chance to be revived by a big rain.

    Des Marais says he wishes there was a way to stop those genes from completely shutting off in such a situation. To do that, scientists would need to better understand how exactly gene networks respond to different environmental triggers. Bringing light to this molecular process is exactly what he aims to do in this collaborative research effort.

    Solving complex problems across disciplines

    Despite their crucial importance, GRNs are difficult to study because of how complex and interconnected they are. Usually, to understand how a particular gene is affecting others, biologists must silence one gene and see how the others in the network respond. 

    For years, scientists have aspired to an algorithm that could synthesize the massive amount of information contained in GRNs to “identify correct regulatory relationships among genes,” according to a 2019 article in the Encyclopedia of Bioinformatics and Computational Biology. 

    “A GRN can be seen as a large causal network, and understanding the effects that silencing one gene has on all other genes requires understanding the causal relationships among the genes,” says Uhler. “These are exactly the kinds of algorithms my group develops.”
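
    As a minimal illustration of that framing (and not the causal-inference algorithms Uhler's group develops), a GRN can be encoded as a directed graph, and the genes potentially affected by silencing one gene are simply everything reachable downstream of it. The gene names and edges below are invented.

    ```python
    # Toy gene regulatory network as a directed graph -- not the group's algorithms.
    # Gene names and regulatory edges are invented for illustration.
    from collections import deque

    # gene -> genes whose expression it regulates (hypothetical)
    grn = {
        "photosynthesis_A": ["sugar_transport", "growth_B"],
        "sugar_transport": ["reproduction_C"],
        "growth_B": ["pathogen_defense"],
        "reproduction_C": [],
        "pathogen_defense": [],
    }

    def downstream_of(grn, silenced):
        """All genes reachable from a silenced gene, i.e., potentially affected."""
        affected, queue = set(), deque([silenced])
        while queue:
            for target in grn.get(queue.popleft(), []):
                if target not in affected:
                    affected.add(target)
                    queue.append(target)
        return affected

    # Silencing the photosynthesis gene cascades to all four downstream genes.
    print(downstream_of(grn, "photosynthesis_A"))
    ```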

    Des Marais and Uhler’s project aims to unravel these complex communication networks and discover how to breed crops that are more resilient to the increased droughts, flooding, and erratic weather patterns that climate change is already causing globally.

    Climate change is not the only pressure on the food system: by 2050, the world will demand 70 percent more food to feed a booming population. “Food systems challenges cannot be addressed individually in disciplinary or topic area silos,” says Greg Sixt, J-WAFS’ research manager for climate and food systems. “They must be addressed in a systems context that reflects the interconnected nature of the food system.”

    Des Marais’ background is in biology, and Uhler’s in statistics. “Dave’s project with Caroline was essentially experimental,” says Renee J. Robins, J-WAFS’ executive director. “This kind of exploratory research is exactly what the J-WAFS seed grant program is for.”

    Getting inside gene regulatory networks

    Des Marais and Uhler’s work begins in a windowless basement on MIT’s campus, where 300 genetically identical Brachypodium distachyon plants grow in large, temperature-controlled chambers. The plant, which contains more than 30,000 genes, is a good model for studying important cereal crops like wheat, barley, maize, and millet. For three weeks, all plants receive the same temperature, humidity, light, and water. Then, half are slowly tapered off water, simulating drought-like conditions.

    Six days into the forced drought, the plants are clearly suffering. Des Marais’ PhD student Jie Yun takes tissues from 50 hydrated and 50 dry plants, freezes them in liquid nitrogen to immediately halt metabolic activity, grinds them up into a fine powder, and chemically separates the genetic material. The genes from all 100 samples are then sequenced at a lab across the street.

    The team is left with a spreadsheet listing the 30,000 genes found in each of the 100 plants at the moment they were frozen, and how many copies there were. Uhler’s PhD student Anastasiya Belyaeva inputs the massive spreadsheet into the computer program she developed and runs her novel algorithm. Within a few hours, the group can see which genes were most active in one condition over another, how the genes were communicating, and which were causing changes in others. 
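
    A minimal sketch of that kind of comparison, assuming a genes-by-samples count matrix like the one described: the data here are randomly generated, and this simple fold-change ranking stands in for, but is not, the group's causal algorithm.

    ```python
    # Minimal differential-expression sketch on a random count matrix -- not the
    # group's algorithm; it only illustrates the shape of the comparison.
    import numpy as np

    rng = np.random.default_rng(1)
    n_genes = 30_000
    watered = rng.poisson(50, size=(n_genes, 50))   # counts: genes x 50 watered plants
    drought = rng.poisson(50, size=(n_genes, 50))   # counts: genes x 50 droughted plants

    # Log2 fold change of mean expression, drought versus watered, per gene
    eps = 1.0
    log2_fc = np.log2((drought.mean(axis=1) + eps) / (watered.mean(axis=1) + eps))

    # Genes most strongly up- or down-regulated under drought
    top = np.argsort(np.abs(log2_fc))[-10:][::-1]
    for g in top:
        print(f"gene_{g}: log2 fold change = {log2_fc[g]:+.2f}")
    ```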

    The methodology captures important subtleties that could allow researchers to eventually alter gene pathways and breed more resilient crops. “When you expose a plant to drought stress, it’s not like there’s some canonical response,” Des Marais says. “There’s lots of things going on. It’s turning this physiologic process up, this one down, this one didn’t exist before, and now suddenly is turned on.” 

    In addition to Des Marais and Uhler’s research, J-WAFS has funded projects in food and water from researchers in 29 departments across all five MIT schools as well as the MIT Schwarzman College of Computing. J-WAFS seed grants typically fund seven to eight new projects every year.

    “The grants are really aimed at catalyzing new ideas, providing the sort of support [for MIT researchers] to be pushing boundaries, and also bringing in faculty who may have some interesting ideas that they haven’t yet applied to water or food concerns,” Robins says. “It’s an avenue for researchers all over the Institute to apply their ideas to water and food.”

    Alison Gold is a student in MIT’s Graduate Program in Science Writing.

  • Concrete’s role in reducing building and pavement emissions

    Encountering concrete is a common, even routine, occurrence. And that’s exactly what makes concrete exceptional.

    As the most consumed material after water, concrete is indispensable to the many essential systems — from roads to buildings — in which it is used.

    But due to its extensive use, concrete production also accounts for around 1 percent of emissions in the United States and remains one of several carbon-intensive industries globally. Tackling climate change, then, will mean reducing the environmental impacts of concrete, even as its use continues to increase.

    In a new paper in the Proceedings of the National Academy of Sciences, a team of current and former researchers at the MIT Concrete Sustainability Hub (CSHub) outlines how this can be achieved.

    They present an extensive life-cycle assessment of the building and pavements sectors that estimates how greenhouse gas (GHG) reduction strategies — including those for concrete and cement — could minimize the cumulative emissions of each sector and how those reductions would compare to national GHG reduction targets. 

    The team found that, if reduction strategies were implemented, the emissions for pavements and buildings between 2016 and 2050 could fall by up to 65 percent and 57 percent, respectively, even if concrete use accelerated greatly over that period. These are close to U.S. reduction targets set as part of the Paris Climate Accords. The solutions considered would also enable concrete production for both sectors to attain carbon neutrality by 2050.

    Despite continued grid decarbonization and increases in fuel efficiency, they found that the vast majority of the GHG emissions from new buildings and pavements during this period would derive from operational energy consumption rather than so-called embodied emissions — emissions from materials production and construction.

    Sources and solutions

    The consumption of concrete, due to its versatility, durability, constructability, and role in economic development, has been projected to increase around the world.

    While it is essential to consider the embodied impacts of ongoing concrete production, it is equally essential to place these initial impacts in the context of the material’s life cycle.

    Due to concrete’s unique attributes, it can influence the long-term sustainability performance of the systems in which it is used. Concrete pavements, for instance, can reduce vehicle fuel consumption, while concrete structures can endure hazards without needing energy- and materials-intensive repairs.

    Concrete’s impacts, then, are as complex as the material itself — a carefully proportioned mixture of cement powder, water, sand, and aggregates. Untangling concrete’s contribution to the operational and embodied impacts of buildings and pavements is essential for planning GHG reductions in both sectors.

    Set of scenarios

    In their paper, CSHub researchers forecast the potential greenhouse gas emissions from the building and pavements sectors as numerous emissions reduction strategies were introduced between 2016 and 2050.

    Since both of these sectors are immense and rapidly evolving, modeling them required an intricate framework.

    “We don’t have details on every building and pavement in the United States,” explains Randolph Kirchain, a research scientist at the Materials Research Laboratory and co-director of CSHub.

    “As such, we began by developing reference designs, which are intended to be representative of current and future buildings and pavements. These were adapted to be appropriate for 14 different climate zones in the United States and then distributed across the U.S. based on data from the U.S. Census and the Federal Highway Administration.”

    To reflect the complexity of these systems, their models had to have the highest resolutions possible.

    “In the pavements sector, we collected the current stock of the U.S. network based on high-precision 10-mile segments, along with the surface conditions, traffic, thickness, lane width, and number of lanes for each segment,” says Hessam AzariJafari, a postdoc at CSHub and a co-author on the paper.

    “To model future paving actions over the analysis period, we assumed four climate conditions; four road types; asphalt, concrete, and composite pavement structures; as well as major, minor, and reconstruction paving actions specified for each climate condition.”
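
    A small bookkeeping sketch of the scenario space described in that quote; the category labels below are assumptions for illustration, not the model's actual inputs.

    ```python
    # Enumerating the paving-scenario combinations mentioned above -- a bookkeeping
    # illustration only; the labels are assumed, not taken from the CSHub model.
    from itertools import product

    climates = ["cold_wet", "cold_dry", "warm_wet", "warm_dry"]        # assumed labels
    road_types = ["interstate", "US_route", "state_route", "local"]    # assumed labels
    structures = ["asphalt", "concrete", "composite"]
    actions = ["major", "minor", "reconstruction"]

    combos = list(product(climates, road_types, structures, actions))
    print(len(combos), "combinations")   # 4 x 4 x 3 x 3 = 144
    print(combos[0])
    ```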

    Using this framework, they analyzed a “projected” and an “ambitious” scenario of reduction strategies and system attributes for buildings and pavements over the 34-year analysis period. The scenarios were defined by the timing and intensity of GHG reduction strategies.

    As its name might suggest, the projected scenario reflected current trends. For the building sector, solutions encompassed expected grid decarbonization and improvements to building codes and energy efficiency that are currently being implemented across the country. For pavements, the sole projected solution was improvements to vehicle fuel economy. That’s because as vehicle efficiency continues to increase, excess vehicle emissions due to poor road quality will also decrease.

    Both the projected scenarios for buildings and pavements featured the gradual introduction of low-carbon concrete strategies, such as recycled content, carbon capture in cement production, and the use of captured carbon to produce aggregates and cure concrete.

    “In the ambitious scenario,” explains Kirchain, “we went beyond projected trends and explored reasonable changes that exceed current policies and [industry] commitments.”

    Here, the building sector strategies were the same, but implemented more aggressively. The pavements sector also abided by more aggressive targets and incorporated several novel strategies, including investing more to yield smoother roads, selectively applying concrete overlays to produce stiffer pavements, and introducing more reflective pavements — which can change the Earth’s energy balance by sending more energy out of the atmosphere.

    Results

    As the grid becomes greener and new homes and buildings become more efficient, many experts have predicted that the operational impacts of new construction projects will shrink in comparison to their embodied emissions.

    “What our life-cycle assessment found,” says Jeremy Gregory, the executive director of the MIT Climate Consortium and the lead author on the paper, “is that [this prediction] isn’t necessarily the case.”

    “Instead, we found that more than 80 percent of the total emissions from new buildings and pavements between 2016 and 2050 would derive from their operation.”

    In fact, the study found that operations will create the majority of emissions through 2050 unless all energy sources — electrical and thermal — are carbon-neutral by 2040. This suggests that ambitious interventions to the electricity grid and other sources of operational emissions can have the greatest impact.

    Their predictions for emissions reductions generated additional insights.  

    For the building sector, they found that the projected scenario would lead to a reduction of 49 percent compared to 2016 levels, and that the ambitious scenario provided a 57 percent reduction.

    As most buildings during the analysis period were existing rather than new, energy consumption dominated emissions in both scenarios. Consequently, decarbonizing the electricity grid and improving the efficiency of appliances and lighting led to the greatest improvements for buildings, they found.

    In contrast to the building sector, the pavements scenarios had a sizeable gulf between outcomes: the projected scenario led to only a 14 percent reduction while the ambitious scenario had a 65 percent reduction — enough to meet U.S. Paris Accord targets for that sector. This gulf derives from the lack of GHG reduction strategies being pursued under current projections.

    “The gap between the pavement scenarios shows that we need to be more proactive in managing the GHG impacts from pavements,” explains Kirchain. “There is tremendous potential, but seeing those gains requires action now.”

    These gains from both ambitious scenarios could occur even as concrete use tripled over the analysis period in comparison to the projected scenarios — a reflection of not only concrete’s growing demand but its potential role in decarbonizing both sectors.

    Though only one of their reduction scenarios (the ambitious pavement scenario) met the Paris Accord targets, that doesn’t preclude the achievement of those targets: many other opportunities exist.

    “In this study, we focused on mainly embodied reductions for concrete,” explains Gregory. “But other construction materials could receive similar treatment.

    “Further reductions could also come from retrofitting existing buildings and by designing structures with durability, hazard resilience, and adaptability in mind in order to minimize the need for reconstruction.”

    This study answers a paradox in the field of sustainability. For the world to become more equitable, more development is necessary. And yet, that very same development may portend greater emissions.

    The MIT team found that isn’t necessarily the case. Even as America continues to use more concrete, the benefits of the material itself and the interventions made to it can make climate targets more achievable.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • 3 Questions: Daniel Cohn on the benefits of high-efficiency, flexible-fuel engines for heavy-duty trucking

    The California Air Resources Board has adopted a regulation that requires truck and engine manufacturers to reduce the nitrogen oxide (NOx) emissions from new heavy-duty trucks by 90 percent starting in 2027. NOx from heavy-duty trucks is one of the main sources of air pollution, creating smog and threatening respiratory health. This regulation requires the largest air pollution cuts in California in more than a decade. How can manufacturers achieve this aggressive goal efficiently and affordably?

    Daniel Cohn, a research scientist at the MIT Energy Initiative, and Leslie Bromberg, a principal research scientist at the MIT Plasma Science and Fusion Center, have been working on a high-efficiency, gasoline-ethanol engine that is cleaner and more cost-effective than existing diesel engine technologies. Here, Cohn explains the flexible-fuel engine approach and why it may be the most realistic solution — in the near term — to help California meet its stringent vehicle emission reduction goals. The research was sponsored by the Arthur Samberg MIT Energy Innovation fund.

    Q. How does your high-efficiency, flexible-fuel gasoline engine technology work?

    A. Our goal is to provide an affordable solution for heavy-duty vehicle (HDV) engines to emit low levels of nitrogen oxide (NOx) emissions that would meet California’s NOx regulations, while also quick-starting gasoline-consumption reductions in a substantial fraction of the HDV fleet.

    Presently, large trucks and other HDVs generally use diesel engines. The main reason is their high efficiency, which reduces fuel cost — a key factor for commercial trucks (especially long-haul trucks) because of the large number of miles that are driven. However, the NOx emissions from these diesel-powered vehicles are around 10 times greater than those from spark-ignition engines powered by gasoline or ethanol.

    Spark-ignition gasoline engines are primarily used in cars and light trucks (light-duty vehicles), which employ a three-way catalyst exhaust treatment system (generally referred to as a catalytic converter) that reduces vehicle NOx emissions by at least 98 percent and at a modest cost. The use of this highly effective exhaust treatment system is enabled by the capability of spark-ignition engines to be operated at a stoichiometric air/fuel ratio (where the amount of air matches what is needed for complete combustion of the fuel).
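
    A standard textbook calculation, using octane as a stand-in for gasoline, shows what that stoichiometric air/fuel ratio works out to for these fuels; the snippet below is illustrative and is not drawn from the interview.

    ```python
    # Textbook stoichiometric air/fuel ratios, with octane as a gasoline surrogate.
    AIR_PER_KMOL_O2 = 32.0 + 3.76 * 28.0   # kg of air carrying 1 kmol of O2 (N2:O2 = 3.76:1)

    def stoich_afr(carbon, hydrogen, oxygen, fuel_molar_mass):
        """Mass of air per mass of fuel for complete combustion of CxHyOz."""
        o2_needed = carbon + hydrogen / 4.0 - oxygen / 2.0   # kmol O2 per kmol fuel
        return o2_needed * AIR_PER_KMOL_O2 / fuel_molar_mass

    print(f"Octane  (C8H18): {stoich_afr(8, 18, 0, 114.2):.1f} : 1")   # about 15 : 1
    print(f"Ethanol (C2H6O): {stoich_afr(2, 6, 1, 46.07):.1f} : 1")    # about 9 : 1
    ```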

    Diesel engines do not operate with stoichiometric air/fuel ratios, making it much more difficult to reduce NOx emissions. Their state-of-the-art exhaust treatment system is much more complex and expensive than catalytic converters, and even with it, vehicles produce NOx emissions around 10 times higher than spark-ignition engine vehicles. Consequently, it is very challenging for diesel engines to further reduce their NOx emissions to meet the new California regulations.

    Our approach uses spark-ignition engines that can be powered by gasoline, ethanol, or mixtures of gasoline and ethanol as a substitute for diesel engines in HDVs. Gasoline has the attractive feature of being widely available and having a comparable or lower cost than diesel fuel. In addition, presently available ethanol in the U.S. produces up to 40 percent less greenhouse gas (GHG) emissions than diesel fuel or gasoline and has a widely available distribution system.

    To make gasoline- and/or ethanol-powered spark-ignition engine HDVs attractive for widespread HDV applications, we developed ways to make spark-ignition engines more efficient, so their fuel costs are more palatable to owners of heavy-duty trucks. Our approach provides diesel-like high efficiency and high power in gasoline-powered engines by using various methods to prevent engine knock (unwanted self-ignition that can damage the engine) in spark-ignition gasoline engines. This enables greater levels of turbocharging and use of higher engine compression ratios. These features provide high efficiency, comparable to that provided by diesel engines. Plus, when the engine is powered by ethanol, the required knock resistance is provided by the intrinsic high knock resistance of the fuel itself. 
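
    The efficiency benefit of a higher compression ratio can be seen in the ideal Otto-cycle relation, efficiency = 1 - 1/r^(gamma - 1). The short calculation below is a textbook illustration rather than the authors' engine model, and the value of gamma is an assumption.

    ```python
    # Ideal Otto-cycle efficiency versus compression ratio r -- a textbook
    # illustration of why knock resistance (which allows higher r) pays off.
    GAMMA = 1.35  # effective ratio of specific heats for a fuel-air mixture (assumed)

    def otto_efficiency(compression_ratio, gamma=GAMMA):
        return 1.0 - compression_ratio ** (1.0 - gamma)

    for r in (9, 11, 13, 15):
        print(f"r = {r:2d}: ideal efficiency = {otto_efficiency(r):.1%}")
    ```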

    Q. What are the major challenges to implementing your technology in California?

    A. California has always been the pioneer in air pollutant control, with states such as Washington, Oregon, and New York often following suit. As the most populous state, California has a lot of sway — it’s a trendsetter. What happens in California has an impact on the rest of the United States.

    The main challenge to implementation of our technology is the argument that a better internal combustion engine technology is not needed because battery-powered HDVs — particularly long-haul trucks — can play the required role in reducing NOx and GHG emissions by 2035. We think that substantial market penetration of battery electric vehicles (BEV) in this vehicle sector will take a considerably longer time. In contrast to light-duty vehicles, there has been very little penetration of battery power into the HDV fleet, especially in long-haul trucks, which are the largest users of diesel fuel. One reason for this is that long-haul trucks using battery power face the challenge of reduced cargo capability due to substantial battery weight. Another challenge is the substantially longer charging time for BEVs compared to that of most present HDVs.

    Hydrogen-powered trucks using fuel cells have also been proposed as an alternative to BEV trucks, which might limit interest in adopting improved internal combustion engines. However, hydrogen-powered trucks face the formidable challenges of producing zero GHG hydrogen at affordable cost, as well as the cost of storage and transportation of hydrogen. At present the high purity hydrogen needed for fuel cells is generally very expensive.

    Q. How does your idea compare overall to battery-powered and hydrogen-powered HDVs? And how will you persuade people that it is an attractive pathway to follow?

    A. Our design uses existing propulsion systems and can operate on existing liquid fuels, and for these reasons, in the near term, it will be economically attractive to the operators of long-haul trucks. In fact, it can even be a lower-cost option than diesel power because of the significantly less-expensive exhaust treatment and smaller-size engines for the same power and torque. This economic attractiveness could enable the large-scale market penetration that is needed to have a substantial impact on reducing air pollution. By comparison, we think it could take at least 20 years longer for BEVs or hydrogen-powered vehicles to reach the same level of market penetration.

    Our approach also uses existing corn-based ethanol, which can provide a greater near-term GHG reduction benefit than battery- or hydrogen-powered long-haul trucks. While the GHG reduction from using existing ethanol would initially be in the 20 percent to 40 percent range, the scale at which the market is penetrated in the near-term could be much greater than for BEV or hydrogen-powered vehicle technology. The overall impact in reducing GHGs could be considerably greater.

    Moreover, we see a migration path beyond 2030 where further reductions in GHG emissions from corn ethanol can be possible through carbon capture and sequestration of the carbon dioxide (CO2) that is produced during ethanol production. In this case, overall CO2 reductions could potentially be 80 percent or more. Technologies for producing ethanol (and methanol, another alcohol fuel) from waste at attractive costs are emerging, and can provide fuel with zero or negative GHG emissions. One pathway for providing a negative GHG impact is through finding alternatives to landfilling for waste disposal, as this method leads to potent methane GHG emissions. A negative GHG impact could also be obtained by converting biomass waste into clean fuel, since the biomass waste can be carbon neutral and CO2 from the production of the clean fuel can be captured and sequestered.

    In addition, our flex-fuel engine technology may be used synergistically as a range extender in plug-in hybrid HDVs, which use limited battery capacity and avoid the reduced cargo capability and fueling disadvantages of long-haul trucks powered by battery alone.

    With the growing threats from air pollution and global warming, our HDV solution is an increasingly important option for near-term reduction of air pollution and offers a faster start in reducing heavy-duty fleet GHG emissions. It also provides an attractive migration path for longer-term, larger GHG reductions from the HDV sector.

  • Researchers design sensors to rapidly detect plant hormones

    Researchers from the Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, and their local collaborators from Temasek Life Sciences Laboratory (TLL) and Nanyang Technological University (NTU), have developed the first-ever nanosensor to enable rapid testing of synthetic auxin plant hormones. The novel nanosensors are safer and less tedious than existing techniques for testing plants’ response to compounds such as herbicide, and can be transformative in improving agricultural production and our understanding of plant growth.

    The scientists designed sensors for two plant hormones — 1-naphthalene acetic acid (NAA) and 2,4-dichlorophenoxyacetic acid (2,4-D) — which are used extensively in the farming industry for regulating plant growth and as herbicides, respectively. Current methods to detect NAA and 2,4-D cause damage to plants, and are unable to provide real-time in vivo monitoring and information.

    Based on the concept of corona phase molecular recognition (CoPhMoRe) pioneered by the Strano Lab at SMART DiSTAP and MIT, the new sensors are able to detect the presence of NAA and 2,4-D in living plants at a swift pace, providing plant information in real time, without causing any harm. The team has successfully tested both sensors on a number of everyday crops including pak choi, spinach, and rice across various planting mediums such as soil, hydroponic, and plant tissue culture.

    Described in a paper titled “Nanosensor Detection of Synthetic Auxins In Planta using Corona Phase Molecular Recognition,” published in the journal ACS Sensors, the research can facilitate more efficient use of synthetic auxins in agriculture and holds tremendous potential to advance the study of plant biology.

    “Our CoPhMoRe technique has previously been used to detect compounds such as hydrogen peroxide and heavy-metal pollutants like arsenic — but this is the first successful case of CoPhMoRe sensors developed for detecting plant phytohormones that regulate plant growth and physiology, such as sprays to prevent premature flowering and dropping of fruits,” says DiSTAP co-lead principal investigator Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT. “This technology can replace current state-of-the-art sensing methods which are laborious, destructive, and unsafe.”

    Of the two sensors developed by the research team, the 2,4-D nanosensor also showed the ability to detect herbicide susceptibility, enabling farmers and agricultural scientists to quickly find out how vulnerable or resistant different plants are to herbicides without the need to monitor crop or weed growth over days. “This could be incredibly beneficial in revealing the mechanism behind how 2,4-D works within plants and why crops develop herbicide resistance,” says DiSTAP and TLL Principal Investigator Rajani Sarojam.

    “Our research can help the industry gain a better understanding of plant growth dynamics and has the potential to completely change how the industry screens for herbicide resistance, eliminating the need to monitor crop or weed growth over days,” says Mervin Chun-Yi Ang, a research scientist at DiSTAP. “It can be applied across a variety of plant species and planting mediums, and could easily be used in commercial setups for rapid herbicide susceptibility testing, such as urban farms.”

    NTU Professor Mary Chan-Park Bee Eng says, “Using nanosensors for in planta detection eliminates the need for extensive extraction and purification processes, which saves time and money. They also use very low-cost electronics, which makes them easily adaptable for commercial setups.”

    The team says their research can lead to future development of real-time nanosensors for other dynamic plant hormones and metabolites in living plants as well.

    The development of the nanosensor, optical detection system, and image processing algorithms for this study was done by SMART, NTU, and MIT, while TLL validated the nanosensors and provided knowledge of plant biology and plant signaling mechanisms. The research is carried out by SMART and supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence and Technological Enterprise (CREATE) program.

    DiSTAP is one of the five interdisciplinary research groups in SMART. The DiSTAP program addresses deep problems in food production in Singapore and the world by developing a suite of impactful and novel analytical, genetic, and biosynthetic technologies. The goal is to fundamentally change how plant biosynthetic pathways are discovered, monitored, engineered, and ultimately translated to meet the global demand for food and nutrients.

    Scientists from MIT, TLL, NTU, and the National University of Singapore (NUS) are collaboratively developing new tools for the continuous measurement of important plant metabolites and hormones for novel discovery, deeper understanding and control of plant biosynthetic pathways in ways not yet possible, especially in the context of green leafy vegetables; leveraging these new techniques to engineer plants with highly desirable properties for global food security, including high yield density production, drought, and pathogen resistance and biosynthesis of high-value commercial products; developing tools for producing hydrophobic food components in industry-relevant microbes; developing novel microbial and enzymatic technologies to produce volatile organic compounds that can protect and/or promote growth of leafy vegetables; and applying these technologies to improve urban farming.

    DiSTAP is led by Michael Strano and Singapore co-lead principal investigator Professor Chua Nam Hai.

    SMART was established by MIT, in partnership with the NRF, in 2007. SMART, the first entity in CREATE, serves as an intellectual and innovation hub for research interactions between MIT and Singapore, undertaking cutting-edge research projects in areas of interest to both. SMART currently comprises an Innovation Center and five interdisciplinary research groups: Antimicrobial Resistance (AMR), Critical Analytics for Manufacturing Personalized-Medicine (CAMP), DiSTAP, Future Urban Mobility (FM), and Low Energy Electronic Systems (LEES). SMART is funded by the NRF.

  • MIT-designed project achieves major advance toward fusion energy

    It was a moment three years in the making, based on intensive research and design work: On Sept. 5, for the first time, a large high-temperature superconducting electromagnet was ramped up to a field strength of 20 tesla, the most powerful magnetic field of its kind ever created on Earth. That successful demonstration helps resolve the greatest uncertainty in the quest to build the world’s first fusion power plant that can produce more power than it consumes, according to the project’s leaders at MIT and startup company Commonwealth Fusion Systems (CFS).

    That advance paves the way, they say, for the long-sought creation of practical, inexpensive, carbon-free power plants that could make a major contribution to limiting the effects of global climate change.

    “Fusion in a lot of ways is the ultimate clean energy source,” says Maria Zuber, MIT’s vice president for research and E. A. Griswold Professor of Geophysics. “The amount of power that is available is really game-changing.” The fuel used to create fusion energy comes from water, and “the Earth is full of water — it’s a nearly unlimited resource. We just have to figure out how to utilize it.”

    Developing the new magnet is seen as the greatest technological hurdle to making that happen; its successful operation now opens the door to demonstrating fusion in a lab on Earth, which has been pursued for decades with limited progress. With the magnet technology now successfully demonstrated, the MIT-CFS collaboration is on track to build the world’s first fusion device that can create and confine a plasma that produces more energy than it consumes. That demonstration device, called SPARC, is targeted for completion in 2025.

    “The challenges of making fusion happen are both technical and scientific,” says Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, which is working with CFS to develop SPARC. But once the technology is proven, he says, “it’s an inexhaustible, carbon-free source of energy that you can deploy anywhere and at any time. It’s really a fundamentally new energy source.”

    Whyte, who is the Hitachi America Professor of Engineering, says this week’s demonstration represents a major milestone, addressing the biggest questions remaining about the feasibility of the SPARC design. “It’s really a watershed moment, I believe, in fusion science and technology,” he says.

    The sun in a bottle

    Fusion is the process that powers the sun: the merger of two small atoms to make a larger one, releasing prodigious amounts of energy. But the process requires temperatures far beyond what any solid material could withstand. To capture the sun’s power source here on Earth, what’s needed is a way of capturing and containing something that hot — 100,000,000 degrees or more — by suspending it in a way that prevents it from coming into contact with anything solid.

    That’s done through intense magnetic fields, which form a kind of invisible bottle to contain the hot swirling soup of protons and electrons, called a plasma. Because the particles have an electric charge, they are strongly controlled by the magnetic fields, and the most widely used configuration for containing them is a donut-shaped device called a tokamak. Most of these devices have produced their magnetic fields using conventional electromagnets made of copper, but the latest and largest version under construction in France, called ITER, uses what are known as low-temperature superconductors.

    The major innovation in the MIT-CFS fusion design is the use of high-temperature superconductors, which enable a much stronger magnetic field in a smaller space. The design was made possible by a new kind of superconducting material that became commercially available a few years ago. The idea first arose as a project in a nuclear engineering class taught by Whyte, and it seemed so promising that it continued to be developed over the next few iterations of that class, leading to the ARC power plant design concept in early 2015. SPARC, designed to be about half the size of ARC, is a testbed to prove the concept before construction of the full-size, power-producing plant.

    Until now, the only way to achieve the colossally powerful magnetic fields needed to create a magnetic “bottle” capable of containing plasma heated to hundreds of millions of degrees was to build ever-larger magnets, and ever-larger machines around them. But the new high-temperature superconductor material, made in the form of a flat, ribbon-like tape, makes it possible to achieve a higher magnetic field in a smaller device, equaling the performance that would be achieved in an apparatus 40 times larger in volume using conventional low-temperature superconducting magnets. That leap in power versus size is the key element in ARC’s revolutionary design.
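
    A back-of-the-envelope way to see where a factor like that comes from (an illustrative scaling argument, not a figure taken from the SPARC design papers): for a tokamak operating at a fixed ratio of plasma pressure to magnetic pressure, fusion power density grows roughly as the fourth power of the field strength, so matching a given fusion power requires

        B_{\text{low}}^{4}\, V_{\text{large}} \approx B_{\text{high}}^{4}\, V_{\text{small}}
        \quad\Longrightarrow\quad
        \frac{V_{\text{large}}}{V_{\text{small}}} \approx \left(\frac{B_{\text{high}}}{B_{\text{low}}}\right)^{4}.

    Under that scaling, a field roughly 2.5 times stronger matches the performance of a machine about 2.5^4 ≈ 40 times larger in volume, which is the kind of trade the ribbon-like tape makes possible.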

    The use of the new high-temperature superconducting magnets makes it possible to apply decades of experimental knowledge gained from the operation of tokamak experiments, including MIT’s own Alcator series. The new approach, led by Zach Hartwig, the MIT principal investigator and the Robert N. Noyce Career Development Assistant Professor of Nuclear Science and Engineering, uses a well-known design but scales everything down to about half the linear size and still achieves the same operational conditions because of the higher magnetic field.

    A series of scientific papers published last year outlined the physical basis and, by simulation, confirmed the viability of the new fusion device. The papers showed that, if the magnets worked as expected, the whole fusion system should indeed produce net power output, for the first time in decades of fusion research.

    Martin Greenwald, deputy director and senior research scientist at the PSFC, says unlike some other designs for fusion experiments, “the niche that we were filling was to use conventional plasma physics, and conventional tokamak designs and engineering, but bring to it this new magnet technology. So, we weren’t requiring innovation in a half-dozen different areas. We would just innovate on the magnet, and then apply the knowledge base of what’s been learned over the last decades.”

    That combination of scientifically established design principles and game-changing magnetic field strength is what makes it possible to achieve a plant that could be economically viable and developed on a fast track. “It’s a big moment,” says Bob Mumgaard, CEO of CFS. “We now have a platform that is both scientifically very well-advanced, because of the decades of research on these machines, and also commercially very interesting. What it does is allow us to build devices faster, smaller, and at less cost,” he says of the successful magnet demonstration. 

    Proof of the concept

    Bringing that new magnet concept to reality required three years of intensive work on design, establishing supply chains, and working out manufacturing methods for magnets that may eventually need to be produced by the thousands.

    “We built a first-of-a-kind superconducting magnet. It required a lot of work to create unique manufacturing processes and equipment. As a result, we are now well prepared to ramp up for SPARC production,” says Joy Dunn, head of operations at CFS. “We started with a physics model and a CAD design, and worked through lots of development and prototypes to turn a design on paper into this actual physical magnet.” That entailed building manufacturing capabilities and testing facilities, including an iterative process with multiple suppliers of the superconducting tape, to help them develop the ability to produce material that met the needed specifications; CFS is now overwhelmingly the world’s biggest user of that tape.

    They worked with two possible magnet designs in parallel, both of which ended up meeting the design requirements, she says. “It really came down to which one would revolutionize the way that we make superconducting magnets, and which one was easier to build.” The design they adopted clearly stood out in that regard, she says.

    In this test, the new magnet was gradually powered up in a series of steps until reaching the goal of a 20 tesla magnetic field — the highest field strength ever for a high-temperature superconducting fusion magnet. The magnet is composed of 16 plates stacked together, each one of which by itself would be the most powerful high-temperature superconducting magnet in the world.
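
    As a purely illustrative sketch of what a step-and-hold power-up can look like in software (this is not CFS’s control system; the coil constant, step size, and hold time below are invented, and field is assumed proportional to coil current, which is a simplification), a plan of plateaus toward a 20-tesla target might be generated like this:

        # Illustrative step-and-hold ramp toward a 20 T target field.
        # All numbers are invented; field is assumed proportional to
        # coil current, which is a simplification.
        TARGET_FIELD_T = 20.0  # goal field strength, tesla
        T_PER_KA = 0.5         # hypothetical coil constant, tesla per kiloamp
        STEP_T = 2.5           # hypothetical field increment per step
        HOLD_MIN = 30          # hypothetical dwell time at each plateau, minutes

        def ramp_plan(target_t=TARGET_FIELD_T, step_t=STEP_T):
            """Return (field_T, current_kA, hold_min) tuples for each plateau."""
            plan, field = [], 0.0
            while field < target_t:
                field = min(field + step_t, target_t)
                plan.append((field, field / T_PER_KA, HOLD_MIN))
            return plan

        for field, current, hold in ramp_plan():
            print(f"hold at {field:4.1f} T (~{current:5.1f} kA) for {hold} min")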

    “Three years ago we announced a plan,” says Mumgaard, “to build a 20-tesla magnet, which is what we will need for future fusion machines.” That goal has now been achieved, right on schedule, even with the pandemic, he says.

    Citing the series of physics papers published last year, Brandon Sorbom, the chief science officer at CFS, says “basically the papers conclude that if we build the magnet, all of the physics will work in SPARC. So, this demonstration answers the question: Can they build the magnet? It’s a very exciting time! It’s a huge milestone.”

    The next step will be building SPARC, a smaller-scale version of the planned ARC power plant. The successful operation of SPARC will demonstrate that a full-scale commercial fusion power plant is practical, clearing the way for the rapid design and construction of that pioneering device to proceed at full speed.

    Zuber says that “I now am genuinely optimistic that SPARC can achieve net positive energy, based on the demonstrated performance of the magnets. The next step is to scale up, to build an actual power plant. There are still many challenges ahead, not the least of which is developing a design that allows for reliable, sustained operation. And realizing that the goal here is commercialization, another major challenge will be economic. How do you design these power plants so it will be cost effective to build and deploy them?”

    Someday in a hoped-for future, when there may be thousands of fusion plants powering clean electric grids around the world, Zuber says, “I think we’re going to look back and think about how we got there, and I think the demonstration of the magnet technology, for me, is the time when I believed that, wow, we can really do this.”

    The successful creation of a power-producing fusion device would be a tremendous scientific achievement, Zuber notes. But that’s not the main point. “None of us are trying to win trophies at this point. We’re trying to keep the planet livable.”