More stories

  • Research collaboration puts climate-resilient crops in sight

    Any houseplant owner knows that changes in the amount of water or sunlight a plant receives can put it under immense stress. A dying plant brings certain disappointment to anyone with a green thumb. 

    But for farmers who make their living by successfully growing plants, and whose crops may nourish hundreds or thousands of people, the devastation of failing flora is that much greater. As climate change is poised to cause increasingly unpredictable weather patterns globally, crops may be subject to more extreme environmental conditions like droughts, fluctuating temperatures, floods, and wildfires.

    Climate scientists and food systems researchers worry about the stress climate change may put on crops, and on global food security. In an ambitious interdisciplinary project funded by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), David Des Marais, the Gale Assistant Professor in the Department of Civil and Environmental Engineering at MIT, and Caroline Uhler, an associate professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, are investigating how plant genes communicate with one another under stress. Their research results can be used to breed plants more resilient to climate change.

    Crops in trouble

    Governing plants’ responses to environmental stress are gene regulatory networks, or GRNs, which guide the development and behaviors of living things. A GRN may comprise thousands of genes and proteins that all communicate with one another. GRNs help a particular cell, tissue, or organism respond to environmental changes by signaling certain genes to turn their expression on or off.

    Even seemingly minor or short-term changes in weather patterns can have large effects on crop yield and food security. An environmental trigger, like a lack of water during a crucial phase of plant development, can turn a gene on or off, and is likely to affect many others in the GRN. For example, without water, a gene enabling photosynthesis may switch off. This can create a domino effect, where the genes that rely on those regulating photosynthesis are silenced, and the cycle continues. As a result, when photosynthesis is halted, the plant may experience other detrimental side effects, like no longer being able to reproduce or defend against pathogens. The chain reaction could even kill a plant before it has the chance to be revived by a big rain.
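
    A gene regulatory network can be pictured as a directed graph in which each gene switches on or off depending on the state of its regulators. The toy sketch below is purely illustrative (the gene names and wiring are hypothetical and vastly simpler than a real GRN of thousands of genes), but it shows how removing a single input, water, can cascade through such a network.

    ```python
    # Toy Boolean gene-regulatory-network cascade (illustrative only; real GRNs
    # involve thousands of genes and far more complex regulation).

    # Hypothetical wiring: each gene is ON only if all of its listed regulators are ON.
    regulators = {
        "photosynthesis_gene": ["water_signal"],
        "sugar_metabolism_gene": ["photosynthesis_gene"],
        "reproduction_gene": ["sugar_metabolism_gene"],
        "pathogen_defense_gene": ["sugar_metabolism_gene"],
    }

    def propagate(state, regulators):
        """Repeatedly update gene states until nothing changes (a fixed point)."""
        state = dict(state)
        changed = True
        while changed:
            changed = False
            for gene, regs in regulators.items():
                new_value = all(state.get(r, False) for r in regs)
                if state.get(gene) != new_value:
                    state[gene] = new_value
                    changed = True
        return state

    genes_on = {gene: True for gene in regulators}
    print(propagate({**genes_on, "water_signal": True}, regulators))   # everything stays ON
    print(propagate({**genes_on, "water_signal": False}, regulators))  # photosynthesis and every downstream gene switch OFF
    ```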

    Des Marais says he wishes there was a way to stop those genes from completely shutting off in such a situation. To do that, scientists would need to better understand how exactly gene networks respond to different environmental triggers. Bringing light to this molecular process is exactly what he aims to do in this collaborative research effort.

    Solving complex problems across disciplines

    Despite their crucial importance, GRNs are difficult to study because of how complex and interconnected they are. Usually, to understand how a particular gene is affecting others, biologists must silence one gene and see how the others in the network respond. 

    For years, scientists have aspired to an algorithm that could synthesize the massive amount of information contained in GRNs to “identify correct regulatory relationships among genes,” according to a 2019 article in the Encyclopedia of Bioinformatics and Computational Biology. 

    “A GRN can be seen as a large causal network, and understanding the effects that silencing one gene has on all other genes requires understanding the causal relationships among the genes,” says Uhler. “These are exactly the kinds of algorithms my group develops.”

    Des Marais and Uhler’s project aims to unravel these complex communication networks and discover how to breed crops that are more resilient to the increased droughts, flooding, and erratic weather patterns that climate change is already causing globally.

    Climate change is not the only pressure: by 2050, the world is projected to demand 70 percent more food to feed a booming population. “Food systems challenges cannot be addressed individually in disciplinary or topic area silos,” says Greg Sixt, J-WAFS’ research manager for climate and food systems. “They must be addressed in a systems context that reflects the interconnected nature of the food system.”

    Des Marais’ background is in biology, and Uhler’s in statistics. “Dave’s project with Caroline was essentially experimental,” says Renee J. Robins, J-WAFS’ executive director. “This kind of exploratory research is exactly what the J-WAFS seed grant program is for.”

    Getting inside gene regulatory networks

    Des Marais and Uhler’s work begins in a windowless basement on MIT’s campus, where 300 genetically identical Brachypodium distachyon plants grow in large, temperature-controlled chambers. The plant, which contains more than 30,000 genes, is a good model for studying important cereal crops like wheat, barley, maize, and millet. For three weeks, all plants receive the same temperature, humidity, light, and water. Then, half are slowly tapered off water, simulating drought-like conditions.

    Six days into the forced drought, the plants are clearly suffering. Des Marais’ PhD student Jie Yun takes tissues from 50 hydrated and 50 dry plants, freezes them in liquid nitrogen to immediately halt metabolic activity, grinds them up into a fine powder, and chemically separates the genetic material. The genes from all 100 samples are then sequenced at a lab across the street.

    The team is left with a spreadsheet listing the 30,000 genes found in each of the 100 plants at the moment they were frozen, and how many copies of each were present. Uhler’s PhD student Anastasiya Belyaeva inputs the massive spreadsheet into the computer program she developed and runs her novel algorithm. Within a few hours, the group can see which genes were most active in one condition over another, how the genes were communicating, and which were causing changes in others.
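
    The article does not detail the algorithm’s internals, but the first output it describes, which genes were most active in one condition versus the other, is essentially a differential-expression comparison. The sketch below illustrates only that step, using synthetic counts in place of the real sequencing data; the causal-network inference in Belyaeva’s analysis is far more involved and is not reproduced here.

    ```python
    # Minimal differential-expression sketch (illustrative; not the team's
    # causal-inference algorithm). Compares per-gene counts between watered
    # ("wet") and droughted ("dry") plants, using synthetic data.
    import numpy as np
    import pandas as pd
    from scipy import stats

    rng = np.random.default_rng(0)
    genes = [f"gene_{i}" for i in range(1000)]   # stand-in for ~30,000 genes
    wet = pd.DataFrame(rng.poisson(100, (1000, 50)), index=genes,
                       columns=[f"wet_{j}" for j in range(50)])
    dry = pd.DataFrame(rng.poisson(100, (1000, 50)), index=genes,
                       columns=[f"dry_{j}" for j in range(50)])
    dry.iloc[:20] = rng.poisson(40, (20, 50))    # pretend 20 genes are suppressed by drought

    # Log2 fold change (with a pseudocount) and a two-sample t-test per gene.
    log2_fc = np.log2(dry.mean(axis=1) + 1) - np.log2(wet.mean(axis=1) + 1)
    t_stat, p_val = stats.ttest_ind(dry, wet, axis=1)

    results = pd.DataFrame({"log2_fold_change": log2_fc, "p_value": p_val})
    print(results.sort_values("p_value").head(10))  # most clearly drought-responsive genes
    ```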

    The methodology captures important subtleties that could allow researchers to eventually alter gene pathways and breed more resilient crops. “When you expose a plant to drought stress, it’s not like there’s some canonical response,” Des Marais says. “There’s lots of things going on. It’s turning this physiologic process up, this one down, this one didn’t exist before, and now suddenly is turned on.” 

    In addition to Des Marais and Uhler’s research, J-WAFS has funded projects in food and water from researchers in 29 departments across all five MIT schools as well as the MIT Schwarzman College of Computing. J-WAFS seed grants typically fund seven to eight new projects every year.

    “The grants are really aimed at catalyzing new ideas, providing the sort of support [for MIT researchers] to be pushing boundaries, and also bringing in faculty who may have some interesting ideas that they haven’t yet applied to water or food concerns,” Robins says. “It’s an avenue for researchers all over the Institute to apply their ideas to water and food.”

    Alison Gold is a student in MIT’s Graduate Program in Science Writing.

  • MIT appoints members of new faculty committee to drive climate action plan

    In May, responding to the world’s accelerating climate crisis, MIT issued an ambitious new plan, “Fast Forward: MIT’s Climate Action Plan for the Decade.” The plan outlines a broad array of new and expanded initiatives across campus to build on the Institute’s longstanding climate work.

    Now, to unite these varied climate efforts, maximize their impact, and identify new ways for MIT to contribute climate solutions, the Institute has appointed more than a dozen faculty members to a new committee established by the Fast Forward plan, named the Climate Nucleus.

    The committee includes leaders of a number of climate- and energy-focused departments, labs, and centers that have significant responsibilities under the plan. Its membership spans all five schools and the MIT Schwarzman College of Computing. Professors Noelle Selin and Anne White have agreed to co-chair the Climate Nucleus for a term of three years.

    “I am thrilled and grateful that Noelle and Anne have agreed to step up to this important task,” says Maria T. Zuber, MIT’s vice president for research. “Under their leadership, I’m confident that the Climate Nucleus will bring new ideas and new energy to making the strategy laid out in the climate action plan a reality.”

    The Climate Nucleus has broad responsibility for the management and implementation of the Fast Forward plan across its five areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.

    Over the next few years, the nucleus will aim to advance MIT’s contribution to a two-track approach to decarbonizing the global economy, an approach described in the Fast Forward plan. First, humanity must go as far and as fast as it can to reduce greenhouse gas emissions using existing tools and methods. Second, societies need to invest in, invent, and deploy new tools — and promote new institutions and policies — to get the global economy to net-zero emissions by mid-century.

    The co-chairs of the nucleus bring significant climate and energy expertise, along with deep knowledge of the MIT community, to their task.

    Selin is a professor with joint appointments in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences. She is also the director of the Technology and Policy Program. She began at MIT in 2007 as a postdoc with the Center for Global Change Science and the Joint Program on the Science and Policy of Global Change. Her research uses modeling to inform decision-making on air pollution, climate change, and hazardous substances.

    “Climate change affects everything we do at MIT. For the new climate action plan to be effective, the Climate Nucleus will need to engage the entire MIT community and beyond, including policymakers as well as people and communities most affected by climate change,” says Selin. “I look forward to helping to guide this effort.”

    White is the School of Engineering’s Distinguished Professor of Engineering and the head of the Department of Nuclear Science and Engineering. She joined the MIT faculty in 2009 and has also served as the associate director of MIT’s Plasma Science and Fusion Center. Her research focuses on assessing and refining the mathematical models used in the design of fusion energy devices, such as tokamaks, which hold promise for delivering limitless zero-carbon energy.

    “The latest IPCC report underscores the fact that we have no time to lose in decarbonizing the global economy quickly. This is a problem that demands we use every tool in our toolbox — and develop new ones — and we’re committed to doing that,” says White, referring to an August 2021 report from the Intergovernmental Panel on Climate Change, a UN climate science body, that found that climate change has already affected every region on Earth and is intensifying. “We must train future technical and policy leaders, expand opportunities for students to work on climate problems, and weave sustainability into every one of MIT’s activities. I am honored to be a part of helping foster this Institute-wide collaboration.”

    A first order of business for the Climate Nucleus will be standing up three working groups to address specific aspects of climate action at MIT: climate education, climate policy, and MIT’s own carbon footprint. The working groups will be responsible for making progress on their particular areas of focus under the plan and will make recommendations to the nucleus on ways of increasing MIT’s effectiveness and impact. The working groups will also include student, staff, and alumni members, so that the entire MIT community has the opportunity to contribute to the plan’s implementation.  

    The nucleus, in turn, will report and make regular recommendations to the Climate Steering Committee, a senior-level team consisting of Zuber; Richard Lester, the associate provost for international activities; Glen Shor, the executive vice president and treasurer; and the deans of the five schools and the MIT Schwarzman College of Computing. The new plan created the Climate Steering Committee to ensure that climate efforts will receive both the high-level attention and the resources needed to succeed.

    Together the new committees and working groups are meant to form a robust new infrastructure for uniting and coordinating MIT’s climate action efforts in order to maximize their impact. They replace the Climate Action Advisory Committee, which was created in 2016 following the release of MIT’s first climate action plan.

    In addition to Selin and White, the members of the Climate Nucleus are:

    Bob Armstrong, professor in the Department of Chemical Engineering and director of the MIT Energy Initiative;
    Dara Entekhabi, professor in the departments of Civil and Environmental Engineering and Earth, Atmospheric and Planetary Sciences;
    John Fernández, professor in the Department of Architecture and director of the Environmental Solutions Initiative;
    Stefan Helmreich, professor in the Department of Anthropology;
    Christopher Knittel, professor in the MIT Sloan School of Management and director of the Center for Energy and Environmental Policy Research;
    John Lienhard, professor in the Department of Mechanical Engineering and director of the Abdul Latif Jameel Water and Food Systems Lab;
    Julie Newman, director of the Office of Sustainability and lecturer in the Department of Urban Studies and Planning;
    Elsa Olivetti, professor in the Department of Materials Science and Engineering and co-director of the Climate and Sustainability Consortium;
    Christoph Reinhart, professor in the Department of Architecture and director of the Building Technology Program;
    John Sterman, professor in the MIT Sloan School of Management and director of the Sloan Sustainability Initiative;
    Rob van der Hilst, professor and head of the Department of Earth, Atmospheric and Planetary Sciences; and
    Chris Zegras, professor and head of the Department of Urban Studies and Planning.

  • Concrete’s role in reducing building and pavement emissions

    Encountering concrete is a common, even routine, occurrence. And that’s exactly what makes concrete exceptional.

    As the most consumed material after water, concrete is indispensable to the many essential systems — from roads to buildings — in which it is used.

    But due to its extensive use, concrete production also accounts for around 1 percent of emissions in the United States and remains one of several carbon-intensive industries globally. Tackling climate change, then, will mean reducing the environmental impacts of concrete, even as its use continues to increase.

    In a new paper in the Proceedings of the National Academy of Sciences, a team of current and former researchers at the MIT Concrete Sustainability Hub (CSHub) outlines how this can be achieved.

    They present an extensive life-cycle assessment of the building and pavements sectors that estimates how greenhouse gas (GHG) reduction strategies — including those for concrete and cement — could minimize the cumulative emissions of each sector and how those reductions would compare to national GHG reduction targets. 

    The team found that, if reduction strategies were implemented, the emissions for pavements and buildings between 2016 and 2050 could fall by up to 65 percent and 57 percent, respectively, even if concrete use accelerated greatly over that period. These are close to U.S. reduction targets set as part of the Paris Climate Accords. The solutions considered would also enable concrete production for both sectors to attain carbon neutrality by 2050.

    Despite continued grid decarbonization and increases in fuel efficiency, they found that the vast majority of the GHG emissions from new buildings and pavements during this period would derive from operational energy consumption rather than so-called embodied emissions — emissions from materials production and construction.

    Sources and solutions

    The consumption of concrete, due to its versatility, durability, constructability, and role in economic development, has been projected to increase around the world.

    While it is essential to consider the embodied impacts of ongoing concrete production, it is equally essential to place these initial impacts in the context of the material’s life cycle.

    Due to concrete’s unique attributes, it can influence the long-term sustainability performance of the systems in which it is used. Concrete pavements, for instance, can reduce vehicle fuel consumption, while concrete structures can endure hazards without needing energy- and materials-intensive repairs.

    Concrete’s impacts, then, are as complex as the material itself — a carefully proportioned mixture of cement powder, water, sand, and aggregates. Untangling concrete’s contribution to the operational and embodied impacts of buildings and pavements is essential for planning GHG reductions in both sectors.
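
    One way to see why that untangling matters is a back-of-the-envelope life-cycle tally: embodied emissions are incurred once, up front, while operational emissions accrue every year against a grid that is gradually getting cleaner. The numbers in the sketch below are made up purely to illustrate the bookkeeping and are not taken from the paper.

    ```python
    # Illustrative life-cycle GHG bookkeeping for one building (hypothetical
    # numbers, not the CSHub model): embodied emissions happen once, operational
    # emissions accrue annually against a decarbonizing grid.

    EMBODIED_TCO2 = 400.0        # one-time emissions from materials and construction
    ANNUAL_ENERGY_MWH = 250.0    # electricity use per year
    GRID_TCO2_PER_MWH_2016 = 0.45
    ANNUAL_GRID_DECARB = 0.03    # assume the grid gets 3% cleaner each year

    operational = 0.0
    for year in range(2016, 2051):
        factor = GRID_TCO2_PER_MWH_2016 * (1 - ANNUAL_GRID_DECARB) ** (year - 2016)
        operational += ANNUAL_ENERGY_MWH * factor

    total = EMBODIED_TCO2 + operational
    print(f"Embodied:    {EMBODIED_TCO2:8.0f} t CO2  ({EMBODIED_TCO2 / total:.0%} of total)")
    print(f"Operational: {operational:8.0f} t CO2  ({operational / total:.0%} of total)")
    ```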

    Set of scenarios

    In their paper, CSHub researchers forecast the potential greenhouse gas emissions from the building and pavements sectors as numerous emissions reduction strategies were introduced between 2016 and 2050.

    Since both of these sectors are immense and rapidly evolving, modeling them required an intricate framework.

    “We don’t have details on every building and pavement in the United States,” explains Randolph Kirchain, a research scientist at the Materials Research Laboratory and co-director of CSHub.

    “As such, we began by developing reference designs, which are intended to be representative of current and future buildings and pavements. These were adapted to be appropriate for 14 different climate zones in the United States and then distributed across the U.S. based on data from the U.S. Census and the Federal Highway Administration.”

    To reflect the complexity of these systems, their models had to have the highest resolutions possible.

    “In the pavements sector, we collected the current stock of the U.S. network based on high-precision 10-mile segments, along with the surface conditions, traffic, thickness, lane width, and number of lanes for each segment,” says Hessam AzariJafari, a postdoc at CSHub and a co-author on the paper.

    “To model future paving actions over the analysis period, we assumed four climate conditions; four road types; asphalt, concrete, and composite pavement structures; as well as major, minor, and reconstruction paving actions specified for each climate condition.”

    Using this framework, they analyzed a “projected” and an “ambitious” scenario of reduction strategies and system attributes for buildings and pavements over the 34-year analysis period. The scenarios were defined by the timing and intensity of GHG reduction strategies.

    As its name might suggest, the projected scenario reflected current trends. For the building sector, solutions encompassed expected grid decarbonization and improvements to building codes and energy efficiency that are currently being implemented across the country. For pavements, the sole projected solution was improvements to vehicle fuel economy. That’s because as vehicle efficiency continues to increase, excess vehicle emissions due to poor road quality will also decrease.

    Both the projected scenarios for buildings and pavements featured the gradual introduction of low-carbon concrete strategies, such as recycled content, carbon capture in cement production, and the use of captured carbon to produce aggregates and cure concrete.

    “In the ambitious scenario,” explains Kirchain, “we went beyond projected trends and explored reasonable changes that exceed current policies and [industry] commitments.”

    Here, the building sector strategies were the same, but implemented more aggressively. The pavements sector also abided by more aggressive targets and incorporated several novel strategies, including investing more to yield smoother roads, selectively applying concrete overlays to produce stiffer pavements, and introducing more reflective pavements — which can change the Earth’s energy balance by sending more energy out of the atmosphere.

    Results

    As the grid becomes greener and new homes and buildings become more efficient, many experts have predicted that the operational impacts of new construction projects will shrink in comparison to their embodied emissions.

    “What our life-cycle assessment found,” says Jeremy Gregory, the executive director of the MIT Climate Consortium and the lead author on the paper, “is that [this prediction] isn’t necessarily the case.”

    “Instead, we found that more than 80 percent of the total emissions from new buildings and pavements between 2016 and 2050 would derive from their operation.”

    In fact, the study found that operations will create the majority of emissions through 2050 unless all energy sources — electrical and thermal — are carbon-neutral by 2040. This suggests that ambitious interventions to the electricity grid and other sources of operational emissions can have the greatest impact.

    Their predictions for emissions reductions generated additional insights.  

    For the building sector, they found that the projected scenario would lead to a reduction of 49 percent compared to 2016 levels, and that the ambitious scenario provided a 57 percent reduction.

    As most buildings during the analysis period were existing rather than new, energy consumption dominated emissions in both scenarios. Consequently, decarbonizing the electricity grid and improving the efficiency of appliances and lighting led to the greatest improvements for buildings, they found.

    In contrast to the building sector, the pavements scenarios had a sizeable gulf between outcomes: the projected scenario led to only a 14 percent reduction while the ambitious scenario had a 65 percent reduction — enough to meet U.S. Paris Accord targets for that sector. This gulf derives from the lack of GHG reduction strategies being pursued under current projections.

    “The gap between the pavement scenarios shows that we need to be more proactive in managing the GHG impacts from pavements,” explains Kirchain. “There is tremendous potential, but seeing those gains requires action now.”

    These gains from both ambitious scenarios could occur even as concrete use tripled over the analysis period in comparison to the projected scenarios — a reflection of not only concrete’s growing demand but its potential role in decarbonizing both sectors.

    Though only one of their reduction scenarios (the ambitious pavement scenario) met the Paris Accord targets, that doesn’t preclude the achievement of those targets: many other opportunities exist.

    “In this study, we focused on mainly embodied reductions for concrete,” explains Gregory. “But other construction materials could receive similar treatment.

    “Further reductions could also come from retrofitting existing buildings and by designing structures with durability, hazard resilience, and adaptability in mind in order to minimize the need for reconstruction.”

    This study addresses a paradox in the field of sustainability. For the world to become more equitable, more development is necessary. And yet, that very same development may portend greater emissions.

    The MIT team found that isn’t necessarily the case. Even as America continues to use more concrete, the benefits of the material itself and the interventions made to it can make climate targets more achievable.

    The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

  • 3 Questions: Daniel Cohn on the benefits of high-efficiency, flexible-fuel engines for heavy-duty trucking

    The California Air Resources Board has adopted a regulation that requires truck and engine manufacturers to reduce the nitrogen oxide (NOx) emissions from new heavy-duty trucks by 90 percent starting in 2027. NOx from heavy-duty trucks is one of the main sources of air pollution, creating smog and threatening respiratory health. This regulation requires the largest air pollution cuts in California in more than a decade. How can manufacturers achieve this aggressive goal efficiently and affordably?

    Daniel Cohn, a research scientist at the MIT Energy Initiative, and Leslie Bromberg, a principal research scientist at the MIT Plasma Science and Fusion Center, have been working on a high-efficiency, gasoline-ethanol engine that is cleaner and more cost-effective than existing diesel engine technologies. Here, Cohn explains the flexible-fuel engine approach and why it may be the most realistic solution — in the near term — to help California meet its stringent vehicle emission reduction goals. The research was sponsored by the Arthur Samberg MIT Energy Innovation fund.

    Q. How does your high-efficiency, flexible-fuel gasoline engine technology work?

    A. Our goal is to provide an affordable solution for heavy-duty vehicle (HDV) engines to emit low levels of nitrogen oxide (NOx) emissions that would meet California’s NOx regulations, while also quick-starting gasoline-consumption reductions in a substantial fraction of the HDV fleet.

    Presently, large trucks and other HDVs generally use diesel engines. The main reason is their high efficiency, which reduces fuel cost — a key factor for commercial trucks (especially long-haul trucks) because of the large number of miles driven. However, the NOx emissions from these diesel-powered vehicles are around 10 times greater than those from spark-ignition engines powered by gasoline or ethanol.

    Spark-ignition gasoline engines are primarily used in cars and light trucks (light-duty vehicles), which employ a three-way catalyst exhaust treatment system (generally referred to as a catalytic converter) that reduces vehicle NOx emissions by at least 98 percent and at a modest cost. The use of this highly effective exhaust treatment system is enabled by the capability of spark-ignition engines to be operated at a stoichiometric air/fuel ratio (where the amount of air matches what is needed for complete combustion of the fuel).
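
    For readers unfamiliar with the term, the stoichiometric air/fuel ratio follows directly from the combustion chemistry. The sketch below computes it for ethanol and for octane, a common textbook stand-in for gasoline; the roughly 9:1 and 15:1 mass ratios it prints are standard reference values, not figures from this interview.

    ```python
    # Stoichiometric air/fuel ratio from combustion chemistry (textbook
    # illustration; octane is used here as a simple stand-in for gasoline).

    O2_MASS_FRACTION_IN_AIR = 0.232  # air is about 23.2% oxygen by mass

    def stoich_afr(fuel_molar_mass_g, moles_o2_per_mole_fuel):
        """Mass of air required per unit mass of fuel for complete combustion."""
        o2_mass = moles_o2_per_mole_fuel * 32.0       # g of O2 per mole of fuel
        air_mass = o2_mass / O2_MASS_FRACTION_IN_AIR  # g of air per mole of fuel
        return air_mass / fuel_molar_mass_g

    # Ethanol: C2H5OH + 3 O2 -> 2 CO2 + 3 H2O
    print(f"ethanol AFR ~ {stoich_afr(46.07, 3.0):.1f} : 1")    # ~9:1
    # Octane:  C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O
    print(f"octane  AFR ~ {stoich_afr(114.23, 12.5):.1f} : 1")  # ~15:1
    ```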

    Diesel engines do not operate with stoichiometric air/fuel ratios, making it much more difficult to reduce NOx emissions. Their state-of-the-art exhaust treatment system is much more complex and expensive than catalytic converters, and even with it, vehicles produce NOx emissions around 10 times higher than spark-ignition engine vehicles. Consequently, it is very challenging for diesel engines to further reduce their NOx emissions to meet the new California regulations.

    Our approach uses spark-ignition engines that can be powered by gasoline, ethanol, or mixtures of gasoline and ethanol as a substitute for diesel engines in HDVs. Gasoline has the attractive feature of being widely available and having a comparable or lower cost than diesel fuel. In addition, presently available ethanol in the U.S. produces up to 40 percent less greenhouse gas (GHG) emissions than diesel fuel or gasoline and has a widely available distribution system.

    To make gasoline- and/or ethanol-powered spark-ignition engine HDVs attractive for widespread HDV applications, we developed ways to make spark-ignition engines more efficient, so their fuel costs are more palatable to owners of heavy-duty trucks. Our approach provides diesel-like high efficiency and high power in gasoline-powered engines by using various methods to prevent engine knock (unwanted self-ignition that can damage the engine) in spark-ignition gasoline engines. This enables greater levels of turbocharging and use of higher engine compression ratios. These features provide high efficiency, comparable to that provided by diesel engines. Plus, when the engine is powered by ethanol, the required knock resistance is provided by the intrinsic high knock resistance of the fuel itself. 
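
    As a rough illustration of why suppressing knock pays off (an idealized textbook relation, not a figure quoted by the researchers), the ideal Otto-cycle efficiency is 1 − 1/r^(γ−1), where r is the compression ratio and γ is the heat-capacity ratio of the working gas; higher compression ratios push the ideal efficiency up.

    ```python
    # Ideal Otto-cycle efficiency vs. compression ratio (textbook idealization;
    # real engine efficiencies are lower, but the trend is the same).
    GAMMA = 1.4  # heat-capacity ratio for air

    def otto_efficiency(compression_ratio, gamma=GAMMA):
        return 1.0 - compression_ratio ** (1.0 - gamma)

    for r in (9, 11, 13, 15):
        print(f"compression ratio {r:2d}: ideal efficiency {otto_efficiency(r):.1%}")
    # Knock-suppression methods let a spark-ignition engine run at the higher
    # compression ratios (and turbocharger boost) that diesel-like efficiency requires.
    ```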

    Q. What are the major challenges to implementing your technology in California?

    A. California has always been the pioneer in air pollutant control, with states such as Washington, Oregon, and New York often following suit. As the most populous state, California has a lot of sway — it’s a trendsetter. What happens in California has an impact on the rest of the United States.

    The main challenge to implementation of our technology is the argument that a better internal combustion engine technology is not needed because battery-powered HDVs — particularly long-haul trucks — can play the required role in reducing NOx and GHG emissions by 2035. We think that substantial market penetration of battery electric vehicles (BEVs) in this vehicle sector will take a considerably longer time. In contrast to light-duty vehicles, there has been very little penetration of battery power into the HDV fleet, especially in long-haul trucks, which are the largest users of diesel fuel. One reason for this is that long-haul trucks using battery power face the challenge of reduced cargo capability due to substantial battery weight. Another challenge is the substantially longer charging time for BEVs compared with the refueling time of most present HDVs.

    Hydrogen-powered trucks using fuel cells have also been proposed as an alternative to BEV trucks, which might limit interest in adopting improved internal combustion engines. However, hydrogen-powered trucks face the formidable challenges of producing zero-GHG hydrogen at affordable cost, as well as the cost of storing and transporting hydrogen. At present, the high-purity hydrogen needed for fuel cells is generally very expensive.

    Q. How does your idea compare overall to battery-powered and hydrogen-powered HDVs? And how will you persuade people that it is an attractive pathway to follow?

    A. Our design uses existing propulsion systems and can operate on existing liquid fuels, and for these reasons, in the near term, it will be economically attractive to the operators of long-haul trucks. In fact, it can even be a lower-cost option than diesel power because of the significantly less-expensive exhaust treatment and smaller-size engines for the same power and torque. This economic attractiveness could enable the large-scale market penetration that is needed to have a substantial impact on reducing air pollution. By contrast, we think it could take at least 20 years longer for BEVs or hydrogen-powered vehicles to reach the same level of market penetration.

    Our approach also uses existing corn-based ethanol, which can provide a greater near-term GHG reduction benefit than battery- or hydrogen-powered long-haul trucks. While the GHG reduction from using existing ethanol would initially be in the 20 percent to 40 percent range, the scale at which the market is penetrated in the near-term could be much greater than for BEV or hydrogen-powered vehicle technology. The overall impact in reducing GHGs could be considerably greater.

    Moreover, we see a migration path beyond 2030 where further reductions in GHG emissions from corn ethanol can be possible through carbon capture and sequestration of the carbon dioxide (CO2) that is produced during ethanol production. In this case, overall CO2 reductions could potentially be 80 percent or more. Technologies for producing ethanol (and methanol, another alcohol fuel) from waste at attractive costs are emerging, and can provide fuel with zero or negative GHG emissions. One pathway for providing a negative GHG impact is through finding alternatives to landfilling for waste disposal, as this method leads to potent methane GHG emissions. A negative GHG impact could also be obtained by converting biomass waste into clean fuel, since the biomass waste can be carbon neutral and CO2 from the production of the clean fuel can be captured and sequestered.

    In addition, our flex-fuel engine technology may be used synergistically as a range extender in plug-in hybrid HDVs, which use limited battery capacity and thus avoid the reduced cargo capability and fueling disadvantages of long-haul trucks powered by battery alone.

    With the growing threats from air pollution and global warming, our HDV solution is an increasingly important option for near-term reduction of air pollution and offers a faster start in reducing heavy-duty fleet GHG emissions. It also provides an attractive migration path for longer-term, larger GHG reductions from the HDV sector.

  • Making catalytic surfaces more active to help decarbonize fuels and chemicals

    Electrochemical reactions that are accelerated using catalysts lie at the heart of many processes for making and using fuels, chemicals, and materials — including storing electricity from renewable energy sources in chemical bonds, an important capability for decarbonizing transportation fuels. Now, research at MIT could open the door to ways of making certain catalysts more active, and thus enhancing the efficiency of such processes.

    A new production process yielded catalysts that increased the efficiency of the chemical reactions fivefold, potentially enabling useful new processes in biochemistry, organic chemistry, environmental chemistry, and electrochemistry. The findings are described today in the journal Nature Catalysis, in a paper by Yang Shao-Horn, an MIT professor of mechanical engineering and of materials science and engineering, and a member of the Research Lab of Electronics (RLE); Tao Wang, a postdoc in RLE; Yirui Zhang, a graduate student in the Department of Mechanical Engineering; and five others.

    The process involves adding a layer of what’s called an ionic liquid in between a gold or platinum catalyst and a chemical feedstock. Catalysts produced with this method could potentially enable much more efficient conversion of hydrogen fuel to power devices such as fuel cells, or more efficient conversion of carbon dioxide into fuels.

    “There is an urgent need to decarbonize how we power transportation beyond light-duty vehicles, how we make fuels, and how we make materials and chemicals,” says Shao-Horn, emphasizing the pressing call to reduce carbon emissions highlighted in the latest IPCC report on climate change. This new approach to enhancing catalytic activity could provide an important step in that direction, she says.

    Using hydrogen in electrochemical devices such as fuel cells is one promising approach to decarbonizing fields such as aviation and heavy-duty vehicles, and the new process may help to make such uses practical. At present, the oxygen reduction reaction that powers such fuel cells is limited by its inefficiency. Previous attempts to improve that efficiency have focused on choosing different catalyst materials or modifying their surface compositions and structure.

    In this research, however, instead of modifying the solid surfaces, the team added a thin layer in between the catalyst and the electrolyte, the active material that participates in the chemical reaction. The ionic liquid layer, they found, regulates the activity of protons that help to increase the rate of the chemical reactions taking place at the interface.

    Because there is a great variety of such ionic liquids to choose from, it’s possible to “tune” proton activity and the reaction rates to match the energetics needed for processes involving proton transfer, which can be used to make fuels and chemicals through reactions with oxygen.

    “The proton activity and the barrier for proton transfer is governed by the ionic liquid layer, and so there’s a great tuneability in terms of catalytic activity for reactions involving proton and electron transfer,” Shao-Horn says. And the effect is produced by a vanishingly thin layer of the liquid, just a few nanometers thick, above which is a much thicker layer of the liquid that is to undergo the reaction.

    “I think this concept is novel and important,” says Wang, the paper’s first author, “because people know the proton activity is important in many electrochemistry reactions, but it’s very challenging to study.” That’s because in a water environment, there are so many interactions between neighboring water molecules involved that it’s very difficult to separate out which reactions are taking place. By using an ionic liquid, whose ions can each only form a single bond with the intermediate material, it became possible to study the reactions in detail, using infrared spectroscopy.

    As a result, Wang says, “Our finding highlights the critical role that interfacial electrolytes, in particular the intermolecular hydrogen bonding, can play in enhancing the activity of the electro-catalytic process. It also provides fundamental insights into proton transfer mechanisms at a quantum mechanical level, which can push the frontiers of knowing how protons and electrons interact at catalytic interfaces.”

    “The work is also exciting because it gives people a design principle for how they can tune the catalysts,” says Zhang. “We need some species right at a ‘sweet spot’ — not too active or too inert — to enhance the reaction rate.”

    With some of these techniques, says Reshma Rao, a recent doctoral graduate from MIT and now a postdoc at Imperial College, London, who is also a co-author of the paper, “we see up to a five-times increase in activity. I think the most exciting part of this research is the way it opens up a whole new dimension in the way we think about catalysis.” The field had hit “a kind of roadblock,” she says, in finding ways to design better materials. By focusing on the liquid layer rather than the surface of the material, “that’s kind of a whole different way of looking at this problem, and opens up a whole new dimension, a whole new axis along which we can change things and optimize some of these reaction rates.”

    The team also included Botao Huang, Bin Cai, and Livia Giordano in MIT’s Research Laboratory of Electronics, and Shi-Gang Sun at Xiamen University in China. The work was supported by the Toyota Research Institute, and used the National Science Foundation’s Extreme Science and Engineering Discovery Environment.

  • Mitigating hazards with vulnerability in mind

    From tropical storms to landslides, the form and frequency of natural hazards vary widely. But the feelings of vulnerability they can provoke are universal.

    Growing up in hazard-prone cities, Ipek Bensu Manav, a civil and environmental engineering PhD candidate with the MIT Concrete Sustainability Hub (CSHub), noticed that this vulnerability was always at the periphery. Today, she’s studying vulnerability, in both its engineering and social dimensions, with the aim of promoting more hazard-resilient communities.

    Her research at CSHub has taken her across the country to attend impactful conferences and allowed her to engage with prominent experts and decision-makers in the realm of resilience. But more fundamentally, it has also taken her beyond the conventional bounds of engineering, reshaping her understanding of the practice.

    From her time in Miami, Florida, and Istanbul, Turkey, Manav is no stranger to natural hazards. Istanbul, which suffered a devastating earthquake in 1999, is predicted to experience an equally violent tremor in the near future, while Miami ranks among the top cities in the U.S. in terms of natural disaster risk due to its vulnerability to hurricanes.

    “Growing up in Miami, I’d always hear about hurricane season on the news,” recounts Manav. “While in Istanbul, there was a constant fear about the next big earthquake. Losing people and [witnessing] those kinds of events instilled in me a desire to tame nature.”

    It was this desire to “push the bounds of what is possible” — and to protect lives in the process — that motivated Manav to study civil engineering at Boğaziçi University. Her studies there affirmed her belief in the formidable power of engineering to “outsmart nature.”

    This, in part, led her to continue her studies at MIT CSHub — a team of interdisciplinary researchers who study how to achieve resilient and sustainable infrastructure. Her role at CSHub has given her the opportunity to study resilience in depth. It has also challenged her understanding of natural disasters — and whether they are “natural” at all.

    “Over the past few decades, some policy choices have increased the risk of experiencing disasters,” explains Manav. “An increasingly popular sentiment among resilience researchers is that natural disasters are not ‘natural,’ but are actually man-made. At CSHub we believe there is an opportunity to do better with the growing knowledge and engineering and policy research.”

    As a part of the CSHub portfolio, Manav’s research looks not just at resilient engineering, but the engineering of resilient communities.

    Her work draws on a metric developed at CSHub known as city texture, which is a measurement of the rectilinearity of a city’s layout. City texture, Manav and her colleagues have found, is a versatile and informative measurement. By capturing a city’s order or disorder, it can predict variations in wind flow — variations currently too computationally intensive for most cities to easily render.  
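
    The article does not spell out how city texture is computed, but one simple way to quantify the rectilinearity of a layout, offered here only as an assumed stand-in rather than the CSHub formulation, is an orientation order parameter: if most building or street orientations cluster around two perpendicular directions, the layout is grid-like; if they are spread evenly, it is disordered.

    ```python
    # Illustrative orientation order parameter for "rectilinearity" (an assumed
    # stand-in, not necessarily the CSHub city-texture metric).
    import numpy as np

    def rectilinearity(azimuths_deg):
        """Return a value in [0, 1]: 1 for a perfect grid (all orientations a
        multiple of 90 degrees apart), near 0 for uniformly scattered ones."""
        theta = np.radians(np.asarray(azimuths_deg))
        # Multiplying by 4 makes orientations 90 degrees apart equivalent.
        return float(np.abs(np.exp(1j * 4.0 * theta).mean()))

    grid_city = [0, 90, 0, 90, 180, 270, 0, 90]                     # grid-like blocks
    organic_city = np.random.default_rng(0).uniform(0, 360, 200)    # irregular layout

    print(f"grid-like layout: {rectilinearity(grid_city):.2f}")     # ~1.0
    print(f"irregular layout: {rectilinearity(organic_city):.2f}")  # near 0
    ```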

    Manav has derived this metric for her native South Florida. A city texture analysis she conducted there found that numerous census tracts could experience wind speeds 50 percent greater than currently predicted. Mitigating these wind variations could lead to some $697 million in savings annually.

    Such enormous hazard losses and the growing threat of climate change have presented her with a new understanding of engineering.

    “With resilience and climate change at the forefront of engineering, the focus has shifted,” she explains, “from defying limits and building impressive structures to making structures that adapt to the changing environment around us.”

    Witnessing this shift has reoriented her relationship with engineering. Rather than viewing it as a distinct science, she has begun to place it in its broader social and political context — and to recognize how those social and political dynamics often determine engineering outcomes.

    “When I started grad school, I often felt ‘Oh this is an engineering problem. I can engineer a solution’,” recounts Manav. “But as I’ve read more about resilience, I’ve realized that it’s just as much a concern of politics and policy as it is of engineering.”

    She attributes her awareness of policy to MIT CSHub’s collaboration with the Portland Cement Association and the Ready Mixed Concrete Research & Education Foundation. The commitment of the concrete and cement industries to resilient construction has exposed her to the myriad policies that dictate the resilience of communities.

    “Spending time with our partners made me realize how much of a policy issue [resilience] is,” she explains. “And working with them has provided me with a seat at the table with the people engaged in resilience.”

    Opportunities for engagement have been plentiful. She has attended numerous conferences and met with leaders in the realm of sustainability and resilience, including the International Code Council (ICC), Smart Home America, and Strengthen Alabama Homes.

    Some opportunities have proven particularly fortuitous. When attending a presentation hosted by the ICC and the National Association for the Advancement of Colored People (NAACP) that highlighted people of color working on building codes, Manav felt inspired to reach out to the presenters. Soon after, she found herself collaborating with them on a policy report on resilience in communities of color.

    “For me, it was a shifting point, going from prophesizing about what we could be doing, to observing what is being done. It was a very humbling experience,” she says. “Having worked in this lab made me feel more comfortable stepping outside of my comfort zone and reaching out.”

    Manav credits this growing confidence to her mentorship at CSHub. More than just providing support, CSHub Co-director Randy Kirchain has routinely challenged her and inspired further growth.

    “There have been countless times that I’ve reached out to him because I was feeling unsure of myself or my ideas,” says Manav. “And he’s offered clarity and assurance.”

    Before her first conference, she recalls Kirchain staying in the office well into the evening to help her practice and hone her presentation. He’s also advocated for her on research projects to ensure that her insight is included and that she receives the credit she deserves. But most of all, he’s been a great person to work with.

    “Randy is a lighthearted, funny, and honest person to be around,” recounts Manav. “He builds in me the confidence to dive straight into whatever task I’m tackling.”

    That current task is related to equity. Inspired by her conversations with members of the NAACP, Manav has introduced a new dimension to her research — social vulnerability.

    In contrast to place vulnerability, which captures the geographical susceptibility to hazards, social vulnerability captures the extent to which residents have the resources to respond to and recover from hazard events. Household income could act as a proxy for these resources, and the spread of household income across geographies and demographics can help derive metrics of place and social vulnerability. And these metrics matter.

    “Selecting different metrics favors different people when distributing hazard mitigation and recovery funds,” explains Manav. “If we’re looking at just the dollar value of losses, then wealthy households with more valuable properties disproportionally benefit. But, conversely, if we look at losses as a percentage of income, we’re going to prioritize low-income households that might not necessarily have the resources to recover.”
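
    A toy comparison, with hypothetical households and numbers chosen purely to illustrate the point in the quote above, shows how the choice of metric reorders who appears hardest hit:

    ```python
    # Toy example of how the loss metric reorders priorities (hypothetical numbers).
    households = [
        # (name, annual income ($), hurricane loss ($))
        ("high-income household", 250_000, 60_000),
        ("middle-income household", 80_000, 30_000),
        ("low-income household", 30_000, 20_000),
    ]

    print("Ranked by dollar losses (favors valuable properties):")
    for name, income, loss in sorted(households, key=lambda h: -h[2]):
        print(f"  {name}: ${loss:,}")

    print("Ranked by losses as a share of income (favors those least able to recover):")
    for name, income, loss in sorted(households, key=lambda h: -h[2] / h[1]):
        print(f"  {name}: {loss / income:.0%} of annual income")
    ```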

    Manav has incorporated metrics of social vulnerability into her city texture loss estimations. The resulting approach could predict unmitigated damage, estimate subsequent hazard losses, and measure the disparate impact of those losses on low-income and socially vulnerable communities.

    Her hope is that this streamlined approach could change how funds are disbursed and give communities the tools to solve the entwined challenges of climate change and equity.

    The city texture work Manav has adopted is quite different from the gravity-defying engineering that drew her to the field. But she’s found that it is often more pragmatic and impactful.

    Rather than mastering the elements, she’s learning how to adapt to them and help others do the same. Solutions to climate change, she’s discovered, demand the collaboration of numerous parties — as well as a willingness to confront one’s own vulnerabilities and make the decision to reach out.

  • J-WAFS announces 2021 Solutions Grants for commercializing water and food technologies

    The Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) recently announced the 2021 J-WAFS Solutions grant recipients. The J-WAFS Solutions program aims to propel MIT water- and food-related research toward commercialization. Grant recipients receive one year of financial support, as well as mentorship, networking, and guidance from industry experts, to begin their journey into the commercial world — whether that be in the form of bringing innovative products to market or launching cutting-edge startup companies. 

    This year, three projects will receive funding across water, food, and agriculture spaces. The winning projects will advance nascent technologies for off-grid refrigeration, portable water filtration, and dairy waste recycling. Each provides an efficient, accessible solution to the respective challenge being addressed.

    Since the start of the J-WAFS Solutions program in 2015, grants have provided instrumental support in creating a number of key MIT startups that focus on major water and food challenges. A 2015-16 grant helped the team behind Via Separations develop their business plan to massively decarbonize industrial separations processes. Other successful J-WAFS Solutions alumni include researchers who created a low-cost water filter made from tree branches and the team that launched the startup Xibus Systems, which is developing a handheld food safety sensor.

    “New technological advances are being made at MIT every day, and J-WAFS Solutions grants provide critical resources and support for these technologies to make it to market so that they can transform our local and global water and food systems,” says J-WAFS Executive Director Renee Robins. “This year’s grant recipients offer innovative tools that will provide more accessible food storage for smallholder farmers in places like Africa, safer drinking water, and a new approach to recycling food waste,” Robins notes. She adds, “J-WAFS is excited to work with these teams, and we look forward to seeing their impact on the water and food sectors.”

    The J-WAFS Solutions program is implemented in collaboration with Community Jameel, the global philanthropic organization founded by Mohammed Jameel ’78, and is supported by the MIT Venture Mentoring Service and the iCorps New England Regional Innovation Node at MIT.

    Mobile evaporative cooling rooms for vegetable preservation

    Food waste is a persistent problem across food systems supply chains, as 30-50 percent of food produced is lost before it reaches the table. The problem is compounded in areas without access to the refrigeration necessary to store food after it is harvested. Hot and dry climates in particular struggle to preserve food before it reaches consumers. A team led by Daniel Frey, faculty director for research at MIT D-Lab and professor of mechanical engineering, has pioneered a new approach to enable farmers to better preserve their produce and improve access to nutritious food in the community. The team includes Leon Glicksman, professor of building technology and mechanical engineering, and Eric Verploegen, a research engineer in MIT D-Lab.

    Instead of relying on traditional refrigeration with high energy and cost requirements, the team is utilizing forced-air evaporative cooling chambers. Their design, based on retrofitting shipping containers, will provide a lower-cost, better-performing solution enabling farmers to chill their produce without access to power. The research team was previously funded by J-WAFS through two different grants in 2019 to develop the off-grid technology in collaboration with researchers at the University of Nairobi and the Collectives for Integrated Livelihood Initiatives (CInI), Jamshedpur. Now, the cooling rooms are ready for pilot testing, which the MIT team will conduct with rural farmers in Kenya and India. The MIT team will deploy and test the storage chambers through collaborations with two Kenyan social enterprises and a nongovernmental organization in Gujarat, India. 

    Off-grid portable ion concentration polarization desalination unit

    Shrinking aquifers, polluted rivers, and increased drought are making fresh drinking water increasingly scarce, driving the need for improved desalination technologies. The water purifiers market, which was $45 billion in 2019, is expected to grow to $90.1 billion in 2025. However, current products on the market are limited in scope, in that they are designed to treat water that is already relatively low in salinity, and do not account for lead contamination or other technical challenges. A better solution is required to ensure access to clean and safe drinking water in the face of water shortages. 

    A team led by Jongyoon Han, professor of biological engineering and electrical engineering at MIT, has developed a portable desalination unit that utilizes an ion concentration polarization process. The compact and lightweight unit has the ability to remove dissolved and suspended solids from brackish water at a rate of one liter per hour, both in installed and remote field settings. The unit was featured in an award-winning video in the 2021 J-WAFS World Water Day Video Competition: MIT Research for a Water Secure Future. The team plans to develop the next-generation prototype of the desalination unit alongside a mass-production strategy and business model.

    Converting dairy industry waste into food and feed ingredients

    One of the trendiest foods in the last decade, Greek yogurt, has a hidden dark side: acid whey. This low-pH, liquid by-product of yogurt production has been a growing problem for producers, as untreated disposal of the whey can pose environmental risks due to its high organic content and acidic odor.

    With an estimated 3 million tons of acid whey generated in the United States each year, MIT researchers saw an opportunity to turn waste into a valuable resource for our food systems. Led by the Willard Henry Dow Professor in Chemical Engineering, Gregory Stephanopoulos, and Anthony J. Sinskey, professor of microbiology, the researchers are utilizing metabolic engineering to turn acid whey into carotenoids, the yellow and orange organic pigments found naturally in carrots, autumn leaves, and salmon. The team is hoping that these carotenoids can be utilized as food supplements or feed additives to make the most of what otherwise would have been wasted.

  • Making the case for hydrogen in a zero-carbon economy

    As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

    “As we move to more and more renewable penetration, this intermittency will make a greater impact on the electric power system,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

    Low- and zero-carbon alternatives to greenhouse-gas emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

    Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic analysis, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

    “Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries — even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

    Adding up the costs

    California serves as a stellar paradigm for a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

    “We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.
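
    LCOE has a standard definition: discounted lifetime costs divided by discounted lifetime generation. The sketch below applies it to a hypothetical peaker-style plant running about 15 percent of the year; all the inputs are illustrative placeholders, not values from the Applied Energy paper.

    ```python
    # Levelized cost of electricity (standard definition), applied to a
    # hypothetical peaker-style plant; all inputs are illustrative placeholders.

    def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
        """Discounted lifetime cost per discounted lifetime MWh ($/MWh)."""
        costs = capex
        energy = 0.0
        for t in range(1, lifetime_years + 1):
            costs += annual_opex / (1 + discount_rate) ** t
            energy += annual_mwh / (1 + discount_rate) ** t
        return costs / energy

    annual_mwh = 100 * 8760 * 0.15   # a 100 MW plant at a ~15% capacity factor
    cost = lcoe(capex=80e6, annual_opex=6e6, annual_mwh=annual_mwh,
                lifetime_years=20, discount_rate=0.07)
    print(f"LCOE: {cost:.0f} $/MWh")
    ```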

    Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

    Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

    But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

    The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

    The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”

    A tool for energy investors

    When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

    A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

    The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

    “As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

    A study group member of MITEI’s soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”

    Funding for this research was provided by MITEI’s Low-Carbon Energy Centers and Future of Storage study.