More stories

  • Using plant biology to address climate change

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the fourth in a five-part series highlighting the most promising concepts to emerge from the competition and the interdisciplinary research teams behind them.

    The impact of our changing climate on agriculture and food security — and how contemporary agriculture contributes to climate change — is at the forefront of MIT’s multidisciplinary project “Revolutionizing agriculture with low-emissions, resilient crops.” The project is one of five flagship winners in the Climate Grand Challenges competition and brings together researchers from the departments of Biology, Biological Engineering, Chemical Engineering, and Civil and Environmental Engineering.

    “Our team’s research seeks to address two connected challenges: first, the need to reduce the greenhouse gas emissions produced by agricultural fertilizer; second, the fact that the yields of many current agricultural crops will decrease, due to the effects of climate change on plant metabolism,” says the project’s faculty lead, Christopher Voigt, the Daniel I.C. Wang Professor in MIT’s Department of Biological Engineering. “We are pursuing six interdisciplinary projects that are each key to our overall goal of developing low-emissions methods for fertilizing plants that are bioengineered to be more resilient and productive in a changing climate.”

    Whitehead Institute members Mary Gehring and Jing-Ke Weng, plant biologists who are also associate professors in MIT’s Department of Biology, will lead two of those projects.

    Promoting crop resilience

    For most of human history, climate change occurred gradually, over hundreds or thousands of years. That pace allowed plants to adapt to variations in temperature, precipitation, and atmospheric composition. However, human-driven climate change has occurred much more quickly, and crop plants have suffered: Crop yields are down in many regions, as is seed protein content in cereal crops.

    “If we want to ensure an abundant supply of nutritious food for the world, we need to develop fundamental mechanisms for bioengineering a wide variety of crop plants that will be both hearty and nutritious in the face of our changing climate,” says Gehring. In her previous work, she has shown that many aspects of plant reproduction and seed development are controlled by epigenetics — that is, by information outside of the DNA sequence. She has been using that knowledge and the research methods she has developed to identify ways to create varieties of seed-producing plants that are more productive and resilient than current food crops.

    But plant biology is complex, and while it is possible to develop plants that integrate robustness-enhancing traits by combining dissimilar parental strains, scientists are still learning how to ensure that the new traits are carried forward from one generation to the next. “Plants that carry the robustness-enhancing traits have ‘hybrid vigor,’ and we believe that the perpetuation of those traits is controlled by epigenetics,” Gehring explains. “Right now, some food crops, like corn, can be engineered to benefit from hybrid vigor, but those traits are not inherited. That’s why farmers growing many of today’s most productive varieties of corn must purchase and plant new batches of seeds each year. Moreover, many important food crops have not yet realized the benefits of hybrid vigor.”

    The project Gehring leads, “Developing Clonal Seed Production to Fix Hybrid Vigor,” aims to enable food crop plants to create seeds that are both more robust and genetically identical to the parent — and thereby able to pass beneficial traits from generation to generation.

    The process of clonal (or asexual) production of seeds that are genetically identical to the maternal parent is called apomixis. Gehring says, “Because apomixis is present in 400 flowering plant species — about 1 percent of flowering plant species — it is probable that genes and signaling pathways necessary for apomixis are already present within crop plants. Our challenge is to tweak those genes and pathways so that the plant switches reproduction from sexual to asexual.”

    The project will leverage the fact that genes and pathways related to autonomous asexual development of the endosperm — a seed’s nutritive tissue — exist in the model plant Arabidopsis thaliana. In previous work on Arabidopsis, Gehring’s lab researched a specific gene that, when misregulated, drives development of an asexual endosperm-like material. “Normally, that seed would not be viable,” she notes. “But we believe that by epigenetic tuning of the expression of additional relevant genes, we will enable the plant to retain that material — and help achieve apomixis.”

    If Gehring and her colleagues succeed in creating a gene-expression “formula” for introducing endosperm apomixis into a wide range of crop plants, they will have made a fundamental and important achievement. Such a method could be applied throughout agriculture to create and perpetuate new crop breeds able to withstand their changing environments while requiring less fertilizer and fewer pesticides.

    Creating “self-fertilizing” crops

    Roughly a quarter of greenhouse gas (GHG) emissions in the United States are a product of agriculture. Fertilizer production and use account for one-third of those emissions and include nitrous oxide, which has a heat-trapping capacity 298 times that of carbon dioxide, according to a 2018 Frontiers in Plant Science study. Most artificial fertilizer production also consumes huge quantities of natural gas and uses minerals mined from nonrenewable resources. After all that, much of the nitrogen fertilizer becomes runoff that pollutes local waterways. For those reasons, this Climate Grand Challenges flagship project aims to greatly reduce use of human-made fertilizers.
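Emissions accounting typically expresses nitrous oxide in carbon-dioxide equivalents by multiplying by its global warming potential. A minimal sketch using the 298× figure cited above (the function name and example tonnage are illustrative, not from the study):

```python
# 100-year global warming potential of nitrous oxide relative to CO2,
# per the 298x figure cited above (illustrative accounting sketch).
N2O_GWP_100 = 298

def n2o_to_co2e(tonnes_n2o: float) -> float:
    """CO2-equivalent tonnes for a given mass of nitrous oxide."""
    return tonnes_n2o * N2O_GWP_100

# e.g., 1,000 tonnes of N2O counts as 298,000 tonnes of CO2-equivalent
```

This is why even modest reductions in fertilizer-related N2O translate into large CO2-equivalent savings.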

    One tantalizing approach is to cultivate cereal crop plants — which account for about 75 percent of global food production — capable of drawing nitrogen from metabolic interactions with bacteria in the soil. Whitehead Institute’s Weng leads an effort to do just that: genetically bioengineer crops such as corn, rice, and wheat to, essentially, create their own fertilizer through a symbiotic relationship with nitrogen-fixing microbes.

    “Legumes such as bean and pea plants can form root nodules through which they receive nitrogen from rhizobia bacteria in exchange for carbon,” Weng explains. “This metabolic exchange means that legumes release far less greenhouse gas — and require far less investment of fossil energy — than do cereal crops, which use a huge portion of the artificially produced nitrogen fertilizers employed today.

    “Our goal is to develop methods for transferring legumes’ ‘self-fertilizing’ capacity to cereal crops,” Weng says. “If we can, we will revolutionize the sustainability of food production.”

    The project — formally entitled “Mimicking legume-rhizobia symbiosis for fertilizer production in cereals” — will be a multistage, five-year effort. It draws on Weng’s extensive studies of metabolic evolution in plants and his identification of molecules involved in formation of the root nodules that permit exchanges between legumes and nitrogen-fixing bacteria. It also leverages his expertise in reconstituting specific signaling and metabolic pathways in plants.

    Weng and his colleagues will begin by deciphering the full spectrum of small-molecule signaling processes that occur between legumes and rhizobium bacteria. Then they will genetically engineer an analogous system in nonlegume crop plants. Next, using state-of-the-art metabolomic methods, they will identify which small molecules excreted from legume roots prompt a nitrogen/carbon exchange from rhizobium bacteria. Finally, the researchers will genetically engineer the biosynthesis of those molecules in the roots of nonlegume plants and observe their effect on the rhizobium bacteria surrounding the roots.

    While the project is complex and technically challenging, its potential is staggering. “Focusing on corn alone, this could reduce the production and use of nitrogen fertilizer by 160,000 tons,” Weng notes. “And it could halve the related emissions of nitrous oxide gas.”

  • Empowering people to adapt on the frontlines of climate change

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the fifth in a five-part series highlighting the most promising concepts to emerge from the competition and the interdisciplinary research teams behind them.

    In the coastal south of Bangladesh, rice paddies that farmers could once harvest three times a year lie barren. Sea-level rise brings saltwater to the soil, ruining the staple crop. It’s one of many impacts, and inequities, of climate change. Despite producing less than 1 percent of global carbon emissions, Bangladesh is suffering more than most countries. Rising seas, heat waves, flooding, and cyclones threaten 90 million people.

    A platform being developed in a collaboration between MIT and BRAC, a Bangladesh-based global development organization, aims to inform and empower climate-threatened communities to proactively adapt to a changing future. Selected as one of five MIT Climate Grand Challenges flagship projects, the Climate Resilience Early Warning System (CREWSnet) will forecast the local impacts of climate change on people’s lives, homes, and livelihoods. These forecasts will guide BRAC’s development of climate-resiliency programs to help residents prepare for and adapt to life-altering conditions.

    “The communities that CREWSnet will focus on have done little to contribute to the problem of climate change in the first place. However, because of socioeconomic situations, they may be among the most vulnerable. We hope that by providing state-of-the-art projections and sharing them broadly with communities, and working through partners like BRAC, we can help improve the capacity of local communities to adapt to climate change, significantly,” says Elfatih Eltahir, the H.M. King Bhumibol Professor in the Department of Civil and Environmental Engineering.

    Eltahir leads the project with John Aldridge and Deborah Campbell in the Humanitarian Assistance and Disaster Relief Systems Group at Lincoln Laboratory. Additional partners across MIT include the Center for Global Change Science; the Department of Earth, Atmospheric and Planetary Sciences; the Joint Program on the Science and Policy of Global Change; and the Abdul Latif Jameel Poverty Action Lab. 

    Predicting local risks

    CREWSnet’s forecasts rely upon a sophisticated model, developed in Eltahir’s research group over the past 25 years, called the MIT Regional Climate Model. This model zooms in on climate processes at local scales, at a resolution as granular as 6 miles. In Bangladesh’s population-dense cities, a 6-mile area could encompass tens, or even hundreds, of thousands of people. The model takes into account the details of a region’s topography, land use, and coastline to predict changes in local conditions.

    When applying this model over Bangladesh, researchers found that heat waves will get more severe and more frequent over the next 30 years. In particular, wet-bulb temperatures, which indicate the ability for humans to cool down by sweating, will rise to dangerous levels rarely observed today, particularly in western, inland cities.
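Wet-bulb temperature folds heat and humidity into a single measure of heat stress; the closer it gets to skin temperature (roughly 35 °C), the less sweating can cool the body. As a rough illustration of the quantity — not the method used by the MIT Regional Climate Model — one widely used empirical fit is Stull’s 2011 formula:

```python
import math

def wet_bulb_stull(t_c: float, rh_pct: float) -> float:
    """Approximate wet-bulb temperature (deg C) from air temperature (deg C)
    and relative humidity (%), via Stull's 2011 empirical fit (valid near
    sea level, roughly 5-99% RH)."""
    return (t_c * math.atan(0.151977 * math.sqrt(rh_pct + 8.313659))
            + math.atan(t_c + rh_pct)
            - math.atan(rh_pct - 1.676331)
            + 0.00391838 * rh_pct ** 1.5 * math.atan(0.023101 * rh_pct)
            - 4.686035)

# At 20 deg C and 50% humidity this gives about 13.7 deg C; at 35 deg C and
# 90% humidity it approaches the dangerous mid-30s range.
```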

    Such hot spots exacerbate other challenges predicted to worsen near Bangladesh’s coast. Rising sea levels and powerful cyclones are eroding and flooding coastal communities, causing saltwater to surge into land and freshwater. This salinity intrusion is detrimental to human health, ruins drinking water supplies, and harms crops, livestock, and aquatic life that farmers and fishermen depend on for food and income.

    CREWSnet will fuse climate science with forecasting tools that predict the social and economic impacts to villages and cities. These forecasts — such as how often a crop season may fail, or how far floodwaters will reach — can steer decision-making.

    “What people need to know, whether they’re a governor or head of a household, is ‘What is going to happen in my area, and what decisions should I make for the people I’m responsible for?’ Our role is to integrate this science and technology together into a decision support system,” says Aldridge, whose group at Lincoln Laboratory specializes in this area. Most recently, they transitioned a hurricane-evacuation planning system to the U.S. government. “We know that making decisions based on climate change requires a deep level of trust. That’s why having a powerful partner like BRAC is so important,” he says.

    Testing interventions

    Established 50 years ago, just after Bangladesh’s independence, BRAC works in every district of the nation to provide social services that help people rise from extreme poverty. Today, it is one of the world’s largest nongovernmental organizations, serving 110 million people across 11 countries in Asia and Africa, but its success is cultivated locally.

    “BRAC is thrilled to partner with leading researchers at MIT to increase climate resilience in Bangladesh and provide a model that can be scaled around the globe,” says Donella Rapier, president and CEO of BRAC USA. “Locally led climate adaptation solutions that are developed in partnership with communities are urgently needed, particularly in the most vulnerable regions that are on the frontlines of climate change.”

    CREWSnet will help BRAC identify communities most vulnerable to forecasted impacts. In these areas, they will share knowledge and innovate or bolster programs to improve households’ capacity to adapt.

    Many climate initiatives are already underway. One program equips homes to filter and store rainwater, as salinity intrusion makes safe drinking water hard to access. Another program is building resilient housing, able to withstand 120-mile-per-hour winds, that can double as local shelters during cyclones and flooding. Other services are helping farmers switch to different livestock or crops better suited for wetter or saltier conditions (e.g., ducks instead of chickens, or salt-tolerant rice), providing interest-free loans to enable this change.

    But adapting in place will not always be possible, for example in areas predicted to be submerged or unbearably hot by midcentury. “Bangladesh is working on identifying and developing climate-resilient cities and towns across the country, as closer-by alternative destinations as compared to moving to Dhaka, the overcrowded capital of Bangladesh,” says Campbell. “CREWSnet can help identify regions better suited for migration, and climate-resilient adaptation strategies for those regions.” At the same time, BRAC’s Climate Bridge Fund is helping to prepare cities for climate-induced migration, building up infrastructure and financial services for people who have been displaced.

    Evaluating impact

    While CREWSnet’s goal is to enable action, it can’t quite measure the impact of those actions. The Abdul Latif Jameel Poverty Action Lab (J-PAL), a development economics program in the MIT School of Humanities, Arts, and Social Sciences, will help evaluate the effectiveness of the climate-adaptation programs.

    “We conduct randomized controlled trials, similar to medical trials, that help us understand if a program improved people’s lives,” says Claire Walsh, the project director of the King Climate Action Initiative at J-PAL. “Once CREWSnet helps BRAC implement adaptation programs, we will generate scientific evidence on their impacts, so that BRAC and CREWSnet can make a case to funders and governments to expand effective programs.”
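At the heart of such a trial is a comparison of outcomes between randomly assigned treatment and control groups. A minimal, hypothetical sketch of the difference-in-means estimator (the data and units below are invented for illustration; real J-PAL evaluations involve far more careful design and inference):

```python
import statistics

def diff_in_means(treated, control):
    """Estimated average treatment effect (difference in group means) and
    its standard error, assuming random assignment. Illustrative sketch."""
    effect = statistics.mean(treated) - statistics.mean(control)
    se = (statistics.variance(treated) / len(treated)
          + statistics.variance(control) / len(control)) ** 0.5
    return effect, se

# e.g., hypothetical household outcomes after an adaptation program
effect, se = diff_in_means([12.1, 13.4, 11.8, 12.9], [11.0, 12.2, 11.5, 11.9])
```

A real trial would add significance testing, pre-registered outcomes, and corrections for attrition and spillovers.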

    The team aspires to bring CREWSnet to other nations disproportionately impacted by climate change. “Our vision is to have this be a globally extensible capability,” says Campbell. CREWSnet’s name evokes another early-warning decision-support system, FEWS NET, which helped organizations address famine in eastern Africa in the 1980s. Today it is a pillar of food-security planning around the world.

    CREWSnet hopes for a similar impact in climate change planning. Its selection as an MIT Climate Grand Challenges flagship project will inject the project with more funding and resources, momentum that will also help BRAC’s fundraising. The team plans to deploy CREWSnet to southwestern Bangladesh within five years.

    “The communities that we are aspiring to reach with CREWSnet are deeply aware that their lives are changing — they have been looking climate change in the eye for many years. They are incredibly resilient, creative, and talented,” says Ashley Toombs, the external affairs director for BRAC USA. “As a team, we are excited to bring this system to Bangladesh. And what we learn together, we will apply at potentially even larger scales.”

  • MIT engineers introduce the Oreometer

    When you twist open an Oreo cookie to get to the creamy center, you’re mimicking a standard test in rheology — the study of how a non-Newtonian material flows when twisted, pressed, or otherwise stressed. MIT engineers have now subjected the sandwich cookie to rigorous materials tests to get to the center of a tantalizing question: Why does the cookie’s cream stick to just one wafer when twisted apart?

    “There’s the fascinating problem of trying to get the cream to distribute evenly between the two wafers, which turns out to be really hard,” says Max Fan, an undergraduate in MIT’s Department of Mechanical Engineering.

    In pursuit of an answer, the team subjected cookies to standard rheology tests in the lab and found that no matter the flavor or amount of stuffing, the cream at the center of an Oreo almost always sticks to one wafer when twisted open. Only for older boxes of cookies does the cream sometimes separate more evenly between both wafers.

    The researchers also measured the torque required to twist open an Oreo, and found it to be similar to the torque required to turn a doorknob and about 1/10th what’s needed to twist open a bottlecap. The cream’s failure stress — i.e. the force per area required to get the cream to flow, or deform — is twice that of cream cheese and peanut butter, and about the same magnitude as mozzarella cheese. Judging from the cream’s response to stress, the team classifies its texture as “mushy,” rather than brittle, tough, or rubbery.

    So, why does the cookie’s cream glom to one side rather than splitting evenly between both? The manufacturing process may be to blame.

    “Videos of the manufacturing process show that they put the first wafer down, then dispense a ball of cream onto that wafer before putting the second wafer on top,” says Crystal Owens, an MIT mechanical engineering PhD candidate who studies the properties of complex fluids. “Apparently that little time delay may make the cream stick better to the first wafer.”

    The team’s study isn’t simply a sweet diversion from bread-and-butter research; it’s also an opportunity to make the science of rheology accessible to others. To that end, the researchers have designed a 3D-printable “Oreometer” — a simple device that firmly grasps an Oreo cookie and uses pennies and rubber bands to apply a controlled torque that progressively twists the cookie open. Instructions for building the tabletop device are available online.

    The new study, “On Oreology, the fracture and flow of ‘milk’s favorite cookie,’” appears today in Kitchen Flows, a special issue of the journal Physics of Fluids. It was conceived of early in the Covid-19 pandemic, when many scientists’ labs were closed or difficult to access. In addition to Owens and Fan, co-authors are mechanical engineering professors Gareth McKinley and A. John Hart.

    Confection connection

    A standard test in rheology places a fluid, slurry, or other flowable material onto the base of an instrument known as a rheometer. A parallel plate above the base can be lowered onto the test material. The plate is then twisted as sensors track the applied rotation and torque.

    Owens, who regularly uses a laboratory rheometer to test fluid materials such as 3D-printable inks, couldn’t help noting a similarity with sandwich cookies. As she writes in the new study:

    “Scientifically, sandwich cookies present a paradigmatic model of parallel plate rheometry in which a fluid sample, the cream, is held between two parallel plates, the wafers. When the wafers are counter-rotated, the cream deforms, flows, and ultimately fractures, leading to separation of the cookie into two pieces.”

    While Oreo cream may not appear to possess fluid-like properties, it is considered a “yield stress fluid” — a soft solid when unperturbed that can start to flow under enough stress, the way toothpaste, frosting, certain cosmetics, and concrete do.

    Curious as to whether others had explored the connection between Oreos and rheology, Owens found mention of a 2016 Princeton University study in which physicists first reported that indeed, when twisting Oreos by hand, the cream almost always came off on one wafer.

    “We wanted to build on this to see what actually causes this effect and if we could control it if we mounted the Oreos carefully onto our rheometer,” she says.

    Cookie twist

    In an experiment that they would repeat for multiple cookies of various fillings and flavors, the researchers glued an Oreo to both the top and bottom plates of a rheometer and applied varying degrees of torque and angular rotation, noting the values that successfully twisted each cookie apart. They plugged the measurements into equations to calculate the cream’s viscoelasticity, or flowability. For each experiment, they also noted the cream’s “post-mortem distribution,” or where the cream ended up after twisting open.
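In parallel-plate rheometry, the measured torque maps to a shear stress at the rim of the sample. A rough sketch of that conversion under the standard simplifying (Newtonian) assumption — the torque and cookie dimensions below are illustrative stand-ins, not the paper’s reported values:

```python
import math

def rim_shear_stress(torque_nm: float, radius_m: float) -> float:
    """Shear stress (Pa) at the rim of a parallel-plate sample,
    sigma = 2*M / (pi * R^3), assuming a Newtonian response."""
    return 2 * torque_nm / (math.pi * radius_m ** 3)

# Illustrative numbers: ~0.1 N*m of twist on a ~2 cm cream radius
stress = rim_shear_stress(0.1, 0.02)  # roughly 8 kPa, a "soft solid" range
```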

    In all, the team went through about 20 boxes of Oreos, including regular, Double Stuf, and Mega Stuf levels of filling, and regular, dark chocolate, and “golden” wafer flavors. Surprisingly, they found that no matter the amount of cream filling or flavor, the cream almost always separated onto one wafer.

    “We had expected an effect based on size,” Owens says. “If there was more cream between layers, it should be easier to deform. But that’s not actually the case.”

    Curiously, when they mapped each cookie’s result to its original position in the box, they noticed the cream tended to stick to the inward-facing wafer: Cookies on the left side of the box twisted such that the cream ended up on the right wafer, whereas cookies on the right side separated with cream mostly on the left wafer. They suspect this box distribution may be a result of post-manufacturing environmental effects, such as heating or jostling that may cause cream to peel slightly away from the outer wafers, even before twisting.

    The understanding gained from the properties of Oreo cream could potentially be applied to the design of other complex fluid materials.

    “My 3D printing fluids are in the same class of materials as Oreo cream,” she says. “So, this new understanding can help me better design ink when I’m trying to print flexible electronics from a slurry of carbon nanotubes, because they deform in almost exactly the same way.”

    As for the cookie itself, she suggests that if the inside of Oreo wafers were more textured, the cream might grip better onto both sides and split more evenly when twisted.

    “As they are now, we found there’s no trick to twisting that would split the cream evenly,” Owens concludes.

    This research was supported, in part, by the MIT UROP program and by the National Defense Science and Engineering Graduate Fellowship Program.

  • Looking forward to forecast the risks of a changing climate

    On April 11, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This article is the third in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    Extreme weather events that were once considered rare have become noticeably less so, from intensifying hurricane activity in the North Atlantic to wildfires generating massive clouds of ozone-damaging smoke. But current climate models are unprepared when it comes to estimating the risk that these increasingly extreme events pose — and without adequate modeling, governments are left unable to take necessary precautions to protect their communities.

    MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) Professor Paul O’Gorman researches this trend by studying how climate change affects the atmosphere and incorporating what he learns into climate models to improve their accuracy. One particular focus for O’Gorman has been changes in extreme precipitation and midlatitude storms that hit areas like New England.

    “These extreme events are having a lot of impact, but they’re also difficult to model or study,” he says. Seeing the pressing need for better climate models that can be used to develop preparedness plans and climate change mitigation strategies, O’Gorman and collaborators Kerry Emanuel, the Cecil and Ida Green Professor of Atmospheric Science in EAPS, and Miho Mazereeuw, associate professor in MIT’s Department of Architecture, are leading an interdisciplinary group of scientists, engineers, and designers to tackle this problem with their MIT Climate Grand Challenges flagship project, “Preparing for a new world of weather and climate extremes.”

    “We know already from observations and from climate model predictions that weather and climate extremes are changing and will change more,” O’Gorman says. “The grand challenge is preparing for those changing extremes.”

    Their proposal is one of five flagship projects recently announced by the MIT Climate Grand Challenges initiative — an Institute-wide effort catalyzing novel research and engineering innovations to address the climate crisis. Selected from a field of almost 100 submissions, the team will receive additional funding and exposure to help accelerate and scale their project goals. Other MIT collaborators on the proposal include researchers from the School of Engineering, the School of Architecture and Planning, the Office of Sustainability, the Center for Global Change Science, and the Institute for Data, Systems and Society.

    Weather risk modeling

    Fifteen years ago, Kerry Emanuel developed a simple hurricane model. It was based on physics equations, rather than statistics, and could run in real time, making it useful for risk assessment. Emanuel wondered if similar models could be used for long-term risk assessment of other hazards, such as changes in extreme weather driven by climate change.

    “I discovered, somewhat to my surprise and dismay, that almost all extant estimates of long-term weather risks in the United States are based not on physical models, but on historical statistics of the hazards,” says Emanuel. “The problem with relying on historical records is that they’re too short; while they can help estimate common events, they don’t contain enough information to make predictions for more rare events.”

    Another limitation of weather risk models that rely heavily on statistics: they have a built-in assumption that the climate is static.

    “Historical records rely on the climate at the time they were recorded; they can’t say anything about how hurricanes grow in a warmer climate,” says Emanuel. The models rely on fixed relationships between events; they assume that hurricane activity will stay the same, even while science is showing that warmer temperatures will most likely push typical hurricane activity beyond the tropics and into a much wider band of latitudes.

    As a flagship project, the goal is to eliminate this reliance on the historical record by emphasizing physical principles (e.g., the laws of thermodynamics and fluid mechanics) in next-generation models. The downside to this is that there are many variables that have to be included. Not only are there planetary-scale systems to consider, such as the global circulation of the atmosphere, but there are also small-scale, extremely localized events, like thunderstorms, that influence predictive outcomes.

    Trying to compute all of these at once is costly and time-consuming — and the results often can’t tell you the risk in a specific location. But there is a way to correct for this: “What’s done is to use a global model, and then use a method called downscaling, which tries to infer what would happen on very small scales that aren’t properly resolved by the global model,” explains O’Gorman. The team hopes to improve downscaling techniques so that they can be used to calculate the risk of very rare but impactful weather events.

    Global climate models, or general circulation models (GCMs), Emanuel explains, are constructed a bit like a jungle gym. Like the playground bars, the Earth is sectioned into an interconnected three-dimensional framework, only divided into cells roughly 100 to 200 kilometers across. Each node comprises a set of computations for characteristics like wind, rainfall, atmospheric pressure, and temperature within its bounds; the outputs of each node are connected to its neighbors. This framework is useful for creating a big-picture idea of Earth’s climate system, but if you tried to zoom in on a specific location — like, say, to see what’s happening in Miami or Mumbai — the connecting nodes are too far apart to make predictions specific to those areas.

    Scientists work around this problem by using downscaling. They use the same blueprint of the jungle gym, but within the nodes they weave a mesh of smaller features, incorporating equations for things like topography and vegetation or regional meteorological models to fill in the blanks. By creating a finer mesh over smaller areas they can predict local effects without needing to run the entire global model.
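In its simplest conceptual form, downscaling interpolates the coarse grid onto a finer mesh and then layers in local detail such as terrain. A toy sketch of that idea — plain bilinear interpolation plus a standard-atmosphere lapse-rate correction, not the team’s actual method:

```python
import numpy as np

def downscale(coarse: np.ndarray, fine_shape: tuple) -> np.ndarray:
    """Bilinearly interpolate a coarse 2D field onto a finer grid (toy sketch)."""
    ny, nx = coarse.shape
    fy, fx = fine_shape
    # fractional position of each fine cell in coarse-grid index space
    ys = np.linspace(0, ny - 1, fy)
    xs = np.linspace(0, nx - 1, fx)
    y0 = np.floor(ys).astype(int).clip(0, ny - 2)
    x0 = np.floor(xs).astype(int).clip(0, nx - 2)
    wy = (ys - y0)[:, None]
    wx = (xs - x0)[None, :]
    c00 = coarse[np.ix_(y0, x0)]
    c01 = coarse[np.ix_(y0, x0 + 1)]
    c10 = coarse[np.ix_(y0 + 1, x0)]
    c11 = coarse[np.ix_(y0 + 1, x0 + 1)]
    return (c00 * (1 - wy) * (1 - wx) + c01 * (1 - wy) * wx
            + c10 * wy * (1 - wx) + c11 * wy * wx)

def add_topography(temp_fine, elev_m, lapse=6.5e-3):
    """Cool interpolated temperatures by ~6.5 deg C per km of elevation
    (standard-atmosphere lapse rate), a crude stand-in for local terrain."""
    return temp_fine - lapse * elev_m
```

Real downscaling schemes replace these placeholders with regional meteorological models and observed statistical relationships, but the structure — coarse field in, locally corrected fine field out — is the same.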

    Of course, even this finer-resolution solution has its trade-offs. Nesting models within models can give a clearer picture of what’s happening in a specific region, but crunching all that data at once is still a computing challenge. The cost is expense and time, or predictions limited to shorter windows: where GCMs can be run over decades or centuries, a particularly complex local model may be restricted to timescales of just a few years at a time.

    “I’m afraid that most of the downscaling at present is brute force, but I think there’s room to do it in better ways,” says Emanuel, who sees the problem of finding new and novel methods of achieving this goal as an intellectual challenge. “I hope that through the Grand Challenges project we might be able to get students, postdocs, and others interested in doing this in a very creative way.”

    Adapting to weather extremes for cities and renewable energy

    Improving climate modeling is more than a scientific exercise in creativity, however. There’s a very real application for models that can accurately forecast risk in localized regions.

    Another problem is that progress in climate modeling has not kept up with the need for climate mitigation plans, especially in some of the most vulnerable communities around the globe.

    “It is critical for stakeholders to have access to this data for their own decision-making process. Every community is composed of a diverse population with diverse needs, and each locality is affected by extreme weather events in unique ways,” says Mazereeuw, the director of the MIT Urban Risk Lab. 

    A key piece of the team’s project is building on partnerships the Urban Risk Lab has developed with several cities to test their models once they have a usable product up and running. The cities were selected based on their vulnerability to increasing extreme weather events, such as tropical cyclones in Broward County, Florida, and Toa Baja, Puerto Rico, and extratropical storms in Boston, Massachusetts, and Cape Town, South Africa.

In their proposal, the team outlines a variety of deliverables that the cities can ultimately use in their climate change preparations, such as online interactive platforms and workshops with stakeholders, including local governments, developers, nonprofits, and residents, to learn directly what specific tools they need for their local communities. With that input, the cities can craft plans addressing different scenarios in their region, such as sea-level rise or heat waves, while also gaining the information and means to develop infrastructure adaptation strategies that will be most effective and efficient for them.

    “We are acutely aware of the inequity of resources both in mitigating impacts and recovering from disasters. Working with diverse communities through workshops allows us to engage a lot of people, listen, discuss, and collaboratively design solutions,” says Mazereeuw.

    By the end of five years, the team is hoping that they’ll have better risk assessment and preparedness tool kits, not just for the cities that they’re partnering with, but for others as well.

“MIT is well-positioned to make progress in this area,” says O’Gorman, “and I think it’s an important problem where we can make a difference.”


    Developing electricity-powered, low-emissions alternatives to carbon-intensive industrial processes

    On April 11, 2022, MIT announced five multiyear flagship projects in the first-ever Climate Grand Challenges, a new initiative to tackle complex climate problems and deliver breakthrough solutions to the world as quickly as possible. This is the second article in a five-part series highlighting the most promising concepts to emerge from the competition, and the interdisciplinary research teams behind them.

    One of the biggest leaps that humankind could take to drastically lower greenhouse gas emissions globally would be the complete decarbonization of industry. But without finding low-cost, environmentally friendly substitutes for industrial materials, the traditional production of steel, cement, ammonia, and ethylene will continue pumping out billions of tons of carbon annually; these sectors alone are responsible for at least one third of society’s global greenhouse gas emissions. 

    A major problem is that industrial manufacturers, whose success depends on reliable, cost-efficient, and large-scale production methods, are too heavily invested in processes that have historically been powered by fossil fuels to quickly switch to new alternatives. It’s a machine that kicked on more than 100 years ago, and which MIT electrochemical engineer Yet-Ming Chiang says we can’t shut off without major disruptions to the world’s massive supply chain of these materials. What’s needed, Chiang says, is a broader, collaborative clean energy effort that takes “targeted fundamental research, all the way through to pilot demonstrations that greatly lowers the risk for adoption of new technology by industry.”

    This would be a new approach to decarbonization of industrial materials production that relies on largely unexplored but cleaner electrochemical processes. New production methods could be optimized and integrated into the industrial machine to make it run on low-cost, renewable electricity in place of fossil fuels. 

    Recognizing this, Chiang, the Kyocera Professor in the Department of Materials Science and Engineering, teamed with research collaborator Bilge Yildiz, the Breene M. Kerr Professor of Nuclear Science and Engineering and professor of materials science and engineering, with key input from Karthish Manthiram, visiting professor in the Department of Chemical Engineering, to submit a project proposal to the MIT Climate Grand Challenges. Their plan: to create an innovation hub on campus that would bring together MIT researchers individually investigating decarbonization of steel, cement, ammonia, and ethylene under one roof, combining research equipment and directly collaborating on new methods to produce these four key materials.

    Many researchers across MIT have already signed on to join the effort, including Antoine Allanore, associate professor of metallurgy, who specializes in the development of sustainable materials and manufacturing processes, and Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in the Department of Materials Science and Engineering, who is an expert in materials economics and sustainability. Other MIT faculty currently involved include Fikile Brushett, Betar Gallant, Ahmed Ghoniem, William Green, Jeffrey Grossman, Ju Li, Yuriy Román-Leshkov, Yang Shao-Horn, Robert Stoner, Yogesh Surendranath, Timothy Swager, and Kripa Varanasi.

    “The team we brought together has the expertise needed to tackle these challenges, including electrochemistry — using electricity to decarbonize these chemical processes — and materials science and engineering, process design and scale-up technoeconomic analysis, and system integration, which is all needed for this to go out from our labs to the field,” says Yildiz.

    Selected from a field of more than 100 proposals, their Center for Electrification and Decarbonization of Industry (CEDI) will be the first such institute worldwide dedicated to testing and scaling the most innovative and promising technologies in sustainable chemicals and materials. CEDI will work to facilitate rapid translation of lab discoveries into affordable, scalable industry solutions, with potential to offset as much as 15 percent of greenhouse gas emissions. The team estimates that some CEDI projects already underway could be commercialized within three years.

    “The real timeline is as soon as possible,” says Chiang.

    To achieve CEDI’s ambitious goals, a physical location is key, staffed with permanent faculty, as well as undergraduates, graduate students, and postdocs. Yildiz says the center’s success will depend on engaging student researchers to carry forward with research addressing the biggest ongoing challenges to decarbonization of industry.

    “We are training young scientists, students, on the learned urgency of the problem,” says Yildiz. “We empower them with the skills needed, and even if an individual project does not find the implementation in the field right away, at least, we would have trained the next generation that will continue to go after them in the field.”

    Chiang’s background in electrochemistry showed him how the efficiency of cement production could benefit from adopting clean electricity sources, and Yildiz’s work on ethylene, the source of plastic and one of industry’s most valued chemicals, has revealed overlooked cost benefits to switching to electrochemical processes with less expensive starting materials. With industry partners, they hope to continue these lines of fundamental research along with Allanore, who is focused on electrifying steel production, and Manthiram, who is developing new processes for ammonia. Olivetti will focus on understanding risks and barriers to implementation. This multilateral approach aims to speed up the timeline to industry adoption of new technologies at the scale needed for global impact.

    “One of the points of emphasis in this whole center is going to be applying technoeconomic analysis of what it takes to be successful at a technical and economic level, as early in the process as possible,” says Chiang.

    The impact of large-scale industry adoption of clean energy sources in these four key areas that CEDI plans to target first would be profound, as these sectors are currently responsible for 7.5 billion tons of emissions annually. There is the potential for even greater impact on emissions as new knowledge is applied to other industrial products beyond the initial four targets of steel, cement, ammonia, and ethylene. Meanwhile, the center will stand as a hub to attract new industry, government stakeholders, and research partners to collaborate on urgently needed solutions, both newly arising and long overdue.

    When Chiang and Yildiz first met to discuss ideas for MIT Climate Grand Challenges, they decided they wanted to build a climate research center that functioned unlike any other to help pivot large industry toward decarbonization. Beyond considering how new solutions will impact industry’s bottom line, CEDI will also investigate unique synergies that could arise from the electrification of industry, like processes that would create new byproducts that could be the feedstock to other industry processes, reducing waste and increasing efficiencies in the larger system. And because industry is so good at scaling, those added benefits would be widespread, finally replacing century-old technologies with critical updates designed to improve production and markedly reduce industry’s carbon footprint sooner rather than later.

“Everything we do, we’re going to try to do with urgency,” Chiang says. “The fundamental research will be done with urgency, and the transition to commercialization, we’re going to do with urgency.”


    Structures considered key to gene expression are surprisingly fleeting

    In human chromosomes, DNA is coated by proteins to form an exceedingly long beaded string. This “string” is folded into numerous loops, which are believed to help cells control gene expression and facilitate DNA repair, among other functions. A new study from MIT suggests that these loops are very dynamic and shorter-lived than previously thought.

    In the new study, the researchers were able to monitor the movement of one stretch of the genome in a living cell for about two hours. They saw that this stretch was fully looped for only 3 to 6 percent of the time, with the loop lasting for only about 10 to 30 minutes. The findings suggest that scientists’ current understanding of how loops influence gene expression may need to be revised, the researchers say.

    “Many models in the field have been these pictures of static loops regulating these processes. What our new paper shows is that this picture is not really correct,” says Anders Sejr Hansen, the Underwood-Prescott Career Development Assistant Professor of Biological Engineering at MIT. “We suggest that the functional state of these domains is much more dynamic.”

    Hansen is one of the senior authors of the new study, along with Leonid Mirny, a professor in MIT’s Institute for Medical Engineering and Science and the Department of Physics, and Christoph Zechner, a group leader at the Max Planck Institute of Molecular Cell Biology and Genetics in Dresden, Germany, and the Center for Systems Biology Dresden. MIT postdoc Michele Gabriele, recent Harvard University PhD recipient Hugo Brandão, and MIT graduate student Simon Grosse-Holz are the lead authors of the paper, which appears today in Science.

    Out of the loop

    Using computer simulations and experimental data, scientists including Mirny’s group at MIT have shown that loops in the genome are formed by a process called extrusion, in which a molecular motor promotes the growth of progressively larger loops. The motor stops each time it encounters a “stop sign” on DNA. The motor that extrudes such loops is a protein complex called cohesin, while the DNA-bound protein CTCF serves as the stop sign. These cohesin-mediated loops between CTCF sites were seen in previous experiments.
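The extrusion picture lends itself to a toy simulation. The sketch below is a minimal illustration of the mechanism, not the study’s model or code; the site counts and rates are invented. A cohesin loads at a random site between two CTCF “stop signs,” its two legs step outward until blocked, and the complex unloads at random:

```python
import random

def simulate_extrusion(n_steps=100_000, left_ctcf=0, right_ctcf=100,
                       unload_prob=0.02, seed=1):
    """Toy 1-D loop-extrusion sketch: a cohesin loads at a random interior
    site, its two legs step outward one site per tick until each reaches a
    CTCF boundary, and the whole complex unloads at random. Returns the
    fraction of ticks spent fully looped (both legs stalled at CTCF)."""
    rng = random.Random(seed)
    left = right = None              # no cohesin bound yet
    fully_looped_ticks = 0
    for _ in range(n_steps):
        if left is None:             # load a fresh cohesin at a random interior site
            left = right = rng.randrange(left_ctcf + 1, right_ctcf)
            continue
        if rng.random() < unload_prob:   # stochastic unloading dissolves the loop
            left = right = None
            continue
        left = max(left - 1, left_ctcf)      # legs extrude outward...
        right = min(right + 1, right_ctcf)   # ...until blocked by CTCF
        if left == left_ctcf and right == right_ctcf:
            fully_looped_ticks += 1
    return fully_looped_ticks / n_steps

print(simulate_extrusion())
```

With these toy rates, unloading often happens before both legs reach CTCF, so the fully looped state occupies only a minority of ticks, the same qualitative behavior the study quantifies with real imaging data and polymer simulations.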

    However, those experiments only offered a snapshot of a moment in time, with no information on how the loops change over time. In their new study, the researchers developed techniques that allowed them to fluorescently label CTCF DNA sites so they could image the DNA loops over several hours. They also created a new computational method that can infer the looping events from the imaging data.

    “This method was crucial for us to distinguish signal from noise in our experimental data and quantify looping,” Zechner says. “We believe that such approaches will become increasingly important for biology as we continue to push the limits of detection with experiments.”

    The researchers used their method to image a stretch of the genome in mouse embryonic stem cells. “If we put our data in the context of one cell division cycle, which lasts about 12 hours, the fully formed loop only actually exists for about 20 to 45 minutes, or about 3 to 6 percent of the time,” Grosse-Holz says.
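The percentages in that quote follow directly from the durations:

```python
cycle_min = 12 * 60                      # one cell division cycle, in minutes
for loop_min in (20, 45):
    share = loop_min / cycle_min
    print(f"{loop_min} min looped -> {share:.1%} of the cycle")
# 20 min -> 2.8%, 45 min -> 6.2%: roughly 3 to 6 percent of the cycle
```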

    “If the loop is only present for such a tiny period of the cell cycle and very short-lived, we shouldn’t think of this fully looped state as being the primary regulator of gene expression,” Hansen says. “We think we need new models for how the 3D structure of the genome regulates gene expression, DNA repair, and other functional downstream processes.”

    While fully formed loops were rare, the researchers found that partially extruded loops were present about 92 percent of the time. These smaller loops have been difficult to observe with the previous methods of detecting loops in the genome.

    “In this study, by integrating our experimental data with polymer simulations, we have now been able to quantify the relative extents of the unlooped, partially extruded, and fully looped states,” Brandão says.

    “Since these interactions are very short, but very frequent, the previous methodologies were not able to fully capture their dynamics,” Gabriele adds. “With our new technique, we can start to resolve transitions between fully looped and unlooped states.”

    Play video

    The researchers hypothesize that these partial loops may play more important roles in gene regulation than fully formed loops. Strands of DNA run along each other as loops begin to form and then fall apart, and these interactions may help regulatory elements such as enhancers and gene promoters find each other.

    “More than 90 percent of the time, there are some transient loops, and presumably what’s important is having those loops that are being perpetually extruded,” Mirny says. “The process of extrusion itself may be more important than the fully looped state that only occurs for a short period of time.”

    More loops to study

Since most of the other loops in the genome are weaker than the one the researchers studied in this paper, they suspect that many other loops will also prove to be highly transient. They now plan to use their new technique to study some of those other loops, in a variety of cell types.

    “There are about 10,000 of these loops, and we’ve looked at one,” Hansen says. “We have a lot of indirect evidence to suggest that the results would be generalizable, but we haven’t demonstrated that. Using the technology platform we’ve set up, which combines new experimental and computational methods, we can begin to approach other loops in the genome.”

    The researchers also plan to investigate the role of specific loops in disease. Many diseases, including a neurodevelopmental disorder called FOXG1 syndrome, could be linked to faulty loop dynamics. The researchers are now studying how both the normal and mutated form of the FOXG1 gene, as well as the cancer-causing gene MYC, are affected by genome loop formation.

The research was funded by the National Institutes of Health, the National Science Foundation, the Mathers Foundation, a Pew-Stewart Cancer Research Scholar grant, the Chaires d’excellence Internationale Blaise Pascal, an American-Italian Cancer Foundation research scholarship, and the Max Planck Institute for Molecular Cell Biology and Genetics.


    A new heat engine with no moving parts is as efficient as a steam turbine

    Engineers at MIT and the National Renewable Energy Laboratory (NREL) have designed a heat engine with no moving parts. Their new demonstrations show that it converts heat to electricity with over 40 percent efficiency — a performance better than that of traditional steam turbines.

The heat engine is a thermophotovoltaic (TPV) cell, similar to a solar panel’s photovoltaic cells, that passively captures high-energy photons from a white-hot heat source and converts them into electricity. The team’s design can generate electricity from a heat source between 1,900 and 2,400 degrees Celsius, or up to about 4,300 degrees Fahrenheit.

    The researchers plan to incorporate the TPV cell into a grid-scale thermal battery. The system would absorb excess energy from renewable sources such as the sun and store that energy in heavily insulated banks of hot graphite. When the energy is needed, such as on overcast days, TPV cells would convert the heat into electricity, and dispatch the energy to a power grid.

    With the new TPV cell, the team has now successfully demonstrated the main parts of the system in separate, small-scale experiments. They are working to integrate the parts to demonstrate a fully operational system. From there, they hope to scale up the system to replace fossil-fuel-driven power plants and enable a fully decarbonized power grid, supplied entirely by renewable energy.

    “Thermophotovoltaic cells were the last key step toward demonstrating that thermal batteries are a viable concept,” says Asegun Henry, the Robert N. Noyce Career Development Professor in MIT’s Department of Mechanical Engineering. “This is an absolutely critical step on the path to proliferate renewable energy and get to a fully decarbonized grid.”

    Henry and his collaborators have published their results today in the journal Nature. Co-authors at MIT include Alina LaPotin, Kevin Schulte, Kyle Buznitsky, Colin Kelsall, Andrew Rohskopf, and Evelyn Wang, the Ford Professor of Engineering and head of the Department of Mechanical Engineering, along with collaborators at NREL in Golden, Colorado.

    Jumping the gap

    More than 90 percent of the world’s electricity comes from sources of heat such as coal, natural gas, nuclear energy, and concentrated solar energy. For a century, steam turbines have been the industrial standard for converting such heat sources into electricity.

On average, steam turbines reliably convert about 35 percent of a heat source’s energy into electricity, with about 60 percent representing the highest efficiency of any heat engine to date. But the machinery depends on moving parts that are temperature-limited. Heat sources above 2,000 degrees Celsius, such as Henry’s proposed thermal battery system, would be too hot for turbines.

In recent years, scientists have looked into solid-state alternatives: heat engines with no moving parts that could potentially work efficiently at higher temperatures.

    “One of the advantages of solid-state energy converters are that they can operate at higher temperatures with lower maintenance costs because they have no moving parts,” Henry says. “They just sit there and reliably generate electricity.”

    Thermophotovoltaic cells offered one exploratory route toward solid-state heat engines. Much like solar cells, TPV cells could be made from semiconducting materials with a particular bandgap — the gap between a material’s valence band and its conduction band. If a photon with a high enough energy is absorbed by the material, it can kick an electron across the bandgap, where the electron can then conduct, and thereby generate electricity — doing so without moving rotors or blades.
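A rough back-of-the-envelope calculation shows why the emitter temperature matters for the choice of bandgap. The sketch below is generic physics, not the team’s analysis: it uses standard constants and Wien’s displacement law to find where a hot source’s blackbody emission peaks, and the photon energy at that peak, for the two temperatures the article cites.

```python
H = 6.626e-34      # Planck constant, J*s
C = 2.998e8        # speed of light, m/s
EV = 1.602e-19     # joules per electron-volt
WIEN = 2.898e-3    # Wien displacement constant, m*K

for t_celsius in (1900, 2400):
    t_kelvin = t_celsius + 273.15
    peak_wavelength = WIEN / t_kelvin              # where blackbody emission peaks
    peak_photon_ev = H * C / peak_wavelength / EV  # photon energy at that wavelength
    print(f"{t_celsius} C: peak ~{peak_wavelength * 1e9:.0f} nm, ~{peak_photon_ev:.2f} eV")
# ~0.93 eV at 1,900 C and ~1.14 eV at 2,400 C
```

Hotter sources push the spectrum toward higher photon energies, which is why cells built from higher-bandgap materials can convert more of the emission usefully at these temperatures.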

    To date, most TPV cells have only reached efficiencies of around 20 percent, with the record at 32 percent, as they have been made of relatively low-bandgap materials that convert lower-temperature, low-energy photons, and therefore convert energy less efficiently.

    Catching light

    In their new TPV design, Henry and his colleagues looked to capture higher-energy photons from a higher-temperature heat source, thereby converting energy more efficiently. The team’s new cell does so with higher-bandgap materials and multiple junctions, or material layers, compared with existing TPV designs.

    The cell is fabricated from three main regions: a high-bandgap alloy, which sits over a slightly lower-bandgap alloy, underneath which is a mirror-like layer of gold. The first layer captures a heat source’s highest-energy photons and converts them into electricity, while lower-energy photons that pass through the first layer are captured by the second and converted to add to the generated voltage. Any photons that pass through this second layer are then reflected by the mirror, back to the heat source, rather than being absorbed as wasted heat.

    The team tested the cell’s efficiency by placing it over a heat flux sensor — a device that directly measures the heat absorbed from the cell. They exposed the cell to a high-temperature lamp and concentrated the light onto the cell. They then varied the bulb’s intensity, or temperature, and observed how the cell’s power efficiency — the amount of power it produced, compared with the heat it absorbed — changed with temperature. Over a range of 1,900 to 2,400 degrees Celsius, the new TPV cell maintained an efficiency of around 40 percent.
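The efficiency figure in that experiment is a simple ratio. The helper below is a generic sketch of the bookkeeping; the numbers in the example are invented, not the team’s measurements:

```python
def tpv_power_efficiency(electrical_power_w, heat_absorbed_w):
    """Power efficiency as defined in the experiment: electrical power
    produced divided by the heat the cell actually absorbs. Photons the
    gold mirror reflects back to the emitter are never absorbed, so they
    do not count against the cell in the denominator."""
    return electrical_power_w / heat_absorbed_w

# Hypothetical reading: 4 W of electricity from 10 W of absorbed heat
print(tpv_power_efficiency(4.0, 10.0))  # 0.4, i.e., 40 percent
```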

    “We can get a high efficiency over a broad range of temperatures relevant for thermal batteries,” Henry says.

    The cell in the experiments is about a square centimeter. For a grid-scale thermal battery system, Henry envisions the TPV cells would have to scale up to about 10,000 square feet (about a quarter of a football field), and would operate in climate-controlled warehouses to draw power from huge banks of stored solar energy. He points out that an infrastructure exists for making large-scale photovoltaic cells, which could also be adapted to manufacture TPVs.

    “There’s definitely a huge net positive here in terms of sustainability,” Henry says. “The technology is safe, environmentally benign in its life cycle, and can have a tremendous impact on abating carbon dioxide emissions from electricity production.”

This research was supported, in part, by the U.S. Department of Energy.


    Engineers enlist AI to help scale up advanced solar cell manufacturing

    Perovskites are a family of materials that are currently the leading contender to potentially replace today’s silicon-based solar photovoltaics. They hold the promise of panels that are far thinner and lighter, that could be made with ultra-high throughput at room temperature instead of at hundreds of degrees, and that are cheaper and easier to transport and install. But bringing these materials from controlled laboratory experiments into a product that can be manufactured competitively has been a long struggle.

Manufacturing perovskite-based solar cells involves optimizing a dozen or more variables at once, even within one particular manufacturing approach among many possibilities. But a new system based on a novel approach to machine learning could speed up the development of optimized production methods and help make the next generation of solar power a reality.

    The system, developed by researchers at MIT and Stanford University over the last few years, makes it possible to integrate data from prior experiments, and information based on personal observations by experienced workers, into the machine learning process. This makes the outcomes more accurate and has already led to the manufacturing of perovskite cells with an energy conversion efficiency of 18.5 percent, a competitive level for today’s market.

    The research is reported today in the journal Joule, in a paper by MIT professor of mechanical engineering Tonio Buonassisi, Stanford professor of materials science and engineering Reinhold Dauskardt, recent MIT research assistant Zhe Liu, Stanford doctoral graduate Nicholas Rolston, and three others.

    Perovskites are a group of layered crystalline compounds defined by the configuration of the atoms in their crystal lattice. There are thousands of such possible compounds and many different ways of making them. While most lab-scale development of perovskite materials uses a spin-coating technique, that’s not practical for larger-scale manufacturing, so companies and labs around the world have been searching for ways of translating these lab materials into a practical, manufacturable product.

    “There’s always a big challenge when you’re trying to take a lab-scale process and then transfer it to something like a startup or a manufacturing line,” says Rolston, who is now an assistant professor at Arizona State University. The team looked at a process that they felt had the greatest potential, a method called rapid spray plasma processing, or RSPP.

    The manufacturing process would involve a moving roll-to-roll surface, or series of sheets, on which the precursor solutions for the perovskite compound would be sprayed or ink-jetted as the sheet rolled by. The material would then move on to a curing stage, providing a rapid and continuous output “with throughputs that are higher than for any other photovoltaic technology,” Rolston says.

    “The real breakthrough with this platform is that it would allow us to scale in a way that no other material has allowed us to do,” he adds. “Even materials like silicon require a much longer timeframe because of the processing that’s done. Whereas you can think of [this approach as more] like spray painting.”

    Within that process, at least a dozen variables may affect the outcome, some of them more controllable than others. These include the composition of the starting materials, the temperature, the humidity, the speed of the processing path, the distance of the nozzle used to spray the material onto a substrate, and the methods of curing the material. Many of these factors can interact with each other, and if the process is in open air, then humidity, for example, may be uncontrolled. Evaluating all possible combinations of these variables through experimentation is impossible, so machine learning was needed to help guide the experimental process.

But while most machine-learning systems use raw data such as measurements of the electrical and other properties of test samples, they don’t typically incorporate human experience, such as experimenters’ qualitative observations of the visual and other properties of the test samples, or information from other experiments reported by other researchers. So, the team found a way to incorporate such outside information into the machine learning model, using a probability factor based on a mathematical technique called Bayesian optimization.
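To make the approach concrete, here is one generic Bayesian-optimization step of the kind described, sketched with scikit-learn’s Gaussian process. This is not the team’s released code, and feeding prior knowledge in as extra seed observations, as shown here, is just one simple way such outside information can enter:

```python
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

def expected_improvement(x_cand, gp, y_best, xi=0.01):
    """Expected improvement over the best efficiency seen so far (maximizing)."""
    mu, sigma = gp.predict(x_cand, return_std=True)
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - y_best - xi) / sigma
    return (mu - y_best - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def suggest_next(x_obs, y_obs, bounds, n_cand=1000, seed=0):
    """Fit a GP to observed (settings, efficiency) pairs -- which may include
    prior-experiment or expert-supplied points -- and return the candidate
    setting with the highest expected improvement."""
    rng = np.random.default_rng(seed)
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
    gp.fit(np.atleast_2d(x_obs), y_obs)
    cand = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_cand, bounds.shape[0]))
    ei = expected_improvement(cand, gp, max(y_obs))
    return cand[int(np.argmax(ei))]

# Toy use: maximize a pretend 1-D "efficiency" curve over one process knob in [0, 1]
def toy_efficiency(x):          # hypothetical measurement; peak near x = 0.7
    return float(18.5 - 40 * (x - 0.7) ** 2)

x_obs = np.array([[0.1], [0.5], [0.9]])   # could include data from prior runs
y_obs = np.array([toy_efficiency(x[0]) for x in x_obs])
bounds = np.array([[0.0, 1.0]])
print(suggest_next(x_obs, y_obs, bounds))
```

Each suggested setting is then run as a real experiment, the result is appended to the observations, and the loop repeats, steering the search far faster than exhaustively sweeping every variable combination.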

    Using the system, he says, “having a model that comes from experimental data, we can find out trends that we weren’t able to see before.” For example, they initially had trouble adjusting for uncontrolled variations in humidity in their ambient setting. But the model showed them “that we could overcome our humidity challenges by changing the temperature, for instance, and by changing some of the other knobs.”

    The system now allows experimenters to much more rapidly guide their process in order to optimize it for a given set of conditions or required outcomes. In their experiments, the team focused on optimizing the power output, but the system could also be used to simultaneously incorporate other criteria, such as cost and durability — something members of the team are continuing to work on, Buonassisi says.

    The researchers were encouraged by the Department of Energy, which sponsored the work, to commercialize the technology, and they’re currently focusing on tech transfer to existing perovskite manufacturers. “We are reaching out to companies now,” Buonassisi says, and the code they developed has been made freely available through an open-source server. “It’s now on GitHub, anyone can download it, anyone can run it,” he says. “We’re happy to help companies get started in using our code.”

    Already, several companies are gearing up to produce perovskite-based solar panels, even though they are still working out the details of how to produce them, says Liu, who is now at the Northwestern Polytechnical University in Xi’an, China. He says companies there are not yet doing large-scale manufacturing, but instead starting with smaller, high-value applications such as building-integrated solar tiles where appearance is important. Three of these companies “are on track or are being pushed by investors to manufacture 1 meter by 2-meter rectangular modules [comparable to today’s most common solar panels], within two years,” he says.

“The problem is, they don’t have a consensus on what manufacturing technology to use,” Liu says. The RSPP method, developed at Stanford, “still has a good chance” to be competitive, he says. And the machine learning system the team developed could prove to be important in guiding the optimization of whatever process ends up being used.

    “The primary goal was to accelerate the process, so it required less time, less experiments, and less human hours to develop something that is usable right away, for free, for industry,” he says.

    “Existing work on machine-learning-driven perovskite PV fabrication largely focuses on spin-coating, a lab-scale technique,” says Ted Sargent, University Professor at the University of Toronto, who was not associated with this work, which he says demonstrates “a workflow that is readily adapted to the deposition techniques that dominate the thin-film industry. Only a handful of groups have the simultaneous expertise in engineering and computation to drive such advances.” Sargent adds that this approach “could be an exciting advance for the manufacture of a broader family of materials” including LEDs, other PV technologies, and graphene, “in short, any industry that uses some form of vapor or vacuum deposition.” 

The team also included Austin Flick and Thomas Colburn at Stanford and Zekun Ren at the Singapore-MIT Alliance for Science and Technology (SMART). In addition to the Department of Energy, the work was supported by a fellowship from the MIT Energy Initiative, the Graduate Research Fellowship Program from the National Science Foundation, and the SMART program.