More stories

  • Study: The ocean’s color is changing as a consequence of climate change

    The ocean’s color has changed significantly over the last 20 years, and the global trend is likely a consequence of human-induced climate change, report scientists at MIT, the National Oceanography Center in the U.K., and elsewhere.  

    In a study appearing today in Nature, the team writes that they have detected changes in ocean color over the past two decades that cannot be explained by natural, year-to-year variability alone. These color shifts, though subtle to the human eye, have occurred over 56 percent of the world’s oceans — an expanse that is larger than the total land area on Earth.

    In particular, the researchers found that tropical ocean regions near the equator have become steadily greener over time. The shift in ocean color indicates that ecosystems within the surface ocean must also be changing, as the color of the ocean is a literal reflection of the organisms and materials in its waters.

    At this point, the researchers cannot say how exactly marine ecosystems are changing to reflect the shifting color. But they are pretty sure of one thing: Human-induced climate change is likely the driver.

    “I’ve been running simulations that have been telling me for years that these changes in ocean color are going to happen,” says study co-author Stephanie Dutkiewicz, senior research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences and the Center for Global Change Science. “To actually see it happening for real is not surprising, but frightening. And these changes are consistent with man-induced changes to our climate.”

    “This gives additional evidence of how human activities are affecting life on Earth over a huge spatial extent,” adds lead author B. B. Cael PhD ’19 of the National Oceanography Center in Southampton, U.K. “It’s another way that humans are affecting the biosphere.”

    The study’s co-authors also include Stephanie Henson of the National Oceanography Center, Kelsey Bisson at Oregon State University, and Emmanuel Boss of the University of Maine.

    Above the noise

    The ocean’s color is a visual product of whatever lies within its upper layers. Generally, waters that are deep blue reflect very little life, whereas greener waters indicate the presence of ecosystems, mainly phytoplankton: plant-like microbes that are abundant in the upper ocean and that contain the green pigment chlorophyll. The pigment helps plankton harvest sunlight, which they use to capture carbon dioxide from the atmosphere and convert it into sugars.

    Phytoplankton are the foundation of the marine food web that sustains progressively more complex organisms, on up to krill, fish, and seabirds and marine mammals. Phytoplankton are also a powerful muscle in the ocean’s ability to capture and store carbon dioxide. Scientists are therefore keen to monitor phytoplankton across the surface oceans and to see how these essential communities might respond to climate change. To do so, scientists have tracked changes in chlorophyll, based on the ratio of how much blue versus green light is reflected from the ocean surface, which can be monitored from space.
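    The band-ratio idea can be sketched in a few lines. This is a simplified illustration, not the operational satellite algorithm: real ocean-color retrievals (such as NASA's OCx family) use higher-order polynomials with empirically fitted, sensor-specific coefficients, whereas the coefficients and reflectance values below are hypothetical, chosen only to show the shape of the calculation.

```python
import math

def chlorophyll_estimate(blue_reflectance, green_reflectance, coeffs=(0.3, -2.5)):
    """Toy band-ratio chlorophyll estimate (mg/m^3).

    Mirrors the general form of empirical ocean-color algorithms:
    log10(chlorophyll) is modeled as a polynomial in the log of the
    blue-to-green reflectance ratio. Coefficients here are illustrative.
    """
    ratio = math.log10(blue_reflectance / green_reflectance)
    log_chl = sum(c * ratio**i for i, c in enumerate(coeffs))
    return 10 ** log_chl

# Clear "blue" water reflects relatively more blue light -> low chlorophyll;
# greener water reflects relatively more green light -> higher chlorophyll.
clear = chlorophyll_estimate(blue_reflectance=0.010, green_reflectance=0.002)
green = chlorophyll_estimate(blue_reflectance=0.004, green_reflectance=0.004)
assert clear < green
```

The negative slope coefficient captures the key physical relationship: as chlorophyll rises, blue reflectance falls relative to green, so the ratio shrinks and the estimate grows.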

    But around a decade ago, Henson, who is a co-author of the current study, published a paper with colleagues showing that, if scientists were tracking chlorophyll alone, it would take at least 30 years of continuous monitoring to detect any trend driven specifically by climate change. The reason, the team argued, was that the large, natural variations in chlorophyll from year to year would overwhelm any anthropogenic influence on chlorophyll concentrations. It would therefore take several decades to pick out a meaningful, climate-change-driven signal amid the normal noise.

    In 2019, Dutkiewicz and her colleagues published a separate paper, showing through a new model that the natural variation in other ocean colors is much smaller compared to that of chlorophyll. Therefore, any signal of climate-change-driven changes should be easier to detect over the smaller, normal variations of other ocean colors. They predicted that such changes should be apparent within 20, rather than 30 years of monitoring.

    “So I thought, doesn’t it make sense to look for a trend in all these other colors, rather than in chlorophyll alone?” Cael says. “It’s worth looking at the whole spectrum, rather than just trying to estimate one number from bits of the spectrum.”

    The power of seven

    In the current study, Cael and the team analyzed measurements of ocean color taken by the Moderate Resolution Imaging Spectroradiometer (MODIS) aboard the Aqua satellite, which has been monitoring ocean color for 21 years. MODIS takes measurements in seven visible wavelengths, including the two colors researchers traditionally use to estimate chlorophyll.

    The differences in color that the satellite picks up are too subtle for human eyes to differentiate. Much of the ocean appears blue to our eye, whereas the true color may contain a mix of subtler wavelengths, from blue to green and even red.

    Cael carried out a statistical analysis using all seven ocean colors measured by the satellite from 2002 to 2022 together. He first looked at how much the seven colors changed from region to region during a given year, which gave him an idea of their natural variations. He then zoomed out to see how these annual variations in ocean color changed over a longer stretch of two decades. This analysis turned up a clear trend, above the normal year-to-year variability.
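    The logic of that analysis, comparing a long-term drift against year-to-year scatter, can be sketched with synthetic data. This is only a schematic illustration of trend detection above interannual noise, not Cael's actual method, which works with the full seven-band MODIS spectra; every number below is made up.

```python
import random
import statistics

random.seed(0)

# Synthetic 21-year series for one "color" band: a small steady drift
# plus interannual noise (all values are illustrative, not real data).
years = list(range(2002, 2023))
true_trend = 0.02   # drift per year
noise_sd = 0.05     # natural year-to-year variability
series = [true_trend * (y - 2002) + random.gauss(0, noise_sd) for y in years]

# Ordinary least-squares slope via the closed-form formula.
mean_x = statistics.mean(years)
mean_y = statistics.mean(series)
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, series)) \
        / sum((x - mean_x) ** 2 for x in years)

# Compare the change accumulated over two decades with the scatter
# left over after removing the trend.
residuals = [y - slope * (x - mean_x) - mean_y for x, y in zip(years, series)]
residual_sd = statistics.pstdev(residuals)
signal = slope * (years[-1] - years[0])
print(f"trend/yr: {slope:.3f}, 20-yr change vs interannual sd: {signal / residual_sd:.1f}x")
```

When the accumulated change is several times the interannual standard deviation, the trend stands "above the noise"; a record dominated by variability would give a ratio near or below one.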

    To see whether this trend is related to climate change, he then looked to Dutkiewicz’s model from 2019. This model simulated the Earth’s oceans under two scenarios: one with the addition of greenhouse gases, and the other without it. The greenhouse-gas model predicted that a significant trend should show up within 20 years and that this trend should cause changes to ocean color in about 50 percent of the world’s surface oceans — almost exactly what Cael found in his analysis of real-world satellite data.

    “This suggests that the trends we observe are not a random variation in the Earth system,” Cael says. “This is consistent with anthropogenic climate change.”

    The team’s results show that monitoring ocean colors beyond chlorophyll could give scientists a clearer, faster way to detect climate-change-driven changes to marine ecosystems.

    “The color of the oceans has changed,” Dutkiewicz says. “And we can’t say how. But we can say that changes in color reflect changes in plankton communities that will impact everything that feeds on plankton. It will also change how much the ocean will take up carbon, because different types of plankton have different abilities to do that. So, we hope people take this seriously. It’s not only models that are predicting these changes will happen. We can now see it happening, and the ocean is changing.”

    This research was supported, in part, by NASA.

  • Megawatt electrical motor designed by MIT engineers could help electrify aviation

    Aviation’s huge carbon footprint could shrink significantly with electrification. To date, however, only small all-electric planes have gotten off the ground. Their electric motors generate hundreds of kilowatts of power. To electrify larger, heavier jets, such as commercial airliners, megawatt-scale motors are required. These would be propelled by hybrid or turbo-electric propulsion systems where an electrical machine is coupled with a gas turbine aero-engine.

    To meet this need, a team of MIT engineers is now creating a 1-megawatt motor that could be a key stepping stone toward electrifying larger aircraft. The team has designed and tested the major components of the motor, and shown through detailed computations that the coupled components can work as a whole to generate one megawatt of power, at a weight and size competitive with current small aero-engines.

    For all-electric applications, the team envisions the motor could be paired with a source of electricity such as a battery or a fuel cell. The motor could then turn the electrical energy into mechanical work to power a plane’s propellers. The electrical machine could also be paired with a traditional turbofan jet engine to run as a hybrid propulsion system, providing electric propulsion during certain phases of a flight.

    “No matter what we use as an energy carrier — batteries, hydrogen, ammonia, or sustainable aviation fuel — independent of all that, megawatt-class motors will be a key enabler for greening aviation,” says Zoltan Spakovszky, the T. Wilson Professor in Aeronautics and the Director of the Gas Turbine Laboratory (GTL) at MIT, who leads the project.

    Spakovszky and members of his team, along with industry collaborators, will present their work at a special session of the American Institute of Aeronautics and Astronautics – Electric Aircraft Technologies Symposium (EATS) at the Aviation conference in June.

    The MIT team is composed of faculty, students, and research staff from GTL and the MIT Laboratory for Electromagnetic and Electronic Systems: Henry Andersen, Yuankang Chen, Zachary Cordero, David Cuadrado, Edward Greitzer, Charlotte Gump, James Kirtley, Jr., Jeffrey Lang, David Otten, David Perreault, and Mohammad Qasim, along with Marc Amato of Innova-Logic LLC. The project is sponsored by Mitsubishi Heavy Industries (MHI).

    Heavy stuff

    To prevent the worst impacts from human-induced climate change, scientists have determined that global emissions of carbon dioxide must reach net zero by 2050. Meeting this target for aviation, Spakovszky says, will require “step-change achievements” in the design of unconventional aircraft, smart and flexible fuel systems, advanced materials, and safe and efficient electrified propulsion. Multiple aerospace companies are focused on electrified propulsion and the design of megawatt-scale electric machines that are powerful and light enough to propel passenger aircraft.

    “There is no silver bullet to make this happen, and the devil is in the details,” Spakovszky says. “This is hard engineering, in terms of co-optimizing individual components and making them compatible with each other while maximizing overall performance. To do this means we have to push the boundaries in materials, manufacturing, thermal management, structures and rotordynamics, and power electronics.”

    Broadly speaking, an electric motor uses electromagnetic force to generate motion. Electric motors, such as those that power the fan in your laptop, use electrical energy — from a battery or power supply — to generate a magnetic field, typically through copper coils. In response, a magnet set near the coils spins in the direction of the generated field and can drive a fan or propeller.

    Electric machines have been around for more than 150 years, with a general rule holding throughout: the bigger the appliance or vehicle, the larger the copper coils and the magnetic rotor, and the heavier the machine. The more power the electrical machine generates, the more heat it produces, which requires additional elements to keep the components cool — all of which can take up space and add significant weight to the system, making it challenging for airplane applications.

    “Heavy stuff doesn’t go on airplanes,” Spakovszky says. “So we had to come up with a compact, lightweight, and powerful architecture.”

    Good trajectory

    As designed, the MIT electric motor and power electronics are each about the size of a checked suitcase and weigh less than an adult passenger.

    The motor’s main components are: a high-speed rotor, lined with an array of magnets with varying orientation of polarity; a compact low-loss stator that fits inside the rotor and contains an intricate array of copper windings; an advanced heat exchanger that keeps the components cool while transmitting the torque of the machine; and a distributed power electronics system, made from 30 custom-built circuit boards, that precisely changes the currents running through each of the stator’s copper windings at high frequency.

    “I believe this is the first truly co-optimized integrated design,” Spakovszky says. “Which means we did a very extensive design space exploration where all considerations from thermal management, to rotor dynamics, to power electronics and electrical machine architecture were assessed in an integrated way to find out what is the best possible combination to get the required specific power at one megawatt.”

    As a whole system, the motor is designed such that the distributed circuit boards are close coupled with the electrical machine to minimize transmission loss and to allow effective air cooling through the integrated heat exchanger.

    “This is a high-speed machine, and to keep it rotating while creating torque, the magnetic fields have to be traveling very quickly, which we can do through our circuit boards switching at high frequency,” Spakovszky says.
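    The coupling between rotor speed, magnet count, and switching frequency follows from a standard relation for synchronous permanent-magnet machines: the electrical frequency the drive must synthesize equals mechanical revolutions per second times the number of pole pairs. The figures below are hypothetical, chosen only to show why a high-speed, multi-pole rotor forces the circuit boards to switch at high frequency; they are not the specifications of the MIT machine.

```python
def electrical_frequency_hz(rpm: float, pole_pairs: int) -> float:
    """Fundamental electrical frequency for a synchronous permanent-magnet
    machine: mechanical revolutions per second times the number of
    magnetic pole pairs on the rotor."""
    return rpm / 60.0 * pole_pairs

# Hypothetical figures: a rotor spinning at 12,500 rpm with 5 pole pairs
# requires the drive to synthesize currents above 1 kHz (the PWM switching
# frequency of the boards must then be many times higher still).
f = electrical_frequency_hz(rpm=12_500, pole_pairs=5)
print(f"{f:.0f} Hz")  # prints "1042 Hz"
```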

    To mitigate risk, the team has built and tested each of the major components individually, and shown that they can operate as designed and at conditions exceeding normal operational demands. The researchers plan to assemble the first fully working electric motor, and start testing it in the fall.

    “The electrification of aircraft has been on a steady rise,” says Phillip Ansell, director of the Center for Sustainable Aviation at the University of Illinois Urbana-Champaign, who was not involved in the project. “This group’s design uses a wonderful combination of conventional and cutting-edge methods for electric machine development, allowing it to offer both robustness and efficiency to meet the practical needs of aircraft of the future.”

    Once the MIT team can demonstrate the electric motor as a whole, they say the design could power regional aircraft and could also be a companion to conventional jet engines, to enable hybrid-electric propulsion systems. The team also envisions that multiple one-megawatt motors could power multiple fans distributed along the wing on future aircraft configurations. Looking ahead, the foundations of the one-megawatt electrical machine design could potentially be scaled up to multi-megawatt motors, to power larger passenger planes.

    “I think we’re on a good trajectory,” says Spakovszky, whose group and research have historically focused on gas turbines. “We are not electrical engineers by training, but addressing the 2050 climate grand challenge is of utmost importance; working with electrical engineering faculty, staff and students for this goal can draw on MIT’s breadth of technologies so the whole is greater than the sum of the parts. So we are reinventing ourselves in new areas. And MIT gives you the opportunity to do that.”

  • 3 Questions: Can disused croplands help mitigate climate change?

    As the world struggles to meet internationally agreed targets for reducing greenhouse gas emissions, methods of removing carbon dioxide such as reforestation of cleared areas have become an increasingly important strategy. But little attention has been paid to the potential for abandoned or marginal croplands to be restored to natural vegetation as an additional carbon sink, say MIT assistant professor of civil and environmental engineering César Terrer, recent visiting MIT doctoral student Stephen M. Bell, and six others, in a recent open-access paper in the journal Nature Communications. Here, Terrer and Bell explain the potential use of these “post-agricultural” lands to help in the fight against damaging climate change.

    Q: How significant is the potential of unused agricultural lands as a carbon sink to help mitigate climate change?

    Bell: We know of these huge instances of land abandonment and post-agricultural succession throughout history, like following the collapse of major cities from ancient Mesopotamia to the Mayans. And when the Europeans arrived in the Americas in the 15th century, so many people died and so much forest grew back on abandoned farmland that it helped cool the entire planet and was potentially a driver of the coldest part of the so-called “Little Ice Age” period.

    Today, we have abandoned farmland all over the Mediterranean region, where I did my PhD field work. As young people left rural areas for the cities throughout the 20th century, farmers couldn’t pass on their land to anyone, and the land succeeded back into shrublands and forests. The biggest recent example of abandonment is for sure the collapse of the Soviet Union, where an estimated 60 million hectares of forest regrew when support for collective farming stopped, resulting in one of the largest carbon sinks ever attributed to a single event.

    So, when we look back at the past, we know there’s potential. Of course, these are huge events, and no one is proposing to replicate anything like that. We need to use land for multiple purposes, but looking back at these big examples, we know there is potential for abandoned or restored agricultural land to be carbon sinks. And so that tells us to dig deeper into this question and get a better idea of realistic scenarios, a better understanding of the climate change mitigation potential of agricultural cessation in the most strategic places.

    Terrer: More than 115 billion tons of carbon have been lost from soils due to agricultural practices that disturb soil integrity — such as tilling, monoculture farming, removing crop residue, excessive use of fertilizers and pesticides, and over-grazing. To put this into perspective, the amount of carbon lost is equivalent to the total CO2 emissions ever produced in the United States.

    Our current research synthesizes field data from thousands of experiments, aiming to understand the factors that influence soil carbon accrual in abandoned croplands transitioning back to forests or natural grasslands. We’re working to quantify the potential for carbon sequestration in these soils over 30-, 50-, and 100-year time frames and mapping the areas with the greatest potential for carbon storage. This includes both increases in soil carbon and in vegetation biomass.

    Q: What are some of the key uncertainties in evaluating this potential for unused cropland to serve as a carbon sink, and how could those uncertainties be addressed?

    Bell: We use this word uncertainties in two ways. Specifically, the longevity of potential recarbonization, and the intensity of the potential recarbonization. Those are two factors, two aspects that we need to quantify to reduce our uncertainty.

    So, how long will the land recarbonize, regardless of the intensity? If the carbon level is going up, that’s good. If there’s more carbon increasing in the soil, we know that it came from somewhere, it came from the atmosphere. But how long does that happen? We know soil can get saturated. It can reach its carbon capacity limit, it won’t continue to increase the carbon stock, and the recarbonization curve will flatten out. When does that happen? Is it after a hundred years? Is it after 20 years?

    But the world’s soils are very diverse and complex, so what might be true in one place is not true in another place. It may take a longer time to reach saturation for more fertile soils in the Midwest U.S. than less fertile soils in the Southwest, for example. Alternatively, sometimes soils in drier areas like in the Southwest may never reach true saturation if they are degraded and have stalled recovery following abandonment.

    The second uncertainty is intensity: How high on the y-axis on the chart of recarbonization does saturation occur? With the analogy comparing U.S. soils, you might have a relatively huge carbon increase on an abandoned farm in the Southwest, but because the soil is not very carbon-rich it’s not a large increase in absolute terms. In the Midwest, there might only be a small relative increase, but that increase could be much more in total than in the Southwest. These are just nuances to keep in mind as we look at this at the global scale.
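    The two uncertainties Bell describes — how long recarbonization lasts and how high it goes — map naturally onto a saturating growth curve. The sketch below uses a simple exponential approach to a carbon-capacity ceiling; both the functional form and all parameter values are illustrative assumptions, not fitted to field data, chosen only to reproduce the Midwest/Southwest contrast in the text.

```python
import math

def soil_carbon(t_years: float, c0: float, c_max: float, k: float) -> float:
    """Soil carbon stock t years after abandonment, rising from the
    degraded level c0 toward the saturation capacity c_max at rate k.
    Units are arbitrary (think tons of carbon per hectare)."""
    return c_max - (c_max - c0) * math.exp(-k * t_years)

# Hypothetical "Midwest-style" soil: high ceiling, modest relative gain.
# Hypothetical "Southwest-style" soil: low ceiling, large relative gain.
midwest = {"c0": 60.0, "c_max": 80.0, "k": 0.03}
southwest = {"c0": 5.0, "c_max": 15.0, "k": 0.05}

for name, p in [("Midwest", midwest), ("Southwest", southwest)]:
    gain_50yr = soil_carbon(50, **p) - p["c0"]
    rel = gain_50yr / p["c0"]
    print(f"{name}: +{gain_50yr:.1f} absolute, {rel:.0%} relative after 50 years")
```

With these toy parameters the Southwest soil roughly triples its stock (a huge relative jump) while the Midwest soil gains more carbon in absolute terms — the nuance the interview highlights.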

    These nuances are essentially uncertainties. Soil carbon responses to agricultural land abandonment are complicated, and unfortunately they haven’t been studied in much detail so far. We need to reduce those uncertainties to get a better understanding of the recarbonization potential. This is easier said than done because not only do we have these temporal data uncertainties, but we also have spatial uncertainties. We don’t have very good maps of past and present post-agricultural landscapes.

    Q: Can this potential use of post-agricultural lands be implemented without putting global food supplies at risk? How can these needs be balanced?

    Terrer: As to whether utilizing post-agricultural lands for carbon sequestration can be implemented without jeopardizing global food supplies, and how to balance these needs, our recent research provides valuable insights.

    The challenge, of course, lies in balancing cropland restoration for climate mitigation with food security for a growing global population. Abandoned croplands represent an opportunity for carbon sequestration without impacting active agricultural lands. However, the available area of abandoned croplands is insufficient to make a substantial impact on climate mitigation on its own.

    Thus, our proposal also emphasizes the importance of closing yield gaps, which involves increasing crop production per hectare to its theoretical limits. This would enable us to maintain or even increase global crop yields using only a fraction of the currently cultivated area, allowing the remaining land to be dedicated to climate mitigation efforts. By pursuing this strategy, we estimate that over half of the amount of soil carbon lost so far due to agriculture could be recovered, while ensuring food security for the world’s population.

  • Inaugural J-WAFS Grand Challenge aims to develop enhanced crop variants and move them from lab to land

    According to MIT’s charter, established in 1861, part of the Institute’s mission is to advance the “development and practical application of science in connection with arts, agriculture, manufactures, and commerce.” Today, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) is one of the driving forces behind water and food-related research on campus, much of which relates to agriculture. In 2022, J-WAFS established the Water and Food Grand Challenge Grant to inspire MIT researchers to work toward a water-secure and food-secure future for our changing planet. Not unlike MIT’s Climate Grand Challenges, the J-WAFS Grand Challenge seeks to leverage multiple areas of expertise, programs, and Institute resources. The initial call for statements of interest returned 23 letters from MIT researchers spanning 18 departments, labs, and centers. J-WAFS hosted workshops for the proposers to present and discuss their initial ideas. These were winnowed down to a smaller set of invited concept papers, followed by the final proposal stage.

    Today, J-WAFS is delighted to report that the inaugural J-WAFS Grand Challenge Grant has been awarded to a team of researchers led by Professor Matt Shoulders and research scientist Robert Wilson of the Department of Chemistry. A panel of expert external reviewers highly endorsed their proposal, which tackles a longstanding problem in crop biology — how to make photosynthesis more efficient. The team will receive $1.5 million over three years to facilitate a multistage research project that combines cutting-edge innovations in synthetic and computational biology. If successful, this project could create major benefits for agriculture and food systems worldwide.

    “Food systems are a major source of global greenhouse gas emissions, and they are also increasingly vulnerable to the impacts of climate change. That’s why when we talk about climate change, we have to talk about food systems, and vice versa,” says Maria T. Zuber, MIT’s vice president for research. “J-WAFS is central to MIT’s efforts to address the interlocking challenges of climate, water, and food. This new grant program aims to catalyze innovative projects that will have real and meaningful impacts on water and food. I congratulate Professor Shoulders and the rest of the research team on being the inaugural recipients of this grant.”

    Shoulders will work with Bryan Bryson, associate professor of biological engineering, as well as Bin Zhang, associate professor of chemistry, and Mary Gehring, a professor in the Department of Biology and the Whitehead Institute for Biomedical Research. Robert Wilson from the Shoulders lab will be coordinating the research effort. The team at MIT will work with outside collaborators Spencer Whitney, a professor from the Australian National University, and Ahmed Badran, an assistant professor at the Scripps Research Institute. A milestone-based collaboration will also take place with Stephen Long, a professor from the University of Illinois at Urbana-Champaign. The group consists of experts in continuous directed evolution, machine learning, molecular dynamics simulations, translational plant biochemistry, and field trials.

    “This project seeks to fundamentally improve the RuBisCO enzyme that plants use to convert carbon dioxide into the energy-rich molecules that constitute our food,” says J-WAFS Director John H. Lienhard V. “This difficult problem is a true grand challenge, calling for extensive resources. With J-WAFS’ support, this long-sought goal may finally be achieved through MIT’s leading-edge research,” he adds.

    RuBisCO: No, it’s not a new breakfast cereal; it just might be the key to an agricultural revolution

    A growing global population, the effects of climate change, and social and political conflicts like the war in Ukraine are all threatening food supplies, particularly grain crops. Current projections estimate that crop production must increase by at least 50 percent over the next 30 years to meet food demands. One key barrier to increased crop yields is a photosynthetic enzyme called Ribulose-1,5-Bisphosphate Carboxylase/Oxygenase (RuBisCO). During photosynthesis, crops use energy gathered from light to draw carbon dioxide (CO2) from the atmosphere and transform it into sugars and cellulose for growth, a process known as carbon fixation. RuBisCO is essential for capturing the CO2 from the air to initiate conversion of CO2 into energy-rich molecules like glucose. This reaction occurs during the second stage of photosynthesis, also known as the Calvin cycle. Without RuBisCO, the chemical reactions that account for virtually all carbon acquisition in life could not occur.

    Unfortunately, RuBisCO has biochemical shortcomings. Notably, the enzyme acts slowly. Many other enzymes can process a thousand molecules per second, but RuBisCO in chloroplasts fixes fewer than six carbon dioxide molecules per second, often limiting the rate of plant photosynthesis. Another problem is that oxygen (O2) molecules and carbon dioxide molecules are relatively similar in shape and chemical properties, and RuBisCO is unable to fully discriminate between the two. The inadvertent fixation of oxygen by RuBisCO leads to energy and carbon loss. What’s more, at higher temperatures RuBisCO reacts even more frequently with oxygen, which will contribute to decreased photosynthetic efficiency in many staple crops as our climate warms.
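    RuBisCO's imperfect discrimination is commonly summarized by a specificity factor S, where the ratio of productive carboxylations to wasteful oxygenations is roughly S × [CO2]/[O2]. The sketch below uses illustrative order-of-magnitude values for S and for dissolved gas concentrations; they are assumptions for the sake of the arithmetic, not measurements from this project.

```python
def carboxylation_to_oxygenation(specificity: float,
                                 co2_um: float, o2_um: float) -> float:
    """Ratio of productive CO2 fixations to wasteful O2 fixations,
    given RuBisCO's CO2/O2 specificity factor and the dissolved gas
    concentrations (micromolar) at the enzyme's active site."""
    return specificity * co2_um / o2_um

# Illustrative values: specificity ~90, dissolved CO2 ~8 uM, O2 ~250 uM.
# Even with a 90-fold preference for CO2, oxygen's much higher abundance
# means only a few carboxylations occur per oxygenation event; warming
# lowers CO2 solubility faster than O2's, pushing this ratio down further.
ratio = carboxylation_to_oxygenation(specificity=90, co2_um=8, o2_um=250)
print(f"about {ratio:.1f} carboxylations per oxygenation")
```

This is why both levers matter to the team: raising the specificity factor or the raw fixation rate each shifts the balance toward productive carbon capture.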

    The scientific consensus is that genetic engineering and synthetic biology approaches could revolutionize photosynthesis and offer protection against crop losses. To date, crop RuBisCO engineering has been impaired by technological obstacles that have limited any success in significantly enhancing crop production. Excitingly, genetic engineering and synthetic biology tools are now at a point where they can be applied and tested with the aim of creating crops with new or improved biological pathways for producing more food for the growing population.

    An epic plan for fighting food insecurity

    The 2023 J-WAFS Grand Challenge project will use state-of-the-art, transformative protein engineering techniques drawn from biomedicine to improve the biochemistry of photosynthesis, specifically focusing on RuBisCO. Shoulders and his team are planning to build what they call the Enhanced Photosynthesis in Crops (EPiC) platform. The project will evolve and design better crop RuBisCO in the laboratory, followed by validation of the improved enzymes in plants, ultimately resulting in the deployment of enhanced RuBisCO in field trials to evaluate the impact on crop yield. 

    Several recent developments make high-throughput engineering of crop RuBisCO possible. RuBisCO requires a complex chaperone network for proper assembly and function in plants. Chaperones are like helpers that guide proteins during their maturation process, shielding them from aggregation while coordinating their correct assembly. Wilson and his collaborators previously unlocked the ability to recombinantly produce plant RuBisCO outside of plant chloroplasts by reconstructing this chaperone network in Escherichia coli (E. coli). Whitney has now established that the RuBisCO enzymes from a range of agriculturally relevant crops, including potato, carrot, strawberry, and tobacco, can also be expressed using this technology. Whitney and Wilson have further developed a range of RuBisCO-dependent E. coli screens that can identify improved RuBisCO from complex gene libraries. Moreover, Shoulders and his lab have developed sophisticated in vivo mutagenesis technologies that enable efficient continuous directed evolution campaigns. Continuous directed evolution refers to a protein engineering process that can accelerate the steps of natural evolution simultaneously in an uninterrupted cycle in the lab, allowing for rapid testing of protein sequences. While Shoulders and Badran both have prior experience with cutting-edge directed evolution platforms, this will be the first time directed evolution is applied to RuBisCO from plants.

    Artificial intelligence is changing the way enzyme engineering is undertaken by researchers. Principal investigators Zhang and Bryson will leverage modern computational methods to simulate the dynamics of RuBisCO structure and explore its evolutionary landscape. Specifically, Zhang will use molecular dynamics simulations to simulate and monitor the conformational dynamics of the atoms in a protein and its programmed environment over time. This approach will help the team evaluate the effect of mutations and new chemical functionalities on the properties of RuBisCO. Bryson will employ artificial intelligence and machine learning to search the RuBisCO activity landscape for optimal sequences. The computational and biological arms of the EPiC platform will work together to both validate and inform each other’s approaches to accelerate the overall engineering effort.

    Shoulders and the group will deploy their designed enzymes in tobacco plants to evaluate their effects on growth and yield relative to natural RuBisCO. Gehring, a plant biologist, will assist with screening improved RuBisCO variants using the tobacco relative Nicotiana benthamiana, where transient expression can be deployed. Transient expression is a speedy approach to test whether novel engineered RuBisCO variants can be correctly synthesized in leaf chloroplasts. Variants that pass this quality-control checkpoint at MIT will be passed to the Whitney Lab at the Australian National University for stable transformation into Nicotiana tabacum (tobacco), enabling robust measurements of photosynthetic improvement. In a final step, Professor Long at the University of Illinois at Urbana-Champaign will perform field trials of the most promising variants.

    Even small improvements could have a big impact

    A common criticism of efforts to improve RuBisCO is that natural evolution has not already identified a better enzyme, possibly implying that none will be found. A long-standing view holds that there is a catalytic trade-off between RuBisCO’s CO2/O2 specificity factor and its CO2 fixation efficiency, leading to the belief that gains in specificity would be offset by even slower carbon fixation, or vice versa. This trade-off has been suggested to explain why natural evolution has been slow to achieve a better RuBisCO. But Shoulders and the team are convinced that the EPiC platform can unlock significant overall improvements to plant RuBisCO. This view is supported by the fact that Wilson and Whitney have previously used directed evolution to improve CO2 fixation efficiency by 50 percent in RuBisCO from cyanobacteria (the ancient progenitors of plant chloroplasts) while simultaneously increasing the specificity factor. 

    The EPiC researchers anticipate that their initial variants could yield 20 percent increases in RuBisCO’s specificity factor without impairing other aspects of catalysis. More sophisticated variants could lift RuBisCO out of its evolutionary trap and display attributes not currently observed in nature. “If we achieve anywhere close to such an improvement and it translates to crops, the results could help transform agriculture,” Shoulders says. “If our accomplishments are more modest, it will still recruit massive new investments to this essential field.”

    Successful engineering of RuBisCO would be a scientific feat in its own right and would ignite renewed enthusiasm for improving plant CO2 fixation. Combined with other advances in photosynthetic engineering, such as improved light usage, a new green revolution in agriculture could be achieved. Long-term impacts of the technology’s success will be measured in improvements to crop yield and grain availability, as well as resilience against yield losses under higher field temperatures. Moreover, improved land productivity together with policy initiatives would assist in reducing the environmental footprint of agriculture. With more “crop per drop,” reductions in water consumption from agriculture would be a major boost to sustainable farming practices.

    “Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists and field trial experts, yielding a robust feedback loop for enzyme engineering,” Shoulders adds. “Together, this team will be able to make a concerted effort using the most modern, state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.” More

  • in

    MIT engineers devise technology to prevent fouling in photobioreactors for CO2 capture

    Algae grown in transparent tanks or tubes supplied with carbon dioxide can convert the greenhouse gas into other compounds, such as food supplements or fuels. But the process leads to a buildup of algae on the surfaces that clouds them and reduces efficiency, requiring laborious cleanout procedures every couple of weeks.

    MIT researchers have come up with a simple and inexpensive technology that could substantially limit this fouling, potentially allowing for a much more efficient and economical way of converting the unwanted greenhouse gas into useful products.

    The key is to coat the transparent containers with a material that can hold an electrostatic charge, and then apply a very small voltage to that layer. The system has worked well in lab-scale tests, and with further development might be applied to commercial production within a few years.

    The findings are being reported in the journal Advanced Functional Materials, in a paper by recent MIT graduate Victor Leon PhD ’23, professor of mechanical engineering Kripa Varanasi, former postdoc Baptiste Blanc, and undergraduate student Sophia Sonnert.

    No matter how successful efforts to reduce or eliminate carbon emissions may be, there will still be excess greenhouse gases that will remain in the atmosphere for centuries to come, continuing to affect global climate, Varanasi points out. “There’s already a lot of carbon dioxide there, so we have to look at negative emissions technologies as well,” he says, referring to ways of removing the greenhouse gas from the air or oceans, or from their sources before they get released into the air in the first place.

    When people think of biological approaches to carbon dioxide reduction, the first thought is usually of planting or protecting trees, which are indeed a crucial “sink” for atmospheric carbon. But there are others. “Marine algae account for about 50 percent of global carbon dioxide absorbed today on Earth,” Varanasi says. These algae grow anywhere from 10 to 50 times more quickly than land-based plants, and they can be grown in ponds or tanks that take up only a tenth of the land footprint of terrestrial plants.

    What’s more, the algae themselves can then be a useful product. “These algae are rich in proteins, vitamins and other nutrients,” Varanasi says, noting they could produce far more nutritional output per unit of land used than some traditional agricultural crops.

    If attached to the flue gas output of a coal or gas power plant, algae could not only thrive on the carbon dioxide as a nutrient source, but some of the microalgae species could also consume the associated nitrogen and sulfur oxides present in these emissions. “For every two or three kilograms of CO2, a kilogram of algae could be produced, and these could be used as biofuels, or for Omega-3, or food,” Varanasi says.
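    The “two to three kilograms of CO2 per kilogram of algae” figure above translates directly into a simple mass balance. The sketch below makes that arithmetic explicit; the flue-gas flow rate is an invented, illustrative number, not from the study.

    ```python
    # Rough mass balance for algal CO2 capture, using the ~2-3 kg of CO2
    # per kg of algae figure quoted above. The flue-gas CO2 rate below
    # is illustrative, not a value from the study.

    def algae_yield_kg(co2_captured_kg, co2_per_kg_algae=2.5):
        """Estimate algal biomass produced from a given mass of fixed CO2."""
        return co2_captured_kg / co2_per_kg_algae

    # Example: a small flue-gas slipstream delivering 1,000 kg of CO2 per day
    daily_biomass = algae_yield_kg(1000)
    print(f"~{daily_biomass:.0f} kg of algae per day")  # ~400 kg
    ```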

    Omega-3 fatty acids are a widely used food supplement, as they are an essential part of cell membranes and other tissues but cannot be made by the body and must be obtained from food. “Omega 3 is particularly attractive because it’s also a much higher-value product,” Varanasi says.

    Most algae grown commercially are cultivated in shallow ponds, while others are grown in transparent tubes called photobioreactors. The tubes can produce seven to 10 times greater yields than ponds for a given amount of land, but they face a major problem: The algae tend to build up on the transparent surfaces, requiring frequent shutdowns of the whole production system for cleaning, which can take as long as the productive part of the cycle, thus cutting overall output in half and adding to operational costs.

    The fouling also limits the design of the system. The tubes can’t be too small because the fouling would begin to block the flow of water through the bioreactor and require higher pumping rates.

    Varanasi and his team decided to try to use a natural characteristic of the algae cells to defend against fouling. Because the cells naturally carry a small negative electric charge on their membrane surface, the team figured that electrostatic repulsion could be used to push them away.

    The idea was to create a negative charge on the vessel walls, such that the electric field forces the algae cells away from the walls. To create such an electric field requires a high-performance dielectric material, which is an electrical insulator with a high “permittivity” that can produce a large change in surface charge with a smaller voltage.
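    The role of permittivity described above can be sketched with the standard parallel-plate relation, sigma = eps0 * eps_r * V / d: for a fixed voltage and thickness, the induced surface charge scales with the dielectric’s relative permittivity. The material constants and operating point below are textbook or illustrative values, not parameters reported by the researchers.

    ```python
    # Surface charge density induced across a thin dielectric coating,
    # modeled per unit area as a parallel-plate capacitor:
    #   sigma = eps0 * eps_r * V / d
    # Permittivities are approximate textbook values; the applied voltage
    # and thickness are illustrative assumptions.

    EPS0 = 8.854e-12  # vacuum permittivity, F/m

    def surface_charge_density(eps_r, voltage_v, thickness_m):
        """Charge per unit area (C/m^2) for a given relative permittivity,
        applied voltage, and coating thickness."""
        return EPS0 * eps_r * voltage_v / thickness_m

    # Compare SiO2 (eps_r ~ 3.9) with hafnia (eps_r ~ 20) at 1 V across 20 nm
    sio2 = surface_charge_density(3.9, 1.0, 20e-9)
    hafnia = surface_charge_density(20.0, 1.0, 20e-9)
    print(f"SiO2: {sio2:.2e} C/m^2, HfO2: {hafnia:.2e} C/m^2")
    ```

    The higher-permittivity coating yields roughly five times the surface charge at the same small voltage, which is why a high-permittivity dielectric lets the system repel cells without passing current.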

    “What people have done before with applying voltage [to bioreactors] has been with conductive surfaces,” Leon explains, “but what we’re doing here is specifically with nonconductive surfaces.”

    He adds: “If it’s conductive, then you pass current and you’re kind of shocking the cells. What we’re trying to do is pure electrostatic repulsion, so the surface would be negative and the cell is negative so you get repulsion. Another way to describe it is like a force field, whereas before the cells were touching the surface and getting shocked.”

    The team worked with two different dielectric materials, silicon dioxide — essentially glass — and hafnia (hafnium oxide), both of which turned out to be far more efficient at minimizing fouling than conventional plastics used to make photobioreactors. The material can be applied in a coating that is vanishingly thin, just 10 to 20 nanometers (billionths of a meter) thick, so very little would be needed to coat a full photobioreactor system.

    “What we are excited about here is that we are able to show that purely from electrostatic interactions, we are able to control cell adhesion,” Varanasi says. “It’s almost like an on-off switch, to be able to do this.”

    Additionally, Leon says, “Since we’re using this electrostatic force, we don’t really expect it to be cell-specific, and we think there’s potential for applying it with other cells than just algae. In future work, we’d like to try using it with mammalian cells, bacteria, yeast, and so on.” It could also be used with other valuable types of algae, such as spirulina, that are widely used as food supplements.

    The same system could be used to either repel or attract cells by just reversing the voltage, depending on the particular application. Instead of algae, a similar setup might be used with human cells to produce artificial organs by producing a scaffold that could be charged to attract the cells into the right configuration, Varanasi suggests.

    “Our study basically solves this major problem of biofouling, which has been a bottleneck for photobioreactors,” he says. “With this technology, we can now really achieve the full potential” of such systems, although further development will be needed to scale up to practical, commercial systems.

    As for how soon this could be ready for widespread deployment, he says, “I don’t see why not in three years’ timeframe, if we get the right resources to be able to take this work forward.”

    The study was supported by energy company Eni S.p.A., through the MIT Energy Initiative. More

  • in

    Study: Shutting down nuclear power could increase air pollution

    Nearly 20 percent of today’s electricity in the United States comes from nuclear power. The U.S. has the largest nuclear fleet in the world, with 92 reactors scattered around the country. Many of these power plants have run for more than half a century and are approaching the end of their expected lifetimes.

    Policymakers are debating whether to retire the aging reactors or reinforce their structures to continue producing nuclear energy, which many consider a low-carbon alternative to climate-warming coal, oil, and natural gas.

    Now, MIT researchers say there’s another factor to consider in weighing the future of nuclear power: air quality. In addition to being a low carbon-emitting source, nuclear power is relatively clean in terms of the air pollution it generates. Without nuclear power, how would the pattern of air pollution shift, and who would feel its effects?

    The MIT team took on these questions in a new study appearing today in Nature Energy. They lay out a scenario in which every nuclear power plant in the country has shut down, and consider how other sources such as coal, natural gas, and renewable energy would fill the resulting energy needs throughout an entire year.

    Their analysis reveals that indeed, air pollution would increase, as coal, gas, and oil sources ramp up to compensate for nuclear power’s absence. This in itself may not be surprising, but the team has put numbers to the prediction, estimating that the increase in air pollution would have serious health effects, resulting in an additional 5,200 pollution-related deaths over a single year.

    If, however, more renewable energy sources become available to supply the energy grid, as they are expected to by the year 2030, air pollution would be curtailed, though not entirely. The team found that even under this heartier renewable scenario, there is still a slight increase in air pollution in some parts of the country, resulting in a total of 260 pollution-related deaths over one year.

    When they looked at the populations directly affected by the increased pollution, they found that Black or African American communities — a disproportionate number of whom live near fossil-fuel plants — experienced the greatest exposure.

    “This adds one more layer to the environmental health and social impacts equation when you’re thinking about nuclear shutdowns, where the conversation often focuses on local risks due to accidents and mining or long-term climate impacts,” says lead author Lyssa Freese, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    “In the debate over keeping nuclear power plants open, air quality has not been a focus of that discussion,” adds study author Noelle Selin, a professor in MIT’s Institute for Data, Systems, and Society (IDSS) and EAPS. “What we found was that air pollution from fossil fuel plants is so damaging, that anything that increases it, such as a nuclear shutdown, is going to have substantial impacts, and for some people more than others.”

    The study’s MIT-affiliated co-authors also include Principal Research Scientist Sebastian Eastham and Guillaume Chossière SM ’17, PhD ’20, along with Alan Jenn of the University of California at Davis.

    Future phase-outs

    When nuclear power plants have closed in the past, fossil fuel use increased in response. In 1985, the closure of reactors in the Tennessee Valley prompted a spike in coal use, while the 2012 shutdown of a plant in California led to an increase in natural gas. In Germany, where nuclear power has almost completely been phased out, coal-fired power increased initially to fill the gap.

    Noting these trends, the MIT team wondered how the U.S. energy grid would respond if nuclear power were completely phased out.

    “We wanted to think about what future changes were expected in the energy grid,” Freese says. “We knew that coal use was declining, and there was a lot of work already looking at the impact of what that would have on air quality. But no one had looked at air quality and nuclear power, which we also noticed was on the decline.”

    In the new study, the team used an energy grid dispatch model developed by Jenn to assess how the U.S. energy system would respond to a shutdown of nuclear power. The model simulates the production of every power plant in the country and runs continuously to estimate, hour by hour, the energy demands in 64 regions across the country.

    Much like the way the actual energy market operates, the model chooses to turn a plant’s production up or down based on cost: Plants producing the cheapest energy at any given time are given priority to supply the grid over more costly energy sources.

    The team fed the model available data on each plant’s changing emissions and energy costs throughout an entire year. They then ran the model under different scenarios, including: an energy grid with no nuclear power, a baseline grid similar to today’s that includes nuclear power, and a grid with no nuclear power that also incorporates the additional renewable sources that are expected to be added by 2030.
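    The cost-based dispatch logic described above can be sketched as a simple merit-order loop: each hour, fill demand from the cheapest plants first. The plant names, costs, and capacities below are invented for illustration and are not taken from Jenn’s model, which tracks every U.S. plant across 64 regions.

    ```python
    # Minimal merit-order dispatch sketch: each hour, demand is filled by
    # the cheapest available plants first. Plants, marginal costs ($/MWh),
    # and capacities (MW) are invented for illustration.

    def dispatch(plants, demand_mw):
        """plants: list of (name, marginal_cost, capacity_mw) tuples.
        Returns {name: mw_dispatched} meeting demand at least cost."""
        schedule = {}
        remaining = demand_mw
        for name, cost, capacity in sorted(plants, key=lambda p: p[1]):
            mw = min(capacity, remaining)
            if mw > 0:
                schedule[name] = mw
            remaining -= mw
            if remaining <= 0:
                break
        return schedule

    plants = [("nuclear", 10, 500), ("gas", 35, 400), ("coal", 30, 300)]
    print(dispatch(plants, 900))
    # Removing nuclear forces more coal and gas online to meet the same need:
    print(dispatch([p for p in plants if p[0] != "nuclear"], 700))
    ```

    The second call illustrates the study’s central mechanism: with the cheap zero-emissions plant gone, the model backfills demand with the higher-cost, higher-emitting sources.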

    They combined each simulation with an atmospheric chemistry model to simulate how each plant’s various emissions travel around the country and to overlay these tracks onto maps of population density. For populations in the path of pollution, they calculated the risk of premature death based on their degree of exposure.

    System response

    Play video

    Courtesy of the researchers, edited by MIT News

    Their analysis showed a clear pattern: Without nuclear power, air pollution worsened in general, mainly affecting regions on the East Coast, where nuclear power plants are mostly concentrated. Without those plants, the team observed an uptick in production from coal and gas plants, resulting in 5,200 pollution-related deaths across the country, compared to the baseline scenario.

    They also calculated that more people are likely to die prematurely due to climate impacts from the increase in carbon dioxide emissions, as the grid compensates for nuclear power’s absence. The climate-related effects from this additional influx of carbon dioxide could lead to 160,000 additional deaths over the next century.

    “We need to be thoughtful about how we’re retiring nuclear power plants if we are trying to think about them as part of an energy system,” Freese says. “Shutting down something that doesn’t have direct emissions itself can still lead to increases in emissions, because the grid system will respond.”

    “This might mean that we need to deploy even more renewables, in order to fill the hole left by nuclear, which is essentially a zero-emissions energy source,” Selin adds. “Otherwise we will have a reduction in air quality that we weren’t necessarily counting on.”

    This study was supported, in part, by the U.S. Environmental Protection Agency. More

  • in

    3 Questions: Leveraging carbon uptake to lower concrete’s carbon footprint

    To secure a more sustainable and resilient future, we must take a careful look at the life cycle impacts of humanity’s most-produced building material: concrete. Carbon uptake, the process by which cement-based products sequester carbon dioxide, is key to this understanding.

    Hessam AzariJafari, the MIT Concrete Sustainability Hub’s deputy director, is deeply invested in the study of this process and its acceleration, where prudent. Here, he describes how carbon uptake is a key lever to reach a carbon-neutral concrete industry.

    Q: What is carbon uptake in cement-based products and how can it influence their properties?

    A: Carbon uptake, or carbonation, is a natural process of permanently sequestering CO2 from the atmosphere by hardened cement-based products like concretes and mortars. Through this reaction, these products form different kinds of limes or calcium carbonates. This uptake occurs slowly but significantly during two phases of the life cycle of cement-based products: the use phase and the end-of-life phase.

    In general, carbon uptake increases the compressive strength of cement-based products as it can densify the paste. At the same time, carbon uptake can impact the corrosion resistance of concrete. In concrete that is reinforced with steel, the corrosion process can be initiated if the carbonation happens extensively (e.g., the whole of the concrete cover is carbonated) and intensively (e.g., a significant proportion of the hardened cement product is carbonated). [Concrete cover is the layer distance between the surface of reinforcement and the outer surface of the concrete.]

    Q: What are the factors that influence carbon uptake?

    A: The intensity of carbon uptake depends on four major factors: the climate, the types and properties of cement-based products used, the composition of binders (cement type) used, and the geometry and exposure condition of the structure.

    In regard to climate, the humidity and temperature affect the carbon uptake rate. In very low or very high humidity conditions, the carbon uptake process is slowed. High temperatures speed the process. The local atmosphere’s carbon dioxide concentration can affect the carbon uptake rate. For example, in urban areas, carbon uptake is an order of magnitude faster than in suburban areas.

    The types and properties of cement-based products have a large influence on the rate of carbon uptake. For example, mortar (consisting of water, cement, and fine aggregates) carbonates two to four times faster than concrete (consisting of water, cement, and coarse and fine aggregates) because of its more porous structure. The carbon uptake rate of dry-cast concrete masonry units is higher than wet-cast for the same reason. In structural concrete, the process is made slower as mechanical properties are improved and the density of the hardened products’ structure increases.

    Lastly, a structure’s surface area-to-volume ratio and exposure to air and water can have ramifications for its rate of carbonation. When cement-based products are covered, carbonation may be slowed or stopped. Concrete that is exposed to fresh air while being sheltered from rain can have a larger carbon uptake compared to cement-based products that are painted or carpeted. Additionally, cement-based elements with large surface areas, like thin concrete structures or mortar layers, allow uptake to progress more extensively.
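    The rate effects AzariJafari describes are commonly summarized by the classic square-root-of-time carbonation model, d = k * sqrt(t), where the coefficient k bundles climate, mix, and exposure conditions. The sketch below uses that standard model; the specific k values are illustrative assumptions, not figures from the Concrete Sustainability Hub.

    ```python
    import math

    # Classic carbonation-depth model: depth d = k * sqrt(t), where the
    # coefficient k (mm per sqrt(year)) bundles the climate, mix, and
    # exposure effects discussed above. The k values below are
    # illustrative assumptions only.

    def carbonation_depth_mm(k_mm_per_sqrt_year, years):
        """Carbonation front depth after a given service life."""
        return k_mm_per_sqrt_year * math.sqrt(years)

    # Sheltered concrete carbonates faster than rain-exposed concrete;
    # porous mortar faster still, consistent with the factors above.
    for label, k in [("rain-exposed concrete", 2.0),
                     ("sheltered concrete", 4.0),
                     ("mortar layer", 8.0)]:
        print(f"{label}: {carbonation_depth_mm(k, 50):.0f} mm after 50 years")
    ```

    The square-root dependence is why uptake is significant but slow: doubling the service life extends the carbonated depth by only about 40 percent.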

    Q: What is the role of carbon uptake in the carbon neutrality of concrete, and how should architects and engineers account for it when designing for specific applications?

    A: Carbon uptake is a part of the life cycle of any cement-based products that should be accounted for in carbon footprint calculations. Our evaluation shows the U.S. pavement network can sequester 5.8 million metric tons of CO2, of which 52 percent will be sequestered when the demolished concrete is stockpiled at its end of life.

    From one concrete structure to another, the percentage of emissions sequestered may vary. For instance, concrete bridges tend to have a lower percentage versus buildings constructed with concrete masonry. In any case, carbon uptake can influence the life cycle environmental performance of concrete.

    At the MIT Concrete Sustainability Hub, we have developed a calculator to enable construction stakeholders to estimate the carbon uptake of concrete structures during their use and end-of-life phases.

    Looking toward the future, carbon uptake’s role in the carbon neutralization of cement-based products could grow in importance. While caution should be taken in regards to uptake when reinforcing steel is embedded in concrete, there are opportunities for different stakeholders to augment carbon uptake in different cement-based products.

    Architects can influence the shape of concrete elements to increase the surface area-to-volume ratio (e.g., making “waffle” patterns on slabs and walls, or having several thin towers instead of fewer large ones on an apartment complex). Concrete manufacturers can adjust the binder type and quantity while delivering concrete that meets performance requirements. Finally, industrial ecologists and life-cycle assessment practitioners need to work on the tools and add-ons to make sure the impact of carbon is well captured when assessing the potential impacts of cement-based products in buildings and infrastructure systems.

    Currently, the cement and concrete industry is working with tech companies as well as local, state, and federal governments to lower and subsidize the cost of carbon capture, sequestration, and neutralization. Accelerating carbon uptake where reasonable could be an additional lever to neutralize the carbon emissions of the concrete value chain.

    Carbon uptake is one more piece of the puzzle that makes concrete a sustainable choice for building in many applications. The sustainability and resilience of the future built environment lean on the use of concrete. There is still much work to be done to truly build sustainably, and understanding carbon uptake is an important place to begin. More

  • in

    Detailed images from space offer clearer picture of drought effects on plants

    “MIT is a place where dreams come true,” says César Terrer, an assistant professor in the Department of Civil and Environmental Engineering. Here at MIT, Terrer says he’s given the resources needed to explore ideas he finds most exciting, and at the top of his list is climate science. In particular, he is interested in plant-soil interactions, and how the two can mitigate impacts of climate change. In 2022, Terrer received seed grant funding from the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) to produce drought monitoring systems for farmers. The project is leveraging a new generation of remote sensing devices to provide high-resolution estimates of plant water stress at regional to global scales.

    Growing up in Granada, Spain, Terrer always had an aptitude and passion for science. He studied environmental science at the University of Murcia, where he interned in the Department of Ecology. Using computational analysis tools, he worked on modeling species distribution in response to human development. Early on in his undergraduate experience, Terrer says he regarded his professors as “superheroes” with a kind of scholarly prowess. He knew he wanted to follow in their footsteps by one day working as a faculty member in academia. Of course, there would be many steps along the way before achieving that dream. 

    Upon completing his undergraduate studies, Terrer set his sights on exciting and adventurous research roles. He thought perhaps he would conduct field work in the Amazon, engaging with native communities. But when the opportunity arose to work in Australia on a state-of-the-art climate change experiment that simulates future levels of carbon dioxide, he headed south to study how plants react to CO2 in a biome of native Australian eucalyptus trees. It was during this experience that Terrer started to take a keen interest in the carbon cycle and the capacity of ecosystems to buffer rising levels of CO2 caused by human activity.

    Around 2014, he began to delve deeper into the carbon cycle as he began his doctoral studies at Imperial College London. The primary question Terrer sought to answer during his PhD was “will plants be able to absorb predicted future levels of CO2 in the atmosphere?” To answer the question, Terrer became an early adopter of artificial intelligence, machine learning, and remote sensing to analyze data from real-life, global climate change experiments. His findings from these “ground truth” values and observations resulted in a paper in the journal Science. In it, he claimed that climate models most likely overestimated how much carbon plants will be able to absorb by the end of the century, by a factor of three. 

    After postdoctoral positions at Stanford University and the Universitat Autonoma de Barcelona, followed by a prestigious Lawrence Fellowship, Terrer says he had “too many ideas and not enough time to accomplish all those ideas.” He knew it was time to lead his own group. Not long after applying for faculty positions, he landed at MIT. 

    New ways to monitor drought

    Terrer is employing similar methods to those he used during his PhD to analyze data from all over the world for his J-WAFS project. He and postdoc Wenzhe Jiao collect data from remote sensing satellites and field experiments and use machine learning to come up with new ways to monitor drought. Terrer says Jiao is a “remote sensing wizard,” who fuses data from different satellite products to understand the water cycle. With Jiao’s hydrology expertise and Terrer’s knowledge of plants, soil, and the carbon cycle, the duo is a formidable team to tackle this project.

    According to the U.N. World Meteorological Organization, the number and duration of droughts has increased by 29 percent since 2000, as compared to the two previous decades. From the Horn of Africa to the Western United States, drought is devastating vegetation and severely stressing water supplies, compromising food production and spiking food insecurity. Drought monitoring can offer fundamental information on drought location, frequency, and severity, but assessing the impact of drought on vegetation is extremely challenging. This is because plants’ sensitivity to water deficits varies across species and ecosystems. 

    Terrer and Jiao are able to obtain a clearer picture of how drought is affecting plants by employing the latest generation of remote sensing observations, which offer images of the planet with incredible spatial and temporal resolution. Satellite products such as Sentinel, Landsat, and Planet can provide daily images from space with such high resolution that individual trees can be discerned. Along with the images and datasets from satellites, the team is using ground-based observations from meteorological data. They are also using the MIT SuperCloud at MIT Lincoln Laboratory to process and analyze all of the data sets. The J-WAFS project is among the first to leverage high-resolution data to quantitatively measure plant drought impacts in the United States, with the hope of expanding to a global assessment in the future.

    Assisting farmers and resource managers 

    Every week, the U.S. Drought Monitor provides a map of drought conditions in the United States. The map is coarse in resolution and is more of a drought recap or summary, unable to predict future drought scenarios. The lack of a comprehensive spatiotemporal evaluation of historic and future drought impacts on global vegetation productivity is detrimental to farmers both in the United States and worldwide.  

    Terrer and Jiao plan to generate metrics for plant water stress at an unprecedented resolution of 10-30 meters. This means that they will be able to provide drought monitoring maps at the scale of a typical U.S. farm, giving farmers more precise, useful data every one to two days. The team will use the information from the satellites to monitor plant growth and soil moisture, as well as the time lag of plant growth response to soil moisture. In this way, Terrer and Jiao say they will eventually be able to create a kind of “plant water stress forecast” that may be able to predict adverse impacts of drought four weeks in advance. “According to the current soil moisture and lagged response time, we hope to predict plant water stress in the future,” says Jiao. 
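    One simple way to estimate the lag described above is to cross-correlate a soil-moisture time series with a vegetation-greenness series at a range of offsets and pick the offset with the strongest correlation. The sketch below demonstrates the idea on synthetic data with a built-in lag; it is not the team’s actual method, which fuses multiple satellite products.

    ```python
    import numpy as np

    # Estimate the lag between soil moisture and a vegetation index by
    # cross-correlating the two series at different offsets. The data are
    # synthetic: "greenness" tracks soil moisture with a built-in 3-step lag.

    rng = np.random.default_rng(0)
    soil = rng.standard_normal(200)
    greenness = np.roll(soil, 3) + 0.1 * rng.standard_normal(200)

    def best_lag(driver, response, max_lag=10):
        """Return the offset (in time steps) that maximizes the correlation
        between driver[t] and response[t + lag]."""
        corrs = [np.corrcoef(driver[:-lag or None], response[lag:])[0, 1]
                 for lag in range(max_lag + 1)]
        return int(np.argmax(corrs))

    print(best_lag(soil, greenness))  # recovers the built-in lag of 3
    ```

    With an estimated lag and current soil moisture in hand, vegetation stress some weeks ahead can be anticipated, which is the intuition behind the team’s planned “plant water stress forecast.”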

    The expected outcomes of this project will give farmers, land and water resource managers, and decision-makers more accurate data at the farm-specific level, allowing for better drought preparation, mitigation, and adaptation. “We expect to make our data open-access online, after we finish the project, so that farmers and other stakeholders can use the maps as tools,” says Jiao. 

    Terrer adds that the project “has the potential to help us better understand the future states of climate systems, and also identify the regional hot spots more likely to experience water crises at the national, state, local, and tribal government scales.” He also expects the project will enhance our understanding of global carbon-water-energy cycle responses to drought, with applications in determining climate change impacts on natural ecosystems as a whole. More