More stories

  • Reality check on technologies to remove carbon dioxide from the air

    In 2015, 195 nations plus the European Union signed the Paris Agreement and pledged to undertake plans designed to limit the global temperature increase to 1.5 degrees Celsius. Yet in 2023, the world exceeded that target for most, if not all, of the year — calling into question the long-term feasibility of achieving it.

    To do so, the world must reduce the levels of greenhouse gases in the atmosphere, and strategies for achieving levels that will “stabilize the climate” have been both proposed and adopted. Many of those strategies combine dramatic cuts in carbon dioxide (CO2) emissions with the use of direct air capture (DAC), a technology that removes CO2 from the ambient air. As a reality check, a team of researchers in the MIT Energy Initiative (MITEI) examined those strategies, and what they found was alarming: The strategies rely on overly optimistic — indeed, unrealistic — assumptions about how much CO2 could be removed by DAC. As a result, the strategies won’t perform as predicted. Nevertheless, the MITEI team recommends that work to develop the DAC technology continue so that it’s ready to help with the energy transition — even if it’s not the silver bullet that solves the world’s decarbonization challenge.

    DAC: The promise and the reality

    Including DAC in plans to stabilize the climate makes sense. Much work is now under way to develop DAC systems, and the technology looks promising. While companies may never run their own DAC systems, they can already buy “carbon credits” based on DAC. Today, a multibillion-dollar market exists on which entities or individuals that face high costs or excessive disruptions to reduce their own carbon emissions can pay others to take emissions-reducing actions on their behalf. Those actions can involve undertaking new renewable energy projects or “carbon-removal” initiatives such as DAC or afforestation/reforestation (planting trees in areas that have never been forested or that were forested in the past).

    DAC-based credits are especially appealing for several reasons, explains Howard Herzog, a senior research engineer at MITEI. With DAC, measuring and verifying the amount of carbon removed is straightforward; the removal is immediate, unlike with planting forests, which may take decades to have an impact; and when DAC is coupled with CO2 storage in geologic formations, the CO2 is kept out of the atmosphere essentially permanently — in contrast to, for example, sequestering it in trees, which may one day burn and release the stored CO2.

    Will current plans that rely on DAC be effective in stabilizing the climate in the coming years? To find out, Herzog and his colleagues Jennifer Morris and Angelo Gurgel, both MITEI principal research scientists, and Sergey Paltsev, a MITEI senior research scientist — all affiliated with the MIT Center for Sustainability Science and Strategy (CS3) — took a close look at the modeling studies on which those plans are based.

    Their investigation identified three unavoidable engineering challenges that together lead to a fourth challenge — high costs for removing a single ton of CO2 from the atmosphere. The details of their findings are reported in a paper published in the journal One Earth on Sept. 20.

    Challenge 1: Scaling up

    When it comes to removing CO2 from the air, nature presents “a major, non-negotiable challenge,” notes the MITEI team: The concentration of CO2 in the air is extremely low — just 420 parts per million, or roughly 0.04 percent. In contrast, the CO2 concentration in flue gases emitted by power plants and industrial processes ranges from 3 percent to 20 percent. Companies now use various carbon capture and sequestration (CCS) technologies to capture CO2 from their flue gases, but capturing CO2 from the air is much more difficult. To explain, the researchers offer the following analogy: “The difference is akin to needing to find 10 red marbles in a jar of 25,000 marbles of which 24,990 are blue [the task representing DAC] versus needing to find about 10 red marbles in a jar of 100 marbles of which 90 are blue [the task for CCS].”

    Given that low concentration, removing a single metric ton (tonne) of CO2 from air requires processing about 1.8 million cubic meters of air, which is roughly equivalent to the volume of 720 Olympic-sized swimming pools. And all that air must be moved across a CO2-capturing sorbent — a feat requiring large equipment. For example, one recently proposed design for capturing 1 million tonnes of CO2 per year would require an “air contactor” equivalent in size to a structure about three stories high and three miles long.

    Recent modeling studies project DAC deployment on the scale of 5 to 40 gigatonnes of CO2 removed per year. (A gigatonne equals 1 billion metric tonnes.) But in their paper, the researchers conclude that the likelihood of deploying DAC at the gigatonne scale is “highly uncertain.”

    Challenge 2: Energy requirement

    Given the low concentration of CO2 in the air and the need to move large quantities of air to capture it, it’s no surprise that even the best DAC processes proposed today would consume large amounts of energy — energy that’s generally supplied by a combination of electricity and heat. Including the energy needed to compress the captured CO2 for transportation and storage, most proposed processes require an equivalent of at least 1.2 megawatt-hours of electricity for each tonne of CO2 removed.

    The source of that electricity is critical. For example, using coal-based electricity to drive an all-electric DAC process would generate 1.2 tonnes of CO2 for each tonne of CO2 captured. The result would be a net increase in emissions, defeating the whole purpose of DAC. So clearly, the energy requirement must be satisfied using either low-carbon electricity or electricity generated using fossil fuels with CCS. All-electric DAC deployed at large scale — say, 10 gigatonnes of CO2 removed annually — would require 12,000 terawatt-hours of electricity, which is more than 40 percent of total global electricity generation today.

    Electricity consumption is expected to grow due to increasing overall electrification of the world economy, so low-carbon electricity will be in high demand for many competing uses — for example, in power generation, transportation, industry, and building operations. Using clean electricity for DAC instead of for reducing CO2 emissions in other critical areas raises concerns about the best uses of clean electricity.

    Many studies assume that a DAC unit could also get energy from “waste heat” generated by some industrial process or facility nearby. In the MITEI researchers’ opinion, “that may be more wishful thinking than reality.” The heat source would need to be within a few miles of the DAC plant for transporting the heat to be economical; given its high capital cost, the DAC plant would need to run nonstop, requiring constant heat delivery; and heat at the temperature required by the DAC plant would have competing uses, for example, for heating buildings. Finally, if DAC is deployed at the gigatonne-per-year scale, waste heat will likely be able to provide only a small fraction of the needed energy.

    Challenge 3: Siting

    Some analysts have asserted that, because air is everywhere, DAC units can be located anywhere. But in reality, siting a DAC plant involves many complex issues. As noted above, DAC plants require significant amounts of energy, so having access to enough low-carbon energy is critical. Likewise, having nearby options for storing the removed CO2 is also critical. If storage sites or pipelines to such sites don’t exist, major new infrastructure will need to be built, and building new infrastructure of any kind is expensive and complicated, involving issues related to permitting, environmental justice, and public acceptability — issues that are, in the words of the researchers, “commonly underestimated in the real world and neglected in models.”

    Two more siting needs must be considered. First, meteorological conditions must be acceptable. By definition, any DAC unit will be exposed to the elements, and factors like temperature and humidity will affect process performance and process availability. And second, a DAC plant will require some dedicated land — though how much is unclear, as the optimal spacing of units is as yet unresolved. Like wind turbines, DAC units need to be properly spaced to ensure maximum performance, such that one unit is not sucking in CO2-depleted air from another unit.

    Challenge 4: Cost

    Considering the first three challenges, the final challenge is clear: the cost per tonne of CO2 removed is inevitably high. Recent modeling studies assume DAC costs as low as $100 to $200 per tonne of CO2 removed. But the researchers found evidence suggesting far higher costs.

    To start, they cite typical costs for power plants and industrial sites that now use CCS to remove CO2 from their flue gases. The cost of CCS in such applications is estimated to be in the range of $50 to $150 per ton of CO2 removed. As explained above, the far lower concentration of CO2 in the air will lead to substantially higher costs.

    As explained under Challenge 1, the DAC units needed to capture the required amount of air are massive. The capital cost of building them will be high, given labor, materials, permitting costs, and so on. Some estimates in the literature exceed $5,000 per tonne captured per year.

    Then there are the ongoing costs of energy. As noted under Challenge 2, removing 1 tonne of CO2 requires the equivalent of 1.2 megawatt-hours of electricity. If that electricity costs $0.10 per kilowatt-hour, the cost of just the electricity needed to remove 1 tonne of CO2 is $120. The researchers point out that assuming such a low price is “questionable,” given the expected increase in electricity demand, future competition for clean energy, and higher costs on a system dominated by renewable — but intermittent — energy sources.

    Then there’s the cost of storage, which is ignored in many DAC cost estimates.

    Clearly, many considerations show that prices of $100 to $200 per tonne are unrealistic, and assuming such low prices will distort assessments of strategies, leading them to underperform going forward.

    The bottom line

    In their paper, the MITEI team calls DAC a “very seductive concept.” Using DAC to suck CO2 out of the air and generate high-quality carbon-removal credits can offset reduction requirements for industries that have hard-to-abate emissions. By doing so, DAC would minimize disruptions to key parts of the world’s economy, including air travel, certain carbon-intensive industries, and agriculture. However, the world would need to generate billions of tonnes of CO2 credits at an affordable price. That prospect doesn’t look likely. The largest DAC plant in operation today removes just 4,000 tonnes of CO2 per year, and the price to buy the company’s carbon-removal credits on the market today is $1,500 per tonne.

    The researchers recognize that there is room for energy efficiency improvements in the future, but DAC units will always be subject to higher work requirements than CCS applied to power plant or industrial flue gases, and there is not a clear pathway to reducing work requirements much below the levels of current DAC technologies.

    Nevertheless, the researchers recommend that work to develop DAC continue “because it may be needed for meeting net-zero emissions goals, especially given the current pace of emissions.” But their paper concludes with this warning: “Given the high stakes of climate change, it is foolhardy to rely on DAC to be the hero that comes to our rescue.”
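
    The arithmetic behind the figures quoted above is easy to check. The short Python sketch below reproduces them from first principles; it is only a back-of-the-envelope illustration, not the MITEI team's calculation, and the 75 percent capture fraction is an assumption chosen so that the result lands near the cited 1.8 million cubic meters.

```python
# Back-of-the-envelope check of the DAC figures quoted above (a sketch,
# not the study's calculation; the capture fraction is an assumed value).

CO2_MOLAR_MASS = 44.01     # g/mol
MOLAR_VOLUME = 0.0245      # m^3 of gas per mol at roughly 25 C and 1 atm
CO2_FRACTION = 420e-6      # 420 ppm by volume
CAPTURE_FRACTION = 0.75    # assumed fraction of CO2 actually captured

def air_volume_per_tonne_co2():
    """Cubic meters of air processed to capture 1 tonne of CO2."""
    moles = 1e6 / CO2_MOLAR_MASS              # 1 tonne = 1e6 grams
    co2_volume = moles * MOLAR_VOLUME         # m^3 of pure CO2
    return co2_volume / (CO2_FRACTION * CAPTURE_FRACTION)

volume = air_volume_per_tonne_co2()
print(f"Air processed per tonne of CO2: {volume / 1e6:.1f} million m^3")
print(f"Olympic pools (2,500 m^3 each): {volume / 2500:.0f}")

# Energy and cost figures from Challenges 2 and 4
mwh_per_tonne = 1.2
print(f"Electricity for 10 Gt/yr: {10e9 * mwh_per_tonne / 1e6:,.0f} TWh")
print(f"Electricity cost per tonne at $0.10/kWh: ${mwh_per_tonne * 1000 * 0.10:.0f}")
```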

  • Study finds mercury pollution from human activities is declining

    MIT researchers have some good environmental news: Mercury emissions from human activity have been declining over the past two decades, despite global emissions inventories that indicate otherwise.

    In a new study, the researchers analyzed measurements from all available monitoring stations in the Northern Hemisphere and found that atmospheric concentrations of mercury declined by about 10 percent between 2005 and 2020.

    They used two separate modeling methods to determine what is driving that trend. Both techniques pointed to a decline in mercury emissions from human activity as the most likely cause.

    Global inventories, on the other hand, have reported opposite trends. These inventories estimate atmospheric emissions using models that incorporate average emission rates of polluting activities and the scale of these activities worldwide.

    “Our work shows that it is very important to learn from actual, on-the-ground data to try and improve our models and these emissions estimates. This is very relevant for policy because, if we are not able to accurately estimate past mercury emissions, how are we going to predict how mercury pollution will evolve in the future?” says Ari Feinberg, a former postdoc in the Institute for Data, Systems, and Society (IDSS) and lead author of the study.

    The new results could help inform scientists who are embarking on a collaborative, global effort to evaluate pollution models and develop a more in-depth understanding of what drives global atmospheric concentrations of mercury.

    However, due to a lack of data from global monitoring stations and limitations in the scientific understanding of mercury pollution, the researchers couldn’t pinpoint a definitive reason for the mismatch between the inventories and the recorded measurements.

    “It seems like mercury emissions are moving in the right direction, and could continue to do so, which is heartening to see. But this was as far as we could get with mercury. We need to keep measuring and advancing the science,” adds co-author Noelle Selin, an MIT professor in the IDSS and the Department of Earth, Atmospheric and Planetary Sciences (EAPS).

    Feinberg and Selin, his MIT postdoctoral advisor, are joined on the paper by an international team of researchers that contributed atmospheric mercury measurement data and statistical methods to the study. The research appears this week in the Proceedings of the National Academy of Sciences.

    Mercury mismatch

    The Minamata Convention is a global treaty that aims to cut human-caused emissions of mercury, a potent neurotoxin that enters the atmosphere from sources like coal-fired power plants and small-scale gold mining.

    The treaty, which was signed in 2013 and went into force in 2017, is evaluated every five years. The first meeting of its conference of parties coincided with disheartening news reports that said global inventories of mercury emissions, compiled in part from information from national inventories, had increased despite international efforts to reduce them.

    This was puzzling news for environmental scientists like Selin. Data from monitoring stations showed atmospheric mercury concentrations declining during the same period.

    Bottom-up inventories combine emission factors, such as the amount of mercury that enters the atmosphere when coal mined in a certain region is burned, with estimates of pollution-causing activities, like how much of that coal is burned in power plants.

    “The big question we wanted to answer was: What is actually happening to mercury in the atmosphere and what does that say about anthropogenic emissions over time?” Selin says.

    Modeling mercury emissions is especially tricky. First, mercury is the only metal that is in liquid form at room temperature, so it has unique properties. Moreover, mercury that has been removed from the atmosphere by sinks like the ocean or land can be re-emitted later, making it hard to identify primary emission sources.

    At the same time, mercury is more difficult to study in laboratory settings than many other air pollutants, especially due to its toxicity, so scientists have limited understanding of all the chemical reactions mercury can undergo. There is also a much smaller network of mercury monitoring stations, compared to other polluting gases like methane and nitrous oxide.

    “One of the challenges of our study was to come up with statistical methods that can address those data gaps, because available measurements come from different time periods and different measurement networks,” Feinberg says.

    Multifaceted models

    The researchers compiled data from 51 stations in the Northern Hemisphere. They used statistical techniques to aggregate data from nearby stations, which helped them overcome data gaps and evaluate regional trends.

    By combining data from 11 regions, their analysis indicated that Northern Hemisphere atmospheric mercury concentrations declined by about 10 percent between 2005 and 2020.

    Then the researchers used two modeling methods — biogeochemical box modeling and chemical transport modeling — to explore possible causes of that decline. Box modeling was used to run hundreds of thousands of simulations to evaluate a wide array of emission scenarios. Chemical transport modeling is more computationally expensive but enables researchers to assess the impacts of meteorology and spatial variations on trends in selected scenarios.

    For instance, they tested one hypothesis that there may be an additional environmental sink that is removing more mercury from the atmosphere than previously thought. The models would indicate the feasibility of an unknown sink of that magnitude.

    “As we went through each hypothesis systematically, we were pretty surprised that we could really point to declines in anthropogenic emissions as being the most likely cause,” Selin says.

    Their work underscores the importance of long-term mercury monitoring stations, Feinberg adds. Many stations the researchers evaluated are no longer operational because of a lack of funding.

    While their analysis couldn’t zero in on exactly why the emissions inventories didn’t match up with actual data, they have a few hypotheses.

    One possibility is that global inventories are missing key information from certain countries. For instance, the researchers resolved some discrepancies when they used a more detailed regional inventory from China. But there was still a gap between observations and estimates.

    They also suspect the discrepancy might be the result of changes in two large sources of mercury that are particularly uncertain: emissions from small-scale gold mining and mercury-containing products.

    Small-scale gold mining involves using mercury to extract gold from soil and is often performed in remote parts of developing countries, making it hard to estimate. Yet small-scale gold mining contributes about 40 percent of human-made emissions.

    In addition, it’s difficult to determine how long it takes the pollutant to be released into the atmosphere from discarded products like thermometers or scientific equipment.

    “We’re not there yet where we can really pinpoint which source is responsible for this discrepancy,” Feinberg says.

    In the future, researchers from multiple countries, including MIT, will collaborate to study and improve the models they use to estimate and evaluate emissions. This research will be influential in helping that project move the needle on monitoring mercury, he says.

    This research was funded by the Swiss National Science Foundation, the U.S. National Science Foundation, and the U.S. Environmental Protection Agency.
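
    To illustrate what a biogeochemical box model does at its simplest, here is a toy one-box sketch in Python. It is not the model used in the study; the lifetime, emissions trajectory, and initial burden are placeholder values chosen only to show how a prescribed decline in emissions maps onto a trend in the atmospheric burden.

```python
import numpy as np

# Toy one-box model of atmospheric mercury (illustrative only; the study
# used far more detailed box and chemical transport models, and all of
# these numbers are placeholders).
LIFETIME_YEARS = 0.5   # assumed effective atmospheric lifetime
DT = 0.01              # integration time step, in years

def run_box_model(emissions_by_year, burden0):
    """Integrate dM/dt = E(t) - M / tau and return the burden at each year end."""
    burden = burden0
    burdens = []
    for e in emissions_by_year:
        for _ in range(int(1 / DT)):
            burden += (e - burden / LIFETIME_YEARS) * DT
        burdens.append(burden)
    return np.array(burdens)

# Placeholder scenario: emissions decline 1 percent per year from 2005 to 2020.
years = np.arange(2005, 2021)
emissions = 2000.0 * 0.99 ** (years - 2005)   # arbitrary units per year
burden = run_box_model(emissions, burden0=1000.0)
change = 100 * (burden[-1] - burden[0]) / burden[0]
print(f"Modeled change in atmospheric burden, 2005-2020: {change:.1f}%")
```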

  • Aligning economic and regulatory frameworks for today’s nuclear reactor technology

    Liam Hines ’22 didn’t move to Sarasota, Florida, until high school, but he’s a Floridian through and through. He jokes that he’s even got a floral shirt, what he calls a “Florida formal,” for every occasion.

    Which is why it broke his heart when toxic red algae devastated the Sunshine State’s coastline, including at his favorite beach, Caspersen. The outbreak made headline news during his high school years, with the blooms destroying marine wildlife and adversely impacting the state’s tourism-driven economy.

    In Florida, Hines says, environmental awareness is pretty high because everyday citizens are being directly impacted by climate change. After all, it’s hard not to worry when beautiful white sand beaches are covered in dead fish. Ongoing concerns about the climate cemented Hines’ resolve to pick a career that would have a strong “positive environmental impact.” He chose nuclear, as he saw it as “a green, low-carbon-emissions energy source with a pretty straightforward path to implementation.”

    Liam Hines: Ensuring that nuclear policy keeps up with nuclear technology.

    Undergraduate studies at MIT

    Knowing he wanted a career in the sciences, Hines applied and got accepted to MIT for undergraduate studies in fall 2018. An orientation program hosted by the Department of Nuclear Science and Engineering (NSE) sold him on the idea of pursuing the field. “The department is just a really tight-knit community, and that really appealed to me,” Hines says.

    During his undergraduate years, Hines realized he needed a job to pay part of his bills. “Instead of answering calls at the dorm front desk or working in the dining halls, I decided I’m going to become a licensed nuclear operator onsite,” he says. “Reactor operations offer so much hands-on experience with real nuclear systems. It doesn’t hurt that it pays better.” Becoming a licensed nuclear reactor operator is hard work, however, involving a year-long training process studying maintenance, operations, and equipment oversight. A bonus: The job, supervising the MIT Nuclear Reactor Laboratory, taught him the fundamentals of nuclear physics and engineering.

    Always interested in research, Hines got an early start by exploring the regulatory challenges of advanced fusion systems. There have been questions related to licensing requirements and the safety consequences of the onsite radionuclide inventory. Hines’ undergraduate research work involved studying precedent for such fusion facilities and comparing them to experimental facilities such as the Tokamak Fusion Test Reactor at the Princeton Plasma Physics Laboratory.

    Doctoral focus on legal and regulatory frameworks

    When scientists want to make technologies as safe as possible, they have to do two things in concert: First, they evaluate the safety of the technology, and then they make sure legal and regulatory structures take into account the evolution of these advanced technologies. Hines is taking such a two-pronged approach to his doctoral work on nuclear fission systems.

    Under the guidance of Professor Koroush Shirvan, Hines is conducting systems modeling of various reactor cores that include graphite, and simulating operations over long time spans. He then studies radionuclide transport from low-level waste facilities — the consequences of offsite storage after 50 or 100 or even 10,000 years of storage. The work has to hit safety and engineering margins, but also tread a fine line. “You want to make sure you’re not over-engineering systems and adding undue cost, but also making sure to assess the unique hazards of these advanced technologies as accurately as possible,” Hines says.

    On a parallel track, under Professor Haruko Wainwright’s advisement, Hines is applying the current science on radionuclide geochemistry to track radionuclide wastes and map their profile for hazards. One of the challenges fission reactors face is that existing low-level waste regulations were fine-tuned to old reactors. Regulations have not kept up: “Now that we have new technologies with new wastes, some of the hazards of the new waste are completely missed by existing standards,” Hines says. He is working to seal these gaps.

    A philosophy-driven outlook

    Hines is grateful for the dynamic learning environment at NSE. “A lot of the faculty have that go-getter attitude,” he points out, impressed by the entrepreneurial spirit on campus. “It’s made me confident to really tackle the things that I care about.”

    An ethics class as an undergraduate made Hines realize there were discussions in class he could apply to the nuclear realm, especially when it came to teasing apart the implications of the technology — where the devices would be built and who they would serve. He eventually went on to double-major in NSE and philosophy.

    The framework style of reading and reasoning involved in studying philosophy is particularly relevant in his current line of work, where he has to extract key points regarding nuclear regulatory issues. Much like philosophy discussions today, which involve going over material that has been discussed for centuries and framing it through new perspectives, nuclear regulatory issues too need to take the long view. “In philosophy, we have to insert ourselves into very large conversations. Similarly, in nuclear engineering, you have to understand how to take apart the discourse that’s most relevant to your research and frame it,” Hines says. This technique is especially necessary because, most of the time, nuclear regulatory issues might seem like wading in the weeds of nitty-gritty technical matters, but they can have a huge impact on the public and on public perception, Hines adds.

    As for Florida, Hines visits every chance he can get. The red tide still surfaces, but not as consistently as it once did. And since he started his job as a nuclear operator in his undergraduate days, Hines has progressed to senior reactor operator. This time around he gets to sign off on the checklists. “It’s much like when I was shift lead at Dunkin’ Donuts in high school,” Hines says. “Everyone is kind of doing the same thing, but you get to be in charge for the afternoon.”

  • 3 Questions: The past, present, and future of sustainability science

    It was 1978, over a decade before the word “sustainable” would infiltrate environmental nomenclature, and Ronald Prinn, MIT professor of atmospheric science, had just founded the Advanced Global Atmospheric Gases Experiment (AGAGE). Today, AGAGE provides real-time measurements for well over 50 environmentally harmful trace gases, enabling us to determine emissions at the country level, a key element in verifying national adherence to the Montreal Protocol and the Paris Accord. This, Prinn says, started him thinking about doing science that informed decision making.

    Much like global interest in sustainability, Prinn’s interest and involvement continued to grow into what would become three decades’ worth of achievements in sustainability science. The Center for Global Change Science (CGCS) and the Joint Program on the Science and Policy of Global Change, respectively founded and co-founded by Prinn, have recently joined forces to create the MIT School of Science’s new Center for Sustainability Science and Strategy (CS3), led by former CGCS postdoc turned MIT professor Noelle Selin.

    As he prepares to pass the torch, Prinn reflects on how far sustainability has come, and where it all began.

    Q: Tell us about the motivation for the MIT centers you helped to found around sustainability.

    A: In 1990, after I founded the Center for Global Change Science, I also co-founded the Joint Program on the Science and Policy of Global Change with a very important partner, [Henry] “Jake” Jacoby. He’s now retired, but at that point he was a professor in the MIT Sloan School of Management. Together, we determined that in order to answer questions related to what we now call sustainability of human activities, you need to combine the natural and social sciences involved in these processes. Based on this, we decided to make a joint program between the CGCS and a center that he directed, the Center for Energy and Environmental Policy Research (CEEPR).

    It was called the “joint program” and was joint for two reasons — not only were two centers joining, but two disciplines were joining. It was not about simply doing the same science. It was about bringing a team of people together that could tackle these coupled issues of environment, human development, and economy. We were the first group in the world to fully integrate these elements together.

    Q: What has been your most impactful contribution, and what effect did it have on the greater public’s overall understanding?

    A: Our biggest contribution is the development, and more importantly the application, of the Integrated Global System Model [IGSM] framework, looking at human development in both developing and developed countries, which had a significant impact on the way people thought about climate issues. With the IGSM, we were able to look at the interactions among human and natural components, studying the feedbacks and impacts that climate change had on human systems, like how it would alter agriculture and other land activities, how it would alter things we derive from the ocean, and so on.

    Policies were being developed largely by economists or climate scientists working independently, and we started showing how the real answers and analysis required a coupling of all of these components. We showed, I think convincingly, that what people used to study independently must be coupled together, because the impacts of climate change and air pollution affected so many things.

    To address the value of policy, despite the uncertainty in climate projections, we ran multiple runs of the IGSM with and without policy, with different choices for uncertain IGSM variables. For public communication, around 2005 we introduced our signature Greenhouse Gamble interactive visualization tools; these have been renewed over time as science and policies evolved.

    Q: What can MIT provide now at this critical juncture in understanding climate change and its impact?

    A: We need to further push the boundaries of integrated global system modeling to ensure full sustainability of human activity in all of its beneficial dimensions, which is the exciting focus that CS3 is designed to address. We need to focus on sustainability as a central core element and use it not just to analyze existing policies but to propose new ones. Sustainability is not just climate or air pollution; it has to do with human impacts in general. Human health is central to sustainability, and equally important is equity. We need to expand the capability for credibly assessing what impact policies have not just on developed countries, but on developing countries, taking into account that many places around the world are at artisanal levels of their economies. They cannot be blamed for anything that is changing climate and causing air pollution and other detrimental things that are currently going on. They need our help. That’s what sustainability is in its full dimensions.

    Our capabilities are evolving toward a modeling system so detailed that we can find out detrimental things about policies even at local levels before investing in changing infrastructure. This is going to require collaboration among even more disciplines and creating a seamless connection between research and decision making: not just for policies enacted in the public sector, but also for decisions that are made in the private sector.

  • Making climate models relevant for local decision-makers

    Climate models are a key technology in predicting the impacts of climate change. By running simulations of the Earth’s climate, scientists and policymakers can estimate conditions like sea level rise, flooding, and rising temperatures, and make decisions about how to appropriately respond. But current climate models struggle to provide this information quickly or affordably enough to be useful on smaller scales, such as the size of a city.

    Now, authors of a new open-access paper published in the Journal of Advances in Modeling Earth Systems have found a method to leverage machine learning to utilize the benefits of current climate models, while reducing the computational costs needed to run them.

    “It turns the traditional wisdom on its head,” says Sai Ravela, a principal research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS) who wrote the paper with EAPS postdoc Anamitra Saha.

    Traditional wisdom

    In climate modeling, downscaling is the process of using a global climate model with coarse resolution to generate finer details over smaller regions. Imagine a digital picture: A global model is a large picture of the world with a low number of pixels. To downscale, you zoom in on just the section of the photo you want to look at — for example, Boston. But because the original picture was low resolution, the new version is blurry; it doesn’t give enough detail to be particularly useful.

    “If you go from coarse resolution to fine resolution, you have to add information somehow,” explains Saha. Downscaling attempts to add that information back in by filling in the missing pixels. “That addition of information can happen two ways: Either it can come from theory, or it can come from data.”

    Conventional downscaling often involves using models built on physics (such as the process of air rising, cooling, and condensing, or the landscape of the area), and supplementing them with statistical data taken from historical observations. But this method is computationally taxing: It takes a lot of time and computing power to run, while also being expensive.

    A little bit of both

    In their new paper, Saha and Ravela have figured out a way to add the data another way. They’ve employed a technique in machine learning called adversarial learning. It uses two machines: One generates data to go into the photo. The other machine judges the sample by comparing it to actual data. If it thinks the image is fake, then the first machine has to try again until it convinces the second machine. The end goal of the process is to create super-resolution data.

    Using machine learning techniques like adversarial learning is not a new idea in climate modeling; where it currently struggles is in handling large amounts of basic physics, like conservation laws. The researchers discovered that simplifying the physics going in and supplementing it with statistics from the historical data was enough to generate the results they needed.

    “If you augment machine learning with some information from the statistics and simplified physics both, then suddenly, it’s magical,” says Ravela. He and Saha started with estimating extreme rainfall amounts by removing more complex physics equations and focusing on water vapor and land topography. They then generated general rainfall patterns for mountainous Denver and flat Chicago alike, applying historical accounts to correct the output. “It’s giving us extremes, like the physics does, at a much lower cost. And it’s giving us similar speeds to statistics, but at much higher resolution.”

    Another unexpected benefit of the results was how little training data was needed. “The fact that only a little bit of physics and a little bit of statistics was enough to improve the performance of the ML [machine learning] model … was actually not obvious from the beginning,” says Saha. It only takes a few hours to train, and can produce results in minutes, an improvement over the months other models take to run.

    Quantifying risk quickly

    Being able to run the models quickly and often is a key requirement for stakeholders such as insurance companies and local policymakers. Ravela gives the example of Bangladesh: By seeing how extreme weather events will impact the country, decisions about what crops should be grown or where populations should migrate to can be made considering a very broad range of conditions and uncertainties as soon as possible.

    “We can’t wait months or years to be able to quantify this risk,” he says. “You need to look out way into the future and at a large number of uncertainties to be able to say what might be a good decision.”

    While the current model only looks at extreme precipitation, training it to examine other critical events, such as tropical storms, winds, and temperature, is the next step of the project. With a more robust model, Ravela is hoping to apply it to other places like Boston and Puerto Rico as part of a Climate Grand Challenges project.

    “We’re very excited both by the methodology that we put together, as well as the potential applications that it could lead to,” he says.
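
    For readers unfamiliar with adversarial learning, the sketch below shows the generic two-network setup the article describes, written in Python with PyTorch on synthetic data. It is only an illustration of the technique in general; the actual study combines this idea with simplified physics (water vapor, topography) and historical statistics, none of which is reproduced here, and all layer sizes and variable names are placeholders.

```python
import torch
import torch.nn as nn

# Minimal adversarial super-resolution sketch (illustrative only).
# Generator: upsample a coarse 16x16 field to a 64x64 fine-scale field.
generator = nn.Sequential(
    nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
    nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, kernel_size=3, padding=1),
)

# Discriminator: judge whether a 64x64 field looks like real fine-scale data.
discriminator = nn.Sequential(
    nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Flatten(), nn.Linear(64 * 16 * 16, 1),
)

bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)

for step in range(100):                    # toy training loop on random data
    coarse = torch.rand(8, 1, 16, 16)      # stand-in for coarse model output
    real_fine = torch.rand(8, 1, 64, 64)   # stand-in for observed fine fields

    # Train the discriminator: real fields -> 1, generated fields -> 0.
    fake_fine = generator(coarse).detach()
    d_loss = bce(discriminator(real_fine), torch.ones(8, 1)) + \
             bce(discriminator(fake_fine), torch.zeros(8, 1))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()

    # Train the generator: try to make the discriminator call its output real.
    g_loss = bce(discriminator(generator(coarse)), torch.ones(8, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```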

  • Two MIT PhD students awarded J-WAFS fellowships for their research on water

    Since 2014, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has advanced interdisciplinary research aimed at solving the world’s most pressing water and food security challenges to meet human needs. In 2017, J-WAFS established the Rasikbhai L. Meswani Water Solutions Fellowship and the J-WAFS Graduate Student Fellowship. These fellowships provide support to outstanding MIT graduate students who are pursuing research that has the potential to improve water and food systems around the world.

    Recently, J-WAFS awarded the 2024-25 fellowships to Jonathan Bessette and Akash Ball, two MIT PhD students dedicated to addressing water scarcity by enhancing desalination and purification processes. This work is of particular relevance since the world’s freshwater supply has been steadily depleting due to the effects of climate change. In fact, one-third of the global population lacks access to safe drinking water. Bessette and Ball are focused on designing innovative solutions to enhance the resilience and sustainability of global water systems. To support their endeavors, J-WAFS will provide each recipient with funding for one academic semester for continued research and related activities.

    “This year, we received many strong fellowship applications,” says J-WAFS executive director Renee J. Robins. “Bessette and Ball both stood out, even in a very competitive pool of candidates. The award of the J-WAFS fellowships to these two students underscores our confidence in their potential to bring transformative solutions to global water challenges.”

    2024-25 Rasikbhai L. Meswani Fellowship for Water Solutions

    The Rasikbhai L. Meswani Fellowship for Water Solutions is a doctoral fellowship for students pursuing research related to water and water supply at MIT. The fellowship is made possible by Elina and Nikhil Meswani and family.

    Jonathan Bessette is a doctoral student in the Global Engineering and Research (GEAR) Center within the Department of Mechanical Engineering at MIT, advised by Professor Amos Winter. His research is focused on water treatment systems for the developing world, mainly desalination, or the process in which salts are removed from water. Currently, Bessette is working on designing and constructing a low-cost, deployable, community-scale desalination system for humanitarian crises.

    In arid and semi-arid regions, groundwater often serves as the sole water source, despite its common salinity issues. Many remote and developing areas lack reliable centralized power and water systems, making brackish groundwater desalination a vital, sustainable solution for global water scarcity. “An overlooked need for desalination is inland groundwater aquifers, rather than in coastal areas,” says Bessette. “This is because much of the population lives far enough from a coast that seawater desalination could never reach them. My work involves designing low-cost, sustainable, renewable-powered desalination technologies for highly constrained situations, such as drinking water for remote communities,” he adds.

    To achieve this goal, Bessette developed a batteryless, renewable electrodialysis desalination system. The technology is energy-efficient, conserves water, and is particularly suited for challenging environments, as it is decentralized and sustainable. The system offers significant advantages over the conventional reverse osmosis method, especially in terms of reduced energy consumption for treating brackish water. Highlighting Bessette’s capacity for engineering insight, his advisor noted the “simple and elegant solution” that Bessette and a staff engineer, Shane Pratt, devised, which negated the need for the system to have large batteries. Bessette is now focusing on simplifying the system’s architecture to make it more reliable and cost-effective for deployment in remote areas.

    Growing up in upstate New York, Bessette completed a bachelor’s degree at the State University of New York at Buffalo. As an undergrad, he taught middle and high school students in low-income areas of Buffalo about engineering and sustainability. However, he cited his junior-year travel to India, and his experience there measuring water contaminants in rural sites, as cementing his dedication to a career addressing food, water, and sanitation challenges. In addition to his doctoral research, his commitment to these goals is further evidenced by another project he is pursuing, funded by a J-WAFS India grant, that uses low-cost, remote sensors to better understand water fetching practices. Bessette is conducting this work with fellow MIT student Gokul Sampath in order to help families in rural India gain access to safe drinking water.

    2024-25 J-WAFS Graduate Student Fellowship for Water and Food Solutions

    The J-WAFS Graduate Student Fellowship is supported by the J-WAFS Research Affiliate Program, which offers companies the opportunity to engage with MIT on water and food research. Current fellowship support was provided by two J-WAFS Research Affiliates: Xylem, a leading U.S.-based provider of water treatment and infrastructure solutions, and GoAigua, a Spanish company at the forefront of digital transformation in the water industry through innovative solutions.

    Akash Ball is a doctoral candidate in the Department of Chemical Engineering, advised by Professor Heather Kulik. His research focuses on the computational discovery of novel functional materials for energy-efficient ion separation membranes with high selectivity. Advanced membranes like these are increasingly needed for applications such as water desalination, battery recycling, and removal of heavy metals from industrial wastewater.

    “Climate change, water pollution, and scarce freshwater reserves cause severe water distress for about 4 billion people annually, with 2 billion in India and China’s semiarid regions,” Ball notes. “One potential solution to this global water predicament is the desalination of seawater, since seawater accounts for 97 percent of all water on Earth.”

    Although several commercial reverse osmosis membranes are currently available, these membranes suffer from several problems, like slow water permeation, a permeability-selectivity trade-off, and high fabrication costs. Metal-organic frameworks (MOFs) are porous crystalline materials that are promising candidates for highly selective ion separation with fast water transport, due to their high surface area, the presence of different pore windows, and the tunability of their chemical functionality.

    In the Kulik lab, Ball is developing a systematic understanding of how MOF chemistry and pore geometry affect water transport and ion rejection rates. By the end of his PhD, Ball plans to identify existing, best-performing MOFs with unparalleled water uptake using machine learning models, propose novel hypothetical MOFs tailored to specific ion separations from water, and discover experimental design rules that enable the synthesis of next-generation membranes.

    Ball’s advisor praised the creativity he brings to his research, and his leadership skills that benefit her whole lab. Before coming to MIT, Ball obtained a master’s degree in chemical engineering from the Indian Institute of Technology (IIT) Bombay and a bachelor’s degree in chemical engineering from Jadavpur University in India. During a research internship at IIT Bombay in 2018, he worked on developing a technology for in situ arsenic detection in water. Like Bessette, he noted the impact of this prior research experience on his interest in global water challenges, along with his personal experience growing up in an area in India where access to safe drinking water was not guaranteed.

  • Using deep learning to image the Earth’s planetary boundary layer

    Although the troposphere is often thought of as the closest layer of the atmosphere to the Earth’s surface, the planetary boundary layer (PBL) — the lowest layer of the troposphere — is actually the part that most significantly influences weather near the surface. In the 2018 planetary science decadal survey, the PBL was raised as an important scientific issue that has the potential to enhance storm forecasting and improve climate projections.  

    “The PBL is where the surface interacts with the atmosphere, including exchanges of moisture and heat that help lead to severe weather and a changing climate,” says Adam Milstein, a technical staff member in Lincoln Laboratory’s Applied Space Systems Group. “The PBL is also where humans live, and the turbulent movement of aerosols throughout the PBL is important for air quality that influences human health.” 

    Although vital for studying weather and climate, important features of the PBL, such as its height, are difficult to resolve with current technology. In the past four years, Lincoln Laboratory staff have been studying the PBL, focusing on two different tasks: using machine learning to make 3D-scanned profiles of the atmosphere, and resolving the vertical structure of the atmosphere more clearly in order to better predict droughts.  

    This PBL-focused research effort builds on more than a decade of related work on fast, operational neural network algorithms developed by Lincoln Laboratory for NASA missions. These missions include the Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of Smallsats (TROPICS) mission as well as Aqua, a satellite that collects data about Earth’s water cycle and observes variables such as ocean temperature, precipitation, and water vapor in the atmosphere. These algorithms retrieve temperature and humidity from the satellite instrument data and have been shown to significantly improve the accuracy and usable global coverage of the observations over previous approaches. For TROPICS, the algorithms help retrieve data that are used to characterize a storm’s rapidly evolving structures in near-real time, and for Aqua, they have helped improve forecasting models, drought monitoring, and fire prediction. 

    These operational algorithms for TROPICS and Aqua are based on classic “shallow” neural networks to maximize speed and simplicity, creating a one-dimensional vertical profile for each spectral measurement collected by the instrument over each location. While this approach has improved observations of the atmosphere down to the surface overall, including the PBL, laboratory staff determined that newer “deep” learning techniques that treat the atmosphere over a region of interest as a three-dimensional image are needed to improve PBL details further.
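
    As a rough illustration of the shift described above, the Python sketch below contrasts the two ideas: a small “shallow” network that maps one column of spectral measurements to one vertical profile, versus a 3D convolutional network that ingests a whole region of the atmosphere at once. The channel counts, grid sizes, and names are invented for illustration and do not correspond to the actual TROPICS or Aqua algorithms.

```python
import torch
import torch.nn as nn

# Illustrative only: sizes and names are placeholders, not the real retrievals.
N_CHANNELS = 12   # spectral measurements per location (assumed)
N_LEVELS = 40     # vertical retrieval levels (assumed)

# "Shallow" approach: one spectral vector in, one vertical profile out.
shallow_retrieval = nn.Sequential(
    nn.Linear(N_CHANNELS, 64), nn.Tanh(),
    nn.Linear(64, N_LEVELS),
)

# "Deep" approach: treat a region as a 3D image (levels x lat x lon) and
# retrieve the whole temperature/humidity volume jointly with 3D convolutions.
deep_retrieval = nn.Sequential(
    nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 1, kernel_size=3, padding=1),
)

profile = shallow_retrieval(torch.rand(1, N_CHANNELS))       # -> (1, 40)
volume = deep_retrieval(torch.rand(1, 1, N_LEVELS, 32, 32))  # -> (1, 1, 40, 32, 32)
print(profile.shape, volume.shape)
```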

    “We hypothesized that deep learning and artificial intelligence (AI) techniques could improve on current approaches by incorporating a better statistical representation of 3D temperature and humidity imagery of the atmosphere into the solutions,” Milstein says. “But it took a while to figure out how to create the best dataset — a mix of real and simulated data; we needed to prepare to train these techniques.”

    The team collaborated with Joseph Santanello of the NASA Goddard Space Flight Center and William Blackwell, also of the Applied Space Systems Group, in a recent NASA-funded effort showing that these retrieval algorithms can improve PBL detail, including more accurate determination of the PBL height than the previous state of the art. 

    While improved knowledge of the PBL is broadly useful for increasing understanding of climate and weather, one key application is prediction of droughts. According to a Global Drought Snapshot report released last year, droughts are a pressing planetary issue that the global community needs to address. Lack of humidity near the surface, specifically at the level of the PBL, is the leading indicator of drought. While previous studies using remote-sensing techniques have examined the humidity of soil to determine drought risk, studying the atmosphere can help predict when droughts will happen.  

    In an effort funded by Lincoln Laboratory’s Climate Change Initiative, Milstein, along with laboratory staff member Michael Pieper, is working with scientists at NASA’s Jet Propulsion Laboratory (JPL) to use neural network techniques to improve drought prediction over the continental United States. While the work builds off of existing operational work JPL has done incorporating (in part) the laboratory’s operational “shallow” neural network approach for Aqua, the team believes that this work and the PBL-focused deep learning research work can be combined to further improve the accuracy of drought prediction. 

    “Lincoln Laboratory has been working with NASA for more than a decade on neural network algorithms for estimating temperature and humidity in the atmosphere from space-borne infrared and microwave instruments, including those on the Aqua spacecraft,” Milstein says. “Over that time, we have learned a lot about this problem by working with the science community, including learning about what scientific challenges remain. Our long experience working on this type of remote sensing with NASA scientists, as well as our experience with using neural network techniques, gave us a unique perspective.”

    According to Milstein, the next step for this project is to compare the deep learning results to datasets from the National Oceanic and Atmospheric Administration, NASA, and the Department of Energy collected directly in the PBL using radiosondes, a type of instrument flown on a weather balloon. “These direct measurements can be considered a kind of ‘ground truth’ to quantify the accuracy of the techniques we have developed,” Milstein says.

    This improved neural network approach holds promise to demonstrate drought prediction that can exceed the capabilities of existing indicators, Milstein says, and to be a tool that scientists can rely on for decades to come.

  • MIT-derived algorithm helps forecast the frequency of extreme weather

    To assess a community’s risk of extreme weather, policymakers rely first on global climate models that can be run decades, and even centuries, forward in time, but only at a coarse resolution. These models might be used to gauge, for instance, future climate conditions for the northeastern U.S., but not specifically for Boston.

    To estimate Boston’s future risk of extreme weather such as flooding, policymakers can combine a coarse model’s large-scale predictions with a finer-resolution model, tuned to estimate how often Boston is likely to experience damaging floods as the climate warms. But this risk analysis is only as accurate as the predictions from that first, coarser climate model.

    “If you get those wrong for large-scale environments, then you miss everything in terms of what extreme events will look like at smaller scales, such as over individual cities,” says Themistoklis Sapsis, the William I. Koch Professor and director of the Center for Ocean Engineering in MIT’s Department of Mechanical Engineering.

    Sapsis and his colleagues have now developed a method to “correct” the predictions from coarse climate models. By combining machine learning with dynamical systems theory, the team’s approach “nudges” a climate model’s simulations into more realistic patterns over large scales. When paired with smaller-scale models to predict specific weather events such as tropical cyclones or floods, the team’s approach produced more accurate predictions for how often specific locations will experience those events over the next few decades, compared to predictions made without the correction scheme.

    This animation shows the evolution of storms around the northern hemisphere, as a result of a high-resolution storm model, combined with the MIT team’s corrected global climate model. The simulation improves the modeling of extreme values for wind, temperature, and humidity, which typically have significant errors in coarse scale models. Credit: Courtesy of Ruby Leung and Shixuan Zhang, PNNL

    Sapsis says the new correction scheme is general in form and can be applied to any global climate model. Once corrected, the models can help to determine where and how often extreme weather will strike as global temperatures rise over the coming years. 

    “Climate change will have an effect on every aspect of human life, and every type of life on the planet, from biodiversity to food security to the economy,” Sapsis says. “If we have capabilities to know accurately how extreme weather will change, especially over specific locations, it can make a lot of difference in terms of preparation and doing the right engineering to come up with solutions. This is the method that can open the way to do that.”

    The team’s results appear today in the Journal of Advances in Modeling Earth Systems. The study’s MIT co-authors include postdoc Benedikt Barthel Sorensen and Alexis-Tzianni Charalampopoulos SM ’19, PhD ’23, with Shixuan Zhang, Bryce Harrop, and Ruby Leung of the Pacific Northwest National Laboratory in Washington state.

    Over the hood

    Today’s large-scale climate models simulate weather features such as the average temperature, humidity, and precipitation around the world, on a grid-by-grid basis. Running simulations of these models takes enormous computing power, and in order to simulate how weather features will interact and evolve over periods of decades or longer, models average out features every 100 kilometers or so.

    “It’s a very heavy computation requiring supercomputers,” Sapsis notes. “But these models still do not resolve very important processes like clouds or storms, which occur over smaller scales of a kilometer or less.”

    To improve the resolution of these coarse climate models, scientists typically have gone under the hood to try and fix a model’s underlying dynamical equations, which describe how phenomena in the atmosphere and oceans should physically interact.

    “People have tried to dissect into climate model codes that have been developed over the last 20 to 30 years, which is a nightmare, because you can lose a lot of stability in your simulation,” Sapsis explains. “What we’re doing is a completely different approach, in that we’re not trying to correct the equations but instead correct the model’s output.”

    The team’s new approach takes a model’s output, or simulation, and overlays an algorithm that nudges the simulation toward something that more closely represents real-world conditions. The algorithm is based on a machine-learning scheme that takes in data, such as past information for temperature and humidity around the world, and learns associations within the data that represent fundamental dynamics among weather features. The algorithm then uses these learned associations to correct a model’s predictions.

    “What we’re doing is trying to correct dynamics, as in how an extreme weather feature, such as the wind speeds during a Hurricane Sandy event, will look in the coarse model versus in reality,” Sapsis says. “The method learns dynamics, and dynamics are universal. Having the correct dynamics eventually leads to correct statistics, for example, frequency of rare extreme events.”
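
    The sketch below shows the general shape of such an output-correction scheme in Python: a small neural network is trained to nudge a coarse model's simulated fields toward reference (observation-like) fields and is then applied to new simulations. It is a generic illustration on random data, not the team's actual algorithm, which combines machine learning with dynamical systems theory; all sizes and names here are placeholders.

```python
import torch
import torch.nn as nn

# Generic output-correction sketch (illustrative only; not the MIT scheme).
N_FEATURES = 3 * 64 * 128   # e.g., temperature, humidity, wind on a coarse grid

corrector = nn.Sequential(   # learns a mapping from model output toward "reality"
    nn.Linear(N_FEATURES, 256), nn.ReLU(),
    nn.Linear(256, N_FEATURES),
)
optimizer = torch.optim.Adam(corrector.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for training data: coarse-model output and matching reference fields.
simulated = torch.rand(32, N_FEATURES)
reference = simulated + 0.1 * torch.randn(32, N_FEATURES)   # placeholder "truth"

for step in range(200):
    corrected = simulated + corrector(simulated)   # nudge the model output
    loss = loss_fn(corrected, reference)
    optimizer.zero_grad(); loss.backward(); optimizer.step()

# Once trained, the same corrector is applied to new, uncorrected simulations
# before they feed finer-scale models of specific extreme events.
new_run = torch.rand(8, N_FEATURES)
new_corrected = new_run + corrector(new_run)
```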

    Climate correction

    As a first test of their new approach, the team used the machine-learning scheme to correct simulations produced by the Energy Exascale Earth System Model (E3SM), a climate model run by the U.S. Department of Energy that simulates climate patterns around the world at a resolution of 110 kilometers. The researchers used eight years of past data for temperature, humidity, and wind speed to train their new algorithm, which learned dynamical associations between the measured weather features and the E3SM model. They then ran the climate model forward in time for about 36 years and applied the trained algorithm to the model’s simulations. They found that the corrected version produced climate patterns that more closely matched real-world observations from the last 36 years, which were not used for training.

    “We’re not talking about huge differences in absolute terms,” Sapsis says. “An extreme event in the uncorrected simulation might be 105 degrees Fahrenheit, versus 115 degrees with our corrections. But for humans experiencing this, that is a big difference.”

    When the team then paired the corrected coarse model with a specific, finer-resolution model of tropical cyclones, they found the approach accurately reproduced the frequency of extreme storms in specific locations around the world.

    “We now have a coarse model that can get you the right frequency of events, for the present climate. It’s much more improved,” Sapsis says. “Once we correct the dynamics, this is a relevant correction, even when you have a different average global temperature, and it can be used for understanding how forest fires, flooding events, and heat waves will look in a future climate. Our ongoing work is focusing on analyzing future climate scenarios.”

    “The results are particularly impressive as the method shows promising results on E3SM, a state-of-the-art climate model,” says Pedram Hassanzadeh, an associate professor who leads the Climate Extremes Theory and Data group at the University of Chicago and was not involved with the study. “It would be interesting to see what climate change projections this framework yields once future greenhouse-gas emission scenarios are incorporated.”

    This work was supported, in part, by the U.S. Defense Advanced Research Projects Agency.