More stories

  • How to solve a bottleneck for CO2 capture and conversion

    Removing carbon dioxide from the atmosphere efficiently is often seen as a crucial need for combatting climate change, but systems for removing carbon dioxide suffer from a tradeoff. Chemical compounds that efficiently remove CO₂ from the air do not easily release it once captured, and compounds that release CO₂ efficiently are not very efficient at capturing it. Optimizing one part of the cycle tends to make the other part worse.Now, using nanoscale filtering membranes, researchers at MIT have added a simple intermediate step that facilitates both parts of the cycle. The new approach could improve the efficiency of electrochemical carbon dioxide capture and release by six times and cut costs by at least 20 percent, they say.The new findings are reported today in the journal ACS Energy Letters, in a paper by MIT doctoral students Simon Rufer, Tal Joseph, and Zara Aamer, and professor of mechanical engineering Kripa Varanasi.“We need to think about scale from the get-go when it comes to carbon capture, as making a meaningful impact requires processing gigatons of CO₂,” says Varanasi. “Having this mindset helps us pinpoint critical bottlenecks and design innovative solutions with real potential for impact. That’s the driving force behind our work.”Many carbon-capture systems work using chemicals called hydroxides, which readily combine with carbon dioxide to form carbonate. That carbonate is fed into an electrochemical cell, where the carbonate reacts with an acid to form water and release carbon dioxide. The process can take ordinary air with only about 400 parts per million of carbon dioxide and generate a stream of 100 percent pure carbon dioxide, which can then be used to make fuels or other products.Both the capture and release steps operate in the same water-based solution, but the first step needs a solution with a high concentration of hydroxide ions, and the second step needs one high in carbonate ions. “You can see how these two steps are at odds,” says Varanasi. “These two systems are circulating the same sorbent back and forth. They’re operating on the exact same liquid. But because they need two different types of liquids to operate optimally, it’s impossible to operate both systems at their most efficient points.”The team’s solution was to decouple the two parts of the system and introduce a third part in between. Essentially, after the hydroxide in the first step has been mostly chemically converted to carbonate, special nanofiltration membranes then separate ions in the solution based on their charge. Carbonate ions have a charge of 2, while hydroxide ions have a charge of 1. “The nanofiltration is able to separate these two pretty well,” Rufer says.Once separated, the hydroxide ions are fed back to the absorption side of the system, while the carbonates are sent ahead to the electrochemical release stage. That way, both ends of the system can operate at their more efficient ranges. Varanasi explains that in the electrochemical release step, protons are being added to the carbonate to cause the conversion to carbon dioxide and water, but if hydroxide ions are also present, the protons will react with those ions instead, producing just water.“If you don’t separate these hydroxides and carbonates,” Rufer says, “the way the system fails is you’ll add protons to hydroxide instead of carbonate, and so you’ll just be making water rather than extracting carbon dioxide. That’s where the efficiency is lost. 
Using nanofiltration to prevent this was something that we aren’t aware of anyone proposing before.”

Testing showed that the nanofiltration could separate the carbonate from the hydroxide solution with about 95 percent efficiency, validating the concept under realistic conditions, Rufer says. The next step was to assess how much of an effect this would have on the overall efficiency and economics of the process. They created a techno-economic model, incorporating electrochemical efficiency, voltage, absorption rate, capital costs, nanofiltration efficiency, and other factors.

The analysis showed that present systems cost at least $600 per ton of carbon dioxide captured, while with the nanofiltration component added, that drops to about $450 a ton. What’s more, the new system is much more stable, continuing to operate at high efficiency even under variations in the ion concentrations in the solution. “In the old system without nanofiltration, you’re sort of operating on a knife’s edge,” Rufer says; if the concentration varies even slightly in one direction or the other, efficiency drops off drastically. “But with our nanofiltration system, it kind of acts as a buffer where it becomes a lot more forgiving. You have a much broader operational regime, and you can achieve significantly lower costs.”

He adds that this approach could apply not only to the direct air capture systems they studied specifically, but also to point-source systems attached directly to emissions sources such as power plants, or to the next stage of the process, converting captured carbon dioxide into useful products such as fuel or chemical feedstocks. Those conversion processes, he says, “are also bottlenecked in this carbonate and hydroxide tradeoff.”

In addition, this technology could lead to safer alternative chemistries for carbon capture, Varanasi says. “A lot of these absorbents can at times be toxic, or damaging to the environment. By using a system like ours, you can improve the reaction rate, so you can choose chemistries that might not have the best absorption rate initially but can be improved to enable safety.”

Varanasi adds that “the really nice thing about this is we’ve been able to do this with what’s commercially available,” and with a system that can easily be retrofitted to existing carbon-capture installations. If the costs can be further brought down to about $200 a ton, it could be viable for widespread adoption. With ongoing work, he says, “we’re confident that we’ll have something that can become economically viable” and that will ultimately produce valuable, saleable products.

Rufer notes that even today, “people are buying carbon credits at a cost of over $500 per ton. So, at this cost we’re projecting, it is already commercially viable in that there are some buyers who are willing to pay that price.” But bringing the price down further should increase the number of buyers willing to purchase the credits, he says. “It’s just a question of how widespread we can make it.” Recognizing this growing market demand, Varanasi says, “Our goal is to provide industry scalable, cost-effective, and reliable technologies and systems that enable them to directly meet their decarbonization targets.”

The research was supported by Shell International Exploration and Production Inc. through the MIT Energy Initiative and by the U.S. National Science Foundation, and made use of the facilities at MIT.nano.
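
To make the efficiency argument concrete, here is a minimal back-of-the-envelope sketch in Python. It is not the authors’ techno-economic model: the split between electricity and fixed costs and the proton-utilization assumption are placeholders, chosen only so the toy roughly reproduces the $600-versus-$450-per-ton figures quoted above.

```python
# Illustrative sketch, not the published techno-economic model.
# Premise from the article: protons fed to the release cell either liberate CO2 from
# carbonate (CO3^2- + 2H+ -> H2O + CO2) or are wasted neutralizing hydroxide
# (OH- + H+ -> H2O). Nanofiltration raises the carbonate share reaching the cell,
# so less electricity is wasted per ton of CO2 released. All numbers are placeholders.

def cost_per_ton_co2(carbonate_fraction, electricity_cost_ideal=150.0, fixed_cost=300.0):
    """Toy capture-and-release cost in $/ton CO2.

    carbonate_fraction     -- share of anions reaching the cell that are carbonate (0-1)
    electricity_cost_ideal -- assumed electricity cost per ton if every proton freed CO2
    fixed_cost             -- assumed capital and absorber cost per ton, mix-independent
    """
    proton_utilization = max(carbonate_fraction, 1e-6)   # wasted protons inflate the bill
    return fixed_cost + electricity_cost_ideal / proton_utilization

print(f"hydroxide carryover (50% carbonate): ${cost_per_ton_co2(0.50):.0f}/ton")
print(f"with ~95% nanofiltration separation: ${cost_per_ton_co2(0.95):.0f}/ton")
```

The point of the sketch is the shape of the relationship: wasted protons scale the electricity term up, so anything that raises the carbonate fraction reaching the cell pulls the cost per ton down.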

  • How can India decarbonize its coal-dependent electric power system?

    As the world struggles to reduce climate-warming carbon emissions, India has pledged to do its part, and its success is critical: In 2023, India was the third-largest carbon emitter worldwide. The Indian government has committed to having net-zero carbon emissions by 2070.To fulfill that promise, India will need to decarbonize its electric power system, and that will be a challenge: Fully 60 percent of India’s electricity comes from coal-burning power plants that are extremely inefficient. To make matters worse, the demand for electricity in India is projected to more than double in the coming decade due to population growth and increased use of air conditioning, electric cars, and so on.Despite having set an ambitious target, the Indian government has not proposed a plan for getting there. Indeed, as in other countries, in India the government continues to permit new coal-fired power plants to be built, and aging plants to be renovated and their retirement postponed.To help India define an effective — and realistic — plan for decarbonizing its power system, key questions must be addressed. For example, India is already rapidly developing carbon-free solar and wind power generators. What opportunities remain for further deployment of renewable generation? Are there ways to retrofit or repurpose India’s existing coal plants that can substantially and affordably reduce their greenhouse gas emissions? And do the responses to those questions differ by region?With funding from IHI Corp. through the MIT Energy Initiative (MITEI), Yifu Ding, a postdoc at MITEI, and her colleagues set out to answer those questions by first using machine learning to determine the efficiency of each of India’s current 806 coal plants, and then investigating the impacts that different decarbonization approaches would have on the mix of power plants and the price of electricity in 2035 under increasingly stringent caps on emissions.First step: Develop the needed datasetAn important challenge in developing a decarbonization plan for India has been the lack of a complete dataset describing the current power plants in India. While other studies have generated plans, they haven’t taken into account the wide variation in the coal-fired power plants in different regions of the country. “So, we first needed to create a dataset covering and characterizing all of the operating coal plants in India. Such a dataset was not available in the existing literature,” says Ding.Making a cost-effective plan for expanding the capacity of a power system requires knowing the efficiencies of all the power plants operating in the system. For this study, the researchers used as their metric the “station heat rate,” a standard measurement of the overall fuel efficiency of a given power plant. The station heat rate of each plant is needed in order to calculate the fuel consumption and power output of that plant as plans for capacity expansion are being developed.Some of the Indian coal plants’ efficiencies were recorded before 2022, so Ding and her team used machine-learning models to predict the efficiencies of all the Indian coal plants operating now. In 2024, they created and posted online the first comprehensive, open-sourced dataset for all 806 power plants in 30 regions of India. The work won the 2024 MIT Open Data Prize. 
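
A minimal sketch of that prediction step, in Python, might look like the following. The file name, feature set, and model choice are illustrative assumptions rather than the study’s actual pipeline.

```python
# Sketch of one way to predict station heat rate for plants lacking recent measurements,
# in the spirit of the approach described above. Column names and model are assumptions.
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

plants = pd.read_csv("india_coal_plants.csv")            # hypothetical input file
features = ["capacity_mw", "age_years", "load_factor", "is_supercritical"]  # 0/1 flag
labeled = plants.dropna(subset=["station_heat_rate"])     # plants with recorded heat rates

X_train, X_test, y_train, y_test = train_test_split(
    labeled[features], labeled["station_heat_rate"], test_size=0.2, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)
print("held-out MAE (kJ/kWh):", mean_absolute_error(y_test, model.predict(X_test)))

# Fill in heat rates for plants without recent measurements.
unlabeled = plants[plants["station_heat_rate"].isna()]
plants.loc[unlabeled.index, "station_heat_rate"] = model.predict(unlabeled[features])
```
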
This dataset includes each plant’s power capacity, efficiency, age, load factor (a measure indicating how much of the time it operates), water stress, and more.In addition, they categorized each plant according to its boiler design. A “supercritical” plant operates at a relatively high temperature and pressure, which makes it thermodynamically efficient, so it produces a lot of electricity for each unit of heat in the fuel. A “subcritical” plant runs at a lower temperature and pressure, so it’s less thermodynamically efficient. Most of the Indian coal plants are still subcritical plants running at low efficiency.Next step: Investigate decarbonization optionsEquipped with their detailed dataset covering all the coal power plants in India, the researchers were ready to investigate options for responding to tightening limits on carbon emissions. For that analysis, they turned to GenX, a modeling platform that was developed at MITEI to help guide decision-makers as they make investments and other plans for the future of their power systems.Ding built a GenX model based on India’s power system in 2020, including details about each power plant and transmission network across 30 regions of the country. She also entered the coal price, potential resources for wind and solar power installations, and other attributes of each region. Based on the parameters given, the GenX model would calculate the lowest-cost combination of equipment and operating conditions that can fulfill a defined future level of demand while also meeting specified policy constraints, including limits on carbon emissions. The model and all data sources were also released as open-source tools for all viewers to use.Ding and her colleagues — Dharik Mallapragada, a former principal research scientist at MITEI who is now an assistant professor of chemical and biomolecular energy at NYU Tandon School of Engineering and a MITEI visiting scientist; and Robert J. Stoner, the founding director of the MIT Tata Center for Technology and Design and former deputy director of MITEI for science and technology — then used the model to explore options for meeting demands in 2035 under progressively tighter carbon emissions caps, taking into account region-to-region variations in the efficiencies of the coal plants, the price of coal, and other factors. They describe their methods and their findings in a paper published in the journal Energy for Sustainable Development.In separate runs, they explored plans involving various combinations of current coal plants, possible new renewable plants, and more, to see their outcome in 2035. Specifically, they assumed the following four “grid-evolution scenarios:”Baseline: The baseline scenario assumes limited onshore wind and solar photovoltaics development and excludes retrofitting options, representing a business-as-usual pathway.High renewable capacity: This scenario calls for the development of onshore wind and solar power without any supply chain constraints.Biomass co-firing: This scenario assumes the baseline limits on renewables, but here all coal plants — both subcritical and supercritical — can be retrofitted for “co-firing” with biomass, an approach in which clean-burning biomass replaces some of the coal fuel. 
Certain coal power plants in India already co-fire coal and biomass, so the technology is known.Carbon capture and sequestration plus biomass co-firing: This scenario is based on the same assumptions as the biomass co-firing scenario with one addition: All of the high-efficiency supercritical plants are also retrofitted for carbon capture and sequestration (CCS), a technology that captures and removes carbon from a power plant’s exhaust stream and prepares it for permanent disposal. Thus far, CCS has not been used in India. This study specifies that 90 percent of all carbon in the power plant exhaust is captured.Ding and her team investigated power system planning under each of those grid-evolution scenarios and four assumptions about carbon caps: no cap, which is the current situation; 1,000 million tons (Mt) of carbon dioxide (CO2) emissions, which reflects India’s announced targets for 2035; and two more-ambitious targets, namely 800 Mt and 500 Mt. For context, CO2 emissions from India’s power sector totaled about 1,100 Mt in 2021. (Note that transmission network expansion is allowed in all scenarios.)Key findingsAssuming the adoption of carbon caps under the four scenarios generated a vast array of detailed numerical results. But taken together, the results show interesting trends in the cost-optimal mix of generating capacity and the cost of electricity under the different scenarios.Even without any limits on carbon emissions, most new capacity additions will be wind and solar generators — the lowest-cost option for expanding India’s electricity-generation capacity. Indeed, this is observed to be the case now in India. However, the increasing demand for electricity will still require some new coal plants to be built. Model results show a 10 to 20 percent increase in coal plant capacity by 2035 relative to 2020.Under the baseline scenario, renewables are expanded up to the maximum allowed under the assumptions, implying that more deployment would be economical. More coal capacity is built, and as the cap on emissions tightens, there is also investment in natural gas power plants, as well as batteries to help compensate for the now-large amount of intermittent solar and wind generation. When a 500 Mt cap on carbon is imposed, the cost of electricity generation is twice as high as it was with no cap.The high renewable capacity scenario reduces the development of new coal capacity and produces the lowest electricity cost of the four scenarios. Under the most stringent cap — 500 Mt — onshore wind farms play an important role in bringing the cost down. “Otherwise, it’ll be very expensive to reach such stringent carbon constraints,” notes Ding. “Certain coal plants that remain run only a few hours per year, so are inefficient as well as financially unviable. But they still need to be there to support wind and solar.” She explains that other backup sources of electricity, such as batteries, are even more costly. The biomass co-firing scenario assumes the same capacity limit on renewables as in the baseline scenario, and the results are much the same, in part because the biomass replaces such a low fraction — just 20 percent — of the coal in the fuel feedstock. “This scenario would be most similar to the current situation in India,” says Ding. “It won’t bring down the cost of electricity, so we’re basically saying that adding this technology doesn’t contribute effectively to decarbonization.”But CCS plus biomass co-firing is a different story. 
It also assumes the limits on renewables development, yet it is the second-best option in terms of reducing costs. Under the 500 Mt cap on CO2 emissions, retrofitting for both CCS and biomass co-firing produces a 22 percent reduction in the cost of electricity compared to the baseline scenario. In addition, as the carbon cap tightens, this option reduces the extent of deployment of natural gas plants and significantly improves overall coal plant utilization. That increased utilization “means that coal plants have switched from just meeting the peak demand to supplying part of the baseline load, which will lower the cost of coal generation,” explains Ding.Some concernsWhile those trends are enlightening, the analyses also uncovered some concerns for India to consider, in particular, with the two approaches that yielded the lowest electricity costs.The high renewables scenario is, Ding notes, “very ideal.” It assumes that there will be little limiting the development of wind and solar capacity, so there won’t be any issues with supply chains, which is unrealistic. More importantly, the analyses showed that implementing the high renewables approach would create uneven investment in renewables across the 30 regions. Resources for onshore and offshore wind farms are mainly concentrated in a few regions in western and southern India. “So all the wind farms would be put in those regions, near where the rich cities are,” says Ding. “The poorer cities on the eastern side, where the coal power plants are, will have little renewable investment.”So the approach that’s best in terms of cost is not best in terms of social welfare, because it tends to benefit the rich regions more than the poor ones. “It’s like [the government will] need to consider the trade-off between energy justice and cost,” says Ding. Enacting state-level renewable generation targets could encourage a more even distribution of renewable capacity installation. Also, as transmission expansion is planned, coordination among power system operators and renewable energy investors in different regions could help in achieving the best outcome.CCS plus biomass co-firing — the second-best option for reducing prices — solves the equity problem posed by high renewables, and it assumes a more realistic level of renewable power adoption. However, CCS hasn’t been used in India, so there is no precedent in terms of costs. The researchers therefore based their cost estimates on the cost of CCS in China and then increased the required investment by 10 percent, the “first-of-a-kind” index developed by the U.S. Energy Information Administration. Based on those costs and other assumptions, the researchers conclude that coal plants with CCS could come into use by 2035 when the carbon cap for power generation is less than 1,000 Mt.But will CCS actually be implemented in India? While there’s been discussion about using CCS in heavy industry, the Indian government has not announced any plans for implementing the technology in coal-fired power plants. Indeed, India is currently “very conservative about CCS,” says Ding. 
“Some researchers say CCS won’t happen because it’s so expensive, and as long as there’s no direct use for the captured carbon, the only thing you can do is put it in the ground.” She adds, “It’s really controversial to talk about whether CCS will be implemented in India in the next 10 years.”

Ding and her colleagues hope that other researchers and policymakers — especially those working in developing countries — may benefit from gaining access to their datasets and learning about their methods. Based on their findings for India, she stresses the importance of understanding the detailed geographical situation in a country in order to design plans and policies that are both realistic and equitable.
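
For readers unfamiliar with capacity-expansion models such as GenX, the toy linear program below captures the core idea at a vastly smaller scale: choose the least-cost generation mix that meets demand while staying under a carbon cap. The costs, emission factors, resource limits, demand, and cap are rough placeholders, not the study’s inputs.

```python
# Toy least-cost generation mix under a carbon cap. GenX does this with far more detail
# (hourly dispatch, transmission, storage, unit commitment); every number here is assumed.
from scipy.optimize import linprog

techs = ["coal", "gas", "solar", "wind"]
cost = [60, 70, 40, 45]             # assumed all-in cost of serving demand, $/MWh
emis = [1.0, 0.45, 0.0, 0.0]        # assumed emission factors, tCO2/MWh
max_gen = [1500, 500, 900, 700]     # assumed annual generation limits, TWh
demand = 2400                       # annual demand, TWh
cap = 700                           # carbon cap, Mt CO2 (TWh x tCO2/MWh = Mt numerically)

res = linprog(
    c=cost,                                  # minimize total generation cost
    A_ub=[[-1, -1, -1, -1], emis],           # -sum(gen) <= -demand ; sum(emis*gen) <= cap
    b_ub=[-demand, cap],
    bounds=list(zip([0] * 4, max_gen)),
)

for tech, gen in zip(techs, res.x):
    print(f"{tech:>5s}: {gen:7.0f} TWh")
print(f"average generation cost: ${res.fun / demand:.1f}/MWh")
```

Tightening `cap` in this toy pushes generation from coal toward gas and (once the renewable limits are raised) toward wind and solar, which is the same qualitative behavior the full model reports as the emissions caps become more stringent.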

  • Using liquid air for grid-scale energy storage

    As the world moves to reduce carbon emissions, solar and wind power will play an increasing role on electricity grids. But those renewable sources only generate electricity when it’s sunny or windy. So to ensure a reliable power grid — one that can deliver electricity 24/7 — it’s crucial to have a means of storing electricity when supplies are abundant and delivering it later, when they’re not. And sometimes large amounts of electricity will need to be stored not just for hours, but for days, or even longer.Some methods of achieving “long-duration energy storage” are promising. For example, with pumped hydro energy storage, water is pumped from a lake to another, higher lake when there’s extra electricity and released back down through power-generating turbines when more electricity is needed. But that approach is limited by geography, and most potential sites in the United States have already been used. Lithium-ion batteries could provide grid-scale storage, but only for about four hours. Longer than that and battery systems get prohibitively expensive.A team of researchers from MIT and the Norwegian University of Science and Technology (NTNU) has been investigating a less-familiar option based on an unlikely-sounding concept: liquid air, or air that is drawn in from the surroundings, cleaned and dried, and then cooled to the point that it liquefies. “Liquid air energy storage” (LAES) systems have been built, so the technology is technically feasible. Moreover, LAES systems are totally clean and can be sited nearly anywhere, storing vast amounts of electricity for days or longer and delivering it when it’s needed. But there haven’t been conclusive studies of its economic viability. Would the income over time warrant the initial investment and ongoing costs? With funding from the MIT Energy Initiative’s Future Energy Systems Center, the researchers developed a model that takes detailed information on LAES systems and calculates when and where those systems would be economically viable, assuming future scenarios in line with selected decarbonization targets as well as other conditions that may prevail on future energy grids.They found that under some of the scenarios they modeled, LAES could be economically viable in certain locations. Sensitivity analyses showed that policies providing a subsidy on capital expenses could make LAES systems economically viable in many locations. Further calculations showed that the cost of storing a given amount of electricity with LAES would be lower than with more familiar systems such as pumped hydro and lithium-ion batteries. They conclude that LAES holds promise as a means of providing critically needed long-duration storage when future power grids are decarbonized and dominated by intermittent renewable sources of electricity.The researchers — Shaylin A. Cetegen, a PhD candidate in the MIT Department of Chemical Engineering (ChemE); Professor Emeritus Truls Gundersen of the NTNU Department of Energy and Process Engineering; and MIT Professor Emeritus Paul I. Barton of ChemE — describe their model and their findings in a new paper published in the journal Energy.The LAES technology and its benefitsLAES systems consists of three steps: charging, storing, and discharging. When supply on the grid exceeds demand and prices are low, the LAES system is charged. Air is then drawn in and liquefied. A large amount of electricity is consumed to cool and liquefy the air in the LAES process. 
The liquid air is then sent to highly insulated storage tanks, where it’s held at a very low temperature and atmospheric pressure. When the power grid needs added electricity to meet demand, the liquid air is first pumped to a higher pressure and then heated, and it turns back into a gas. This high-pressure, high-temperature, vapor-phase air expands in a turbine that generates electricity to be sent back to the grid.According to Cetegen, a primary advantage of LAES is that it’s clean. “There are no contaminants involved,” she says. “It takes in and releases only ambient air and electricity, so it’s as clean as the electricity that’s used to run it.” In addition, a LAES system can be built largely from commercially available components and does not rely on expensive or rare materials. And the system can be sited almost anywhere, including near other industrial processes that produce waste heat or cold that can be used by the LAES system to increase its energy efficiency.Economic viabilityIn considering the potential role of LAES on future power grids, the first question is: Will LAES systems be attractive to investors? Answering that question requires calculating the technology’s net present value (NPV), which represents the sum of all discounted cash flows — including revenues, capital expenditures, operating costs, and other financial factors — over the project’s lifetime. (The study assumed a cash flow discount rate of 7 percent.)To calculate the NPV, the researchers needed to determine how LAES systems will perform in future energy markets. In those markets, various sources of electricity are brought online to meet the current demand, typically following a process called “economic dispatch:” The lowest-cost source that’s available is always deployed next. Determining the NPV of liquid air storage therefore requires predicting how that technology will fare in future markets competing with other sources of electricity when demand exceeds supply — and also accounting for prices when supply exceeds demand, so excess electricity is available to recharge the LAES systems.For their study, the MIT and NTNU researchers designed a model that starts with a description of an LAES system, including details such as the sizes of the units where the air is liquefied and the power is recovered, and also capital expenses based on estimates reported in the literature. The model then draws on state-of-the-art pricing data that’s released every year by the National Renewable Energy Laboratory (NREL) and is widely used by energy modelers worldwide. The NREL dataset forecasts prices, construction and retirement of specific types of electricity generation and storage facilities, and more, assuming eight decarbonization scenarios for 18 regions of the United States out to 2050.The new model then tracks buying and selling in energy markets for every hour of every day in a year, repeating the same schedule for five-year intervals. Based on the NREL dataset and details of the LAES system — plus constraints such as the system’s physical storage capacity and how often it can switch between charging and discharging — the model calculates how much money LAES operators would make selling power to the grid when it’s needed and how much they would spend buying electricity when it’s available to recharge their LAES system. In line with the NREL dataset, the model generates results for 18 U.S. 
regions and eight decarbonization scenarios, including 100 percent decarbonization by 2035 and 95 percent decarbonization by 2050, and other assumptions about future energy grids, including high-demand growth plus high and low costs for renewable energy and for natural gas.Cetegen describes some of their results: “Assuming a 100-megawatt (MW) system — a standard sort of size — we saw economic viability pop up under the decarbonization scenario calling for 100 percent decarbonization by 2035.” So, positive NPVs (indicating economic viability) occurred only under the most aggressive — therefore the least realistic — scenario, and they occurred in only a few southern states, including Texas and Florida, likely because of how those energy markets are structured and operate.The researchers also tested the sensitivity of NPVs to different storage capacities, that is, how long the system could continuously deliver power to the grid. They calculated the NPVs of a 100 MW system that could provide electricity supply for one day, one week, and one month. “That analysis showed that under aggressive decarbonization, weekly storage is more economically viable than monthly storage, because [in the latter case] we’re paying for more storage capacity than we need,” explains Cetegen.Improving the NPV of the LAES systemThe researchers next analyzed two possible ways to improve the NPV of liquid air storage: by increasing the system’s energy efficiency and by providing financial incentives. Their analyses showed that increasing the energy efficiency, even up to the theoretical limit of the process, would not change the economic viability of LAES under the most realistic decarbonization scenarios. On the other hand, a major improvement resulted when they assumed policies providing subsidies on capital expenditures on new installations. Indeed, assuming subsidies of between 40 percent and 60 percent made the NPVs for a 100 MW system become positive under all the realistic scenarios.Thus, their analysis showed that financial incentives could be far more effective than technical improvements in making LAES economically viable. While engineers may find that outcome disappointing, Cetegen notes that from a broader perspective, it’s good news. “You could spend your whole life trying to optimize the efficiency of this process, and it wouldn’t translate to securing the investment needed to scale the technology,” she says. “Policies can take a long time to implement as well. But theoretically you could do it overnight. So if storage is needed [on a future decarbonized grid], then this is one way to encourage adoption of LAES right away.”Cost comparison with other energy storage technologiesCalculating the economic viability of a storage technology is highly dependent on the assumptions used. As a result, a different measure — the “levelized cost of storage” (LCOS) — is typically used to compare the costs of different storage technologies. In simple terms, the LCOS is the cost of storing each unit of energy over the lifetime of a project, not accounting for any income that results.On that measure, the LAES technology excels. The researchers’ model yielded an LCOS for liquid air storage of about $60 per megawatt-hour, regardless of the decarbonization scenario. That LCOS is about a third that of lithium-ion battery storage and half that of pumped hydro. Cetegen cites another interesting finding: the LCOS of their assumed LAES system varied depending on where it’s being used. 
The standard practice of reporting a single LCOS for a given energy storage technology may not provide the full picture.

Cetegen has adapted the model and is now calculating the NPV and LCOS for energy storage using lithium-ion batteries. But she’s already encouraged by the LCOS of liquid air storage. “While LAES systems may not be economically viable from an investment perspective today, that doesn’t mean they won’t be implemented in the future,” she concludes. “With limited options for grid-scale storage expansion and the growing need for storage technologies to ensure energy security, if we can’t find economically viable alternatives, we’ll likely have to turn to least-cost solutions to meet storage needs. This is why the story of liquid air storage is far from over. We believe our findings justify the continued exploration of LAES as a key energy storage solution for the future.”
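
As a concrete illustration of the two metrics discussed above, the short sketch below computes an NPV at the study’s 7 percent discount rate and a levelized cost of storage from hypothetical cash flows. The capital cost, operating cost, and energy throughput are placeholders chosen only so the toy lands near the roughly $60 per megawatt-hour LCOS reported for LAES; they are not the paper’s inputs.

```python
# Worked toy example of NPV and LCOS. All inputs are assumed placeholders.

def npv(cash_flows, rate=0.07):
    """Net present value; cash_flows[t] is the net cash flow in year t (year 0 = capex)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))

def lcos(capex, annual_cost, annual_mwh, years, rate=0.07):
    """Levelized cost of storage: lifetime discounted cost per discounted MWh delivered."""
    disc_cost = capex + sum(annual_cost / (1 + rate) ** t for t in range(1, years + 1))
    disc_mwh = sum(annual_mwh / (1 + rate) ** t for t in range(1, years + 1))
    return disc_cost / disc_mwh

# Hypothetical 100 MW plant: $100M up front, $4M/yr to run, 200,000 MWh delivered per year,
# 30-year life, with an assumed $12M/yr net revenue stream for the NPV calculation.
flows = [-100e6] + [12e6] * 30
print(f"NPV : ${npv(flows) / 1e6:,.1f}M")
print(f"LCOS: ${lcos(100e6, 4e6, 200_000, 30):.0f}/MWh")
```

Note the asymmetry the article relies on: LCOS ignores revenue entirely, so a technology can show an attractive LCOS while still having a negative NPV in markets where it cannot earn enough selling power back to the grid.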

  • MIT students advance solutions for water and food with the help of J-WAFS

    For the past decade, the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) has been instrumental in promoting student engagement across the Institute to help solve the world’s most pressing water and food system challenges. As part of J-WAFS’ central mission of securing the world’s water and food supply, J-WAFS aims to cultivate the next generation of leaders in the water and food sectors by encouraging MIT student involvement through a variety of programs and mechanisms that provide research funding, mentorship, and other types of support.J-WAFS offers a range of opportunities for both undergraduate and graduate students to engage in the advancement of water and food systems research. These include graduate student fellowships, travel grants for participation in conferences, funding for research projects in India, video competitions highlighting students’ water and food research, and support for student-led organizations and initiatives focused on critical areas in water and food.As J-WAFS enters its second decade, it continues to expose students across the Institute to experiential hands-on water and food research, career and other networking opportunities, and a platform to develop their innovative and collaborative solutions.Graduate student fellowshipsIn 2017, J-WAFS inaugurated two graduate student fellowships: the Rasikbhai L. Meswani Fellowship for Water Solutions and the J-WAFS Graduate Student Fellowship Program. The Rasikbhai L. Meswani Fellowship for Water Solutions is a doctoral fellowship for students pursuing research related to water for human need at MIT. The fellowship is made possible by Elina and Nikhil Meswani and family. Each year, up to two outstanding students are selected to receive fellowship support for one academic semester. Through it, J-WAFS seeks to support distinguished MIT students who are pursuing solutions to the pressing global water supply challenges of our time. The J-WAFS Fellowship for Water and Food Solutions is funded by the J-WAFS Research Affiliate Program, which offers companies the opportunity to collaborate with MIT on water and food research. A portion of each research affiliate’s fees supports this fellowship.Aditya Avinash Ghodgaonkar, a PhD student in the Department of Mechanical Engineering (MechE), reflects on how receiving a J-WAFS graduate student fellowship positively impacted his research on the design of low-cost emitters for affordable, resilient drip irrigation for farmers: “My J-WAFS fellowship gave me the flexibility and financial support needed to explore new directions in the area of clog-resistant drip irrigation that had a higher risk element that might not have been feasible to manage on an industrially sponsored project,” Ghodgaonkar explains. Emitters, which control the volume and flow rate of water used during irrigation, often clog due to small particles like sand. Ghodgaonkar worked with Professor Amos Winter, and with farmers in resource-constrained communities in countries like Jordan and Morocco, to develop an emitter that is mechanically more resistant to clogging. Ghodgaonkar reports that their energy-efficient, compact, clog-resistant drip emitters are being commercialized by Toro and may be available for retail in the next few years. 
The opportunities and funding support Ghodgaonkar has received from J-WAFS contributed greatly to his entrepreneurial success and the advancement of the water and agricultural sectors.Linzixuan (Rhoda) Zhang, a PhD student advised by Professor Robert Langer and Principal Research Scientist Ana Jaklenec of the Department of Chemical Engineering, was a 2022 J-WAFS Graduate Student Fellow. With the fellowship, Zhang was able to focus on her innovative research on a novel micronutrient delivery platform that fortifies food with essential vitamins and nutrients. “We intake micronutrients from basically all the healthy food that we eat; however, around the world there are about 2 billion people currently suffering from micronutrient deficiency because they do not have access to very healthy, very fresh food,” Zhang says. Her research involves the development of biodegradable polymers that can deliver these micronutrients in harsh environments in underserved regions of the world. “Vitamin A is not very stable, for example; we have vitamin A in different vegetables but when we cook them, the vitamin can easily degrade,” Zhang explains. However, when vitamin A is encapsulated in the microparticle platform, simulation of boiling and of the stomach environment shows that vitamin A was stabilized. “The meaningful factors behind this experiment are real,” says Zhang. The J-WAFS Fellowship helped position Zhang to win the 2024 Collegiate Inventors Competition for this work.J-WAFS grant for water and food projects in IndiaJ-WAFS India Grants are intended to further the work being pursued by MIT individuals as a part of their research, innovation, entrepreneurship, coursework, or related activities. Faculty, research staff, and undergraduate and graduate students are eligible to apply. The program aims to support projects that will benefit low-income communities in India, and facilitates travel and other expenses related to directly engaging with those communities.Gokul Sampath, a PhD student in the Department of Urban Studies and Planning, and Jonathan Bessette, a PhD student in MechE, initially met through J-WAFS-sponsored conference travel, and discovered their mutual interest in the problem of arsenic in water in India. Together, they developed a cross-disciplinary proposal that received a J-WAFS India Grant. Their project is studying how women in rural India make decisions about where they fetch water for their families, and how these decisions impact exposure to groundwater contaminants like naturally-occurring arsenic. Specifically, they are developing low-cost remote sensors to better understand water-fetching practices. The grant is enabling Sampath and Bessette to equip Indian households with sensor-enabled water collection devices (“smart buckets”) that will provide them data about fetching practices in arsenic-affected villages. By demonstrating the efficacy of a sensor-based approach, the team hopes to address a major data gap in international development. “It is due to programs like the Jameel Water and Food Systems Lab that I was able to obtain the support for interdisciplinary work on connecting water security, public health, and regional planning in India,” says Sampath.J-WAFS travel grants for water conferencesIn addition to funding graduate student research, J-WAFS also provides grants for graduate students to attend water conferences worldwide. Typically, students will only receive travel funding to attend conferences where they are presenting their research. 
However, the J-WAFS travel grants support learning, networking, and career exploration opportunities for exceptional MIT graduate students who are interested in a career in the water sector, whether in academia, nonprofits, government, or industry.Catherine Lu ’23, MNG ’24 was awarded a 2023 Travel Grant to attend the UNC Water and Health Conference in North Carolina. The conference serves as a curated space for policymakers, practitioners, and researchers to convene and assess data, scrutinize scientific findings, and enhance new and existing strategies for expanding access to and provision of services for water, sanitation, and hygiene (WASH). Lu, who studied civil and environmental engineering, worked with Professor Dara Entekhabi on modeling and predicting droughts in Africa using satellite Soil Moisture Active Passive (SMAP) data. As she evaluated her research trajectory and career options in the water sector, Lu found the conference to be informative and enlightening. “I was able to expand my knowledge on all the sectors and issues that are related to water and the implications they have on my research topic.” Furthermore, she notes: “I was really impressed by the diverse range of people that were able to attend the conference. The global perspective offered at the conference provided a valuable context for understanding the challenges and successes of different regions around the world — from WASH education in schools in Zimbabwe and India to rural water access disparities in the United States … Being able to engage with such passionate and dedicated people has motivated me to continue progress in this sector.” Following graduation, Lu secured a position as a water resources engineer at CDM Smith, an engineering and construction firm.Daniela Morales, a master’s student in city planning in the Department of Urban Studies and Planning, was a 2024 J-WAFS Travel Grant recipient who attended World Water Week in Stockholm, Sweden. The annual global conference is organized by the Stockholm International Water Institute and convenes leading experts, decision-makers, and professionals in the water sector to actively engage in discussions and developments addressing critical water-related challenges. Morales’ research interests involve drinking water quality and access in rural and peri-urban areas affected by climate change impacts, the effects of municipal water shutoffs on marginalized communities, and the relationship between regional water management and public health outcomes. When reflecting on her experience at the conference, Morales writes: “Being part of this event has given me so much motivation to continue my professional and academic journey in water management as it relates to public health and city planning … There was so much energy that was collectively generated in the conference, and so many new ideas that I was able to process around my own career interests and my role as a future planner in water management, that the last day of the conference felt less like an ending and more of the beginning of a new chapter. 
I am excited to take all the information I learned to work towards my own research, and continue to build relationships with all the new contacts I made.” Morales also notes that without the support of the J-WAFS grant, “I would not have had the opportunity to make it to Stockholm and participate in such a unique week of water wisdom.”

Seed grants and Solutions grants

J-WAFS offers seed grants for early-stage research and Solutions Grants for later-stage research that is ready to move from the lab to the commercial world. Proposals for both types of grants must be submitted and led by an MIT principal investigator, but graduate students, and sometimes undergraduates, are often supported by these grants.

Arjav Shah, a PhD-MBA student in MIT’s Department of Chemical Engineering and the MIT Sloan School of Management, is currently pursuing the commercialization of a water treatment technology that was first supported through a 2019 J-WAFS seed grant and then a 2022 J-WAFS Solutions Grant with Professor Patrick Doyle. The technology uses hydrogels to remove a broad range of micropollutants from water. The Solutions funding enables entrepreneurial students and postdocs to lay the groundwork to commercialize a technology by assessing use scenarios and exploring business needs with actual potential customers. “With J-WAFS’ support, we were not only able to scale up the technology, but also gain a deeper understanding of market needs and develop a strong business case,” says Shah. Shah and the Solutions team have discovered that the hydrogels could be used in several real-world contexts, ranging from large-scale industrial use to small-scale, portable, off-grid applications. “We are incredibly grateful to J-WAFS for their support, particularly in fostering industry connections and facilitating introductions to investors, potential customers, and experts,” Shah adds.

Shah was also a 2023 J-WAFS Travel Grant awardee who attended Stockholm World Water Week that year. He says, “J-WAFS has played a pivotal role in both my academic journey at MIT and my entrepreneurial pursuits. J-WAFS support has helped me grow both as a scientist and an aspiring entrepreneur. The exposure and opportunities provided have allowed me to develop critical skills such as customer discovery, financial modeling, business development, fundraising, and storytelling — all essential for translating technology into real-world impact. These experiences provided invaluable insights into what it takes to bring a technology from the lab to market.”

Shah is currently leading efforts to spin out a company to commercialize the hydrogel research. Since receiving J-WAFS support, the team has made major strides toward launching a startup company, including winning the Pillar VC Moonshot Prize, the Cleantech Open National Grand Prize, and the MassCEC Catalyst Award, and participating in the NSF I-Corps National Program.

J-WAFS student video competitions

J-WAFS has hosted two video competitions: MIT Research for a Water Secure Future and MIT Research for a Food Secure Future, in honor of World Water Day and World Food Day, respectively. In these competitions, students are tasked with creating original videos showcasing their innovative water and food research conducted at MIT.
The opportunity is open to MIT students, postdocs, and recent alumni.Following a review by a distinguished panel of judges, Vishnu Jayaprakash SM ’19, PhD ’22 won first place in the 2022 J-WAFS World Food Day Student Video Competition for his video focused on eliminating pesticide pollution and waste. Jayaprakash delved into the science behind AgZen-Cloak, a new generation of agricultural sprays that prevents pesticides from bouncing off of plants and seeping into the ground, thus causing harmful runoff. The J-WAFS competition provided Jayaprakash with a platform to highlight the universal, low-cost, and environmentally sustainable benefits of AgZen-Cloak. Jayaprakash worked on similar technology as a funded student on a J-WAFS Solutions grant with Professor Kripa Varanasi. The Solutions grant, in fact, helped Jayaprakash and Varanasi to launch AgZen, a company that deploys AgZen-Cloak and other products and technologies to control the interactions of droplets and sprays with crop surfaces. AgZen is currently helping farmers sustainably tend to their agricultural plots while also protecting the environment.  In 2021, Hilary Johnson SM ’18, PhD ’22, won first place in the J-WAFS World Water Day video competition. Her video highlighted her work on a novel pump that uses adaptive hydraulics for improved pump efficiency. The pump was part of a sponsored research project with Xylem Inc., a J-WAFS Research Affiliate company, and Professor Alex Slocum of MechE. At the time, Johnson was a PhD student in Slocum’s lab. She was instrumental in the development of the pump by engineering the volute to expand and contract to meet changing system flow rates. Johnson went on to later become a 2021-22 J-WAFS Fellow, and is now a full-time mechanical engineer at the Lawrence Livermore National Laboratory.J-WAFS-supported student clubsJ-WAFS-supported student clubs provide members of the MIT student community the opportunity for networking and professional advancement through events focused on water and food systems topics.J-WAFS is a sponsor of the MIT Water Club, a student-led group that supports and promotes the engagement of the MIT community in water-sector-related activism, dissemination of information, and research innovation. The club allows students to spearhead the organization of conferences, lectures, outreach events, research showcases, and entrepreneurship competitions including the former MIT Water Innovation Prize and MIT Water Summit. J-WAFS not only sponsors the MIT Water Club financially, but offers mentorship and guidance to the leadership team.The MIT Food and Agriculture Club is also supported by J-WAFS. The club’s mission is to promote the engagement of the MIT community in food and agriculture-related topics. In doing so, the students lead initiatives to share the innovative technology and business solutions researchers are developing in food and agriculture systems. J-WAFS assists in the connection of passionate MIT students with those who are actively working in the food and agriculture industry beyond the Institute. From 2015 to 2022, J-WAFS also helped the club co-produce the Rabobank-MIT Food and Agribusiness Innovation Prize — a student business plan competition for food and agricultural startups.From 2023 onward, the MIT Water Club and the MIT Food and Ag Club have been joining forces to organize a combined prize competition: The MIT Water, Food and Agriculture (WFA) Innovation Prize. 
The WFA Innovation Prize is a business plan competition for student-led startups focused on any region or market. The teams present business plans involving a technology, product, service, or process that is aimed at solving a problem related to water, food, or agriculture. The competition encourages all approaches to innovation, from engineering and product design to policy and data analytics. The goal of the competition is to help emerging entrepreneurs translate research and ideas into businesses, access mentors and resources, and build networks in the water, food, and agriculture industries. J-WAFS offers financial and in-kind support, working with student leaders to plan, organize, and implement the stages of the competition through to the final pitch event. This year, J-WAFS is continuing to support the WFA team, which is led by Ali Decker, an MBA student at MIT Sloan, and Sam Jakshtis, a master’s student in MIT’s science in real estate development program. The final pitch event will take place on April 30 in the MIT Media Lab.“I’ve had the opportunity to work with Renee Robins, executive director of J-WAFS, on MIT’s Water, Food and Agriculture Innovation Prize for the past two years, and it has been both immensely valuable and a delight to have her support,” says Decker. “Renee has helped us in all areas of prize planning: brainstorming new ideas, thinking through startup finalist selection, connecting to potential sponsors and partners, and more. Above all, she supports us with passion and joy; each time we meet, I look forward to our discussion,” Decker adds.J-WAFS eventsThroughout the year, J-WAFS aims to offer events that will engage any in the MIT student community who are working in water or food systems. For example, on April 19, 2023, J-WAFS teamed up with the MIT Energy Initiative (MITEI) and the Environmental Solutions Initiative (ESI) to co-host an MIT student poster session for Earth Month. The theme of the poster session was “MIT research for a changing planet,” and it featured work from 11 MIT students with projects in water, food, energy, and the environment. The students, who represented a range of MIT departments, labs, and centers, were on hand to discuss their projects and engage with those attending the event. Attendees could vote for their favorite poster after being asked to consider which poster most clearly communicated the research problem and the potential solution. At the end of the night, votes were tallied and the winner of the “People’s Choice Award” for best poster was Elaine Liu ’24, an undergraduate in mathematics at the time of the event. Liu’s poster featured her work on managing failure cascades in systems with wind power.J-WAFS also hosts less-structured student networking events. For instance, during MIT’s Independent Activities Period (IAP) in January 2024, J-WAFS hosted an ice cream social for student networking. The informal event was an opportunity for graduate and undergraduate students from across the Institute to meet and mingle with like-minded peers working in, or interested in, water and food systems. 
Students were able to explain their current and future research, interests, and projects and ask questions while exchanging ideas, engaging with one another, and potentially forming collaborations, or at the very least sharing insights.

Looking ahead to 10 more years of student impact

Over the past decade, J-WAFS has demonstrated a strong commitment to empowering students in the water and food sectors, fostering an environment where they can confidently drive meaningful change and innovation. PhD student Jonathan Bessette sums up the J-WAFS community as a “one-of-a-kind community that enables essential research in water and food that otherwise would not be pursued. It’s this type of research that is not often the focus of major funding, yet has such a strong impact in sustainable development.”

J-WAFS aims to provide students with the support and tools they need to conduct authentic and meaningful water and food-related research that will benefit communities around the world. This support, coupled with an MIT education, enables students to become leaders in sustainable water and food systems. As the second decade of J-WAFS programming begins, the J-WAFS team remains committed to fostering student collaboration across the Institute, driving innovative solutions to revitalize the world’s water and food systems while empowering the next generation of pioneers in these critical fields.

  • Enabling energy innovation at scale

    Enabling and sustaining a clean energy transition depends not only on groundbreaking technology to redefine the world’s energy systems, but also on that innovation happening at scale. As a part of an ongoing speaker series, the MIT Energy Initiative (MITEI) hosted Emily Knight, the president and CEO of The Engine, a nonprofit incubator and accelerator dedicated to nurturing technology solutions to the world’s most urgent challenges. She explained how her organization is bridging the gap between research breakthroughs and scalable commercial impact.“Our mission from the very beginning was to support and accelerate what we call ‘tough tech’ companies — [companies] who had this vision to solve some of the world’s biggest problems,” Knight said.The Engine, a spinout of MIT, coined the term “tough tech” to represent not only the durability of the technology, but also the complexity and scale of the problems it will solve. “We are an incubator and accelerator focused on building a platform and creating what I believe is an open community for people who want to build tough tech, who want to fund tough tech, who want to work in a tough tech company, and ultimately be a part of this community,” said Knight.According to Knight, The Engine creates “an innovation orchard” where early-stage research teams have access to the infrastructure and resources needed to take their ideas from lab to market while maximizing impact. “We use this pathway — from idea to investment, then investment to impact — in a lot of the work that we do,” explained Knight.She said that tough tech exists at the intersection of several risk factors: technology, market and customer, regulatory, and scaling. Knight highlighted MIT spinout Commonwealth Fusion Systems (CFS) — one of many MIT spinouts within The Engine’s ecosystem that focus on energy — as an example of how The Engine encourages teams to work through these risks.In the early days, the CFS team was told to assume their novel fusion technology would work. “If you’re only ever worried that your technology won’t work, you won’t pick your head up and have the right people on your team who are building the public affairs relationships so that, when you need it, you can get your first fusion reactor sited and done,” explained Knight. “You don’t know where to go for the next round of funding, and you don’t know who in government is going to be your advocates when you need them to be.”“I think [CFS’s] eighth employee was a public affairs person,” Knight said. With the significant regulatory, scaling, and customer risks associated with fusion energy, building their team wisely was essential. Bringing on a public affairs person helped CFS build awareness and excitement around fusion energy in the local community and build the community programs necessary for grant funding.The Engine’s growing ecosystem of entrepreneurs, researchers, institutions, and government agencies is a key component of the support offered to early-stage researchers. The ecosystem creates a space for sharing knowledge and resources, which Knight believes is critical for navigating the unique challenges associated with Tough Tech.This support can be especially important for new entrepreneurs: “This leader that is going from PhD student to CEO — that is a really, really big journey that happens the minute you get funding,” said Knight. 
“Knowing that you’re in a community of people who are on that same journey is really important.”

The Engine also extends this support to the broader community through educational programs that walk participants through the process of translating their research from lab to market. Knight highlighted two climate and energy startups that joined The Engine through one such program geared toward graduate students and postdocs: Lithios, which is producing sustainable, low-cost lithium, and Lydian, which is developing sustainable aviation fuels.

The Engine also offers access to capital from investors with an intimate understanding of tough tech ventures. Knight said that government agency partners can offer additional support through public funding opportunities and highlighted that grants from the U.S. Department of Energy were key in the early funding of another MIT spinout within their ecosystem, Sublime Systems.

In response to the current political shift away from climate investments, as well as uncertainty surrounding government funding, Knight believes that the connections within their ecosystem are more important than ever as startups explore alternative funding. “We’re out there thinking about funding mechanisms that could be more reliable. That’s our role as an incubator.”

Being able to convene the right people to address a problem is something that Knight attributes to her education at Cornell University’s School of Hotel Administration. “My ethos across all of this is about service,” stated Knight. “We’re constantly evolving our resources and how we help our teams based on the gaps they’re facing.”

MITEI Presents: Advancing the Energy Transition is an MIT Energy Initiative speaker series highlighting energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. The next seminar in this series will be April 30 with Manish Bapna, president and CEO of the Natural Resources Defense Council. Visit MITEI’s Events page for more information on this and additional events.

  • in

    Taking the “training wheels” off clean energy

    Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.

    “What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.

    The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.

    Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.

    “We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”

    A shifting clean energy landscape

    During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure. “Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.

    Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.

    Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.

    During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said.
“That’s where the federal government is going to have to play a role.”

David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. There’s a realization that the effects of climate change are already upon us.”

“Everything on the table”

The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.

Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.

By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said. (A rough back-of-the-envelope check on those figures appears at the end of this story.)

Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs.

“We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.

One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.

“I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”

The road ahead

Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and she said that academic institutions need to focus more on how to scale and spur investments in green energy.

“The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift.
And fortunately, it’s starting to.”

During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. “The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”

Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.

World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”
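As a rough sanity check on the fusion figures quoted above, the following back-of-the-envelope estimate (not from the conference; it assumes deuterium-tritium fuel and uses standard reaction values) shows why tens of kilograms of fuel per year is a plausible number for a 400-megawatt plant:

$$
\frac{E}{m} \approx \frac{17.6\ \text{MeV}}{5 \times 1.66\times 10^{-27}\ \text{kg}} \approx 3.4\times 10^{14}\ \text{J/kg}
$$
$$
50\ \text{kg/yr} \times 3.4\times 10^{14}\ \text{J/kg} \approx 1.7\times 10^{16}\ \text{J} \approx 4{,}700\ \text{GWh of fusion energy per year}
$$
$$
400\ \text{MW} \times 8{,}766\ \text{h} \approx 3{,}500\ \text{GWh of plant output over a full year}
$$

The two totals agree to within the right order of magnitude; the remaining gap is absorbed by thermal-to-electric conversion, blanket energy multiplication, and capacity factor, none of which were specified in the talk.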

  • in

    Will neutrons compromise the operation of superconducting magnets in a fusion plant?

    High-temperature superconducting magnets made from REBCO, an acronym for rare earth barium copper oxide, make it possible to create an intense magnetic field that can confine the extremely hot plasma needed for fusion reactions, which combine two hydrogen atoms to form an atom of helium, releasing a neutron in the process.

    But some early tests suggested that neutron irradiation inside a fusion power plant might instantaneously suppress the superconducting magnets’ critical current (the maximum current they can carry without resistance), potentially causing a reduction in the fusion power output.

    Now, a series of experiments has clearly demonstrated that this instantaneous effect of neutron bombardment, known as the “beam on effect,” should not be an issue during reactor operation, thus clearing the path for projects such as the ARC fusion system being developed by MIT spinoff company Commonwealth Fusion Systems.

    The findings were reported in the journal Superconductor Science and Technology, in a paper by MIT graduate student Alexis Devitre and professors Michael Short, Dennis Whyte, and Zachary Hartwig, along with six others.

    “Nobody really knew if it would be a concern,” Short explains. He recalls looking at these early findings: “Our group thought, man, somebody should really look into this. But now, luckily, the result of the paper is: It’s conclusively not a concern.”

    The possible issue first arose during some initial tests of the REBCO tapes planned for use in the ARC system. “I can remember the night when we first tried the experiment,” Devitre recalls. “We were all down in the accelerator lab, in the basement. It was a big shocker because suddenly the measurement we were looking at, the critical current, just went down by 30 percent” when it was measured under radiation conditions (approximating those of the fusion system), as opposed to when it was only measured after irradiation.

    Before that, researchers had irradiated the REBCO tapes and then tested them afterward, Short says. “We had the idea to measure while irradiating, the way it would be when the reactor’s really on,” he says. “And then we observed this giant difference, and we thought, oh, this is a big deal. It’s a margin you’d want to know about if you’re designing a reactor.”

    After a series of carefully calibrated tests, it turned out the drop in critical current was not caused by the irradiation at all, but was just an effect of temperature changes brought on by the proton beam used for the irradiation experiments. This is something that would not be a factor in an actual fusion plant, Short says.

    “We repeated experiments ‘oh so many times’ and collected about a thousand data points,” Devitre says. They then went through a detailed statistical analysis to show that the effects were exactly the same, under conditions where the material was just heated as when it was both heated and irradiated. (A toy sketch of this kind of comparison appears at the end of this story.)

    This excluded the possibility that the instantaneous suppression of the critical current had anything to do with the “beam on effect,” at least within the sensitivity of their tests. “Our experiments are quite sensitive,” Short says. “We can never say there’s no effect, but we can say that there’s no important effect.”

    To carry out these tests required building a special facility for the purpose. Only a few such facilities exist in the world.
“They’re all custom builds, and without this, we wouldn’t have been able to find out the answer,” he says.

The finding that this specific issue is not a concern for the design of fusion plants “illustrates the power of negative results. If you can conclusively prove that something doesn’t happen, you can stop scientists from wasting their time hunting for something that doesn’t exist.” And in this case, Short says, “You can tell the fusion companies: ‘You might have thought this effect would be real, but we’ve proven that it’s not, and you can ignore it in your designs.’ So that’s one more risk retired.”

That could be a relief not only to Commonwealth Fusion Systems but also to several other companies pursuing fusion plant designs, Devitre says. “There’s a bunch. And it’s not just fusion companies,” he adds. There remains the important issue of longer-term degradation of the REBCO that would occur over years or decades, which the group is presently investigating. Others are pursuing the use of these magnets for satellite thrusters and particle accelerators to study subatomic physics, where the effect could also have been a concern. For all these uses, “this is now one less thing to be concerned about,” Devitre says.

The research team also included David Fischer, Kevin Woller, Maxwell Rae, Lauryn Kortman, and Zoe Fisher at MIT, and N. Riva at Proxima Fusion in Germany. This research was supported by Eni S.p.A. through the MIT Energy Initiative.
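The heated-only versus heated-and-irradiated comparison described above can be illustrated with a minimal sketch. The snippet below is not the authors’ analysis code; it uses synthetic stand-in data (in place of the roughly 1,000 real measurements) and standard two-sample statistics to show how one can both test for a difference and bound how large an undetected effect could be, echoing the “no important effect” conclusion.

# Illustrative only: synthetic stand-ins for normalized critical-current data,
# compared the way the article describes (heated-only vs. heated + irradiated).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Normalized critical current Ic/Ic0 measured at matched temperatures.
ic_heated_only = rng.normal(loc=0.70, scale=0.02, size=500)
ic_heated_irrad = rng.normal(loc=0.70, scale=0.02, size=500)

# Welch two-sample t-test: is any "beam on" suppression resolvable at all?
t_stat, p_value = stats.ttest_ind(ic_heated_only, ic_heated_irrad, equal_var=False)

# Confidence interval on the mean difference: how large an effect could the
# data still be hiding? A tight interval around zero is the "no important
# effect" statement in quantitative form.
diff = ic_heated_only.mean() - ic_heated_irrad.mean()
std_err = np.sqrt(ic_heated_only.var(ddof=1) / ic_heated_only.size
                  + ic_heated_irrad.var(ddof=1) / ic_heated_irrad.size)
dof = ic_heated_only.size + ic_heated_irrad.size - 2
ci_low, ci_high = stats.t.interval(0.95, dof, loc=diff, scale=std_err)

print(f"p-value for a beam-on difference: {p_value:.3f}")
print(f"95% confidence interval on the Ic shift: [{ci_low:+.4f}, {ci_high:+.4f}]")

With well-matched samples the test reports no significant difference, and the width of the confidence interval is what quantifies how sensitive the comparison is to a real effect.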

  • in

    Reducing carbon emissions from residential heating: A pathway forward

    In the race to reduce climate-warming carbon emissions, the buildings sector is falling behind. While carbon dioxide (CO2) emissions in the U.S. electric power sector dropped by 34 percent between 2005 and 2021, emissions in the building sector declined by only 18 percent in that same time period. Moreover, in extremely cold locations, burning natural gas to heat houses can make up a substantial share of the emissions portfolio. Therefore, steps to electrify buildings in general, and residential heating in particular, are essential for decarbonizing the U.S. energy system.

    But that change will increase demand for electricity and decrease demand for natural gas. What will be the net impact of those two changes on carbon emissions and on the cost of decarbonizing? And how will the electric power and natural gas sectors handle the new challenges involved in their long-term planning for future operations and infrastructure investments?

    A new study by MIT researchers with support from the MIT Energy Initiative (MITEI) Future Energy Systems Center unravels the impacts of various levels of electrification of residential space heating on the joint power and natural gas systems. A specially devised modeling framework enabled them to estimate not only the added costs and emissions for the power sector to meet the new demand, but also any changes in costs and emissions that result for the natural gas sector.

    The analyses brought some surprising outcomes. For example, they show that — under certain conditions — switching 80 percent of homes to heating by electricity could cut carbon emissions and at the same time significantly reduce costs over the combined natural gas and electric power sectors relative to the case in which there is only modest switching. That outcome depends on two changes: Consumers must install high-efficiency heat pumps plus take steps to prevent heat losses from their homes, and planners in the power and the natural gas sectors must work together as they make long-term infrastructure and operations decisions. Based on their findings, the researchers stress the need for strong state, regional, and national policies that encourage and support the steps that homeowners and industry planners can take to help decarbonize today’s building sector.

    A two-part modeling approach

    To analyze the impacts of electrification of residential heating on costs and emissions in the combined power and gas sectors, a team of MIT experts in building technology, power systems modeling, optimization techniques, and more developed a two-part modeling framework. Team members included Rahman Khorramfar, a senior postdoc in MITEI and the Laboratory for Information and Decision Systems (LIDS); Morgan Santoni-Colvin SM ’23, a former MITEI graduate research assistant, now an associate at Energy and Environmental Economics, Inc.; Saurabh Amin, a professor in the Department of Civil and Environmental Engineering and principal investigator in LIDS; Audun Botterud, a principal research scientist in LIDS; Leslie Norford, a professor in the Department of Architecture; and Dharik Mallapragada, a former MITEI principal research scientist, now an assistant professor at New York University, who led the project. They describe their new methods and findings in a paper published in the journal Cell Reports Sustainability on Feb. 6.
The first model in the framework quantifies how various levels of electrification will change end-use demand for electricity and for natural gas, and the impacts of possible energy-saving measures that homeowners can take to help. “To perform that analysis, we built a ‘bottom-up’ model — meaning that it looks at electricity and gas consumption of individual buildings and then aggregates their consumption to get an overall demand for power and for gas,” explains Khorramfar. By assuming a wide range of building “archetypes” — that is, groupings of buildings with similar physical characteristics and properties — coupled with trends in population growth, the team could explore how demand for electricity and for natural gas would change under each of five assumed electrification pathways: “business as usual” with modest electrification, medium electrification (about 60 percent of homes are electrified), high electrification (about 80 percent of homes make the change), and medium and high electrification with “envelope improvements,” such as sealing up heat leaks and adding insulation.

The second part of the framework consists of a model that takes the demand results from the first model as inputs and “co-optimizes” the overall electricity and natural gas system to minimize annual investment and operating costs while adhering to any constraints, such as limits on emissions or on resource availability. The modeling framework thus enables the researchers to explore the impact of each electrification pathway on the infrastructure and operating costs of the two interacting sectors. (A toy sketch of this two-part setup appears at the end of this story.)

The New England case study: A challenge for electrification

As a case study, the researchers chose New England, a region where the weather is sometimes extremely cold and where burning natural gas to heat houses contributes significantly to overall emissions. “Critics will say that electrification is never going to happen [in New England]. It’s just too expensive,” comments Santoni-Colvin. But he notes that most studies focus on the electricity sector in isolation. The new framework considers the joint operation of the two sectors and then quantifies their respective costs and emissions. “We know that electrification will require large investments in the electricity infrastructure,” says Santoni-Colvin. “But what hasn’t been well quantified in the literature is the savings that we generate on the natural gas side by doing that — so, the system-level savings.”

Using their framework, the MIT team performed model runs aimed at an 80 percent reduction in building-sector emissions relative to 1990 levels — a target consistent with regional policy goals for 2050. The researchers defined parameters including details about building archetypes, the regional electric power system, existing and potential renewable generating systems, battery storage, availability of natural gas, and other key factors describing New England.

They then performed analyses assuming various scenarios with different mixes of home improvements. While most studies assume typical weather, they instead developed 20 projections of annual weather data based on historical weather patterns and adjusted for the effects of climate change through 2050. They then analyzed their five levels of electrification.

Relative to business-as-usual projections, results from the framework showed that high electrification of residential heating could more than double the demand for electricity during peak periods and increase overall electricity demand by close to 60 percent.
Assuming that building-envelope improvements are deployed in parallel with electrification reduces the magnitude and weather sensitivity of peak loads and creates overall efficiency gains that reduce the combined demand for electricity plus natural gas for home heating by up to 30 percent relative to the present day. Notably, a combination of high electrification and envelope improvements resulted in the lowest average cost for the overall electric power-natural gas system in 2050.

Lessons learned

Replacing existing natural gas-burning furnaces and boilers with heat pumps reduces overall energy consumption. Santoni-Colvin calls it “something of an intuitive result” that could be expected because heat pumps are “just that much more efficient than old, fossil fuel-burning systems. But even so, we were surprised by the gains.”

Other unexpected results include the importance of homeowners making more traditional energy efficiency improvements, such as adding insulation and sealing air leaks — steps supported by recent rebate policies. Those changes are critical to reducing costs that would otherwise be incurred for upgrading the electricity grid to accommodate the increased demand. “You can’t just go wild dropping heat pumps into everybody’s houses if you’re not also considering other ways to reduce peak loads. So it really requires an ‘all of the above’ approach to get to the most cost-effective outcome,” says Santoni-Colvin.

Testing a range of weather outcomes also provided important insights. Demand for heating fuel is very weather-dependent, yet most studies are based on a limited set of weather data — often a “typical year.” The researchers found that electrification can lead to extended peak electric load events that can last for a few days during cold winters. Accordingly, the researchers conclude that there will be a continuing need for a “firm, dispatchable” source of electricity; that is, a power-generating system that can be relied on to produce power any time it’s needed — unlike solar and wind systems. As examples, they modeled some possible technologies, including power plants fired by a low-carbon fuel or by natural gas equipped with carbon capture equipment. But they point out that there’s no way of knowing what types of firm generators will be available in 2050. It could be a system that’s not yet mature, or perhaps doesn’t even exist today.

In presenting their findings, the researchers note several caveats. For one thing, their analyses don’t include the estimated cost to homeowners of installing heat pumps. While that cost is widely discussed and debated, that issue is outside the scope of their current project.

In addition, the study doesn’t specify what happens to existing natural gas pipelines. “Some homes are going to electrify and get off the gas system and not have to pay for it, leaving other homes with increasing rates because the gas system cost now has to be divided among fewer customers,” says Khorramfar. “That will inevitably raise equity questions that need to be addressed by policymakers.”

Finally, the researchers note that policies are needed to drive residential electrification. Current financial support for installation of heat pumps and steps to make homes more thermally efficient are a good start. But such incentives must be coupled with a new approach to planning energy infrastructure investments. Traditionally, electric power planning and natural gas planning are performed separately.
However, to decarbonize residential heating, the two sectors should coordinate when planning future operations and infrastructure needs. Results from the MIT analysis indicate that such cooperation could significantly reduce both emissions and costs for residential heating — a change that would yield a much-needed step toward decarbonizing the buildings sector as a whole.
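To make the two-part structure concrete, here is a minimal, self-contained sketch in Python. It is not the study’s model: the archetype counts, heat-pump efficiencies, costs, and emission factors below are illustrative placeholders, and the “co-optimization” is reduced to a three-variable linear program. It only shows the shape of the workflow described above: aggregate building-archetype demand under an electrification pathway, then jointly size power-sector resources while counting gas-sector costs and emissions.

# Toy sketch of the two-part framework (illustrative numbers, not the paper's).
from scipy.optimize import linprog

# --- Part 1: bottom-up demand from hypothetical building archetypes ---------
# homes: count; heat: annual heat demand (MWh_th/home); peak_kw: design-day
# heat rate (kW_th/home); base_elec: non-heating electricity (MWh/home).
ARCHETYPES = {
    "small_old":     dict(homes=400_000, heat=18.0, peak_kw=9.0, base_elec=6.0),
    "large_old":     dict(homes=250_000, heat=28.0, peak_kw=14.0, base_elec=8.0),
    "new_efficient": dict(homes=150_000, heat=10.0, peak_kw=5.0, base_elec=7.0),
}
HP_COP, HP_PEAK_COP = 2.8, 1.8   # seasonal vs. cold-snap heat-pump efficiency
ENVELOPE_SAVINGS = 0.25          # fraction of heat demand removed by envelope upgrades

def pathway_demand(electrified_share, envelope=False):
    """Aggregate archetypes into system electric energy E (MWh), electric
    peak P (MW), and remaining gas heating demand G (MWh_th)."""
    keep = 1.0 - ENVELOPE_SAVINGS if envelope else 1.0
    E = P = G = 0.0
    for a in ARCHETYPES.values():
        heat, peak = a["heat"] * keep, a["peak_kw"] * keep
        E += a["homes"] * (a["base_elec"] + electrified_share * heat / HP_COP)
        P += a["homes"] * (2.0 + electrified_share * peak / HP_PEAK_COP) / 1e3
        G += a["homes"] * (1.0 - electrified_share) * heat
    return E, P, G

# --- Part 2: toy joint power-gas "co-optimization" for one pathway ----------
def co_optimize(E, P, G, co2_cap):
    """Choose wind capacity, gas-plant capacity, and gas-fired generation to
    minimize annualized cost, subject to peak adequacy, energy balance, a
    dispatch limit, and an emissions cap shared with gas heating."""
    c = [110_000.0, 70_000.0, 40.0]          # $/MW-yr, $/MW-yr, $/MWh
    A_ub = [
        [-0.30,        -1.0,          0.0],  # 0.3*wind + gas_cap >= P
        [-0.35 * 8760,  0.0,         -1.0],  # wind energy + gas_gen >= E
        [ 0.0,         -0.90 * 8760,  1.0],  # gas_gen <= 90% of gas capacity
        [ 0.0,          0.0,          0.37], # 0.37*gas_gen + 0.18*G <= cap
    ]
    b_ub = [-P, -E, 0.0, co2_cap - 0.18 * G]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, method="highs")
    if not res.success:
        return None, None                    # emissions cap cannot be met
    return res.fun + 30.0 * G, res.x         # add gas-heating fuel cost

for name, share, env in [("modest", 0.10, False),
                         ("high", 0.80, False),
                         ("high + envelope", 0.80, True)]:
    E, P, G = pathway_demand(share, env)
    cost, x = co_optimize(E, P, G, co2_cap=2.0e6)  # tCO2/yr, illustrative
    if cost is None:
        print(f"{name:16s}: infeasible under the CO2 cap (too much gas heating)")
    else:
        print(f"{name:16s}: ~${cost / 1e9:.2f}B/yr, wind {x[0]:,.0f} MW, "
              f"gas capacity {x[1]:,.0f} MW")

In the actual study each piece is far richer (hourly operations, storage, multiple weather projections, and gas-sector infrastructure costs), but the flow of information is the same: pathway-specific demand from the bottom-up model feeds a joint cost minimization across the two sectors.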