More stories

  • Optimizing food subsidies: Applying digital platforms to maximize nutrition

    Oct. 16 is World Food Day, a global campaign to celebrate the founding of the Food and Agriculture Organization 80 years ago, and to work toward a healthy, sustainable, food-secure future. More than 670 million people in the world are facing hunger. Millions of others face rising obesity rates and struggle to get healthy food for proper nutrition. World Food Day calls on not only world governments, but business, academia, the media, and even the youth to take action to promote resilient food systems and combat hunger. This year, the Abdul Latif Jameel Water and Food Systems Laboratory (J-WAFS) is spotlighting an MIT researcher who is working toward this goal by studying food and water systems in the Global South.

    J-WAFS seed grants provide funding to novel, early-stage research projects. In an 11th round of seed grant funding in 2025, 10 MIT faculty members received support to carry out their cutting-edge water and food research. Ali Aouad PhD ’17, assistant professor of operations management at the MIT Sloan School of Management, was one of those grantees. “I had searched before joining MIT what kind of research centers and initiatives were available that tried to coalesce research on food systems,” Aouad says. “And so, I was very excited about J-WAFS.” Aouad gathered more information about J-WAFS at the new faculty orientation session in August 2024, where he spoke to J-WAFS staff and learned about the program’s grant opportunities for water and food research. Later that fall semester, he attended a few J-WAFS seminars on agricultural economics and water resource management. That’s when Aouad knew that his project was perfectly aligned with the J-WAFS mission of securing humankind’s water and food.

    Aouad’s seed project focuses on food subsidies. With a background in operations research and an interest in digital platforms, much of his work has centered on aligning supply-side operations with heterogeneous customer preferences. 
Past projects include ones on retail and matching systems. “I started thinking that these types of demand-driven approaches may also be very relevant to important social challenges, particularly as they relate to food security,” Aouad says. Before starting his PhD at MIT, Aouad worked on projects that looked at subsidies for smallholder farmers in low- and middle-income countries. “I think in the back of my mind, I’ve always been fascinated by trying to solve these issues,” he noted.

    His seed grant project, Optimal subsidy design: Application to food assistance programs, aims to leverage data on preferences and purchasing habits from local grocery stores in India to inform food assistance policy and optimize the design of subsidies. Typical data collection systems, like point-of-sale terminals, are not readily available in India’s local groceries, making this type of data on low-income individuals hard to come by. “Mom-and-pop stores are extremely important last-mile operators when it comes to nutrition,” he explains. For this project, the research team gave local grocers point-of-sale scanners to track purchasing habits. “We aim to develop an algorithm that converts these transactions into some sort of ‘revelation’ of the individuals’ latent preferences,” says Aouad. “As such, we can model and optimize the food assistance programs — how much variety and flexibility is offered, taking into account the expected demand uptake.” He continues, “Now, of course, our ability to answer detailed design questions [across various products and prices] depends on the quality of our inference from the data, and so this is where we need more sophisticated and robust algorithms.”

    Following the data collection and model development, the ultimate goal of this research is to inform policy surrounding food assistance programs through an “optimization approach.” Aouad describes the complexities of using optimization to guide policy. 
“Policies are often informed by domain expertise, legacy systems, or political deliberation. A lot of researchers build rigorous evidence to inform food policy, but it’s fair to say that the kind of approach that I’m proposing in this research is not something that is commonly used. I see an opportunity for bringing a new approach and methodological tradition to a problem that has been central for policy for many decades.”

    The overall health of consumers is the reason food assistance programs exist, yet measuring long-term nutritional impacts and shifts in purchase behavior is difficult. Aouad notes that in past research, the short-term effects of food assistance interventions have been significant; however, these effects are often short-lived. “This is a fascinating question that I don’t think we will be able to address within the space of interventions that we will be considering. However, I think it is something I would like to capture in the research, and maybe develop hypotheses for future work around how we can shift nutrition-related behaviors in the long run.”

    While his project develops a new methodology to calibrate food assistance programs, large-scale applications are not promised. “A lot of what drives subsidy mechanisms and food assistance programs is also, quite frankly, how easy it is and how cost-effective it is to implement these policies in the first place,” comments Aouad. Cost and infrastructure barriers are unavoidable in this kind of policy research, as is the challenge of sustaining these programs. Aouad’s effort will provide insights into customer preferences and subsidy optimization in a pilot setup, but replicating this approach at full scale may be costly. 
Aouad hopes to gather proxy information from customers that would both feed into the model and provide insight into a more cost-effective way to collect data for large-scale implementation.

    There is still much work to be done to ensure food security for all, whether through advances in agriculture, food-assistance programs, or ways to boost adequate nutrition. As the 2026 seed grant deadline approaches, J-WAFS will continue its mission of supporting MIT faculty as they pursue innovative projects that have practical, real-world impacts on water and food system challenges.
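The inference step Aouad describes, converting raw point-of-sale transactions into a "revelation" of latent preferences, can be illustrated with a toy multinomial logit model. Everything below (item names, prices, weights, and the fitting routine) is invented for illustration; the study's actual algorithm is more sophisticated.

```python
import math
import random

# Toy sketch: recover latent item preferences from point-of-sale
# transactions using a multinomial logit choice model. All names and
# numbers are hypothetical, not from the study.

ITEMS = ["rice", "lentils", "oil", "sugar"]
PRICES = [40.0, 90.0, 120.0, 45.0]   # hypothetical prices per unit
PRICE_SENS = 0.01                    # assumed known price sensitivity

def choice_probs(weights):
    """Multinomial logit: utility = weight - PRICE_SENS * price."""
    utils = [w - PRICE_SENS * p for w, p in zip(weights, PRICES)]
    m = max(utils)
    exps = [math.exp(u - m) for u in utils]
    z = sum(exps)
    return [e / z for e in exps]

def sample_choice(probs, rng):
    r, acc = rng.random(), 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1  # guard against floating-point rounding

def fit_weights(choices, steps=500, lr=1.0):
    """Gradient ascent on the average log-likelihood (weights are
    identified only up to an additive constant)."""
    n = len(choices)
    freq = [choices.count(i) / n for i in range(len(ITEMS))]
    w = [0.0] * len(ITEMS)
    for _ in range(steps):
        probs = choice_probs(w)
        for i in range(len(ITEMS)):
            w[i] += lr * (freq[i] - probs[i])  # d(avg logL)/d w_i
    return w

rng = random.Random(0)
true_w = [2.0, 1.0, 0.5, 1.5]
true_probs = choice_probs(true_w)
data = [sample_choice(true_probs, rng) for _ in range(5000)]
est_w = fit_weights(data)
est_probs = choice_probs(est_w)  # should track empirical purchase shares
```

Once such a demand model is calibrated, the subsidy-design question becomes an optimization over which items and discounts to offer, with the fitted probabilities predicting uptake.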

  • How to reduce greenhouse gas emissions from ammonia production

    Ammonia is one of the most widely produced chemicals in the world, used mostly as fertilizer, but also for the production of some plastics, textiles, and other applications. Its production, through processes that require high heat and pressure, accounts for up to 20 percent of all the greenhouse gases from the entire chemical industry, so efforts have been underway worldwide to find ways to reduce those emissions.

    Now, researchers at MIT have come up with a clever way of combining two different methods of producing the compound that minimizes waste products and that, when combined with some other simple upgrades, could reduce the greenhouse gas emissions from production by as much as 63 percent, compared to the leading “low-emissions” approach being used today. The new approach is described in the journal Energy & Fuels, in a paper by MIT Energy Initiative (MITEI) Director William H. Green, graduate student Sayandeep Biswas, MITEI Director of Research Randall Field, and two others.

    “Ammonia has the most carbon dioxide emissions of any kind of chemical,” says Green, who is the Hoyt C. Hottel Professor in Chemical Engineering. “It’s a very important chemical,” he says, because its use as a fertilizer is crucial to being able to feed the world’s population.

    Until late in the 19th century, the most widely used source of nitrogen fertilizer was mined deposits of bat or bird guano, mostly from Chile, but that source was beginning to run out, and there were predictions that the world would soon run short of food to sustain the population. But then a new chemical process, called the Haber-Bosch process after its inventors, made it possible to make ammonia out of nitrogen from the air and hydrogen, mostly derived from methane. 
But both the burning of fossil fuels to provide the needed heat and the use of methane to make the hydrogen led to massive climate-warming emissions from the process.

    To address this, two newer variations of ammonia production have been developed: so-called “blue ammonia,” where the greenhouse gases are captured right at the factory and then sequestered deep underground, and “green ammonia,” produced by a different chemical pathway, using electricity instead of fossil fuels to electrolyze water to make hydrogen. Blue ammonia is already beginning to be used, with a few plants operating now in Louisiana, Green says, and the ammonia mostly being shipped to Japan, “so that’s already kind of commercial.” Other parts of the world are starting to use green ammonia, especially in places that have lots of hydropower, solar, or wind to provide inexpensive electricity, including a giant plant now under construction in Saudi Arabia. But in most places, both blue and green ammonia are still more expensive than the traditional fossil-fuel-based version, so many teams around the world have been working on ways to cut these costs as much as possible, so that the difference is small enough to be made up through tax subsidies or other incentives.

    The problem is growing, because as the population grows and as wealth increases, there will be ever-increasing demand for nitrogen fertilizer. At the same time, ammonia is a promising substitute fuel to power hard-to-decarbonize transportation such as cargo ships and heavy trucks, which could lead to even greater needs for the chemical. “It definitely works” as a transportation fuel, by powering fuel cells that have been demonstrated for use by everything from drones to barges and tugboats and trucks, Green says. 
“People think that the most likely market of that type would be for shipping,” he says, “because the downside of ammonia is it’s toxic and it’s smelly, and that makes it slightly dangerous to handle and to ship around.” So its best uses may be where it’s used in high volume and in relatively remote locations, like the high seas. In fact, the International Maritime Organization will soon be voting on new rules that might give a strong boost to the ammonia alternative for shipping.

    The key to the new proposed system is to combine the two existing approaches in one facility, with a blue ammonia factory next to a green ammonia factory. The process of generating hydrogen for the green ammonia plant leaves a lot of leftover oxygen that just gets vented to the air. Blue ammonia, on the other hand, uses a process called autothermal reforming that requires a source of pure oxygen, so if there’s a green ammonia plant next door, it can use that excess oxygen.

    “Putting them next to each other turns out to have significant economic value,” Green says. This synergy could help hybrid “blue-green ammonia” facilities serve as an important bridge toward a future where green ammonia, the cleanest version, could eventually dominate. But that future is likely decades away, Green says, so having the combined plants could be an important step along the way. “It might be a really long time before [green ammonia] is actually attractive” economically, he says. “Right now, it’s nowhere close, except in very special situations.” But the combined plants “could be a really appealing concept, and maybe a good way to start the industry,” because so far only small, standalone demonstration plants of the green process are being built. “If green or blue ammonia is going to become the new way of making ammonia, you need to find ways to make it relatively affordable in a lot of countries, with whatever resources they’ve got,” he says. 
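The oxygen synergy described above can be checked with simple stoichiometry: water electrolysis yields one O2 for every two H2, and the Haber-Bosch reaction consumes three H2 per two NH3, so a green ammonia plant vents a fixed, predictable amount of oxygen that a neighboring autothermal reformer could use instead. A rough sketch (pure stoichiometry, ignoring plant losses):

```python
# Back-of-envelope stoichiometry for the blue-green oxygen synergy.
# Green ammonia: electrolysis (2 H2O -> 2 H2 + O2) supplies hydrogen,
# then Haber-Bosch (N2 + 3 H2 -> 2 NH3) makes ammonia.
M_NH3 = 17.03  # g/mol
M_O2 = 32.00   # g/mol

def o2_byproduct_per_tonne_nh3():
    """Tonnes of O2 produced per tonne of green ammonia, by stoichiometry."""
    mol_nh3 = 1e6 / M_NH3        # mol of NH3 in one tonne
    mol_h2 = 1.5 * mol_nh3       # 3 H2 consumed per 2 NH3
    mol_o2 = 0.5 * mol_h2        # electrolysis yields 1 O2 per 2 H2
    return mol_o2 * M_O2 / 1e6   # grams back to tonnes

o2_per_tonne = o2_byproduct_per_tonne_nh3()  # roughly 1.4 t O2 per t NH3
```

That oxygen would otherwise be vented, while the blue plant next door would otherwise need an air separation unit to produce it, which is the source of the economic value Green describes.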
This new proposed combination, he says, “looks like a really good idea that can help push things along. Ultimately, there’s got to be a lot of green ammonia plants in a lot of places,” and starting out with the combined plants, which could be more affordable now, could help to make that happen. The team has filed for a patent on the process.

    Although the team did a detailed study of both the technology and the economics showing that the system has great promise, Green points out that “no one has ever built one. We did the analysis, it looks good, but surely when people build the first one, they’ll find funny little things that need some attention,” such as details of how to start up or shut down the process. “I would say there’s plenty of additional work to do to make it a real industry.” But this study, which shows the costs to be much more affordable than those of existing blue or green plants in isolation, “definitely encourages the possibility of people making the big investments that would be needed to really make this industry feasible.”

    This proposed integration of the two methods “improves efficiency, reduces greenhouse gas emissions, and lowers overall cost,” says Kevin van Geem, a professor in the Center for Sustainable Chemistry at Ghent University, who was not associated with this research. “The analysis is rigorous, with validated process models, transparent assumptions, and comparisons to literature benchmarks. By combining techno-economic analysis with emissions accounting, the work provides a credible and balanced view of the trade-offs.” He adds that, “given the scale of global ammonia production, such a reduction could have a highly impactful effect on decarbonizing one of the most emissions-intensive chemical industries.”

    The research team also included MIT postdoc Angiras Menon and MITEI research lead Guiyan Zang. The work was supported by IHI Japan through the MIT Energy Initiative and the Martin Family Society of Fellows for Sustainability.

  • New prediction model could improve the reliability of fusion power plants

    Tokamaks are machines that are meant to hold and harness the power of the sun. These fusion machines use powerful magnets to contain a plasma hotter than the sun’s core and push the plasma’s atoms to fuse and release energy. If tokamaks can operate safely and efficiently, the machines could one day provide clean and limitless fusion energy.

    Today, there are a number of experimental tokamaks in operation around the world, with more underway. Most are small-scale research machines built to investigate how the devices can spin up plasma and harness its energy. One of the challenges tokamaks face is how to safely and reliably turn off a plasma current that is circulating at speeds of up to 100 kilometers per second, at temperatures of over 100 million degrees Celsius. Such “rampdowns” are necessary when a plasma becomes unstable. To prevent the plasma from further disrupting and potentially damaging the device’s interior, operators ramp down the plasma current. But occasionally the rampdown itself can destabilize the plasma. In some machines, rampdowns have caused scrapes and scarring to the tokamak’s interior — minor damage that still requires considerable time and resources to repair.

    Now, scientists at MIT have developed a method to predict how plasma in a tokamak will behave during a rampdown. The team combined machine-learning tools with a physics-based model of plasma dynamics to simulate a plasma’s behavior and any instabilities that may arise as the plasma is ramped down and turned off. The researchers trained and tested the new model on plasma data from an experimental tokamak in Switzerland. They found the method quickly learned how plasma would evolve as it was ramped down in different ways. What’s more, the method achieved a high level of accuracy using a relatively small amount of data. 
This training efficiency is promising, given that each experimental run of a tokamak is expensive and quality data is limited as a result.

    The new model, which the team highlights this week in an open-access Nature Communications paper, could improve the safety and reliability of future fusion power plants. “For fusion to be a useful energy source it’s going to have to be reliable,” says lead author Allen Wang, a graduate student in aeronautics and astronautics and a member of the Disruptions Group at MIT’s Plasma Science and Fusion Center (PSFC). “To be reliable, we need to get good at managing our plasmas.”

    The study’s MIT co-authors include PSFC Principal Research Scientist and Disruptions Group leader Cristina Rea, and members of the Laboratory for Information and Decision Systems (LIDS) Oswin So, Charles Dawson, and Professor Chuchu Fan, along with Mark (Dan) Boyer of Commonwealth Fusion Systems and collaborators from the Swiss Plasma Center in Switzerland.

    “A delicate balance”

    Tokamaks are experimental fusion devices that were first built in the Soviet Union in the 1950s. The device gets its name from a Russian acronym that translates to “toroidal chamber with magnetic coils.” Just as its name describes, a tokamak is toroidal, or donut-shaped, and uses powerful magnets to contain and spin up a gas to temperatures and energies high enough that atoms in the resulting plasma can fuse and release energy.

    Today, tokamak experiments are relatively low-energy in scale, with few approaching the size and output needed to generate safe, reliable, usable energy. Disruptions in experimental, low-energy tokamaks are generally not an issue. But as fusion machines scale up to grid-scale dimensions, controlling much higher-energy plasmas at all phases will be paramount to maintaining a machine’s safe and efficient operation. “Uncontrolled plasma terminations, even during rampdown, can generate intense heat fluxes damaging the internal walls,” Wang notes. 
“Quite often, especially with the high-performance plasmas, rampdowns actually can push the plasma closer to some instability limits. So, it’s a delicate balance. And there’s a lot of focus now on how to manage instabilities so that we can routinely and reliably take these plasmas and safely power them down. And there are relatively few studies done on how to do that well.”

    Bringing down the pulse

    Wang and his colleagues developed a model to predict how a plasma will behave during tokamak rampdown. While they could have simply applied machine-learning tools such as a neural network to learn signs of instabilities in plasma data, “you would need an ungodly amount of data” for such tools to discern the very subtle and ephemeral changes in extremely high-temperature, high-energy plasmas, Wang says. Instead, the researchers paired a neural network with an existing model that simulates plasma dynamics according to the fundamental rules of physics. With this combination of machine learning and a physics-based plasma simulation, the team found that only a couple hundred pulses at low performance, and a small handful of pulses at high performance, were sufficient to train and validate the new model.

    The data they used for the new study came from the TCV, the Swiss “variable configuration tokamak” operated by the Swiss Plasma Center at EPFL (the Swiss Federal Institute of Technology Lausanne). The TCV is a small experimental fusion device used for research purposes, often as a test bed for next-generation device solutions. Wang used the data from several hundred TCV plasma pulses that included properties of the plasma such as its temperature and energies during each pulse’s ramp-up, run, and ramp-down. 
He trained the new model on this data, then tested it and found it was able to accurately predict the plasma’s evolution given the initial conditions of a particular tokamak run. The researchers also developed an algorithm to translate the model’s predictions into practical “trajectories,” or plasma-managing instructions that a tokamak controller can automatically carry out to, for instance, adjust the magnets or temperature to maintain the plasma’s stability. They implemented the algorithm on several TCV runs and found that it produced trajectories that safely ramped down a plasma pulse, in some cases faster and without disruptions compared to runs without the new method.

    “At some point the plasma will always go away, but we call it a disruption when the plasma goes away at high energy. Here, we ramped the energy down to nothing,” Wang notes. “We did it a number of times. And we did things much better across the board. So, we had statistical confidence that we made things better.”

    The work was supported in part by Commonwealth Fusion Systems (CFS), an MIT spinout that intends to build the world’s first compact, grid-scale fusion power plant. The company is developing a demo tokamak, SPARC, designed to produce net-energy plasma, meaning that it should generate more energy than it takes to heat up the plasma. Wang and his colleagues are working with CFS on ways that the new prediction model and tools like it can better predict plasma behavior and prevent costly disruptions to enable safe and reliable fusion power.

    “We’re trying to tackle the science questions to make fusion routinely useful,” Wang says. “What we’ve done here is the start of what is still a long journey. But I think we’ve made some nice progress.”

    Additional support for the research came from the EUROfusion Consortium, via the Euratom Research and Training Program, and from the Swiss State Secretariat for Education, Research, and Innovation.
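The hybrid idea at the heart of the study, a physics-based simulator corrected by a small learned component, can be sketched in miniature. Here the "physics model" and "reality" are toy one-variable decay laws and the learned part is a single linear residual coefficient fitted by least squares; the paper's model couples a neural network to a real plasma-dynamics simulator, but the division of labor is the same: physics carries most of the structure, so very little data is needed to learn the correction.

```python
# Toy sketch of a physics-informed hybrid model. All dynamics and
# numbers here are invented; they stand in for a plasma simulator
# plus a learned neural-network correction.

def physics_step(current, dt=0.01, tau=1.0):
    """Imperfect physics model: current decays with time constant tau."""
    return current - dt * current / tau

def true_step(current, dt=0.01):
    """'Reality': the decay is faster than the physics model assumes."""
    return current - dt * current / 0.8

def fit_residual(pairs, dt=0.01):
    """Least-squares fit of a linear residual r = c * current * dt
    added to the physics step, from observed one-step transitions."""
    num = sum((nxt - physics_step(cur, dt)) * cur * dt for cur, nxt in pairs)
    den = sum((cur * dt) ** 2 for cur, _ in pairs)
    return num / den

def hybrid_step(current, c, dt=0.01):
    """Physics step plus the learned correction."""
    return physics_step(current, dt) + c * current * dt

# Collect a short "experimental" rampdown trajectory.
current, pairs = 1.0, []
for _ in range(200):
    nxt = true_step(current)
    pairs.append((current, nxt))
    current = nxt

c = fit_residual(pairs)  # recovers the model-reality mismatch
```

Because the physics model already captures the decay structure, one short trajectory suffices to pin down the correction, a cartoon of why the team needed only a few hundred pulses rather than an "ungodly amount of data."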

  • Concrete “battery” developed at MIT now packs 10 times the power

    Concrete already builds our world, and now it’s one step closer to powering it, too. Made by combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, electron-conducting carbon concrete (ec3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy. In other words, the concrete around us could one day double as giant “batteries.”

    As MIT researchers report in a new PNAS paper, optimized electrolytes and manufacturing processes have increased the energy storage capacity of the latest ec3 supercapacitors by an order of magnitude. In 2023, storing enough energy to meet the daily needs of the average home would have required about 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement. Now, with the improved electrolyte, that same task can be achieved with about 5 cubic meters, the volume of a typical basement wall.

    “A key to the sustainability of concrete is the development of ‘multifunctional concrete,’ which integrates functionalities like this energy storage, self-healing, and carbon sequestration. Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?” asks Admir Masic, lead author of the new study, co-director of the MIT Electron-Conducting Carbon-Cement-Based Materials Hub (EC³ Hub), and associate professor of civil and environmental engineering (CEE) at MIT.

    The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes. 
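A quick back-of-envelope check of the figures above, taking the article's numbers at face value:

```python
# Sanity-checking the reported ec3 storage figures.
NEW_DENSITY_KWH_PER_M3 = 2.0  # reported: over 2 kWh per cubic meter
VOLUME_NOW_M3 = 5.0           # volume needed for a home's daily use today
VOLUME_2023_M3 = 45.0         # volume needed with the 2023 design

# Implied daily household demand used in the comparison.
daily_home_kwh = NEW_DENSITY_KWH_PER_M3 * VOLUME_NOW_M3

# Volume reduction factor: 45 / 5 = 9x, i.e. roughly an order of magnitude.
improvement = VOLUME_2023_M3 / VOLUME_NOW_M3

# Implied 2023 energy density, for comparison.
old_density_kwh_per_m3 = daily_home_kwh / VOLUME_2023_M3
```

The numbers are internally consistent: a ninefold volume reduction matches the "order of magnitude" capacity gain, and the implied household figure (about 10 kWh/day) sits in a plausible range for an average home.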
Using focused ion beams for the sequential removal of thin layers of the ec3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the team across the EC³ Hub and MIT Concrete Sustainability Hub was able to reconstruct the conductive nanonetwork at the highest resolution yet. This approach allowed the team to discover that the network is essentially a fractal-like “web” that surrounds ec3 pores, which is what allows the electrolyte to infiltrate and current to flow through the system. “Understanding how these materials ‘assemble’ themselves at the nanoscale is key to achieving these new functionalities,” adds Masic.

    Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, “We found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”

    At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.

    The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3 — about the size of a refrigerator — can store over 2 kilowatt-hours of energy. 
That’s about enough to power an actual refrigerator for a day.

    While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements — from slabs and walls to domes and vaults — and last as long as the structure itself. “The Ancient Romans made great advances in concrete construction. Massive structures like the Pantheon stand to this day without reinforcement. If we keep up their spirit of combining material science with architectural vision, we could be at the brink of a new architectural revolution with multifunctional concretes like ec3,” proposes Masic.

    Taking inspiration from Roman architecture, the team built a miniature ec3 arch to show how structural form and energy storage can work together. Operating at 9 volts, the arch supported its own weight and additional load while powering an LED light. However, something unique happened when the load on the arch increased: the light flickered. This is likely due to the way stress impacts electrical contacts or the distribution of charges. “There may be a kind of self-monitoring capacity here. If we think of an ec3 arch at architectural scale, its output may fluctuate when it’s impacted by a stressor like high winds. We may be able to use this as a signal of when and to what extent a structure is stressed, or monitor its overall health in real time,” envisions Masic.

    The latest developments in ec3 technology bring it a step closer to real-world scalability. It has already been used to heat sidewalk slabs in Sapporo, Japan, thanks to its thermally conductive properties, representing a potential alternative to salting. “With these higher energy densities and demonstrated value across a broader application space, we now have a powerful and flexible tool that can help us address a wide range of persistent energy challenges,” explains Stefaniuk. “One of our biggest motivations was to help enable the renewable energy transition. 
Solar power, for example, has come a long way in terms of efficiency. However, it can only generate power when there’s enough sunlight. So, the question becomes: How do you meet your energy needs at night, or on cloudy days?”

    Franz-Josef Ulm, EC³ Hub co-director and CEE professor, continues the thread: “The answer is that you need a way to store and release energy. This has usually meant a battery, which often relies on scarce or harmful materials. We believe that ec3 is a viable substitute, letting our buildings and infrastructure meet our energy storage needs.” The team is working toward applications like parking spaces and roads that could charge electric vehicles, as well as homes that can operate fully off the grid.

    “What excites us most is that we’ve taken a material as ancient as concrete and shown that it can do something entirely new,” says James Weaver, a co-author on the paper who is an associate professor of design technology and materials science and engineering at Cornell University, as well as a former EC³ Hub researcher. “By combining modern nanoscience with an ancient building block of civilization, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”

  • Palladium filters could enable cheaper, more efficient generation of hydrogen fuel

    Palladium is one of the keys to jump-starting a hydrogen-based energy economy. The silvery metal is a natural gatekeeper against every gas except hydrogen, which it readily lets through. For its exceptional selectivity, palladium is considered one of the most effective materials at filtering gas mixtures to produce pure hydrogen.

    Today, palladium-based membranes are used at commercial scale to provide pure hydrogen for semiconductor manufacturing, food processing, and fertilizer production, among other applications in which the membranes operate at modest temperatures. If palladium membranes get much hotter than around 800 kelvins, they can break down.

    Now, MIT engineers have developed a new palladium membrane that remains resilient at much higher temperatures. Rather than being made as a continuous film, as most membranes are, the new design is made from palladium that is deposited as “plugs” into the pores of an underlying supporting material. At high temperatures, the snug-fitting plugs remain stable and continue separating out hydrogen, rather than degrading as a surface film would. The thermally stable design opens opportunities for membranes to be used in hydrogen-fuel-generating technologies such as compact steam methane reforming and ammonia cracking — technologies that are designed to operate at much higher temperatures to produce hydrogen for zero-carbon-emitting fuel and electricity.

    “With further work on scaling and validating performance under realistic industrial feeds, the design could represent a promising route toward practical membranes for high-temperature hydrogen production,” says Lohyun Kim PhD ’24, a former graduate student in MIT’s Department of Mechanical Engineering. Kim and his colleagues report details of the new membrane in a study appearing today in the journal Advanced Functional Materials. 
The study’s co-authors are Randall Field, director of research at the MIT Energy Initiative (MITEI); former MIT chemical engineering graduate student Chun Man Chow PhD ’23; Rohit Karnik, the Jameel Professor in the Department of Mechanical Engineering at MIT and the director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS); and Aaron Persad, a former MIT research scientist in mechanical engineering who is now an assistant professor at the University of Maryland Eastern Shore.

    Compact future

    The team’s new design came out of a MITEI project related to fusion energy. Future fusion power plants, such as the one MIT spinout Commonwealth Fusion Systems is designing, will involve circulating the hydrogen isotopes deuterium and tritium at extremely high temperatures to produce energy from the fusion of those isotopes. The reactions inevitably produce other gases that will have to be separated, and the hydrogen isotopes will be recirculated into the main reactor for further fusion.

    Similar issues arise in a number of other processes for producing hydrogen, where gases must be separated and recirculated back into a reactor. Concepts for such recirculating systems would require first cooling down the gas before it can pass through hydrogen-separating membranes — an expensive and energy-intensive step that would involve additional machinery and hardware. “One of the questions we were thinking about is: Can we develop membranes which could be as close to the reactor as possible, and operate at higher temperatures, so we don’t have to pull out the gas and cool it down first?” Karnik says. “It would enable more energy-efficient, and therefore cheaper and more compact, fusion systems.”

    The researchers looked for ways to improve the temperature resistance of palladium membranes. Palladium is the most effective metal used today to separate hydrogen from a variety of gas mixtures. 
It naturally attracts hydrogen molecules (H2) to its surface, where the metal’s electrons interact with and weaken the molecule’s bonds, causing H2 to temporarily break apart into its respective atoms. The individual atoms then diffuse through the metal and join back up on the other side as pure hydrogen.

Palladium is highly effective at permeating hydrogen, and only hydrogen, from streams of various gases. But conventional membranes typically can operate at temperatures only up to about 800 kelvins before the film starts to form holes or clump up into droplets, allowing other gases to flow through.

Plugging in

Karnik, Kim, and their colleagues took a different design approach. They observed that at high temperatures, palladium starts to ball up. In engineering terms, the material is acting to reduce its surface energy: palladium, like most other materials and even water, will pull apart and form droplets, the shape with the smallest surface energy. The lower the surface energy, the more stable the material is against further heating.

This gave the team an idea: If a supporting material’s pores could be “plugged” with deposits of palladium — essentially droplets already at their lowest surface energy — the tight quarters might substantially increase palladium’s heat tolerance while preserving the membrane’s selectivity for hydrogen.

To test this idea, they fabricated small chip-sized membrane samples using a porous silica supporting layer (each pore measuring about half a micron wide), onto which they deposited a very thin layer of palladium. They applied techniques to essentially grow the palladium into the pores, then polished down the surface to remove the surface palladium layer and leave palladium only inside the pores.

They then placed the samples in a custom-built apparatus and flowed hydrogen-containing gas mixtures of various compositions and temperatures through them to test their separation performance.
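The dissociation and diffusion transport described above is commonly modeled with Sieverts' law, in which hydrogen flux through the metal scales with the square root of the hydrogen partial pressure. A minimal sketch, using illustrative permeability and pressure values rather than numbers from the study:

```python
import math

def hydrogen_flux(permeability, thickness_m, p_feed_pa, p_perm_pa):
    """Sieverts' law: J = (k / L) * (sqrt(p_feed) - sqrt(p_perm)).

    The square-root pressure dependence arises because H2 dissociates
    into atoms before diffusing through the metal. Returns flux in
    mol H2 / (m^2 * s).
    """
    return (permeability / thickness_m) * (
        math.sqrt(p_feed_pa) - math.sqrt(p_perm_pa)
    )

# Illustrative values only: palladium permeability is on the order of
# 1e-8 mol m^-1 s^-1 Pa^-0.5 at elevated temperature.
k = 1e-8
flux_plug = hydrogen_flux(k, 1e-6, 1e5, 1e3)  # ~1-micron palladium "plug"
flux_film = hydrogen_flux(k, 1e-5, 1e5, 1e3)  # 10x-thicker continuous film

# For the same driving pressures, flux is inversely proportional to thickness.
assert abs(flux_plug / flux_film - 10) < 1e-9
```

The square-root dependence is the signature of atomic, rather than molecular, transport through the metal, which is why palladium passes hydrogen and blocks everything else.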
The membranes remained stable and continued to separate hydrogen from other gases even after experiencing temperatures of up to 1,000 kelvins for over 100 hours — a significant improvement over conventional film-based membranes.

“The use of palladium film membranes is generally limited to below around 800 kelvins, at which point they degrade,” Kim says. “Our plug design therefore extends palladium’s effective heat resilience by roughly at least 200 kelvins and maintains integrity far longer under extreme conditions.”

These conditions are within the range of hydrogen-generating technologies such as steam methane reforming and ammonia cracking.

Steam methane reforming is an established process that has required complex, energy-intensive systems to preprocess methane to a form from which pure hydrogen can be extracted. Such preprocessing steps could be replaced with a compact “membrane reactor,” through which methane gas would flow directly, and the membrane inside would filter out pure hydrogen. Such reactors would significantly cut down the size, complexity, and cost of producing hydrogen from steam methane reforming, and Kim estimates a membrane would have to work reliably at temperatures of up to nearly 1,000 kelvins. The team’s new membrane could work well within such conditions.

Ammonia cracking is another way to produce hydrogen, by “cracking,” or breaking apart, ammonia. Because ammonia is very stable in liquid form, scientists envision that it could be used as a carrier for hydrogen and safely transported to a hydrogen fuel station, where it could be fed into a membrane reactor that again pulls out hydrogen and pumps it directly into a fuel cell vehicle. Ammonia cracking is still largely in pilot and demonstration stages, and Kim says any membrane in an ammonia cracking reactor would likely operate at temperatures of around 800 kelvins — within the range of the group’s new plug-based design.

Karnik emphasizes that their results are just a start.
Adopting the membrane into working reactors will require further development and testing to ensure it remains reliable over much longer periods of time.

“We showed that instead of making a film, if you make discretized nanostructures you can get much more thermally stable membranes,” Karnik says. “It provides a pathway for designing membranes for extreme temperatures, with the added possibility of using smaller amounts of expensive palladium, toward making hydrogen production more efficient and affordable. There is potential there.”

This work was supported by Eni S.p.A. via the MIT Energy Initiative.


    A cysteine-rich diet may promote regeneration of the intestinal lining, study suggests

A diet rich in the amino acid cysteine may have rejuvenating effects in the small intestine, according to a new study from MIT. This amino acid, the researchers discovered, can turn on an immune signaling pathway that helps stem cells to regrow new intestinal tissue.

This enhanced regeneration may help to heal injuries from radiation, which often occur in patients undergoing radiation therapy for cancer. The research was conducted in mice, but if future research shows similar results in humans, then delivering elevated quantities of cysteine, through diet or supplements, could offer a new strategy to help damaged tissue heal faster, the researchers say.

“The study suggests that if we give these patients a cysteine-rich diet or cysteine supplementation, perhaps we can dampen some of the chemotherapy or radiation-induced injury,” says Omer Yilmaz, director of the MIT Stem Cell Initiative, an associate professor of biology at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research. “The beauty here is we’re not using a synthetic molecule; we’re exploiting a natural dietary compound.”

While previous research has shown that certain types of diets, including low-calorie diets, can enhance intestinal stem cell activity, the new study is the first to identify a single nutrient that can help intestinal cells to regenerate.

Yilmaz is the senior author of the study, which appears today in Nature. Koch Institute postdoc Fangtao Chi is the paper’s lead author.

Boosting regeneration

It is well-established that diet can affect overall health: High-fat diets can lead to obesity, diabetes, and other health problems, while low-calorie diets have been shown to extend lifespans in many species.
In recent years, Yilmaz’s lab has investigated how different types of diets influence stem cell regeneration, and found that high-fat diets, as well as short periods of fasting, can enhance stem cell activity in different ways.

“We know that macro diets such as high-sugar diets, high-fat diets, and low-calorie diets have a clear impact on health. But at the granular level, we know much less about how individual nutrients impact stem cell fate decisions, as well as tissue function and overall tissue health,” Yilmaz says.

In their new study, the researchers began by feeding mice a diet high in one of 20 different amino acids, the building blocks of proteins. For each group, they measured how the diet affected intestinal stem cell regeneration. Among these amino acids, cysteine had the most dramatic effects on stem cells and progenitor cells (immature cells that differentiate into adult intestinal cells).

Further studies revealed that cysteine initiates a chain of events leading to the activation of a population of immune cells called CD8 T cells. When cells in the lining of the intestine absorb cysteine from digested food, they convert it into CoA, a cofactor that is released into the mucosal lining of the intestine. There, CD8 T cells absorb CoA, which stimulates them to begin proliferating and producing a cytokine called IL-22.

IL-22 is an important player in the regulation of intestinal stem cell regeneration, but until now, it wasn’t known that CD8 T cells can produce it to boost intestinal stem cells. Once activated, those IL-22-releasing T cells are primed to help combat any kind of injury that could occur within the intestinal lining.

“What’s really exciting here is that feeding mice a cysteine-rich diet leads to the expansion of an immune cell population that we typically don’t associate with IL-22 production and the regulation of intestinal stemness,” Yilmaz says.
“What happens in a cysteine-rich diet is that the pool of cells that make IL-22 increases, particularly the CD8 T-cell fraction.”

These T cells tend to congregate within the lining of the intestine, so they are already in position when needed. The researchers found that the stimulation of CD8 T cells occurred primarily in the small intestine, not in any other part of the digestive tract, which they believe is because most of the protein that we consume is absorbed by the small intestine.

Healing the intestine

In this study, the researchers showed that regeneration stimulated by a cysteine-rich diet could help to repair radiation damage to the intestinal lining. Also, in work that has not been published yet, they showed that a high-cysteine diet had a regenerative effect following treatment with a chemotherapy drug called 5-fluorouracil. This drug, which is used to treat colon and pancreatic cancers, can also damage the intestinal lining.

Cysteine is found in many high-protein foods, including meat, dairy products, legumes, and nuts. The body can also synthesize its own cysteine, by converting the amino acid methionine to cysteine — a process that takes place in the liver. However, cysteine produced in the liver is distributed through the entire body and doesn’t lead to a buildup in the small intestine the way that consuming cysteine in the diet does.

“With our high-cysteine diet, the gut is the first place that sees a high amount of cysteine,” Chi says.

Cysteine has been previously shown to have antioxidant effects, which are also beneficial, but this study is the first to demonstrate its effect on intestinal stem cell regeneration. The researchers now hope to study whether it may also help other types of stem cells regenerate new tissues.
In one ongoing study, they are investigating whether cysteine might stimulate hair follicle regeneration. They also plan to further investigate some of the other amino acids that appear to influence stem cell regeneration.

“I think we’re going to uncover multiple new mechanisms for how these amino acids regulate cell fate decisions and gut health in the small intestine and colon,” Yilmaz says.

The research was funded, in part, by the National Institutes of Health, the V Foundation, the Kathy and Curt Marble Cancer Research Award, the Koch Institute-Dana-Farber/Harvard Cancer Center Bridge Project, the American Federation for Aging Research, the MIT Stem Cell Initiative, and the Koch Institute Support (core) Grant from the National Cancer Institute.


    3 Questions: Addressing the world’s most pressing challenges

The Center for International Studies (CIS) empowers students, faculty, and scholars to bring MIT’s interdisciplinary style of research and scholarship to address complex global challenges. In this Q&A, Mihaela Papa, the center’s director of research and a principal research scientist at MIT, describes her role as well as research within the BRICS Lab at MIT — a reference to the BRICS intergovernmental organization, which comprises the nations of Brazil, Russia, India, China, South Africa, Egypt, Ethiopia, Indonesia, Iran, and the United Arab Emirates. She also discusses the ongoing mission of CIS to tackle the world’s most complex challenges in new and creative ways.

Q: What is your role at CIS, and what are some of your key accomplishments since joining the center just over a year ago?

A: I serve as director of research and principal research scientist at CIS, a role that bridges management and scholarship. I oversee grant and fellowship programs, spearhead new research initiatives, build research communities across our center’s area programs and MIT schools, and mentor the next generation of scholars. My academic expertise is in international relations, and I publish on global governance and sustainable development, particularly through my new BRICS Lab.

This past year, I focused on building collaborative platforms that highlight CIS’ role as an interdisciplinary hub and expand its research reach. With Evan Lieberman, the director of CIS, I launched the CIS Global Research and Policy Seminar series to address current challenges in global development and governance, foster cross-disciplinary dialogue, and connect theoretical insights to policy solutions. We also convened a Climate Adaptation Workshop, which examined promising strategies for financing adaptation and advancing policy innovation.
We documented the outcomes in a workshop report that outlines a broader research agenda contributing to MIT’s larger climate mission.

In parallel, I have been reviewing CIS’ grant-making programs to improve how we serve our community, while also supporting regional initiatives such as research planning related to Ukraine. Together with the center’s MIT-Brazil faculty director Brad Olsen, I secured a MITHIC [MIT Human Insight Collaboration] Connectivity grant to build an MIT Amazonia research community that connects MIT scholars with regional partners and strengthens collaboration across the Amazon. Finally, I launched the BRICS Lab to analyze transformations in global governance, with ongoing research on BRICS and food security and on data centers in BRICS.

Q: Tell us more about the BRICS Lab.

A: The BRICS countries comprise the majority of the world’s population and an expanding share of the global economy. [Originally comprising Brazil, Russia, India, and China, BRICS currently includes 11 nations.] As a group, they carry the collective weight to shape international rules, influence global markets, and redefine norms — yet the question remains: Will they use this power effectively? The BRICS Lab explores the implications of the bloc’s rise for international cooperation and its role in reshaping global politics. Our work focuses on three areas: the design and strategic use of informal groups like BRICS in world affairs; the coalition’s potential to address major challenges such as food security, climate change, and artificial intelligence; and the implications of U.S. policy toward BRICS for the future of multilateralism.

Q: What are the center’s biggest research priorities right now?

A: Our center was founded in response to rising geopolitical tensions and the urgent need for policy rooted in rigorous, evidence-based research. Since then, we have grown into a hub that combines interdisciplinary scholarship and actively engages with policymakers and the public.
Today, as in our early years, the center brings together exceptional researchers with the ambition to address the world’s most pressing challenges in new and creative ways.

Our core focus spans security, development, and human dignity. Security studies have been a priority for the center, and our new nuclear security programming advances this work while training the next generation of scholars in this critical field. On the development front, our work has explored how societies manage diverse populations, navigate international migration, and engage with human rights and the changing patterns of regime dynamics.

We are pursuing new research in three areas. First, on climate change, we seek to understand how societies confront environmental risks and harms, from insurance to water and food security in the international context. Second, we examine shifting patterns of global governance as rising powers set new agendas and take on greater responsibilities in the international system. Finally, we are initiating research on the impact of AI — how it reshapes governance across international relations, what the role of AI corporations is, and how AI-related risks can be managed.

As we approach our 75th anniversary in 2026, we are excited to bring researchers together to spark bold ideas that open new possibilities for the future.


    Responding to the climate impact of generative AI

In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.

The energy demands of generative AI are expected to continue increasing dramatically over the next decade.

For instance, an April 2025 report from the International Energy Agency predicts that the global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.

Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.

These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.

Considering carbon emissions

Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions produced by the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.

Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon.
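The scale of the figures above can be checked with simple arithmetic; the inputs below are the numbers quoted in the article:

```python
# Back-of-the-envelope checks on the figures quoted above.
HOURS_PER_YEAR = 8760

demand_twh = 945            # projected data-center demand by 2030 (IEA)
added_co2_tons = 220e6      # added emissions from fossil generation (Goldman Sachs)
co2_per_trip_tons = 1.0     # ~1 ton CO2 per 5,000 miles of driving

# 945 TWh per year corresponds to a continuous draw of roughly 108 GW.
average_draw_gw = demand_twh * 1e12 / HOURS_PER_YEAR / 1e9

# The added emissions equal about 220 million 5,000-mile car trips per year.
equivalent_trips = added_co2_tons / co2_per_trip_tons

assert 105 < average_draw_gw < 110
assert equivalent_trips == 220e6
```

Put differently, the projected demand is the steady output of well over a hundred large power plants, which is why the fuel mix serving that demand matters so much.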
In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)

Plus, data centers are enormous buildings — the world’s largest, the China Telecom-Inner Mongolia Information Park, engulfs roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds.

“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.

Reducing operational carbon emissions

When it comes to reducing the operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.

“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.

In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.

Another strategy is to use less energy-intensive computing hardware.

Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually need many GPUs working simultaneously.
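The “turning down” result above is, at heart, energy accounting: energy is power multiplied by runtime, so a deep power cap can pay off even if the job runs somewhat longer. A hypothetical sketch; the power draws and the 10 percent slowdown are illustrative assumptions, not measurements from the Supercomputing Center:

```python
def job_energy_kwh(power_kw, runtime_hours):
    """Energy consumed by a job: power times runtime."""
    return power_kw * runtime_hours

# Hypothetical training job on a single accelerator.
full_power_kw = 0.7        # uncapped draw (assumed)
capped_power_kw = 0.21     # "turned down" to ~three-tenths the draw
runtime_hours = 100.0
runtime_penalty = 1.1      # assumed 10 percent slowdown under the cap

baseline = job_energy_kwh(full_power_kw, runtime_hours)
capped = job_energy_kwh(capped_power_kw, runtime_hours * runtime_penalty)

# Even with the slowdown, the capped run uses about a third of the energy.
savings_fraction = 1 - capped / baseline
assert savings_fraction > 0.6
```

Under these assumed numbers the capped job uses 23.1 kWh instead of 70 kWh; the cap wins as long as the slowdown stays well below the reduction in power draw.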
The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.

But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.

There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.

Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.

“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.

Researchers can also take advantage of efficiency-boosting measures. For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project. By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.

Leveraging efficiency improvements

Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models. Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.

“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.

Even more significant, his group’s research indicates that efficiency gains from new model architectures, which can solve complex problems faster while consuming less energy to achieve the same or better results, are doubling every eight or nine months.

Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements. These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.

“If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single-most important thing you can do to reduce the environmental costs of AI,” Thompson says.

Maximizing energy savings

While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.

“The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.

Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions.
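The hour-to-hour variation Gadepally describes is what carbon-aware scheduling exploits: a deferrable job is simply shifted to the window with the cleanest grid mix. A minimal sketch with made-up hourly intensity values:

```python
def best_start_hour(intensity_g_per_kwh, job_hours):
    """Pick the start hour minimizing total grid carbon intensity
    summed over a job's window (wrapping around midnight)."""
    n = len(intensity_g_per_kwh)
    costs = {
        start: sum(intensity_g_per_kwh[(start + h) % n] for h in range(job_hours))
        for start in range(n)
    }
    return min(costs, key=costs.get)

# Made-up 24-hour grid carbon intensities (gCO2 per kWh); midday is
# cleanest in this example because of solar generation.
intensity = [520, 510, 500, 495, 490, 480, 450, 400,
             340, 280, 230, 200, 190, 195, 220, 270,
             330, 400, 460, 500, 520, 530, 530, 525]

start = best_start_hour(intensity, job_hours=4)
assert start == 11  # the 4-hour window starting at 11:00 is cleanest here
```

A real scheduler would use forecast grid-intensity signals rather than a fixed table, and would weigh deadlines and hardware utilization alongside emissions.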
For instance, some generative AI workloads don’t need to be performed in their entirety at the same time. Splitting computing operations so that some are performed later, when more of the electricity fed into the grid comes from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist in the MIT Energy Initiative.

Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.

“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.

He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.

The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed. With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.

“Long-duration energy storage could be a game-changer here because we can design operations that really change the emission mix of the system to rely more on renewable energy,” Deka says.

In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs. Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Lulea, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.

Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon, where they could potentially be operated with nearly all renewable energy.

AI-based solutions

Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship. The local, state, and federal review processes required for new renewable energy projects can take years.

Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid. For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.

And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role. “Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.

For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities. It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.

By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says. To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score. The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits in the future.

At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.

“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.