More stories


    Unlocking ammonia as a fuel source for heavy industry

    At a high level, ammonia seems like a dream fuel: It’s carbon-free, energy-dense, and easier to move and store than hydrogen. Ammonia is also already manufactured and transported at scale, meaning it could transform energy systems using existing infrastructure. But burning ammonia creates dangerous nitrogen oxides, and splitting ammonia molecules to create hydrogen fuel typically requires lots of energy and specialized engines.

    The startup Amogy, founded by four MIT alumni, believes it has the technology to finally unlock ammonia as a major fuel source. The company has developed a catalyst it says can split — or “crack” — ammonia into hydrogen and nitrogen up to 70 percent more efficiently than today’s state-of-the-art systems. The company plans to sell its catalysts as well as modular systems, including fuel cells and engines, that convert ammonia directly to power. Those systems don’t burn or combust ammonia, and thus bypass the health concerns related to nitrogen oxides.

    Since Amogy’s founding in 2020, the company has used its ammonia-cracking technology to create the world’s first ammonia-powered drone, tractor, truck, and tugboat. It has also attracted partnerships with industry leaders including Samsung, Saudi Aramco, KBR, and Hyundai, raising more than $300 million along the way.

    “No one has showcased that ammonia can be used to power things at the scale of ships and trucks like us,” says CEO Seonghoon Woo PhD ’15, who founded the company with Hyunho Kim PhD ’18, Jongwon Choi PhD ’17, and Young Suk Jo SM ’13, PhD ’16. “We’ve demonstrated this approach works and is scalable.”

    Earlier this year, Amogy completed a research and manufacturing facility in Houston and announced a pilot deployment of its catalyst with the global engineering firm JGC Holdings Corporation. Now, with a manufacturing contract secured with Samsung Heavy Industries, Amogy is set to start delivering more of its systems to customers next year.
The company will deploy a 1-megawatt ammonia-to-power pilot project with the South Korean city of Pohang in 2026, with plans to scale up to 40 megawatts at that site by 2028 or 2029. Woo says dozens of other projects with multinational corporations are in the works.

Because of the power density advantages of ammonia over renewables and batteries, the company is targeting power-hungry industries like maritime shipping, power generation, construction, and mining for its early systems.

“This is only the beginning,” Woo says. “We’ve worked hard to build the technology and the foundation of our company, but the real value will be generated as we scale. We’ve proved the potential for ammonia to decarbonize heavy industry, and now we really want to accelerate adoption of our technology. We’re thinking long term about the energy transition.”

Unlocking a new fuel source

Woo completed his PhD in MIT’s Department of Materials Science and Engineering before his eventual co-founders, Kim, Choi, and Jo, completed their PhDs in MIT’s Department of Mechanical Engineering. Jo worked on energy science and ran experiments to make engines run more efficiently as part of his PhD.

“The PhD programs at MIT teach you how to think deeply about solving technical problems using systems-based approaches,” Woo says. “You also realize the value in learning from failures, and that mindset of iteration is similar to what you need to do in startups.”

In 2020, Woo was working in the semiconductor industry when he reached out to his eventual co-founders asking if they were working on anything interesting. At that time, Jo was still working on energy systems based on hydrogen and ammonia, while Kim was developing new catalysts to create ammonia fuel.

“I wanted to start a company and build a business to do good things for society,” Woo recalls. “People had been talking about hydrogen as a more sustainable fuel source, but it had never come to fruition.
We thought there might be a way to improve ammonia catalyst technology and accelerate the hydrogen economy.”

The founders started experimenting with Jo’s technology for ammonia cracking, the process in which ammonia (NH3) molecules split into their nitrogen (N2) and hydrogen (H2) constituent parts. Ammonia cracking to date has been done at huge plants in high-temperature reactors that require large amounts of energy. Those high temperatures limited the catalyst materials that could be used to drive the reaction.

Starting from scratch, the founders were able to identify new material recipes that could be used to miniaturize the catalyst and work at lower temperatures. The proprietary catalyst materials allow the company to create a system that can be deployed in new places at lower costs.

“We really had to redevelop the whole technology, including the catalyst and reformer, and even the integration with the larger system,” Woo says. “One of the most important things is we don’t combust ammonia — we don’t need pilot fuel, and we don’t generate any nitrogen gas or CO2.”

Today, Amogy has a portfolio of proprietary catalyst technologies that use base metals along with precious metals. The company has proven the efficiency of its catalysts in demonstrations beginning with the first ammonia-powered drone in 2021. The catalyst can be used to produce hydrogen more efficiently, and by integrating the catalyst with hydrogen fuel cells or engines, Amogy also offers modular ammonia-to-power systems that can scale to meet customer energy demands.

“We’re enabling the decarbonization of heavy industry,” Woo says. “We are targeting transportation, chemical production, manufacturing, and industries that are carbon-heavy and need to decarbonize soon, for example to achieve domestic goals.
Our vision in the longer term is to enable ammonia as a fuel in a variety of applications, including power generation, first at microgrids and then eventually full grid-scale.”

Scaling with industry

When Amogy completed its facility in Houston, one of its early visitors was MIT Professor Evelyn Wang, who is also MIT’s vice president for energy and climate. Woo says other people involved in the Climate Project at MIT have been supportive.

Another key partner for Amogy is Samsung Heavy Industries, which announced a multiyear deal to manufacture Amogy’s ammonia-to-power systems on Nov. 12.

“Our strategy is to partner with the existing big players in heavy industry to accelerate the commercialization of our technology,” Woo says. “We have worked with big oil and gas companies like BHP and Saudi Aramco, companies interested in hydrogen fuel like KBR and Mitsubishi, and many more industrial companies.”

When paired with other clean energy technologies to provide the power for its systems, Woo says Amogy offers a way to completely decarbonize sectors of the economy that can’t electrify on their own.

“In heavy transport, you have to use high-energy-density liquid fuel because of the long distances and power requirements,” Woo says. “Batteries can’t meet those requirements. It’s why hydrogen is such an exciting molecule for heavy industry and shipping. But hydrogen needs to be kept super cold, whereas ammonia can be liquid at room temperature. Our job now is to provide that power at scale.”
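The cracking reaction described above is 2 NH3 → N2 + 3 H2. A quick stoichiometric sketch (using standard molar masses; the function name is illustrative, not Amogy's) shows how much hydrogen a given mass of ammonia carries:

```python
# Stoichiometry of ammonia cracking: 2 NH3 -> N2 + 3 H2
# Illustrative sketch; molar masses are standard values in g/mol.
M_NH3 = 17.031
M_H2 = 2.016

def hydrogen_yield_kg(ammonia_kg: float) -> float:
    """Mass of H2 (kg) released by fully cracking `ammonia_kg` of NH3."""
    mol_nh3 = ammonia_kg * 1000 / M_NH3   # moles of NH3
    mol_h2 = mol_nh3 * 3 / 2              # 3 mol H2 per 2 mol NH3
    return mol_h2 * M_H2 / 1000           # back to kg

if __name__ == "__main__":
    # 1 tonne of ammonia carries roughly 178 kg of hydrogen (~17.8% by mass)
    print(round(hydrogen_yield_kg(1000), 1))
```

That roughly 18 percent hydrogen mass fraction, combined with ammonia staying liquid under mild conditions, is why ammonia is attractive as a hydrogen carrier.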


    A new take on carbon capture

    If there was one thing Cameron Halliday SM ’19, MBA ’22, PhD ’22 was exceptional at during the early days of his PhD at MIT, it was producing the same graph over and over again. Unfortunately for Halliday, the graph measured various materials’ ability to absorb CO2 at high temperatures over time — and it always pointed down and to the right. That meant the materials lost their ability to capture the molecules responsible for warming our climate.

    At least Halliday wasn’t alone: For many years, researchers have tried and mostly failed to find materials that could reliably absorb CO2 at the super-high temperatures of industrial furnaces, kilns, and boilers. Halliday’s goal was to find something that lasted a little longer.

    Then in 2019, he put a type of molten salt called lithium-sodium ortho-borate through his tests. The salts absorbed more than 95 percent of the CO2. And for the first time, the graph showed almost no degradation over 50 cycles. The same was true after 100 cycles. Then 1,000.

    “I honestly don’t know if we ever expected to completely solve the problem,” Halliday says. “We just expected to improve the system. It took another two months to figure out why it worked.”

    The researchers discovered the salts behave like a liquid at high temperatures, which avoids the brittle cracking responsible for the degradation of many solid materials.

    “I remember walking home over the Mass Ave bridge at 5 a.m. with all the morning runners going by me,” Halliday recalls. “That was the moment when I realized what this meant. Since then, it’s been about proving it works at larger scales. We’ve just been building the next scaled-up version, proving it still works, building a bigger version, proving that out, until we reach the ultimate goal of deploying this everywhere.”

    Today, Halliday is the co-founder and CEO of Mantel, a company building systems to capture carbon dioxide at large industrial sites of all types.
Although a lot of people consider the carbon capture industry a dead end, Halliday isn’t giving up so easily, and he has a growing body of performance data to keep him encouraged.

Mantel’s system can be added on to the machines of power stations and factories making cement, steel, paper and pulp, oil and gas, and more, reducing their carbon emissions by around 95 percent. Instead of being released into the atmosphere, the emitted CO2 is channeled into Mantel’s system, where the company’s salts are sprayed out from something that looks like a shower head. The CO2 diffuses through the molten salts in a reaction that can be reversed through further temperature increases, so the salts boil off pure CO2 that can be transported for use or stored underground.

A key difference from other carbon capture methods that have struggled to be profitable is that Mantel uses the heat from its process to generate steam for customers by combining it with water in another part of its system. Mantel says delivering steam, which is used to drive many common industrial processes, lets its system work with just 3 percent of the net energy that state-of-the-art carbon capture systems require.

“We’re still consuming energy, but we get most of it back as steam, whereas the incumbent technology only consumes steam,” says Halliday, who co-founded Mantel with Sean Robertson PhD ’22 and Danielle Rapson. “That steam is a useful revenue stream, so we can turn carbon capture from a waste management process into a value creation process for our customer’s core business — whether that’s a power station using steam to make electricity, or oil and gas refineries.
It completely changes the economics of carbon capture.”

From science to startup

Halliday’s first exposure to MIT came in 2016, when he cold-emailed Alan Hatton, MIT’s Ralph Landau Professor of Chemical Engineering Practice, asking if he could come to his lab for the summer and work on research into carbon capture.

“He invited me, but he didn’t put me on that project,” Halliday recalls. “At the end of the summer he said, ‘You should consider coming back and doing a PhD.’”

Halliday enrolled in a joint PhD-MBA program the following year.

“I really wanted to work on something that had an impact,” Halliday says. “The dual PhD-MBA program has some deep technical academic elements to it, but you also work with a company for two months, so you use a lot of what you learn in the real world.”

Halliday worked on three different research projects in Hatton’s lab early on, all of which eventually turned into companies. The one he stuck with explored ways to make carbon capture more energy-efficient by working at the high temperatures common at emissions-heavy industrial sites.

Halliday ran into the same problems as past researchers, with materials degrading under such extreme conditions.

“It was the big limiter for the technology,” Halliday recalls.

Then Halliday ran his successful experiment with molten borate salts in 2019. The MBA portion of his program began soon after, and Halliday decided to use that time to commercialize the technology. Part of that occurred in Course 15.366 (Climate and Energy Ventures), where Halliday met his co-founders. As it happens, alumni of the class have started more than 150 companies over the years.

“MIT tries to pull these great ideas out of academia and get them into the world so they can be valued and used,” Halliday says. “For the Climate and Energy Ventures class, outside speakers showed us every stage of company-building.
The technology roadmap for our system is shoebox-sized, shipping container, one-bedroom house, and then the size of a building. It was really valuable to see other companies and say, ‘That’s what we could look like in three years, or six years.’”

From startup to scale up

When Mantel was officially founded in 2022, the founders had their shoebox-sized system. After raising early funding, the team built its shipping-container-sized system at The Engine, an MIT-affiliated startup incubator. That system has been operational for almost two years.

Last year, Mantel announced a partnership with Kruger Inc. to build the next version of its system at a factory in Quebec, which will be operational next year. The plant will run in a two-year test phase before scaling across Kruger’s other plants if successful.

“The Quebec project is proving the capture efficiency and proving the step-change improvement in energy use of our system,” Halliday says. “It’s a derisking of the technology that will unlock a lot more opportunities.”

Halliday says Mantel is in conversations with close to 100 industrial partners around the world, including the owners of refineries, data centers, cement and steel plants, and oil and gas companies. Because it’s a standalone addition, Halliday says Mantel’s system doesn’t have to change much to be used in different industries.

Mantel doesn’t handle CO2 conversion or sequestration, but Halliday says capture makes up the bulk of the costs in the CO2 value chain. It also generates high-quality CO2 that can be transported in pipelines and used in industries including food and beverage — like the CO2 that makes your soda bubbly.

“This is the solution our customers are dreaming of,” Halliday says. “It means they don’t have to shut down their billion-dollar asset and reimagine their business to address an issue that they all appreciate is existential.
There are questions about the timeline, but most industries recognize this is a problem they’ll have to grapple with eventually. This is a pragmatic solution that’s not trying to reshape the world as we dream of it. It’s looking at the problem at hand today and fixing it.”
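The net-energy claim above comes down to simple bookkeeping: a capture system that sells most of its heat input back as steam consumes far less net energy than one that only consumes steam. A minimal sketch of that accounting, using made-up placeholder numbers rather than Mantel's actual figures:

```python
# Illustrative net-energy accounting for high-temperature carbon capture
# with steam recovery. The numbers are placeholders, not Mantel's data.

def net_energy(gross_in: float, steam_recovered: float) -> float:
    """Net energy consumed per tonne of CO2 captured (same units in/out)."""
    return gross_in - steam_recovered

# An incumbent system consumes steam and recovers none of it.
incumbent = net_energy(gross_in=3.0, steam_recovered=0.0)       # GJ / t CO2

# A system that returns nearly all of its heat as salable steam nets far less.
with_recovery = net_energy(gross_in=3.0, steam_recovered=2.91)  # GJ / t CO2

print(with_recovery / incumbent)  # fraction of the incumbent's net energy
```

With 97 percent of the input energy returned as steam, the net draw is 3 percent of the incumbent's, which is the shape of the comparison the article quotes.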


    MIT Energy Initiative launches Data Center Power Forum

    With global power demand from data centers expected to more than double by 2030, the MIT Energy Initiative (MITEI) in September launched an effort that brings together MIT researchers and industry experts to explore innovative solutions for powering the data-driven future. At its annual research conference, MITEI announced the Data Center Power Forum, a targeted research effort for MITEI member companies interested in addressing the challenges of data center power demand. The Data Center Power Forum builds on lessons from MITEI’s May 2025 symposium on the energy needed to power the expansion of artificial intelligence (AI), as well as from panels related to data centers at the fall 2024 research conference.

    In the United States, data centers consumed 4 percent of the country’s electricity in 2023, with demand expected to increase to 9 percent by 2030, according to the Electric Power Research Institute. Much of the growth in demand comes from the increasing use of AI, which is placing an unprecedented strain on the electric grid. This surge in demand presents a serious challenge for the technology and energy sectors, government policymakers, and everyday consumers, who may see their electric bills skyrocket as a result.

    “MITEI has long supported research on ways to produce more efficient and cleaner energy and to manage the electric grid. In recent years, MITEI has also funded dozens of research projects relevant to data center energy issues. Building on this history and knowledge base, MITEI’s Data Center Power Forum is convening a specialized community of industry members who have a vital stake in the sustainable growth of AI and the acceleration of solutions for powering data centers and expanding the grid,” says William H. Green, the director of MITEI and the Hoyt C. Hottel Professor of Chemical Engineering.

    MITEI’s mission is to advance zero- and low-carbon solutions to expand energy access and mitigate climate change.
MITEI works with companies from across the energy innovation chain, including the infrastructure, automotive, electric power, energy, natural resources, and insurance sectors. MITEI member companies have expressed strong interest in the Data Center Power Forum and are committing to support focused research on a wide range of energy issues associated with data center expansion, Green says.

The Data Center Power Forum will provide member companies with reliable insights into energy supply, grid load operations and management, the built environment, and electricity market design and regulatory policy for data centers. The forum complements MIT’s deep expertise in adjacent topics such as low-power processors, efficient algorithms, task-specific AI, photonic devices, quantum computing, and the societal consequences of data center expansion. As part of the forum, MITEI’s Future Energy Systems Center is funding projects relevant to data center energy in its upcoming proposal cycles. MITEI Research Scientist Deep Deka has been named the program manager for the forum.

“Figuring out how to meet the power demands of data centers is a complicated challenge. Our research is coming at this from multiple directions, from looking at ways to expand transmission capacity within the electrical grid in order to bring power to where it is needed, to ensuring the quality of electrical service for existing users is not diminished when new data centers come online, and to shifting computing tasks to times and places when and where energy is available on the grid,” says Deka.

MITEI currently sponsors substantial research related to data center energy topics across several MIT departments. The existing research portfolio includes more than a dozen projects related to data centers, including low- or zero-carbon solutions for energy supply and infrastructure, electrical grid management, and electricity market policy.
MIT researchers funded through MITEI’s industry consortium are also designing more energy-efficient power electronics and processors and investigating behind-the-meter low-/no-carbon power plants and energy storage. MITEI-supported experts are studying how to use AI to optimize electrical distribution and the siting of data centers, and conducting techno-economic analyses of data center power schemes. MITEI’s consortium projects are also bringing fresh perspectives to data center cooling challenges and considering policy approaches to balance the interests of stakeholders. By drawing together industry stakeholders from across the AI and grid value chain, the Data Center Power Forum enables a richer dialogue about solutions to power, grid, and carbon management problems in a noncommercial and collaborative setting.

“The opportunity to meet and to hold discussions on key data center challenges with other forum members from different sectors, as well as with MIT faculty members and research scientists, is a unique benefit of this MITEI-led effort,” Green says.

MITEI addressed the issue of data center power needs with its company members during its fall 2024 annual research conference with a panel session titled “The extreme challenge of powering data centers in a decarbonized way.” MITEI Director of Research Randall Field led a discussion with representatives from the large technology companies Google and Microsoft, known as “hyperscalers,” as well as Madrid-based infrastructure developer Ferrovial S.E. and utility company Exelon Corp. Another conference session addressed the related topic of energy storage and grid expansion. This past spring, MITEI focused its annual Spring Symposium on data centers, hosting faculty members and researchers from MIT and other universities, business leaders, and a representative of the Federal Energy Regulatory Commission for a full day of sessions on the topic “AI and energy: Peril and promise.”
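The EPRI shares quoted above translate into a striking absolute jump. A back-of-envelope sketch (total annual US electricity consumption of roughly 4,000 TWh is an outside assumption, not a figure from this article, and is held fixed for simplicity):

```python
# Rough scale of US data-center electricity demand implied by the EPRI
# shares above. Total US consumption (~4,000 TWh/yr) is an assumed,
# approximate figure held constant between the two years.
US_TOTAL_TWH = 4000

share_2023 = 0.04
share_2030 = 0.09

twh_2023 = US_TOTAL_TWH * share_2023   # data-center demand in 2023, TWh
twh_2030 = US_TOTAL_TWH * share_2030   # projected demand in 2030, TWh

print(f"{twh_2023:.0f} TWh -> {twh_2030:.0f} TWh "
      f"({twh_2030 / twh_2023:.2f}x growth)")
```

Even with the conservative assumption of flat total consumption, the 4-to-9-percent shift implies data-center load growing by a factor of 2.25, roughly 200 additional TWh per year to be generated and delivered.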


    MIT Maritime Consortium releases “Nuclear Ship Safety Handbook”

    Commercial shipping accounts for 3 percent of all greenhouse gas emissions globally. As the sector sets climate goals and chases a carbon-free future, nuclear power — long used as a source for military vessels — presents an enticing solution. To date, however, there has been no clear, unified public document available to guide design safety for certain components of civilian nuclear ships. A new “Nuclear Ship Safety Handbook” by the MIT Maritime Consortium aims to change that and set the standard for safe maritime nuclear propulsion.

    “This handbook is a critical tool in efforts to support the adoption of nuclear in the maritime industry,” explains Themis Sapsis, the William I. Koch Professor of Mechanical Engineering at MIT, director of the MIT Center for Ocean Engineering, and co-director of the MIT Maritime Consortium. “The goal is to provide a strong basis for initial safety on key areas that require nuclear and maritime regulatory research and development in the coming years to prepare for nuclear propulsion in the maritime industry.”

    Using research data and standards, combined with operational experience from civilian maritime nuclear operations, the handbook provides unique insights into potential issues and their resolutions in the design of maritime nuclear systems, a topic of growing importance on the national and international stage.

    “Right now, the nuclear-maritime policies that exist are outdated and often tied only to specific technologies, like pressurized water reactors,” says Jose Izurieta, a graduate student in the Department of Mechanical Engineering (MechE) Naval Construction and Engineering (2N) Program and one of the handbook’s authors. “With the recent U.K.-U.S. Technology Prosperity Deal now including civil maritime nuclear applications, I hope the handbook can serve as a foundation for creating a clear, modern regulatory framework for nuclear-powered commercial ships.”

    The recent memorandum of understanding signed by the U.S.
and U.K. calls for the exploration of “novel applications of advanced nuclear energy, including civil maritime applications,” and for the parties to play “a leading role informing the establishment of international standards, potential establishment of a maritime shipping corridor between the Participants’ territories, and strengthening energy resilience for the Participants’ defense facilities.”

“The U.S.-U.K. nuclear shipping corridor offers a great opportunity to collaborate with legislators on establishing the critical framework that will enable the United States to invest in nuclear-powered merchant vessels — an achievement that will reestablish America in the shipbuilding space,” says Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), director of the MIT Sociotechnical Systems Research Center, and co-director of the MIT Maritime Consortium.

“With over 30 nations now building or planning their first reactors, nuclear energy’s global acceptance is unprecedented — and that momentum is key to aligning safety rules across borders for nuclear-powered ships and the respective ports,” says Koroush Shirvan, the Atlantic Richfield Career Development Professor in Energy Studies at MIT and director of the Reactor Technology Course for Utility Executives.

The handbook, which is divided into chapters covering the overlapping nuclear and maritime safety design decisions that engineers will encounter, is careful to balance technical and practical guidance with policy considerations.

Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering and naval construction and engineering, says the handbook will significantly benefit the entire maritime community, specifically naval architects and marine engineers, by providing standardized guidelines for design and operation specific to nuclear-powered commercial vessels.

“This will assist in
enhancing safety protocols, improving risk assessments, and ensuring consistent compliance with international regulations,” MacLean says. “This will also help foster collaboration amongst engineers and regulators. Overall, this will further strengthen the reliability, sustainability, and public trust in nuclear-powered maritime systems.”

Anthony Valiaveedu, the handbook’s lead author, and co-author Nat Edmonds are both students in the MIT Master’s Program in Technology and Policy (TPP) within the IDSS. The pair are also co-authors of a paper published in Science Policy Review earlier this year that offered structured advice on the development of nuclear regulatory policies.

“It is important for safety and technology to go hand in hand,” Valiaveedu explains. “What we have done is provide a risk-informed process to begin these discussions for engineers and policymakers.”

“Ultimately, I hope this framework can be used to build strong bilateral agreements between nations that will allow nuclear propulsion to thrive,” says fellow co-author Izurieta.

Impact on industry

“Maritime designers needed a source of information to improve their ability to understand and design the reactor primary components, and development of the ‘Nuclear Ship Safety Handbook’ was a good step to bridge this knowledge gap,” says Christopher J. Wiernicki, American Bureau of Shipping (ABS) chair and CEO. “For this reason, it is an important document for the industry.”

ABS, the American classification society for the maritime industry, develops criteria and provides safety certification for ocean-going vessels. ABS is among the founding members of the MIT Maritime Consortium. Capital Clean Energy Carriers Corp., HD Korea Shipbuilding and Offshore Engineering, and Delos Navigation Ltd. are also consortium founding members.
Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

“As we consider a net-zero framework for the shipping industry, nuclear propulsion represents a potential solution. Careful investigation remains the priority, with safety and regulatory standards at the forefront,” says Jerry Kalogiratos, CEO of Capital Clean Energy Carriers Corp. “As first movers, we are exploring all options. This handbook lays the technical foundation for the development of nuclear-powered commercial vessels.”

Sangmin Park, senior vice president at HD Korea Shipbuilding and Offshore Engineering, says, “The ‘Nuclear Ship Safety Handbook’ marks a groundbreaking milestone that bridges shipbuilding excellence and nuclear safety. It drives global collaboration between industry and academia, and paves the way for the safe advancement of the nuclear maritime era.”

Maritime at MIT

MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. Maritime Consortium projects, including the handbook, reflect national priorities aimed at revitalizing the U.S. shipbuilding and commercial maritime industries.

The MIT Maritime Consortium, which launched in 2024, brings together MIT and maritime industry leaders to explore data-powered strategies to reduce harmful emissions, optimize vessel operations, and support economic priorities.

“One of our most important efforts is the development of technologies, policies, and regulations to make nuclear propulsion for commercial ships a reality,” says Sapsis. “Over the last year, we have put together an interdisciplinary team with faculty and students from across the Institute.
One of the outcomes of this effort is this very detailed document providing guidance on how such an effort should be implemented safely.”

Handbook contributors come from multiple disciplines and MIT departments, labs, and research centers, including the Center for Ocean Engineering, IDSS, MechE’s Course 2N Program, the MIT Technology and Policy Program, and the Department of Nuclear Science and Engineering.

MIT faculty members and research advisors on the project include Sapsis; Christia; Shirvan; MacLean; Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering, director of the Center for Advanced Nuclear Energy Systems, and director of science and technology for the Nuclear Reactor Laboratory; and Captain Andrew Gillespy, professor of the practice and director of the Naval Construction and Engineering (2N) Program.

“Proving the viability of nuclear propulsion for civilian ships will entail getting the technologies, the economics, and the regulations right,” says Buongiorno. “This handbook is a meaningful initial contribution to the development of a sound regulatory framework.”

“We were lucky to have a team of students and knowledgeable professors from so many fields,” says Edmonds. “Before even beginning the outline of the handbook, we did significant archival and historical research to understand the existing regulations and overarching story of nuclear ships. Some of the most relevant documents we found were written before 1975, and many of them were stored in the bellows of the NS Savannah.”

The NS Savannah, which was built in the late 1950s as a demonstration project for the potential peacetime uses of nuclear energy, was the first nuclear-powered merchant ship.
The Savannah was first launched on July 21, 1959, two years after the first nuclear-powered civilian vessel, the Soviet icebreaker Lenin, and was retired in 1971.

Historical context for this project is important because the reactor technologies envisioned for maritime propulsion today are quite different from the traditional pressurized water reactors used by the U.S. Navy. These new reactors are being developed not just in the maritime context but also to power ports and data centers on land; they all use low-enriched uranium and are passively cooled. For the maritime industry, Sapsis says, “the technology is there, it’s safe, and it’s ready.”

“The Nuclear Ship Safety Handbook” is publicly available on the MIT Maritime Consortium website and from the MIT Libraries.


    Book reviews technologies aiming to remove carbon from the atmosphere

    Two leading experts in the field of carbon capture and sequestration (CCS) — Howard J. Herzog, a senior research engineer in the MIT Energy Initiative, and Niall Mac Dowell, a professor in energy systems engineering at Imperial College London — explore methods for removing carbon dioxide already in the atmosphere in their new book, “Carbon Removal.” Published in October, the book is part of the Essential Knowledge series from the MIT Press, which consists of volumes “synthesizing specialized subject matter for nonspecialists” and includes Herzog’s 2018 book, “Carbon Capture.”

    Burning fossil fuels, as well as other human activities, causes the release of carbon dioxide (CO2) into the atmosphere, where it acts like a blanket that warms the Earth, resulting in climate change. Much attention has focused on mitigation technologies that reduce emissions, but in their book, Herzog and Mac Dowell have turned their attention to “carbon dioxide removal” (CDR), an approach that removes carbon already present in the atmosphere.

    In this new volume, the authors explain how CO2 naturally moves into and out of the atmosphere and present a brief history of carbon removal as a concept for dealing with climate change. They also describe the full range of “pathways” that have been proposed for removing CO2 from the atmosphere. Those pathways include engineered systems designed for “direct air capture” (DAC), as well as various “nature-based” approaches that call for planting trees or taking steps to enhance removal by biomass or the oceans. The book offers easily accessible explanations of the fundamental science and engineering behind each approach.

    The authors compare the “quality” of the different pathways based on the following metrics:

    Accounting. For public acceptance of any carbon-removal strategy, the authors note, the developers need to get the accounting right — and that’s not always easy.
“If you’re going to spend money to get CO2 out of the atmosphere, you want to get paid for doing it,” notes Herzog. It can be tricky to measure how much you have removed, because there’s a lot of CO2 going in and out of the atmosphere all the time. Also, if your approach involves, say, burning fossil fuels, you must subtract the amount of CO2 that’s emitted from the total amount you claim to have removed. Then there’s the timing of the removal. With a DAC device, the removal happens right now, and the removed CO2 can be measured. “But if I plant a tree, it’s going to remove CO2 for decades. Is that equivalent to removing it right now?” Herzog queries. How to take that factor into account hasn’t yet been resolved.Permanence. Different approaches keep the CO2 out of the atmosphere for different durations of time. How long is long enough? As the authors explain, this is one of the biggest issues, especially with nature-based solutions, where events such as wildfires or pestilence or land-use changes can release the stored CO2 back into the atmosphere. How do we deal with that?Cost. Cost is another key factor. Using a DAC device to remove CO2 costs far more than planting trees, but it yields immediate removal of a measurable amount of CO2 that can then be locked away forever. How does one monetize that trade-off?Additionality. “You’re doing this project, but would what you’re doing have been done anyway?” asks Herzog. “Is your effort additional to business as usual?” This question comes into play with many of the nature-based approaches involving trees, soils, and so on.Permitting and governance. These issues are especially important — and complicated — with approaches that involve doing things in the ocean. 
In addition, Herzog points out that some CCS projects could also achieve carbon removal, but they would have a hard time getting permits to build the pipelines and other needed infrastructure.The authors conclude that none of the CDR strategies now being proposed is a clear winner on all the metrics. However, they stress that carbon removal has the potential to play an important role in meeting our climate change goals — not by replacing our emissions-reduction efforts, but rather by supplementing them. However, as Herzog and Mac Dowell make clear in their book, many challenges must be addressed to move CDR from today’s speculation to deployment at scale, and the book supports the wider discussion about how to move forward. Indeed, the authors have fulfilled their stated goal: “to provide an objective analysis of the opportunities and challenges for CDR and to separate myth from reality.” More

  • in

    How to reduce greenhouse gas emissions from ammonia production

    Ammonia is one of the most widely produced chemicals in the world, used mostly as fertilizer, but also for the production of some plastics, textiles, and other applications. Its production, through processes that require high heat and pressure, accounts for up to 20 percent of all the greenhouse gases from the entire chemical industry, so efforts have been underway worldwide to find ways to reduce those emissions.

Now, researchers at MIT have come up with a clever way of combining two different methods of producing the compound that minimizes waste products and, when combined with some other simple upgrades, could reduce the greenhouse emissions from production by as much as 63 percent, compared to the leading “low-emissions” approach being used today.

The new approach is described in the journal Energy & Fuels, in a paper by MIT Energy Initiative (MITEI) Director William H. Green, graduate student Sayandeep Biswas, MITEI Director of Research Randall Field, and two others.

“Ammonia has the most carbon dioxide emissions of any kind of chemical,” says Green, who is the Hoyt C. Hottel Professor in Chemical Engineering. “It’s a very important chemical,” he says, because its use as a fertilizer is crucial to being able to feed the world’s population.

Until late in the 19th century, the most widely used source of nitrogen fertilizer was mined deposits of bat or bird guano, mostly from Chile, but that source was beginning to run out, and there were predictions that the world would soon be running short of food to sustain the population. But then a new chemical process, called the Haber-Bosch process after its inventors, made it possible to make ammonia out of nitrogen from the air and hydrogen, which was mostly derived from methane.
But both the burning of fossil fuels to provide the needed heat and the use of methane to make the hydrogen led to massive climate-warming emissions from the process.

To address this, two newer variations of ammonia production have been developed: so-called “blue ammonia,” where the greenhouse gases are captured right at the factory and then sequestered deep underground, and “green ammonia,” produced by a different chemical pathway, using electricity instead of fossil fuels to electrolyze water to make hydrogen.

Blue ammonia is already beginning to be used, with a few plants operating now in Louisiana, Green says, and the ammonia mostly being shipped to Japan, “so that’s already kind of commercial.” Other parts of the world are starting to use green ammonia, especially in places that have lots of hydropower, solar, or wind to provide inexpensive electricity, including a giant plant now under construction in Saudi Arabia.

But in most places, both blue and green ammonia are still more expensive than the traditional fossil-fuel-based version, so many teams around the world have been working on ways to cut these costs as much as possible so that the difference is small enough to be made up through tax subsidies or other incentives.

The problem is growing, because as the population grows, and as wealth increases, there will be ever-increasing demands for nitrogen fertilizer. At the same time, ammonia is a promising substitute fuel to power hard-to-decarbonize transportation such as cargo ships and heavy trucks, which could lead to even greater needs for the chemical.

“It definitely works” as a transportation fuel, by powering fuel cells that have been demonstrated for use by everything from drones to barges and tugboats and trucks, Green says.
“People think that the most likely market of that type would be for shipping,” he says, “because the downside of ammonia is it’s toxic and it’s smelly, and that makes it slightly dangerous to handle and to ship around.” So its best uses may be where it’s used in high volume and in relatively remote locations, like the high seas. In fact, the International Maritime Organization will soon be voting on new rules that might give a strong boost to the ammonia alternative for shipping.

The key to the new proposed system is to combine the two existing approaches in one facility, with a blue ammonia factory next to a green ammonia factory. The process of generating hydrogen for the green ammonia plant leaves a lot of leftover oxygen that just gets vented to the air. Blue ammonia, on the other hand, uses a process called autothermal reforming that requires a source of pure oxygen, so if there’s a green ammonia plant next door, it can use that excess oxygen.

“Putting them next to each other turns out to have significant economic value,” Green says. This synergy could help hybrid “blue-green ammonia” facilities serve as an important bridge toward a future where eventually green ammonia, the cleanest version, could finally dominate. But that future is likely decades away, Green says, so having the combined plants could be an important step along the way.

“It might be a really long time before [green ammonia] is actually attractive” economically, he says. “Right now, it’s nowhere close, except in very special situations.” But the combined plants “could be a really appealing concept, and maybe a good way to start the industry,” because so far only small, standalone demonstration plants of the green process are being built.

“If green or blue ammonia is going to become the new way of making ammonia, you need to find ways to make it relatively affordable in a lot of countries, with whatever resources they’ve got,” he says.
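The scale of the oxygen synergy described above can be estimated from basic reaction stoichiometry. A minimal sketch, using only ideal chemistry (real plant figures would differ with electrolyzer and synthesis-loop losses):

```python
# Stoichiometric sketch of the blue-green oxygen synergy.
# Green ammonia: N2 + 3 H2 -> 2 NH3, with H2 from electrolysis (2 H2O -> 2 H2 + O2).
# The O2 the electrolyzer would otherwise vent is what a neighboring
# autothermal reformer (blue ammonia) could consume.

M_NH3 = 17.031  # molar masses, g/mol
M_O2 = 31.998

def electrolyzer_o2_per_tonne_nh3() -> float:
    """Tonnes of byproduct O2 per tonne of green ammonia, ideal stoichiometry."""
    mol_nh3 = 1e6 / M_NH3      # mol of NH3 in one tonne
    mol_h2 = 1.5 * mol_nh3     # 3 H2 consumed per 2 NH3 produced
    mol_o2 = mol_h2 / 2        # electrolysis yields 1 O2 per 2 H2
    return mol_o2 * M_O2 / 1e6 # convert grams back to tonnes

print(round(electrolyzer_o2_per_tonne_nh3(), 2))  # ~1.41 t O2 per t NH3
```

So every tonne of green ammonia comes with roughly 1.4 tonnes of oxygen that, in a standalone plant, is simply vented; the paper's co-location idea captures that value.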
This new proposed combination, he says, “looks like a really good idea that can help push things along. Ultimately, there’s got to be a lot of green ammonia plants in a lot of places,” and starting out with the combined plants, which could be more affordable now, could help to make that happen. The team has filed for a patent on the process.

Although the team did a detailed study of both the technology and the economics that show the system has great promise, Green points out that “no one has ever built one. We did the analysis, it looks good, but surely when people build the first one, they’ll find funny little things that need some attention,” such as details of how to start up or shut down the process. “I would say there’s plenty of additional work to do to make it a real industry.” But the results of this study, which shows the costs to be much more affordable than existing blue or green plants in isolation, “definitely encourages the possibility of people making the big investments that would be needed to really make this industry feasible.”

This proposed integration of the two methods “improves efficiency, reduces greenhouse gas emissions, and lowers overall cost,” says Kevin van Geem, a professor in the Center for Sustainable Chemistry at Ghent University, who was not associated with this research. “The analysis is rigorous, with validated process models, transparent assumptions, and comparisons to literature benchmarks. By combining techno-economic analysis with emissions accounting, the work provides a credible and balanced view of the trade-offs.”

He adds that, “given the scale of global ammonia production, such a reduction could have a highly impactful effect on decarbonizing one of the most emissions-intensive chemical industries.”

The research team also included MIT postdoc Angiras Menon and MITEI research lead Guiyan Zang. The work was supported by IHI Japan through the MIT Energy Initiative and the Martin Family Society of Fellows for Sustainability.

  • in

    Report: Sustainability in supply chains is still a firm-level priority

    Corporations are actively seeking sustainability advances in their supply chains — but many need to improve the business metrics they use in this area to realize more progress, according to a new report by MIT researchers.

During a time of shifting policies globally and continued economic uncertainty, the survey-based report finds 85 percent of companies say they are continuing supply chain sustainability practices at the same level as in recent years, or are increasing those efforts.

“What we found is strong evidence that sustainability still matters,” says Josué Velázquez Martínez, a research scientist and director of the MIT Sustainable Supply Chain Lab, which helped produce the report. “There are many things that remain to be done to accomplish those goals, but there’s a strong willingness from companies in all parts of the world to do something about sustainability.”

The new analysis, titled “Sustainability Still Matters,” was released today. It is the sixth annual report on the subject prepared by the MIT Sustainable Supply Chain Lab, which is part of MIT’s Center for Transportation and Logistics. The Council of Supply Chain Management Professionals collaborated on the project as well.

The report is based on a global survey, with responses from 1,203 professionals in 97 countries. This year, the report analyzes three issues in depth, including regulations and the role they play in corporate approaches to supply chain management. A second core topic is management and mitigation of what industry professionals call “Scope 3” emissions, which are those not from a firm itself, but from a firm’s supply chain.
And a third issue of focus is the future of freight transportation, which by itself accounts for a substantial portion of supply chain emissions.

Broadly, the survey finds that for European-based firms, the principal driver of action in this area remains government mandates, such as the Corporate Sustainability Reporting Directive, which requires companies to publish regular reports on their environmental impact and the risks to society involved. In North America, firm leadership and investor priorities are more likely to be decisive factors in shaping a company’s efforts.

“In Europe the pressure primarily comes more from regulation, but in the U.S. it comes more from investors, or from competitors,” Velázquez Martínez says.

The survey responses on Scope 3 emissions reveal a number of opportunities for improvement. In business and sustainability terms, Scope 1 greenhouse gas emissions are those a firm produces directly. Scope 2 emissions are those from the energy it has purchased. And Scope 3 emissions are those produced across a firm’s value chain, including the supply chain activities involved in producing, transporting, using, and disposing of its products.

The report reveals that about 40 percent of firms keep close track of Scope 1 and 2 emissions, but far fewer tabulate Scope 3 on equivalent terms. And yet Scope 3 may account for roughly 75 percent of total firm emissions, on aggregate. About 70 percent of firms in the survey say they do not have enough data from suppliers to accurately tabulate the total greenhouse gas and climate impact of their supply chains.

Certainly it can be hard to calculate the total emissions when a supply chain has many layers, including smaller suppliers lacking data capacity. But firms can upgrade their analytics in this area, too. For instance, 50 percent of North American firms are still using spreadsheets to tabulate emissions data, often making rough estimates that correlate emissions to simple economic activity.
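The Scope 1/2/3 accounting described above is, at bottom, a roll-up over emission sources. A minimal sketch with hypothetical per-firm figures (the 75 percent Scope 3 share is the report's aggregate finding, not any particular firm's inventory):

```python
# Hypothetical emissions inventory for one firm, in tonnes of CO2-equivalent.
# Scope 3 is itself a roll-up over value-chain categories, which is why the
# supplier-data gap the report describes makes the total hard to pin down.

inventory = {
    "scope1_direct": 12_000,            # fuel burned on site, company fleet
    "scope2_purchased_energy": 8_000,   # purchased electricity and heat
    "scope3": {                         # value chain, usually estimated
        "purchased_goods": 40_000,
        "upstream_transport": 9_000,
        "product_use_and_disposal": 11_000,
    },
}

scope3_total = sum(inventory["scope3"].values())
total = inventory["scope1_direct"] + inventory["scope2_purchased_energy"] + scope3_total

print(f"Scope 3 share of total: {scope3_total / total:.0%}")
```

With these made-up numbers the Scope 3 share lands at 75 percent, matching the aggregate figure in the report; swapping in crude spend-based estimates for the Scope 3 categories is exactly the spreadsheet practice the report flags.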
An alternative is life cycle assessment software that provides more sophisticated estimates of a product’s emissions, from the extraction of its materials to its post-use disposal. By contrast, only 32 percent of European firms are still using spreadsheets rather than life cycle assessment tools.

“You get what you measure,” Velázquez Martínez says. “If you measure poorly, you’re going to get poor decisions that most likely won’t drive the reductions you’re expecting. So we pay a lot of attention to that particular issue, which is decisive to defining an action plan. Firms pay a lot of attention to metrics in their financials, but in sustainability they’re often using simplistic measurements.”

When it comes to transportation, meanwhile, the report shows that firms are still grappling with the best ways to reduce emissions. Some see biofuels as the best short-term alternative to fossil fuels; others are investing in electric vehicles; some are waiting for hydrogen-powered vehicles to gain traction. Supply chains, after all, frequently involve long-haul trips. For firms, as for individual consumers, electric vehicles are more practical with a larger infrastructure of charging stations. There are advances on that front, but more work to do as well.

That said, “Transportation has made a lot of progress in general,” Velázquez Martínez says, noting the increased acceptance of new modes of vehicle power.

Even as new technologies loom on the horizon, though, supply chain sustainability is not wholly dependent on their introduction. One factor continuing to propel sustainability in supply chains is the incentive companies have to lower costs. In a competitive business environment, spending less on fossil fuels usually means savings.
And firms can often find ways to alter their logistics to consume and spend less.

“Along with new technologies, there is another side of supply chain sustainability that is related to better use of the current infrastructure,” Velázquez Martínez observes. “There is always a need to revise traditional ways of operating to find opportunities for more efficiency.”

  • in

    Responding to the climate impact of generative AI

    In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.

The energy demands of generative AI are expected to continue increasing dramatically over the next decade. For instance, an April 2025 report from the International Energy Agency predicts that the global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.

Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.

These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.

Considering carbon emissions

Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions from the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.

Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon.
In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)

Plus, data centers are enormous buildings — the world’s largest, the China Telecomm-Inner Mongolia Information Park, engulfs roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds.

“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.

Reducing operational carbon emissions

When it comes to reducing operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.

“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.

In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.

Another strategy is to use less energy-intensive computing hardware. Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually need many GPUs working simultaneously.
The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once. But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.

There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed. Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.

“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.

Researchers can also take advantage of efficiency-boosting measures. For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project. By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.

Leveraging efficiency improvements

Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models. Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.

“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.

Even more significant, his group’s research indicates that efficiency gains from new model architectures that can solve complex problems faster, consuming less energy to achieve the same or better results, are doubling every eight or nine months.

Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements. These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.

“If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single-most important thing you can do to reduce the environmental costs of AI,” Thompson says.

Maximizing energy savings

While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.

“The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.

Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions.
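One such lever is shifting deferrable jobs toward low-carbon-intensity hours. A minimal sketch of the idea, with made-up hourly grid-intensity values (a real scheduler would pull forecasts from the grid operator or a service that publishes them):

```python
# Carbon-aware scheduling sketch: place a deferrable workload in the contiguous
# window of hours with the lowest average grid carbon intensity.
# Intensities are hypothetical, in gCO2 per kWh, for a 16-hour horizon.

hourly_intensity = [520, 510, 480, 430, 380, 300, 220, 180,   # overnight into morning
                    160, 150, 170, 240, 330, 410, 470, 500]   # midday solar, evening ramp

def best_start_hour(duration_h: int) -> int:
    """Index of the start hour minimizing total intensity over a contiguous run."""
    starts = range(len(hourly_intensity) - duration_h + 1)
    return min(starts, key=lambda s: sum(hourly_intensity[s:s + duration_h]))

print(best_start_hour(4))  # with these numbers, a 4-hour job starts at hour 7
```

The same greedy window search generalizes to splitting a job across several low-intensity windows, which is closer to what the flexible data-center operation described here would do.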
For instance, some generative AI workloads don’t need to be performed in their entirety at the same time. Splitting computing operations so some are performed later, when more of the electricity fed into the grid is from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist in the MIT Energy Initiative.

Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.

“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.

He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.

The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed. With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.

“Long-duration energy storage could be a game-changer here because we can design operations that really change the emission mix of the system to rely more on renewable energy,” Deka says.

In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs. Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Lulea, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.

Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon, where they could potentially be operated with nearly all renewable energy.

AI-based solutions

Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship. The local, state, and federal review processes required for new renewable energy projects can take years.

Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid. For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.

And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role. “Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.

For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities. It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency. By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.

To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score. The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits in the future.

At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.

“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.