More stories

  • MIT researchers outline a path for scaling clean hydrogen production

    Hydrogen is an integral component for the manufacture of steel, fertilizer, and a number of chemicals. Producing hydrogen using renewable electricity offers a way to clean up these and many other hard-to-decarbonize industries.

    But supporting the nascent clean hydrogen industry while ensuring it grows into a true force for decarbonization is complicated, in large part because of the challenges of sourcing clean electricity. To assist regulators and to clarify disagreements in the field, MIT researchers published a paper today in Nature Energy that outlines a path to scale the clean hydrogen industry while limiting emissions.

    Right now, U.S. electric grids are mainly powered by fossil fuels, so if scaling hydrogen production translates to greater electricity use, it could result in a major emissions increase. There is also the risk that “low-carbon” hydrogen projects could end up siphoning renewable energy that would have been built anyway for the grid. It is therefore critical to ensure that low-carbon hydrogen procures electricity from “additional” renewables, especially when hydrogen production is supported by public subsidies. The challenge is allowing hydrogen producers to procure renewable electricity in a cost-effective way that helps the industry grow, while minimizing the risk of high emissions.

    U.S. regulators have been tasked with sorting out this complexity. The Inflation Reduction Act (IRA) is offering generous production tax credits for low-carbon hydrogen. But the law didn’t specify exactly how hydrogen’s carbon footprint should be judged.

    To this end, the paper proposes a phased approach to qualify for the tax credits. In the first phase, hydrogen created from grid electricity can receive the credits under looser standards as the industry gets its footing. Once electricity demand for hydrogen production grows, the industry should be required to adhere to stricter standards for ensuring the electricity is coming from renewable sources. Finally, many years from now when the grid is mainly powered by renewable energy, the standards can loosen again.

    The researchers say the nuanced approach ensures the law supports the growth of clean hydrogen without coming at the expense of emissions.

    “If we can scale low-carbon hydrogen production, we can cut some significant sources of existing emissions and enable decarbonization of other critical industries,” says paper co-author Michael Giovanniello, a graduate student in MIT’s Technology and Policy Program. “At the same time, there’s a real risk of implementing the wrong requirements and wasting lots of money to subsidize carbon-intensive hydrogen production. So, you have to balance scaling the industry with reducing the risk of emissions. I hope there’s clarity and foresight in how this policy is implemented, and I hope our paper makes the argument clear for policymakers.”

    Giovanniello’s co-authors on the paper are MIT Energy Initiative (MITEI) Principal Research Scientist Dharik Mallapragada, MITEI Research Assistant Anna Cybulsky, and MIT Sloan School of Management Senior Lecturer Tim Schittekatte.

    On definitions and disagreements

    When renewable electricity from a wind farm or solar array flows through the grid, it’s mixed with electricity from fossil fuels. The situation raises a question worth billions of dollars in federal tax credits: What are the carbon dioxide emissions of grid users who are also signing agreements to procure electricity from renewables?

    One way to answer this question is via energy system models that can simulate various scenarios related to technology configurations and qualifying requirements for receiving the credit.

    To date, many studies using such models have come up with very different emissions estimates for electrolytic hydrogen production. One source of disagreement is over “time matching,” which refers to how strictly the timing of electrolytic hydrogen production must align with the generation of clean electricity. One proposed approach, known as hourly time matching, would require that the electricity consumed to produce hydrogen be covered by procured clean electricity in every hour.

    A less stringent approach, called annual time matching, would offer more flexibility in hourly electricity consumption for hydrogen production, so long as annual consumption matches the annual generation of the procured clean electricity. The added flexibility could reduce the cost of hydrogen production, which is critical for scaling its use, but could lead to greater emissions per unit of hydrogen produced.
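
    The difference between the two rules can be made concrete with a toy calculation. The sketch below is purely illustrative (the data and function names are not from the paper): it checks whether an electrolyzer's hourly consumption is covered by procured clean generation as a total, versus hour by hour.

    ```python
    # Illustrative sketch of annual vs. hourly time matching (toy data, not from the paper).
    # consumption[h]: electricity used for hydrogen production in hour h (MWh)
    # procured[h]:    clean electricity procured by the producer in hour h (MWh)

    def annual_match(consumption, procured):
        """Annual-style matching: total procured clean energy must cover total consumption."""
        return sum(procured) >= sum(consumption)

    def hourly_match(consumption, procured):
        """Hourly matching: procured clean energy must cover consumption in every hour."""
        return all(p >= c for c, p in zip(consumption, procured))

    # Toy example: an electrolyzer running flat out around the clock, paired with
    # solar that only generates during 12 daylight hours.
    consumption = [10.0] * 24
    procured = [0.0] * 6 + [20.0] * 12 + [0.0] * 6

    print(annual_match(consumption, procured))  # True:  240 MWh procured >= 240 MWh consumed
    print(hourly_match(consumption, procured))  # False: nighttime hours are served by the grid
    ```

    Under the annual-style rule this producer qualifies even though its nighttime consumption is served by whatever happens to be on the grid, which is precisely the emissions risk the hourly rule is meant to close.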

    Another point of disagreement stems from how hydrogen producers purchase renewable electricity. If an electricity user procures energy from an existing solar farm, it’s simply increasing overall electricity demand and taking clean energy away from other users. But if the tax credits only go to electrolytic hydrogen producers that sign power purchase agreements with new renewable suppliers, they’re supporting clean electricity that wouldn’t otherwise have been contributing to the grid. This concept is known as “additionality.”

    The researchers analyzed previous studies that reached conflicting conclusions, and identified different interpretations of additionality underlying their methodologies. One interpretation of additionality is that new electrolytic hydrogen projects do not compete with nonhydrogen demand for renewable energy resources. The other assumes that they do compete for all newly deployed renewables — and, because of low-carbon hydrogen subsidies, the electrolyzers take priority.

    Using DOLPHYN, an open-source energy systems model, the researchers tested how these two interpretations of additionality (the “compete” and “noncompete” scenarios) affect the cost and emissions outcomes of the alternative time-matching requirements (hourly and annual) for grid-interconnected hydrogen production. They modeled two regional U.S. grids, in Texas and Florida, which represent the high and low ends of renewables deployment. They further tested how the hydrogen tax credits interact with four critical policy factors: renewable portfolio standards, constraints on renewables and energy storage deployment, limits on hydrogen electrolyzer capacity factors, and competition from natural gas-based hydrogen with carbon capture.

    They show that the different modeling interpretations of additionality are the primary factor explaining the vastly different estimates of emissions from electrolytic hydrogen under annual time matching.

    Getting policy right

    The paper concludes that the right way to implement the production tax credit qualifying requirements depends on whether you believe we live in a “compete” or “noncompete” world. But reality is not so binary.

    “What framework is more appropriate is going to change with time as we deploy more hydrogen and the grid decarbonizes, so therefore the policy has to be adaptive to those changes,” Mallapragada says. “It’s an evolving story that’s tied to what’s happening in the rest of the energy system, and in particular the electric grid, from both the technological and the policy perspective.”

    Today, renewables deployment is driven, in part, by binding factors, such as state renewable portfolio standards and corporate clean-energy commitments, as well as by purely market forces. Since the electrolytic hydrogen industry is so nascent, and today resembles a “noncompete” world, the researchers argue for starting with the less strict annual requirement. But as hydrogen demand for renewable electricity grows, and market competition drives an increasing quantity of renewables deployment, transitioning to hourly matching will be necessary to avoid high emissions.

    This phased approach necessitates deliberate, long-term planning from regulators. “If regulators make a decision and don’t outline when they’ll reassess that decision, they might never reassess that decision, so we might get locked into a bad policy,” Giovanniello explains. In particular, the paper highlights the risk of locking in an annual time-matching requirement that leads to significant emissions in the future.

    The researchers hope their findings will contribute to upcoming policy decisions around the Inflation Reduction Act’s tax credits. They started looking into this question around a year ago, making it a quick turnaround by academic standards.

    “There was definitely a sense to be timely in our analysis so as to be responsive to the needs of policy,” Mallapragada says.

    The researchers say the paper can also help policymakers understand the emissions impacts of companies procuring renewable energy credits to meet net-zero targets and electricity suppliers attempting to sell “green” electricity.

    “This question is relevant in a lot of different domains,” Schittekatte says. “Other popular examples are the emission impacts of data centers that procure green power, or even the emission impacts of your own electric car sourcing power from your rooftop solar and the grid. There are obviously differences based on the technology in question, but the underlying research question we’ve answered is the same. This is an extremely important topic for the energy transition.”

  • AI meets climate: MIT Energy and Climate Hack 2023

    The MIT Energy and Climate Hack brought together participants from myriad fields and disciplines to develop rapid, innovative solutions to one of the most complex challenges facing society today: the global energy and climate crisis. Hundreds of students from MIT and colleges across the globe convened on MIT’s campus and virtually for this year’s event, which was held Nov. 10-12.

    Established in 2013, the MIT Energy and Climate Hack has been a launchpad for innovative and sustainable solutions for a decade, serving as an annual reminder that exciting new ideas are always just around the corner.

    According to Claire Lorenzo, an MIT student organizer and communications director for this year’s Energy and Climate Hack, “There were a lot of people from a lot of places who showed up, both virtually and in person. It was encouraging to see how driven everyone was, how passionate they were about finding great solutions. You could see these ideas starting to form immediately.”

    On the first day, representatives from companies across numerous industries presented participants with their most pressing energy and climate-related challenges. Once the gathering broke into teams, participants had two days to “hack the challenge” they were assigned and present their solution to company representatives, fellow hackers, and judges.  

    The focus areas at this year’s event were energy markets, transportation, and farms and forests. Participating corporate sponsors included Google, Crusoe, Ironwood, Foothill Ventures, Koidra, Mitra Chem, Avangrid, Schneider Electric, First Solar, and Climate Ledger. 

    This year’s event also marked the first time that artificial intelligence emerged as a viable tool for developing creative climate solutions. Lorenzo observed, “I’m studying computer science, so exploring how AI could be harnessed to have a positive impact on the climate was particularly exciting for me. It can be applicable to virtually any domain. Like transportation, [with emissions] for example. In agriculture, too.”

    Energy and Climate Hack organizers identified four core AI applications for special consideration: accelerating discovery (shortening the development process while producing less waste), optimizing real-world solutions (using automation to increase efficiency), prediction (using AI to improve prediction algorithms), and processing unstructured data (using AI to analyze and scale large amounts of data efficiently).

    “If there was a shared sentiment among the participants, it would probably be the idea that there isn’t a singular solution to climate change,” says Lorenzo, “and that requires cooperation from various industries, leveraging knowledge and experience from numerous fields, to make a lasting impact.”

    After the initial round of presentations concluded, one team from each challenge advanced from the preliminary judging session to the final presentation round, where they pitched their solutions to a crowded room of attendees. Once the finalists had pitched their solutions, the judges deliberated over the entries and selected team Fenergy, which worked in the energy markets sector, as the winners. The team, consisting of Alessandro Fumi, Amal Nammouchi, Amaury De Bock, Cyrine Chaabani, and Robbie Lee V, said, “Our solution, Unbiased Cathode, enables researchers to assess the supply chain implications of battery materials before development begins, hence reducing the lab-to-production timeline.”

    “They created an LLM [large language model]-powered tool that allows innovative new battery technologies to be iterated and developed much more efficiently,” Lorenzo added.

    When asked what she will remember most about her first experience at the MIT Energy and Climate Hack, Lorenzo replied, “Having hope for the future. Hope from seeing the passion that so many people have to find a solution. Hope from seeing all of these individuals come so far to tackle this challenge and make a difference. If we continue to develop and implement solutions like these on a global level, I am hopeful.”

    Students interested in learning more about the MIT Energy and Climate Hackathon, or participating in next year’s Hack, can find more information on the event website.

  • Merging science and systems thinking to make materials more sustainable

    For Professor Elsa Olivetti, tackling a problem as large and complex as climate change requires not only lab research but also understanding the systems of production that power the global economy.

    Her career path reflects a quest to investigate materials at scales ranging from the microscopic to the mass-manufactured.

    “I’ve always known what questions I wanted to ask, and then set out to build the tools to help me ask those questions,” says Olivetti, the Jerry McAfee Professor in Engineering.

    Olivetti, who earned tenure in 2022 and was recently appointed associate dean of engineering, has sought to equip students with similar skills, whether in the classroom, in her lab group, or through the interdisciplinary programs she leads at MIT. Those efforts have earned her accolades including the Bose Award for Excellence in Teaching, a MacVicar Faculty Fellowship in 2021, and the McDonald Award for Excellence in Mentoring and Advising in 2023.

    “I think to make real progress in sustainability, materials scientists need to think in interdisciplinary, systems-level ways, but at a deep technical level,” Olivetti says. “Supporting my students so that’s something that a lot more people can do is very rewarding for me.”

    Her mission to make materials more sustainable also makes Olivetti grateful she’s at MIT, which has a long tradition of both interdisciplinary collaboration and technical know-how.

    “MIT’s core competencies are well-positioned for bold achievements in climate and sustainability — the deep expertise on the economics side, the frontier knowledge in science, the computational creativity,” Olivetti says. “It’s a really exciting time and place where the key ingredients for progress are simmering in transformative ways.”

    Answering the call

    The moment that set Olivetti on her life’s journey began when she was 8, with a knock at her door. Her parents were in the other room, so Olivetti opened the door and met an organizer for Greenpeace, a nonprofit that works to raise awareness of environmental issues.

    “I had a chat with that guy and got hooked on environmental concerns,” Olivetti says. “I still remember that conversation.”

    The interaction changed the way Olivetti thought about her place in the world, and her new perspective manifested itself in some unique ways. Her elementary school science fair projects became elaborate pursuits of environmental solutions involving burying various items in the backyard to test for biodegradability. There was also an awkward attempt at natural pesticide development, which led to a worm hatching in her bedroom.

    As an undergraduate at the University of Virginia, Olivetti gravitated toward classes in environmentalism and materials science.

    “There was a link between materials science and a broader, systems way of framing design for environment, and that just clicked for me in terms of the way I wanted to think about environmental problems — from the atom to the system,” Olivetti recalls.

    That interest led Olivetti to MIT for a PhD in 2001, where she studied the feasibility of new materials for lithium-ion batteries.

    “I really wanted to be thinking of things at a systems level, but I wanted to ground that in lab-based research,” Olivetti says. “I wanted an experiential experience in grad school, and that’s why I chose MIT’s program.”

    Whether it was her undergraduate studies, her PhD, or her ensuing postdoc work at MIT, Olivetti sought to learn new skills to continue bridging the gap between materials science and environmental systems thinking.

    “I think of it as, ‘Here’s how I can build up the ways I ask questions,’” Olivetti explains. “How do we design these materials while thinking about their implications as early as possible?”

    Since joining MIT’s faculty in 2014, Olivetti has developed computational models to measure the cost and environmental impact of new materials, explored ways to adopt more sustainable and circular supply chains, and evaluated potential materials limitations as lithium-ion battery production is scaled. That work helps companies increase their use of greener, recyclable materials and more sustainably dispose of waste.

    Olivetti believes the wide scope of her research gives the students in her lab a more holistic understanding of the life cycle of materials.

    “When the group started, each student was working on a different aspect of the problem — like on the natural language processing pipeline, or on recycling technology assessment, or beneficial use of waste — and now each student can link each of those pieces in their research,” Olivetti explains.

    Beyond her research, Olivetti also co-directs the MIT Climate and Sustainability Consortium (MCSC), which organizes coalitions around eight areas of sustainability. Each coalition brings together technical leaders at companies and researchers at MIT who work to accelerate the impact of MIT’s research by helping companies adopt innovative and more sustainable technologies.

    “Climate change mitigation and resilience is such a complex problem, and at MIT we have practice in working together across disciplines on many challenges,” Olivetti says. “It’s been exciting to lean on that culture and unlock ways to move forward more effectively.”

    Bridging divides

    Today, Olivetti tries to maximize the impact of her and her students’ research in materials industrial ecology by maintaining close ties to applications. In her research, this means working directly with aluminum companies to design alloys that could incorporate more scrap material or with nongovernmental organizations to incorporate agricultural residues in building products. In the classroom, that means bringing in people from companies to explain how they think about concepts like heat exchange or fluid flow in their products.

    “I enjoy trying to ground what students are learning in the classroom with what’s happening in the world,” Olivetti explains.

    Exposing students to industry is also a great way to help them think about their own careers. In her research lab, she’s started using the last 30 minutes of meetings to host talks from people working in national labs, startups, and larger companies to show students what they can do after their PhDs. The talks are similar to the Industry Seminar series Olivetti started that pairs undergraduate students with people working in areas like 3D printing, environmental consulting, and manufacturing.

    “It’s about helping students learn what they’re excited about,” Olivetti says.

    Whether in the classroom, lab, or at events held by organizations like MCSC, Olivetti believes collaboration is humanity’s most potent tool to combat climate change.

    “I just really enjoy building links between people,” Olivetti says. “Learning about people and meeting them where they are is a way that one can create effective links. It’s about creating the right playgrounds for people to think and learn.”

  • How to decarbonize the world, at scale

    The world in recent years has largely moved on from debates about the need to curb carbon emissions and is now focused on action — the development, implementation, and deployment of the technological, economic, and policy measures needed to spur reductions at the scale required by mid-century. That was the message Robert Stoner, the interim director of the MIT Energy Initiative (MITEI), delivered in his opening remarks at the 2023 MITEI Annual Research Conference.

    Attendees at the two-day conference included faculty members, researchers, industry and financial leaders, government officials, and students, as well as more than 50 online participants from around the world.

    “We are at an extraordinary inflection point. We have this narrow window in time to mitigate the worst effects of climate change by transforming our entire energy system and economy,” said Jonah Wagner, the chief strategist of the U.S. Department of Energy’s (DOE) Loan Programs Office, in one of the conference’s keynote speeches.

    Yet the solutions exist, he said. “Most of the technologies that we need to deploy to stay close to the international target of 1.5 degrees Celsius warming are proven and ready to go,” he said. “We have over 80 percent of the technologies we will need through 2030, and at least half of the technologies we will need through 2050.”

    For example, Wagner pointed to the newly commissioned advanced nuclear power plant near Augusta, Georgia — the first new nuclear reactor built in the United States in a generation, partly funded through DOE loans. “It will be the largest source of clean power in America,” he said. Though implementing all the needed technologies in the United States through mid-century will cost an estimated $10 trillion, or about $300 billion a year, most of that money will come from the private sector, he said.

    As the United States faces what he describes as “a tsunami of distributed energy production,” one key example of the strategy that’s needed going forward, he said, is encouraging the development of virtual power plants (VPPs). The U.S. power grid is growing, he said, and will add 200 gigawatts of peak demand by 2030. But rather than building new, large power plants to satisfy that need, much of the increase can be accommodated by VPPs, he said — which are “aggregations of distributed energy resources like rooftop solar with batteries, like electric vehicles (EVs) and chargers, like smart appliances, commercial and industrial loads on the grid that can be used together to help balance supply and demand just like a traditional power plant.” For example, by shifting the time of demand for some applications where the timing is not critical, such as recharging EVs late at night instead of right after getting home from work when demand may be peaking, the need for extra peak power can be alleviated.

    Such programs “offer a broad range of benefits,” including affordability, reliability and resilience, decarbonization, and emissions reductions. But implementing such systems on a wide scale requires some up-front help, he explained. Payment for consumers to enroll in programs that allow such time adjustments “is the majority of the cost” of establishing VPPs, he said, “and that means most of the money spent on VPPs goes back into the pockets of American consumers.” But to make that happen, there is a need for standardization of VPP operations “so that we are not recreating the wheel every single time we deploy a pilot or an effort with a utility.”

    The conference’s other keynote speaker, Anne White, the vice provost and associate vice president for research administration at MIT, cited devastating recent floods, wildfires, and many other extreme weather-related crises around the world that have been exacerbated by climate change. “We saw in myriad ways that energy concerns and climate concerns are one and the same,” she said. “So, we must urgently develop and scale low-carbon and zero-carbon solutions to prevent future warming. And we must do this with a practical, systems-based approach that considers efficiency, affordability, equity, and sustainability for how the world will meet its energy needs.”

    White added that at MIT, “we are mobilizing everything.” People at MIT feel a strong sense of responsibility for dealing with these global issues, she said, “and I think it’s because we believe we have tools that can really make a difference.”

    Among the specific promising technologies that have sprung from MIT’s labs, she pointed out, is the rapid development of fusion technology that led to MIT spinoff company Commonwealth Fusion Systems, which aims to build a demonstration unit of a practical fusion power reactor by the decade’s end. That’s an outcome of decades of research, she emphasized — the kinds of early-stage risky work that only academic labs, with help from government grants, can carry out.

    For example, she pointed to the more than 200 projects to which MITEI has provided seed funds of $150,000 each for two years, totaling over $28 million to date. Such early support is “a key part of producing the kind of transformative innovation we know we all need.” In addition, MIT’s The Engine has helped launch not only Commonwealth Fusion Systems, but also Form Energy, a company building a plant in West Virginia to manufacture advanced iron-air batteries for renewable energy storage, and many others.

    Following that theme of supporting early innovation, the conference featured two panels that highlighted the work of students and alumni and their energy-related startup companies. First, a startup showcase, moderated by Catarina Madeira, the director of MIT’s Startup Exchange, featured presentations from eight recent spinoff companies that are developing cutting-edge technologies that emerged from MIT research. These included:

    Aeroshield, developing a new kind of highly insulated window using a unique aerogel material;
    Sublime, developing a low-emissions concrete;
    Found Energy, developing a way to use recycled aluminum as a fuel;
    Veir, developing superconducting power lines;
    Emvolom, developing inexpensive green fuels from waste gases;
    Boston Metal, developing low-emissions production processes for steel and other metals;
    Transaera, developing a new kind of efficient air conditioning; and
    Carbon Recycling International, producing cheap hydrogen fuel and syngas.
    Later in the conference, a “student slam competition” featured presentations by 11 students who described results of energy projects they had worked on over the past summer. The projects were as diverse as analyzing opposition to wind farms in Maine, determining how best to allocate EV charging stations, optimizing bioenergy production, recycling lithium from batteries, encouraging adoption of heat pumps, and analyzing conflicts over energy project siting. Attendees voted on the quality of the student presentations, and electrical engineering and computer science student Tori Hagenlocker was declared the first-place winner for her talk on heat pump adoption.

    Students were also featured in a first-time addition to the conference: a panel discussion among five current or recent students, who gave their perspectives on today’s energy issues and priorities and described how they are working to make a difference. Andres Alvarez, a recent graduate in nuclear engineering, described his work with a startup focused on identifying and supporting early-stage ideas that have potential. Graduate student Dyanna Jaye of urban studies and planning spoke about her work helping to launch a group called the Sunrise Movement to make climate change a top priority for the country, and her work helping to develop the Green New Deal.

    Peter Scott, a graduate student in mechanical engineering who is studying green hydrogen production, spoke of the need for a “very drastic and rapid phaseout of current, existing fossil fuels” and a halt on developing new sources. Amar Dayal, an MBA candidate at the MIT Sloan School of Management, talked about the interplay between technology and policy, and the crucial role that legislation like the Inflation Reduction Act can have in enabling new energy technology to make the climb to commercialization. And Shreyaa Raghavan, a doctoral student in the Institute of Data, Systems, and Society, talked about the importance of multidisciplinary approaches to climate issues, including the important role of computer science. She added that MIT does well on this compared to other institutions, and “sustainability and decarbonization is a pillar in a lot of the different departments and programs that exist here.”

    Some recent recipients of MITEI’s Seed Fund grants reported on their progress in a panel discussion moderated by MITEI Executive Director Martha Broad. Seed grant recipient Ariel Furst, a professor of chemical engineering, pointed out that access to electricity is very much concentrated in the global North and that, overall, one in 10 people worldwide lacks access to electricity and some 2.5 billion people “rely on dirty fuels to heat their homes and cook their food,” with impacts on both health and climate. The solution her project is developing involves using DNA molecules combined with catalysts to passively convert captured carbon dioxide into ethylene, a widely used chemical feedstock and fuel. Kerri Cahoy, a professor of aeronautics and astronautics, described her work on a system for monitoring methane emissions and power-line conditions by using satellite-based sensors. She and her team found that power lines often begin emitting detectable broadband radio frequencies long before they actually fail in a way that could spark fires.

    Admir Masic, an associate professor of civil and environmental engineering, described work on mining the ocean for minerals such as magnesium hydroxide to be used for carbon capture. The process can turn carbon dioxide into solid material that is stable over geological times and potentially usable as a construction material. Kripa Varanasi, a professor of mechanical engineering, said that over the years MITEI seed funding helped some of his projects that “went on to become startup companies, and some of them are thriving.” He described ongoing work on a new kind of electrolyzer for green hydrogen production. He developed a system using bubble-attracting surfaces to increase the efficiency of bioreactors that generate hydrogen fuel.

    A series of panel discussions over the two days covered a range of topics related to technologies and policies that could make a difference in combating climate change. On the technological side, one panel led by Randall Field, the executive director of MITEI’s Future Energy Systems Center, looked at large, hard-to-decarbonize industrial processes. Antoine Allanore, a professor of metallurgy, described progress in developing innovative processes for producing iron and steel, among the world’s most used commodities, in a way that drastically reduces greenhouse gas emissions. Greg Wilson of JERA Americas described the potential for ammonia produced from renewable sources to substitute for natural gas in power plants, greatly reducing emissions. Yet-Ming Chiang, a professor in materials science and engineering, described ways to decarbonize cement production using a novel low-temperature process. And Guiyan Zang, a research scientist at MITEI, spoke of efforts to reduce the carbon footprint of producing ethylene, a major industrial chemical, by using an electrochemical process.

    Another panel, led by Jacopo Buongiorno, professor of nuclear science and engineering, explored the brightening future for expansion of nuclear power, including new, small, modular reactors that are finally emerging into commercial demonstration. “There is for the first time truly here in the U.S. in at least a decade-and-a-half, a lot of excitement, a lot of attention towards nuclear,” Buongiorno said. Nuclear power currently produces 45 to 50 percent of the nation’s carbon-free electricity, the panelists said, and with the first new nuclear power plant in decades now in operation, the stage is set for significant growth.

    Carbon capture and sequestration was the subject of a panel led by David Babson, the executive director of MIT’s Climate Grand Challenges program. MIT professors Betar Gallant and Kripa Varanasi and industry representatives Elisabeth Birkeland from Equinor and Luc Huyse from Chevron Technology Ventures described significant progress in various approaches to recovering carbon dioxide from power plant emissions, from the air, and from the ocean, and converting it into fuels, construction materials, or other valuable commodities.

    Some panel discussions also addressed the financial and policy side of the climate issue. A panel on geopolitical implications of the energy transition was moderated by MITEI Deputy Director of Policy Christopher Knittel, who said “energy has always been synonymous with geopolitics.” He said that as concerns shift from where to find the oil and gas to where is the cobalt and nickel and other elements that will be needed, “not only are we worried about where the deposits of natural resources are, but we’re going to be more and more worried about how governments are incentivizing the transition” to developing this new mix of natural resources. Panelist Suzanne Berger, an Institute professor, said “we’re now at a moment of unique openness and opportunity for creating a new American production system,” one that is much more efficient and less carbon-producing.

    One panel dealt with the investor’s perspective on the possibilities and pitfalls of emerging energy technologies. Moderator Jacqueline Pless, an assistant professor at MIT Sloan, said “there’s a lot of momentum now in this space. It’s a really ripe time for investing,” but the risks are real. “Tons of investment is needed in some very big and uncertain technologies.”

    The role that large, established companies can play in leading a transition to cleaner energy was addressed by another panel. Moderator J.J. Laukatis, MITEI’s director of member services, said that “the scale of this transformation is massive, and it will also be very different from anything we’ve seen in the past. We’re going to have to scale up complex new technologies and systems across the board, from hydrogen to EVs to the electrical grid, at rates we haven’t done before.” And doing so will require a concerted effort that includes industry as well as government and academia.

  • 3 Questions: What should scientists and the public know about nuclear waste?

    Many researchers see an expansion of nuclear power, which produces no greenhouse gas emissions from its power generation, as an essential component of strategies to combat global climate change. Yet there is still strong resistance to such expansion, and much of that is based on the issue of how to safely dispose of the resulting radioactive waste material. MIT recently convened a workshop to help nuclear engineers, policymakers, and academics learn about approaches to communicating accurate information about the management of nuclear waste to students and the public, in hopes of allaying fears and encouraging support for the development of new, safer nuclear power plants around the world.

    Organized by Haruko Wainwright, an MIT assistant professor of nuclear science and engineering and of civil and environmental engineering, the workshop included professors, researchers, industry representatives, and government officials, and was designed to emphasize the multidisciplinary nature of the issue. MIT News asked Wainwright to describe the workshop and its conclusions, which she reported on in a paper just published in the Journal of Environmental Radioactivity.

    Q: What was the main objective of this workshop?

    A: There is a growing concern that, in spite of much excitement about new nuclear reactor deployment and nuclear energy for tackling climate change, relatively little attention is being paid to the thorny question of long-term management of the spent fuel (waste) from these reactors. The government and industry have embraced consent-based siting approaches — that is, finding sites to store and dispose of nuclear waste through broad community participation, with equity and environmental justice considered. However, many of us in academia feel that those in the industry are missing key facts to communicate to the public.

    Understanding and managing nuclear waste requires multidisciplinary expertise in nuclear, civil, and chemical engineering, as well as environmental and earth sciences. For example, the amount of waste per se, which is always very small for nuclear systems, is not the only factor determining the environmental impacts, because some radionuclides in the waste are vastly more mobile than others, and thus can spread farther and more quickly. Nuclear engineers, environmental scientists, and others need to work together to predict the environmental impacts of radionuclides in the waste generated by the new reactors, and to develop waste isolation strategies for an extended time.

    We organized this workshop to ensure this collaborative approach is mastered from the start. A second objective was to develop a blueprint for educating next-generation engineers and scientists about nuclear waste and shaping a more broadly educated group of nuclear and general engineers.

    Q: What kinds of innovative teaching practices were discussed and recommended, and are there examples of these practices in action?

     A: Some participants teach project-based or simulation-based courses of real-world situations. For example, students are divided into several groups representing various stakeholders — such as the public, policymakers, scientists, and governments — and discuss the potential siting of a nuclear waste repository in a community. Such a course helps the students to consider the perspectives of different groups, understand a plurality of points of view, and learn how to communicate their ideas and concerns effectively. Other courses may ask students to synthesize key technical facts and numbers, and to develop a Congressional testimony statement or an opinion article for newspapers. 

    Q: What are some of the biggest misconceptions people have about nuclear waste, and how do you think these misconceptions can be addressed?

    A: The workshop participants agreed that broader, life-cycle perspectives are important. Within the nuclear energy life cycle, for example, people focus disproportionately on high-level radioactive waste or spent fuel, which has been highly regulated and well managed. Nuclear systems also produce secondary waste, including low-level waste and uranium mining waste, which gets less attention.

    The participants also believe that the nuclear industry has been exemplary in leading environmental and waste isolation science and technologies. Nuclear waste disposal strategies were developed in the 1950s, much earlier than those for other hazardous waste, which began to receive serious regulation only in the 1970s. In addition, current nuclear waste disposal practices consider compliance periods of isolation for thousands of years, while other hazardous waste disposal is not required to consider periods beyond 30 years, even though some waste, such as mercury or lead, has an essentially infinite longevity. Finally, there is relatively unregulated waste — such as CO2 from fossil energy, agricultural effluents, and other sources — that is released freely into the biosphere and is already affecting our environment. Yet many people remain more concerned about the relatively well-regulated nuclear waste than about all these unregulated sources.

    Interestingly, many engineers — even nuclear engineers — do not know these facts. We believe that we need to teach students not just cutting-edge technologies, but also broader perspectives, including the history of industries and regulations, as well as environmental science.

    At the same time, we need to move the nuclear community to think more holistically about waste and its environmental impacts from the early stages of design of nuclear systems. We should design new reactors from the “waste up.” We believe that the nuclear industry should continue to lead waste-management technologies and strategies, and also encourage other industries to adopt life-cycle approaches for their own waste to improve overall sustainability.

  • MIT design would harness 40 percent of the sun’s heat to produce clean hydrogen fuel

    MIT engineers aim to produce totally green, carbon-free hydrogen fuel with a new, train-like system of reactors that is driven solely by the sun.

    In a study appearing today in Solar Energy Journal, the engineers lay out the conceptual design for a system that can efficiently produce “solar thermochemical hydrogen.” The system harnesses the sun’s heat to directly split water and generate hydrogen — a clean fuel that can power long-distance trucks, ships, and planes while emitting no greenhouse gases.

    Today, hydrogen is largely produced through processes that involve natural gas and other fossil fuels, making the otherwise green fuel more of a “grey” energy source when considered from the start of its production to its end use. In contrast, solar thermochemical hydrogen, or STCH, offers a totally emissions-free alternative, as it relies entirely on renewable solar energy to drive hydrogen production. But so far, existing STCH designs have limited efficiency: Only about 7 percent of incoming sunlight is used to make hydrogen. The results so far have been low-yield and high-cost.

    In a big step toward realizing solar-made fuels, the MIT team estimates its new design could harness up to 40 percent of the sun’s heat to generate that much more hydrogen. The increase in efficiency could drive down the system’s overall cost, making STCH a potentially scalable, affordable option to help decarbonize the transportation industry.

    “We’re thinking of hydrogen as the fuel of the future, and there’s a need to generate it cheaply and at scale,” says the study’s lead author, Ahmed Ghoniem, the Ronald C. Crane Professor of Mechanical Engineering at MIT. “We’re trying to achieve the Department of Energy’s goal, which is to make green hydrogen by 2030, at $1 per kilogram. To improve the economics, we have to improve the efficiency and make sure most of the solar energy we collect is used in the production of hydrogen.”

    Ghoniem’s study co-authors are Aniket Patankar, first author and MIT postdoc; Harry Tuller, MIT professor of materials science and engineering; Xiao-Yu Wu of the University of Waterloo; and Wonjae Choi at Ewha Womans University in South Korea.

    Solar stations

    Similar to other proposed designs, the MIT system would be paired with an existing source of solar heat, such as a concentrated solar plant (CSP) — a circular array of hundreds of mirrors that collect and reflect sunlight to a central receiving tower. An STCH system then absorbs the receiver’s heat and directs it to split water and produce hydrogen. This process is very different from electrolysis, which uses electricity instead of heat to split water.

    At the heart of a conceptual STCH system is a two-step thermochemical reaction. In the first step, water in the form of steam is exposed to a metal. This causes the metal to grab oxygen from steam, leaving hydrogen behind. This metal “oxidation” is similar to the rusting of iron in the presence of water, but it occurs much faster. Once hydrogen is separated, the oxidized (or rusted) metal is reheated in a vacuum, which acts to reverse the rusting process and regenerate the metal. With the oxygen removed, the metal can be cooled and exposed to steam again to produce more hydrogen. This process can be repeated hundreds of times.
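
    Written schematically, the two steps form a closed loop over a generic metal oxide; the article does not name the specific material, so MO_x below is a placeholder:

    ```latex
    % Generic two-step solar thermochemical water-splitting cycle (schematic; MO_x is a placeholder oxide)
    \begin{align*}
    \text{Step 1, thermal reduction (hot, under vacuum):} \quad
      & \mathrm{MO_x} \longrightarrow \mathrm{MO_{x-\delta}} + \tfrac{\delta}{2}\,\mathrm{O_2} \\
    \text{Step 2, steam oxidation (cooler):} \quad
      & \mathrm{MO_{x-\delta}} + \delta\,\mathrm{H_2O} \longrightarrow \mathrm{MO_x} + \delta\,\mathrm{H_2}
    \end{align*}
    ```

    Because the metal oxide is regenerated at the end of each loop, the net effect of one full cycle is simply splitting water into hydrogen and oxygen using heat.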

    The MIT system is designed to optimize this process. The system as a whole resembles a train of box-shaped reactors running on a circular track. In practice, this track would be set around a solar thermal source, such as a CSP tower. Each reactor in the train would house the metal that undergoes the redox, or reversible rusting, process.

    Each reactor would first pass through a hot station, where it would be exposed to the sun’s heat at temperatures of up to 1,500 degrees Celsius. This extreme heat would effectively pull oxygen out of a reactor’s metal. That metal would then be in a “reduced” state — ready to grab oxygen from steam. For this to happen, the reactor would move to a cooler station at temperatures around 1,000 C, where it would be exposed to steam to produce hydrogen.

    Rust and rails

    Other similar STCH concepts have run up against a common obstacle: what to do with the heat released by the reduced reactor as it is cooled. Without recovering and reusing this heat, the system’s efficiency is too low to be practical.

    A second challenge has to do with creating an energy-efficient vacuum where metal can de-rust. Some prototypes generate a vacuum using mechanical pumps, though the pumps are too energy-intensive and costly for large-scale hydrogen production.

    To address these challenges, the MIT design incorporates several energy-saving workarounds. To recover most of the heat that would otherwise escape from the system, reactors on opposite sides of the circular track are allowed to exchange heat through thermal radiation; hot reactors get cooled while cool reactors get heated. This keeps the heat within the system. The researchers also added a second set of reactors that would circle around the first train, moving in the opposite direction. This outer train of reactors would operate at generally cooler temperatures and would be used to evacuate oxygen from the hotter inner train, without the need for energy-consuming mechanical pumps.

    These outer reactors would carry a second type of metal that can also easily oxidize. As they circle around, the outer reactors would absorb oxygen from the inner reactors, effectively de-rusting the original metal, without having to use energy-intensive vacuum pumps. Both reactor trains would run continuously and would generate separate streams of pure hydrogen and oxygen.

    The researchers carried out detailed simulations of the conceptual design, and found that it would significantly boost the efficiency of solar thermochemical hydrogen production, from 7 percent, as previous designs have demonstrated, to 40 percent.

    “We have to think of every bit of energy in the system, and how to use it, to minimize the cost,” Ghoniem says. “And with this design, we found that everything can be powered by heat coming from the sun. It is able to use 40 percent of the sun’s heat to produce hydrogen.”

    “If this can be realized, it could drastically change our energy future — namely, enabling hydrogen production, 24/7,” says Christopher Muhich, an assistant professor of chemical engineering at Arizona State University, who was not involved in the research. “The ability to make hydrogen is the linchpin to producing liquid fuels from sunlight.”

    In the next year, the team will be building a prototype of the system that they plan to test in concentrated solar power facilities at laboratories of the Department of Energy, which is currently funding the project.

    “When fully implemented, this system would be housed in a little building in the middle of a solar field,” Patankar explains. “Inside the building, there could be one or more trains each having about 50 reactors. And we think this could be a modular system, where you can add reactors to a conveyor belt, to scale up hydrogen production.”

    This work was supported by the Centers for Mechanical Engineering Research and Education at MIT and SUSTech.

  • New tools are available to help reduce the energy that AI models devour

    When searching for flights on Google, you may have noticed that each flight’s carbon-emission estimate is now presented next to its cost. It’s a way to inform customers about their environmental impact, and to let them factor this information into their decision-making.

    A similar kind of transparency doesn’t yet exist for the computing industry, despite its carbon emissions exceeding those of the entire airline industry. Escalating this energy demand are artificial intelligence models. Huge, popular models like ChatGPT signal a trend of large-scale artificial intelligence, boosting forecasts that predict data centers will draw up to 21 percent of the world’s electricity supply by 2030.

    The MIT Lincoln Laboratory Supercomputing Center (LLSC) is developing techniques to help data centers reel in energy use. Their techniques range from simple but effective changes, like power-capping hardware, to adopting novel tools that can stop AI training early on. Crucially, they have found that these techniques have a minimal impact on model performance.

    In the wider picture, their work is mobilizing green-computing research and promoting a culture of transparency. “Energy-aware computing is not really a research area, because everyone’s been holding on to their data,” says Vijay Gadepally, senior staff in the LLSC who leads energy-aware research efforts. “Somebody has to start, and we’re hoping others will follow.”

    Curbing power and cooling down

    Like many data centers, the LLSC has seen a significant uptick in the number of AI jobs running on its hardware. Noticing an increase in energy usage, computer scientists at the LLSC were curious about ways to run jobs more efficiently. Green computing is a principle of the center, which is powered entirely by carbon-free energy.

    Training an AI model — the process by which it learns patterns from huge datasets — requires using graphics processing units (GPUs), which are power-hungry hardware. As one example, the GPUs that trained GPT-3 (the precursor to ChatGPT) are estimated to have consumed 1,300 megawatt-hours of electricity, roughly equal to that used by 1,450 average U.S. households per month.
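
    That household comparison is easy to sanity-check. The short sketch below assumes an average U.S. household uses roughly 900 kilowatt-hours of electricity per month, a commonly cited figure that is not stated in the article:

    ```python
    # Rough sanity check of the GPT-3 training-energy comparison.
    # Assumption (not from the article): ~900 kWh per month for an average U.S. household.
    training_energy_mwh = 1300            # reported GPT-3 training energy, MWh
    household_kwh_per_month = 900         # assumed average household consumption, kWh/month

    households = training_energy_mwh * 1000 / household_kwh_per_month
    print(round(households))              # ~1444, in line with the ~1,450 figure quoted above
    ```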

    While most people seek out GPUs because of their computational power, manufacturers offer ways to limit the amount of power a GPU is allowed to draw. “We studied the effects of capping power and found that we could reduce energy consumption by about 12 percent to 15 percent, depending on the model,” Siddharth Samsi, a researcher within the LLSC, says.

    The trade-off for capping power is increasing task time — GPUs will take about 3 percent longer to complete a task, an increase Gadepally says is “barely noticeable” considering that models are often trained over days or even months. In one of their experiments in which they trained the popular BERT language model, limiting GPU power to 150 watts saw a two-hour increase in training time (from 80 to 82 hours) but saved the equivalent of a U.S. household’s week of energy.

    The team then built software that plugs this power-capping capability into the widely used scheduler system, Slurm. The software lets data center owners set limits across their system or on a job-by-job basis.
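
    The article does not say how the LLSC software is implemented, but the underlying mechanism is straightforward to sketch. The snippet below is a hypothetical illustration, not the LLSC tool: a helper that a Slurm job prolog could call to cap the power of the GPUs allocated to a job, using NVIDIA’s nvidia-smi power-limit setting.

    ```python
    # Hypothetical sketch (not the LLSC tool): cap the power draw of a job's GPUs.
    # Assumes Slurm exposes allocated GPU indices via SLURM_JOB_GPUS and that the node
    # permits power-limit changes through `nvidia-smi -pl` (typically requires root).
    import os
    import subprocess

    def cap_job_gpus(power_watts: int = 150) -> None:
        gpu_ids = os.environ.get("SLURM_JOB_GPUS", "")
        for gpu in filter(None, gpu_ids.split(",")):
            subprocess.run(["nvidia-smi", "-i", gpu, "-pl", str(power_watts)], check=True)

    if __name__ == "__main__":
        cap_job_gpus(150)  # the 150 W cap used in the BERT experiment described above
    ```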

    “We can deploy this intervention today, and we’ve done so across all our systems,” Gadepally says.

    Side benefits have arisen, too. Since putting power constraints in place, the GPUs on LLSC supercomputers have been running about 30 degrees Fahrenheit cooler and at a more consistent temperature, reducing stress on the cooling system. Running the hardware cooler can potentially also increase reliability and service lifetime. They can now consider delaying the purchase of new hardware — reducing the center’s “embodied carbon,” or the emissions created through the manufacturing of equipment — until the efficiencies gained by using new hardware offset this aspect of the carbon footprint. They’re also finding ways to cut down on cooling needs by strategically scheduling jobs to run at night and during the winter months.

    “Data centers can use these easy-to-implement approaches today to increase efficiencies, without requiring modifications to code or infrastructure,” Gadepally says.

    Taking this holistic look at a data center’s operations to find opportunities to cut down can be time-intensive. To make this process easier for others, the team — in collaboration with Professor Devesh Tiwari and Baolin Li at Northeastern University — recently developed and published a comprehensive framework for analyzing the carbon footprint of high-performance computing systems. System practitioners can use this analysis framework to gain a better understanding of how sustainable their current system is and consider changes for next-generation systems.  

    Adjusting how models are trained and used

    On top of making adjustments to data center operations, the team is devising ways to make AI-model development more efficient.

    When training models, AI developers often focus on improving accuracy, and they build upon previous models as a starting point. To achieve the desired output, they have to figure out what parameters to use, and getting it right can take testing thousands of configurations. This process, called hyperparameter optimization, is one area LLSC researchers have found ripe for cutting down energy waste. 

    “We’ve developed a model that basically looks at the rate at which a given configuration is learning,” Gadepally says. Given that rate, their model predicts the likely performance. Underperforming models are stopped early. “We can give you a very accurate estimate early on that the best model will be in this top 10 of 100 models running,” he says.
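
    A minimal version of that idea is easy to sketch. The example below is illustrative only, not the LLSC predictor: after a few epochs it ranks configurations by their validation scores so far and keeps training only the most promising fraction, stopping the rest early.

    ```python
    # Illustrative early-stopping sketch (not the LLSC model): prune hyperparameter
    # configurations whose learning curves lag the current leaders.
    import random

    def prune_configs(curves, keep_fraction=0.1):
        """curves maps config id -> list of validation scores so far (higher is better).
        Returns the set of config ids worth continuing to train."""
        latest = {cfg: scores[-1] for cfg, scores in curves.items() if scores}
        n_keep = max(1, int(len(latest) * keep_fraction))
        ranked = sorted(latest, key=latest.get, reverse=True)
        return set(ranked[:n_keep])

    # Toy usage: 100 configurations checked after three epochs; only the ~10 most
    # promising keep consuming GPU hours, and the rest are stopped early.
    random.seed(0)
    curves = {f"cfg{i}": [random.random() for _ in range(3)] for i in range(100)}
    survivors = prune_configs(curves)
    print(len(survivors), "of", len(curves), "configurations continue training")
    ```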

    In their studies, this early stopping led to dramatic savings: an 80 percent reduction in the energy used for model training. They’ve applied this technique to models developed for computer vision, natural language processing, and material design applications.

    “In my opinion, this technique has the biggest potential for advancing the way AI models are trained,” Gadepally says.

    Training is just one part of an AI model’s emissions. The largest contributor to emissions over time is model inference, or the process of running the model live, like when a user chats with ChatGPT. To respond quickly, these models use redundant hardware, running all the time, waiting for a user to ask a question.

    One way to improve inference efficiency is to use the most appropriate hardware. Also with Northeastern University, the team created an optimizer that matches a model with the most carbon-efficient mix of hardware, such as high-power GPUs for the computationally intense parts of inference and low-power central processing units (CPUs) for the less-demanding aspects. This work recently won the best paper award at the International ACM Symposium on High-Performance Parallel and Distributed Computing.

    Using this optimizer can decrease energy use by 10-20 percent while still meeting the same “quality-of-service target” (how quickly the model can respond).

    This tool is especially helpful for cloud customers, who lease systems from data centers and must select hardware from among thousands of options. “Most customers overestimate what they need; they choose over-capable hardware just because they don’t know any better,” Gadepally says.

    Growing green-computing awareness

    The energy saved by implementing these interventions also reduces the associated costs of developing AI, often by a one-to-one ratio. In fact, cost is usually used as a proxy for energy consumption. Given these savings, why aren’t more data centers investing in green techniques?

    “I think it’s a bit of an incentive-misalignment problem,” Samsi says. “There’s been such a race to build bigger and better models that almost every secondary consideration has been put aside.”

    They point out that while some data centers buy renewable-energy credits, these renewables aren’t enough to cover the growing energy demands. The majority of electricity powering data centers comes from fossil fuels, and water used for cooling is contributing to stressed watersheds. 

    Hesitancy may also exist because systematic studies on energy-saving techniques haven’t been conducted. That’s why the team has been pushing their research in peer-reviewed venues in addition to open-source repositories. Some big industry players, like Google DeepMind, have applied machine learning to increase data center efficiency but have not made their work available for others to deploy or replicate. 

    Top AI conferences are now pushing for ethics statements that consider how AI could be misused. The team sees the climate aspect as an AI ethics topic that has not yet been given much attention, but this also appears to be slowly changing. Some researchers are now disclosing the carbon footprint of training the latest models, and industry is showing a shift in energy transparency too, as in this recent report from Meta AI.

    They also acknowledge that transparency is difficult without tools that can show AI developers their consumption. Reporting is on the LLSC roadmap for this year. They want to be able to show every LLSC user, for every job, how much energy they consume and how this amount compares to others, similar to home energy reports.

    Part of this effort requires working more closely with hardware manufacturers to make getting these data off hardware easier and more accurate. If manufacturers can standardize the way the data are read out, then energy-saving and reporting tools can be applied across different hardware platforms. A collaboration is underway between the LLSC researchers and Intel to work on this very problem.
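
    Some of this data can already be read out today, but only through vendor-specific interfaces. The snippet below uses NVIDIA’s NVML counters, accessed in Python via the pynvml package, to read a GPU’s instantaneous power draw and cumulative energy; it illustrates the current per-vendor readout rather than the standardized interface the collaboration is working toward, and the cumulative energy counter is present only on newer GPU generations.

        # Reading energy data from one vendor's hardware: NVIDIA's NVML counters via
        # the pynvml package. Illustrates today's per-vendor readout, not a standard.
        import pynvml

        pynvml.nvmlInit()
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU on the node

        # NVML reports power in milliwatts and cumulative energy in millijoules
        # (the energy counter is available only on newer GPU generations).
        power_watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
        energy_kwh = pynvml.nvmlDeviceGetTotalEnergyConsumption(handle) / 1000.0 / 3.6e6

        print(f"Instantaneous draw: {power_watts:.1f} W")
        print(f"Energy since driver load: {energy_kwh:.3f} kWh")

        pynvml.nvmlShutdown()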

    Even AI developers who are aware of AI’s intense energy needs can’t do much on their own to curb that energy use. The LLSC team wants to help other data centers apply these interventions and provide users with energy-aware options. Their first partnership is with the U.S. Air Force, a sponsor of this research, which operates thousands of data centers. Applying these techniques can make a significant dent in their energy consumption and cost.

    “We’re putting control into the hands of AI developers who want to lessen their footprint,” Gadepally says. “Do I really need to gratuitously train unpromising models? Am I willing to run my GPUs slower to save energy? To our knowledge, no other supercomputing center is letting you consider these options. Using our tools, today, you get to decide.”

    Visit this webpage to see the group’s publications related to energy-aware computing and findings described in this article.

  • in

    Tracking US progress on the path to a decarbonized economy

    Investments in new technologies and infrastructure that help reduce greenhouse gas emissions — everything from electric vehicles to heat pumps — are growing rapidly in the United States. Now, a new database enables these investments to be comprehensively monitored in real time, helping to assess the efficacy of policies designed to spur clean investments and address climate change.

    The Clean Investment Monitor (CIM), developed by a team at MIT’s Center for Energy and Environmental Policy Research (CEEPR) led by Institute Innovation Fellow Brian Deese and in collaboration with the Rhodium Group, an independent research firm, provides a timely and methodologically consistent tracking of all announced public and private investments in the manufacture and deployment of clean technologies and infrastructure in the U.S. The CIM offers a means of assessing the country’s progress in transitioning to a cleaner economy and reducing greenhouse gas emissions.

    In the year from July 1, 2022, to June 30, 2023, data from the CIM show, clean investments nationwide totaled $213 billion. To put that figure in perspective, 18 U.S. states each have a GDP lower than $213 billion.

    “As clean technology becomes a larger and larger sector in the United States, its growth will have far-reaching implications — for our economy, for our leadership in innovation, and for reducing our greenhouse gas emissions,” says Deese, who served as the director of the White House National Economic Council from January 2021 to February 2023. “The Clean Investment Monitor is a tool designed to help us understand and assess this growth in a real-time, comprehensive way. Our hope is that the CIM will enhance research and improve public policies designed to accelerate the clean energy transition.”

    Launched on Sept. 13, the CIM shows that the $213 billion invested over the last year reflects a 37 percent increase from the $155 billion invested in the previous 12-month period. According to CIM data, the fastest growth has been in the manufacturing sector, where investment grew 125 percent year-on-year, particularly in electric vehicle and solar manufacturing.

    Beyond manufacturing, the CIM also provides data on investment in clean energy production, such as solar, wind, and nuclear; industrial decarbonization, such as sustainable aviation fuels; and retail investments by households and businesses in technologies like heat pumps and zero-emission vehicles. The CIM’s data goes back to 2018, providing a baseline before the passage of the legislation in 2021 and 2022.

    “We’re really excited to bring MIT’s analytical rigor to bear to help develop the Clean Investment Monitor,” says Christopher Knittel, the George P. Shultz Professor of Energy Economics at the MIT Sloan School of Management and CEEPR’s faculty director. “Bolstered by Brian’s keen understanding of the policy world, this tool is poised to become the go-to reference for anyone looking to understand clean investment flows and what drives them.”

    In 2021 and 2022, the U.S. federal government enacted a series of new laws that together aimed to catalyze the largest-ever national investment in clean energy technologies and related infrastructure. The Clean Investment Monitor can also be used to track how well the legislation is living up to expectations.

    The three pieces of federal legislation — the Infrastructure Investment and Jobs Act, enacted in 2021, and the Inflation Reduction Act (IRA) and the CHIPS and Science Act, both enacted in 2022 — provide grants, loans, loan guarantees, and tax incentives to spur investments in technologies that reduce greenhouse gas emissions.

    The effectiveness of the legislation in hastening the U.S. transition to a clean economy will be crucial in determining whether the country reaches its goal of reducing greenhouse gas emissions 50 to 52 percent below 2005 levels by 2030. An analysis earlier this year estimated that the IRA will lead to a 43 to 48 percent decline in economywide emissions below 2005 levels by 2035, compared with 27 to 35 percent in a reference scenario without the law’s provisions, helping bring the U.S. goal closer within reach.

    The Clean Investment Monitor is available at cleaninvestmentmonitor.org.