More stories

  • MIT Energy Initiative launches Data Center Power Forum

    With global power demand from data centers expected to more than double by 2030, the MIT Energy Initiative (MITEI) in September launched an effort that brings together MIT researchers and industry experts to explore innovative solutions for powering the data-driven future. At its annual research conference, MITEI announced the Data Center Power Forum, a targeted research effort for MITEI member companies interested in addressing the challenges of data center power demand. The Data Center Power Forum builds on lessons from MITEI’s May 2025 symposium on the energy needed to power the expansion of artificial intelligence (AI), as well as focus panels related to data centers at the fall 2024 research conference.

    In the United States, data centers consumed 4 percent of the country’s electricity in 2023, with demand expected to increase to 9 percent by 2030, according to the Electric Power Research Institute. Much of the growth in demand comes from the increasing use of AI, which is placing an unprecedented strain on the electric grid. This surge in demand presents a serious challenge for the technology and energy sectors, government policymakers, and everyday consumers, who may see their electric bills skyrocket as a result.

    “MITEI has long supported research on ways to produce more efficient and cleaner energy and to manage the electric grid. In recent years, MITEI has also funded dozens of research projects relevant to data center energy issues. Building on this history and knowledge base, MITEI’s Data Center Power Forum is convening a specialized community of industry members who have a vital stake in the sustainable growth of AI and the acceleration of solutions for powering data centers and expanding the grid,” says William H. Green, the director of MITEI and the Hoyt C. Hottel Professor of Chemical Engineering.

    MITEI’s mission is to advance zero- and low-carbon solutions to expand energy access and mitigate climate change. MITEI works with companies from across the energy innovation chain, including the infrastructure, automotive, electric power, energy, natural resources, and insurance sectors. MITEI member companies have expressed strong interest in the Data Center Power Forum and are committing to support focused research on a wide range of energy issues associated with data center expansion, Green says.

    MITEI’s Data Center Power Forum will provide its member companies with reliable insights into energy supply, grid load operations and management, the built environment, and electricity market design and regulatory policy for data centers. The forum complements MIT’s deep expertise in adjacent topics such as low-power processors, efficient algorithms, task-specific AI, photonic devices, quantum computing, and the societal consequences of data center expansion. As part of the forum, MITEI’s Future Energy Systems Center is funding projects relevant to data center energy in its upcoming proposal cycles. MITEI Research Scientist Deep Deka has been named the forum’s program manager.

    “Figuring out how to meet the power demands of data centers is a complicated challenge. Our research is coming at this from multiple directions: looking at ways to expand transmission capacity within the electrical grid in order to bring power to where it is needed; ensuring that the quality of electrical service for existing users is not diminished when new data centers come online; and shifting computing tasks to times and places when and where energy is available on the grid,” says Deka.
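    Deka’s last point, shifting flexible computing in time, can be made concrete with a toy scheduler. The sketch below is illustrative only (the hourly carbon-intensity forecast and the job length are invented inputs, not MITEI data): it picks the contiguous window with the cleanest average electricity for a deferrable job.

    ```python
    # Toy temporal load shifting: start a deferrable job (e.g., a batch AI
    # training run) in the contiguous window with the lowest average grid
    # carbon intensity, given an hourly forecast.

    def best_start_hour(forecast_gco2_per_kwh: list[float], job_hours: int) -> int:
        """Return the start hour of the cleanest contiguous window."""
        if job_hours > len(forecast_gco2_per_kwh):
            raise ValueError("job is longer than the forecast horizon")
        windows = (
            (sum(forecast_gco2_per_kwh[h:h + job_hours]) / job_hours, h)
            for h in range(len(forecast_gco2_per_kwh) - job_hours + 1)
        )
        _, start = min(windows)
        return start

    # Invented 24-hour forecast; overnight wind makes the early hours cleanest.
    forecast = [420, 300, 280, 260, 270, 310, 450, 520, 560, 540, 500, 480,
                460, 470, 490, 530, 590, 620, 600, 550, 510, 470, 440, 430]
    print(best_start_hour(forecast, job_hours=4))  # -> 1 (1 a.m. start)
    ```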
    MITEI currently sponsors substantial research on data center energy topics across several MIT departments. The existing portfolio includes more than a dozen projects related to data centers, including low- or zero-carbon solutions for energy supply and infrastructure, electrical grid management, and electricity market policy. MIT researchers funded through MITEI’s industry consortium are also designing more energy-efficient power electronics and processors, and investigating behind-the-meter low- and no-carbon power plants and energy storage. MITEI-supported experts are studying how to use AI to optimize electrical distribution and the siting of data centers, and conducting techno-economic analyses of data center power schemes. MITEI’s consortium projects are also bringing fresh perspectives to data center cooling challenges and considering policy approaches to balance the interests of shareholders. By drawing together industry stakeholders from across the AI and grid value chain, the Data Center Power Forum enables a richer dialogue about solutions to power, grid, and carbon management problems in a noncommercial and collaborative setting.

    “The opportunity to meet and to hold discussions on key data center challenges with other forum members from different sectors, as well as with MIT faculty members and research scientists, is a unique benefit of this MITEI-led effort,” Green says.

    MITEI addressed the issue of data center power needs with its company members during its fall 2024 annual research conference in a panel session titled “The extreme challenge of powering data centers in a decarbonized way.” MITEI Director of Research Randall Field led a discussion with representatives from the large technology companies Google and Microsoft, known as “hyperscalers,” as well as Madrid-based infrastructure developer Ferrovial S.E. and utility company Exelon Corp. Another conference session addressed the related topic “Energy storage and grid expansion.” This past spring, MITEI focused its annual Spring Symposium on data centers, hosting faculty members and researchers from MIT and other universities, business leaders, and a representative of the Federal Energy Regulatory Commission for a full day of sessions on the topic “AI and energy: Peril and promise.”

  • The brain power behind sustainable AI

    How can you use science to build a better gingerbread house?

    That was something Miranda Schwacke spent a lot of time thinking about. The MIT graduate student in the Department of Materials Science and Engineering (DMSE) is part of Kitchen Matters, a group of grad students who use food and kitchen tools to explain scientific concepts through short videos and outreach events. Past topics have included why chocolate “seizes,” or becomes difficult to work with when melting (spoiler: water gets in), and how to make isomalt, the sugar glass that stunt performers jump through in action movies.

    Two years ago, when the group was making a video on how to build a structurally sound gingerbread house, Schwacke scoured cookbooks for a variable that would produce the most dramatic difference in the cookies.

    “I was reading about what determines the texture of cookies, and then tried several recipes in my kitchen until I got two gingerbread recipes that I was happy with,” Schwacke says.

    She focused on butter, which contains water that turns to steam at high baking temperatures, creating air pockets in cookies. Schwacke predicted that decreasing the amount of butter would yield denser gingerbread, strong enough to hold together as a house.

    “This hypothesis is an example of how changing the structure can influence the properties and performance of a material,” Schwacke said in the eight-minute video.

    That same curiosity about materials properties and performance drives her research on the high energy cost of computing, especially for artificial intelligence. Schwacke develops new materials and devices for neuromorphic computing, which mimics the brain by processing and storing information in the same place. She studies electrochemical ionic synapses — tiny devices that can be “tuned” to adjust conductivity, much like neurons strengthening or weakening connections in the brain.

    “If you look at AI in particular — to train these really large models — that consumes a lot of energy. And if you compare that to the amount of energy that we consume as humans when we’re learning things, the brain consumes a lot less energy,” Schwacke says. “That’s what led to this idea to find more brain-inspired, energy-efficient ways of doing AI.”

    Her advisor, Bilge Yildiz, underscores the point: one reason the brain is so efficient is that data doesn’t need to be moved back and forth.

    “In the brain, the connections between our neurons, called synapses, are where we process information. Signal transmission is there. It is processed, programmed, and also stored in the same place,” says Yildiz, the Breene M. Kerr (1951) Professor in the Department of Nuclear Science and Engineering and DMSE. Schwacke’s devices aim to replicate that efficiency.
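    Why does computing where the memory lives save energy? A toy picture: if an array of tunable synapses stores a layer’s weights as conductances, then applying input voltages performs the layer’s weighted sums in place, via Ohm’s and Kirchhoff’s laws, with no weights shuttled to a separate processor. A schematic sketch of that idea (idealized numbers, not a model of the group’s magnesium devices):

    ```python
    import numpy as np

    # Idealized crossbar of electrochemical ionic synapses: cell (i, j) stores
    # a weight as a conductance G[i, j]. Driving the rows with voltages v
    # yields column currents G.T @ v -- the multiply-accumulate happens in the
    # same place the weights are stored.

    rng = np.random.default_rng(0)
    G = rng.uniform(0.1, 1.0, size=(4, 3))    # conductances (arbitrary units)
    v = np.array([0.2, 0.5, 0.1, 0.7])        # input voltages

    print("column currents:", G.T @ v)        # physical weighted sums

    # "Tuning" one synapse: inserting or removing ions nudges its conductance
    # in small, repeatable steps, analogous to a weight update during learning.
    G[2, 1] = np.clip(G[2, 1] + 0.05, 0.1, 1.0)
    ```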
    Scientific roots

    The daughter of a marine biologist mom and an electrical engineer dad, Schwacke was immersed in science from a young age. Science was “always a part of how I understood the world.”

    “I was obsessed with dinosaurs. I wanted to be a paleontologist when I grew up,” she says. But her interests broadened. At her middle school in Charleston, South Carolina, she joined a FIRST Lego League robotics competition, building robots to complete tasks like pushing or pulling objects. “My parents, my dad especially, got very involved in the school team and helping us design and build our little robot for the competition.”

    Her mother, meanwhile, studied how dolphin populations are affected by pollution for the National Oceanic and Atmospheric Administration. That had a lasting impact.

    “That was an example of how science can be used to understand the world, and also to figure out how we can improve the world,” Schwacke says. “And that’s what I’ve always wanted to do with science.”

    Her interest in materials science came later, in her high school magnet program. There, she was introduced to the interdisciplinary subject, a blend of physics, chemistry, and engineering that studies the structure and properties of materials and uses that knowledge to design new ones.

    “I always liked that it goes from this very basic science, where we’re studying how atoms are ordering, all the way up to these solid materials that we interact with in our everyday lives — and how that gives them their properties that we can see and play with,” Schwacke says.

    As a senior, she participated in a research program with a thesis project on dye-sensitized solar cells, a low-cost, lightweight solar technology that uses dye molecules to absorb light and generate electricity.

    “What drove me was really understanding, this is how we go from light to energy that we can use — and also seeing how this could help us with having more renewable energy sources,” Schwacke says.

    After high school, she headed across the country to Caltech. “I wanted to try a totally new place,” she says. There she studied materials science, including nanostructured materials thousands of times thinner than a human hair. She focused on materials properties and microstructure — the tiny internal structure that governs how materials behave — which led her to electrochemical systems like batteries and fuel cells.

    AI energy challenge

    At MIT, she continued exploring energy technologies. She met Yildiz during a Zoom meeting in her first year of graduate school, in fall 2020, when the campus was still operating under strict Covid-19 protocols. Yildiz’s lab studies how charged atoms, or ions, move through materials in technologies like fuel cells, batteries, and electrolyzers.

    The lab’s research into brain-inspired computing fired Schwacke’s imagination, but she was equally drawn to Yildiz’s way of talking about science.

    “It wasn’t based on jargon and emphasized a very basic understanding of what was going on — that ions are going here, and electrons are going here — to understand fundamentally what’s happening in the system,” Schwacke says.

    That mindset shaped her approach to research. Her early projects focused on the properties these devices need to work well — fast operation, low energy use, and compatibility with semiconductor technology — and on using magnesium ions instead of hydrogen, which can escape into the environment and make devices unstable.

    Her current project, the focus of her PhD thesis, centers on understanding how the insertion of magnesium ions into tungsten oxide, a metal oxide whose electrical properties can be precisely tuned, changes its electrical resistance. In these devices, tungsten oxide serves as a channel layer, where resistance controls signal strength, much as synapses regulate signals in the brain.

    “I am trying to understand exactly how these devices change the channel conductance,” Schwacke says.

    Schwacke’s research was recognized with a MathWorks Fellowship from the School of Engineering in 2023 and 2024. The fellowship supports graduate students who use tools like MATLAB or Simulink in their work; Schwacke applied MATLAB for critical data analysis and visualization.

    Yildiz describes Schwacke’s research as a novel step toward solving one of AI’s biggest challenges.

    “This is electrochemistry for brain-inspired computing,” Yildiz says. “It’s a new context for electrochemistry, but also with an energy implication, because the energy consumption of computing is increasing unsustainably. We have to find new ways of doing computing with much lower energy, and this is one way that can help us move in that direction.”

    Like any pioneering work, it comes with challenges, especially in bridging the concepts of electrochemistry and semiconductor physics.

    “Our group comes from a solid-state chemistry background, and when we started this work looking into magnesium, no one had used magnesium in these kinds of devices before,” Schwacke says. “So we were looking at the magnesium battery literature for inspiration and different materials and strategies we could use. When I started this, I wasn’t just learning the language and norms for one field — I was trying to learn it for two fields, and also translate between the two.”

    She also grapples with a challenge familiar to all scientists: how to make sense of messy data.

    “The main challenge is being able to take my data and know that I’m interpreting it in a way that’s correct, and that I understand what it actually means,” Schwacke says.

    She overcomes hurdles by collaborating closely with colleagues across fields, including neuroscience and electrical engineering, and sometimes by just making small changes to her experiments and watching what happens next.

    Community matters

    Schwacke is not just active in the lab. In Kitchen Matters, she and her fellow DMSE grad students set up booths at local events like the Cambridge Science Fair and Steam It Up, an after-school program with hands-on activities for kids.

    “We did ‘pHun with Food’ with ‘fun’ spelled with a pH, so we had cabbage juice as a pH indicator,” Schwacke says. “We let the kids test the pH of lemon juice and vinegar and dish soap, and they had a lot of fun mixing the different liquids and seeing all the different colors.”

    She has also served as the social chair and treasurer for DMSE’s graduate student group, the Graduate Materials Council. As an undergraduate at Caltech, she led workshops in science and technology for Robogals, a student-run group that encourages young women to pursue careers in science, and helped students apply for the school’s Summer Undergraduate Research Fellowships.

    For Schwacke, these experiences sharpened her ability to explain science to different audiences, a skill she sees as vital whether she’s presenting at a kids’ fair or at a research conference.

    “I always think, where is my audience starting from, and what do I need to explain before I can get into what I’m doing so that it’ll all make sense to them?” she says.

    Schwacke sees the ability to communicate as central to building community, which she considers an important part of doing research. “It helps with spreading ideas. It always helps to get a new perspective on what you’re working on,” she says. “I also think it keeps us sane during our PhD.”

    Yildiz sees Schwacke’s community involvement as an important part of her resume. “She’s doing all these activities to motivate the broader community to do research, to be interested in science, to pursue science and technology, but that ability will help her also progress in her own research and academic endeavors.”

    After her PhD, Schwacke wants to take that ability to communicate with her to academia, where she’d like to inspire the next generation of scientists and engineers. Yildiz has no doubt she’ll thrive.

    “I think she’s a perfect fit,” Yildiz says. “She’s brilliant, but brilliance by itself is not enough. She’s persistent, resilient. You really need those on top of that.”

  • Solar energy startup Active Surfaces wins inaugural PITCH.nano competition

    The inaugural PITCH.nano competition, hosted by MIT.nano’s hard technology accelerator START.nano, provided a platform for early-stage startups to present their innovations to MIT and Boston’s hard-tech startup ecosystem.

    The grand prize winner was Active Surfaces, a startup generating renewable energy exactly where it will be used through lightweight, flexible solar cells. Active Surfaces says its ultralight, peel-and-stick panels will reimagine how we deploy photovoltaics in the built environment.

    Shiv Bhakta MBA ’24, SM ’24, CEO and co-founder, delivered the winning presentation to an audience of entrepreneurs, investors, startup incubators, and industry partners at PITCH.nano on Sept. 30. Active Surfaces received the grand prize of 25,000 nanoBucks — equivalent to $25,000 that can be spent at MIT.nano facilities.

    “Why has MIT.nano chosen to embrace startup activity as much as we do?” asked Vladimir Bulović, MIT.nano faculty director, at the start of PITCH.nano. “We need to make sure that entrepreneurs can be born out of MIT and can take the next technical ideas developed in the lab out into the market, so they can make the next millions of jobs that the world needs.”

    The journey of a hard-tech entrepreneur takes at least 10 years and 100 million dollars, explained Bulović. By linking open tool facilities to startup needs, MIT.nano can make those first few years a little easier, bringing more startups to the scale-up stage.

    “Getting VCs [venture capitalists] to invest in hard tech is challenging,” explained Joyce Wu SM ’00, PhD ’07, START.nano program manager. “Through START.nano, we provide discounted access to MIT.nano’s cleanrooms, characterization tools, and laboratories for startups to build their prototypes and attract investment earlier and with reduced spend. Our goal is to support the translation of fundamental research to real-world solutions in hard tech.”

    In addition to discounted access to tools, START.nano helps early-stage companies become part of the MIT and Cambridge innovation network. PITCH.nano, inspired by the MIT 100K Competition, was launched as a new opportunity this year to introduce these hard-tech ventures to the investor and industry community. Twelve startups delivered presentations that were evaluated by a panel of four judges who are, themselves, venture capitalists and startup founders.

    “It is amazing to see the quality, diversity, and ingenuity of this inspiring group of startups,” said judge Brendan Smith PhD ’18, CEO of SiTration, a company that was part of the inaugural START.nano cohort. “Together, these founders are demonstrating the power of fundamental hard-tech innovation to solve the world’s greatest challenges, in a way that is both scalable and profitable.”

    The startups that presented at PITCH.nano spanned a wide range of focus areas. In climate, energy, and materials, the audience heard from Addis Energy, Copernic Catalysts, Daqus Energy, VioNano Innovations, Active Surfaces, and Metal Fuels; in life sciences, Acorn Genetics, Advanced Silicon Group, and BioSens8; and in quantum and photonics, Qunett, nOhm Devices, and Brightlight Photonics. The common thread for these companies: they are all using MIT.nano to advance their innovations.

    “MIT.nano has been instrumental in compressing our time to market, especially as a company building a novel, physical product,” said Bhakta. “Access to world-class characterization tools — normally out of reach for startups — lets us validate scale-up much faster. The START.nano community accelerates problem-solving, and the nanoBucks award is directly supporting the development of our next prototypes headed to pilot.”

    In addition to the grand prize, a 5,000-nanoBucks audience choice award went to Advanced Silicon Group, a startup developing a next-generation biosensor to improve testing in pharma and health tech.

    Now in its fifth year, START.nano has supported 40 companies spanning a diverse set of market areas: life sciences, clean tech, semiconductors, photonics, quantum, materials, and software. Fourteen START.nano companies have graduated from the program, evidence that START.nano is succeeding in its mission to help early-stage ventures advance from prototype to manufacturing.

    “I believe MIT.nano has a fantastic opportunity here,” said judge Davide Marini, PhD ’03, co-founder and CEO of Inkbit, “to create the leading incubator for hard-tech entrepreneurs worldwide.”

    START.nano accepts applications on a monthly basis. The program is made possible through the generous support of FEMSA.

  • Palladium filters could enable cheaper, more efficient generation of hydrogen fuel

    Palladium is one of the keys to jump-starting a hydrogen-based energy economy. The silvery metal is a natural gatekeeper against every gas except hydrogen, which it readily lets through. For its exceptional selectivity, palladium is considered one of the most effective materials for filtering gas mixtures to produce pure hydrogen.

    Today, palladium-based membranes are used at commercial scale to provide pure hydrogen for semiconductor manufacturing, food processing, and fertilizer production, among other applications in which the membranes operate at modest temperatures. If palladium membranes get much hotter than around 800 kelvins, they can break down.

    Now, MIT engineers have developed a new palladium membrane that remains resilient at much higher temperatures. Rather than being made as a continuous film, as most membranes are, the new design is made from palladium deposited as “plugs” in the pores of an underlying support material. At high temperatures, the snug-fitting plugs remain stable and continue separating out hydrogen, rather than degrading as a surface film would.

    The thermally stable design opens opportunities for membranes to be used in hydrogen-fuel-generating technologies such as compact steam methane reforming and ammonia cracking — technologies designed to operate at much higher temperatures to produce hydrogen for zero-carbon-emitting fuel and electricity.

    “With further work on scaling and validating performance under realistic industrial feeds, the design could represent a promising route toward practical membranes for high-temperature hydrogen production,” says Lohyun Kim PhD ’24, a former graduate student in MIT’s Department of Mechanical Engineering.

    Kim and his colleagues report details of the new membrane in a study appearing today in the journal Advanced Functional Materials. The study’s co-authors are Randall Field, director of research at the MIT Energy Initiative (MITEI); former MIT chemical engineering graduate student Chun Man Chow PhD ’23; Rohit Karnik, the Jameel Professor in the Department of Mechanical Engineering at MIT and director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS); and Aaron Persad, a former MIT research scientist in mechanical engineering who is now an assistant professor at the University of Maryland Eastern Shore.

    Compact future

    The team’s new design came out of a MITEI project related to fusion energy. Future fusion power plants, such as the one MIT spinout Commonwealth Fusion Systems is designing, will involve circulating the hydrogen isotopes deuterium and tritium at extremely high temperatures to produce energy from the isotopes’ fusing. The reactions inevitably produce other gases that must be separated, and the hydrogen isotopes will be recirculated into the main reactor for further fusion.

    Similar issues arise in a number of other processes for producing hydrogen, where gases must be separated and recirculated back into a reactor. Concepts for such recirculating systems would require first cooling the gas before it can pass through hydrogen-separating membranes — an expensive and energy-intensive step that would involve additional machinery and hardware.

    “One of the questions we were thinking about is: Can we develop membranes which could be as close to the reactor as possible, and operate at higher temperatures, so we don’t have to pull out the gas and cool it down first?” Karnik says. “It would enable more energy-efficient, and therefore cheaper and compact, fusion systems.”

    The researchers looked for ways to improve the temperature resistance of palladium membranes. Palladium is the most effective metal used today to separate hydrogen from a variety of gas mixtures. It naturally attracts hydrogen molecules (H2) to its surface, where the metal’s electrons interact with and weaken the molecules’ bonds, causing H2 to temporarily break apart into its constituent atoms. The individual atoms then diffuse through the metal and join back up on the other side as pure hydrogen.
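    This transport mechanism is commonly summarized by a standard textbook relation, stated here for context rather than taken from the team’s paper. Because hydrogen crosses the metal as atoms, the flux through a dense palladium layer of thickness L follows Sieverts’-law behavior when diffusion through the bulk is rate-limiting:

    ```latex
    J_{\mathrm{H_2}} = \frac{\Phi(T)}{L}\left(\sqrt{p_{\mathrm{feed}}} - \sqrt{p_{\mathrm{perm}}}\right),
    \qquad
    \Phi(T) = \Phi_0\, e^{-E_a/RT}
    ```

    Here p_feed and p_perm are the hydrogen partial pressures on the two sides of the membrane, and the permeability Φ(T) rises with temperature in Arrhenius fashion, which is one reason a heat-tolerant membrane is attractive: hotter palladium passes hydrogen faster.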
    Palladium is highly effective at permeating hydrogen, and only hydrogen, from streams of various gases. But conventional membranes typically can operate at temperatures only up to about 800 kelvins before the film starts to form holes or clump into droplets, allowing other gases to flow through.

    Plugging in

    Karnik, Kim, and their colleagues took a different design approach. They observed that at high temperatures, palladium starts to ball up. In engineering terms, the material is acting to reduce its surface energy: palladium, like most other materials and even water, will pull apart and form droplets with the smallest surface energy. The lower the surface energy, the more stable the material is against further heating.

    This gave the team an idea: if a supporting material’s pores could be “plugged” with deposits of palladium — each essentially already a droplet with the lowest surface energy — the tight quarters might substantially increase palladium’s heat tolerance while preserving the membrane’s selectivity for hydrogen.

    To test this idea, they fabricated small chip-sized membrane samples using a porous silica supporting layer (each pore measuring about half a micron wide), onto which they deposited a very thin layer of palladium. They applied techniques to essentially grow the palladium into the pores, then polished the surface to remove the palladium layer and leave palladium only inside the pores.

    They then placed samples in a custom-built apparatus in which they flowed hydrogen-containing gases of various mixtures and temperatures to test the membrane’s separation performance. The membranes remained stable and continued to separate hydrogen from other gases even after experiencing temperatures of up to 1,000 kelvins for over 100 hours — a significant improvement over conventional film-based membranes.

    “The use of palladium film membranes is generally limited to below around 800 kelvins, at which point they degrade,” Kim says. “Our plug design therefore extends palladium’s effective heat resilience by at least roughly 200 kelvins and maintains integrity far longer under extreme conditions.”

    These conditions are within the range of hydrogen-generating technologies such as steam methane reforming and ammonia cracking.

    Steam methane reforming is an established process that has required complex, energy-intensive systems to preprocess methane to a form from which pure hydrogen can be extracted. Such preprocessing steps could be replaced with a compact “membrane reactor,” through which methane gas would flow directly and the membrane inside would filter out pure hydrogen. Such reactors would significantly cut down the size, complexity, and cost of producing hydrogen from steam methane reforming, and Kim estimates a membrane would have to work reliably at temperatures of up to nearly 1,000 kelvins.
    The team’s new membrane could work well within such conditions.

    Ammonia cracking is another way to produce hydrogen, by “cracking,” or breaking apart, ammonia. Because ammonia is very stable in liquid form, scientists envision that it could be used as a carrier for hydrogen and safely transported to a hydrogen fuel station, where it would be fed into a membrane reactor that again pulls out hydrogen and pumps it directly into a fuel cell vehicle. Ammonia cracking is still largely in the pilot and demonstration stages, and Kim says any membrane in an ammonia cracking reactor would likely operate at temperatures of around 800 kelvins — within the range of the group’s new plug-based design.

    Karnik emphasizes that these results are just a start. Incorporating the membrane into working reactors will require further development and testing to ensure it remains reliable over much longer periods of time.

    “We showed that instead of making a film, if you make discretized nanostructures you can get much more thermally stable membranes,” Karnik says. “It provides a pathway for designing membranes for extreme temperatures, with the added possibility of using smaller amounts of expensive palladium, toward making hydrogen production more efficient and affordable. There is potential there.”

    This work was supported by Eni S.p.A. via the MIT Energy Initiative.

  • MIT’s work with Idaho National Laboratory advances America’s nuclear industry

    At the center of nuclear reactors across the United States, a new type of chromium-coated fuel is being used to make the reactors more efficient and more resistant to accidents. The fuel is one of many innovations to spring from the collaboration between researchers at MIT and the Idaho National Laboratory (INL) — a relationship that has altered the trajectory of the country’s nuclear industry.

    Amid renewed excitement around nuclear energy in America, MIT’s research community is working to further develop next-generation fuels, accelerate the deployment of small modular reactors (SMRs), and enable the first nuclear reactor in space.

    Researchers at MIT and INL have worked closely for decades, and the collaboration takes many forms, including joint research efforts, student and postdoc internships, and a standing agreement that lets INL employees spend extended periods on MIT’s campus conducting research and teaching classes. MIT is also a founding member of the Battelle Energy Alliance, which has managed the Idaho National Laboratory for the Department of Energy since 2005.

    The collaboration gives MIT’s community a chance to work on the biggest problems facing America’s nuclear industry while bolstering INL’s research infrastructure.

    “The Idaho National Laboratory is the lead lab for nuclear energy technology in the United States today — that’s why it’s essential that MIT works hand in hand with INL,” says Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering at MIT. “Countless MIT students and postdocs have interned at INL over the years, and a memorandum of understanding that strengthened the collaboration between MIT and INL in 2019 has been extended twice.”

    Ian Waitz, MIT’s vice president for research, adds: “The strong collaborative history between MIT and the Idaho National Laboratory enables us to jointly contribute practical technologies to enable the growth of clean, safe nuclear energy. It’s a clear example of how rigorous collaboration across sectors, and among the nation’s top research facilities, can advance U.S. economic prosperity, health, and well-being.”

    Research with impact

    Much of MIT’s joint research with INL involves tests and simulations of new nuclear materials, fuels, and instrumentation. One of the largest collaborations was part of a global push for more accident-tolerant fuels in the wake of the nuclear accident that followed the 2011 earthquake and tsunami in Fukushima, Japan.

    In a series of studies involving INL and members of the nuclear energy industry, MIT researchers helped identify and evaluate alloy materials that could be deployed in the near term to not only bolster safety but also offer higher densities of fuel.

    “These new alloys can withstand much more challenging conditions during abnormal occurrences without reacting chemically with steam, which could result in hydrogen explosions during accidents,” explains Buongiorno, who is also the director of science and technology at MIT’s Nuclear Reactor Laboratory and the director of MIT’s Center for Advanced Nuclear Energy Systems. “The fuels can take much more abuse without breaking apart in the reactor, resulting in a higher safety margin.”

    The fuels tested at MIT were eventually adopted by power plants across the U.S., starting with the Byron Clean Energy Center in Ogle County, Illinois.

    “We’re also developing new materials, fuels, and instrumentation,” Buongiorno says. “People don’t just come to MIT and say, ‘I have this idea, evaluate it for me.’ We collaborate with industry and national labs to develop the new ideas together, and then we put them to the test, reproducing the environment in which these materials and fuels would operate in commercial power reactors. That capability is quite unique.”

    Another major collaboration was led by Koroush Shirvan, MIT’s Atlantic Richfield Career Development Professor in Energy Studies. Shirvan’s team analyzed the costs associated with different reactor designs, eventually developing an open-source tool to help industry leaders evaluate the feasibility of different approaches.

    “The reason we’re not building a single nuclear reactor in the U.S. right now is cost and financial risk,” Shirvan says. “The projects have gone over budget by a factor of two, and their schedules have lengthened by a factor of 1.5, so we’ve been doing a lot of work assessing the risk drivers. There are also a lot of different types of reactors proposed, so we’ve looked at their cost potential as well, and how those costs change if you can mass-manufacture them.”

    Other INL-supported research of Shirvan’s involves exploring new manufacturing methods for nuclear fuels and testing materials for use in a nuclear reactor on the surface of the moon.

    “You want materials that are lightweight for these nuclear reactors because you have to send them to space, but there isn’t much data on how those light materials perform in nuclear environments,” Shirvan says.

    People and progress

    Every summer, MIT students at every level travel to Idaho to conduct research in INL labs as interns.

    “It’s an example of our students getting access to cutting-edge research facilities,” Shirvan says.

    There are also several joint research appointments between the institutions. One such appointment is held by Sacit Cetiner, a distinguished scientist at INL who also currently runs the MIT and INL Joint Center for Reactor Instrumentation and Sensor Physics (CRISP) at MIT’s Nuclear Reactor Laboratory.

    CRISP focuses its research on key technology areas in instrumentation and controls, which have long weighed on the bottom line of nuclear power generation.

    “For the current light-water reactor fleet, operations and maintenance expenditures constitute a sizeable fraction of unit electricity generation cost,” says Cetiner. “In order to make advanced reactors economically competitive, it’s much more reasonable to address anticipated operational issues during the design phase. One such critical technology area is remote and autonomous operations. Working directly with INL, which manages the projects for the design and testing of several advanced reactors under a number of federal programs, gives our students, faculty, and researchers opportunities to make a real impact.”

    The sharing of experts helps strengthen MIT and the nation’s nuclear workforce overall.

    “MIT has a crucial role to play in advancing the country’s nuclear industry, whether that’s testing and developing new technologies or assessing the economic feasibility of new nuclear designs,” Buongiorno says.

  • Confronting the AI/energy conundrum

    The explosive growth of AI-powered computing centers is creating an unprecedented surge in electricity demand that threatens to overwhelm power grids and derail climate goals. At the same time, artificial intelligence technologies could revolutionize energy systems, accelerating the transition to clean power.

    “We’re at a cusp of potentially gigantic change throughout the economy,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Spring Symposium, “AI and energy: Peril and promise,” held on May 13. The event brought together experts from industry, academia, and government to explore solutions to what Green described as both “local problems with electric supply and meeting our clean energy targets” while seeking to “reap the benefits of AI without some of the harms.” Data center energy demand, and the potential benefits of AI for the energy transition, are research priorities for MITEI.

    AI’s startling energy demands

    From the start, the symposium highlighted sobering statistics about AI’s appetite for electricity. After decades of flat electricity demand in the United States, computing centers now consume approximately 4 percent of the nation’s electricity. Although there is great uncertainty, some projections suggest this demand could rise to 12 to 15 percent by 2030, largely driven by artificial intelligence applications.

    Vijay Gadepally, senior scientist at MIT’s Lincoln Laboratory, emphasized the scale of AI’s consumption. “The power required for sustaining some of these large models is doubling almost every three months,” he noted. “A single ChatGPT conversation uses as much electricity as charging your phone, and generating an image consumes about a bottle of water for cooling.”

    Facilities requiring 50 to 100 megawatts of power are emerging rapidly across the United States and globally, driven by both casual and institutional research needs relying on large language models such as ChatGPT and Gemini. Gadepally cited congressional testimony by Sam Altman, CEO of OpenAI, highlighting how fundamental this relationship has become: “The cost of intelligence, the cost of AI, will converge to the cost of energy.”

    “The energy demands of AI are a significant challenge, but we also have an opportunity to harness these vast computational capabilities to contribute to climate change solutions,” said Evelyn Wang, MIT vice president for energy and climate and former director of the Advanced Research Projects Agency-Energy (ARPA-E) at the U.S. Department of Energy.

    Wang also noted that innovations developed for AI and data centers — such as efficiency, cooling technologies, and clean-power solutions — could have broad applications beyond computing facilities themselves.

    Strategies for clean energy solutions

    The symposium explored multiple pathways to address the AI-energy challenge. Some panelists presented models suggesting that while artificial intelligence may increase emissions in the short term, its optimization capabilities could enable substantial emissions reductions after 2030 through more efficient power systems and accelerated clean technology development.

    Research shows regional variations in the cost of powering computing centers with clean electricity, according to Emre Gençer, co-founder and CEO of Sesame Sustainability and former MITEI principal research scientist.
    Gençer’s analysis revealed that the central United States offers considerably lower costs, thanks to complementary solar and wind resources. However, achieving zero-emission power would require massive battery deployments — five to 10 times more than in moderate-carbon scenarios — driving costs two to three times higher.

    “If we want to do zero emissions with reliable power, we need technologies other than renewables and batteries, which will be too expensive,” Gençer said. He pointed to “long-duration storage technologies, small modular reactors, geothermal, or hybrid approaches” as necessary complements.

    Because of data center energy demand, there is renewed interest in nuclear power, noted Kathryn Biegel, manager of R&D and corporate strategy at Constellation Energy, adding that her company is restarting the reactor at the former Three Mile Island site, now called the “Crane Clean Energy Center,” to meet this demand. “The data center space has become a major, major priority for Constellation,” she said, emphasizing how data centers’ needs for both reliability and carbon-free electricity are reshaping the power industry.

    Can AI accelerate the energy transition?

    Artificial intelligence could dramatically improve power systems, according to Priya Donti, assistant professor and the Silverman Family Career Development Professor in MIT’s Department of Electrical Engineering and Computer Science and the Laboratory for Information and Decision Systems. She showcased how AI can accelerate power grid optimization by embedding physics-based constraints into neural networks, potentially solving complex power flow problems at “10 times, or even greater, speed compared to your traditional models.”
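    One generic way to embed grid physics into a learning model, in the spirit of what Donti described (this sketch is illustrative rather than her method, and the matrices and weights are invented), is to add a penalty for violating the power-flow equations to the training loss. A DC power flow version:

    ```python
    import numpy as np

    # Sketch of a physics-informed loss for DC power flow: a network predicts
    # bus voltage angles theta, and the nodal power-balance residual
    # B @ theta - p is penalized alongside the ordinary data loss, pushing
    # predictions toward physically consistent operating points.

    def physics_informed_loss(theta_pred, theta_label, B, p, lam=10.0):
        data_loss = np.mean((theta_pred - theta_label) ** 2)
        residual = B @ theta_pred - p           # power-balance mismatch per bus
        physics_loss = np.mean(residual ** 2)
        return data_loss + lam * physics_loss

    # Tiny three-bus example with an invented susceptance matrix and injections.
    B = np.array([[ 2.0, -1.0, -1.0],
                  [-1.0,  2.0, -1.0],
                  [-1.0, -1.0,  2.0]])
    p = np.array([0.5, -0.2, -0.3])             # injections sum to zero
    theta_label = np.linalg.lstsq(B, p, rcond=None)[0]
    theta_pred = theta_label + np.array([0.01, -0.02, 0.005])  # stand-in output
    print(physics_informed_loss(theta_pred, theta_label, B, p))
    ```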
    AI is already reducing carbon emissions, according to examples shared by Antonia Gawel, global director of sustainability and partnerships at Google. Google Maps’ fuel-efficient routing feature has “helped to prevent more than 2.9 million metric tons of GHG [greenhouse gas] emissions since launch, which is the equivalent of taking 650,000 fuel-based cars off the road for a year,” she said. Another Google research project uses artificial intelligence to help pilots avoid creating contrails, which account for about 1 percent of global warming impact.

    AI’s potential to speed materials discovery for power applications was highlighted by Rafael Gómez-Bombarelli, the Paul M. Cook Career Development Associate Professor in the MIT Department of Materials Science and Engineering. “AI-supervised models can be trained to go from structure to property,” he noted, enabling the development of materials crucial for both computing and efficiency.

    Securing growth with sustainability

    Throughout the symposium, participants grappled with balancing rapid AI deployment against environmental impacts. While AI training receives most of the attention, Dustin Demetriou, senior technical staff member in sustainability and data center innovation at IBM, quoted a World Economic Forum article suggesting that “80 percent of the environmental footprint is estimated to be due to inferencing.” Demetriou emphasized the need for efficiency across all artificial intelligence applications.

    Jevons’ paradox, where “efficiency gains tend to increase overall resource consumption rather than decrease it,” is another factor to consider, cautioned Emma Strubell, the Raj Reddy Assistant Professor in the Language Technologies Institute of Carnegie Mellon University’s School of Computer Science. Strubell advocated for viewing computing center electricity as a limited resource requiring thoughtful allocation across different applications.

    Several presenters discussed novel approaches for integrating renewable sources with existing grid infrastructure, including potential hybrid solutions that combine clean installations with existing natural gas plants that already have valuable grid connections in place. These approaches could provide substantial clean capacity across the United States at reasonable cost while minimizing reliability impacts.

    Navigating the AI-energy paradox

    The symposium highlighted MIT’s central role in developing solutions to the AI-electricity challenge.

    Green spoke of a new MITEI program on computing centers, power, and computation that will operate alongside the comprehensive spread of MIT Climate Project research. “We’re going to try to tackle a very complicated problem all the way from the power sources through the actual algorithms that deliver value to the customers — in a way that’s going to be acceptable to all the stakeholders and really meet all the needs,” Green said.

    Participants in the symposium were polled about priorities for MIT’s research by Randall Field, MITEI director of research. The real-time results ranked “data center and grid integration issues” as the top priority, followed by “AI for accelerated discovery of advanced materials for energy.”

    In addition, attendees revealed that most view AI’s potential regarding power as a “promise” rather than a “peril,” although a considerable portion remain uncertain about the ultimate impact. When asked about priorities in power supply for computing facilities, half of the respondents selected carbon intensity as their top concern, with reliability and cost following.

  • Q&A: The climate impact of generative AI

    Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

    Q: What trends are you seeing in terms of how generative AI is being used in computing?

    A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is input into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we’ve seen an explosion in the number of projects that need access to high-performance computing for generative AI. We’re also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations can seem to keep up.

    We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science. We can’t predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

    Q: What strategies is the LLSC using to mitigate this climate impact?

    A: We’re always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

    As one example, we’ve been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 percent to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.
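    Power caps of this kind can be scripted against NVIDIA’s management library. A minimal sketch using the pynvml bindings (the 250-watt target is an arbitrary example rather than the LLSC’s setting, and changing limits generally requires administrator privileges):

    ```python
    import pynvml

    # Cap each GPU's power draw, in the spirit of the LLSC experiment.
    # The 250 W target is illustrative; setting a limit usually needs root.

    pynvml.nvmlInit()
    for i in range(pynvml.nvmlDeviceGetCount()):
        handle = pynvml.nvmlDeviceGetHandleByIndex(i)
        lo, hi = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)  # mW
        target_mw = min(max(250_000, lo), hi)  # clamp the cap into the valid range
        pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)
        watts = pynvml.nvmlDeviceGetPowerUsage(handle) / 1000.0
        print(f"GPU {i}: capped at {target_mw / 1000:.0f} W, drawing {watts:.0f} W")
    pynvml.nvmlShutdown()
    ```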
    Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

    We also realized that a lot of the energy spent on computing is often wasted, like a water leak that increases your bill without any benefit to your home. We developed new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

    Q: What’s an example of a project you’ve done that reduces the energy use of a generative AI program?

    A: We recently built a climate-aware computer vision tool. Computer vision is a domain focused on applying AI to images: differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

    In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or to a much higher-fidelity version of the model in times of low carbon intensity.
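    A minimal sketch of that switching logic (the model names, threshold, and telemetry function here are hypothetical stand-ins, not the LLSC implementation):

    ```python
    # Carbon-aware model selection: route work to a smaller model when the
    # grid is carbon-intensive, and to a higher-fidelity model when it is
    # clean. Names and threshold are invented for illustration.

    THRESHOLD_GCO2_PER_KWH = 400  # hypothetical switching point

    def get_grid_carbon_intensity() -> float:
        """Stand-in for a real-time carbon telemetry feed."""
        return 520.0  # pretend the local grid is dirty right now

    def pick_model(small_model: str, large_model: str) -> str:
        if get_grid_carbon_intensity() > THRESHOLD_GCO2_PER_KWH:
            return small_model   # fewer parameters, less energy per inference
        return large_model       # higher fidelity when electricity is clean

    print("routing inference to:", pick_model("resnet18", "resnet152"))
    ```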

    By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks, such as text summarization, and found the same results. Interestingly, the performance sometimes improved after using our technique!

    Q: What can we do as consumers of generative AI to help mitigate its climate impact?

    A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight’s carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

    We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms. People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations.

    There are many cases where customers would be happy to make a trade-off if they knew the trade-off’s impact.

    Q: What do you see for the future?

    A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We’re doing a lot of work here at Lincoln Laboratory, but it’s only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide “energy audits” to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

    If you’re interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.

    Video: MIT Lincoln Laboratory

  • The role of modeling in the energy transition

    Joseph F. DeCarolis, administrator of the U.S. Energy Information Administration (EIA), has one overarching piece of advice for anyone poring over long-term energy projections.

    “Whatever you do, don’t start believing the numbers,” DeCarolis said at the MIT Energy Initiative (MITEI) Fall Colloquium. “There’s a tendency when you sit in front of the computer and you’re watching the model spit out numbers at you … that you’ll really start to believe those numbers with high precision. Don’t fall for it. Always remain skeptical.”

    The event was part of MITEI’s new speaker series, MITEI Presents: Advancing the Energy Transition, which connects the MIT community with the energy experts and leaders who are working on scientific, technological, and policy solutions that are urgently needed to accelerate the energy transition.

    The point of DeCarolis’s talk, titled “Stay humble and prepare for surprises: Lessons for the energy transition,” was not that energy models are unimportant. On the contrary, DeCarolis said, energy models give stakeholders a framework for considering present-day decisions in the context of potential future scenarios. However, he repeatedly stressed the importance of accounting for uncertainty, and of not treating these projections as “crystal balls.”

    “We can use models to help inform decision strategies,” DeCarolis said. “We know there’s a bunch of future uncertainty. We don’t know what’s going to happen, but we can incorporate that uncertainty into our model and help come up with a path forward.”

    Dialogue, not forecasts

    EIA is the statistical and analytic agency within the U.S. Department of Energy, with a mission to collect, analyze, and disseminate independent and impartial energy information to help stakeholders make better-informed decisions. Although EIA analyzes the impacts of energy policies, the agency does not make or advise on policy itself. DeCarolis, previously a professor and University Faculty Scholar in the Department of Civil, Construction, and Environmental Engineering at North Carolina State University, noted that EIA does not need to seek approval from anyone else in the federal government before publishing its data and reports. “That independence is very important to us, because it means that we can focus on doing our work and providing the best information we possibly can,” he said.

    Among the many reports produced by EIA is the agency’s Annual Energy Outlook (AEO), which projects U.S. energy production, consumption, and prices. Every other year, the agency also produces the AEO Retrospective, which shows the relationship between past projections and actual energy indicators.

    “The first question you might ask is, ‘Should we use these models to produce a forecast?’” DeCarolis said. “The answer for me to that question is: No, we should not do that. When models are used to produce forecasts, the results are generally pretty dismal.”

    DeCarolis pointed to wildly inaccurate past projections about the proliferation of nuclear energy in the United States as an example of the problems inherent in forecasting. However, he noted, there are “still lots of really valuable uses” for energy models. Rather than using them to predict future energy consumption and prices, DeCarolis said, stakeholders should use models to inform their own thinking.

    “[Models] can simply be an aid in helping us think and hypothesize about the future of energy,” DeCarolis said. “They can help us create a dialogue among different stakeholders on complex issues.
    If we’re thinking about something like the energy transition, and we want to start a dialogue, there has to be some basis for that dialogue. If you have a systematic representation of the energy system that you can advance into the future, we can start to have a debate about the model and what it means. We can also identify key sources of uncertainty and knowledge gaps.”

    Modeling uncertainty

    The key to working with energy models, DeCarolis said, is not to try to eliminate uncertainty, but rather to account for it. One way to better understand uncertainty, he noted, is to look at past projections and consider how they ended up differing from real-world results. DeCarolis pointed to two “surprises” over the past several decades: the exponential growth of shale oil and natural gas production (which limited coal’s share of the energy market and therefore reduced carbon emissions), and the rapid rise of wind and solar energy. In both cases, market conditions changed far more quickly than energy modelers anticipated, leading to inaccurate projections.

    “For all those reasons, we ended up with [projected] CO2 [carbon dioxide] emissions that were quite high compared to actual,” DeCarolis said. “We’re a statistical agency, so we’re really looking carefully at the data, but it can take some time to identify the signal through the noise.”

    Although EIA does not produce forecasts in the AEO, people have sometimes interpreted the reference case in the agency’s reports as predictions. In an effort to illustrate the unpredictability of future outcomes, in the 2023 edition of the AEO the agency added “cones of uncertainty” to its projection of energy-related carbon dioxide emissions, with ranges of outcomes based on the difference between past projections and actual results. One cone captures 50 percent of historical projection errors, while another represents 95 percent of historical errors.

    “They capture whatever bias there is in our projections,” DeCarolis said of the uncertainty cones. “It’s being captured because we’re comparing actual [emissions] to projections. The weakness of this, though, is: who’s to say that those historical projection errors apply to the future? We don’t know that, but I still think that there’s something useful to be learned from this exercise.”
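    The construction DeCarolis describes can be reproduced in a few lines: collect the errors of past projections at a given horizon, take symmetric percentile bands, and lay them over a new projection. A simplified sketch with invented numbers (EIA’s actual methodology lives in the AEO documentation):

    ```python
    import numpy as np

    # Simplified "cones of uncertainty": wrap a new projection in bands derived
    # from historical projection errors. The error sample and the projected
    # value below are invented for illustration.

    # Relative errors of past projections at one horizon (actual/projected - 1).
    past_errors = np.array([-0.18, -0.11, -0.07, -0.02, 0.01, 0.04, 0.09, 0.15])

    projection = 4500.0  # new projected value, e.g., Mt CO2 (made up)

    inner = np.percentile(past_errors, [25.0, 75.0])  # spans 50% of past errors
    outer = np.percentile(past_errors, [2.5, 97.5])   # spans 95% of past errors

    print("50% cone:", projection * (1 + inner))
    print("95% cone:", projection * (1 + outer))
    ```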
    The future of energy modeling

    Looking ahead, DeCarolis said, there is a “laundry list of things that keep me up at night as a modeler.” These include the impacts of climate change; how those impacts will affect demand for renewable energy; how quickly industry and government will overcome obstacles to building out clean energy infrastructure and supply chains; technological innovation; and increased energy demand from data centers running compute-intensive workloads.

    “What about enhanced geothermal? Fusion? Space-based solar power?” DeCarolis asked. “Should those be in the model? What sorts of technology breakthroughs are we missing? And then, of course, there are the unknown unknowns — the things that I can’t conceive of to put on this list, but are probably going to happen.”

    In addition to capturing the fullest range of outcomes, DeCarolis said, EIA wants to be flexible, nimble, transparent, and accessible — creating reports that can easily incorporate new model features and produce timely analyses. To that end, the agency has undertaken two new initiatives. First, the 2025 AEO will use a revamped version of the National Energy Modeling System that includes modules for hydrogen production and pricing, carbon management, and hydrocarbon supply. Second, an effort called Project BlueSky aims to develop the agency’s next-generation energy system model, which DeCarolis said will be modular and open source.

    DeCarolis noted that the energy system is both highly complex and rapidly evolving, and he warned that “mental shortcuts” and the fear of being wrong can lead modelers to ignore possible future developments. “We have to remain humble and intellectually honest about what we know,” DeCarolis said. “That way, we can provide decision-makers with an honest assessment of what we think could happen in the future.”