More stories

  • Taking the “training wheels” off clean energy

    Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.

    “What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.

    The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.

    Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.

    “We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”

    A shifting clean energy landscape

    During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure. “Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.

    Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.

    Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.

    During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said. “That’s where the federal government is going to have to play a role.”

    David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. There’s a realization that the effects of climate change are already upon us.”

    “Everything on the table”

    The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.

    Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.

    By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said.

    Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs.

    “We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.

    One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.

    “I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”

    The road ahead

    Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and said that academic institutions need to focus more on how to scale and spur investments in green energy.

    “The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift. And fortunately, it’s starting to.”

    During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. “The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”

    Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.

    World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”

  • Surprise discovery could lead to improved catalysts for industrial reactions

    The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.

    A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds. Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.

    There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.

    “What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”

    He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”

    The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.

    While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.

    The reaction that produces vinyl acetate requires something to activate the oxygen molecules that are one of the constituents of the reaction, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it’s the surface that ends up doing the activation of the oxygen.

    They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.

    The team borrowed techniques traditionally used in corrosion research to study the process. They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”

    By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.

    The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.

    “Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says. Since this process has been worked on for so long, these findings may not necessarily lead to improvements in this specific process of making vinyl acetate, but they do provide a better understanding of why the materials work as they do, and could lead to improvements in other catalytic processes.

    Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.

    Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”

    This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. … [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”

    The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation; the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.

  • Collaboration between MIT and GE Vernova aims to develop and scale sustainable energy systems

    MIT and GE Vernova today announced the creation of the MIT-GE Vernova Energy and Climate Alliance to help develop and scale sustainable energy systems across the globe.

    The alliance launches a five-year collaboration between MIT and GE Vernova, a global energy company that spun off from General Electric’s energy business in 2024. The endeavor will encompass research, education, and career opportunities for students, faculty, and staff across MIT’s five schools and the MIT Schwarzman College of Computing. It will focus on three main themes: decarbonization, electrification, and renewables acceleration.

    “This alliance will provide MIT students and researchers with a tremendous opportunity to work on energy solutions that could have real-world impact,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer and dean of the School of Engineering. “GE Vernova brings domain knowledge and expertise deploying these at scale. When our researchers develop new innovative technologies, GE Vernova is strongly positioned to bring them to global markets.”

    Through the alliance, GE Vernova is sponsoring research projects at MIT and providing philanthropic support for MIT research fellowships. The company will also engage with MIT’s community through participation in corporate membership programs and professional education.

    “It’s a privilege to combine forces with MIT’s world-class faculty and students as we work together to realize an optimistic, innovation-driven approach to solving the world’s most pressing challenges,” says Scott Strazik, GE Vernova CEO. “Through this alliance, we are proud to be able to help drive new technologies while at the same time inspire future leaders to play a meaningful role in deploying technology to improve the planet at companies like GE Vernova.”

    “This alliance embodies the spirit of the MIT Climate Project — combining cutting-edge research, a shared drive to tackle today’s toughest energy challenges, and a deep sense of optimism about what we can achieve together,” says Sally Kornbluth, president of MIT. “With the combined strengths of MIT and GE Vernova, we have a unique opportunity to make transformative progress in the flagship areas of electrification, decarbonization, and renewables acceleration.”

    The alliance, comprising a $50 million commitment, will operate within MIT’s Office of Innovation and Strategy. It will fund approximately 12 annual research projects relating to the three themes, as well as three master’s student projects in MIT’s Technology and Policy Program. The research projects will address challenges like developing and storing clean energy, as well as the creation of robust system architectures that help sustainable energy sources like solar, wind, advanced nuclear reactors, green hydrogen, and more compete with carbon-emitting sources.

    The projects will be selected by a joint steering committee composed of representatives from MIT and GE Vernova, following an annual Institute-wide call for proposals.

    The collaboration will also create approximately eight endowed GE Vernova research fellowships for MIT students, to be selected by faculty and beginning in the fall. There will also be 10 student internships that will span GE Vernova’s global operations, and GE Vernova will also sponsor programming through MIT’s New Engineering Education Transformation (NEET), which equips students with career-oriented experiential opportunities. Additionally, the alliance will create professional education programming for GE Vernova employees.

    “The internships and fellowships will be designed to bring students into our ecosystem,” says GE Vernova Chief Corporate Affairs Officer Roger Martella. “Students will walk our factory floor, come to our labs, be a part of our management teams, and see how we operate as business leaders. They’ll get a sense for how what they’re learning in the classroom is being applied in the real world.”

    Philanthropic support from GE Vernova will also support projects in MIT’s Human Insight Collaborative (MITHIC), which launched last fall to elevate human-centered research and teaching. The projects will allow faculty to explore how areas like energy and cybersecurity influence human behavior and experiences.

    In connection with the alliance, GE Vernova is expected to join several MIT consortia and membership programs, helping foster collaborations and dialogue between industry experts and researchers and educators across campus.

    With operations across more than 100 countries, GE Vernova designs, manufactures, and services technologies to generate, transfer, and store electricity with a mission to decarbonize the world. The company is headquartered in Kendall Square, right down the road from MIT, which its leaders say is not a coincidence.

    “We’re really good at taking proven technologies and commercializing them and scaling them up through our labs,” Martella says. “MIT excels at coming up with those ideas and being a sort of time machine that thinks outside the box to create the future. That’s why this is such a great fit: We both have a commitment to research, innovation, and technology.”

    The alliance is the latest in MIT’s rapidly growing portfolio of research and innovation initiatives around sustainable energy systems, which also includes the Climate Project at MIT. Separate from, but complementary to, the MIT-GE Vernova Alliance, the Climate Project is a campus-wide effort to develop technological, behavioral, and policy solutions to some of the toughest problems impeding an effective global climate response.

  • Developing materials for stellar performance in fusion power plants

    When Zoe Fisher was in fourth grade, her art teacher asked her to draw her vision of a dream job on paper. At the time, those goals changed like the flavor of the week in an ice cream shop — “zookeeper” featured prominently for a while — but Zoe immediately knew what she wanted to put down: a mad scientist.

    When Fisher stumbled upon the drawing in her parents’ Chicago home recently, it felt serendipitous because, by all measures, she has realized that childhood dream. The second-year doctoral student at MIT’s Department of Nuclear Science and Engineering (NSE) is studying materials for fusion power plants at the Plasma Science and Fusion Center (PSFC) under the advisement of Michael Short, associate professor at NSE. Dennis Whyte, Hitachi America Professor of Engineering at NSE, serves as co-advisor.

    On track to an MIT education

    Growing up in Chicago, Fisher had heard her parents remarking on her reasoning abilities. When she was barely a preschooler she argued that she couldn’t have been found in a purple speckled egg, as her parents claimed.

    Fisher didn’t realize just how much she had gravitated toward science until a high school physics teacher encouraged her to apply to MIT. Passionate about both the arts and sciences, she initially worried that pursuing science would be very rigid, without room for creativity. But she knows now that exploring solutions to problems requires plenty of creative thinking.

    It was a visit to MIT through the Weekend Immersion in Science and Engineering (WISE) that truly opened her eyes to the potential of an MIT education. “It just seemed like the undergraduate experience here is where you can be very unapologetically yourself. There’s no fronting something you don’t want to be like. There’s so much authenticity compared to most other colleges I looked at,” Fisher says. Once admitted, Campus Preview Weekend confirmed that she belonged. “We got to be silly and weird — a version of the Mafia game was a hit — and I was like, ‘These are my people,’” Fisher laughs.

    Pursuing fusion at NSE

    Before she officially started as a first-year in 2018, Fisher enrolled in the Freshman Pre-Orientation Program (FPOP), which begins a week before orientation. Each FPOP zooms into one field. “I’d applied to the nuclear one simply because it sounded cool and I didn’t know anything about it,” Fisher says. She was intrigued right away. “They really got me with that ‘star in a bottle’ line,” she laughs. (The quest for commercial fusion is to create the energy equivalent of a star in a bottle.)

    Excited by a talk by Zachary Hartwig, Robert N. Noyce Career Development Professor at NSE, Fisher asked if she could work on fusion as an undergraduate as part of an Undergraduate Research Opportunities Program (UROP) project. She started with modeling solders for power plants and was hooked. When Fisher requested more experimental work, Hartwig put her in touch with Research Scientist David Fischer at the Plasma Science and Fusion Center (PSFC). Fisher moved on to explore superconductors, which eventually morphed into research for her master’s thesis.

    For her doctoral research, Fisher is extending her master’s work to explore defects in ceramics, specifically in alumina (aluminum oxide). Sapphire coatings are the single-crystal equivalent of alumina, an insulator being explored for use in fusion power plants. “I eventually want to figure out what types of charge defects form in ceramics during radiation damage so we can ultimately engineer radiation-resistant sapphire,” Fisher says.

    When you introduce a material in a fusion power plant, stray high-energy neutrons born from the plasma can collide and fundamentally reorder the lattice, which is likely to change a range of thermal, electrical, and structural properties. “Think of a scaffolding outside a building, with each one of those joints as a different atom that holds your material in place. If you go in and you pull a joint out, there’s a chance that you pulled out a joint that wasn’t structurally sound, in which case everything would be fine. But there’s also a chance that you pull a joint out and everything alters. And [such unpredictability] is a problem,” Fisher says. “We need to be able to account for exactly how these neutrons are going to alter the lattice property,” she adds, and it’s one of the topics her research explores.

    The studies, in turn, can function as a jumping-off point for irradiating superconductors. The goals are two-fold: “I want to figure out how I can make an industry-usable ceramic you can use to insulate the inside of a fusion power plant, and then also figure out if I can take this information that I’m getting with ceramics and make it superconductor-relevant,” Fisher says. “Superconductors are the electromagnets we will use to contain the plasma inside fusion power plants. However, they prove pretty difficult to study. Since they are also ceramic, you can draw a lot of parallels between alumina and yttrium barium copper oxide (YBCO), the specific superconductor we use,” she adds. Fisher is also excited about the many experiments she performs using a particle accelerator, one of which involves measuring exactly how surface thermal properties change during radiation.

    Sailing new paths

    It’s not just her research that Fisher loves. As an undergrad, and during her master’s, she was on the varsity sailing team. “I worked my way into sailing with literal Olympians, I did not see that coming,” she says. Fisher participates in Chicago’s Race to Mackinac and the Melges 15 Series every chance she gets. Of all the types of boats she has sailed, she prefers dinghy sailing the most. “It’s more physical, you have to throw yourself around a lot and there’s this immediate cause and effect, which I like,” Fisher says. She also teaches sailing lessons in the summer at MIT’s Sailing Pavilion — you can find her on a small motorboat, issuing orders through a speaker.

    Teaching has figured prominently throughout Fisher’s time at MIT. Through MISTI, Fisher taught high school classes in Germany and, in her senior year, a radiation and materials class in Armenia. She was delighted by the food and culture in Armenia and by how excited people were to learn new ideas. Her love of teaching continues, as she has reached out to high schools in the Boston area. “I like talking to groups and getting them excited about fusion, or even maybe just the concept of attending graduate school,” Fisher says, adding that teaching the ropes of an experiment one-on-one is “one of the most rewarding things.”

    She also learned the value of resilience and quick thinking on various other MISTI trips. Despite her love of travel, Fisher has had a few harrowing experiences with tough situations and plans falling through at the last minute. It’s when she tells herself, “Well, the only thing that you’re gonna do is you’re gonna keep doing what you wanted to do.”

    That eyes-on-the-prize focus has stood Fisher in good stead, and continues to serve her well in her research today.

  • Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience

    There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.In response to a cyber attack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate, store, or tune their consumption of power. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers. 
The subscribers could then be compensated through the market, depending on their participation.The team illustrated this new framework through a number of grid attack scenarios, in which they considered failures at different levels of a power grid, from various sources such as a cyber attack or a natural disaster. Applying their algorithm, they showed that various networks of grid-edge devices were able to dissolve the various attacks.The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.“All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from multiple institutions including the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.Power boostThe team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.“These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says. 
“How do you make up for the shortfall?”The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.“There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyber attacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.Attack modeIn their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software that connect to the internet.For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices. The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid. 
Such a network would be managed by an operator who would be able to communicate with operators of other nearby microgrids. If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.

The team tested the algorithm on a number of scenarios, such as a cyber attack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.

“In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”

In every scenario they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that putting such a network of grid-edge devices in place will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.

“This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”

This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.
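The device-selection step Nair describes (deciding which houses inject extra power or curtail demand to cover a shortfall) can be sketched as a simple greedy allocation. This is an illustrative stand-in, not the published EUREICA algorithm; the device names, capacities, and trust flags below are all hypothetical.

```python
# Illustrative sketch only: a greedy stand-in for the kind of decision the
# EUREICA framework makes. Device names, capacities, and the trust flag are
# hypothetical; the published algorithm is more sophisticated.

def allocate_shortfall(devices, shortfall_kw):
    """Pick trusted grid-edge devices to cover a power shortfall.

    devices: list of dicts with keys 'name', 'flex_kw' (power the device can
    inject or curtail), and 'trusted' (whether the operator deems it safe).
    Returns (assignments, unmet shortfall).
    """
    assignments = {}
    remaining = shortfall_kw
    # Prefer the largest flexible capacities first so few devices are disturbed.
    for dev in sorted((d for d in devices if d["trusted"]),
                      key=lambda d: d["flex_kw"], reverse=True):
        if remaining <= 0:
            break
        share = min(dev["flex_kw"], remaining)
        assignments[dev["name"]] = share
        remaining -= share
    return assignments, max(remaining, 0.0)

homes = [
    {"name": "EV charger A", "flex_kw": 7.0, "trusted": True},
    {"name": "Battery B", "flex_kw": 5.0, "trusted": True},
    {"name": "HVAC C", "flex_kw": 2.0, "trusted": False},  # compromised, so excluded
    {"name": "Rooftop PV D", "flex_kw": 4.0, "trusted": True},
]
plan, unmet = allocate_shortfall(homes, shortfall_kw=10.0)
print(plan, unmet)  # EV charger A covers 7 kW, Battery B the remaining 3 kW
```

A real deployment would also weigh comfort constraints, compensation through the local market, and network limits, but the core idea (trusted devices each doing "their little bit") is the same.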


    Unlocking the secrets of fusion’s core with AI-enhanced simulations

Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.

Howard is a member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. The group’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure their accuracy, these models are continuously validated using data from previous experiments, keeping the simulations grounded in reality.

In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on. 
He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.

The biggest and best of what’s never been built

Forty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power, and a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly powerful simulations like the modeling framework Howard used.

In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by his collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations of how plasma behaves at different locations within a fusion device.

The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms. 
This predict-first approach allows us to create more efficient plasmas in a device like ITER.”

After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.”

Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.

“Just dropped in to see what condition my condition was in”

Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”

The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It took only three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power. 
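For context, fusion gain Q is simply fusion power out divided by external heating power in, so the baseline figures quoted in this story (500 megawatts of fusion power at 10 times the input) and the alternate scenario's roughly halved input can be checked with a line of arithmetic. The alternate-scenario heating value below is an illustrative reading of "about half," not a published number.

```python
# Fusion gain Q = fusion power out / external heating power in.
# Baseline: 500 MW out at Q = 10 implies 50 MW of external heating.
# Alternate scenario (illustrative): nearly the same output at about half
# the input, i.e. roughly 25 MW of heating.

def fusion_gain(p_fusion_mw, p_heating_mw):
    """Ratio of fusion power produced to external heating power absorbed."""
    return p_fusion_mw / p_heating_mw

baseline_q = fusion_gain(500, 50)   # ITER baseline scenario
alternate_q = fusion_gain(500, 25)  # illustrative "half the input" case
print(baseline_q, alternate_q)      # 10.0 20.0
```

Halving the input at fixed output would roughly double the gain, which is why the alternate scenario matters for device efficiency.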
The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t overly affected by less power input; less power input equals more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.

Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.”
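The surrogate workflow described in this story (fit a cheap model to a few expensive runs, find where it disagrees most, and add an expensive run there) can be sketched with a toy one-dimensional problem. Everything below is illustrative: the "expensive" function stands in for a CGYRO run, and the piecewise-linear surrogate is far simpler than the machine-learned models PORTALS builds.

```python
import bisect
import math

def expensive_model(x):
    """Stand-in for a costly high-fidelity simulation (e.g., one CGYRO run)."""
    return math.sin(3 * x) + 0.5 * x

def surrogate(samples, x):
    """Cheap piecewise-linear surrogate built from sorted (x, y) sample pairs."""
    xs, ys = zip(*samples)
    i = bisect.bisect_left(xs, x)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    t = (x - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def refine(samples, grid, tol=0.05, max_iters=20):
    """Add expensive runs where the surrogate errs most, until within tol.

    For this toy check we compare against the true function everywhere; a
    real workflow would rely on error estimates, since the expensive model
    cannot be evaluated on the whole grid.
    """
    samples = sorted(samples)
    for _ in range(max_iters):
        errs = [(abs(surrogate(samples, x) - expensive_model(x)), x) for x in grid]
        worst_err, worst_x = max(errs)
        if worst_err < tol:
            break
        bisect.insort(samples, (worst_x, expensive_model(worst_x)))
    return samples

grid = [i / 50 for i in range(51)]  # where we want the surrogate to be accurate
initial = [(x, expensive_model(x)) for x in (0.0, 0.5, 1.0)]
trained = refine(initial, grid)
```

Once trained, the surrogate answers queries at negligible cost, which is the property that let only 17 CGYRO iterations cover both the baseline and the alternate scenario.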


    How to make small modular reactors more cost-effective

When Youyeon Choi was in high school, she discovered she really liked “thinking in geometry.” The shapes, the dimensions … she was into all of it. Today, geometry plays a prominent role in her doctoral work under the guidance of Professor Koroush Shirvan, as she explores ways to increase the competitiveness of small modular reactors (SMRs).

Central to her thesis is metallic nuclear fuel in a helical cruciform shape, which increases surface area and lowers heat flux compared to the traditional cylindrical equivalent.

A childhood in a prominent nuclear energy country

Her passion for geometry notwithstanding, Choi admits she was not “really into studying” in middle school. But that changed when she started excelling in technical subjects in her high school years. And because it was the natural sciences that first caught Choi’s eye, she assumed she would major in the subject when she went to university.

This focus, too, would change. Growing up in Seoul, Choi was becoming increasingly aware of the critical role nuclear energy played in meeting her native country’s energy needs. Twenty-six reactors provide nearly a third of South Korea’s electricity, according to the World Nuclear Association, and the country is one of the world’s most prominent nuclear energy entities.

In such an ecosystem, Choi understood the stakes at play, especially with electricity-guzzling technologies such as AI and electric vehicles on the rise. Her father also discussed energy-related topics with her when she was in high school. Being soaked in that atmosphere eventually led Choi to nuclear engineering.


Early work in South Korea

Excelling in high school math and science, Choi was a shoo-in for college at Seoul National University. Initially intent on studying nuclear fusion, Choi switched to fission because she saw that the path to fusion was more convoluted and still in the early stages of exploration. Choi went on to complete her bachelor’s and master’s degrees in nuclear engineering from the university. As part of her master’s thesis, she worked on a multi-physics modeling project involving high-fidelity simulations of reactor physics and thermal hydraulics to analyze reactor cores.

South Korea exports its nuclear know-how widely, so work in the field can be immensely rewarding. Indeed, after graduate school, Choi moved to Daejeon, which has the moniker “Science City.” As an intern at the Korea Atomic Energy Research Institute (KAERI), she conducted experimental studies on the passive safety systems of nuclear reactors. Choi then moved to the Korea Institute of Nuclear Nonproliferation and Control, where she worked as a researcher developing nuclear security programs for other countries. Given South Korea’s dominance in the field, other countries would tap its expertise to develop their own nuclear energy programs. The focus was on international training programs, an arm of which involved cybersecurity and physical protection.

While the work was impactful, Choi found she missed the modeling work she had done as part of her master’s thesis. Looking to return to technical research, she applied to the MIT Department of Nuclear Science and Engineering (NSE). 
“MIT has the best nuclear engineering program in the States, and maybe even the world,” Choi says, explaining her decision to enroll as a doctoral student.

Innovative research at MIT

At NSE, Choi is working to make SMRs more price-competitive compared to traditional nuclear power plants. Due to their smaller size, SMRs can serve areas where larger reactors might not work, but they are more expensive. One way to address costs is to squeeze more electricity out of a unit of fuel — to increase the power density. Choi is doing so by replacing the traditional cylindrical uranium dioxide ceramic fuel with a metal fuel in a helical cruciform shape. Such a replacement potentially offers twin advantages: the metal fuel’s high conductivity means it can operate even more safely at lower temperatures, and the twisted shape gives more surface area and lower heat flux. The net result is more electricity for the same volume.

The project receives funding from a collaboration between Lightbridge Corp., which is exploring how advanced fuel technologies can improve the performance of water-cooled SMRs, and the U.S. Department of Energy Nuclear Energy University Program.

With SMR efficiencies in mind, Choi is indulging her love of multi-physics modeling, focusing on reactor physics, thermal hydraulics, and fuel performance simulation. “The goal of this modeling and simulation is to see if we can really use this fuel in the SMR,” Choi says. “I’m really enjoying doing the simulations because the geometry is really hard to model. Because the shape is twisted, there’s no symmetry at all,” she says. Always up for a challenge, Choi learned the various aspects of physics and a variety of computational tools, including Monte Carlo codes for reactor physics.

Being at MIT has a whole roster of advantages, Choi says, and she especially appreciates the respect researchers have for each other. 
She appreciates being able to discuss projects with Shirvan, and his focus on practical applications of research. At the same time, Choi appreciates the “exotic” nature of her project. “Even assessing if this SMR fuel is at all feasible is really hard, but I think it’s all possible because it’s MIT and my PI [principal investigator] is really invested in innovation,” she says.

It’s an exciting time to be in nuclear engineering, Choi says. She serves as one of the board members of the student section of the American Nuclear Society and is an NSE representative of the Graduate Student Council for the 2024-25 academic year.

Choi is excited about the global momentum toward nuclear power, as more countries explore the energy source and try to build more nuclear power plants on the path to decarbonization. “I really do believe nuclear energy is going to be a leading carbon-free energy. It’s very important for our collective futures,” Choi says.
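The geometric advantage Choi describes can be seen with back-of-the-envelope arithmetic: for a fixed linear power, average surface heat flux is inversely proportional to the rod's wetted perimeter, so a cross-section with more perimeter per unit length runs at a lower flux. The numbers below are hypothetical, chosen only to show the scaling; they are not Lightbridge or reactor design values.

```python
import math

# Back-of-the-envelope illustration of why a cruciform fuel rod runs cooler
# than a cylindrical one at the same power: average surface heat flux
# (W/m^2) = linear heat rate (W/m) / wetted perimeter (m).
# All numbers are hypothetical, for scaling only.

def avg_heat_flux(linear_power_w_per_m, perimeter_m):
    """Average surface heat flux for a rod with the given wetted perimeter."""
    return linear_power_w_per_m / perimeter_m

linear_power = 20_000.0         # W/m, hypothetical linear heat rate
radius = 0.005                  # m, hypothetical cylindrical rod radius
cyl_perimeter = 2 * math.pi * radius

# Suppose the cruciform cross-section has 40 percent more perimeter for the
# same cross-sectional area (an assumed figure for illustration).
cruciform_perimeter = 1.4 * cyl_perimeter

q_cyl = avg_heat_flux(linear_power, cyl_perimeter)
q_cru = avg_heat_flux(linear_power, cruciform_perimeter)
print(f"cylinder: {q_cyl:.3e} W/m^2, cruciform: {q_cru:.3e} W/m^2")
```

The flux drops by exactly the perimeter ratio, which is the "more surface area, lower heat flux" trade the helical cruciform shape exploits.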


    Toward sustainable decarbonization of aviation in Latin America

According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal to keep global warming below 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal to achieve net-zero carbon emissions by 2050. Which raises the question: Are there technologically feasible and economically viable strategies to reach that goal within the next 25 years?

To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050 and thereby double today’s aviation-related emissions in the region.

Chief among those options is the development and deployment of sustainable aviation fuel (SAF). Currently produced from low- and zero-carbon feedstocks, including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, SAF has the potential to perform just as well as petroleum-based jet fuel with as little as 20 percent of its carbon footprint.

Focused on Brazil, Chile, Colombia, Ecuador, Mexico, and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely impact fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region reach decarbonization targets. 
The team’s findings appear in a CS3 Special Report.

SAF emissions, costs, and sources

Under an ambitious emissions mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected that aviation emissions would be reduced by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.

As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect operating costs of the aviation sector and overall aviation demand unless strategies to manage price increases are implemented.

Under the 1.5 C scenario, the total cumulative capital investment required to build new SAF-producing plants between 2025 and 2050 was estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil). 
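Using the figures quoted above ($0.70 per liter for conventional jet fuel, $1.11 to $2.86 per liter for SAF, and a 65 percent SAF share by 2050), a volume-weighted blend gives a quick sense of the possible price impact. The blending arithmetic below is ours, not a calculation from the report.

```python
# Volume-weighted blended fuel price, using prices quoted in the report:
# conventional jet fuel ~$0.70/L, projected SAF $1.11-$2.86/L, and a 65%
# SAF share by 2050 under the 1.5 C scenario. Illustrative arithmetic only.

def blended_price(saf_share, saf_price, jet_price=0.70):
    """Volume-weighted fuel price ($/L) for a given SAF blend share."""
    return saf_share * saf_price + (1 - saf_share) * jet_price

low = blended_price(0.65, 1.11)   # cheapest projected SAF pathway
high = blended_price(0.65, 2.86)  # most expensive projected pathway
print(f"2050 blended price range: ${low:.2f}-${high:.2f} per liter")
```

Even the cheapest pathway raises the blended price well above today's $0.70 per liter, which is why the report stresses strategies to manage price increases.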
The researchers identified sugarcane- and corn-based ethanol-to-jet fuel, along with palm oil- and soybean-based hydro-processed esters and fatty acids, as the most promising feedstock sources in the near term for SAF production in Latin America.

“Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.

Recommendations

The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America.

They stressed that government policy and regulatory mechanisms will be needed to create sufficient conditions to attract SAF investments in the region and make SAF commercially viable as the aviation industry decarbonizes operations. Without appropriate policy frameworks, SAF requirements will affect the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build demand for establishing economies of scale, and develop innovative pathways for producing SAF.

Finally, the research team recommended region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region will help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.

“Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. 
“SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”

Financial support for this study was provided by LATAM Airlines and Airbus.