More stories

  •

    Surprise discovery could lead to improved catalysts for industrial reactions

    The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.

    A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds. Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.

    There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.

    “What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”

    He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”

    The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.

    While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.

    The reaction that produces vinyl acetate requires something to activate the oxygen molecules that are one of the constituents of the reaction, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it’s the surface that ends up doing the activation of the oxygen.

    They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.

    The team borrowed techniques traditionally used in corrosion research to study the process. They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”

    By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.

    The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.

    “Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says. Since this process has been worked on for so long, these findings may not necessarily lead to improvements in this specific process of making vinyl acetate, but they do provide a better understanding of why the materials work as they do, and could lead to improvements in other catalytic processes.

    Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.

    Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”

    This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. … [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”

    The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation; the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.
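    The rate-limiting role of corrosion described in the story can be sketched with a toy kinetic model: for steps that proceed in series around a catalytic cycle, the cycle time is the sum of each step's characteristic time, so the slowest step dominates the overall rate. The step names and rate values below are illustrative assumptions, not measurements from the study.

```python
# Toy model of a catalytic cycle whose overall rate is set by its slowest
# step. Step names and rates are illustrative, not data from the paper.

def cycle_rate(step_rates):
    """Steady-state turnover rate of steps in series: the total cycle
    time is the sum of each step's characteristic time (1/rate)."""
    return 1.0 / sum(1.0 / r for r in step_rates)

steps = {
    "corrosion (surface -> dissolved Pd ions)": 0.5,   # slowest step
    "molecular chemistry (ethylene + acetic acid)": 50.0,
    "redeposition (ions -> surface)": 20.0,
}

overall = cycle_rate(steps.values())
slowest = min(steps, key=steps.get)
print(f"overall rate ~ {overall:.3f}, limited by: {slowest}")
```

    With these made-up numbers, the overall rate sits just below the corrosion rate of 0.5, mirroring the paper's conclusion that corrosion is "the choke point" of the process.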

  •

    Collaboration between MIT and GE Vernova aims to develop and scale sustainable energy systems

    MIT and GE Vernova today announced the creation of the MIT-GE Vernova Energy and Climate Alliance to help develop and scale sustainable energy systems across the globe.

    The alliance launches a five-year collaboration between MIT and GE Vernova, a global energy company that spun off from General Electric’s energy business in 2024. The endeavor will encompass research, education, and career opportunities for students, faculty, and staff across MIT’s five schools and the MIT Schwarzman College of Computing. It will focus on three main themes: decarbonization, electrification, and renewables acceleration.

    “This alliance will provide MIT students and researchers with a tremendous opportunity to work on energy solutions that could have real-world impact,” says Anantha Chandrakasan, MIT’s chief innovation and strategy officer and dean of the School of Engineering. “GE Vernova brings domain knowledge and expertise deploying these at scale. When our researchers develop new innovative technologies, GE Vernova is strongly positioned to bring them to global markets.”

    Through the alliance, GE Vernova is sponsoring research projects at MIT and providing philanthropic support for MIT research fellowships. The company will also engage with MIT’s community through participation in corporate membership programs and professional education.

    “It’s a privilege to combine forces with MIT’s world-class faculty and students as we work together to realize an optimistic, innovation-driven approach to solving the world’s most pressing challenges,” says Scott Strazik, GE Vernova CEO. “Through this alliance, we are proud to be able to help drive new technologies while at the same time inspire future leaders to play a meaningful role in deploying technology to improve the planet at companies like GE Vernova.”

    “This alliance embodies the spirit of the MIT Climate Project — combining cutting-edge research, a shared drive to tackle today’s toughest energy challenges, and a deep sense of optimism about what we can achieve together,” says Sally Kornbluth, president of MIT. “With the combined strengths of MIT and GE Vernova, we have a unique opportunity to make transformative progress in the flagship areas of electrification, decarbonization, and renewables acceleration.”

    The alliance, comprising a $50 million commitment, will operate within MIT’s Office of Innovation and Strategy. It will fund approximately 12 annual research projects relating to the three themes, as well as three master’s student projects in MIT’s Technology and Policy Program. The research projects will address challenges like developing and storing clean energy, as well as the creation of robust system architectures that help sustainable energy sources like solar, wind, advanced nuclear reactors, green hydrogen, and more compete with carbon-emitting sources.

    The projects will be selected by a joint steering committee composed of representatives from MIT and GE Vernova, following an annual Institute-wide call for proposals.

    The collaboration will also create approximately eight endowed GE Vernova research fellowships for MIT students, to be selected by faculty and beginning in the fall. There will also be 10 student internships that will span GE Vernova’s global operations, and GE Vernova will also sponsor programming through MIT’s New Engineering Education Transformation (NEET), which equips students with career-oriented experiential opportunities. Additionally, the alliance will create professional education programming for GE Vernova employees.

    “The internships and fellowships will be designed to bring students into our ecosystem,” says GE Vernova Chief Corporate Affairs Officer Roger Martella. “Students will walk our factory floor, come to our labs, be a part of our management teams, and see how we operate as business leaders. They’ll get a sense for how what they’re learning in the classroom is being applied in the real world.”

    Philanthropic support from GE Vernova will also support projects in MIT’s Human Insight Collaborative (MITHIC), which launched last fall to elevate human-centered research and teaching. The projects will allow faculty to explore how areas like energy and cybersecurity influence human behavior and experiences.

    In connection with the alliance, GE Vernova is expected to join several MIT consortia and membership programs, helping foster collaborations and dialogue between industry experts and researchers and educators across campus.

    With operations across more than 100 countries, GE Vernova designs, manufactures, and services technologies to generate, transfer, and store electricity with a mission to decarbonize the world. The company is headquartered in Kendall Square, right down the road from MIT, which its leaders say is not a coincidence.

    “We’re really good at taking proven technologies and commercializing them and scaling them up through our labs,” Martella says. “MIT excels at coming up with those ideas and being a sort of time machine that thinks outside the box to create the future. That’s why this is such a great fit: We both have a commitment to research, innovation, and technology.”

    The alliance is the latest in MIT’s rapidly growing portfolio of research and innovation initiatives around sustainable energy systems, which also includes the Climate Project at MIT. Separate from, but complementary to, the MIT-GE Vernova Alliance, the Climate Project is a campus-wide effort to develop technological, behavioral, and policy solutions to some of the toughest problems impeding an effective global climate response.

  •

    Developing materials for stellar performance in fusion power plants

    When Zoe Fisher was in fourth grade, her art teacher asked her to draw her vision of a dream job on paper. At the time, those goals changed like the flavor of the week in an ice cream shop — “zookeeper” featured prominently for a while — but Zoe immediately knew what she wanted to put down: a mad scientist.

    When Fisher stumbled upon the drawing in her parents’ Chicago home recently, it felt serendipitous because, by all measures, she has realized that childhood dream. The second-year doctoral student at MIT’s Department of Nuclear Science and Engineering (NSE) is studying materials for fusion power plants at the Plasma Science and Fusion Center (PSFC) under the advisement of Michael Short, associate professor at NSE. Dennis Whyte, Hitachi America Professor of Engineering at NSE, serves as co-advisor.

    On track to an MIT education

    Growing up in Chicago, Fisher had heard her parents remarking on her reasoning abilities. When she was barely a preschooler she argued that she couldn’t have been found in a purple speckled egg, as her parents claimed they had done.

    Fisher didn’t put together just how much she had gravitated toward science until a high school physics teacher encouraged her to apply to MIT. Passionate about both the arts and sciences, she initially worried that pursuing science would be very rigid, without room for creativity. But she knows now that exploring solutions to problems requires plenty of creative thinking.

    It was a visit to MIT through the Weekend Immersion in Science and Engineering (WISE) that truly opened her eyes to the potential of an MIT education. “It just seemed like the undergraduate experience here is where you can be very unapologetically yourself. There’s no fronting something you don’t want to be like. There’s so much authenticity compared to most other colleges I looked at,” Fisher says. Once admitted, Campus Preview Weekend confirmed that she belonged. “We got to be silly and weird — a version of the Mafia game was a hit — and I was like, ‘These are my people,’” Fisher laughs.

    Pursuing fusion at NSE

    Before she officially started as a first-year in 2018, Fisher enrolled in the Freshman Pre-Orientation Program (FPOP), which begins a week before orientation. Each FPOP zooms into one field. “I’d applied to the nuclear one simply because it sounded cool and I didn’t know anything about it,” Fisher says. She was intrigued right away. “They really got me with that ‘star in a bottle’ line,” she laughs. (The quest for commercial fusion is to create the energy equivalent of a star in a bottle.)

    Excited by a talk by Zachary Hartwig, Robert N. Noyce Career Development Professor at NSE, Fisher asked if she could work on fusion as an undergraduate as part of an Undergraduate Research Opportunities Program (UROP) project. She started with modeling solders for power plants and was hooked. When Fisher requested more experimental work, Hartwig put her in touch with Research Scientist David Fischer at the PSFC. Fisher eventually moved on to explore superconductors, which eventually morphed into research for her master’s thesis.

    For her doctoral research, Fisher is extending her master’s work to explore defects in ceramics, specifically in alumina (aluminum oxide). Sapphire coatings are the single-crystal equivalent of alumina, an insulator being explored for use in fusion power plants. “I eventually want to figure out what types of charge defects form in ceramics during radiation damage so we can ultimately engineer radiation-resistant sapphire,” Fisher says.

    When you introduce a material in a fusion power plant, stray high-energy neutrons born from the plasma can collide and fundamentally reorder the lattice, which is likely to change a range of thermal, electrical, and structural properties. “Think of a scaffolding outside a building, with each one of those joints as a different atom that holds your material in place. If you go in and you pull a joint out, there’s a chance that you pulled out a joint that wasn’t structurally sound, in which case everything would be fine. But there’s also a chance that you pull a joint out and everything alters. And [such unpredictability] is a problem,” Fisher says. “We need to be able to account for exactly how these neutrons are going to alter the lattice property,” she says, and it’s one of the topics her research explores.

    The studies, in turn, can function as a jumping-off point for irradiating superconductors. The goals are two-fold: “I want to figure out how I can make an industry-usable ceramic you can use to insulate the inside of a fusion power plant, and then also figure out if I can take this information that I’m getting with ceramics and make it superconductor-relevant,” Fisher says. “Superconductors are the electromagnets we will use to contain the plasma inside fusion power plants. However, they prove pretty difficult to study. Since they are also ceramic, you can draw a lot of parallels between alumina and yttrium barium copper oxide (YBCO), the specific superconductor we use,” she adds. Fisher is also excited about the many experiments she performs using a particle accelerator, one of which involves measuring exactly how surface thermal properties change during radiation.

    Sailing new paths

    It’s not just her research that Fisher loves. As an undergrad, and during her master’s, she was on the varsity sailing team. “I worked my way into sailing with literal Olympians, I did not see that coming,” she says. Fisher participates in Chicago’s Race to Mackinac and the Melges 15 Series every chance she gets. Of all the types of boats she has sailed, she prefers dinghy sailing the most. “It’s more physical, you have to throw yourself around a lot and there’s this immediate cause and effect, which I like,” Fisher says. She also teaches sailing lessons in the summer at MIT’s Sailing Pavilion — you can find her on a small motorboat, issuing orders through a speaker.

    Teaching has figured prominently throughout Fisher’s time at MIT. Through MISTI, Fisher taught high school classes in Germany and, in her senior year, a radiation and materials class in Armenia. She was delighted by the food and culture in Armenia and by how excited people were to learn new ideas. Her love of teaching continues, as she has reached out to high schools in the Boston area. “I like talking to groups and getting them excited about fusion, or even maybe just the concept of attending graduate school,” Fisher says, adding that teaching the ropes of an experiment one-on-one is “one of the most rewarding things.”

    She also learned the value of resilience and quick thinking on various other MISTI trips. Despite her love of travel, Fisher has had a few harrowing experiences with tough situations and plans falling through at the last minute. That’s when she tells herself, “Well, the only thing that you’re gonna do is you’re gonna keep doing what you wanted to do.”

    That eyes-on-the-prize focus has stood Fisher in good stead, and continues to serve her well in her research today.

  •

    Rooftop panels, EV chargers, and smart thermostats could chip in to boost power grid resilience

    There’s a lot of untapped potential in our homes and vehicles that could be harnessed to reinforce local power grids and make them more resilient to unforeseen outages, a new study shows.

    In response to a cyber attack or natural disaster, a backup network of decentralized devices — such as residential solar panels, batteries, electric vehicles, heat pumps, and water heaters — could restore electricity or relieve stress on the grid, MIT engineers say.

    Such devices are “grid-edge” resources found close to the consumer rather than near central power plants, substations, or transmission lines. Grid-edge devices can independently generate, store, or tune their consumption of power. In their study, the research team shows how such devices could one day be called upon to either pump power into the grid, or rebalance it by dialing down or delaying their power use.

    In a paper appearing this week in the Proceedings of the National Academy of Sciences, the engineers present a blueprint for how grid-edge devices could reinforce the power grid through a “local electricity market.” Owners of grid-edge devices could subscribe to a regional market and essentially loan out their device to be part of a microgrid or a local network of on-call energy resources.

    In the event that the main power grid is compromised, an algorithm developed by the researchers would kick in for each local electricity market, to quickly determine which devices in the network are trustworthy. The algorithm would then identify the combination of trustworthy devices that would most effectively mitigate the power failure, by either pumping power into the grid or reducing the power they draw from it, by an amount that the algorithm would calculate and communicate to the relevant subscribers. The subscribers could then be compensated through the market, depending on their participation.

    The team illustrated this new framework through a number of grid attack scenarios, in which they considered failures at different levels of a power grid, from various sources such as a cyber attack or a natural disaster. Applying their algorithm, they showed that the networks of grid-edge devices were able to mitigate the various attacks.

    The results demonstrate that grid-edge devices such as rooftop solar panels, EV chargers, batteries, and smart thermostats (for HVAC devices or heat pumps) could be tapped to stabilize the power grid in the event of an attack.

    “All these small devices can do their little bit in terms of adjusting their consumption,” says study co-author Anu Annaswamy, a research scientist in MIT’s Department of Mechanical Engineering. “If we can harness our smart dishwashers, rooftop panels, and EVs, and put our combined shoulders to the wheel, we can really have a resilient grid.”

    The study’s MIT co-authors include lead author Vineet Nair and John Williams, along with collaborators from multiple institutions including the Indian Institute of Technology, the National Renewable Energy Laboratory, and elsewhere.

    Power boost

    The team’s study is an extension of their broader work in adaptive control theory and designing systems to automatically adapt to changing conditions. Annaswamy, who leads the Active-Adaptive Control Laboratory at MIT, explores ways to boost the reliability of renewable energy sources such as solar power.

    “These renewables come with a strong temporal signature, in that we know for sure the sun will set every day, so the solar power will go away,” Annaswamy says. “How do you make up for the shortfall?”

    The researchers found the answer could lie in the many grid-edge devices that consumers are increasingly installing in their own homes.

    “There are lots of distributed energy resources that are coming up now, closer to the customer rather than near large power plants, and it’s mainly because of individual efforts to decarbonize,” Nair says. “So you have all this capability at the grid edge. Surely we should be able to put them to good use.”

    While considering ways to deal with drops in energy from the normal operation of renewable sources, the team also began to look into other causes of power dips, such as from cyber attacks. They wondered, in these malicious instances, whether and how the same grid-edge devices could step in to stabilize the grid following an unforeseen, targeted attack.

    Attack mode

    In their new work, Annaswamy, Nair, and their colleagues developed a framework for incorporating grid-edge devices, and in particular, internet-of-things (IoT) devices, in a way that would support the larger grid in the event of an attack or disruption. IoT devices are physical objects that contain sensors and software that connect to the internet.

    For their new framework, named EUREICA (Efficient, Ultra-REsilient, IoT-Coordinated Assets), the researchers start with the assumption that one day, most grid-edge devices will also be IoT devices, enabling rooftop panels, EV chargers, and smart thermostats to wirelessly connect to a larger network of similarly independent and distributed devices. The team envisions that for a given region, such as a community of 1,000 homes, there exists a certain number of IoT devices that could potentially be enlisted in the region’s local network, or microgrid. Such a network would be managed by an operator, who would be able to communicate with operators of other nearby microgrids.

    If the main power grid is compromised or attacked, operators would run the researchers’ decision-making algorithm to determine trustworthy devices within the network that can pitch in to help mitigate the attack.

    The team tested the algorithm on a number of scenarios, such as a cyber attack in which all smart thermostats made by a certain manufacturer are hacked to raise their setpoints simultaneously to a degree that dramatically alters a region’s energy load and destabilizes the grid. The researchers also considered attacks and weather events that would shut off the transmission of energy at various levels and nodes throughout a power grid.

    “In our attacks we consider between 5 and 40 percent of the power being lost. We assume some nodes are attacked, and some are still available and have some IoT resources, whether a battery with energy available or an EV or HVAC device that’s controllable,” Nair explains. “So, our algorithm decides which of those houses can step in to either provide extra power generation to inject into the grid or reduce their demand to meet the shortfall.”

    In every scenario that they tested, the team found that the algorithm was able to successfully restabilize the grid and mitigate the attack or power failure. They acknowledge that to put in place such a network of grid-edge devices will require buy-in from customers, policymakers, and local officials, as well as innovations such as advanced power inverters that enable EVs to inject power back into the grid.

    “This is just the first of many steps that have to happen in quick succession for this idea of local electricity markets to be implemented and expanded upon,” Annaswamy says. “But we believe it’s a good start.”

    This work was supported, in part, by the U.S. Department of Energy and the MIT Energy Initiative.
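    The selection step the researchers describe — keep only trustworthy devices, then assign each a contribution until the shortfall is met — can be sketched as a simple greedy allocation. The device data and the greedy strategy below are illustrative assumptions; EUREICA's actual decision-making algorithm is more sophisticated than this one-pass sketch.

```python
# Illustrative greedy sketch of grid-edge shortfall allocation: filter out
# untrusted devices, then assign each trusted device an injection or
# demand reduction up to its capacity. Device data is hypothetical.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    capacity_kw: float   # max power it can inject or shed
    trusted: bool

def allocate(devices, shortfall_kw):
    """Return ({device name: assigned kW}, unmet kW), covering the
    shortfall with the largest trusted devices first."""
    assignment = {}
    remaining = shortfall_kw
    for d in sorted((d for d in devices if d.trusted),
                    key=lambda d: d.capacity_kw, reverse=True):
        if remaining <= 0:
            break
        share = min(d.capacity_kw, remaining)
        assignment[d.name] = share
        remaining -= share
    return assignment, remaining

fleet = [
    Device("rooftop PV + battery", 5.0, True),
    Device("EV charger (V2G)", 7.0, True),
    Device("smart thermostat", 1.5, True),
    Device("hacked thermostat", 1.5, False),  # excluded: untrusted
]
plan, unmet = allocate(fleet, shortfall_kw=10.0)
print(plan, "unmet:", unmet)
```

    Here a 10 kW shortfall is covered by the EV charger and the rooftop system, while the compromised thermostat is never enlisted — the trust filter runs before any power is assigned.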

  •

    Unlocking the secrets of fusion’s core with AI-enhanced simulations

    Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.Howard is member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. MFE-IM and Howard’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure their accuracy, these models are continuously validated using data from previous experiments, keeping their simulations grounded in reality.In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on. 
He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.The biggest and best of what’s never been builtForty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power, and a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly more powerful simulations like the modeling framework Howard used.In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by Howard’s collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations on how plasma behaves at different locations within a fusion device.The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms. 
This predict-first approach allows us to create more efficient plasmas in a device like ITER.”After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.” Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.“Just dropped in to see what condition my condition was in”Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It only took three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power. 
The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t significantly affected by the reduced power input; the same output for roughly half the input means a more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.

Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.”
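The surrogate idea at the heart of PORTALS — run a handful of expensive high-fidelity simulations, train a fast stand-in on them, validate it against fresh high-fidelity points, and refine until it agrees — can be sketched in miniature. This is a toy illustration only: a cheap analytic function stands in for CGYRO, and the surrogate here is simple linear interpolation rather than PORTALS’ machine-learning models.

```python
import math

# Stand-in for an expensive high-fidelity simulation (a CGYRO run takes
# hours; this hypothetical toy "plasma response" is instant).
def high_fidelity(x):
    return math.exp(-x) * math.sin(3 * x)

def build_surrogate(xs):
    """'Train' a surrogate by tabulating high-fidelity results once,
    then answering new queries by cheap linear interpolation."""
    ys = [high_fidelity(x) for x in xs]   # the slow part, done once
    def surrogate(x):
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        raise ValueError("query outside training range")
    return surrogate

def refine_until_accurate(lo, hi, tol):
    """PORTALS-style loop: check the surrogate against fresh
    high-fidelity points and retrain on a denser grid if it misses."""
    n = 5
    while True:
        xs = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
        s = build_surrogate(xs)
        # Validate at points halfway between the training samples.
        checks = [lo + (hi - lo) * (i + 0.5) / (n - 1) for i in range(n - 1)]
        err = max(abs(s(x) - high_fidelity(x)) for x in checks)
        if err < tol:
            return s, n, err
        n *= 2   # refine the training grid and retrain

surrogate, n_runs, err = refine_until_accurate(0.0, 2.0, tol=1e-3)
print(n_runs, err)
```

Once trained, the surrogate can be queried thousands of times to scan operating conditions at a cost that would be prohibitive with the full model — the same reason the ITER study could explore alternate scenarios with only a few extra CGYRO iterations.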


    How to make small modular reactors more cost-effective

    When Youyeon Choi was in high school, she discovered she really liked “thinking in geometry.” The shapes, the dimensions … she was into all of it. Today, geometry plays a prominent role in her doctoral work under the guidance of Professor Koroush Shirvan, as she explores ways to increase the competitiveness of small modular reactors (SMRs).

    Central to the thesis is metallic nuclear fuel in a helical cruciform shape, which increases surface area and lowers heat flux compared with the traditional cylindrical equivalent.

    A childhood in a prominent nuclear energy country

    Her passion for geometry notwithstanding, Choi admits she was not “really into studying” in middle school. But that changed when she started excelling in technical subjects in her high school years. And because it was the natural sciences that first caught Choi’s eye, she assumed she would major in the subject when she went to university.

    This focus, too, would change. Growing up in Seoul, Choi was becoming increasingly aware of the critical role nuclear energy played in meeting her native country’s energy needs. Twenty-six reactors provide nearly a third of South Korea’s electricity, according to the World Nuclear Association, and the country is one of the world’s most prominent nuclear energy players.

    In such an ecosystem, Choi understood the stakes at play, especially with electricity-guzzling technologies such as AI and electric vehicles on the rise. Her father also discussed energy-related topics with her when she was in high school. Being immersed in that atmosphere eventually led Choi to nuclear engineering.


    Early work in South Korea

    Excelling in high school math and science, Choi was a shoo-in for college at Seoul National University. Initially intent on studying nuclear fusion, Choi switched to fission because she saw that the path to fusion was more convoluted and still in the early stages of exploration. Choi went on to complete her bachelor’s and master’s degrees in nuclear engineering from the university. As part of her master’s thesis, she worked on a multi-physics modeling project involving high-fidelity simulations of reactor physics and thermal hydraulics to analyze reactor cores.

    South Korea exports its nuclear know-how widely, so work in the field can be immensely rewarding. Indeed, after graduate school, Choi moved to Daejeon, which has the moniker “Science City.” As an intern at the Korea Atomic Energy Research Institute (KAERI), she conducted experimental studies on the passive safety systems of nuclear reactors. Choi then moved to the Korea Institute of Nuclear Nonproliferation and Control, where she worked as a researcher developing nuclear security programs for other countries. Given South Korea’s dominance in the field, other countries would tap its expertise to develop their own nuclear energy programs. The focus was on international training programs, an arm of which involved cybersecurity and physical protection.

    While the work was impactful, Choi found she missed the modeling work she had done as part of her master’s thesis. Looking to return to technical research, she applied to the MIT Department of Nuclear Science and Engineering (NSE). 
    “MIT has the best nuclear engineering program in the States, and maybe even the world,” Choi says, explaining her decision to enroll as a doctoral student.

    Innovative research at MIT

    At NSE, Choi is working to make SMRs more price-competitive than traditional nuclear power plants. Due to their smaller size, SMRs are able to serve areas where larger reactors might not work, but they’re more expensive. One way to address costs is to squeeze more electricity out of a unit of fuel — that is, to increase the power density. Choi is doing so by replacing the traditional cylindrical uranium dioxide ceramic fuel with a metallic fuel in a helical cruciform shape. Such a replacement potentially offers twin advantages: the metal fuel has high thermal conductivity, which means the fuel will operate at lower temperatures and therefore even more safely. And the twisted shape gives more surface area and lower heat flux. The net result is more electricity for the same volume.

    The project receives funding from a collaboration between Lightbridge Corp., which is exploring how advanced fuel technologies can improve the performance of water-cooled SMRs, and the U.S. Department of Energy Nuclear Energy University Program.

    With SMR efficiencies in mind, Choi is indulging her love of multi-physics modeling, focusing on reactor physics, thermal hydraulics, and fuel performance simulation. “The goal of this modeling and simulation is to see if we can really use this fuel in the SMR,” Choi says. “I’m really enjoying doing the simulations because the geometry is really hard to model. Because the shape is twisted, there’s no symmetry at all,” she says. Always up for a challenge, Choi learned the various aspects of physics involved and a variety of computational tools, including Monte Carlo codes for reactor physics.

    Being at MIT has a whole roster of advantages, Choi says, and she especially appreciates the respect researchers have for each other. 
    She appreciates being able to discuss projects with Shirvan, and his focus on practical applications of research. At the same time, Choi appreciates the “exotic” nature of her project. “Even assessing if this SMR fuel is at all feasible is really hard, but I think it’s all possible because it’s MIT and my PI [principal investigator] is really invested in innovation,” she says.

    It’s an exciting time to be in nuclear engineering, Choi says. She serves as one of the board members of the student section of the American Nuclear Society and is an NSE representative on the Graduate Student Council for the 2024-25 academic year. Choi is excited about the global momentum toward nuclear as more countries explore the energy source and try to build more nuclear power plants on the path to decarbonization. “I really do believe nuclear energy is going to be a leading carbon-free energy. It’s very important for our collective futures,” Choi says.
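    The heat-flux advantage of the twisted geometry described above comes down to simple arithmetic: for a fixed power per unit length of fuel, the average surface heat flux falls as the wetted perimeter grows. The numbers below are purely illustrative assumptions, not Lightbridge or MIT design data.

    ```python
    import math

    # Assume (hypothetically) a fuel element must carry 40 kW of power
    # per metre of its length.
    linear_power = 40_000.0                 # W per metre of fuel element

    # Wetted perimeter of a conventional cylindrical rod about 1 cm across:
    rod_diameter = 0.01                     # m
    perimeter_rod = math.pi * rod_diameter  # ~0.031 m

    # Suppose the helical cruciform cross-section offers ~40 percent more
    # wetted perimeter within the same envelope (illustrative assumption).
    perimeter_cruciform = 1.4 * perimeter_rod

    # Average surface heat flux = linear power / wetted perimeter, so more
    # perimeter means a lower-flux, cooler-running surface for the same power.
    q_rod = linear_power / perimeter_rod            # W/m^2
    q_cruciform = linear_power / perimeter_cruciform

    print(round(q_rod / 1e6, 2), "vs", round(q_cruciform / 1e6, 2), "MW/m^2")
    ```

    Run the other way, the same relation shows why the shape supports higher power density: at an unchanged heat flux limit, a perimeter 1.4 times larger permits 1.4 times the linear power from the same volume.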


    Toward sustainable decarbonization of aviation in Latin America

    According to the International Energy Agency, aviation accounts for about 2 percent of global carbon dioxide emissions, and aviation emissions are expected to double by mid-century as demand for domestic and international air travel rises. To sharply reduce emissions in alignment with the Paris Agreement’s long-term goal of keeping global warming below 1.5 degrees Celsius, the International Air Transport Association (IATA) has set a goal of achieving net-zero carbon emissions by 2050. Which raises the question: Are there technologically feasible and economically viable strategies to reach that goal within the next 25 years?

    To begin to address that question, a team of researchers at the MIT Center for Sustainability Science and Strategy (CS3) and the MIT Laboratory for Aviation and the Environment has spent the past year analyzing aviation decarbonization options in Latin America, where air travel is expected to more than triple by 2050, thereby doubling today’s aviation-related emissions in the region.

    Chief among those options is the development and deployment of sustainable aviation fuel (SAF). Currently produced from low- and zero-carbon feedstocks, including municipal waste and non-food crops, and requiring practically no alteration of aircraft systems or refueling infrastructure, SAF has the potential to perform just as well as petroleum-based jet fuel with as little as 20 percent of its carbon footprint.

    Focused on Brazil, Chile, Colombia, Ecuador, Mexico, and Peru, the researchers assessed SAF feedstock availability, the costs of corresponding SAF pathways, and how SAF deployment would likely impact fuel use, prices, emissions, and aviation demand in each country. They also explored how efficiency improvements and market-based mechanisms could help the region reach decarbonization targets. 
    The team’s findings appear in a CS3 Special Report.

    SAF emissions, costs, and sources

    Under an ambitious emissions mitigation scenario designed to cap global warming at 1.5 C and raise the rate of SAF use in Latin America to 65 percent by 2050, the researchers projected that aviation emissions would be reduced by about 60 percent in 2050 compared to a scenario in which existing climate policies are not strengthened. To achieve net-zero emissions by 2050, other measures would be required, such as improvements in operational and air traffic efficiencies, airplane fleet renewal, alternative forms of propulsion, and carbon offsets and removals.

    As of 2024, jet fuel prices in Latin America are around $0.70 per liter. Based on the current availability of feedstocks, the researchers projected SAF costs within the six countries studied to range from $1.11 to $2.86 per liter. They cautioned that increased fuel prices could affect operating costs of the aviation sector and overall aviation demand unless strategies to manage price increases are implemented.

    Under the 1.5 C scenario, the total cumulative capital investment required to build new SAF production plants between 2025 and 2050 was estimated at $204 billion for the six countries (ranging from $5 billion in Ecuador to $84 billion in Brazil). 
    The researchers identified sugarcane- and corn-based ethanol-to-jet fuel, and palm oil- and soybean-based hydro-processed esters and fatty acids, as the most promising near-term feedstock pathways for SAF production in Latin America.

    “Our findings show that SAF offers a significant decarbonization pathway, which must be combined with an economy-wide emissions mitigation policy that uses market-based mechanisms to offset the remaining emissions,” says Sergey Paltsev, lead author of the report, MIT CS3 deputy director, and senior research scientist at the MIT Energy Initiative.

    Recommendations

    The researchers concluded the report with recommendations for national policymakers and aviation industry leaders in Latin America. They stressed that government policy and regulatory mechanisms will be needed to create sufficient conditions to attract SAF investments in the region and make SAF commercially viable as the aviation industry decarbonizes its operations. Without appropriate policy frameworks, SAF requirements will affect the cost of air travel. For fuel producers, stable, long-term-oriented policies and regulations will be needed to create robust supply chains, build demand for establishing economies of scale, and develop innovative pathways for producing SAF.

    Finally, the research team recommended region-wide collaboration in designing SAF policies. A unified decarbonization strategy among all countries in the region would help ensure competitiveness, economies of scale, and achievement of long-term carbon emissions-reduction goals.

    “Regional feedstock availability and costs make Latin America a potential major player in SAF production,” says Angelo Gurgel, a principal research scientist at MIT CS3 and co-author of the study. 
    “SAF requirements, combined with government support mechanisms, will ensure sustainable decarbonization while enhancing the region’s connectivity and the ability of disadvantaged communities to access air transport.”

    Financial support for this study was provided by LATAM Airlines and Airbus.
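    The price figures above give a feel for why the researchers caution about air-travel costs. A simple volume-weighted blend of conventional jet fuel at $0.70 per liter with SAF at $1.11 to $2.86 per liter, at the 65 percent SAF share targeted for 2050, is a back-of-envelope calculation only — not the report’s methodology, which accounts for policy, logistics, and demand effects.

    ```python
    # Report figures: conventional jet fuel price and the 2050 SAF share
    # under the 1.5 C scenario.
    jet_fuel_price = 0.70    # USD per liter, Latin America, 2024
    saf_share = 0.65         # 65 percent SAF use by 2050

    def blended_price(saf_price):
        """Volume-weighted average price of the SAF / jet fuel mix."""
        return saf_share * saf_price + (1 - saf_share) * jet_fuel_price

    low = blended_price(1.11)    # cheapest projected SAF pathway
    high = blended_price(2.86)   # most expensive projected pathway
    print(f"${low:.2f} to ${high:.2f} per liter")   # roughly $0.97 to $2.10
    ```

    Even at the cheapest projected SAF pathway, the blend comes out nearly 40 percent above today’s fuel price, which is why the report stresses policy mechanisms to manage price increases.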


    The multifaceted challenge of powering AI

    Artificial intelligence has become vital in business and financial dealings, medical care, technology development, research, and much more. Without realizing it, consumers rely on AI when they stream a video, do online banking, or perform an online search. Behind these capabilities are more than 10,000 data centers globally, each one a huge warehouse containing thousands of computer servers and other infrastructure for storing, managing, and processing data. There are now over 5,000 data centers in the United States, and new ones are being built every day — in the U.S. and worldwide. Often dozens are clustered together right near where people live, attracted by policies that provide tax breaks and other incentives, and by what looks like abundant electricity.

    And data centers do consume huge amounts of electricity. U.S. data centers consumed more than 4 percent of the country’s total electricity in 2023, and by 2030 that fraction could rise to 9 percent, according to the Electric Power Research Institute. A single large data center can consume as much electricity as 50,000 homes.

    The sudden need for so many data centers presents a massive challenge to the technology and energy industries, government policymakers, and everyday consumers. Research scientists and faculty members at the MIT Energy Initiative (MITEI) are exploring multiple facets of this problem — from sourcing power to grid improvement to analytical tools that increase efficiency, and more. Data centers have quickly become the energy issue of our day.

    Unexpected demand brings unexpected solutions

    Several companies that use data centers to provide cloud computing and data management services are announcing some surprising steps to deliver all that electricity. Proposals include building their own small nuclear plants near their data centers and even restarting one of the undamaged nuclear reactors at Three Mile Island, which has been shuttered since 2019. 
    (A different reactor at that plant partially melted down in 1979, causing the nation’s worst nuclear power accident.) Already the need to power AI is causing delays in the planned shutdown of some coal-fired power plants and raising prices for residential consumers. Meeting the needs of data centers is not only stressing power grids but also setting back the transition to the clean energy needed to stop climate change.

    There are many aspects to the data center problem from a power perspective. Here are some that MIT researchers are focusing on, and why they’re important.

    An unprecedented surge in the demand for electricity

    “In the past, computing was not a significant user of electricity,” says William H. Green, director of MITEI and the Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering. “Electricity was used for running industrial processes and powering household devices such as air conditioners and lights, and more recently for powering heat pumps and charging electric cars. But now all of a sudden, electricity used for computing in general, and by data centers in particular, is becoming a gigantic new demand that no one anticipated.”

    Why the lack of foresight? Usually, demand for electric power increases by roughly half a percent per year, and utilities bring in new power generators and make other investments as needed to meet the expected new demand. But the data centers now coming online are creating unprecedented leaps in demand that operators didn’t see coming. In addition, the new demand is constant. It’s critical that a data center provide its services all day, every day. There can be no interruptions in processing large datasets, accessing stored data, and running the cooling equipment needed to keep all the packed-together computers churning away without overheating.

    Moreover, even if enough electricity is generated, getting it to where it’s needed may be a problem, explains Deepjyoti Deka, a MITEI research scientist. 
    “A grid is a network-wide operation, and the grid operator may have sufficient generation at another location or even elsewhere in the country, but the wires may not have sufficient capacity to carry the electricity to where it’s wanted.” So transmission capacity must be expanded — and, says Deka, that’s a slow process.

    Then there’s the “interconnection queue.” Sometimes, adding either a new user (a “load”) or a new generator to an existing grid can cause instabilities or other problems for everyone else already on the grid. In that situation, bringing a new data center online may be delayed, and enough delays result in new loads or generators having to stand in line and wait their turn. Right now, much of the interconnection queue is already filled with new solar and wind projects, and the delay is currently about five years. Meeting the demand from newly installed data centers while ensuring that the quality of service elsewhere is not hampered is a problem that needs to be addressed.

    Finding clean electricity sources

    To further complicate the challenge, many companies — including so-called “hyperscalers” such as Google, Microsoft, and Amazon — have made public commitments to reach net-zero carbon emissions within the next 10 years. Many have been making strides toward their clean-energy goals by signing “power purchase agreements”: contracts to buy electricity from, say, a solar or wind facility, sometimes providing funding for the facility to be built. But that approach to accessing clean energy has its limits when faced with the extreme electricity demand of a data center.

    Meanwhile, soaring power consumption is delaying coal plant closures in many states. There are simply not enough sources of renewable energy to serve both the hyperscalers and the existing users, including individual consumers. 
    As a result, conventional plants fired by fossil fuels such as coal are needed more than ever.

    As the hyperscalers look for sources of clean energy for their data centers, one option could be to build their own wind and solar installations. But such facilities would generate electricity only intermittently. Given the need for uninterrupted power, the data center would have to maintain energy storage units, which are expensive. They could instead rely on natural gas or diesel generators for backup power — but those devices would need to be coupled with equipment to capture the carbon emissions, plus a nearby site for permanently disposing of the captured carbon.

    Because of such complications, several of the hyperscalers are turning to nuclear power. As Green notes, “Nuclear energy is well matched to the demand of data centers, because nuclear plants can generate lots of power reliably, without interruption.”

    In a much-publicized move in September, Microsoft signed a deal to buy power for 20 years after Constellation Energy reopens one of the undamaged reactors at its now-shuttered nuclear plant at Three Mile Island, the site of the 1979 nuclear accident. If approved by regulators, Constellation will bring that reactor online by 2028, with Microsoft buying all of the power it produces. Amazon also reached a deal to purchase power produced by another nuclear plant threatened with closure due to financial troubles. And in early December, Meta released a request for proposals to identify nuclear energy developers to help the company meet its AI needs and sustainability goals.

    Other nuclear news focuses on small modular reactors (SMRs): factory-built, modular power plants that could be installed near data centers, potentially without the cost overruns and delays often experienced in building large plants. Google recently ordered a fleet of SMRs to generate the power needed by its data centers. 
    The first one will be completed by 2030 and the remainder by 2035.

    Some hyperscalers are betting on new technologies. For example, Google is pursuing next-generation geothermal projects, and Microsoft has signed a contract to purchase electricity from a startup’s fusion power plant beginning in 2028 — even though the fusion technology hasn’t yet been demonstrated.

    Reducing electricity demand

    Other approaches to providing sufficient clean electricity focus on making the data center and the operations it houses more energy-efficient, so as to perform the same computing tasks using less power. Faster computer chips and optimized algorithms that use less energy are already helping to reduce the load, and also the heat generated.

    Another idea being tried involves shifting computing tasks to times and places where carbon-free energy is available on the grid. Deka explains: “If a task doesn’t have to be completed immediately, but rather by a certain deadline, can it be delayed or moved to a data center elsewhere in the U.S. or overseas where electricity is more abundant, cheaper, and/or cleaner? This approach is known as ‘carbon-aware computing.’” We’re not yet sure whether every task can be moved or delayed easily, says Deka. “If you think of a generative AI-based task, can it easily be separated into small tasks that can be taken to different parts of the country, solved using clean energy, and then be brought back together? What is the cost of doing this kind of division of tasks?”

    That approach is, of course, limited by the problem of the interconnection queue. It’s difficult to access clean energy in another region or state. But efforts are under way to ease the regulatory framework to make sure that critical interconnections can be developed more quickly and easily.

    What about the neighbors?

    A major concern running through all the options for powering data centers is the impact on residential energy consumers. 
    When a data center comes into a neighborhood, there are not only aesthetic concerns but also more practical worries. Will the local electricity service become less reliable? Where will the new transmission lines be located? And who will pay for the new generators, upgrades to existing equipment, and so on? When new manufacturing facilities or industrial plants go into a neighborhood, the downsides are generally offset by the availability of new jobs. Not so with a data center, which may require just a couple dozen employees.

    There are standard rules about how maintenance and upgrade costs are shared and allocated, but the presence of a new data center changes the situation entirely. As a result, utilities now need to rethink their traditional rate structures so as not to place an undue burden on residents to pay for the infrastructure changes needed to host data centers.

    MIT’s contributions

    At MIT, researchers are thinking about and exploring a range of options for tackling the problem of providing clean power to data centers. For example, they are investigating architectural designs that will use natural ventilation to facilitate cooling, equipment layouts that will permit better airflow and power distribution, and highly energy-efficient air conditioning systems based on novel materials. They are creating new analytical tools for evaluating the impact of data center deployments on the U.S. power system and for finding the most efficient ways to provide the facilities with clean energy. 
    Other work looks at how to match the output of small nuclear reactors to the needs of a data center, and how to speed up the construction of such reactors. MIT teams also focus on determining the best sources of backup power and long-duration storage, and on developing decision support systems for siting proposed new data centers — taking into account the availability of electric power and water, regulatory considerations, and even the potential to use what can be significant waste heat, for example, to warm nearby buildings. Technology development projects include designing faster, more efficient computer chips and more energy-efficient computing algorithms.

    In addition to providing leadership and funding for many research projects, MITEI is acting as a convenor, bringing together companies and stakeholders to address this issue. At MITEI’s 2024 Annual Research Conference, a panel of representatives from two hyperscalers and two companies that design and construct data centers discussed their challenges, possible solutions, and where MIT research could be most beneficial.

    As data centers continue to be built, and computing continues to create an unprecedented increase in demand for electricity, Green says, scientists and engineers are in a race to provide the ideas, innovations, and technologies that can meet this need, and at the same time continue to advance the transition to a decarbonized energy system.
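    The “carbon-aware computing” idea Deka describes — routing a deferrable job to wherever the grid is currently cleanest, subject to capacity — can be sketched as a tiny scheduler. The region names, carbon intensities, and capacity figures below are illustrative placeholders, not real-time grid data.

    ```python
    from dataclasses import dataclass

    @dataclass
    class Region:
        name: str
        carbon_intensity: float   # grams CO2 per kWh, hypothetical snapshot
        spare_capacity_mw: float  # headroom available for flexible load

    def pick_region(regions, load_mw):
        """Choose the lowest-carbon region that can absorb the load;
        return None to signal the job should be deferred for now."""
        feasible = [r for r in regions if r.spare_capacity_mw >= load_mw]
        if not feasible:
            return None
        return min(feasible, key=lambda r: r.carbon_intensity)

    # A hypothetical snapshot of three regions at one moment in time:
    snapshot = [
        Region("region-a", carbon_intensity=450.0, spare_capacity_mw=120.0),
        Region("region-b", carbon_intensity=80.0,  spare_capacity_mw=15.0),
        Region("region-c", carbon_intensity=210.0, spare_capacity_mw=60.0),
    ]

    choice = pick_region(snapshot, load_mw=40.0)
    print(choice.name)   # region-b is cleanest but lacks headroom for 40 MW
    ```

    Here a 40 MW job lands in region-c: region-b has the cleanest power but not enough spare capacity. A real scheduler would also weigh Deka’s open questions — whether the job can be split at all, data-movement costs, and deadlines — and, as the article notes, it remains limited by transmission and interconnection constraints.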