More stories

  • Study sheds light on graphite’s lifespan in nuclear reactors

    Graphite is a key structural component in some of the world’s oldest nuclear reactors and many of the next-generation designs being built today. But it also shrinks and swells in response to radiation — and the mechanism behind those changes has proven difficult to study.

    Now, MIT researchers and collaborators have uncovered a link between properties of graphite and how the material behaves in response to radiation. The findings could lead to more accurate, less destructive ways of predicting the lifespan of graphite materials used in reactors around the world.

    “We did some basic science to understand what leads to swelling and, eventually, failure in graphite structures,” says MIT Research Scientist Boris Khaykovich, senior author of the new study. “More research will be needed to put this into practice, but the paper proposes an attractive idea for industry: that you might not need to break hundreds of irradiated samples to understand their failure point.”

    Specifically, the study shows a connection between the size of the pores within graphite and the way the material swells and shrinks in volume, leading to degradation.

    “The lifetime of nuclear graphite is limited by irradiation-induced swelling,” says co-author and MIT Research Scientist Lance Snead. “Porosity is a controlling factor in this swelling, and while graphite has been extensively studied for nuclear applications since the Manhattan Project, we still do not have a clear understanding of the role of porosity in both mechanical properties and swelling. This work addresses that.”

    The open-access paper appears this week in Interdisciplinary Materials.
    It is co-authored by Khaykovich, Snead, MIT Research Scientist Sean Fayfar, former MIT research fellow Durgesh Rai, Stony Brook University Assistant Professor David Sprouster, Oak Ridge National Laboratory Staff Scientist Anne Campbell, and Argonne National Laboratory Physicist Jan Ilavsky.

    A long-studied, complex material

    Ever since 1942, when physicists and engineers built the world’s first nuclear reactor on a converted squash court at the University of Chicago, graphite has played a central role in the generation of nuclear energy. That first reactor, dubbed the Chicago Pile, was constructed from about 40,000 graphite blocks, many of which contained nuggets of uranium.

    Today graphite is a vital component of many operating nuclear reactors and is expected to play a central role in next-generation reactor designs like molten-salt and high-temperature gas reactors. That’s because graphite is a good neutron moderator, slowing down the neutrons released by nuclear fission so they are more likely to create fissions themselves and sustain a chain reaction.

    “The simplicity of graphite makes it valuable,” Khaykovich explains. “It’s made of carbon, and it’s relatively well-known how to make it cleanly. Graphite is a very mature technology. It’s simple, stable, and we know it works.”

    But graphite also has its complexities.

    “We call graphite a composite even though it’s made up of only carbon atoms,” Khaykovich says.
    “It includes ‘filler particles’ that are more crystalline, then there is a matrix called a ‘binder’ that is less crystalline, then there are pores that span in length from nanometers to many microns.”

    Each graphite grade has its own composite structure, but they all contain fractals, or shapes that look the same at different scales.

    Those complexities have made it hard to predict how graphite will respond to radiation in microscopic detail, although it’s been known for decades that when graphite is irradiated, it first densifies, reducing its volume by up to 10 percent, before swelling and cracking. The volume fluctuation is caused by changes to graphite’s porosity and lattice stress.

    “Graphite deteriorates under radiation, as any material does,” Khaykovich says. “So, on the one hand we have a material that’s extremely well-known, and on the other hand, we have a material that is immensely complicated, with a behavior that’s impossible to predict through computer simulations.”

    For the study, the researchers received irradiated graphite samples from Oak Ridge National Laboratory. Co-authors Campbell and Snead were involved in irradiating the samples some 20 years ago. The samples are a grade of graphite known as G347A.

    The research team used an analysis technique known as X-ray scattering, which uses the scattered intensity of an X-ray beam to analyze the properties of a material. Specifically, they looked at the distribution of sizes and surface areas of the sample’s pores, or what are known as the material’s fractal dimensions.

    “When you look at the scattering intensity, you see a large range of porosity,” Fayfar says.
    “Graphite has porosity over such large scales, and you have this fractal self-similarity: The pores at very small sizes look similar to pores spanning microns, so we used fractal models to relate different morphologies across length scales.”

    Fractal models had been used on graphite samples before, but not on irradiated samples to see how the material’s pore structures changed. The researchers found that when graphite is first exposed to radiation, its pores get filled as the material degrades.

    “But what was quite surprising to us is the [size distribution of the pores] turned back around,” Fayfar says. “We had this recovery process that matched our overall volume plots, which was quite odd. It seems like after graphite is irradiated for so long, it starts recovering. It’s sort of an annealing process where you create some new pores, then the pores smooth out and get slightly bigger. That was a big surprise.”

    The researchers found that the size distribution of the pores closely follows the volume change caused by radiation damage.

    “Finding a strong correlation between the [size distribution of pores] and the graphite’s volume changes is a new finding, and it helps connect to the failure of the material under irradiation,” Khaykovich says. “It’s important for people to know how graphite parts will fail when they are under stress and how failure probability changes under irradiation.”

    From research to reactors

    The researchers plan to study other graphite grades and explore further how pore sizes in irradiated graphite correlate with the probability of failure. They speculate that a statistical technique known as the Weibull distribution could be used to predict graphite’s time until failure.
    The Weibull distribution is already used to describe the probability of failure in ceramics and other porous materials like metal alloys.

    Khaykovich also speculated that the findings could contribute to our understanding of why materials densify and swell under irradiation.

    “There’s no quantitative model of densification that takes into account what’s happening at these tiny scales in graphite,” Khaykovich says. “Graphite irradiation densification reminds me of sand or sugar, where when you crush big pieces into smaller grains, they densify. For nuclear graphite, the crushing force is the energy that neutrons bring in, causing large pores to get filled with smaller, crushed pieces. But more energy and agitation create still more pores, and so graphite swells again. It’s not a perfect analogy, but I believe analogies bring progress for understanding these materials.”

    The researchers describe the paper as an important step toward informing graphite production and use in nuclear reactors of the future.

    “Graphite has been studied for a very long time, and we’ve developed a lot of strong intuitions about how it will respond in different environments, but when you’re building a nuclear reactor, details matter,” Khaykovich says. “People want numbers. They need to know how much thermal conductivity will change, how much cracking and volume change will happen. If components are changing volume, at some point you need to take that into account.”

    This work was supported, in part, by the U.S. Department of Energy.
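    The Weibull approach mentioned above can be made concrete in a few lines of code. This is a minimal sketch of the two-parameter Weibull failure model; the scale and shape values are invented for illustration, not measured graphite data:

```python
import math

def weibull_failure_probability(stress, scale, shape):
    """Cumulative probability of failure at a given applied stress under a
    two-parameter Weibull model: P = 1 - exp(-(stress/scale)**shape).

    scale: characteristic stress at which about 63.2% of samples fail.
    shape: Weibull modulus; a higher value means less scatter in strength.
    """
    return 1.0 - math.exp(-((stress / scale) ** shape))

# Hypothetical values for illustration only (not measured graphite data).
for stress in (15.0, 30.0, 45.0):
    p = weibull_failure_probability(stress, scale=30.0, shape=10.0)
    print(f"stress = {stress:5.1f}  ->  failure probability = {p:.4f}")
```

    At the characteristic stress the failure probability is always 1 − 1/e ≈ 0.632; fitting the scale and shape parameters to measured strength data, and tracking how they shift with irradiation dose, is the kind of statistical treatment the researchers suggest could reduce the number of samples that must be broken.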

  • Theory-guided strategy expands the scope of measurable quantum interactions

    A new theory-guided framework could help scientists probe the properties of new semiconductors for next-generation microelectronic devices, or discover materials that boost the performance of quantum computers.

    Research to develop new or better materials typically involves investigating properties that can be reliably measured with existing lab equipment, but this represents just a fraction of the properties that scientists could probe in principle. Some properties remain effectively “invisible” because they are too difficult to capture directly with existing methods.

    Take electron-phonon interaction — this property plays a critical role in a material’s electrical, thermal, optical, and superconducting properties, but directly capturing it using existing techniques is notoriously challenging.

    Now, MIT researchers have proposed a theoretically justified approach that could turn this challenge into an opportunity. Their method reinterprets an often-overlooked interference effect in neutron scattering as a potential direct probe of electron-phonon coupling strength.

    The scattering process creates two interaction effects in the material.
    The researchers show that, by deliberately designing their experiment to leverage the interference between the two interactions, they can capture the strength of a material’s electron-phonon interaction.

    The researchers’ theory-informed methodology could be used to shape the design of future experiments, opening the door to measuring quantities that were previously out of reach.

    “Rather than discovering new spectroscopy techniques by pure accident, we can use theory to justify and inform the design of our experiments and our physical equipment,” says Mingda Li, the Class of 1947 Career Development Professor and an associate professor of nuclear science and engineering, and senior author of a paper on this experimental method.

    Li is joined on the paper by co-lead authors Chuliang Fu, an MIT postdoc; Phum Siriviboon and Artittaya Boonkird, both MIT graduate students; as well as others at MIT, the National Institute of Standards and Technology, the University of California at Riverside, Michigan State University, and Oak Ridge National Laboratory. The research appears this week in Materials Today Physics.

    Investigating interference

    Neutron scattering is a powerful measurement technique that involves aiming a beam of neutrons at a material and studying how the neutrons are scattered after they strike it. The method is ideal for measuring a material’s atomic structure and magnetic properties.

    When neutrons collide with the material sample, they interact with it through two different mechanisms, creating a nuclear interaction and a magnetic interaction. These interactions can interfere with each other.

    “The scientific community has known about this interference effect for a long time, but researchers tend to view it as a complication that can obscure measurement signals.
    So it hasn’t received much focused attention,” Fu says.

    The team and their collaborators took a conceptual “leap of faith” and decided to explore this oft-overlooked interference effect more deeply. They flipped the traditional materials research approach on its head by starting with a multifaceted theoretical analysis, exploring what happens inside a material when the nuclear interaction and magnetic interaction interfere with each other. Their analysis revealed that this interference pattern is directly proportional to the strength of the material’s electron-phonon interaction.

    “This makes the interference effect a probe we can use to detect this interaction,” explains Siriviboon.

    Electron-phonon interactions play a role in a wide range of material properties. They affect how heat flows through a material, impact a material’s ability to absorb and emit light, and can even lead to superconductivity. But the complexity of these interactions makes them hard to directly measure using existing experimental techniques.
    Instead, researchers often rely on less precise, indirect methods to capture electron-phonon interactions. However, leveraging this interference effect enables direct measurement of the electron-phonon interaction, a major advantage over other approaches.

    “Being able to directly measure the electron-phonon interaction opens the door to many new possibilities,” says Boonkird.

    Rethinking materials research

    Based on their theoretical insights, the researchers designed an experimental setup to demonstrate their approach. Since the available equipment wasn’t powerful enough for this type of neutron scattering experiment, they were only able to capture a weak electron-phonon interaction signal — but the results were clear enough to support their theory.

    “These results justify the need for a new facility where the equipment might be 100 to 1,000 times more powerful, enabling scientists to clearly resolve the signal and measure the interaction,” adds Landry.

    With improved neutron scattering facilities, like those proposed for the upcoming Second Target Station at Oak Ridge National Laboratory, this experimental method could be an effective technique for measuring many crucial material properties. For instance, by helping scientists identify and harness better semiconductors, this approach could enable more energy-efficient appliances, faster wireless communication devices, and more reliable medical equipment like pacemakers and MRI scanners.
    Ultimately, the team sees this work as a broader message about the need to rethink the materials research process.

    “Using theoretical insights to design experimental setups in advance can help us redefine the properties we can measure,” Fu says.

    To that end, the team and their collaborators are currently exploring other types of interactions they could leverage to investigate additional material properties.

    “This is a very interesting paper,” says Jon Taylor, director of the neutron scattering division at Oak Ridge National Laboratory, who was not involved with this research. “It would be interesting to have a neutron scattering method that is directly sensitive to charge lattice interactions or more generally electronic effects that were not just magnetic moments. It seems that such an effect is expectedly rather small, so facilities like STS could really help develop that fundamental understanding of the interaction and also leverage such effects routinely for research.”

    This work is funded, in part, by the U.S. Department of Energy and the National Science Foundation.
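    The value of the interference term described above can be seen in a toy scattering calculation. Treating the nuclear and magnetic channels as two amplitudes that add coherently, the measured intensity contains a cross term that is linear in the weak amplitude, so it is much larger than the weak channel's own intensity. The numbers below are arbitrary, chosen purely to illustrate the arithmetic, and are not values from the paper:

```python
def cross_term(total_intensity, nuclear_intensity, magnetic_intensity):
    """Recover the interference (cross) term 2*A_n*A_m by subtracting the two
    single-channel intensities from the coherent total |A_n + A_m|**2."""
    return total_intensity - nuclear_intensity - magnetic_intensity

A_n = 5.0   # strong nuclear amplitude (arbitrary units)
A_m = 0.2   # weak amplitude carrying the signal of interest

total = (A_n + A_m) ** 2            # what a coherent measurement sees
interference = cross_term(total, A_n ** 2, A_m ** 2)

# The cross term (2 * 5.0 * 0.2 = 2.0) is 50x larger than the weak channel's
# own intensity (0.2**2 = 0.04): the strong channel amplifies the weak one.
print(interference, A_m ** 2)
```

    This amplification of a weak channel by a strong one is the general idea behind using interference as a probe; the paper's contribution is showing theoretically that the relevant cross term tracks the electron-phonon coupling strength.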

  • Model predicts long-term effects of nuclear waste on underground disposal systems

    As countries across the world experience a resurgence in nuclear energy projects, the questions of where and how to dispose of nuclear waste remain as politically fraught as ever. The United States, for instance, has indefinitely stalled its only long-term underground nuclear waste repository. Scientists are using both modeling and experimental methods to study the effects of underground nuclear waste disposal and, ultimately, they hope, build public trust in the decision-making process.

    New research from scientists at MIT, Lawrence Berkeley National Lab, and the University of Orléans makes progress in that direction. The study shows that simulations of underground nuclear waste interactions, generated by new, high-performance-computing software, aligned well with experimental results from a research facility in Switzerland.

    The study, which was co-authored by MIT PhD student Dauren Sarsenbayev and Assistant Professor Haruko Wainwright, along with Christophe Tournassat and Carl Steefel, appears in the journal PNAS.

    “These powerful new computational tools, coupled with real-world experiments like those at the Mont Terri research site in Switzerland, help us understand how radionuclides will migrate in coupled underground systems,” says Sarsenbayev, who is first author of the new study.

    The authors hope the research will improve confidence among policymakers and the public in the long-term safety of underground nuclear waste disposal.

    “This research — coupling both computation and experiments — is important to improve our confidence in waste disposal safety assessments,” says Wainwright. “With nuclear energy re-emerging as a key source for tackling climate change and ensuring energy security, it is critical to validate disposal pathways.”

    Comparing simulations with experiments

    Disposing of nuclear waste in deep underground geological formations is currently considered the safest long-term solution for managing high-level radioactive waste.
    As such, much effort has been put into studying the migration behaviors of radionuclides from nuclear waste within various natural and engineered geological materials.

    Since its founding in 1996, the Mont Terri research site in northern Switzerland has served as an important test bed for an international consortium of researchers interested in studying materials like Opalinus clay — a thick, water-tight claystone abundant in the tunneled areas of the mountain.

    “It is widely regarded as one of the most valuable real-world experiment sites because it provides us with decades of datasets around the interactions of cement and clay, and those are the key materials proposed to be used by countries across the world for engineered barrier systems and geological repositories for nuclear waste,” explains Sarsenbayev.

    For their study, Sarsenbayev and Wainwright collaborated with co-authors Tournassat and Steefel, who have developed high-performance computing software to improve modeling of interactions between the nuclear waste and both engineered and natural materials.

    To date, several challenges have limited scientists’ understanding of how nuclear waste reacts with cement-clay barriers. For one thing, the barriers are made up of irregularly mixed materials deep underground. Additionally, the existing class of models commonly used to simulate radionuclide interactions with cement-clay does not take into account electrostatic effects associated with the negatively charged clay minerals in the barriers.

    Tournassat and Steefel’s new software accounts for electrostatic effects, making it the only one that can simulate those interactions in three-dimensional space. The software, called CrunchODiTi, was developed from established software known as CrunchFlow and was most recently updated this year.
    It is designed to run in parallel across many high-performance computers at once.

    For the study, the researchers looked at a 13-year-old experiment, with an initial focus on cement-clay rock interactions. Within the last several years, a mix of both negatively and positively charged ions were added to the borehole located near the center of the cement emplaced in the formation. The researchers focused on a 1-centimeter-thick zone between the radionuclides and cement-clay referred to as the “skin.” They compared their experimental results to the software simulation, finding the two datasets aligned.

    “The results are quite significant because previously, these models wouldn’t fit field data very well,” Sarsenbayev says. “It’s interesting how fine-scale phenomena at the ‘skin’ between cement and clay, the physical and chemical properties of which change over time, could be used to reconcile the experimental and simulation data.”

    The experimental results showed the model successfully accounted for electrostatic effects associated with the clay-rich formation and the interaction between materials in Mont Terri over time.

    “This is all driven by decades of work to understand what happens at these interfaces,” Sarsenbayev says. “It’s been hypothesized that there is mineral precipitation and porosity clogging at this interface, and our results strongly suggest that.”

    “This application requires millions of degrees of freedom because these multibarrier systems require high resolution and a lot of computational power,” Sarsenbayev says. “This software is really ideal for the Mont Terri experiment.”

    Assessing waste disposal plans

    The new model could now replace older models that have been used to conduct safety and performance assessments of underground geological repositories.

    “If the U.S. eventually decides to dispose of nuclear waste in a geological repository, then these models could dictate the most appropriate materials to use,” Sarsenbayev says.
    “For instance, right now clay is considered an appropriate storage material, but salt formations are another potential medium that could be used. These models allow us to see the fate of radionuclides over millennia. We can use them to understand interactions at timespans that vary from months to years to many millions of years.”

    Sarsenbayev says the model is reasonably accessible to other researchers and that future efforts may focus on the use of machine learning to develop less computationally expensive surrogate models.

    Further data from the experiment will be available later this month. The team plans to compare those data to additional simulations.

    “Our collaborators will basically get this block of cement and clay, and they’ll be able to run experiments to determine the exact thickness of the skin along with all of the minerals and processes present at this interface,” Sarsenbayev says. “It’s a huge project and it takes time, but we wanted to share initial data and this software as soon as we could.”

    For now, the researchers hope their study leads to a long-term solution for storing nuclear waste that policymakers and the public can support.

    “This is an interdisciplinary study that includes real-world experiments showing we’re able to predict radionuclides’ fate in the subsurface,” Sarsenbayev says. “The motto of MIT’s Department of Nuclear Science and Engineering is ‘Science. Systems. Society.’ I think this merges all three domains.”
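    As a rough illustration of what radionuclide-transport simulations compute, here is a toy one-dimensional diffusion model of a tracer migrating from a fixed source into an initially clean barrier. It ignores the electrostatic and chemical effects that CrunchODiTi is built to capture, and every number is invented for the example:

```python
def diffuse_1d(conc, diffusivity, dx, dt, steps):
    """March the diffusion equation dc/dt = D * d2c/dx2 forward with an
    explicit finite-difference scheme; stable only when D*dt/dx**2 <= 0.5.
    Both end values are held fixed (Dirichlet boundaries)."""
    r = diffusivity * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this step size"
    c = list(conc)
    for _ in range(steps):
        c = [c[0]] + [c[i] + r * (c[i + 1] - 2.0 * c[i] + c[i - 1])
                      for i in range(1, len(c) - 1)] + [c[-1]]
    return c

# Toy setup: constant concentration 1.0 at the waste side, clean clay beyond.
# D ~ 1e-11 m^2/s is a plausible order of magnitude for diffusion in clay,
# but the grid and time step are chosen only to keep the demo tiny.
profile = diffuse_1d([1.0] + [0.0] * 49, diffusivity=1e-11,
                     dx=1e-3, dt=1e4, steps=200)
```

    The concentration front decays monotonically away from the source. Production codes solve the same mass balance in three dimensions, with chemistry and electrostatics coupled in, which is where the millions of degrees of freedom come from.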

  • New facility to accelerate materials solutions for fusion energy

    Fusion energy has the potential to enable the energy transition from fossil fuels, enhance domestic energy security, and power artificial intelligence. Private companies have already invested more than $8 billion to develop commercial fusion and seize the opportunities it offers. An urgent challenge, however, is the discovery and evaluation of cost-effective materials that can withstand extreme conditions for extended periods, including 150-million-degree plasmas and intense particle bombardment.

    To meet this challenge, MIT’s Plasma Science and Fusion Center (PSFC) has launched the Schmidt Laboratory for Materials in Nuclear Technologies, or LMNT (pronounced “element”). Backed by a philanthropic consortium led by Eric and Wendy Schmidt, LMNT is designed to speed up the discovery and selection of materials for a variety of fusion power plant components. By drawing on MIT’s expertise in fusion and materials science, repurposing existing research infrastructure, and tapping into its close collaborations with leading private fusion companies, the PSFC aims to drive rapid progress in the materials that are necessary for commercializing fusion energy. LMNT will also help develop and assess materials for nuclear power plants, next-generation particle physics experiments, and other science and industry applications.

    Zachary Hartwig, head of LMNT and an associate professor in the Department of Nuclear Science and Engineering (NSE), says, “We need technologies today that will rapidly develop and test materials to support the commercialization of fusion energy.
    LMNT’s mission includes discovery science but seeks to go further, ultimately helping select the materials that will be used to build fusion power plants in the coming years.”

    A different approach to fusion materials

    For decades, researchers have worked to understand how materials behave under fusion conditions using methods like exposing test specimens to low-energy particle beams or placing them in the core of nuclear fission reactors. These approaches, however, have significant limitations. Low-energy particle beams irradiate only the thinnest surface layer of materials, while fission reactor irradiation doesn’t accurately replicate the mechanism by which fusion damages materials. Fission irradiation is also an expensive, multiyear process that requires specialized facilities.

    To overcome these obstacles, researchers at MIT and peer institutions are exploring the use of energetic beams of protons to simulate the damage materials undergo in fusion environments. Proton beams can be tuned to match the damage expected in fusion power plants, and protons penetrate deep enough into test samples to provide insights into how exposure can affect structural integrity. They also offer the advantage of speed: first, intense proton beams can rapidly damage dozens of material samples at once, allowing researchers to test them in days rather than years. Second, high-energy proton beams can be generated with a type of particle accelerator known as a cyclotron, commonly used in the health-care industry. As a result, LMNT will be built around a cost-effective, off-the-shelf cyclotron that is easy to obtain and highly reliable.

    LMNT will surround its cyclotron with four experimental areas dedicated to materials science research. The lab is taking shape inside the large shielded concrete vault at the PSFC that once housed the Alcator C-Mod tokamak, a record-setting fusion experiment that ran at the PSFC from 1992 to 2016.
    By repurposing C-Mod’s former space, the center is skipping the need for extensive, costly new construction and accelerating the research timeline significantly. The PSFC’s veteran team, which has led major projects like the Alcator tokamaks and advanced high-temperature superconducting magnet development, is overseeing the facility’s design, construction, and operation, ensuring LMNT moves quickly from concept to reality. The PSFC expects to receive the cyclotron by the end of 2025, with experimental operations starting in early 2026.

    “LMNT is the start of a new era of fusion research at MIT, one where we seek to tackle the most complex fusion technology challenges on timescales commensurate with the urgency of the problem we face: the energy transition,” says Nuno Loureiro, director of the PSFC, a professor of nuclear science and engineering, and the Herman Feshbach Professor of Physics. “It’s ambitious, bold, and critical — and that’s exactly why we do it.”

    “What’s exciting about this project is that it aligns the resources we have today — substantial research infrastructure, off-the-shelf technologies, and MIT expertise — to address the key resource we lack in tackling climate change: time. Using the Schmidt Laboratory for Materials in Nuclear Technologies, MIT researchers advancing fusion energy, nuclear power, and other technologies critical to the future of energy will be able to act now and move fast,” says Elsa Olivetti, the Jerry McAfee Professor in Engineering and a mission director of MIT’s Climate Project.

    In addition to advancing research, LMNT will provide a platform for educating and training students in the increasingly important areas of fusion technology. LMNT’s location on MIT’s main campus gives students the opportunity to lead research projects and help manage facility operations.
    It also continues the hands-on approach to education that has defined the PSFC, reinforcing that direct experience in large-scale research is the best way to train fusion scientists and engineers for the expanding fusion industry workforce.

    Benoit Forget, head of NSE and the Korea Electric Power Professor of Nuclear Engineering, notes, “This new laboratory will give nuclear science and engineering students access to a unique research capability that will help shape the future of both fusion and fission energy.”

    Accelerating progress on big challenges

    Philanthropic support has helped LMNT leverage existing infrastructure and expertise to move from concept to facility in just one-and-a-half years — a fast timeline for establishing a major research project.

    “I’m just as excited about this research model as I am about the materials science. It shows how focused philanthropy and MIT’s strengths can come together to build something that’s transformational — a major new facility that helps researchers from the public and private sectors move fast on fusion materials,” emphasizes Hartwig.

    Through this approach, the PSFC is executing a major public-private partnership in fusion energy, realizing a research model that the U.S. fusion community has only recently started to explore, and demonstrating the crucial role that universities can play in accelerating the materials and technology required for fusion energy.

    “Universities have long been at the forefront of tackling society’s biggest challenges, and the race to identify new forms of energy and address climate change demands bold, high-risk, high-reward approaches,” says Ian Waitz, MIT’s vice president for research. “LMNT is helping turn fusion energy from a long-term ambition into a near-term reality.”
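    The speed advantage of proton irradiation can be sketched with a back-of-the-envelope damage estimate. Radiation damage is commonly tallied in displacements per atom (dpa), roughly flux times displacement cross section times time. The flux and cross-section values below are invented placeholders, chosen only to show the form of the estimate, not LMNT design figures:

```python
BARN = 1e-24  # cm^2

def dpa(flux_per_cm2_s, displacement_cross_section_barns, seconds):
    """Rough displacements-per-atom tally:
    dpa = flux * sigma_d * time (consistent cgs units)."""
    return flux_per_cm2_s * displacement_cross_section_barns * BARN * seconds

DAY = 86400.0
# Hypothetical numbers for illustration only:
beam_dpa_per_day = dpa(5e14, 2000.0, DAY)    # intense proton beam
reactor_dpa_per_day = dpa(5e13, 200.0, DAY)  # slower in-reactor exposure

# In this toy comparison the beam accumulates damage ~100x faster,
# which is the kind of ratio that shrinks campaigns from years to days.
print(beam_dpa_per_day, reactor_dpa_per_day)
```

    Real damage calculations are far more involved (energy-dependent cross sections, recoil spectra, depth profiles), but the linear scaling with flux is why an intense, always-available cyclotron beam shortens the testing cycle so dramatically.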

  • MIT Maritime Consortium sets sail

    Around 11 billion tons of goods, or about 1.5 tons per person worldwide, are transported by sea each year, representing about 90 percent of global trade by volume. Internationally, the merchant shipping fleet numbers around 110,000 vessels. These ships, and the ports that service them, are significant contributors to the local and global economy — and they’re significant contributors to greenhouse gas emissions.

    A new consortium, formalized in a signing ceremony at MIT last week, aims to address climate-harming emissions in the maritime shipping industry, while supporting efforts for environmentally friendly operation in compliance with the decarbonization goals set by the International Maritime Organization.

    “This is a timely collaboration with key stakeholders from the maritime industry with a very bold and interdisciplinary research agenda that will establish new technologies and evidence-based standards,” says Themis Sapsis, the William Koch Professor of Marine Technology at MIT and the director of MIT’s Center for Ocean Engineering.
    “It aims to bring the best from MIT in key areas for commercial shipping, such as nuclear technology for commercial settings, autonomous operation and AI methods, improved hydrodynamics and ship design, cybersecurity, and manufacturing.”

    Co-led by Sapsis and Fotini Christia, the Ford International Professor of the Social Sciences, director of the Institute for Data, Systems, and Society (IDSS), and director of the MIT Sociotechnical Systems Research Center, the newly launched MIT Maritime Consortium (MC) brings together MIT collaborators from across campus — including the Center for Ocean Engineering, which is housed in the Department of Mechanical Engineering; IDSS, which is housed in the MIT Schwarzman College of Computing; the departments of Nuclear Science and Engineering and Civil and Environmental Engineering; MIT Sea Grant; and others — with a national and international community of industry experts.

    The Maritime Consortium’s founding members are the American Bureau of Shipping (ABS), Capital Clean Energy Carriers Corp., and HD Korea Shipbuilding and Offshore Engineering. Innovation members are Foresight-Group, Navios Maritime Partners L.P., Singapore Maritime Institute, and Dorian LPG.

    “The challenges the maritime industry faces are challenges that no individual company or organization can address alone,” says Christia. “The solution involves almost every discipline from the School of Engineering, as well as AI and data-driven algorithms, and policy and regulation — it’s a true MIT problem.”

    Researchers will explore new designs for nuclear systems consistent with the techno-economic needs and constraints of commercial shipping, the economic and environmental feasibility of alternative fuels, new data-driven algorithms and rigorous evaluation criteria for autonomous platforms in the maritime space, cyber-physical situational awareness and anomaly detection, and 3D printing technologies for onboard manufacturing.
Collaborators will also advise on research priorities toward evidence-based standards related to MIT presidential priorities around climate, sustainability, and AI.

MIT has been a leading center of ship research and design for over a century, and is widely recognized for contributions to hydrodynamics, ship structural mechanics and dynamics, propeller design, and overall ship design, as well as for its unique educational program for U.S. Navy officers, the Naval Construction and Engineering Program. Research today is at the forefront of ocean science and engineering, with significant efforts in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The consortium’s academic home at MIT also opens the door to cross-departmental collaboration across the Institute.

The MC will launch multiple research projects designed to tackle challenges from a variety of angles, all united by cutting-edge data analysis and computation techniques. Collaborators will research new designs and methods that improve efficiency and reduce greenhouse gas emissions, explore the feasibility of alternative fuels, and advance data-driven decision-making, manufacturing and materials, hydrodynamic performance, and cybersecurity.

“This consortium brings a powerful collection of significant companies that, together, has the potential to be a global shipping shaper in itself,” says Christopher J. Wiernicki SM ’85, chair and chief executive officer of ABS. “The strength and uniqueness of this consortium is the members, which are all world-class organizations and real difference makers. The ability to harness the members’ experience and know-how, along with MIT’s technology reach, creates real jet fuel to drive progress,” Wiernicki says. “As well as researching key barriers, bottlenecks, and knowledge gaps in the emissions challenge, the consortium looks to enable development of the novel technology and policy innovation that will be key.
Long term, the consortium hopes to provide the gravity we will need to bend the curve.”
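The cargo figures quoted at the top of the story can be sanity-checked with simple arithmetic; the function below is purely illustrative, using only the numbers from the article:

```python
def implied_population(total_tons: float, tons_per_person: float) -> float:
    """Back out the world population implied by total seaborne cargo
    and the per-capita share quoted in the article."""
    return total_tons / tons_per_person

# ~11 billion tons per year at ~1.5 tons per person
people = implied_population(11e9, 1.5)
print(f"Implied population: {people:.2e}")  # roughly 7.3e9, consistent with world population
```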

  • in

    Developing materials for stellar performance in fusion power plants

    When Zoe Fisher was in fourth grade, her art teacher asked her to draw her vision of a dream job on paper. At the time, those goals changed like the flavor of the week in an ice cream shop — “zookeeper” featured prominently for a while — but Zoe immediately knew what she wanted to put down: a mad scientist.

When Fisher stumbled upon the drawing in her parents’ Chicago home recently, it felt serendipitous because, by all measures, she has realized that childhood dream. The second-year doctoral student at MIT’s Department of Nuclear Science and Engineering (NSE) is studying materials for fusion power plants at the Plasma Science and Fusion Center (PSFC) under the advisement of Michael Short, associate professor at NSE. Dennis Whyte, Hitachi America Professor of Engineering at NSE, serves as co-advisor.

On track to an MIT education

Growing up in Chicago, Fisher had heard her parents remarking on her reasoning abilities. When she was barely a preschooler she argued that she couldn’t have been found in a purple speckled egg, as her parents claimed.

Fisher didn’t put together just how much she had gravitated toward science until a high school physics teacher encouraged her to apply to MIT. Passionate about both the arts and sciences, she initially worried that pursuing science would be very rigid, without room for creativity. But she knows now that exploring solutions to problems requires plenty of creative thinking.

It was a visit to MIT through the Weekend Immersion in Science and Engineering (WISE) that truly opened her eyes to the potential of an MIT education. “It just seemed like the undergraduate experience here is where you can be very unapologetically yourself. There’s no fronting something you don’t want to be like. There’s so much authenticity compared to most other colleges I looked at,” Fisher says. Once she was admitted, Campus Preview Weekend confirmed that she belonged.
“We got to be silly and weird — a version of the Mafia game was a hit — and I was like, ‘These are my people,’” Fisher laughs.

Pursuing fusion at NSE

Before she officially started as a first-year in 2018, Fisher enrolled in the Freshman Pre-Orientation Program (FPOP), which begins a week before orientation. Each FPOP zooms into one field. “I’d applied to the nuclear one simply because it sounded cool and I didn’t know anything about it,” Fisher says. She was intrigued right away. “They really got me with that ‘star in a bottle’ line,” she laughs. (The quest for commercial fusion is to create the energy equivalent of a star in a bottle.)

Excited by a talk by Zachary Hartwig, Robert N. Noyce Career Development Professor at NSE, Fisher asked if she could work on fusion as an undergraduate through an Undergraduate Research Opportunities Program (UROP) project. She started with modeling solders for power plants and was hooked. When Fisher requested more experimental work, Hartwig put her in touch with Research Scientist David Fischer at the PSFC. Fisher moved on to explore superconductors, which eventually morphed into research for her master’s thesis.

For her doctoral research, Fisher is extending her master’s work to explore defects in ceramics, specifically in alumina (aluminum oxide). Sapphire coatings are the single-crystal form of alumina, an insulator being explored for use in fusion power plants. “I eventually want to figure out what types of charge defects form in ceramics during radiation damage so we can ultimately engineer radiation-resistant sapphire,” Fisher says.

When you introduce a material into a fusion power plant, stray high-energy neutrons born from the plasma can collide with it and fundamentally reorder the lattice, which is likely to change a range of thermal, electrical, and structural properties.
“Think of a scaffolding outside a building, with each one of those joints as a different atom that holds your material in place. If you go in and you pull a joint out, there’s a chance that you pulled out a joint that wasn’t structurally sound, in which case everything would be fine. But there’s also a chance that you pull a joint out and everything alters. And [such unpredictability] is a problem,” Fisher says. “We need to be able to account for exactly how these neutrons are going to alter the lattice property,” she says; it’s one of the topics her research explores.

The studies, in turn, can function as a jumping-off point for irradiating superconductors. The goals are twofold: “I want to figure out how I can make an industry-usable ceramic you can use to insulate the inside of a fusion power plant, and then also figure out if I can take this information that I’m getting with ceramics and make it superconductor-relevant,” Fisher says. “Superconductors are the electromagnets we will use to contain the plasma inside fusion power plants. However, they prove pretty difficult to study. Since they are also ceramic, you can draw a lot of parallels between alumina and yttrium barium copper oxide (YBCO), the specific superconductor we use,” she adds. Fisher is also excited about the many experiments she performs using a particle accelerator, one of which involves measuring exactly how surface thermal properties change during radiation.

Sailing new paths

It’s not just her research that Fisher loves. As an undergrad, and during her master’s, she was on the varsity sailing team. “I worked my way into sailing with literal Olympians; I did not see that coming,” she says. Fisher participates in Chicago’s Race to Mackinac and the Melges 15 Series every chance she gets. Of all the types of boats she has sailed, she prefers dinghy sailing the most.
“It’s more physical, you have to throw yourself around a lot, and there’s this immediate cause and effect, which I like,” Fisher says. She also teaches sailing lessons in the summer at MIT’s Sailing Pavilion — you can find her on a small motorboat, issuing orders through a speaker.

Teaching has figured prominently throughout Fisher’s time at MIT. Through MISTI, Fisher taught high school classes in Germany and, in her senior year, a radiation and materials class in Armenia. She was delighted by the food and culture in Armenia and by how excited people were to learn new ideas. Her love of teaching continues, as she has reached out to high schools in the Boston area. “I like talking to groups and getting them excited about fusion, or even maybe just the concept of attending graduate school,” Fisher says, adding that teaching the ropes of an experiment one-on-one is “one of the most rewarding things.”

She also learned the value of resilience and quick thinking on various other MISTI trips. Despite her love of travel, Fisher has had a few harrowing experiences with tough situations and plans falling through at the last minute. That’s when she tells herself, “Well, the only thing that you’re gonna do is you’re gonna keep doing what you wanted to do.”

That eyes-on-the-prize focus has stood Fisher in good stead, and continues to serve her well in her research today.
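Fisher's scaffolding analogy, in which stray neutrons knock random "joints" (atoms) out of a lattice, can be sketched as a toy model. Everything below (the square lattice, the vacancy fractions, counting intact nearest-neighbor bonds as a stand-in for material properties) is invented for illustration and is not taken from her research:

```python
import random

def surviving_bonds(n: int, vacancy_fraction: float, seed: int = 0) -> float:
    """Toy model of radiation damage: atoms sit on an n x n square lattice,
    and a random fraction are knocked out ('joints pulled from the scaffolding').
    Returns the fraction of nearest-neighbor bonds that still have both atoms."""
    rng = random.Random(seed)
    present = [[rng.random() >= vacancy_fraction for _ in range(n)] for _ in range(n)]
    total = intact = 0
    for i in range(n):
        for j in range(n):
            for di, dj in ((1, 0), (0, 1)):  # bonds to the right and down neighbors
                if i + di < n and j + dj < n:
                    total += 1
                    intact += present[i][j] and present[i + di][j + dj]
    return intact / total

for f in (0.0, 0.05, 0.2):
    print(f"{f:.0%} vacancies -> {surviving_bonds(50, f):.2f} of bonds intact")
```

Because each bond needs both of its atoms, a small vacancy fraction f already degrades roughly (1 - f)² of the bonds, which is one way to see why even modest damage can shift bulk properties.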

  • in

    Will neutrons compromise the operation of superconducting magnets in a fusion plant?

    High-temperature superconducting magnets made from REBCO, an acronym for rare earth barium copper oxide, make it possible to create an intense magnetic field that can confine the extremely hot plasma needed for fusion reactions, which combine two hydrogen isotopes to form an atom of helium, releasing a neutron in the process.

But some early tests suggested that neutron irradiation inside a fusion power plant might instantaneously suppress the superconducting magnets’ ability to carry current without resistance (called the critical current), potentially causing a reduction in the fusion power output.

Now, a series of experiments has clearly demonstrated that this instantaneous effect of neutron bombardment, known as the “beam on effect,” should not be an issue during reactor operation, thus clearing the path for projects such as the ARC fusion system being developed by MIT spinoff company Commonwealth Fusion Systems.

The findings were reported in the journal Superconductor Science and Technology, in a paper by MIT graduate student Alexis Devitre and professors Michael Short, Dennis Whyte, and Zachary Hartwig, along with six others.

“Nobody really knew if it would be a concern,” Short explains. He recalls looking at these early findings: “Our group thought, man, somebody should really look into this. But now, luckily, the result of the paper is: It’s conclusively not a concern.”

The possible issue first arose during some initial tests of the REBCO tapes planned for use in the ARC system. “I can remember the night when we first tried the experiment,” Devitre recalls. “We were all down in the accelerator lab, in the basement.
It was a big shocker because suddenly the measurement we were looking at, the critical current, just went down by 30 percent” when it was measured under radiation conditions (approximating those of the fusion system), as opposed to when it was only measured after irradiation.

Before that, researchers had irradiated the REBCO tapes and then tested them afterward, Short says. “We had the idea to measure while irradiating, the way it would be when the reactor’s really on,” he says. “And then we observed this giant difference, and we thought, oh, this is a big deal. It’s a margin you’d want to know about if you’re designing a reactor.”

After a series of carefully calibrated tests, it turned out the drop in critical current was not caused by the irradiation at all, but was just an effect of temperature changes brought on by the proton beam used for the irradiation experiments. This is something that would not be a factor in an actual fusion plant, Short says.

“We repeated experiments ‘oh so many times’ and collected about a thousand data points,” Devitre says. They then went through a detailed statistical analysis to show that the effects were exactly the same under conditions where the material was just heated as when it was both heated and irradiated.

This excluded the possibility that the instantaneous suppression of the critical current had anything to do with the “beam on effect,” at least within the sensitivity of their tests. “Our experiments are quite sensitive,” Short says. “We can never say there’s no effect, but we can say that there’s no important effect.”

Carrying out these tests required building a special facility for the purpose. Only a few such facilities exist in the world. “They’re all custom builds, and without this, we wouldn’t have been able to find out the answer,” he says.

The finding that this specific issue is not a concern for the design of fusion plants “illustrates the power of negative results.
If you can conclusively prove that something doesn’t happen, you can stop scientists from wasting their time hunting for something that doesn’t exist.” And in this case, Short says, “You can tell the fusion companies: ‘You might have thought this effect would be real, but we’ve proven that it’s not, and you can ignore it in your designs.’ So that’s one more risk retired.”

That could be a relief not only to Commonwealth Fusion Systems but also to several other companies pursuing fusion plant designs, Devitre says. “There’s a bunch. And it’s not just fusion companies,” he adds. There remains the important issue of longer-term degradation of the REBCO that would occur over years or decades, which the group is presently investigating. Others are pursuing the use of these magnets for satellite thrusters and particle accelerators to study subatomic physics, where the effect could also have been a concern. For all these uses, “this is now one less thing to be concerned about,” Devitre says.

The research team also included David Fischer, Kevin Woller, Maxwell Rae, Lauryn Kortman, and Zoe Fisher at MIT, and N. Riva at Proxima Fusion in Germany. This research was supported by Eni S.p.A. through the MIT Energy Initiative.
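The comparison the team describes, checking whether "heated only" and "heated plus irradiated" measurements differ, can be sketched with a simple two-sample test. The numbers below are hypothetical, and Welch's t-statistic is just one reasonable choice; the paper's actual thousand-point analysis is more detailed:

```python
import math
import statistics as st

def welch_t(a, b):
    """Welch's t-statistic for two independent samples: a quick check of
    whether two sets of measurements have distinguishable means."""
    va, vb = st.variance(a), st.variance(b)
    return (st.mean(a) - st.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

# Hypothetical critical-current measurements (arbitrary units), not real data:
heated_only           = [69.8, 70.1, 70.3, 69.9, 70.0, 70.2]
heated_and_irradiated = [70.0, 69.9, 70.2, 70.1, 69.8, 70.3]

t = welch_t(heated_only, heated_and_irradiated)
print(f"t = {t:.2f}")  # a |t| near zero means no evidence of a beam-on effect
```

With identical means, as in this made-up example, t comes out to zero; in a real analysis one would also report a p-value and the sensitivity floor ("we can never say there's no effect") rather than a bare statistic.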

  • in

    Unlocking the secrets of fusion’s core with AI-enhanced simulations

    Creating and sustaining fusion reactions — essentially recreating star-like conditions on Earth — is extremely difficult, and Nathan Howard PhD ’12, a principal research scientist at the MIT Plasma Science and Fusion Center (PSFC), thinks it’s one of the most fascinating scientific challenges of our time. “Both the science and the overall promise of fusion as a clean energy source are really interesting. That motivated me to come to grad school [at MIT] and work at the PSFC,” he says.

Howard is a member of the Magnetic Fusion Experiments Integrated Modeling (MFE-IM) group at the PSFC. Along with MFE-IM group leader Pablo Rodriguez-Fernandez, Howard and the team use simulations and machine learning to predict how plasma will behave in a fusion device. The group’s research aims to forecast a given technology or configuration’s performance before it’s piloted in an actual fusion environment, allowing for smarter design choices. To ensure their accuracy, these models are continuously validated using data from previous experiments, keeping the simulations grounded in reality.

In a recent open-access paper titled “Prediction of Performance and Turbulence in ITER Burning Plasmas via Nonlinear Gyrokinetic Profile Prediction,” published in the January issue of Nuclear Fusion, Howard explains how he used high-resolution simulations of the swirling structures present in plasma, called turbulence, to confirm that the world’s largest experimental fusion device, currently under construction in Southern France, will perform as expected when switched on.
He also demonstrates how a different operating setup could produce nearly the same amount of energy output but with less energy input, a discovery that could positively affect the efficiency of fusion devices in general.

The biggest and best of what’s never been built

Forty years ago, the United States and six other member nations came together to build ITER (Latin for “the way”), a fusion device that, once operational, would yield 500 megawatts of fusion power, with a plasma able to generate 10 times more energy than it absorbs from external heating. The plasma setup designed to achieve these goals — the most ambitious of any fusion experiment — is called the ITER baseline scenario, and as fusion science and plasma physics have progressed, ways to achieve this plasma have been refined using increasingly powerful simulations like the modeling framework Howard used.

In his work to verify the baseline scenario, Howard used CGYRO, a computer code developed by his collaborators at General Atomics. CGYRO applies a complex plasma physics model to a set of defined fusion operating conditions. Although it is time-intensive, CGYRO generates very detailed simulations of how plasma behaves at different locations within a fusion device.

The comprehensive CGYRO simulations were then run through the PORTALS framework, a collection of tools originally developed at MIT by Rodriguez-Fernandez. “PORTALS takes the high-fidelity [CGYRO] runs and uses machine learning to build a quick model called a ‘surrogate’ that can mimic the results of the more complex runs, but much faster,” Rodriguez-Fernandez explains. “Only high-fidelity modeling tools like PORTALS give us a glimpse into the plasma core before it even forms.
This predict-first approach allows us to create more efficient plasmas in a device like ITER.”

After the first pass, the surrogates’ accuracy was checked against the high-fidelity runs, and if a surrogate wasn’t producing results in line with CGYRO’s, PORTALS was run again to refine the surrogate until it better mimicked CGYRO’s results. “The nice thing is, once you have built a well-trained [surrogate] model, you can use it to predict conditions that are different, with a very much reduced need for the full complex runs.” Once they were fully trained, the surrogates were used to explore how different combinations of inputs might affect ITER’s predicted performance and how it achieved the baseline scenario. Notably, the surrogate runs took a fraction of the time, and they could be used in conjunction with CGYRO to give it a boost and produce detailed results more quickly.

“Just dropped in to see what condition my condition was in”

Howard’s work with CGYRO, PORTALS, and surrogates examined a specific combination of operating conditions that had been predicted to achieve the baseline scenario. Those conditions included the magnetic field used, the methods used to control plasma shape, the external heating applied, and many other variables. Using 14 iterations of CGYRO, Howard was able to confirm that the current baseline scenario configuration could achieve 10 times more power output than input into the plasma. Howard says of the results, “The modeling we performed is maybe the highest fidelity possible at this time, and almost certainly the highest fidelity published.”

The 14 iterations of CGYRO used to confirm the plasma performance included running PORTALS to build surrogate models for the input parameters and then tying the surrogates to CGYRO to work more efficiently. It only took three additional iterations of CGYRO to explore an alternate scenario that predicted ITER could produce almost the same amount of energy with about half the input power.
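The gain figures quoted above follow from simple arithmetic: fusion gain Q is fusion power out divided by external heating power in, so 500 megawatts at Q = 10 implies roughly 50 megawatts of heating (the 50 MW figure is implied by the article's numbers, not stated in it):

```python
def fusion_gain(p_fusion_mw: float, p_heating_mw: float) -> float:
    """Fusion gain Q: fusion power produced per unit of external heating power."""
    return p_fusion_mw / p_heating_mw

print(fusion_gain(500, 50))  # 10.0, the ITER baseline target
print(fusion_gain(500, 25))  # 20.0: halving the input power doubles the gain
```

This is why the alternate scenario matters: nearly the same output at about half the input power roughly doubles Q.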
The surrogate-enhanced CGYRO model revealed that the temperature of the plasma core — and thus the fusion reactions — wasn’t overly affected by less power input; less power input equals more efficient operation. Howard’s results are also a reminder that there may be other ways to improve ITER’s performance; they just haven’t been discovered yet.

Howard reflects, “The fact that we can use the results of this modeling to influence the planning of experiments like ITER is exciting. For years, I’ve been saying that this was the goal of our research, and now that we actually do it — it’s an amazing arc, and really fulfilling.”
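The surrogate idea Rodriguez-Fernandez describes can be sketched in miniature: evaluate an expensive model at a handful of points, build a cheap stand-in from those evaluations, then query the stand-in freely. This toy uses linear interpolation in place of the machine-learned surrogate PORTALS actually builds, and the "expensive" function is invented:

```python
import math

def expensive_model(x: float) -> float:
    """Stand-in for a high-fidelity code like CGYRO: pretend each call is costly."""
    return math.sin(3 * x) + 0.5 * x

def build_surrogate(xs):
    """Tabulate the expensive model at a few training points and return a cheap
    function that interpolates linearly between them."""
    ys = [expensive_model(x) for x in xs]  # the only expensive calls

    def surrogate(x: float) -> float:
        for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
            if x0 <= x <= x1:
                t = (x - x0) / (x1 - x0)
                return y0 + t * (y1 - y0)
        raise ValueError("x outside training range")

    return surrogate

train = [i / 10 for i in range(11)]  # 11 expensive evaluations on [0, 1]
cheap = build_surrogate(train)

# The surrogate can now be queried many times essentially for free, and its
# error against the expensive model stays small between training points.
worst = max(abs(cheap(x) - expensive_model(x)) for x in [i / 100 for i in range(101)])
print(f"worst surrogate error: {worst:.3f}")
```

The refinement loop described in the article maps onto this sketch naturally: wherever the surrogate disagrees with the expensive model, add training points there and rebuild, so the costly code is only run where it is actually needed.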