More stories

  • Ensuring a durable transition

    To fend off the worst impacts of climate change, “we have to decarbonize, and do it even faster,” said William H. Green, director of the MIT Energy Initiative (MITEI) and Hoyt C. Hottel Professor in the MIT Department of Chemical Engineering, at MITEI’s Annual Research Conference. “But how the heck do we actually achieve this goal when the United States is in the middle of a divisive election campaign, and globally, we’re facing all kinds of geopolitical conflicts, trade protectionism, weather disasters, increasing demand from developing countries building a middle class, and data centers in countries like the U.S.?”

    Researchers, government officials, and business leaders convened in Cambridge, Massachusetts, Sept. 25-26 to wrestle with this vexing question at the conference, themed “A durable energy transition: How to stay on track in the face of increasing demand and unpredictable obstacles.”

    “In this room we have a lot of power,” said Green, “if we work together, convey to all of society what we see as real pathways and policies to solve problems, and take collective action.”

    The critical role of consensus-building in driving the energy transition arose repeatedly in conference sessions, whether the topic involved developing and adopting new technologies, constructing and siting infrastructure, drafting and passing vital energy policies, or attracting and retaining a skilled workforce.

    Resolving conflicts

    There is “blowback and a social cost” in transitioning away from fossil fuels, said Stephen Ansolabehere, the Frank G. Thompson Professor of Government at Harvard University, in a panel on the social barriers to decarbonization. “Companies need to engage differently and recognize the rights of communities,” he said.

    Nora DeDontney, director of development at Vineyard Offshore, described her company’s two years of outreach and negotiations to bring large cables from ocean-based wind turbines onshore. “Our motto is, ‘community first,’” she said. Her company works to mitigate any impacts towns might feel because of offshore wind infrastructure construction with projects such as sewer upgrades; provides workforce training to Tribal Nations; and lays out wind turbines in a manner that provides safe and reliable areas for local fisheries.

    Elsa A. Olivetti, professor in the Department of Materials Science and Engineering at MIT and the lead of the Decarbonization Mission of MIT’s new Climate Project, discussed the urgent need for rapid scale-up of mineral extraction. “Estimates indicate that to electrify the vehicle fleet by 2050, about six new large copper mines need to come on line each year,” she said. Meeting the demand for metals in the United States means pushing into Indigenous lands and environmentally sensitive habitats. “The timeline of permitting is not aligned with the temporal acceleration needed,” she said.

    Larry Susskind, the Ford Professor of Urban and Environmental Planning in the MIT Department of Urban Studies and Planning, is trying to resolve such tensions with universities playing the role of mediators. He is creating renewable energy clinics where students train to participate in emerging disputes over siting.
    “Talk to people before decisions are made, conduct joint fact finding, so that facilities reduce harms and share the benefits,” he said.

    Clean energy boom and pressure

    A relatively recent and unforeseen increase in demand for energy comes from data centers, which are being built by large technology companies for new offerings such as artificial intelligence. “General energy demand was flat for 20 years — and now, boom,” said Sean James, Microsoft’s senior director of data center research. “It caught utilities flat-footed.” With the expansion of AI, the rush to provision data centers with upwards of 35 gigawatts of new (and mainly renewable) power in the near future intensifies pressure on big companies to balance the concerns of stakeholders across multiple domains.

    Google is pursuing 24/7 carbon-free energy by 2030, said Devon Swezey, the company’s senior manager for global energy and climate. “We’re pursuing this by purchasing more and different types of clean energy locally, and accelerating technological innovation such as next-generation geothermal projects,” he said.

    Pedro Gómez Lopez, strategy and development director at Ferrovial Digital, which designs and constructs data centers, said the company incorporates renewable energy into its projects, contributing to decarbonization goals and benefiting the locales where the centers are sited. “We can create a new supply of power, taking the heat generated by a data center to residences or industries in neighborhoods through District Heating initiatives,” he said.

    The Inflation Reduction Act and other legislation have ramped up employment opportunities in clean energy nationwide, touching every region, including those most tied to fossil fuels. “At the start of 2024 there were about 3.5 million clean energy jobs, with ‘red’ states showing the fastest growth in clean energy jobs,” said David S. Miller, managing partner at Clean Energy Ventures. “The majority (58 percent) of new jobs in energy are now in clean energy — that transition has happened. And one in 16 new jobs nationwide were in clean energy, with clean energy jobs growing more than three times faster than job growth economy-wide.”

    In this rapid expansion, the U.S. Department of Energy (DoE) is prioritizing economically marginalized places, according to Zoe Lipman, lead for good jobs and labor standards in the Office of Energy Jobs at the DoE. “The community benefit process is integrated into our funding,” she said. “We are creating the foundation of a virtuous circle,” encouraging benefits to flow to disadvantaged and energy communities, spurring workforce training partnerships, and promoting well-paid union jobs. “These policies incentivize proactive community and labor engagement, and deliver community benefits, both of which are key to building support for technological change.”

    Hydrogen opportunity and challenge

    While engagement with stakeholders helps clear the path for implementation of technology and the spread of infrastructure, there remain enormous policy, scientific, and engineering challenges to solve, said multiple conference participants. In a “fireside chat,” Prasanna V. Joshi, vice president of low-carbon-solutions technology at ExxonMobil, and Ernest J. Moniz, professor of physics and special advisor to the president at MIT, discussed efforts to replace natural gas and coal with zero-carbon hydrogen in order to reduce greenhouse gas emissions in such major industries as steel and fertilizer manufacturing.

    “We have gone into an era of industrial policy,” said Moniz, citing a new DoE program offering incentives to generate demand for hydrogen — more costly than conventional fossil fuels — in end-use applications. “We are going to have to transition from our current approach, which I would call carrots-and-twigs, to ultimately, carrots-and-sticks,” Moniz warned, in order to create “a self-sustaining, major, scalable, affordable hydrogen economy.”

    To achieve net zero emissions by 2050, ExxonMobil intends to use carbon capture and sequestration in natural gas-based hydrogen and ammonia production. Ammonia can also serve as a zero-carbon fuel, and industry is exploring burning it directly in coal-fired power plants to extend the hydrogen value chain. But there are challenges. “How do you burn 100 percent ammonia?” asked Joshi. “That’s one of the key technology breakthroughs that’s needed.” Joshi believes that collaboration with MIT’s “ecosystem of breakthrough innovation” will be essential to breaking logjams around the hydrogen and ammonia-based industries.

    MIT ingenuity essential

    The energy transition is placing very different demands on different regions around the world. Take India, where per capita power consumption today is among the world’s lowest. But Indians “are an aspirational people … and with increasing urbanization and industrial activity, the growth in power demand is expected to triple by 2050,” said Praveer Sinha, CEO and managing director of the Tata Power Co. Ltd., in his keynote speech. For that nation, which currently relies on coal, the move to clean energy means bringing another 300 gigawatts of zero-carbon capacity online in the next five years. Sinha sees this power coming from wind, solar, and hydro, supplemented by nuclear energy.

    “India plans to triple nuclear power generation capacity by 2032, and is focusing on advancing small modular reactors,” said Sinha. “The country also needs the rapid deployment of storage solutions to firm up the intermittent power.” The goal is to provide reliable electricity 24/7 to a population living both in large cities and in geographically remote villages, with the help of long-range transmission lines and local microgrids. “India’s energy transition will require innovative and affordable technology solutions, and there is no better place to go than MIT, where you have the best brains, startups, and technology,” he said.

    These assets were on full display at the conference.
    Among them was a cluster of young businesses, including:

    • the MIT spinout Form Energy, which has developed a 100-hour iron battery as a backstop to renewable energy sources in case of multi-day interruptions;
    • the startup Noya, which aims for direct air capture of atmospheric CO2 using carbon-based materials;
    • the firm Active Surfaces, with a lightweight material for putting solar photovoltaics in previously inaccessible places;
    • Copernic Catalysts, with new chemistry for making ammonia and sustainable aviation fuel far more inexpensively than current processes; and
    • Sesame Sustainability, a software platform spun out of MITEI that gives industries a full financial analysis of the costs and benefits of decarbonization.

    The pipeline of research talent extended into the undergraduate ranks, with a conference “slam” competition showcasing students’ summer research projects in areas from carbon capture using enzymes to 3D design for the coils used in fusion energy confinement.

    “MIT students like me are looking to be the next generation of energy leaders, looking for careers where we can apply our engineering skills to tackle exciting climate problems and make a tangible impact,” said Trent Lee, a junior in mechanical engineering researching improvements in lithium-ion energy storage. “We are stoked by the energy transition, because it’s not just the future, but our chance to build it.”

  • Smart handling of neutrons is crucial to fusion power success

    In fall 2009, when Ethan Peterson ’13 arrived at MIT as an undergraduate, he already had some ideas about possible career options. He’d always liked building things, even as a child, so he imagined his future work would involve engineering of some sort. He also liked physics. And he’d recently become intent on reducing our dependence on fossil fuels and simultaneously curbing greenhouse gas emissions, which made him consider studying solar and wind energy, among other renewable sources.

    Things crystallized for him in the spring semester of 2010, when he took an introductory course on nuclear fusion, taught by Anne White, during which he discovered that when a deuterium nucleus and a tritium nucleus combine to produce a helium nucleus, an energetic (14 mega-electron-volt) neutron — traveling at one-sixth the speed of light — is released. Moreover, 10^20 (100 billion billion) of these neutrons would be produced every second that a 500-megawatt fusion power plant operates. “It was eye-opening for me to learn just how energy-dense the fusion process is,” says Peterson, who became the Class of 1956 Career Development Professor of nuclear science and engineering in July 2024. “I was struck by the richness and interdisciplinary nature of the fusion field. This was an engineering discipline where I could apply physics to solve a real-world problem in a way that was both interesting and beautiful.”

    He soon became a physics and nuclear engineering double major, and by the time he graduated from MIT in 2013, the U.S. Department of Energy (DoE) had already decided to cut funding for MIT’s Alcator C-Mod fusion project. In view of that facility’s impending closure, Peterson opted to pursue graduate studies at the University of Wisconsin. There, he acquired a basic science background in plasma physics, which is central not only to nuclear fusion but also to astrophysical phenomena such as the solar wind.

    When Peterson received his PhD from Wisconsin in 2019, nuclear fusion had rebounded at MIT with the launch, a year earlier, of the SPARC project — a collaborative effort being carried out with the newly founded MIT spinout Commonwealth Fusion Systems. He returned to his alma mater as a postdoc and then a research scientist in the Plasma Science and Fusion Center, taking his time, at first, to figure out how best to make his mark in the field.

    Minding your neutrons

    Around that time, Peterson was participating in a community planning process, sponsored by the DoE, that focused on critical gaps that needed to be closed for a successful fusion program. In the course of these discussions, he came to realize that inadequate attention had been paid to the handling of neutrons, which carry 80 percent of the energy coming out of a fusion reaction — energy that needs to be harnessed for electrical generation. However, these neutrons are so energetic that they can penetrate through many tens of centimeters of material, potentially undermining the structural integrity of components and damaging vital equipment such as superconducting magnets. Shielding is also essential for protecting humans from harmful radiation.

    One goal, Peterson says, is to minimize the number of neutrons that escape and, in so doing, to reduce the amount of lost energy. A complementary objective, he adds, “is to get neutrons to deposit heat where you want them to and to stop them from depositing heat where you don’t want them to.” These considerations, in turn, can have a profound influence on fusion reactor design.
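    As a quick check on the neutron figures Peterson cites above (roughly 10^20 neutrons per second from a 500-megawatt plant, each carrying about 14 MeV and traveling at about one-sixth the speed of light), here is a minimal back-of-the-envelope sketch in Python. It assumes standard D-T reaction values (17.6 MeV per reaction, with 14.1 MeV going to the neutron), which are textbook figures rather than numbers taken from this article:

```python
# Back-of-the-envelope check of the D-T fusion neutron figures quoted above.
# Assumptions (standard textbook values, not from the article itself):
#   - each D-T reaction releases ~17.6 MeV, ~14.1 MeV of which goes to the neutron
#   - "500-megawatt" is read as 500 MW of fusion power
MEV_TO_JOULE = 1.602e-13       # 1 MeV in joules
E_PER_REACTION_MEV = 17.6      # total energy released per D-T reaction
E_NEUTRON_MEV = 14.1           # kinetic energy carried by the neutron
NEUTRON_REST_MASS_MEV = 939.6  # neutron rest energy, m*c^2
FUSION_POWER_W = 500e6

# One neutron per reaction, so the neutron production rate is power / energy per reaction.
neutrons_per_second = FUSION_POWER_W / (E_PER_REACTION_MEV * MEV_TO_JOULE)

# Relativistic speed of a 14.1 MeV neutron: gamma = 1 + T/(m*c^2), beta = sqrt(1 - 1/gamma^2).
gamma = 1.0 + E_NEUTRON_MEV / NEUTRON_REST_MASS_MEV
beta = (1.0 - 1.0 / gamma**2) ** 0.5

print(f"neutrons per second: ~{neutrons_per_second:.1e}")               # ~1.8e20, i.e., order 10^20
print(f"neutron speed: ~{beta:.2f} c, roughly 1/{round(1/beta)} of c")  # ~0.17 c, about one-sixth
```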
    This branch of nuclear engineering, called neutronics — which analyzes where neutrons are created and where they end up going — has become Peterson’s specialty. It was never a high-profile area of research in the fusion community, as plasma physics, for example, has always garnered more of the spotlight and more of the funding. That’s exactly why Peterson has stepped up. “The impacts of neutrons on fusion reactor design haven’t been a high priority for a long time,” he says. “I felt that some initiative needed to be taken,” and that prompted him to make the switch from plasma physics to neutronics. It has been his principal focus ever since — as a postdoc, a research scientist, and now as a faculty member.

    A code to design by

    The best way to get a neutron to transfer its energy is to make it collide with a light atom. Lithium, with an atomic number of three, or lithium-containing materials are normally good choices — and necessary for producing tritium fuel. The placement of lithium “blankets,” which are intended to absorb energy from neutrons and produce tritium, “is a critical part of the design of fusion reactors,” Peterson says. High-density materials, such as lead and tungsten, can be used, conversely, to block the passage of neutrons and other types of radiation. “You might want to layer these high- and low-density materials in a complicated way that isn’t immediately intuitive,” he adds. Determining which materials to put where — and of what thickness and mass — amounts to a tricky optimization problem, which will affect the size, cost, and efficiency of a fusion power plant.

    To that end, Peterson has developed modeling tools that can make analyses of these sorts easier and faster, thereby facilitating the design process. “This has traditionally been the step that takes the longest time and causes the biggest holdups,” he says. The models and algorithms that he and his colleagues are devising are general enough, moreover, to be compatible with a diverse range of fusion power plant concepts, including those that use magnets or lasers to confine the plasma.

    Now that he’s become a professor, Peterson is in a position to introduce more people to nuclear engineering, and to neutronics in particular. “I love teaching and mentoring students, sharing the things I’m excited about,” he says. “I was inspired by all the professors I had in physics and nuclear engineering at MIT, and I hope to give back to the community in the same way.”

    He also believes that if you are going to work on fusion, there is no better place to be than MIT, “where the facilities are second to none. People here are extremely innovative and passionate. And the sheer number of people who excel in their fields is staggering.” Great ideas can sometimes be sparked by off-the-cuff conversations in the hallway — something that happens more frequently than you might expect, Peterson remarks. “All of these things taken together make MIT a very special place.”

  • Study: Fusion energy could play a major role in the global response to climate change

    For many decades, fusion has been touted as the ultimate source of abundant, clean electricity. Now, as the world faces the need to reduce carbon emissions to prevent catastrophic climate change, making commercial fusion power a reality takes on new importance. In a power system dominated by low-carbon variable renewable energy sources (VREs) such as solar and wind, “firm” electricity sources are needed to kick in whenever demand exceeds supply — for example, when the sun isn’t shining or the wind isn’t blowing and energy storage systems aren’t up to the task. What is the potential role and value of fusion power plants (FPPs) in such a future electric power system — a system that is not only free of carbon emissions but also capable of meeting the dramatically increased global electricity demand expected in the coming decades?

    For a year and a half, investigators in the MIT Energy Initiative (MITEI) and the MIT Plasma Science and Fusion Center (PSFC) have been collaborating to answer that question. They found that — depending on its future cost and performance — fusion has the potential to be critically important to decarbonization. Under some conditions, the availability of FPPs could reduce the global cost of decarbonizing by trillions of dollars. More than 25 experts together examined the factors that will affect the deployment of FPPs, including costs, climate policy, and operating characteristics. They present their findings in a new report, funded through MITEI and entitled “The Role of Fusion Energy in a Decarbonized Electricity System.”

    “Right now, there is great interest in fusion energy in many quarters — from the private sector to government to the general public,” says the study’s principal investigator (PI) Robert C. Armstrong, MITEI’s former director and the Chevron Professor of Chemical Engineering, Emeritus. “In undertaking this study, our goal was to provide a balanced, fact-based, analysis-driven guide to help us all understand the prospects for fusion going forward.” Accordingly, the study takes a multidisciplinary approach that combines economic modeling, electric grid modeling, techno-economic analysis, and more to examine important factors that are likely to shape the future deployment and utilization of fusion energy. The investigators from MITEI provided the energy systems modeling capability, while the PSFC participants provided the fusion expertise.

    Fusion technologies may be a decade away from commercial deployment, so the detailed technology and costs of future commercial FPPs are not known at this point. As a result, the MIT research team focused on determining what cost levels fusion plants must reach by 2050 to achieve strong market penetration and make a significant contribution to the decarbonization of global electricity supply in the latter half of the century.

    The value of having FPPs available on an electric grid will depend on what other options are available, so to perform their analyses, the researchers needed estimates of the future cost and performance of those options, including conventional fossil fuel generators, nuclear fission power plants, VRE generators, and energy storage technologies, as well as electricity demand for specific regions of the world.
    To find the most reliable data, they searched the published literature as well as results of previous MITEI and PSFC analyses. Overall, the analyses showed that — while the technology demands of harnessing fusion energy are formidable — so are the potential economic and environmental payoffs of adding this firm, low-carbon technology to the world’s portfolio of energy options.

    Perhaps the most remarkable finding is the “societal value” of having commercial FPPs available. “Limiting warming to 1.5 degrees C requires that the world invest in wind, solar, storage, grid infrastructure, and everything else needed to decarbonize the electric power system,” explains Randall Field, executive director of the fusion study and MITEI’s director of research. “The cost of that task can be far lower when FPPs are available as a source of clean, firm electricity.” And the benefit varies depending on the cost of the FPPs. For example, assuming that the cost of building an FPP is $8,000 per kilowatt (kW) in 2050 and falls to $4,300/kW in 2100, the global cost of decarbonizing electric power drops by $3.6 trillion. If the cost of an FPP is $5,600/kW in 2050 and falls to $3,000/kW in 2100, the savings from having the fusion plants available would be $8.7 trillion. (Those calculations are based on differences in global gross domestic product and assume a discount rate of 6 percent. The undiscounted value is about 20 times larger.)

    The goal of other analyses was to determine the scale of deployment worldwide at selected FPP costs. Again, the results are striking. For a deep decarbonization scenario, the total global share of electricity generation from fusion in 2100 ranges from less than 10 percent if the cost of fusion is high to more than 50 percent if the cost of fusion is low.

    Other analyses showed that the scale and timing of fusion deployment vary in different parts of the world. Early deployment of fusion can be expected in wealthy nations such as European countries and the United States that have the most aggressive decarbonization policies. But certain other locations — for example, India and the continent of Africa — will see great growth in fusion deployment in the second half of the century due to a large increase in demand for electricity during that time. “In the U.S. and Europe, the amount of demand growth will be low, so it’ll be a matter of switching away from dirty fuels to fusion,” explains Sergey Paltsev, deputy director of the MIT Center for Sustainability Science and Strategy and a senior research scientist at MITEI. “But in India and Africa, for example, the tremendous growth in overall electricity demand will be met with significant amounts of fusion along with other low-carbon generation resources in the later part of the century.”

    A set of analyses focusing on nine subregions of the United States showed that the availability and cost of other low-carbon technologies, as well as how tightly carbon emissions are constrained, have a major impact on how FPPs would be deployed and used. In a decarbonized world, FPPs will have the highest penetration in locations with poor diversity, capacity, and quality of renewable resources, and limits on carbon emissions will have a big impact. For example, the Atlantic and Southeast subregions have low renewable resources. In those subregions, wind can produce only a small fraction of the electricity needed, even with maximum onshore wind buildout.
    Thus, fusion is needed in those subregions even when carbon constraints are relatively lenient, and any available FPPs would be running much of the time. In contrast, the Central subregion of the United States has excellent renewable resources, especially wind. Thus, fusion competes in the Central subregion only when limits on carbon emissions are very strict, and FPPs will typically be operated only when the renewables can’t meet demand.

    An analysis of the power system that serves the New England states provided remarkably detailed results. Using a modeling tool developed at MITEI, the fusion team explored the impact of using different assumptions about not just cost and emissions limits but even such details as potential land-use constraints affecting the use of specific VREs. This approach enabled them to calculate the FPP cost at which fusion units begin to be installed. They were also able to investigate how that “threshold” cost changed with changes in the cap on carbon emissions. The method can even show at what price FPPs begin to replace other specific generating sources. In one set of runs, they determined the cost at which FPPs would begin to displace floating-platform offshore wind and rooftop solar.

    “This study is an important contribution to fusion commercialization because it provides economic targets for the use of fusion in the electricity markets,” notes Dennis G. Whyte, co-PI of the fusion study, former director of the PSFC, and the Hitachi America Professor of Engineering in the Department of Nuclear Science and Engineering. “It better quantifies the technical design challenges for fusion developers with respect to pricing, availability, and flexibility to meet changing demand in the future.”

    The researchers stress that while fission power plants are included in the analyses, they did not perform a “head-to-head” comparison between fission and fusion, because there are too many unknowns. Fusion and nuclear fission are both firm, low-carbon electricity-generating technologies; but unlike fission, fusion doesn’t use fissile materials as fuels, and it doesn’t generate long-lived nuclear fuel waste that must be managed. As a result, the regulatory requirements for FPPs will be very different from the regulations for today’s fission power plants — but precisely how they will differ is unclear. Likewise, the future public perception and social acceptance of each of these technologies cannot be projected, but could have a major influence on what generation technologies are used to meet future demand.

    The results of the study convey several messages about the future of fusion. For example, it’s clear that regulation can be a potentially large cost driver. This should motivate fusion companies to minimize their regulatory and environmental footprint with respect to fuels and activated materials. It should also encourage governments to adopt appropriate and effective regulatory policies to maximize their ability to use fusion energy in achieving their decarbonization goals. And for companies developing fusion technologies, the study’s message is clearly stated in the report: “If the cost and performance targets identified in this report can be achieved, our analysis shows that fusion energy can play a major role in meeting future electricity needs and achieving global net-zero carbon goals.”
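    The savings figures cited earlier in this story are discounted at 6 percent, and the report notes that the undiscounted value is about 20 times larger. To illustrate why discounting shrinks late-century benefits so sharply, here is a minimal sketch; the base year and the example years are assumptions chosen for illustration, not the study’s actual cash-flow model:

```python
# Present value of $1 of decarbonization savings realized in a given future year,
# discounted at the study's stated 6 percent rate. The base year and example years
# are illustrative assumptions only.
DISCOUNT_RATE = 0.06
BASE_YEAR = 2025

def present_value(amount: float, year: int) -> float:
    """Discount a future amount back to BASE_YEAR at DISCOUNT_RATE."""
    return amount / (1.0 + DISCOUNT_RATE) ** (year - BASE_YEAR)

for year in (2050, 2075, 2100):
    pv = present_value(1.0, year)
    print(f"$1 of savings in {year} is worth about ${pv:.3f} today (a factor of ~{1/pv:.0f} smaller)")
```

    Savings realized in the second half of the century are divided by discount factors of roughly 20 (for 2075) to 80 (for 2100), which is broadly consistent with the report’s note that the undiscounted totals are about 20 times larger than the discounted ones.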

  • Applying risk and reliability analysis across industries

    On Feb. 1, 2003, the space shuttle Columbia disintegrated as it returned to Earth, killing all seven astronauts on board. The tragic incident compelled NASA to step up its risk assessments and safety protocols. The agency knew whom to call: Curtis Smith PhD ’02, who is now the KEPCO Professor of the Practice of Nuclear Science and Engineering at MIT.

    The nuclear community has always been a leader in probabilistic risk analysis, and Smith’s work in risk-related research had made him an established expert in the field. When NASA came knocking, Smith had been working for the Nuclear Regulatory Commission (NRC) at the Idaho National Laboratory (INL). He pivoted quickly. For the next decade, Smith worked with NASA’s Office of Safety and Mission Assurance, supporting its increased use of risk analysis. It was a software tool that Smith helped develop, SAPHIRE, that NASA would adopt to bolster its own risk analysis program.

    At MIT, Smith’s focus is on both sides of system operation: risk and reliability. A research project he has proposed involves evaluating the reliability of 3D-printed components and parts for nuclear reactors.

    Growing up in Idaho

    MIT is a long way from where Smith grew up, on the Shoshone-Bannock Native American reservation in Fort Hall, Idaho. His father worked at a chemical manufacturing plant, while his mother and grandmother operated a small restaurant on the reservation.

    Southeast Idaho had a significant population of migrant workers, and Smith grew up with a diverse group of friends, mostly Native American and Hispanic. “It was a largely positive time and set a worldview for me in many wonderful ways,” Smith remembers. When he was a junior in high school, the family moved to Pingree, Idaho, a small town of barely 500. Smith attended Snake River High, a regional school, and remembers the deep impact his teachers had. “I learned a lot in grade school and had great teachers, so my love for education probably started there. I tried to emulate my teachers,” Smith says.

    Smith went to Idaho State University in Pocatello for college, a 45-minute drive from his family. Drawn to science, he decided he wanted to study a subject that would benefit humanity the most: nuclear engineering. Fortunately, Idaho State has a strong nuclear engineering program. Smith completed a master’s degree in the same field at ISU while working for the Federal Bureau of Investigation in the security department during the swing shift — 5 p.m. to 1 a.m. — at the FBI offices in Pocatello. “It was a perfect job while attending grad school,” Smith says.

    His KEPCO Professor of the Practice appointment is the second stint for Smith at MIT: He completed his PhD in the Department of Nuclear Science and Engineering (NSE) under the advisement of Professor George Apostolakis in 2002.

    A career in risk analysis and management

    After his doctorate at MIT, Smith returned to Idaho, conducting research in risk analysis for the NRC. He also taught technical courses and developed risk analysis software. “We did a whole host of work that supported the current fleet of nuclear reactors that we have,” Smith says.

    He was 10 years into his career at INL when NASA recruited him, leaning on his expertise in risk analysis and asking him to translate it to space missions. “I didn’t really have a background in aerospace, but I was able to bring all the engineering I knew, conducting risk analysis for nuclear missions. It was really exciting and I learned a lot about aerospace,” Smith says.

    Risk analysis uses statistics and data to answer complex questions involving safety. Among his projects: analyzing the risk involved in a Mars rover mission with a radioisotope-generated power source for the rover. Even if the necessary plutonium is encased in really strong material, calculations for risk have to factor in all eventualities, including the rocket blowing up.

    When the Fukushima incident happened in 2011, the Department of Energy (DoE) became more supportive of safety and risk analysis research. Smith found himself in the center of the action again, supporting large DoE research programs. He then moved on to become the director of the Nuclear Safety and Regulatory Research Division at the INL. Smith found he loved the role, mentoring and nurturing the careers of a diverse set of scientists. “It turned out to be much more rewarding than I had expected,” Smith says. Under his leadership, the division grew from 45 to almost 90 research staff and won multiple national awards.

    Return to MIT

    MIT NSE came calling in 2022, looking to fill the position of professor of the practice, an offer Smith couldn’t refuse. The department was looking to bulk up its risk and reliability offerings, and Smith made a great fit. The DoE division he had been supervising had grown wings enough for Smith to seek out something new.

    “Just getting back to Boston is exciting,” Smith says. The last go-around involved bringing the family to the city and included a lot of sleepless nights. Smith’s wife, Jacquie, is also excited about being closer to the New England fan base. The couple has invested in season tickets for the Patriots and looks to attend as many sporting events as possible.

    Smith is most excited about adding to the risk and reliability offerings at MIT at a time when the subject has become especially important for nuclear power. “I’m grateful for the opportunity to bring my knowledge and expertise from the last 30 years to the field,” he says. Being a professor of the practice in NSE carries with it a responsibility to unite theory and practice, something Smith is especially good at. “We always have to answer the question of, ‘How do I take the research and make that practical,’ especially for something important like nuclear power, because we need much more of these ideas in industry,” he says.

    He is particularly excited about developing the next generation of nuclear scientists. “Having the ability to do this at a place like MIT is especially fulfilling and something I have been desiring my whole career,” Smith says.
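    Smith’s specialty, probabilistic risk analysis, comes up above in connection with missions such as a radioisotope-powered Mars rover. As a purely illustrative sketch of the kind of arithmetic involved, consider a simple two-event chain; the probabilities and consequence value below are made-up placeholders, not numbers from Smith’s work or from any NASA or NRC analysis:

```python
# Toy probabilistic risk calculation in the spirit of launch-accident analyses.
# All numbers are hypothetical placeholders used only to illustrate the structure.
p_launch_failure = 1e-2      # assumed chance the launch vehicle fails
p_casing_breach = 1e-3       # assumed chance the fuel casing is breached, given a launch failure

# Probability of a radioactive release on a single mission (both events must occur).
p_release = p_launch_failure * p_casing_breach

# Risk is commonly expressed as probability times consequence.
consequence_person_sv = 5.0  # assumed collective dose if a release occurs (person-sieverts)
expected_consequence = p_release * consequence_person_sv

print(f"Per-mission release probability (toy numbers): {p_release:.0e}")
print(f"Expected consequence per mission (toy numbers): {expected_consequence:.0e} person-Sv")
```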

  • Aligning economic and regulatory frameworks for today’s nuclear reactor technology

    Liam Hines ’22 didn’t move to Sarasota, Florida, until high school, but he’s a Floridian through and through. He jokes that he’s even got a floral shirt, what he calls a “Florida formal,” for every occasion.

    Which is why it broke his heart when toxic red algae devastated the Sunshine State’s coastline, including at his favorite beach, Caspersen. The outbreak made headline news during his high school years, with the blooms destroying marine wildlife and adversely impacting the state’s tourism-driven economy.

    In Florida, Hines says, environmental awareness is pretty high because everyday citizens are being directly impacted by climate change. After all, it’s hard not to worry when beautiful white sand beaches are covered in dead fish. Ongoing concerns about the climate cemented Hines’ resolve to pick a career that would have a strong “positive environmental impact.” He chose nuclear, which he saw as “a green, low-carbon-emissions energy source with a pretty straightforward path to implementation.”

    Liam Hines: Ensuring that nuclear policy keeps up with nuclear technology.

    Undergraduate studies at MIT

    Knowing he wanted a career in the sciences, Hines applied and got accepted to MIT for undergraduate studies in fall 2018. An orientation program hosted by the Department of Nuclear Science and Engineering (NSE) sold him on the idea of pursuing the field. “The department is just a really tight-knit community, and that really appealed to me,” Hines says.

    During his undergraduate years, Hines realized he needed a job to pay part of his bills. “Instead of answering calls at the dorm front desk or working in the dining halls, I decided I’m going to become a licensed nuclear operator onsite,” he says. “Reactor operations offer so much hands-on experience with real nuclear systems. It doesn’t hurt that it pays better.” Becoming a licensed nuclear reactor operator is hard work, however, involving a year-long training process covering maintenance, operations, and equipment oversight. A bonus: The job, supervising the MIT Nuclear Reactor Laboratory, taught him the fundamentals of nuclear physics and engineering.

    Always interested in research, Hines got an early start by exploring the regulatory challenges of advanced fusion systems, where there have been open questions related to licensing requirements and the safety consequences of the onsite radionuclide inventory. His undergraduate research involved studying precedents for such fusion facilities and comparing them to experimental facilities such as the Tokamak Fusion Test Reactor at the Princeton Plasma Physics Laboratory.

    Doctoral focus on legal and regulatory frameworks

    When scientists want to make technologies as safe as possible, they have to do two things in concert: First they evaluate the safety of the technology, and then they make sure legal and regulatory structures take into account the evolution of these advanced technologies. Hines is taking such a two-pronged approach to his doctoral work on nuclear fission systems.

    Under the guidance of Professor Koroush Shirvan, Hines is conducting systems modeling of various reactor cores that include graphite and simulating operations over long time spans. He then studies radionuclide transport from low-level waste facilities — the consequences of offsite storage after 50, 100, or even 10,000 years. The work has to hit safety and engineering margins, but it also has to tread a fine line. “You want to make sure you’re not over-engineering systems and adding undue cost, but also making sure to assess the unique hazards of these advanced technologies as accurately as possible,” Hines says.

    On a parallel track, under Professor Haruko Wainwright’s advisement, Hines is applying the current science on radionuclide geochemistry to track radionuclide wastes and map their hazard profiles. One of the challenges fission reactors face is that existing low-level waste regulations were fine-tuned to old reactors. Regulations have not kept up: “Now that we have new technologies with new wastes, some of the hazards of the new waste are completely missed by existing standards,” Hines says. He is working to close these gaps.

    A philosophy-driven outlook

    Hines is grateful for the dynamic learning environment at NSE. “A lot of the faculty have that go-getter attitude,” he points out, impressed by the entrepreneurial spirit on campus. “It’s made me confident to really tackle the things that I care about.”

    An ethics class as an undergraduate made Hines realize there were discussions in class he could apply to the nuclear realm, especially when it came to teasing apart the implications of the technology — where the devices would be built and who they would serve. He eventually went on to double-major in NSE and philosophy.

    The framework style of reading and reasoning involved in studying philosophy is particularly relevant in his current line of work, where he has to extract key points regarding nuclear regulatory issues. Much like philosophy discussions today, which involve going over material that has been debated for centuries and framing it through new perspectives, nuclear regulatory issues too need to take the long view. “In philosophy, we have to insert ourselves into very large conversations. Similarly, in nuclear engineering, you have to understand how to take apart the discourse that’s most relevant to your research and frame it,” Hines says. This technique is especially necessary because nuclear regulatory issues can seem like wading into the weeds of nitty-gritty technical matters, yet they can have a huge impact on the public and on public perception, Hines adds.

    As for Florida, Hines visits every chance he gets. The red tide still surfaces, but not as consistently as it once did. And since he started his job as a nuclear operator in his undergraduate days, Hines has progressed to senior reactor operator. This time around he gets to sign off on the checklists. “It’s much like when I was shift lead at Dunkin’ Donuts in high school,” Hines says. “Everyone is kind of doing the same thing, but you get to be in charge for the afternoon.”

  • More durable metals for fusion power reactors

    For many decades, nuclear fusion power has been viewed as the ultimate energy source. A fusion power plant could generate carbon-free energy at a scale needed to address climate change. And it could be fueled by deuterium recovered from an essentially endless source — seawater.

    Decades of work and billions of dollars in research funding have yielded many advances, but challenges remain. To Ju Li, the TEPCO Professor in Nuclear Science and Engineering and a professor of materials science and engineering at MIT, there are still two big challenges. The first is to build a fusion power plant that generates more energy than is put into it; in other words, it produces a net output of power. Researchers worldwide are making progress toward meeting that goal.

    The second challenge that Li cites sounds straightforward: “How do we get the heat out?” But understanding the problem and finding a solution are both far from obvious.

    Research in the MIT Energy Initiative (MITEI) includes development and testing of advanced materials that may help address those challenges, as well as many other challenges of the energy transition. MITEI has multiple corporate members that have been supporting MIT’s efforts to advance technologies required to harness fusion energy.

    The problem: An abundance of helium, a destructive force

    Key to a fusion reactor is a superheated plasma — an ionized gas — that’s reacting inside a vacuum vessel. As light atoms in the plasma combine to form heavier ones, they release fast neutrons with high kinetic energy that shoot through the surrounding vacuum vessel into a coolant. During this process, those fast neutrons gradually lose their energy by causing radiation damage and generating heat. The heat that’s transferred to the coolant is eventually used to raise steam that drives an electricity-generating turbine.

    The problem is finding a material for the vacuum vessel that remains strong enough to keep the reacting plasma and the coolant apart, while allowing the fast neutrons to pass through to the coolant. If one considers only the damage due to neutrons knocking atoms out of position in the metal structure, the vacuum vessel should last a full decade. However, depending on what materials are used in the fabrication of the vacuum vessel, some projections indicate that the vacuum vessel will last only six to 12 months. Why is that? Today’s nuclear fission reactors also generate neutrons, and those reactors last far longer than a year.

    The difference is that fusion neutrons possess much higher kinetic energy than fission neutrons do, and as they penetrate the vacuum vessel walls, some of them interact with the nuclei of atoms in the structural material, giving off particles that rapidly turn into helium atoms. The result is hundreds of times more helium atoms than are present in a fission reactor. Those helium atoms look for somewhere to land — a place with low “embedding energy,” a measure that indicates how much energy it takes for a helium atom to be absorbed. As Li explains, “The helium atoms like to go to places with low helium embedding energy.” And in the metals used in fusion vacuum vessels, there are places with relatively low helium embedding energy — namely, naturally occurring openings called grain boundaries.

    Metals are made up of individual grains inside which atoms are lined up in an orderly fashion. Where the grains come together there are gaps where the atoms don’t line up as well.
    That open space has relatively low helium embedding energy, so the helium atoms congregate there. Worse still, helium atoms have a repellent interaction with other atoms, so the helium atoms basically push open the grain boundary. Over time, the opening grows into a continuous crack, and the vacuum vessel breaks.

    That congregation of helium atoms explains why the structure fails much sooner than expected based just on the number of helium atoms that are present. Li offers an analogy to illustrate. “Babylon is a city of a million people. But the claim is that 100 bad persons can destroy the whole city — if all those bad persons work at the city hall.” The solution? Give those bad persons other, more attractive places to go, ideally in their own villages.

    To Li, the problem and possible solution are the same in a fusion reactor. If many helium atoms go to the grain boundary at once, they can destroy the metal wall. The solution? Add a small amount of a material that has a helium embedding energy even lower than that of the grain boundary. And over the past two years, Li and his team have demonstrated — both theoretically and experimentally — that their diversionary tactic works. By adding nanoscale particles of a carefully selected second material to the metal wall, they’ve found they can keep the helium atoms that form from congregating in the structurally vulnerable grain boundaries in the metal.

    Looking for helium-absorbing compounds

    To test their idea, So Yeon Kim ScD ’23 of the Department of Materials Science and Engineering and Haowei Xu PhD ’23 of the Department of Nuclear Science and Engineering acquired a sample composed of two materials, or “phases,” one with a lower helium embedding energy than the other. They and their collaborators then implanted helium ions into the sample at a temperature similar to that in a fusion reactor and watched as bubbles of helium formed. Transmission electron microscope images confirmed that the helium bubbles occurred predominantly in the phase with the lower helium embedding energy. As Li notes, “All the damage is in that phase — evidence that it protected the phase with the higher embedding energy.”

    Having confirmed their approach, the researchers were ready to search for helium-absorbing compounds that would work well with iron, which is often the principal metal in vacuum vessel walls. “But calculating helium embedding energy for all sorts of different materials would be computationally demanding and expensive,” says Kim. “We wanted to find a metric that is easy to compute and a reliable indicator of helium embedding energy.”

    They found such a metric: the “atomic-scale free volume,” which is basically the maximum size of the internal vacant space available for helium atoms to potentially settle. “This is just the radius of the largest sphere that can fit into a given crystal structure,” explains Kim. “It is a simple calculation.” Examination of a series of possible helium-absorbing ceramic materials confirmed that atomic free volume correlates well with helium embedding energy. Moreover, many of the ceramics they investigated have higher free volume, thus lower embedding energy, than the grain boundaries do.
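    Kim describes the free-volume metric as the radius of the largest sphere that fits into a given crystal structure. As a simplified illustration of that idea, here is a sketch using textbook hard-sphere geometry for the two common cubic lattices; the study’s actual screening presumably evaluates the real crystal structures of candidate ceramics, which this toy calculation does not attempt:

```python
import math

# Largest interstitial sphere that fits between touching hard-sphere atoms in
# ideal BCC and FCC lattices, expressed relative to the atomic radius r.
# These are standard crystallography ratios, used here only to illustrate the
# "atomic-scale free volume" idea described in the article.

def bcc_interstitial_ratio() -> float:
    # BCC: atoms touch along the body diagonal, so 4r = sqrt(3) * a.
    # The largest hole is the tetrahedral site at (1/2, 1/4, 0), which sits a
    # distance sqrt(5)/4 * a from the nearest atom.
    a_over_r = 4.0 / math.sqrt(3.0)
    return (math.sqrt(5.0) / 4.0) * a_over_r - 1.0   # ~0.29

def fcc_interstitial_ratio() -> float:
    # FCC: atoms touch along the face diagonal, so 4r = sqrt(2) * a.
    # The largest hole is the octahedral site at the cell center, a distance
    # a/2 from the nearest atom.
    a_over_r = 4.0 / math.sqrt(2.0)
    return 0.5 * a_over_r - 1.0                      # ~0.41

print(f"BCC (e.g., alpha-iron): largest interstitial radius ~ {bcc_interstitial_ratio():.2f} r")
print(f"FCC:                    largest interstitial radius ~ {fcc_interstitial_ratio():.2f} r")
```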
    However, in order to identify options for the nuclear fusion application, the screening needed to include some other factors. For example, in addition to the atomic free volume, a good second phase must be mechanically robust (able to sustain a load); it must not get very radioactive with neutron exposure; and it must be compatible — but not too cozy — with the surrounding metal, so it disperses well but does not dissolve into the metal. “We want to disperse the ceramic phase uniformly in the bulk metal to ensure that all grain boundary regions are close to the dispersed ceramic phase so it can provide protection to those regions,” says Li. “The two phases need to coexist, so the ceramic won’t either clump together or totally dissolve in the iron.”

    Using their analytical tools, Kim and Xu examined about 50,000 compounds and identified 750 potential candidates. Of those, a good option for inclusion in a vacuum vessel wall made mainly of iron was iron silicate.

    Experimental testing

    The researchers were ready to examine samples in the lab. To make the composite material for proof-of-concept demonstrations, Kim and collaborators dispersed nanoscale particles of iron silicate into iron and implanted helium into that composite material. She took X-ray diffraction (XRD) images before and after implanting the helium and also computed the XRD patterns. The ratio between the implanted helium and the dispersed iron silicate was carefully controlled to allow a direct comparison between the experimental and computed XRD patterns. The measured XRD intensity changed with the helium implantation exactly as the calculations had predicted. “That agreement confirms that atomic helium is being stored within the bulk lattice of the iron silicate,” says Kim.

    To follow up, Kim directly counted the number of helium bubbles in the composite. In iron samples without the iron silicate added, grain boundaries were flanked by many helium bubbles. In contrast, in the iron samples with the iron silicate ceramic phase added, helium bubbles were spread throughout the material, with many fewer occurring along the grain boundaries. Thus, the iron silicate had provided sites with low helium-embedding energy that lured the helium atoms away from the grain boundaries, protecting those vulnerable openings and preventing cracks from opening up and causing the vacuum vessel to fail catastrophically.

    The researchers conclude that adding just 1 percent (by volume) of iron silicate to the iron walls of the vacuum vessel will cut the number of helium bubbles in half and also reduce their diameter by 20 percent — “and having a lot of small bubbles is OK if they’re not in the grain boundaries,” explains Li.

    Next steps

    Thus far, Li and his team have gone from computational studies of the problem and a possible solution to experimental demonstrations that confirm their approach. And they’re well on their way to commercial fabrication of components. “We’ve made powders that are compatible with existing commercial 3D printers and are preloaded with helium-absorbing ceramics,” says Li. The helium-absorbing nanoparticles are well dispersed and should provide sufficient helium uptake to protect the vulnerable grain boundaries in the structural metals of the vessel walls.
    While Li confirms that there’s more scientific and engineering work to be done, he, along with Alexander O’Brien PhD ’23 of the Department of Nuclear Science and Engineering and Kang Pyo So, a former postdoc in the same department, has already developed a startup company that’s ready to 3D print structural materials that can meet all the challenges faced by the vacuum vessel inside a fusion reactor.

    This research was supported by Eni S.p.A. through the MIT Energy Initiative. Additional support was provided by a Kwajeong Scholarship; the U.S. Department of Energy (DOE) Laboratory Directed Research and Development program at Idaho National Laboratory; the U.S. DOE Lawrence Livermore National Laboratory; and the Creative Materials Discovery Program through the National Research Foundation of Korea.

  • Nuno Loureiro named director of MIT’s Plasma Science and Fusion Center

    Nuno Loureiro, professor of nuclear science and engineering and of physics, has been appointed the new director of the MIT Plasma Science and Fusion Center, effective May 1.

    Loureiro is taking the helm of one of MIT’s largest labs: more than 250 full-time researchers, staff members, and students work and study in seven buildings with 250,000 square feet of lab space. A theoretical physicist and fusion scientist, Loureiro joined MIT as a faculty member in 2016 and was appointed deputy director of the Plasma Science and Fusion Center (PSFC) in 2022. Loureiro succeeds Dennis Whyte, who stepped down at the end of 2023 to return to teaching and research.

    Stepping into his new role as director, Loureiro says, “The PSFC has an impressive tradition of discovery and leadership in plasma and fusion science and engineering. Becoming director of the PSFC is an incredible opportunity to shape the future of these fields. We have a world-class team, and it’s an honor to be chosen as its leader.”

    Loureiro’s own research ranges widely. He is recognized for advancing the understanding of multiple aspects of plasma behavior, particularly turbulence and the physics underpinning solar flares and other astronomical phenomena. In the fusion domain, his work enables the design of fusion devices that can more efficiently control and harness the energy of fusing plasmas, bringing the dream of clean, near-limitless fusion power that much closer.

    Plasma physics is foundational to advancing fusion science, a fact Loureiro has embraced and one that is relevant as he considers the direction of the PSFC’s multidisciplinary research. “But plasma physics is only one aspect of our focus. Building a scientific agenda that continues and expands on the PSFC’s history of innovation in all aspects of fusion science and engineering is vital, and a key facet of that work is facilitating our researchers’ efforts to produce the breakthroughs that are necessary for the realization of fusion energy.”

    As the climate crisis accelerates, fusion power continues to grow in appeal: It produces no carbon emissions, its fuel is plentiful, and dangerous “meltdowns” are impossible. The sooner fusion power is commercially available, the greater impact it can have on reducing greenhouse gas emissions and meeting global climate goals. While technical challenges remain, “the PSFC is well poised to meet them, and continue to show leadership. We are a mission-driven lab, and our students and staff are incredibly motivated,” Loureiro comments.

    “As MIT continues to lead the way toward the delivery of clean fusion power onto the grid, I have no doubt that Nuno is the right person to step into this key position at this critical time,” says Maria T. Zuber, MIT’s presidential advisor for science and technology policy. “I look forward to the steady advance of plasma physics and fusion science at MIT under Nuno’s leadership.”

    Over the last decade, there have been massive leaps forward in the field of fusion energy, driven in part by innovations like high-temperature superconducting magnets developed at the PSFC. Further progress is expected: Loureiro believes that “The next few years are certain to be an exciting time for us, and for fusion as a whole. It’s the dawn of a new era with burning plasma experiments” — a reference to the collaboration between the PSFC and Commonwealth Fusion Systems, a startup company spun out of the PSFC, to build SPARC, a fusion device that is slated to turn on in 2026 and produce a burning plasma that yields more energy than it consumes. “It’s going to be a watershed moment,” says Loureiro.

    He continues, “In addition, we have strong connections to inertial confinement fusion experiments, including those at Lawrence Livermore National Lab, and we’re looking forward to expanding our research into stellarators, which are another kind of magnetic fusion device.” Over recent years, the PSFC has significantly increased its collaboration with industrial partners such as Eni, IBM, and others. Loureiro sees great value in this: “These collaborations are mutually beneficial: they allow us to grow our research portfolio while advancing companies’ R&D efforts. It’s very dynamic and exciting.”

    Loureiro’s directorship begins as the PSFC is launching key tech development projects like LIBRA, a “blanket” of molten salt that can be wrapped around fusion vessels and perform double duty as a neutron energy absorber and a breeder for tritium (the fuel for fusion). Researchers at the PSFC have also developed a way to rapidly test the durability of materials being considered for use in a fusion power plant environment, and are now creating an experiment that will utilize a powerful microwave source called a gyrotron to irradiate candidate materials.

    Interest in fusion is at an all-time high; the demand for researchers and engineers, particularly in the nascent commercial fusion industry, is reflected by the record number of graduate students studying at the PSFC — more than 90 across seven affiliated MIT departments. The PSFC’s classrooms are full, and Loureiro notes a palpable sense of excitement. “Students are our greatest strength,” says Loureiro. “They come here to do world-class research but also to grow as individuals, and I want to give them a great place to do that. Supporting those experiences, making sure they can be as successful as possible, is one of my top priorities.” Loureiro plans to continue teaching and advising students after his appointment begins.

    MIT President Sally Kornbluth’s recently announced Climate Project is a clarion call for Loureiro: “It’s not hyperbole to say MIT is where you go to find solutions to humanity’s biggest problems,” he says. “Fusion is a hard problem, but it can be solved with resolve and ingenuity — characteristics that define MIT. Fusion energy will change the course of human history. It’s both humbling and exciting to be leading a research center that will play a key role in enabling that change.”

  • Lessons from Fukushima: Prepare for the unlikely

    When a devastating earthquake and tsunami overwhelmed the protective systems at the Fukushima Dai’ichi nuclear power plant complex in Japan in March 2011, it triggered a sequence of events leading to one of the worst releases of radioactive materials in the world to date. Although nuclear energy is having a revival as a low-emissions energy source to mitigate climate change, the Fukushima accident is still cited as a reason for hesitancy in adopting it.

    A new study synthesizes information from multidisciplinary sources to understand how the Fukushima Dai’ichi disaster unfolded, and points to the importance of mitigation measures and last lines of defense — even against accidents considered highly unlikely. These procedures have received relatively little attention, but they are critical in determining how severe the consequences of a reactor failure will be, the researchers say.

    The researchers note that their synthesis is one of the few attempts to look at data across disciplinary boundaries, including: the physics and engineering of what took place within the plant’s systems, the plant operators’ actions throughout the emergency, actions by emergency responders, the meteorology of radionuclide releases and transport, and the environmental and health consequences documented since the event.

    The study appears in the journal iScience, in an open-access paper by postdoc Ali Ayoub and Professor Haruko Wainwright at MIT, along with others in Switzerland, Japan, and New Mexico.

    Since 2013, Wainwright has been leading the research to integrate all the radiation monitoring data in the Fukushima region into integrated maps. “I was staring at the contamination map for nearly 10 years, wondering what created the main plume extending in the northwest direction, but I could not find exact information,” Wainwright says. “Our study is unique because we started from the consequence, the contamination map, and tried to identify the key factors for the consequence. Other people study the Fukushima accident from the root cause, the tsunami.”

    One thing they found was that while all the operating reactors, units 1, 2, and 3, suffered core meltdowns as a result of the failure of emergency cooling systems, units 1 and 3 — although they did experience hydrogen explosions — did not release as much radiation to the environment because their venting systems essentially worked to relieve pressure inside the containment vessels as intended. But the same system in unit 2 failed badly.

    “People think that the hydrogen explosion or the core meltdown were the worst things, or the major driver of the radiological consequences of the accident,” Wainwright says, “but our analysis found that’s not the case.” Much more significant in terms of the radiological release was the failure of the one venting mechanism.

    “There is a pressure-release mechanism that goes through water where a lot of the radionuclides get filtered out,” she explains. That system was effective in units 1 and 3, filtering out more than 90 percent of the radioactive elements before the gas was vented. However, “in unit 2, that pressure release mechanism got stuck, and the operators could not manually open it.” A hydrogen explosion in unit 1 had damaged the pressure relief mechanism of unit 2. This led to a breach of the containment structure and direct, unfiltered venting to the atmosphere, which, according to the new study, was what produced the greatest amount of contamination from the whole weeks-long event.
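    For a rough sense of scale: if wet venting filters out more than 90 percent of the radionuclides, then an unfiltered release carries more than ten times the activity of a filtered one for the same amount of vented gas. A minimal sketch (the 90 percent figure is the one quoted above; the source activity is an arbitrary placeholder):

```python
# Filtered vs. unfiltered venting, using the ">90 percent" filtration figure quoted above.
# The source activity value is an arbitrary placeholder, not a real inventory estimate.
source_activity = 1.0          # arbitrary units of radioactivity carried by the vented gas
filtration_efficiency = 0.90   # fraction removed by wet venting, as in units 1 and 3

released_filtered = source_activity * (1.0 - filtration_efficiency)
released_unfiltered = source_activity   # unit 2: containment breach, no wet filtering

print(f"Unfiltered release is {released_unfiltered / released_filtered:.0f}x the filtered release")
```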

    Another factor was the timing of the attempt to vent the pressure buildup in the reactor. Guidelines at the time, and to this day in many reactors, specified that no venting should take place until the pressure inside the reactor containment vessel reached a specified threshold, with no regard to the wind directions at the time. In the case of Fukushima, an earlier venting could have dramatically reduced the impact: Much of the release happened when winds were blowing directly inland, but earlier the wind had been blowing offshore.

    “That pressure-release mechanism has not been a major focus of the engineering community,” she says. While there is appropriate attention to measures that prevent a core meltdown in the first place, “this sort of last line of defense has not been the main focus and should get more attention.”

    Wainwright says the study also underlines several successes in the management of the Fukushima accident. Many of the safety systems did work as they were designed. For example, even though the oldest reactor, unit 1, suffered the greatest internal damage, it released little radioactive material. Most people were able to evacuate from the 20-kilometer (12-mile) zone before the largest release happened. The mitigation measures were “somewhat successful,” Wainwright says. But there was tremendous confusion and anger during and after the accident because there were no preparations in place for such an event.

    Much work has focused on ways to prevent the kind of accidents that happened at Fukushima — for example, in the U.S. reactor operators can deploy portable backup power supplies to maintain proper reactor cooling at any reactor site. But the ongoing situation at the Zaporizhzhia nuclear complex in Ukraine, where nuclear safety is challenged by acts of war, demonstrates that despite engineers’ and operators’ best efforts to prevent it, “the totally unexpected could still happen,” Wainwright says.

    “The big-picture message is that we should have equal attention to both prevention and mitigation of accidents,” she says. “This is the essence of resilience, and it applies beyond nuclear power plants to all essential infrastructure of a functioning society, for example, the electric grid, the food and water supply, the transportation sector, etc.”

    One thing the researchers recommend is that in designing evacuation protocols, planners should make more effort to learn from much more frequent disasters such as wildfires and hurricanes. “We think getting more interdisciplinary, transdisciplinary knowledge from other kinds of disasters would be essential,” she says. Most of the emergency response strategies presently in place, she says, were designed in the 1980s and ’90s, and need to be modernized. “Consequences can be mitigated. A nuclear accident does not have to be a catastrophe, as is often portrayed in popular culture,” Wainwright says.

    The research team included Giovanni Sansavini at ETH Zurich in Switzerland; Randall Gauntt at Sandia National Laboratories in New Mexico; and Kimiaki Saito at the Japan Atomic Energy Agency.