More stories

  • Helping to make nuclear fusion a reality

    Until she served in the Peace Corps in Malawi, Rachel Bielajew was open to a career reboot. Having studied nuclear engineering as an undergraduate at the University of Michigan at Ann Arbor, she had graduate school on her mind. But seeing the drastic impacts of climate change play out in real time in Malawi — the lives of the country’s subsistence farmers swing wildly, depending on the rains — convinced Bielajew of the importance of nuclear engineering. Bielajew was struck that her high school students in the small town of Chisenga had a shaky understanding of math but universally understood global warming. “The concept of the changing world due to human impact was evident, and they could see it,” Bielajew says.

    Bielajew was looking to work on solutions that could positively impact global problems and feed her love of physics. Nuclear engineering, especially the study of fusion as a carbon-free energy source, checked off both boxes. Bielajew is now a fourth-year doctoral candidate in the Department of Nuclear Science and Engineering (NSE). She researches magnetic confinement fusion in the Plasma Science and Fusion Center (PSFC) with Professor Anne White.

    Researching fusion’s big challenge

    You need to confine plasma effectively in order to generate the extremely high temperatures (100 million degrees Celsius) fusion needs, without melting the walls of the tokamak, the device that hosts these reactions. Magnets can do the job, but “plasmas are weird, they behave strangely and are challenging to understand,” Bielajew says. Small instabilities in plasma can coalesce into fluctuating turbulence that can drive heat and particles out of the machine.

    In high-confinement mode, the edges of the plasma have less tolerance for such unruly behavior. “The turbulence gets damped out and sheared apart at the edge,” Bielajew says. This might seem like a good thing, but high-confinement plasmas have their own challenges. They are so tightly bound that they create edge-localized modes (ELMs), bursts of particles and energy that can severely damage the machine.

    The questions Bielajew is looking to answer: How do we get high confinement without ELMs? How do turbulence and transport play a role in plasmas? “We do not fully understand turbulence, even though we have studied it for a long time,” Bielajew says. “It is a big and important problem to solve for fusion to be a reality. I like that challenge.”

    A love of science

    Confronting such challenges head-on has been part of Bielajew’s toolkit since she was a child growing up in Ann Arbor, Michigan. Her father, Alex Bielajew, is a professor of nuclear engineering at the University of Michigan, and Bielajew’s mother also pursued graduate studies.

    Bielajew’s parents encouraged her to follow her own path and she found it led to her father’s chosen profession: nuclear engineering. Once she decided to pursue research in fusion, MIT stood out as a school she could set her sights on. “I knew that MIT had an extensive program in fusion and a lot of faculty in the field,” Bielajew says. The mechanics of the application were challenging: Chisenga had limited internet access, so Bielajew had to ride on the back of a pickup truck to meet a friend in a city a few hours away and use his phone as a hotspot to send the documents.

    A similar tenacity has surfaced in Bielajew’s approach to research during the Covid-19 pandemic. Working off a blueprint, Bielajew built a correlation electron cyclotron emission diagnostic, which measures turbulent electron temperature fluctuations. Through a collaboration, Bielajew conducts her plasma research at the ASDEX Upgrade tokamak in Germany. Ordinarily, Bielajew would ship the diagnostic to Germany, follow it there to install it, and conduct the research in person. The pandemic threw a wrench in those plans, so Bielajew shipped the diagnostic and relied on team members to install it. She Zooms into the control room and trusts others to run the plasma experiments.

    DEI advocate

    Bielajew is very hands-on with another endeavor: improving diversity, equity, and inclusion (DEI) in her own backyard. Having grown up with parental encouragement and in an environment that never doubted her place as a woman in engineering, Bielajew realizes not everyone has the same opportunities. “I wish that the world was in a place where all I had to do was care about my research, but it’s not,” Bielajew says. While science can solve many problems, more fundamental ones about equity need humans to act in specific ways, she points out. “I want to see more women represented, more people of color. Everyone needs a voice in building a better world,” Bielajew says.

    To get there, Bielajew co-launched NSE’s Graduate Application Assistance Program, which connects underrepresented student applicants with NSE mentors. She has been the DEI officer with NSE’s student group, ANS, and is very involved in the department’s DEI committee.

    As for future research, Bielajew hopes to concentrate on the experiments that make her question existing paradigms about plasmas under high confinement. Bielajew has registered more head-scratching “hmm” moments than “a-ha” ones. Measurements from her experiments drive the need for more intensive study.

    Bielajew’s dogs, Dobby and Winky, who came home with her from Malawi, keep her company through it all.

  • An energy-storage solution that flows like soft-serve ice cream

    Batteries made from an electrically conductive mixture the consistency of molasses could help solve a critical piece of the decarbonization puzzle. An interdisciplinary team from MIT has found that an electrochemical technology called a semisolid flow battery can be a cost-competitive form of energy storage and backup for variable renewable energy (VRE) sources such as wind and solar. The group’s research is described in a paper published in Joule.

    “The transition to clean energy requires energy storage systems of different durations for when the sun isn’t shining and the wind isn’t blowing,” says Emre Gençer, a research scientist with the MIT Energy Initiative (MITEI) and a member of the team. “Our work demonstrates that a semisolid flow battery could be a lifesaving as well as economical option when these VRE sources can’t generate power for a day or longer — in the case of natural disasters, for instance.”

    The rechargeable zinc-manganese dioxide (Zn-MnO2) battery the researchers created beat out other long-duration energy storage contenders. “We performed a comprehensive, bottom-up analysis to understand how the battery’s composition affects performance and cost, looking at all the trade-offs,” says Thaneer Malai Narayanan SM ’18, PhD ’21. “We showed that our system can be cheaper than others, and can be scaled up.”

    Narayanan, who conducted this work at MIT as part of his doctorate in mechanical engineering, is the lead author of the paper. Additional authors include Gençer, Yunguang Zhu, a postdoc in the MIT Electrochemical Energy Lab; Gareth McKinley, the School of Engineering Professor of Teaching Innovation and professor of mechanical engineering at MIT; and Yang Shao-Horn, the JR East Professor of Engineering, a professor of mechanical engineering and of materials science and engineering, and a member of the Research Laboratory of Electronics (RLE), who directs the MIT Electrochemical Energy Lab.

    Going with the flow

    In 2016, Narayanan began his graduate studies, joining the Electrochemical Energy Lab, a hotbed of research into climate change solutions centered on innovative battery chemistry and on decarbonizing fuels and chemicals. One exciting opportunity for the lab: developing low- and no-carbon backup energy systems suitable for grid-scale needs when VRE generation flags.

    While the lab cast a wide net, investigating energy conversion and storage using solid oxide fuel cells, lithium-ion batteries, and metal-air batteries, among others, Narayanan took a particular interest in flow batteries. In these systems, two different chemical (electrolyte) solutions with either negative or positive ions are pumped from separate tanks, meeting across a membrane (called the stack). Here, the ion streams react, converting electrical energy to chemical energy — in effect, charging the battery. When there is demand for this stored energy, the solution gets pumped back to the stack to convert chemical energy into electrical energy again.

    The duration of time that flow batteries can discharge, releasing the stored electricity, is determined by the volume of positively and negatively charged electrolyte solutions streaming through the stack. In theory, as long as these solutions keep flowing, reacting, and converting the chemical energy to electrical energy, the battery systems can provide electricity.
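    The scaling described here can be sketched in a few lines. This is an illustrative calculation only; the function name and all numbers are invented for the example, not drawn from the study.

```python
# Toy illustration of the flow-battery scaling described above:
# discharge duration grows with the electrolyte tank volume, while
# power is set by the stack. All figures below are hypothetical.

def discharge_hours(tank_volume_l: float,
                    energy_density_wh_per_l: float,
                    stack_power_w: float) -> float:
    """Hours of discharge = stored energy / stack power."""
    stored_energy_wh = tank_volume_l * energy_density_wh_per_l
    return stored_energy_wh / stack_power_w

# Doubling tank volume doubles duration without touching the stack.
base = discharge_hours(1000, 25, 1000)     # 1,000 L at 25 Wh/L, 1 kW stack
doubled = discharge_hours(2000, 25, 1000)  # same stack, twice the tanks
print(base, doubled)  # 25.0 50.0
```

    Doubling the tanks doubles the duration at no cost to the stack, which is the architectural advantage flow batteries offer for long-duration storage.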

    “For backup lasting more than a day, the architecture of flow batteries suggests they can be a cheap option,” says Narayanan. “You recharge the solution in the tanks from sun and wind power sources.” This renders the entire system carbon free.

    But while the promise of flow battery technologies has beckoned for at least a decade, the uneven performance and expense of materials required for these battery systems has slowed their implementation. So, Narayanan set out on an ambitious journey: to design and build a flow battery that could back up VRE systems for a day or more, storing and discharging energy with the same or greater efficiency than backup rivals; and to determine, through rigorous cost analysis, whether such a system could prove economically viable as a long-duration energy option.

    Multidisciplinary collaborators

    To attack this multipronged challenge, Narayanan’s project brought together, in his words, “three giants, scientists all well-known in their fields”:  Shao-Horn, who specializes in chemical physics and electrochemical science, and design of materials; Gençer, who creates detailed economic models of emergent energy systems at MITEI; and McKinley, an expert in rheology, the physics of flow. These three also served as his thesis advisors.

    “I was excited to work in such an interdisciplinary team, which offered a unique opportunity to create a novel battery architecture by designing charge transfer and ion transport within flowable semi-solid electrodes, and to guide battery engineering using techno-economics of such flowable batteries,” says Shao-Horn.

    While other flow battery systems in contention, such as the vanadium redox flow battery, offer the storage capacity and energy density to back up megawatt and larger power systems, they depend on expensive chemical ingredients that make them bad bets for long-duration purposes. Narayanan was on the hunt for less-pricey chemical components that also feature rich energy potential.

    Through a series of bench experiments, the researchers came up with a novel electrode (electrical conductor) for the battery system: a mixture containing dispersed manganese dioxide (MnO2) particles, shot through with an electrically conductive additive, carbon black. This compound reacts with a conductive zinc solution or zinc plate at the stack, enabling efficient electrochemical energy conversion. The fluid properties of this battery are far removed from the watery solutions used by other flow batteries.

    “It’s a semisolid — a slurry,” says Narayanan. “Like thick, black paint, or perhaps a soft-serve ice cream,” suggests McKinley. The carbon black adds the pigment and the electric punch. To arrive at the optimal electrochemical mix, the researchers tweaked their formula many times.

    “These systems have to be able to flow under reasonable pressures, but also have a weak yield stress so that the active MnO2 particles don’t sink to the bottom of the flow tanks when the system isn’t being used, as well as not separate into a watery/oily clear fluid phase and a dense paste of carbon particles and MnO2,” says McKinley.

    This series of experiments informed the technoeconomic analysis. By “connecting the dots between composition, performance, and cost,” says Narayanan, he and Gençer were able to make system-level cost and efficiency calculations for the Zn-MnO2 battery.

    “Assessing the cost and performance of early technologies is very difficult, and this was an example of how to develop a standard method to help researchers at MIT and elsewhere,” says Gençer. “One message here is that when you include the cost analysis at the development stage of your experimental work, you get an important early understanding of your project’s cost implications.”

    In their final round of studies, Gençer and Narayanan compared the Zn-MnO2 battery to a set of equivalent electrochemical battery and hydrogen backup systems, looking at the capital costs of running them at durations of eight, 24, and 72 hours. Their findings surprised them: For battery discharges longer than a day, their semisolid flow battery beat out lithium-ion batteries and vanadium redox flow batteries. This was true even when factoring in the heavy expense of pumping the MnO2 slurry from tank to stack. “I was skeptical, and not expecting this battery would be competitive, but once I did the cost calculation, it was plausible,” says Gençer.
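    One way to see why the ranking flips with duration is to model total capital cost as a power-dependent term ($/kW) plus an energy-dependent term ($/kWh scaled by hours). The sketch below is a generic illustration of that trade-off with invented placeholder numbers, not the paper's actual cost figures.

```python
# Hedged sketch of the duration comparison: capital cost per kW of
# backup = power-related cost + energy-related cost x hours stored.
# The dollar figures below are invented placeholders.

def capital_cost_per_kw(power_cost: float,
                        energy_cost_per_kwh: float,
                        hours: float) -> float:
    return power_cost + energy_cost_per_kwh * hours

# Hypothetical: lithium-ion has a cheap power stage but costly $/kWh;
# a semisolid flow battery has a pricier stack but cheap tanks of slurry.
li_ion = {"power": 300, "energy": 200}
flow = {"power": 2000, "energy": 50}

for h in (8, 24, 72):
    li = capital_cost_per_kw(li_ion["power"], li_ion["energy"], h)
    fl = capital_cost_per_kw(flow["power"], flow["energy"], h)
    print(h, li, fl)  # flow wins once hours exceed the crossover
```

    With these made-up numbers the crossover lands near 11 hours; past that, the cheap incremental tank capacity dominates and the flow battery pulls ahead, mirroring the pattern the study reports.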

    But carbon-free battery backup is a very Goldilocks-like business: Different situations require different-duration solutions, whether an anticipated overnight loss of solar power, or a longer-term, climate-based disruption in the grid. “Lithium-ion is great for backup of eight hours and under, but the materials are too expensive for longer periods,” says Gençer. “Hydrogen is super expensive for very short durations, and good for very long durations, and we will need all of them.” This means it makes sense to continue working on the Zn-MnO2 system to see where it might fit in.

    “The next step is to take our battery system and build it up,” says Narayanan, who is working now as a battery engineer. “Our research also points the way to other chemistries that could be developed under the semi-solid flow battery platform, so we could be seeing this kind of technology used for energy storage in our lifetimes.”

    This research was supported by Eni S.p.A. through MITEI. Thaneer Malai Narayanan received an Eni-sponsored MIT Energy Fellowship during his work on the project.

  • The reasons behind lithium-ion batteries’ rapid cost decline

    Lithium-ion batteries, those marvels of lightweight power that have made possible today’s age of handheld electronics and electric vehicles, have plunged in cost since their introduction three decades ago at a rate similar to the drop in solar panel prices, as documented by a study published last March. But what brought about such an astonishing cost decline, of about 97 percent?
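    As rough context for the headline figure: a 97 percent decline means costs fell to 3 percent of their starting value, which over roughly three decades works out to a steady year-on-year drop of about 11 percent. A quick check (the 30-year span is an approximation):

```python
# Back-of-the-envelope annualized rate behind a 97% total decline.
# The 30-year span is approximate; this is arithmetic, not a model.

final_fraction = 0.03  # 3% of the original cost remains
years = 30             # approximate span since introduction

annual_factor = final_fraction ** (1 / years)
annual_decline_pct = (1 - annual_factor) * 100
print(round(annual_decline_pct, 1))  # ~11.0
```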

    Some of the researchers behind that earlier study have now analyzed what accounted for the extraordinary savings. They found that by far the biggest factor was work on research and development, particularly in chemistry and materials science. This outweighed the gains achieved through economies of scale, though that turned out to be the second-largest category of reductions.

    The new findings are being published today in the journal Energy & Environmental Science, in a paper by MIT postdoc Micah Ziegler, recent graduate student Juhyun Song PhD ’19, and Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society.

    The findings could be useful for policymakers and planners to help guide spending priorities in order to continue the pathway toward ever-lower costs for this and other crucial energy storage technologies, according to Trancik. Their work suggests that there is still considerable room for further improvement in electrochemical battery technologies, she says.

    The analysis required digging through a variety of sources, since much of the relevant information consists of closely held proprietary business data. “The data collection effort was extensive,” Ziegler says. “We looked at academic articles, industry and government reports, press releases, and specification sheets. We even looked at some legal filings that came out. We had to piece together data from many different sources to get a sense of what was happening.” He says they collected “about 15,000 qualitative and quantitative data points, across 1,000 individual records from approximately 280 references.”

    Data from the earliest times are hardest to access and can have the greatest uncertainties, Trancik says, but by comparing different data sources from the same period they have attempted to account for these uncertainties.

    Overall, she says, “we estimate that the majority of the cost decline, more than 50 percent, came from research-and-development-related activities.” That included both private sector and government-funded research and development, and “the vast majority” of that cost decline within that R&D category came from chemistry and materials research.

    That was an interesting finding, she says, because “there were so many variables that people were working on through very different kinds of efforts,” including the design of the battery cells themselves, their manufacturing systems, supply chains, and so on. “The cost improvement emerged from a diverse set of efforts and many people, and not from the work of only a few individuals.”

    The findings about the importance of investment in R&D were especially significant, Ziegler says, because much of this investment happened after lithium-ion battery technology was commercialized, a stage at which some analysts thought the research contribution would become less significant. Over roughly a 20-year period starting five years after the batteries’ introduction in the early 1990s, he says, “most of the cost reduction still came from R&D. The R&D contribution didn’t end when commercialization began. In fact, it was still the biggest contributor to cost reduction.”

    The study took advantage of an analytical approach that Trancik and her team initially developed to analyze the similarly precipitous drop in costs of silicon solar panels over the last few decades. They also applied the approach to understand the rising costs of nuclear energy. “This is really getting at the fundamental mechanisms of technological change,” she says. “And we can also develop these models looking forward in time, which allows us to uncover the levers that people could use to improve the technology in the future.”

    One advantage of the methodology Trancik and her colleagues have developed, she says, is that it helps to sort out the relative importance of different factors when many variables are changing all at once, which typically happens as a technology improves. “It’s not simply adding up the cost effects of these variables,” she says, “because many of these variables affect many different cost components. There’s this kind of intricate web of dependencies.” But the team’s methodology makes it possible to “look at how that overall cost change can be attributed to those variables, by essentially mapping out that network of dependencies,” she says.
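    As a loose illustration of the general idea (not the paper's actual methodology): if cost were a simple product of factors, log-differences would split the total change into additive shares, one per factor, even when all factors move at once. Every factor name and value below is invented.

```python
# Generic illustration of attributing a multiplicative cost change:
# shares of the total log-change sum to exactly 1. The factor names
# and numbers are invented; the real analysis maps a far more
# intricate web of dependencies.
import math

# Hypothetical model: cost = materials * yield_penalty * scale_factor
before = {"materials": 10.0, "yield": 2.0, "scale": 5.0}
after = {"materials": 2.0, "yield": 1.5, "scale": 2.0}

prod = lambda d: d["materials"] * d["yield"] * d["scale"]
total_log_change = math.log(prod(after) / prod(before))

# Each factor's share of the overall change:
shares = {k: math.log(after[k] / before[k]) / total_log_change
          for k in before}
print(shares)  # shares sum to 1.0; "materials" dominates here
```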

    This can help provide guidance on public spending, private investments, and other incentives. “What are all the things that different decision makers could do?” she asks. “What decisions do they have agency over so that they could improve the technology, which is important in the case of low-carbon technologies, where we’re looking for solutions to climate change and we have limited time and limited resources? The new approach allows us to potentially be a bit more intentional about where we make those investments of time and money.”

    “This paper collects data available in a systematic way to determine changes in the cost components of lithium-ion batteries between 1990-1995 and 2010-2015,” says Laura Diaz Anadon, a professor of climate change policy at Cambridge University, who was not connected to this research. “This period was an important one in the history of the technology, and understanding the evolution of cost components lays the groundwork for future work on mechanisms and could help inform research efforts in other types of batteries.”

    The research was supported by the Alfred P. Sloan Foundation, the Environmental Defense Fund, and the MIT Technology and Policy Program.

  • Radio-frequency wave scattering improves fusion simulations

    In the quest for fusion energy, understanding how radio-frequency (RF) waves travel (or “propagate”) in the turbulent interior of a fusion furnace is crucial to maintaining an efficient, continuously operating power plant. Transmitted by an antenna in the doughnut-shaped vacuum chamber common to magnetic confinement fusion devices called tokamaks, RF waves heat the plasma fuel and drive its current around the toroidal interior. The efficiency of this process can be affected by how the wave’s trajectory is altered (or “scattered”) by conditions within the chamber.

    Researchers have tried to study these RF processes using computer simulations to match the experimental conditions. A good match would validate the computer model, and raise confidence in using it to explore new physics and design future RF antennas that perform efficiently. While the simulations can accurately calculate how much total current is driven by RF waves, they do a poor job at predicting where exactly in the plasma this current is produced.

    Now, in a paper published in the Journal of Plasma Physics, MIT researchers suggest that the models for RF wave propagation used for these simulations have not properly taken into account the way these waves are scattered as they encounter dense, turbulent filaments present in the edge of the plasma known as the “scrape-off layer” (SOL).

    Bodhi Biswas, a graduate student at the Plasma Science and Fusion Center (PSFC) under the direction of Senior Research Scientist Paul Bonoli, School of Engineering Distinguished Professor of Engineering Anne White, and Principal Research Scientist Abhay Ram, is the paper’s lead author. Ram compares the scattering that occurs in this situation to a wave of water hitting a lily pad: “The wave crashing with the lily pad will excite a secondary, scattered wave that makes circular ripples traveling outward from the plant. The incoming wave has transferred energy to the scattered wave. Some of this energy is reflected backwards (in relation to the incoming wave), some travels forwards, and some is deflected to the side. The specifics all depend on the particular attributes of the wave, the water, and the lily pad. In our case, the lily pad is the plasma filament.”

    Until now, researchers have not properly taken these filaments and the scattering they provoke into consideration when modeling the turbulence inside a tokamak, leading to an underestimation of wave scattering. Using data from PSFC tokamak Alcator C-Mod, Biswas shows that using the new method of modeling RF-wave scattering from SOL turbulence provides results considerably different from older models, and a much better match to experiments. Notably, the “lower-hybrid” wave spectrum, crucial to driving plasma current in a steady-state tokamak, appears to scatter asymmetrically, an important effect not accounted for in previous models.

    Biswas’s advisor Paul Bonoli is well acquainted with traditional “ray-tracing” models, which evaluate a wave trajectory by dividing it into a series of rays. He has used this model, with its limitations, for decades in his own research to understand plasma behavior. Bonoli says he is pleased that “the research results in Bodhi’s doctoral thesis have refocused attention on the profound effect that edge turbulence can have on the propagation and absorption of radio-frequency power.”

    Although ray-tracing treatments of scattering do not fully capture all the wave physics, a “full-wave” model that does would be prohibitively expensive. To solve the problem economically, Biswas splits his analysis into two parts: (1) using ray tracing to model the trajectory of the wave in the tokamak assuming no turbulence, while (2) modifying this ray-trajectory with the new scattering model that accounts for the turbulent plasma filaments.

    “This scattering model is a full-wave model, but computed over a small region and in a simplified geometry so that it is very quick to do,” says Biswas. “The result is a ray-tracing model that, for the first time, accounts for full-wave scattering physics.”
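    A rough schematic of that two-part approach (emphatically not the authors' code): march a ray through a smooth plasma, then perturb its angle with a precomputed scattering kernel wherever it crosses the turbulent edge region. The geometry, step size, and kernel below are all stand-in assumptions.

```python
# Toy sketch of ray tracing plus an edge-scattering correction.
# Geometry is a flat 2D caricature; the kernel is a stand-in for the
# full-wave scattering model computed over the filament region.
import random

def trace_ray(steps, edge_region, scatter_kernel, dx=0.1):
    """March a ray in x; kick its angle (slope) with the scattering
    kernel whenever it lies inside the turbulent scrape-off layer."""
    x = y = angle = 0.0
    path = [(x, y)]
    for _ in range(steps):
        x += dx
        y += dx * angle
        if edge_region[0] <= x <= edge_region[1]:
            angle += scatter_kernel()  # turbulence-induced deflection
        path.append((x, y))
    return path

random.seed(0)
# A nonzero-mean kernel biases deflections to one side, echoing the
# asymmetric lower-hybrid scattering noted in the article.
kernel = lambda: random.gauss(0.02, 0.05)
path = trace_ray(steps=50, edge_region=(0.0, 1.0), scatter_kernel=kernel)
print(len(path))  # 51
```

    The cheap part (straight-line stepping) runs everywhere; the expensive physics is confined to the small edge region, which is the economy the hybrid approach exploits.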

    Biswas notes that this model bridges the gap between simple scattering models that fail to match experiment and full-wave models that are prohibitively expensive, providing reasonable accuracy at low cost.

    “Our results suggest scattering is an important effect, and that it must be taken into account when designing future RF antennas. The low cost of our scattering model makes this very doable.”

    “This is exciting progress,” says Syun’ichi Shiraiwa, staff research physicist at the Princeton Plasma Physics Laboratory. “I believe that Bodhi’s work provides a clear path to the end of a long tunnel we have been in. His work not only demonstrates that the wave scattering, once accurately accounted for, can explain the experimental results, but also answers a puzzling question: why previous scattering models were incomplete, and their results unsatisfying.”

    Work is now underway to apply this model to more plasmas from Alcator C-Mod and other tokamaks. Biswas believes that this new model will be particularly applicable to high-density tokamak plasmas, for which the standard ray-tracing model has been noticeably inaccurate. He is also excited that the model could be validated at the DIII-D National Fusion Facility, a fusion experiment on which the PSFC collaborates.

    “The DIII-D tokamak will soon be capable of launching lower hybrid waves and measuring its electric field in the scrape-off layer. These measurements could provide direct evidence of the asymmetric scattering effect predicted by our model.”

  • MIT Energy Initiative awards seven Seed Fund grants for early-stage energy research

    The MIT Energy Initiative (MITEI) has awarded seven Seed Fund grants to support novel, early-stage energy research by faculty and researchers at MIT. The awardees hail from a range of disciplines, but all strive to bring their backgrounds and expertise to address the global climate crisis by improving the efficiency, scalability, and adoption of clean energy technologies.

    “Solving climate change is truly an interdisciplinary challenge,” says MITEI Director Robert C. Armstrong. “The Seed Fund grants foster collaboration and innovation from across all five of MIT’s schools and one college, encouraging an ‘all hands on deck’ approach to developing the energy solutions that will prove critical in combating this global crisis.”

    This year, MITEI’s Seed Fund grant program received 70 proposals from 86 different principal investigators (PIs) across 25 departments, labs, and centers. Of these proposals, 31 involved collaborations between two or more PIs, including 24 that involved multiple departments.

    The winning projects reflect this collaborative nature with topics addressing the optimization of low-energy thermal cooling in buildings; the design of safe, robust, and resilient distributed power systems; and how to design and site wind farms with consideration of wind resource uncertainty due to climate change.

    Increasing public support for low-carbon technologies

    One winning team aims to leverage work done in the behavioral sciences to motivate sustainable behaviors and promote the adoption of clean energy technologies.

    “Objections to scalable low-carbon technologies such as nuclear energy and carbon sequestration have made it difficult to adopt these technologies and reduce greenhouse gas emissions,” says Howard Herzog, a senior research scientist at MITEI and co-PI. “These objections tend to neglect the sheer scale of energy generation required and the inability to meet this demand solely with other renewable energy technologies.”

    This interdisciplinary team — which includes researchers from MITEI, the Department of Nuclear Science and Engineering, and the MIT Sloan School of Management — plans to convene industry professionals and academics, as well as behavioral scientists, to identify common objections, design messaging to overcome them, and test whether these messaging campaigns have long-lasting impacts on attitudes toward scalable low-carbon technologies.

    “Our aim is to provide a foundation for shifting the public and policymakers’ views about these low-carbon technologies from something they, at best, tolerate, to something they actually welcome,” says co-PI David Rand, the Erwin H. Schell Professor and professor of management science and brain and cognitive sciences at MIT Sloan School of Management.

    Siting and designing wind farms

    Michael Howland, an assistant professor of civil and environmental engineering, will use his Seed Fund grant to develop a foundational methodology for wind farm siting and design that accounts for the uncertainty of wind resources resulting from climate change.

    “The optimal wind farm design and its resulting cost of energy is inherently dependent on the wind resource at the location of the farm,” says Howland. “But wind farms are currently sited and designed based on short-term climate records that do not account for the future effects of climate change on wind patterns.”

    Wind farms are capital-intensive infrastructure that cannot be relocated and often have lifespans exceeding 20 years — all of which make it especially important that developers choose the right locations and designs based not only on wind patterns in the historical climate record, but also based on future predictions. The new siting and design methodology has the potential to replace current industry standards to enable a more accurate risk analysis of wind farm development and energy grid expansion under climate change-driven energy resource uncertainty.
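    The shift Howland describes can be caricatured as moving from optimizing against a single historical wind record to optimizing an expectation over weighted climate scenarios. The sites, wind speeds, and scenario weights below are all invented for illustration.

```python
# Toy siting-under-uncertainty sketch: score each candidate site by
# its probability-weighted wind resource across climate scenarios
# rather than by the historical record alone. All values invented.

sites = {
    "coastal": {"historical": 9.0, "warming_low": 8.5, "warming_high": 7.0},
    "inland": {"historical": 7.5, "warming_low": 7.6, "warming_high": 7.8},
}
scenario_weights = {"historical": 0.2, "warming_low": 0.5, "warming_high": 0.3}

def expected_wind(site_speeds):
    """Probability-weighted mean wind speed across scenarios."""
    return sum(scenario_weights[s] * v for s, v in site_speeds.items())

best = max(sites, key=lambda s: expected_wind(sites[s]))
print(best, round(expected_wind(sites[best]), 2))
```

    With these made-up numbers the historically windier coastal site still wins, but a heavier weight on the high-warming scenario could flip the choice, which is precisely the sensitivity a climate-aware methodology is meant to expose.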

    Membraneless electrolyzers for hydrogen production

    Producing hydrogen from renewable energy-powered water electrolyzers is central to realizing a sustainable and low-carbon hydrogen economy, says Kripa Varanasi, a professor of mechanical engineering and a Seed Fund award recipient. The idea of using hydrogen as a fuel has existed for decades, but it has yet to be widely realized at a considerable scale. Varanasi hopes to change that with his Seed Fund grant.

    “The critical economic hurdle for successful electrolyzers to overcome is the minimization of the capital costs associated with their deployment,” says Varanasi. “So, an immediate task at hand to enable electrochemical hydrogen production at scale will be to maximize the effectiveness of the most mature, least complex, and least expensive water electrolyzer technologies.”

    To do this, he aims to combine the advantages of existing low-temperature alkaline electrolyzer designs with a novel membraneless electrolyzer technology that harnesses a gas management system architecture to minimize complexity and costs, while also improving efficiency. Varanasi hopes his project will demonstrate scalable concepts for cost-effective electrolyzer technology design to help realize a decarbonized hydrogen economy.

    Since its establishment in 2008, the MITEI Seed Fund Program has supported 194 energy-focused seed projects through grants totaling more than $26 million. This funding comes primarily from MITEI’s founding and sustaining members, supplemented by gifts from generous donors.

    Recipients of the 2021 MITEI Seed Fund grants are:

    “Design automation of safe, robust, and resilient distributed power systems” — Chuchu Fan of the Department of Aeronautics and Astronautics
    “Advanced MHD topping cycles: For fission, fusion, solar power plants” — Jeffrey Freidberg of the Department of Nuclear Science and Engineering and Dennis Whyte of the Plasma Science and Fusion Center
    “Robust wind farm siting and design under climate-change-driven wind resource uncertainty” — Michael Howland of the Department of Civil and Environmental Engineering
    “Low-energy thermal comfort for buildings in the Global South: Optimal design of integrated structural-thermal systems” — Leslie Norford of the Department of Architecture and Caitlin Mueller of the departments of Architecture and Civil and Environmental Engineering
    “New low-cost, high energy-density boron-based redox electrolytes for nonaqueous flow batteries” — Alexander Radosevich of the Department of Chemistry
    “Increasing public support for scalable low-carbon energy technologies using behavioral science insights” — David Rand of the MIT Sloan School of Management, Koroush Shirvan of the Department of Nuclear Science and Engineering, Howard Herzog of the MIT Energy Initiative, and Jacopo Buongiorno of the Department of Nuclear Science and Engineering
    “Membraneless electrolyzers for efficient hydrogen production using nanoengineered 3D gas capture electrode architectures” — Kripa Varanasi of the Department of Mechanical Engineering

    Coupling power and hydrogen sector pathways to benefit decarbonization

    Governments and companies worldwide are increasing their investments in hydrogen research and development, indicating a growing recognition that hydrogen could play a significant role in meeting global energy system decarbonization goals. Since hydrogen is light, energy-dense, storable, and produces no direct carbon dioxide emissions at the point of use, this versatile energy carrier has the potential to be harnessed in a variety of ways in a future clean energy system.

    Often considered in the context of grid-scale energy storage, hydrogen has garnered renewed interest, in part due to expectations that our future electric grid will be dominated by variable renewable energy (VRE) sources such as wind and solar, as well as decreasing costs for water electrolyzers — both of which could make clean, “green” hydrogen more cost-competitive with fossil-fuel-based production. But hydrogen’s versatility as a clean energy fuel also makes it an attractive option to meet energy demand and to open pathways for decarbonization in hard-to-abate sectors where direct electrification is difficult, such as transportation, buildings, and industry.

    “We’ve seen a lot of progress and analysis around pathways to decarbonize electricity, but we may not be able to electrify all end uses. This means that just decarbonizing electricity supply is not sufficient, and we must develop other decarbonization strategies as well,” says Dharik Mallapragada, a research scientist at the MIT Energy Initiative (MITEI). “Hydrogen is an interesting energy carrier to explore, but understanding the role for hydrogen requires us to study the interactions between the electricity system and a future hydrogen supply chain.”

    In a recent paper, researchers from MIT and Shell present a framework to systematically study the role and impact of hydrogen-based technology pathways in a future low-carbon, integrated energy system, taking into account interactions with the electric grid and the spatio-temporal variations in energy demand and supply. The developed framework co-optimizes infrastructure investment and operation across the electricity and hydrogen supply chain under various emissions price scenarios. When applied to a Northeast U.S. case study, the researchers find this approach results in substantial benefits — in terms of costs and emissions reduction — as it takes advantage of hydrogen’s potential to provide the electricity system with a large flexible load when produced through electrolysis, while also enabling decarbonization of difficult-to-electrify, end-use sectors.

    The research team includes Mallapragada; Guannan He, a postdoc at MITEI; Abhishek Bose, a graduate research assistant at MITEI; Clara Heuberger-Austin, a researcher at Shell; and Emre Gençer, a research scientist at MITEI. Their findings are published in the journal Energy & Environmental Science.

    Cross-sector modeling

    “We need a cross-sector framework to analyze each energy carrier’s economics and role across multiple systems if we are to really understand the cost/benefits of direct electrification or other decarbonization strategies,” says He.

    To do that analysis, the team developed the Decision Optimization of Low-carbon Power-HYdrogen Network (DOLPHYN) model, which allows the user to study the role of hydrogen in low-carbon energy systems, the effects of coupling the power and hydrogen sectors, and the trade-offs between various technology options across both supply chains — spanning production, transport, storage, and end use, and their impact on decarbonization goals.

    “We are seeing great interest from industry and government, because they are all asking questions about where to invest their money and how to prioritize their decarbonization strategies,” says Gençer. Heuberger-Austin adds, “Being able to assess the system-level interactions between electricity and the emerging hydrogen economy is of paramount importance to drive technology development and support strategic value chain decisions. The DOLPHYN model can be instrumental in tackling those kinds of questions.”

    For a predefined set of electricity and hydrogen demand scenarios, the model determines the least-cost technology mix across the power and hydrogen sectors while adhering to a variety of operation and policy constraints. The model can incorporate a range of technology options — from VRE generation to carbon capture and storage (CCS) used with both power and hydrogen generation to trucks and pipelines used for hydrogen transport. With its flexible structure, the model can be readily adapted to represent emerging technology options and evaluate their long-term value to the energy system.

    As an important addition, the model takes into account process-level carbon emissions by allowing the user to add a cost penalty on emissions in both sectors. “If you have a limited emissions budget, we are able to explore the question of where to prioritize the limited emissions to get the best bang for your buck in terms of decarbonization,” says Mallapragada.
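
    In highly simplified form, the kind of cross-sector trade-off the model weighs can be sketched as a merit-order dispatch toy. This is not the DOLPHYN formulation, which co-optimizes investment and operation as a full optimization model; every cost, efficiency, and emission factor below is an illustrative assumption, not a value from the study.

    ```python
    # Toy sketch of power-hydrogen sector coupling under a carbon price.
    # NOT the DOLPHYN model: all costs, efficiencies, and emission factors
    # here are illustrative assumptions, not values from the MIT/Shell study.

    def dispatch(carbon_price, elec_demand=1000.0, h2_demand=5000.0, wind_cap=1200.0):
        """Meet electricity (MWh) and hydrogen (kg) demand at least cost."""
        ELEC_PER_KG = 0.055                       # MWh of electricity per kg of H2
        wind_cost, gas_cost = 15.0, 60.0          # $/MWh
        gas_emis = 0.4                            # tCO2/MWh for gas generation
        smr_cost, smr_emis = 1.2, 0.010           # $/kg and tCO2/kg for gas-based H2

        # Wind is always cheaper than gas in this sketch, so it serves demand first.
        wind = min(wind_cap, elec_demand)
        gen = {"wind": wind, "gas": elec_demand - wind}

        # Hydrogen supply options, $/kg including the carbon penalty:
        smr_eff = smr_cost + carbon_price * smr_emis
        elz_on_wind = 1.0 + ELEC_PER_KG * wind_cost                      # zero-carbon
        elz_on_gas = 1.0 + ELEC_PER_KG * (gas_cost + carbon_price * gas_emis)

        h2 = {"electrolysis": 0.0, "smr": 0.0}
        remaining = h2_demand
        if elz_on_wind < smr_eff:                 # soak up spare wind as flexible load
            spare_wind_kg = (wind_cap - wind) / ELEC_PER_KG
            h2["electrolysis"] = min(spare_wind_kg, remaining)
            remaining -= h2["electrolysis"]
        # Fill the rest with the cheaper of gas-based H2 or gas-powered electrolysis.
        if smr_eff <= elz_on_gas:
            h2["smr"] = remaining
        else:
            h2["electrolysis"] += remaining
        return gen, h2
    ```

    With no carbon price, gas-based hydrogen wins outright; at a price of, say, $100 per ton of CO2, electrolysis becomes the cheapest way to absorb otherwise-unused wind, illustrating how a carbon penalty redirects a limited emissions budget across both sectors at once.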

    Insights from a case study

    To test their model, the researchers investigated the Northeast U.S. energy system under a variety of demand, technology, and carbon price scenarios. While their major conclusions can be generalized for other regions, the Northeast proved to be a particularly interesting case study. This region has current legislation and regulatory support for renewable generation, as well as increasing emission-reduction targets, a number of which are quite stringent. It also has a high demand for energy for heating — a sector that is difficult to electrify and could particularly benefit from hydrogen and from coupling the power and hydrogen systems.

    The researchers find that when combining the power and hydrogen sectors through electrolysis or hydrogen-based power generation, there is more operational flexibility to support VRE integration in the power sector and a reduced need for alternative grid-balancing supply-side resources such as battery storage or dispatchable gas generation, which in turn reduces the overall system cost. This increased VRE penetration also leads to a reduction in emissions compared to scenarios without sector-coupling. “The flexibility that electricity-based hydrogen production provides in terms of balancing the grid is as important as the hydrogen it is going to produce for decarbonizing other end uses,” says Mallapragada. They found this type of grid interaction to be more favorable than conventional hydrogen-based electricity storage, which can incur additional capital costs and efficiency losses when converting hydrogen back to power. This suggests that the role of hydrogen in the grid could be more beneficial as a source of flexible demand than as storage.
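
    The round-trip penalty behind that last point is simple arithmetic: converting electricity to hydrogen and back pays two conversion losses, while using electrolyzers as a flexible load pays only one. With illustrative efficiencies, assumed for this sketch rather than taken from the paper:

    ```python
    # Why flexible electrolysis load can beat hydrogen-to-power storage.
    # Both efficiency figures are illustrative assumptions.
    ETA_ELECTROLYSIS = 0.70   # electricity -> hydrogen (assumed)
    ETA_H2_TO_POWER = 0.50    # hydrogen -> electricity (assumed)

    round_trip = ETA_ELECTROLYSIS * ETA_H2_TO_POWER
    print(f"{round_trip:.0%} of stored electricity survives the round trip")  # 35%
    ```

    Under these assumptions roughly two-thirds of the energy is lost on a round trip back to the grid, whereas hydrogen sent onward to heating, transport, or industry incurs only the first loss — one way to see why the model favors electrolysis as flexible demand over hydrogen-fired storage.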

    The researchers’ multi-sector modeling approach also highlighted that CCS is more cost-effective when utilized in the hydrogen supply chain than in the power sector. They note that, counter to this observation, current projections suggest that by the end of the decade six times more CCS projects will be deployed in the power sector than for hydrogen production — a gap that underscores the need for more cross-sectoral modeling when planning future energy systems.

    In this study, the researchers tested the robustness of their conclusions against a number of factors, such as how the inclusion of non-combustion greenhouse gas emissions (including methane emissions) from natural gas used in power and hydrogen production impacts the model outcomes. They find that including the upstream emissions footprint of natural gas within the model boundary does not impact the value of sector coupling in regards to VRE integration and cost savings for decarbonization; in fact, the value actually grows because of the increased emphasis on electricity-based hydrogen production over natural gas-based pathways.

    “You cannot achieve climate targets unless you take a holistic approach,” says Gençer. “This is a systems problem. There are sectors that you cannot decarbonize with electrification, and there are other sectors that you cannot decarbonize without carbon capture, and if you think about everything together, there is a synergistic solution that significantly minimizes the infrastructure costs.”

    This research was supported, in part, by Shell Global Solutions International B.V. in Amsterdam, the Netherlands, and MITEI’s Low-Carbon Energy Centers for Electric Power Systems and Carbon Capture, Utilization, and Storage.

    Crossing disciplines, adding fresh eyes to nuclear engineering

    Sometimes patterns repeat in nature. Spirals appear in sunflowers and hurricanes. Branches occur in veins and lightning. Limiao Zhang, a doctoral student in MIT’s Department of Nuclear Science and Engineering, has found another similarity: between street traffic and boiling water, with implications for preventing nuclear meltdowns.

    Growing up in China, Zhang enjoyed watching her father repair things around the house. He couldn’t fulfill his dream of becoming an engineer, instead joining the police force, but Zhang did have that opportunity and studied mechanical engineering at Three Gorges University. Being one of four girls among about 50 boys in the major didn’t discourage her. “My father always told me girls can do anything,” she says. She graduated at the top of her class.

    In college, she and a team of classmates won a national engineering competition. They designed and built a model of a carousel powered by solar, hydroelectric, and pedal power. One judge asked how long the system could operate safely. “I didn’t have a perfect answer,” she recalls. She realized that engineering means designing products that not only function, but are resilient. So for her master’s degree, at Beihang University, she turned to industrial engineering and analyzed the reliability of critical infrastructure, in particular traffic networks.

    “Among all the critical infrastructures, nuclear power plants are quite special,” Zhang says. “Although one can provide very enormous carbon-free energy, once it fails, it can cause catastrophic results.” So she decided to switch fields again and study nuclear engineering. At the time she had no nuclear background, and hadn’t studied in the United States, but “I tried to step out of my comfort zone,” she says. “I just applied and MIT welcomed me.” Her supervisor, Matteo Bucci, and her classmates explained the basics of fission reactions as she adjusted to the new material, language, and environment. She doubted herself — “my friend told me, ‘I saw clouds above your head’” — but she passed her first-year courses and published her first paper soon afterward.

    Much of the work in Bucci’s lab deals with what’s called the boiling crisis. In many applications, such as nuclear plants and powerful computers, water cools things. When a hot surface boils water, bubbles cling to the surface before rising, but if too many form, they merge into a layer of vapor that insulates the surface. The heat has nowhere to go — a boiling crisis.

    Bucci invited Zhang into his lab in part because she saw a connection between traffic and heat transfer. The data plots of both phenomena look surprisingly similar. “The mathematical tools she had developed for the study of traffic jams were a completely different way of looking into our problem,” Bucci says, “by using something which is intuitively not connected.”

    One can view bubbles as cars. The more there are, the more they interfere with each other. People studying boiling had focused on the physics of individual bubbles. Zhang instead uses statistical physics to analyze collective patterns of behavior. “She brings a different set of skills, a different set of knowledge, to our research,” says Guanyu Su, a postdoc in the lab. “That’s very refreshing.”

    In her first paper on the boiling crisis, published in Physical Review Letters, Zhang used theory and simulations to identify scale-free behavior in boiling: just as in traffic, the same patterns appear whether zoomed in or out, in terms of space or time. Both small and large bubbles matter. Using this insight, the team found certain physical parameters that could predict a boiling crisis. Zhang’s mathematical tools both explain experimental data and suggest new experiments to try. For a second paper, the team collected more data and found ways to predict the boiling crisis in a wider variety of conditions.
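
    “Scale-free” here means the statistics follow a power law, so no single bubble size or event duration sets a characteristic scale. The snippet below is a generic, textbook-style illustration of generating and fitting power-law data — not the analysis from the Physical Review Letters paper:

    ```python
    # Generic illustration of scale-free (power-law) statistics, in the spirit
    # of the traffic-to-boiling analogy; NOT the analysis from Zhang's paper.
    import math
    import random

    def sample_power_law(n, alpha, xmin=1.0, seed=7):
        """Draw n samples from p(x) ~ x**(-alpha) for x >= xmin (inverse transform)."""
        rng = random.Random(seed)
        return [xmin * (1.0 - rng.random()) ** (-1.0 / (alpha - 1.0)) for _ in range(n)]

    def hill_estimate(xs, xmin=1.0):
        """Maximum-likelihood (Hill) estimate of the power-law exponent."""
        logs = [math.log(x / xmin) for x in xs if x >= xmin]
        return 1.0 + len(logs) / sum(logs)

    sizes = sample_power_law(50_000, alpha=2.5)
    print(round(hill_estimate(sizes), 2))   # close to the true exponent 2.5
    ```

    On synthetic data the maximum-likelihood estimate recovers the exponent; in practice one would also test how well a power law actually fits the measured bubble-size or waiting-time distributions before claiming scale-free behavior.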

    Zhang’s thesis and third paper, both in progress, propose a universal law for explaining the crisis. “She translated the mechanism into a physical law, like F=ma or E=mc²,” Bucci says. “She came up with an equally simple equation.” Zhang says she’s learned a lot from colleagues in the department who are pioneering new nuclear reactors or other technologies, “but for my own work, I try to get down to the very basics of a phenomenon.”

    Bucci describes Zhang as determined, open-minded, and commendably self-critical. Su says she’s careful, optimistic, and courageous. “If I imagine going from heat transfer to city planning, that would be almost impossible for me,” he says. “She has a strong mind.” Last year, Zhang gave birth to a boy, whom she’s raising on her own as she does her research. (Her husband is stuck in China during the pandemic.) “This, to me,” Bucci says, “is almost superhuman.”

    Zhang will graduate at the end of the year, and has started looking for jobs back in China. She wants to continue in the energy field, though maybe not nuclear. “I will use my interdisciplinary knowledge,” she says. “I hope I can design safer and more efficient and more reliable systems to provide energy for our society.”

    MIT-designed project achieves major advance toward fusion energy

    It was a moment three years in the making, based on intensive research and design work: On Sept. 5, for the first time, a large high-temperature superconducting electromagnet was ramped up to a field strength of 20 tesla, the most powerful magnetic field of its kind ever created on Earth. That successful demonstration helps resolve the greatest uncertainty in the quest to build the world’s first fusion power plant that can produce more power than it consumes, according to the project’s leaders at MIT and startup company Commonwealth Fusion Systems (CFS).

    That advance paves the way, they say, for the long-sought creation of practical, inexpensive, carbon-free power plants that could make a major contribution to limiting the effects of global climate change.

    “Fusion in a lot of ways is the ultimate clean energy source,” says Maria Zuber, MIT’s vice president for research and E. A. Griswold Professor of Geophysics. “The amount of power that is available is really game-changing.” The fuel used to create fusion energy comes from water, and “the Earth is full of water — it’s a nearly unlimited resource. We just have to figure out how to utilize it.”

    Developing the new magnet is seen as the greatest technological hurdle to making that happen; its successful operation now opens the door to demonstrating fusion in a lab on Earth, which has been pursued for decades with limited progress. With the magnet technology now successfully demonstrated, the MIT-CFS collaboration is on track to build the world’s first fusion device that can create and confine a plasma that produces more energy than it consumes. That demonstration device, called SPARC, is targeted for completion in 2025.

    “The challenges of making fusion happen are both technical and scientific,” says Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, which is working with CFS to develop SPARC. But once the technology is proven, he says, “it’s an inexhaustible, carbon-free source of energy that you can deploy anywhere and at any time. It’s really a fundamentally new energy source.”

    Whyte, who is the Hitachi America Professor of Engineering, says this week’s demonstration represents a major milestone, addressing the biggest questions remaining about the feasibility of the SPARC design. “It’s really a watershed moment, I believe, in fusion science and technology,” he says.

    The sun in a bottle

    Fusion is the process that powers the sun: the merger of two small atoms to make a larger one, releasing prodigious amounts of energy. But the process requires temperatures far beyond what any solid material could withstand. To capture the sun’s power source here on Earth, what’s needed is a way of capturing and containing something that hot — 100,000,000 degrees or more — by suspending it in a way that prevents it from coming into contact with anything solid.

    That’s done through intense magnetic fields, which form a kind of invisible bottle to contain the hot swirling soup of protons and electrons, called a plasma. Because the particles have an electric charge, they are strongly controlled by the magnetic fields, and the most widely used configuration for containing them is a donut-shaped device called a tokamak. Most of these devices have produced their magnetic fields using conventional electromagnets made of copper, but the latest and largest version under construction in France, called ITER, uses what are known as low-temperature superconductors.

    The major innovation in the MIT-CFS fusion design is the use of high-temperature superconductors, which enable a much stronger magnetic field in a smaller space. This design was made possible by a new kind of superconducting material that became commercially available a few years ago. The idea initially arose as a class project in a nuclear engineering class taught by Whyte. The idea seemed so promising that it continued to be developed over the next few iterations of that class, leading to the ARC power plant design concept in early 2015. SPARC, designed to be about half the size of ARC, is a testbed to prove the concept before construction of the full-size, power-producing plant.

    Until now, the only way to achieve the colossally powerful magnetic fields needed to create a magnetic “bottle” capable of containing plasma heated up to hundreds of millions of degrees was to make them larger and larger. But the new high-temperature superconductor material, made in the form of a flat, ribbon-like tape, makes it possible to achieve a higher magnetic field in a smaller device, equaling the performance that would be achieved in an apparatus 40 times larger in volume using conventional low-temperature superconducting magnets. That leap in power versus size is the key element in ARC’s revolutionary design.
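
    The “40 times larger in volume” comparison is consistent with a standard scaling argument: at fixed plasma pressure ratio (beta), fusion power density grows roughly as the fourth power of the magnetic field, so devices of equal performance keep B⁴ × volume roughly constant. A back-of-envelope check — the exponent is a rough rule of thumb, not a figure stated in this article:

    ```python
    # Rough check of the "40x larger in volume" comparison: at fixed beta,
    # fusion power density scales roughly as B**4, so equal performance means
    # B1**4 * V1 ~= B2**4 * V2. Illustrative scaling only.
    def equivalent_volume_ratio(field_ratio):
        """Volume saving from raising the field by field_ratio, at fixed beta."""
        return field_ratio ** 4

    field_ratio = 40 ** 0.25    # field increase implied by a 40x volume saving
    print(round(field_ratio, 2))                      # ~2.51
    print(round(equivalent_volume_ratio(2.51), 1))    # ~39.7
    ```

    Under this rule of thumb, a 40-fold volume saving corresponds to roughly a 2.5-fold stronger field — broadly in line with the jump from conventional low-temperature superconducting tokamak magnets to the high-field magnet described here.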

    The use of the new high-temperature superconducting magnets makes it possible to apply decades of experimental knowledge gained from the operation of tokamak experiments, including MIT’s own Alcator series. The new approach, led by Zach Hartwig, the MIT principal investigator and the Robert N. Noyce Career Development Assistant Professor of Nuclear Science and Engineering, uses a well-known design but scales everything down to about half the linear size and still achieves the same operational conditions because of the higher magnetic field.

    A series of scientific papers published last year outlined the physical basis and, by simulation, confirmed the viability of the new fusion device. The papers showed that, if the magnets worked as expected, the whole fusion system should indeed produce net power output, for the first time in decades of fusion research.

    Martin Greenwald, deputy director and senior research scientist at the PSFC, says unlike some other designs for fusion experiments, “the niche that we were filling was to use conventional plasma physics, and conventional tokamak designs and engineering, but bring to it this new magnet technology. So, we weren’t requiring innovation in a half-dozen different areas. We would just innovate on the magnet, and then apply the knowledge base of what’s been learned over the last decades.”

    That combination of scientifically established design principles and game-changing magnetic field strength is what makes it possible to achieve a plant that could be economically viable and developed on a fast track. “It’s a big moment,” says Bob Mumgaard, CEO of CFS. “We now have a platform that is both scientifically very well-advanced, because of the decades of research on these machines, and also commercially very interesting. What it does is allow us to build devices faster, smaller, and at less cost,” he says of the successful magnet demonstration. 

    Proof of the concept

    Bringing that new magnet concept to reality required three years of intensive work on design, establishing supply chains, and working out manufacturing methods for magnets that may eventually need to be produced by the thousands.

    “We built a first-of-a-kind, superconducting magnet. It required a lot of work to create unique manufacturing processes and equipment. As a result, we are now well-prepared to ramp up for SPARC production,” says Joy Dunn, head of operations at CFS. “We started with a physics model and a CAD design, and worked through lots of development and prototypes to turn a design on paper into this actual physical magnet.” That entailed building manufacturing capabilities and testing facilities, including an iterative process with multiple suppliers of the superconducting tape, to help them reach the ability to produce material that met the needed specifications — and for which CFS is now overwhelmingly the world’s biggest user.

    They worked with two possible magnet designs in parallel, both of which ended up meeting the design requirements, she says. “It really came down to which one would revolutionize the way that we make superconducting magnets, and which one was easier to build.” The design they adopted clearly stood out in that regard, she says.

    In this test, the new magnet was gradually powered up in a series of steps until reaching the goal of a 20 tesla magnetic field — the highest field strength ever for a high-temperature superconducting fusion magnet. The magnet is composed of 16 plates stacked together, each one of which by itself would be the most powerful high-temperature superconducting magnet in the world.

    “Three years ago we announced a plan,” says Mumgaard, “to build a 20-tesla magnet, which is what we will need for future fusion machines.” That goal has now been achieved, right on schedule, even with the pandemic, he says.

    Citing the series of physics papers published last year, Brandon Sorbom, the chief science officer at CFS, says “basically the papers conclude that if we build the magnet, all of the physics will work in SPARC. So, this demonstration answers the question: Can they build the magnet? It’s a very exciting time! It’s a huge milestone.”

    The next step will be building SPARC, a smaller-scale version of the planned ARC power plant. The successful operation of SPARC will demonstrate that a full-scale commercial fusion power plant is practical, clearing the way for the rapid design and construction of that pioneering device to proceed at full speed.

    Zuber says that “I now am genuinely optimistic that SPARC can achieve net positive energy, based on the demonstrated performance of the magnets. The next step is to scale up, to build an actual power plant. There are still many challenges ahead, not the least of which is developing a design that allows for reliable, sustained operation. And realizing that the goal here is commercialization, another major challenge will be economic. How do you design these power plants so it will be cost effective to build and deploy them?”

    Someday in a hoped-for future, when there may be thousands of fusion plants powering clean electric grids around the world, Zuber says, “I think we’re going to look back and think about how we got there, and I think the demonstration of the magnet technology, for me, is the time when I believed that, wow, we can really do this.”

    The successful creation of a power-producing fusion device would be a tremendous scientific achievement, Zuber notes. But that’s not the main point. “None of us are trying to win trophies at this point. We’re trying to keep the planet livable.”