More stories

  • MIT Energy Initiative awards seven Seed Fund grants for early-stage energy research

    The MIT Energy Initiative (MITEI) has awarded seven Seed Fund grants to support novel, early-stage energy research by faculty and researchers at MIT. The awardees hail from a range of disciplines, but all strive to bring their backgrounds and expertise to address the global climate crisis by improving the efficiency, scalability, and adoption of clean energy technologies.

    “Solving climate change is truly an interdisciplinary challenge,” says MITEI Director Robert C. Armstrong. “The Seed Fund grants foster collaboration and innovation from across all five of MIT’s schools and one college, encouraging an ‘all hands on deck’ approach to developing the energy solutions that will prove critical in combating this global crisis.”

    This year, MITEI’s Seed Fund grant program received 70 proposals from 86 different principal investigators (PIs) across 25 departments, labs, and centers. Of these proposals, 31 involved collaborations between two or more PIs, including 24 that involved multiple departments.

    The winning projects reflect this collaborative nature with topics addressing the optimization of low-energy thermal cooling in buildings; the design of safe, robust, and resilient distributed power systems; and how to design and site wind farms with consideration of wind resource uncertainty due to climate change.

    Increasing public support for low-carbon technologies

    One winning team aims to leverage work done in the behavioral sciences to motivate sustainable behaviors and promote the adoption of clean energy technologies.

    “Objections to scalable low-carbon technologies such as nuclear energy and carbon sequestration have made it difficult to adopt these technologies and reduce greenhouse gas emissions,” says Howard Herzog, a senior research scientist at MITEI and co-PI. “These objections tend to neglect the sheer scale of energy generation required and the inability to meet this demand solely with other renewable energy technologies.”

    This interdisciplinary team — which includes researchers from MITEI, the Department of Nuclear Science and Engineering, and the MIT Sloan School of Management — plans to convene industry professionals and academics, as well as behavioral scientists, to identify common objections, design messaging to overcome them, and prove that these messaging campaigns have long-lasting impacts on attitudes toward scalable low-carbon technologies.

    “Our aim is to provide a foundation for shifting the public and policymakers’ views about these low-carbon technologies from something they, at best, tolerate, to something they actually welcome,” says co-PI David Rand, the Erwin H. Schell Professor and professor of management science and brain and cognitive sciences at MIT Sloan School of Management.

    Siting and designing wind farms

    Michael Howland, an assistant professor of civil and environmental engineering, will use his Seed Fund grant to develop a foundational methodology for wind farm siting and design that accounts for the uncertainty of wind resources resulting from climate change.

    “The optimal wind farm design and its resulting cost of energy is inherently dependent on the wind resource at the location of the farm,” says Howland. “But wind farms are currently sited and designed based on short-term climate records that do not account for the future effects of climate change on wind patterns.”

    Wind farms are capital-intensive infrastructure that cannot be relocated and often have lifespans exceeding 20 years — all of which make it especially important that developers choose the right locations and designs based not only on wind patterns in the historical climate record, but also based on future predictions. The new siting and design methodology has the potential to replace current industry standards to enable a more accurate risk analysis of wind farm development and energy grid expansion under climate change-driven energy resource uncertainty.

    Membraneless electrolyzers for hydrogen production

    Producing hydrogen from renewable energy-powered water electrolyzers is central to realizing a sustainable and low-carbon hydrogen economy, says Kripa Varanasi, a professor of mechanical engineering and a Seed Fund award recipient. The idea of using hydrogen as a fuel has existed for decades, but it has yet to be widely realized at a considerable scale. Varanasi hopes to change that with his Seed Fund grant.

    “The critical economic hurdle for successful electrolyzers to overcome is the minimization of the capital costs associated with their deployment,” says Varanasi. “So, an immediate task at hand to enable electrochemical hydrogen production at scale will be to maximize the effectiveness of the most mature, least complex, and least expensive water electrolyzer technologies.”

    To do this, he aims to combine the advantages of existing low-temperature alkaline electrolyzer designs with a novel membraneless electrolyzer technology that harnesses a gas management system architecture to minimize complexity and costs, while also improving efficiency. Varanasi hopes his project will demonstrate scalable concepts for cost-effective electrolyzer technology design to help realize a decarbonized hydrogen economy.

    Since its establishment in 2008, the MITEI Seed Fund Program has supported 194 energy-focused seed projects through grants totaling more than $26 million. This funding comes primarily from MITEI’s founding and sustaining members, supplemented by gifts from generous donors.

    Recipients of the 2021 MITEI Seed Fund grants are:

    “Design automation of safe, robust, and resilient distributed power systems” — Chuchu Fan of the Department of Aeronautics and Astronautics
    “Advanced MHD topping cycles: For fission, fusion, solar power plants” — Jeffrey Freidberg of the Department of Nuclear Science and Engineering and Dennis Whyte of the Plasma Science and Fusion Center
    “Robust wind farm siting and design under climate-change‐driven wind resource uncertainty” — Michael Howland of the Department of Civil and Environmental Engineering
    “Low-energy thermal comfort for buildings in the Global South: Optimal design of integrated structural-thermal systems” — Leslie Norford of the Department of Architecture and Caitlin Mueller of the departments of Architecture and Civil and Environmental Engineering
    “New low-cost, high energy-density boron-based redox electrolytes for nonaqueous flow batteries” — Alexander Radosevich of the Department of Chemistry
    “Increasing public support for scalable low-carbon energy technologies using behavioral science insights” — David Rand of the MIT Sloan School of Management, Koroush Shirvan of the Department of Nuclear Science and Engineering, Howard Herzog of the MIT Energy Initiative, and Jacopo Buongiorno of the Department of Nuclear Science and Engineering
    “Membraneless electrolyzers for efficient hydrogen production using nanoengineered 3D gas capture electrode architectures” — Kripa Varanasi of the Department of Mechanical Engineering

  • Coupling power and hydrogen sector pathways to benefit decarbonization

    Governments and companies worldwide are increasing their investments in hydrogen research and development, indicating a growing recognition that hydrogen could play a significant role in meeting global energy system decarbonization goals. Since hydrogen is light, energy-dense, storable, and produces no direct carbon dioxide emissions at the point of use, this versatile energy carrier has the potential to be harnessed in a variety of ways in a future clean energy system.

    Often considered in the context of grid-scale energy storage, hydrogen has garnered renewed interest, in part due to expectations that our future electric grid will be dominated by variable renewable energy (VRE) sources such as wind and solar, as well as decreasing costs for water electrolyzers — both of which could make clean, “green” hydrogen more cost-competitive with fossil-fuel-based production. But hydrogen’s versatility as a clean energy fuel also makes it an attractive option to meet energy demand and to open pathways for decarbonization in hard-to-abate sectors where direct electrification is difficult, such as transportation, buildings, and industry.

    “We’ve seen a lot of progress and analysis around pathways to decarbonize electricity, but we may not be able to electrify all end uses. This means that just decarbonizing electricity supply is not sufficient, and we must develop other decarbonization strategies as well,” says Dharik Mallapragada, a research scientist at the MIT Energy Initiative (MITEI). “Hydrogen is an interesting energy carrier to explore, but understanding the role for hydrogen requires us to study the interactions between the electricity system and a future hydrogen supply chain.”

    In a recent paper, researchers from MIT and Shell present a framework to systematically study the role and impact of hydrogen-based technology pathways in a future low-carbon, integrated energy system, taking into account interactions with the electric grid and the spatio-temporal variations in energy demand and supply. The developed framework co-optimizes infrastructure investment and operation across the electricity and hydrogen supply chain under various emissions price scenarios. When applied to a Northeast U.S. case study, the researchers find this approach results in substantial benefits — in terms of costs and emissions reduction — as it takes advantage of hydrogen’s potential to provide the electricity system with a large flexible load when produced through electrolysis, while also enabling decarbonization of difficult-to-electrify, end-use sectors.

    The research team includes Mallapragada; Guannan He, a postdoc at MITEI; Abhishek Bose, a graduate research assistant at MITEI; Clara Heuberger-Austin, a researcher at Shell; and Emre Gençer, a research scientist at MITEI. Their findings are published in the journal Energy & Environmental Science.

    Cross-sector modeling

    “We need a cross-sector framework to analyze each energy carrier’s economics and role across multiple systems if we are to really understand the cost/benefits of direct electrification or other decarbonization strategies,” says He.

    To do that analysis, the team developed the Decision Optimization of Low-carbon Power-HYdrogen Network (DOLPHYN) model, which allows the user to study the role of hydrogen in low-carbon energy systems, the effects of coupling the power and hydrogen sectors, and the trade-offs between various technology options across both supply chains — spanning production, transport, storage, and end use, and their impact on decarbonization goals.

    “We are seeing great interest from industry and government, because they are all asking questions about where to invest their money and how to prioritize their decarbonization strategies,” says Gençer. Heuberger-Austin adds, “Being able to assess the system-level interactions between electricity and the emerging hydrogen economy is of paramount importance to drive technology development and support strategic value chain decisions. The DOLPHYN model can be instrumental in tackling those kinds of questions.”

    For a predefined set of electricity and hydrogen demand scenarios, the model determines the least-cost technology mix across the power and hydrogen sectors while adhering to a variety of operation and policy constraints. The model can incorporate a range of technology options — from VRE generation to carbon capture and storage (CCS) used with both power and hydrogen generation to trucks and pipelines used for hydrogen transport. With its flexible structure, the model can be readily adapted to represent emerging technology options and evaluate their long-term value to the energy system.

    As an important addition, the model takes into account process-level carbon emissions by allowing the user to add a cost penalty on emissions in both sectors. “If you have a limited emissions budget, we are able to explore the question of where to prioritize the limited emissions to get the best bang for your buck in terms of decarbonization,” says Mallapragada.
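    The co-optimization idea can be sketched as a toy linear program (a single time period with invented numbers, purely for illustration; DOLPHYN itself co-optimizes investment and operation across space and time): choose how much wind power, gas power, electrolytic hydrogen, and gas-based hydrogen to produce so that both demands are met at least cost under a carbon price.

```python
# Toy single-period sketch of power/hydrogen co-optimization under a
# carbon price, loosely in the spirit of DOLPHYN. All numbers are
# illustrative assumptions, not values from the paper.
from scipy.optimize import linprog

elec_demand = 100.0   # MWh of electricity to serve
h2_demand = 20.0      # MWh (thermal) of hydrogen to serve
carbon_price = 50.0   # $/tCO2

# Decision variables: [wind_MWh, gas_MWh, electrolysis_H2, smr_H2]
cost = [
    20.0,                       # wind, $/MWh
    40.0 + 0.4 * carbon_price,  # gas power, $/MWh plus 0.4 tCO2/MWh emitted
    5.0,                        # electrolyzer O&M, $/MWh-H2 (its power comes from the grid balance)
    30.0 + 0.3 * carbon_price,  # gas-based (SMR) hydrogen, $/MWh-H2 plus 0.3 tCO2/MWh
]

# Power balance: wind + gas - 1.4 * electrolysis_H2 = elec_demand
# (electrolysis consumes ~1.4 MWh of electricity per MWh of hydrogen)
# Hydrogen balance: electrolysis_H2 + smr_H2 = h2_demand
A_eq = [[1.0, 1.0, -1.4, 0.0],
        [0.0, 0.0, 1.0, 1.0]]
b_eq = [elec_demand, h2_demand]
bounds = [(0, 130.0), (0, None), (0, None), (0, None)]  # wind limited to 130 MWh

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
wind, gas, elz, smr = res.x
print(f"wind={wind:.0f} gas={gas:.0f} electrolysis_H2={elz:.0f} SMR_H2={smr:.0f}")
# With cheap wind to spare, the solver picks electrolysis: hydrogen
# production acts as a flexible load absorbing low-carbon electricity.
```

    Raising the carbon price pushes hydrogen production further toward electrolysis, while tightening the wind limit pushes it back toward the gas-based route; these are miniature versions of the trade-offs the full model explores.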

    Insights from a case study

    To test their model, the researchers investigated the Northeast U.S. energy system under a variety of demand, technology, and carbon price scenarios. While their major conclusions can be generalized for other regions, the Northeast proved to be a particularly interesting case study. This region has current legislation and regulatory support for renewable generation, as well as increasing emission-reduction targets, a number of which are quite stringent. It also has a high demand for energy for heating — a sector that is difficult to electrify and could particularly benefit from hydrogen and from coupling the power and hydrogen systems.

    The researchers find that when combining the power and hydrogen sectors through electrolysis or hydrogen-based power generation, there is more operational flexibility to support VRE integration in the power sector and a reduced need for alternative grid-balancing supply-side resources such as battery storage or dispatchable gas generation, which in turn reduces the overall system cost. This increased VRE penetration also leads to a reduction in emissions compared to scenarios without sector-coupling. “The flexibility that electricity-based hydrogen production provides in terms of balancing the grid is as important as the hydrogen it is going to produce for decarbonizing other end uses,” says Mallapragada. They found this type of grid interaction to be more favorable than conventional hydrogen-based electricity storage, which can incur additional capital costs and efficiency losses when converting hydrogen back to power. This suggests that the role of hydrogen in the grid could be more beneficial as a source of flexible demand than as storage.

    The researchers’ multi-sector modeling approach also highlighted that CCS is more cost-effective when used in the hydrogen supply chain than in the power sector. They note that, counter to this observation, six times more CCS projects are slated for deployment in the power sector than in hydrogen production by the end of the decade — a gap that underscores the need for cross-sectoral modeling when planning future energy systems.

    In this study, the researchers tested the robustness of their conclusions against a number of factors, such as how the inclusion of non-combustion greenhouse gas emissions (including methane emissions) from natural gas used in power and hydrogen production impacts the model outcomes. They find that including the upstream emissions footprint of natural gas within the model boundary does not impact the value of sector coupling in regards to VRE integration and cost savings for decarbonization; in fact, the value actually grows because of the increased emphasis on electricity-based hydrogen production over natural gas-based pathways.

    “You cannot achieve climate targets unless you take a holistic approach,” says Gençer. “This is a systems problem. There are sectors that you cannot decarbonize with electrification, and there are other sectors that you cannot decarbonize without carbon capture, and if you think about everything together, there is a synergistic solution that significantly minimizes the infrastructure costs.”

    This research was supported, in part, by Shell Global Solutions International B.V. in Amsterdam, the Netherlands, and MITEI’s Low-Carbon Energy Centers for Electric Power Systems and Carbon Capture, Utilization, and Storage.

  • Crossing disciplines, adding fresh eyes to nuclear engineering

    Sometimes patterns repeat in nature. Spirals appear in sunflowers and hurricanes. Branches occur in veins and lightning. Limiao Zhang, a doctoral student in MIT’s Department of Nuclear Science and Engineering, has found another similarity: between street traffic and boiling water, with implications for preventing nuclear meltdowns.

    Growing up in China, Zhang enjoyed watching her father repair things around the house. He couldn’t fulfill his dream of becoming an engineer, instead joining the police force, but Zhang did have that opportunity and studied mechanical engineering at Three Gorges University. Being one of four girls among about 50 boys in the major didn’t discourage her. “My father always told me girls can do anything,” she says. She graduated at the top of her class.

    In college, she and a team of classmates won a national engineering competition. They designed and built a model of a carousel powered by solar, hydroelectric, and pedal power. One judge asked how long the system could operate safely. “I didn’t have a perfect answer,” she recalls. She realized that engineering means designing products that not only function, but are resilient. So for her master’s degree, at Beihang University, she turned to industrial engineering and analyzed the reliability of critical infrastructure, in particular traffic networks.

    “Among all the critical infrastructures, nuclear power plants are quite special,” Zhang says. “Although one can provide enormous carbon-free energy, once it fails, it can cause catastrophic results.” So she decided to switch fields again and study nuclear engineering. At the time she had no nuclear background, and hadn’t studied in the United States, but “I tried to step out of my comfort zone,” she says. “I just applied and MIT welcomed me.” Her supervisor, Matteo Bucci, and her classmates explained the basics of fission reactions as she adjusted to the new material, language, and environment. She doubted herself — “my friend told me, ‘I saw clouds above your head’” — but she passed her first-year courses and published her first paper soon afterward.

    Much of the work in Bucci’s lab deals with what’s called the boiling crisis. In many applications, such as nuclear plants and powerful computers, water cools things. When a hot surface boils water, bubbles cling to the surface before rising, but if too many form, they merge into a layer of vapor that insulates the surface. The heat has nowhere to go — a boiling crisis.

    Bucci invited Zhang into his lab in part because she saw a connection between traffic and heat transfer. The data plots of both phenomena look surprisingly similar. “The mathematical tools she had developed for the study of traffic jams were a completely different way of looking into our problem,” Bucci says, “by using something which is intuitively not connected.”

    One can view bubbles as cars. The more there are, the more they interfere with each other. People studying boiling had focused on the physics of individual bubbles. Zhang instead uses statistical physics to analyze collective patterns of behavior. “She brings a different set of skills, a different set of knowledge, to our research,” says Guanyu Su, a postdoc in the lab. “That’s very refreshing.”

    In her first paper on the boiling crisis, published in Physical Review Letters, Zhang used theory and simulations to identify scale-free behavior in boiling: just as in traffic, the same patterns appear whether zoomed in or out, in terms of space or time. Both small and large bubbles matter. Using this insight, the team found certain physical parameters that could predict a boiling crisis. Zhang’s mathematical tools both explain experimental data and suggest new experiments to try. For a second paper, the team collected more data and found ways to predict the boiling crisis in a wider variety of conditions.
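    Scale-free statistics of this kind are typically diagnosed by checking for power-law behavior in a size or duration distribution. The sketch below is a generic illustration on synthetic data, not Zhang’s actual analysis: it draws “bubble sizes” from a known power law and recovers the exponent with the standard maximum-likelihood (Hill) estimator.

```python
# Generic illustration of detecting scale-free (power-law) statistics.
# The "bubble sizes" here are synthetic, drawn from a known power law.
import numpy as np

rng = np.random.default_rng(0)
alpha_true, xmin, n = 2.5, 1.0, 100_000

# Inverse-CDF sampling from p(x) ~ x^(-alpha) for x >= xmin
u = rng.random(n)
sizes = xmin * (1.0 - u) ** (-1.0 / (alpha_true - 1.0))

# Maximum-likelihood (Hill) estimate of the exponent
alpha_hat = 1.0 + n / np.log(sizes / xmin).sum()
print(f"estimated exponent: {alpha_hat:.2f}")

# Scale invariance: rescaling every size by the same factor leaves the
# estimated exponent unchanged -- the signature of scale-free behavior.
alpha_rescaled = 1.0 + n / np.log((10 * sizes) / (10 * xmin)).sum()
assert abs(alpha_hat - alpha_rescaled) < 1e-9
```

    The same invariance under rescaling is what “the same patterns appear whether zoomed in or out” means in practice: no single bubble size sets the scale of the phenomenon.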

    Zhang’s thesis and third paper, both in progress, propose a universal law for explaining the crisis. “She translated the mechanism into a physical law, like F=ma or E=mc²,” Bucci says. “She came up with an equally simple equation.” Zhang says she’s learned a lot from colleagues in the department who are pioneering new nuclear reactors or other technologies, “but for my own work, I try to get down to the very basics of a phenomenon.”

    Bucci describes Zhang as determined, open-minded, and commendably self-critical. Su says she’s careful, optimistic, and courageous. “If I imagine going from heat transfer to city planning, that would be almost impossible for me,” he says. “She has a strong mind.” Last year, Zhang gave birth to a boy, whom she’s raising on her own as she does her research. (Her husband is stuck in China during the pandemic.) “This, to me,” Bucci says, “is almost superhuman.”

    Zhang will graduate at the end of the year, and has started looking for jobs back in China. She wants to continue in the energy field, though maybe not nuclear. “I will use my interdisciplinary knowledge,” she says. “I hope I can design safer and more efficient and more reliable systems to provide energy for our society.”

  • MIT-designed project achieves major advance toward fusion energy

    It was a moment three years in the making, based on intensive research and design work: On Sept. 5, for the first time, a large high-temperature superconducting electromagnet was ramped up to a field strength of 20 tesla, the most powerful magnetic field of its kind ever created on Earth. That successful demonstration helps resolve the greatest uncertainty in the quest to build the world’s first fusion power plant that can produce more power than it consumes, according to the project’s leaders at MIT and startup company Commonwealth Fusion Systems (CFS).

    That advance paves the way, they say, for the long-sought creation of practical, inexpensive, carbon-free power plants that could make a major contribution to limiting the effects of global climate change.

    “Fusion in a lot of ways is the ultimate clean energy source,” says Maria Zuber, MIT’s vice president for research and E. A. Griswold Professor of Geophysics. “The amount of power that is available is really game-changing.” The fuel used to create fusion energy comes from water, and “the Earth is full of water — it’s a nearly unlimited resource. We just have to figure out how to utilize it.”

    Developing the new magnet was seen as the greatest technological hurdle to making that happen; its successful operation opens the door to demonstrating net-energy fusion in a lab on Earth, a goal pursued for decades with limited progress. The MIT-CFS collaboration is now on track to build the world’s first fusion device that can create and confine a plasma that produces more energy than it consumes. That demonstration device, called SPARC, is targeted for completion in 2025.

    “The challenges of making fusion happen are both technical and scientific,” says Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, which is working with CFS to develop SPARC. But once the technology is proven, he says, “it’s an inexhaustible, carbon-free source of energy that you can deploy anywhere and at any time. It’s really a fundamentally new energy source.”

    Whyte, who is the Hitachi America Professor of Engineering, says this week’s demonstration represents a major milestone, addressing the biggest questions remaining about the feasibility of the SPARC design. “It’s really a watershed moment, I believe, in fusion science and technology,” he says.

    The sun in a bottle

    Fusion is the process that powers the sun: the merger of two small atoms to make a larger one, releasing prodigious amounts of energy. But the process requires temperatures far beyond what any solid material could withstand. To capture the sun’s power source here on Earth, what’s needed is a way of capturing and containing something that hot — 100,000,000 degrees or more — by suspending it in a way that prevents it from coming into contact with anything solid.

    That’s done through intense magnetic fields, which form a kind of invisible bottle to contain the hot swirling soup of protons and electrons, called a plasma. Because the particles have an electric charge, they are strongly controlled by the magnetic fields, and the most widely used configuration for containing them is a donut-shaped device called a tokamak. Most of these devices have produced their magnetic fields using conventional electromagnets made of copper, but the latest and largest version under construction in France, called ITER, uses what are known as low-temperature superconductors.

    The major innovation in the MIT-CFS fusion design is the use of high-temperature superconductors, which enable a much stronger magnetic field in a smaller space. This design was made possible by a new kind of superconducting material that became commercially available a few years ago. The idea initially arose as a student project in a nuclear engineering class taught by Whyte. It seemed so promising that it continued to be developed over subsequent iterations of that class, leading to the ARC power plant design concept in early 2015. SPARC, designed to be about half the size of ARC, is a testbed to prove the concept before construction of the full-size, power-producing plant.

    Until now, the only way to achieve the colossally powerful magnetic fields needed to create a magnetic “bottle” capable of containing plasma heated up to hundreds of millions of degrees was to make them larger and larger. But the new high-temperature superconductor material, made in the form of a flat, ribbon-like tape, makes it possible to achieve a higher magnetic field in a smaller device, equaling the performance that would be achieved in an apparatus 40 times larger in volume using conventional low-temperature superconducting magnets. That leap in power versus size is the key element in ARC’s revolutionary design.
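    The “40 times larger in volume” comparison follows from a standard tokamak scaling argument — a back-of-the-envelope sketch with assumed field values, not the MIT-CFS design calculation:

```latex
% Rule-of-thumb fusion power scaling at fixed plasma pressure ratio (beta):
P_{\text{fus}} \;\propto\; \beta^{2} B^{4} V
\quad\Longrightarrow\quad
\frac{V_2}{V_1} \;=\; \left(\frac{B_1}{B_2}\right)^{4}
\;\approx\; \left(\frac{5\,\text{T}}{12\,\text{T}}\right)^{4}
\;\approx\; \frac{1}{33}
```

    Because fusion power density grows as roughly the fourth power of the magnetic field, going from an assumed ~5 T on-axis field (typical of low-temperature superconducting designs) to ~12 T (HTS) shrinks the required plasma volume by a factor of order 30, in the same range as the factor-of-40 figure quoted above once other design details are included.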

    The use of the new high-temperature superconducting magnets makes it possible to apply decades of experimental knowledge gained from the operation of tokamak experiments, including MIT’s own Alcator series. The new approach, led by Zach Hartwig, the MIT principal investigator and the Robert N. Noyce Career Development Assistant Professor of Nuclear Science and Engineering, uses a well-known design but scales everything down to about half the linear size and still achieves the same operational conditions because of the higher magnetic field.

    A series of scientific papers published last year outlined the physical basis and, by simulation, confirmed the viability of the new fusion device. The papers showed that, if the magnets worked as expected, the whole fusion system should indeed produce net power output, for the first time in decades of fusion research.

    Martin Greenwald, deputy director and senior research scientist at the PSFC, says unlike some other designs for fusion experiments, “the niche that we were filling was to use conventional plasma physics, and conventional tokamak designs and engineering, but bring to it this new magnet technology. So, we weren’t requiring innovation in a half-dozen different areas. We would just innovate on the magnet, and then apply the knowledge base of what’s been learned over the last decades.”

    That combination of scientifically established design principles and game-changing magnetic field strength is what makes it possible to achieve a plant that could be economically viable and developed on a fast track. “It’s a big moment,” says Bob Mumgaard, CEO of CFS. “We now have a platform that is both scientifically very well-advanced, because of the decades of research on these machines, and also commercially very interesting. What it does is allow us to build devices faster, smaller, and at less cost,” he says of the successful magnet demonstration. 

    Proof of the concept

    Bringing that new magnet concept to reality required three years of intensive work on design, establishing supply chains, and working out manufacturing methods for magnets that may eventually need to be produced by the thousands.

    “We built a first-of-a-kind, superconducting magnet. It required a lot of work to create unique manufacturing processes and equipment. As a result, we are now well-prepared to ramp up for SPARC production,” says Joy Dunn, head of operations at CFS. “We started with a physics model and a CAD design, and worked through lots of development and prototypes to turn a design on paper into this actual physical magnet.” That entailed building manufacturing capabilities and testing facilities, including an iterative process with multiple suppliers of the superconducting tape, to help them reach the ability to produce material that met the needed specifications — and for which CFS is now overwhelmingly the world’s biggest user.

    They worked with two possible magnet designs in parallel, both of which ended up meeting the design requirements, she says. “It really came down to which one would revolutionize the way that we make superconducting magnets, and which one was easier to build.” The design they adopted clearly stood out in that regard, she says.

    In this test, the new magnet was gradually powered up in a series of steps until reaching the goal of a 20 tesla magnetic field — the highest field strength ever for a high-temperature superconducting fusion magnet. The magnet is composed of 16 plates stacked together, each one of which by itself would be the most powerful high-temperature superconducting magnet in the world.

    “Three years ago we announced a plan,” says Mumgaard, “to build a 20-tesla magnet, which is what we will need for future fusion machines.” That goal has now been achieved, right on schedule, even with the pandemic, he says.

    Citing the series of physics papers published last year, Brandon Sorbom, the chief science officer at CFS, says “basically the papers conclude that if we build the magnet, all of the physics will work in SPARC. So, this demonstration answers the question: Can they build the magnet? It’s a very exciting time! It’s a huge milestone.”

    The next step will be building SPARC, a smaller-scale version of the planned ARC power plant. The successful operation of SPARC will demonstrate that a full-scale commercial fusion power plant is practical, clearing the way for the rapid design and construction of that pioneering device to proceed at full speed.

    Zuber says that “I now am genuinely optimistic that SPARC can achieve net positive energy, based on the demonstrated performance of the magnets. The next step is to scale up, to build an actual power plant. There are still many challenges ahead, not the least of which is developing a design that allows for reliable, sustained operation. And realizing that the goal here is commercialization, another major challenge will be economic. How do you design these power plants so it will be cost effective to build and deploy them?”

    Someday in a hoped-for future, when there may be thousands of fusion plants powering clean electric grids around the world, Zuber says, “I think we’re going to look back and think about how we got there, and I think the demonstration of the magnet technology, for me, is the time when I believed that, wow, we can really do this.”

    The successful creation of a power-producing fusion device would be a tremendous scientific achievement, Zuber notes. But that’s not the main point. “None of us are trying to win trophies at this point. We’re trying to keep the planet livable.”

  • Making the case for hydrogen in a zero-carbon economy

    As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

    “As we move to more and more renewable penetration, this intermittency will make a greater impact on the electric power system,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

    Low- and zero-carbon alternatives to greenhouse-gas emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

    Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic analysis, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

    “Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries — even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

    Adding up the costs

    California serves as a prime example of a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

    “We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.
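    The LCOE metric described above divides lifetime costs by lifetime generation, with both typically discounted to present value. A minimal sketch of that calculation follows; the function name and the input numbers are illustrative assumptions of ours, not figures from the study:

```python
def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
    """Levelized cost of electricity: discounted lifetime costs divided by
    discounted lifetime generation, in USD per MWh."""
    costs = capex + sum(annual_opex / (1 + discount_rate) ** t
                        for t in range(1, lifetime_years + 1))
    energy = sum(annual_mwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return costs / energy

# Hypothetical asset: $1M to build, $50k/yr to run, 8,000 MWh/yr for 20 years
print(round(lcoe(capex=1_000_000, annual_opex=50_000,
                 annual_mwh=8_000, lifetime_years=20,
                 discount_rate=0.07), 2))
```

    Because both the cost and energy streams are discounted by the same factor, LCOE lets very different assets, such as a hydrogen turbine and a battery array, be compared on a single per-megawatt-hour figure.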

    Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

    Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

    But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

    The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

    The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”

    A tool for energy investors

    When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

    A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

    The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

    “As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

    A study group member of MITEI’s soon-to-be published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”

    Funding for this research was provided by MITEI’s Low-Carbon Energy Centers and Future of Storage study. More


    Energy storage from a chemistry perspective

    The transition toward a more sustainable, environmentally sound electrical grid has driven an upsurge in renewables like solar and wind. But something as simple as cloud cover can cause grid instability, and wind power is inherently unpredictable. This intermittent nature of renewables has invigorated the competitive landscape for energy storage companies looking to enhance power system flexibility while enabling the integration of renewables.

    “Impact is what drives PolyJoule more than anything else,” says CEO Eli Paster. “We see impact from a renewable integration standpoint, from a curtailment standpoint, and also from the standpoint of transitioning from a centralized to a decentralized model of energy-power delivery.”

    PolyJoule is a Billerica, Massachusetts-based startup that’s looking to reinvent energy storage from a chemistry perspective. Co-founders Ian Hunter of MIT’s Department of Mechanical Engineering and Tim Swager of the Department of Chemistry are longstanding MIT professors considered luminaries in their respective fields. Meanwhile, the core team is a small but highly skilled collection of chemists, manufacturing specialists, supply chain optimizers, and entrepreneurs, many of whom have called MIT home at one point or another.

    “The ideas that we work on in the lab, you’ll see turned into products three to four years from now, and they will still be innovative and well ahead of the curve when they get to market,” Paster says. “But the concepts come from the foresight of thinking five to 10 years in advance. That’s what we have in our back pocket, thanks to great minds like Ian and Tim.”

    PolyJoule takes a systems-level approach married to high-throughput, analytical electrochemistry that has allowed the company to pinpoint a chemical cell design based on 10,000 trials. The result is a battery that is low-cost, safe, and has a long lifetime. It’s capable of responding to base loads and peak loads in microseconds, allowing the same battery to participate in multiple power markets and deployment use cases.

    In the energy storage sphere, interesting technologies abound, but workable solutions are few and far between. But Paster says PolyJoule has managed to bridge the gap between the lab and the real world by taking industry concerns into account from the beginning. “We’ve taken a slightly contrarian view to all of the other energy storage companies that have come before us that have said, ‘If we build it, they will come.’ Instead, we’ve gone directly to the customer and asked, ‘If you could have a better battery storage platform, what would it look like?’”

    With commercial input feeding into the thought processes behind their technological and commercial deployment, PolyJoule says they’ve designed a battery that is less expensive to make, less expensive to operate, safer, and easier to deploy.

    Traditionally, lithium-ion batteries have been the go-to energy storage solution. But lithium has its drawbacks, including cost, safety issues, and detrimental effects on the environment. PolyJoule, for its part, isn’t interested in lithium — or metals of any kind. “We start with the periodic table of organic elements,” says Paster, “and from there, we derive what works at economies of scale, what is easy to converge and convert chemically.”

    Having an inherently safer chemistry allows PolyJoule to save on system integration costs, among other things. PolyJoule batteries don’t contain flammable solvents, which means no added expenses related to fire mitigation. Safer chemistry also means ease of storage, and PolyJoule batteries are currently undergoing global safety certification (UL approval) to be allowed indoors and on airplanes. Finally, with high power built into the chemistry, PolyJoule’s cells can be charged and discharged to extremes, without the need for heating or cooling systems.

    “From raw material to product delivery, we examine each step in the value chain with an eye towards reducing costs,” says Paster. It all starts with designing the chemistry around earth-abundant elements, which allows the small startup to compete with larger suppliers, even at smaller scales. Consider the fact that PolyJoule’s differentiating material cost is less than $1 per kilogram, whereas lithium carbonate sells for $20 per kilogram.

    On the manufacturing side, Paster explains that PolyJoule cuts costs by making their cells in old paper mills and warehouses, employing off-the-shelf equipment previously used for tissue paper or newspaper printing. “We use equipment that has been around for decades because we don’t want to create a cutting-edge technology that requires cutting-edge manufacturing,” he says. “We want to create a cutting-edge technology that can be deployed in industrialized nations and in other nations that can benefit the most from energy storage.”

    PolyJoule’s first customer is an industrial distributed energy consumer with baseline energy consumption that increases by a factor of 10 when the heavy machinery kicks on twice a day. In the early morning and late afternoon, it consumes about 50 kilowatts for 20 minutes to an hour, compared to a baseline rate of 5 kilowatts. It’s an application model that is translatable to a variety of industries. Think wastewater treatment, food processing, and server farms — anything with a fluctuation in power consumption over a 24-hour period.

    By the end of the year, PolyJoule will have delivered its first 10 kilowatt-hour system, exiting stealth mode and adding commercial viability to demonstrated technological superiority. “What we’re seeing now is massive amounts of energy storage being added to renewables and grid-edge applications,” says Paster. “We anticipated that by 12 to 18 months, and now we’re ramping up to catch up with some of the bigger players.”


    Using aluminum and water to make clean hydrogen fuel — when and where it’s needed

    As the world works to move away from fossil fuels, many researchers are investigating whether clean hydrogen fuel can play an expanded role in sectors from transportation and industry to buildings and power generation. It could be used in fuel cell vehicles, heat-producing boilers, electricity-generating gas turbines, systems for storing renewable energy, and more.

    But while using hydrogen doesn’t generate carbon emissions, making it typically does. Today, almost all hydrogen is produced using fossil fuel-based processes that together generate more than 2 percent of all global greenhouse gas emissions. In addition, hydrogen is often produced in one location and consumed in another, which means its use also presents logistical challenges.

    A promising reaction

    Another option for producing hydrogen comes from a perhaps surprising source: reacting aluminum with water. Aluminum metal will readily react with water at room temperature to form aluminum hydroxide and hydrogen. That reaction doesn’t typically take place because a layer of aluminum oxide naturally coats the raw metal, preventing it from coming directly into contact with water.

    Using the aluminum-water reaction to generate hydrogen doesn’t produce any greenhouse gas emissions, and it promises to solve the transportation problem for any location with available water. Simply move the aluminum and then react it with water on-site. “Fundamentally, the aluminum becomes a mechanism for storing hydrogen — and a very effective one,” says Douglas P. Hart, professor of mechanical engineering at MIT. “Using aluminum as our source, we can ‘store’ hydrogen at a density that’s 10 times greater than if we just store it as a compressed gas.”
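    The storage-density claim can be sanity-checked with simple stoichiometry. The reaction is 2 Al + 6 H2O → 2 Al(OH)3 + 3 H2, so each mole of aluminum yields 1.5 moles of hydrogen. The back-of-envelope sketch below is ours and assumes complete reaction of pure aluminum:

```python
M_AL = 26.98   # g/mol, molar mass of aluminum
M_H2 = 2.016   # g/mol, molar mass of hydrogen gas

# 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2  =>  1.5 mol H2 per mol Al
mol_al = 1000.0 / M_AL        # moles of Al in 1 kg
mol_h2 = 1.5 * mol_al
grams_h2 = mol_h2 * M_H2
print(f"{grams_h2:.0f} g of H2 per kg of aluminum")  # ~112 g
```

    Roughly 11 percent of the aluminum’s mass comes back out as hydrogen, which is why a dense block of activated aluminum can outperform a tank of compressed gas as a carrier.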

    Two problems have kept aluminum from being employed as a safe, economical source for hydrogen generation. The first problem is ensuring that the aluminum surface is clean and available to react with water. To that end, a practical system must include a means of first modifying the oxide layer and then keeping it from re-forming as the reaction proceeds.

    The second problem is that pure aluminum is energy-intensive to mine and produce, so any practical approach needs to use scrap aluminum from various sources. But scrap aluminum is not an easy starting material. It typically occurs in an alloyed form, meaning that it contains other elements that are added to change the properties or characteristics of the aluminum for different uses. For example, adding magnesium increases strength and corrosion-resistance, adding silicon lowers the melting point, and adding a little of both makes an alloy that’s moderately strong and corrosion-resistant.

    Despite considerable research on aluminum as a source of hydrogen, two key questions remain: What’s the best way to prevent the adherence of an oxide layer on the aluminum surface, and how do alloying elements in a piece of scrap aluminum affect the total amount of hydrogen generated and the rate at which it is generated?

    “If we’re going to use scrap aluminum for hydrogen generation in a practical application, we need to be able to better predict what hydrogen generation characteristics we’re going to observe from the aluminum-water reaction,” says Laureen Meroueh PhD ’20, who earned her doctorate in mechanical engineering.

    Since the fundamental steps in the reaction aren’t well understood, it’s been hard to predict the rate and volume at which hydrogen forms from scrap aluminum, which can contain varying types and concentrations of alloying elements. So Hart, Meroueh, and Thomas W. Eagar, a professor of materials engineering and engineering management in the MIT Department of Materials Science and Engineering, decided to examine — in a systematic fashion — the impacts of those alloying elements on the aluminum-water reaction and on a promising technique for preventing the formation of the interfering oxide layer.

    To prepare, they had experts at Novelis Inc. fabricate samples of pure aluminum and of specific aluminum alloys made of commercially pure aluminum combined with either 0.6 percent silicon (by weight), 1 percent magnesium, or both — compositions that are typical of scrap aluminum from a variety of sources. Using those samples, the MIT researchers performed a series of tests to explore different aspects of the aluminum-water reaction.

    Pre-treating the aluminum

    The first step was to demonstrate an effective means of penetrating the oxide layer that forms on aluminum in the air. Solid aluminum is made up of tiny grains that are packed together with occasional boundaries where they don’t line up perfectly. To maximize hydrogen production, researchers would need to prevent the formation of the oxide layer on all those interior grain surfaces.

    Research groups have already tried various ways of keeping the aluminum grains “activated” for reaction with water. Some have crushed scrap samples into particles so tiny that the oxide layer doesn’t adhere. But aluminum powders are dangerous, as they can react with humidity and explode. Another approach calls for grinding up scrap samples and adding liquid metals to prevent oxide deposition. But grinding is a costly and energy-intensive process.

    To Hart, Meroueh, and Eagar, the most promising approach — first introduced by Jonathan Slocum ScD ’18 while he was working in Hart’s research group — involved pre-treating the solid aluminum by painting liquid metals on top and allowing them to permeate through the grain boundaries.

    To determine the effectiveness of that approach, the researchers needed to confirm that the liquid metals would reach the internal grain surfaces, with and without alloying elements present. And they had to establish how long it would take for the liquid metal to coat all of the grains in pure aluminum and its alloys.

    They started by combining two metals — gallium and indium — in specific proportions to create a “eutectic” mixture; that is, a mixture that would remain in liquid form at room temperature. They coated their samples with the eutectic and allowed it to penetrate for time periods ranging from 48 to 96 hours. They then exposed the samples to water and monitored the hydrogen yield (the amount formed) and flow rate for 250 minutes. After 48 hours, they also took high-magnification scanning electron microscope (SEM) images so they could observe the boundaries between adjacent aluminum grains.

    Based on the hydrogen yield measurements and the SEM images, the MIT team concluded that the gallium-indium eutectic does naturally permeate and reach the interior grain surfaces. However, the rate and extent of penetration vary with the alloy. The permeation rate was the same in silicon-doped aluminum samples as in pure aluminum samples but slower in magnesium-doped samples.

    Perhaps most interesting were the results from samples doped with both silicon and magnesium — an aluminum alloy often found in recycling streams. Silicon and magnesium chemically bond to form magnesium silicide, which occurs as solid deposits on the internal grain surfaces. Meroueh hypothesized that when both silicon and magnesium are present in scrap aluminum, those deposits can act as barriers that impede the flow of the gallium-indium eutectic.

    The experiments and images confirmed her hypothesis: The solid deposits did act as barriers, and images of samples pre-treated for 48 hours showed that permeation wasn’t complete. Clearly, a lengthy pre-treatment period would be critical for maximizing the hydrogen yield from scraps of aluminum containing both silicon and magnesium.

    Meroueh cites several benefits to the process they used. “You don’t have to apply any energy for the gallium-indium eutectic to work its magic on aluminum and get rid of that oxide layer,” she says. “Once you’ve activated your aluminum, you can drop it in water, and it’ll generate hydrogen — no energy input required.” Even better, the eutectic doesn’t chemically react with the aluminum. “It just physically moves around in between the grains,” she says. “At the end of the process, I could recover all of the gallium and indium I put in and use it again” — a valuable feature as gallium and (especially) indium are costly and in relatively short supply.

    Impacts of alloying elements on hydrogen generation

    The researchers next investigated how the presence of alloying elements affects hydrogen generation. They tested samples that had been treated with the eutectic for 96 hours; by then, the hydrogen yield and flow rates had leveled off in all the samples.

    The presence of 0.6 percent silicon increased the hydrogen yield for a given weight of aluminum by 20 percent compared to pure aluminum — even though the silicon-containing sample had less aluminum than the pure aluminum sample. In contrast, the presence of 1 percent magnesium produced far less hydrogen, while adding both silicon and magnesium pushed the yield up, but not to the level of pure aluminum.

    The presence of silicon also greatly accelerated the reaction rate, producing a far higher peak in the flow rate but cutting short the duration of hydrogen output. The presence of magnesium produced a lower flow rate but allowed the hydrogen output to remain fairly steady over time. And once again, aluminum with both alloying elements produced a flow rate between that of magnesium-doped and pure aluminum.

    Those results provide practical guidance on how to adjust the hydrogen output to match the operating needs of a hydrogen-consuming device. If the starting material is commercially pure aluminum, adding small amounts of carefully selected alloying elements can tailor the hydrogen yield and flow rate. If the starting material is scrap aluminum, careful choice of the source can be key. For high, brief bursts of hydrogen, pieces of silicon-containing aluminum from an auto junkyard could work well. For lower but longer flows, magnesium-containing scraps from the frame of a demolished building might be better. For results somewhere in between, aluminum containing both silicon and magnesium should work well; such material is abundantly available from scrapped cars and motorcycles, yachts, bicycle frames, and even smartphone cases.

    It should also be possible to combine scraps of different aluminum alloys to tune the outcome, notes Meroueh. “If I have a sample of activated aluminum that contains just silicon and another sample that contains just magnesium, I can put them both into a container of water and let them react,” she says. “So I get the fast ramp-up in hydrogen production from the silicon and then the magnesium takes over and has that steady output.”

    Another opportunity for tuning: Reducing grain size

    Another practical way to affect hydrogen production could be to reduce the size of the aluminum grains — a change that should increase the total surface area available for reactions to occur.

    To investigate that approach, the researchers requested specially customized samples from their supplier. Using standard industrial procedures, the Novelis experts first fed each sample through two rollers, squeezing it from the top and bottom so that the internal grains were flattened. They then heated each sample until the long, flat grains had reorganized and shrunk to a targeted size.

    In a series of carefully designed experiments, the MIT team found that reducing the grain size increased the efficiency and decreased the duration of the reaction to varying degrees in the different samples. Again, the presence of particular alloying elements had a major effect on the outcome.

    Needed: A revised theory that explains observations

    Throughout their experiments, the researchers encountered some unexpected results. For example, standard corrosion theory predicts that pure aluminum will generate more hydrogen than silicon-doped aluminum will — the opposite of what they observed in their experiments.

    To shed light on the underlying chemical reactions, Hart, Meroueh, and Eagar investigated hydrogen “flux,” that is, the volume of hydrogen generated over time on each square centimeter of aluminum surface, including the interior grains. They examined three grain sizes for each of their four compositions and collected thousands of data points measuring hydrogen flux.
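    Flux, as defined above, is simply the measured hydrogen volume normalized by elapsed time and reacting surface area. A minimal sketch with hypothetical readings (the numbers are not from the study):

```python
def hydrogen_flux(volume_ml, minutes, area_cm2):
    """Hydrogen flux as defined in the text: volume of H2 generated
    per minute per square centimeter of aluminum surface."""
    return volume_ml / (minutes * area_cm2)

# Hypothetical reading: 300 mL of H2 over 250 minutes from 2 cm^2 of surface
print(hydrogen_flux(300.0, 250.0, 2.0))  # 0.6 mL/(min*cm^2)
```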

    Their results show that reducing grain size has significant effects. It increases the peak hydrogen flux from silicon-doped aluminum as much as 100 times and from the other three compositions by 10 times. With both pure aluminum and silicon-containing aluminum, reducing grain size also decreases the delay before the peak flux and increases the rate of decline afterward. With magnesium-containing aluminum, reducing the grain size brings about an increase in peak hydrogen flux and results in a slightly faster decline in the rate of hydrogen output. With both silicon and magnesium present, the hydrogen flux over time resembles that of magnesium-containing aluminum when the grain size is not manipulated. When the grain size is reduced, the hydrogen output characteristics begin to resemble behavior observed in silicon-containing aluminum. That outcome was unexpected because when silicon and magnesium are both present, they react to form magnesium silicide, resulting in a new type of aluminum alloy with its own properties.

    The researchers stress the benefits of developing a better fundamental understanding of the underlying chemical reactions involved. In addition to guiding the design of practical systems, it might help them find a replacement for the expensive indium in their pre-treatment mixture. Other work has shown that gallium will naturally permeate through the grain boundaries of aluminum. “At this point, we know that the indium in our eutectic is important, but we don’t really understand what it does, so we don’t know how to replace it,” says Hart.

    But already Hart, Meroueh, and Eagar have demonstrated two practical ways of tuning the hydrogen reaction rate: by adding certain elements to the aluminum and by manipulating the size of the interior aluminum grains. In combination, those approaches can deliver significant results. “If you go from magnesium-containing aluminum with the largest grain size to silicon-containing aluminum with the smallest grain size, you get a hydrogen reaction rate that differs by two orders of magnitude,” says Meroueh. “That’s huge if you’re trying to design a real system that would use this reaction.”

    This research was supported through the MIT Energy Initiative by ExxonMobil-MIT Energy Fellowships awarded to Laureen Meroueh PhD ’20 from 2018 to 2020.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.


    Amy Watterson: Model engineer

    “I love that we are doing something that no one else is doing.”

    Amy Watterson is excited when she talks about SPARC, the pilot fusion plant being developed by MIT spinoff Commonwealth Fusion Systems (CFS). Since being hired as a mechanical engineer at the Plasma Science and Fusion Center (PSFC) two years ago, Watterson has found her skills stretching to accommodate the multiple needs of the project.

    Fusion, which fuels the sun and stars, has long been sought as a carbon-free energy source for the world. For decades researchers have pursued the “tokamak,” a doughnut-shaped vacuum chamber where hot plasma can be contained by magnetic fields and heated to the point where fusion occurs. Sustaining the fusion reactions long enough to draw energy from them has been a challenge.

    Watterson is intimately aware of this difficulty. Much of her life she has heard the quip, “Fusion is 50 years away and always will be.” The daughter of PSFC research scientist Catherine Fiore, who headed the PSFC’s Office of Environment, Safety and Health, and Reich Watterson, an optical engineer working at the center, she had watched her parents devote years to making fusion a reality. She determined before entering Rensselaer Polytechnic Institute that she could forgo any attempt to follow her parents into a field that might not produce results during her career.

    Working on SPARC has changed her mindset. Taking advantage of a novel high-temperature superconducting tape, SPARC’s magnets will be compact while generating magnetic fields stronger than would be possible from other mid-sized tokamaks, producing more fusion power. It suggests a high-field device that produces net fusion gain is not 50 years away. SPARC is scheduled to begin operation in 2025.

    An education in modeling

    Watterson’s current excitement, and focus, is due to an approaching milestone for SPARC: a test of the Toroidal Field Magnet Coil (TFMC), a scaled prototype for the HTS magnets that will surround SPARC’s toroidal vacuum chamber. Its design and manufacture have been shaped by computer models and simulations. As part of a large research team, Watterson has received an education in modeling over the past two years.

    Computer models move scientific experiments forward by allowing researchers to predict what will happen to an experiment — or its materials — if a parameter is changed. Modeling a component of the TFMC, for example, researchers can test how it is affected by varying amounts of current, different temperatures or different materials. With this information they can make choices that will improve the success of the experiment.

    In preparation for the magnet testing, Watterson has modeled aspects of the cryogenic system that will circulate helium gas around the TFMC to keep it cold enough to remain superconducting. Taking into consideration the amount of cooling entering the system, the flow rate of the helium, the resistance created by valves and transfer lines, and other parameters, she can model how much helium flow will be necessary to guarantee the magnet stays cold enough. Adjusting a parameter can make the difference between a magnet remaining superconducting and becoming overheated or even damaged.
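    At its simplest, sizing a coolant loop comes down to an energy balance: the mass flow must be large enough to carry the heat load away within an allowable temperature rise. The sketch below is a rough first-pass estimate of ours, not the team’s model; it assumes a constant specific heat, which, as noted below, is precisely what real helium near cryogenic conditions does not offer:

```python
CP_HELIUM = 5193.0  # J/(kg*K), specific heat of helium at constant pressure
                    # (treated as constant here -- a simplifying assumption)

def required_mass_flow(heat_load_w, delta_t_k):
    """m_dot = Q / (cp * dT): kg/s of helium needed to absorb heat_load_w
    while warming by no more than delta_t_k."""
    return heat_load_w / (CP_HELIUM * delta_t_k)

# Hypothetical case: 500 W heat load, 2 K allowable temperature rise
print(required_mass_flow(500.0, 2.0))  # ~0.048 kg/s
```

    A production model replaces the constant cp with temperature- and pressure-dependent property tables and solves the resulting coupled equations iteratively around the loop.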

    Watterson and her teammates have also modeled pressures and stress on the inside of the TFMC. Pumping helium through the coil to cool it down will add 20 atmospheres of pressure, which could create a degree of flex in elements of the magnet that are welded down. Modeling can help determine how much pressure a weld can sustain.

    “How thick does a weld need to be, and where should you put the weld so that it doesn’t break — that’s something you don’t want to leave until you’re finally assembling it,” says Watterson.

    Modeling the behavior of helium is particularly challenging because its properties change significantly as the pressure and temperature change.

    “A few degrees or a little pressure will affect the fluid’s viscosity, density, thermal conductivity, and heat capacity,” says Watterson. “The flow has different pressures and temperatures at different places in the cryogenic loop. You end up with a set of equations that are very dependent on each other, which makes it a challenge to solve.”

    Role model

    Watterson notes that her modeling depends on the contributions of colleagues at the PSFC, and praises the collaborative spirit among researchers and engineers, a community that now feels like family. Her teammates have been her mentors. “I’ve learned so much more on the job in two years than I did in four years at school,” she says.

    She realizes that having her mother as a role model in her own family has always made it easier for her to imagine becoming a scientist or engineer. Tracing her early passion for engineering to a middle school Lego robotics tournament, her eyes widen as she talks about the need for more female engineers, and the importance of encouraging girls to believe they are equal to the challenge.

    “I want to be a role model and tell them ‘I’m a successful engineer, you can be too.’ Something I run into a lot is that little girls will say, ‘I can’t be an engineer, I’m not cut out for that.’ And I say, ‘Well that’s not true. Let me show you. If you can make this Lego robot, then you can be an engineer.’ And it turns out they usually can.”

    Then, as if making an adjustment to one of her computer models, she continues.

    “Actually, they always can.”