More stories

  • MIT-designed project achieves major advance toward fusion energy

    It was a moment three years in the making, based on intensive research and design work: On Sept. 5, for the first time, a large high-temperature superconducting electromagnet was ramped up to a field strength of 20 tesla, the most powerful magnetic field of its kind ever created on Earth. That successful demonstration helps resolve the greatest uncertainty in the quest to build the world’s first fusion power plant that can produce more power than it consumes, according to the project’s leaders at MIT and startup company Commonwealth Fusion Systems (CFS).

    That advance paves the way, they say, for the long-sought creation of practical, inexpensive, carbon-free power plants that could make a major contribution to limiting the effects of global climate change.

    “Fusion in a lot of ways is the ultimate clean energy source,” says Maria Zuber, MIT’s vice president for research and E. A. Griswold Professor of Geophysics. “The amount of power that is available is really game-changing.” The fuel used to create fusion energy comes from water, and “the Earth is full of water — it’s a nearly unlimited resource. We just have to figure out how to utilize it.”

    Developing the new magnet is seen as the greatest technological hurdle to making that happen; its successful operation now opens the door to demonstrating fusion in a lab on Earth, which has been pursued for decades with limited progress. With the magnet technology now successfully demonstrated, the MIT-CFS collaboration is on track to build the world’s first fusion device that can create and confine a plasma that produces more energy than it consumes. That demonstration device, called SPARC, is targeted for completion in 2025.

    “The challenges of making fusion happen are both technical and scientific,” says Dennis Whyte, director of MIT’s Plasma Science and Fusion Center, which is working with CFS to develop SPARC. But once the technology is proven, he says, “it’s an inexhaustible, carbon-free source of energy that you can deploy anywhere and at any time. It’s really a fundamentally new energy source.”

    Whyte, who is the Hitachi America Professor of Engineering, says this week’s demonstration represents a major milestone, addressing the biggest questions remaining about the feasibility of the SPARC design. “It’s really a watershed moment, I believe, in fusion science and technology,” he says.

    The sun in a bottle

    Fusion is the process that powers the sun: the merger of two small atoms to make a larger one, releasing prodigious amounts of energy. But the process requires temperatures far beyond what any solid material could withstand. To harness the sun’s power source here on Earth, what’s needed is a way of capturing and containing something that hot — 100 million degrees or more — by suspending it so that it cannot come into contact with anything solid.

    That’s done through intense magnetic fields, which form a kind of invisible bottle to contain the hot swirling soup of protons and electrons, called a plasma. Because the particles have an electric charge, they are strongly controlled by the magnetic fields, and the most widely used configuration for containing them is a donut-shaped device called a tokamak. Most of these devices have produced their magnetic fields using conventional electromagnets made of copper, but the latest and largest version under construction in France, called ITER, uses what are known as low-temperature superconductors.

    The major innovation in the MIT-CFS fusion design is the use of high-temperature superconductors, which enable a much stronger magnetic field in a smaller space. This design was made possible by a new kind of superconducting material that became commercially available a few years ago. The idea initially arose as a class project in a nuclear engineering class taught by Whyte. The idea seemed so promising that it continued to be developed over the next few iterations of that class, leading to the ARC power plant design concept in early 2015. SPARC, designed to be about half the size of ARC, is a testbed to prove the concept before construction of the full-size, power-producing plant.

    Until now, the only way to achieve the colossally powerful magnetic fields needed to create a magnetic “bottle” capable of containing plasma heated up to hundreds of millions of degrees was to make them larger and larger. But the new high-temperature superconductor material, made in the form of a flat, ribbon-like tape, makes it possible to achieve a higher magnetic field in a smaller device, equaling the performance that would be achieved in an apparatus 40 times larger in volume using conventional low-temperature superconducting magnets. That leap in power versus size is the key element in ARC’s revolutionary design.
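
    To see roughly where that volume figure comes from, note the rule of thumb that fusion power density scales as the fourth power of the magnetic field, so at fixed output the required plasma volume shrinks rapidly as the field grows. The short sketch below works through that arithmetic; the field values are illustrative assumptions (an ITER-class versus an HTS-class on-axis field), not numbers taken from the MIT-CFS design.

    ```python
    # Rule-of-thumb scaling (an assumption here, not from the article):
    # fusion power density goes roughly as B^4, so at fixed total output
    # the required plasma volume scales as (B_low / B_high)^4.

    B_low = 5.3    # tesla, on-axis field typical of an ITER-class LTS design (illustrative)
    B_high = 12.2  # tesla, on-axis field enabled by HTS magnets (illustrative)

    volume_shrink = (B_high / B_low) ** 4
    print(f"Comparable performance in a device ~{volume_shrink:.0f}x smaller by volume")
    # ~28x with these inputs, the same order of magnitude as the 40x figure above.
    ```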

    The use of the new high-temperature superconducting magnets makes it possible to apply decades of experimental knowledge gained from the operation of tokamak experiments, including MIT’s own Alcator series. The new approach, led by Zach Hartwig, the MIT principal investigator and the Robert N. Noyce Career Development Assistant Professor of Nuclear Science and Engineering, uses a well-known design but scales everything down to about half the linear size and still achieves the same operational conditions because of the higher magnetic field.

    A series of scientific papers published last year outlined the physical basis and, by simulation, confirmed the viability of the new fusion device. The papers showed that, if the magnets worked as expected, the whole fusion system should indeed produce net power output, for the first time in decades of fusion research.

    Martin Greenwald, deputy director and senior research scientist at the PSFC, says unlike some other designs for fusion experiments, “the niche that we were filling was to use conventional plasma physics, and conventional tokamak designs and engineering, but bring to it this new magnet technology. So, we weren’t requiring innovation in a half-dozen different areas. We would just innovate on the magnet, and then apply the knowledge base of what’s been learned over the last decades.”

    That combination of scientifically established design principles and game-changing magnetic field strength is what makes it possible to achieve a plant that could be economically viable and developed on a fast track. “It’s a big moment,” says Bob Mumgaard, CEO of CFS. “We now have a platform that is both scientifically very well-advanced, because of the decades of research on these machines, and also commercially very interesting. What it does is allow us to build devices faster, smaller, and at less cost,” he says of the successful magnet demonstration. 

    Proof of the concept

    Bringing that new magnet concept to reality required three years of intensive work on design, establishing supply chains, and working out manufacturing methods for magnets that may eventually need to be produced by the thousands.

    “We built a first-of-a-kind superconducting magnet. It required a lot of work to create unique manufacturing processes and equipment. As a result, we are now well-prepared to ramp up for SPARC production,” says Joy Dunn, head of operations at CFS. “We started with a physics model and a CAD design, and worked through lots of development and prototypes to turn a design on paper into this actual physical magnet.” That entailed building manufacturing capabilities and testing facilities, including an iterative process with multiple suppliers of the superconducting tape, to help them reach the ability to produce material that met the needed specifications — and for which CFS is now overwhelmingly the world’s biggest user.

    They worked with two possible magnet designs in parallel, both of which ended up meeting the design requirements, she says. “It really came down to which one would revolutionize the way that we make superconducting magnets, and which one was easier to build.” The design they adopted clearly stood out in that regard, she says.

    In this test, the new magnet was gradually powered up in a series of steps until reaching the goal of a 20 tesla magnetic field — the highest field strength ever for a high-temperature superconducting fusion magnet. The magnet is composed of 16 plates stacked together, each one of which by itself would be the most powerful high-temperature superconducting magnet in the world.

    “Three years ago we announced a plan,” says Mumgaard, “to build a 20-tesla magnet, which is what we will need for future fusion machines.” That goal has now been achieved, right on schedule, even with the pandemic, he says.

    Citing the series of physics papers published last year, Brandon Sorbom, the chief science officer at CFS, says “basically the papers conclude that if we build the magnet, all of the physics will work in SPARC. So, this demonstration answers the question: Can they build the magnet? It’s a very exciting time! It’s a huge milestone.”

    The next step will be building SPARC, a smaller-scale version of the planned ARC power plant. The successful operation of SPARC will demonstrate that a full-scale commercial fusion power plant is practical, clearing the way for the rapid design and construction of that pioneering device to proceed at full speed.

    Zuber says that “I now am genuinely optimistic that SPARC can achieve net positive energy, based on the demonstrated performance of the magnets. The next step is to scale up, to build an actual power plant. There are still many challenges ahead, not the least of which is developing a design that allows for reliable, sustained operation. And realizing that the goal here is commercialization, another major challenge will be economic. How do you design these power plants so it will be cost effective to build and deploy them?”

    Someday in a hoped-for future, when there may be thousands of fusion plants powering clean electric grids around the world, Zuber says, “I think we’re going to look back and think about how we got there, and I think the demonstration of the magnet technology, for me, is the time when I believed that, wow, we can really do this.”

    The successful creation of a power-producing fusion device would be a tremendous scientific achievement, Zuber notes. But that’s not the main point. “None of us are trying to win trophies at this point. We’re trying to keep the planet livable.”

  • Making the case for hydrogen in a zero-carbon economy

    As the United States races to achieve its goal of zero-carbon electricity generation by 2035, energy providers are swiftly ramping up renewable resources such as solar and wind. But because these technologies churn out electrons only when the sun shines and the wind blows, they need backup from other energy sources, especially during seasons of high electric demand. Currently, plants burning fossil fuels, primarily natural gas, fill in the gaps.

    “As we move to more and more renewable penetration, this intermittency will make a greater impact on the electric power system,” says Emre Gençer, a research scientist at the MIT Energy Initiative (MITEI). That’s because grid operators will increasingly resort to fossil-fuel-based “peaker” plants that compensate for the intermittency of the variable renewable energy (VRE) sources of sun and wind. “If we’re to achieve zero-carbon electricity, we must replace all greenhouse gas-emitting sources,” Gençer says.

    Low- and zero-carbon alternatives to greenhouse-gas-emitting peaker plants are in development, such as arrays of lithium-ion batteries and hydrogen power generation. But each of these evolving technologies comes with its own set of advantages and constraints, and it has proven difficult to frame the debate about these options in a way that’s useful for policymakers, investors, and utilities engaged in the clean energy transition.

    Now, Gençer and Drake D. Hernandez SM ’21 have come up with a model that makes it possible to pin down the pros and cons of these peaker-plant alternatives with greater precision. Their hybrid technological and economic analysis, based on a detailed inventory of California’s power system, was published online last month in Applied Energy. While their work focuses on the most cost-effective solutions for replacing peaker power plants, it also contains insights intended to contribute to the larger conversation about transforming energy systems.

    “Our study’s essential takeaway is that hydrogen-fired power generation can be the more economical option when compared to lithium-ion batteries — even today, when the costs of hydrogen production, transmission, and storage are very high,” says Hernandez, who worked on the study while a graduate research assistant for MITEI. Adds Gençer, “If there is a place for hydrogen in the cases we analyzed, that suggests there is a promising role for hydrogen to play in the energy transition.”

    Adding up the costs

    California serves as a stellar paradigm for a swiftly shifting power system. The state draws more than 20 percent of its electricity from solar and approximately 7 percent from wind, with more VRE coming online rapidly. This means its peaker plants already play a pivotal role, coming online each evening when the sun goes down or when events such as heat waves drive up electricity use for days at a time.

    “We looked at all the peaker plants in California,” recounts Gençer. “We wanted to know the cost of electricity if we replaced them with hydrogen-fired turbines or with lithium-ion batteries.” The researchers used a core metric called the levelized cost of electricity (LCOE) as a way of comparing the costs of different technologies to each other. LCOE measures the average total cost of building and operating a particular energy-generating asset per unit of total electricity generated over the hypothetical lifetime of that asset.
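
    In code, that definition reduces to discounted lifetime costs divided by discounted lifetime generation. Here is a minimal sketch of the metric; the function and every input below are illustrative placeholders, not values from the Applied Energy study.

    ```python
    # Minimal LCOE calculation: discounted lifetime costs divided by
    # discounted lifetime electricity generation. All inputs are invented
    # placeholders for illustration.

    def lcoe(capex, annual_opex, annual_mwh, lifetime_years, discount_rate):
        """Return the levelized cost of electricity in $/MWh."""
        discounted_costs = capex + sum(
            annual_opex / (1 + discount_rate) ** t
            for t in range(1, lifetime_years + 1)
        )
        discounted_energy = sum(
            annual_mwh / (1 + discount_rate) ** t
            for t in range(1, lifetime_years + 1)
        )
        return discounted_costs / discounted_energy

    # Hypothetical 100 MW peaker running about 15 percent of the year.
    cost = lcoe(capex=80e6, annual_opex=6e6, annual_mwh=130_000,
                lifetime_years=20, discount_rate=0.07)
    print(f"Illustrative LCOE: ${cost:.0f}/MWh")
    ```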

    Selecting 2019 as their base study year, the team looked at the costs of running natural gas-fired peaker plants, which they defined as plants operating 15 percent of the year in response to gaps in intermittent renewable electricity. In addition, they determined the amount of carbon dioxide released by these plants and the expense of abating these emissions. Much of this information was publicly available.

    Coming up with prices for replacing peaker plants with massive arrays of lithium-ion batteries was also relatively straightforward: “There are no technical limitations to lithium-ion, so you can build as many as you want; but they are super expensive in terms of their footprint for energy storage and the mining required to manufacture them,” says Gençer.

    But then came the hard part: nailing down the costs of hydrogen-fired electricity generation. “The most difficult thing is finding cost assumptions for new technologies,” says Hernandez. “You can’t do this through a literature review, so we had many conversations with equipment manufacturers and plant operators.”

    The team considered two different forms of hydrogen fuel to replace natural gas, one produced through electrolyzer facilities that convert water and electricity into hydrogen, and another that reforms natural gas, yielding hydrogen and carbon waste that can be captured to reduce emissions. They also ran the numbers on retrofitting natural gas plants to burn hydrogen as opposed to building entirely new facilities. Their model includes identification of likely locations throughout the state and expenses involved in constructing these facilities.

    The researchers spent months compiling a giant dataset before setting out on the task of analysis. The results from their modeling were clear: “Hydrogen can be a more cost-effective alternative to lithium-ion batteries for peaking operations on a power grid,” says Hernandez. In addition, notes Gençer, “While certain technologies worked better in particular locations, we found that on average, reforming hydrogen rather than electrolytic hydrogen turned out to be the cheapest option for replacing peaker plants.”

    A tool for energy investors

    When he began this project, Gençer admits he “wasn’t hopeful” about hydrogen replacing natural gas in peaker plants. “It was kind of shocking to see in our different scenarios that there was a place for hydrogen.” That’s because the overall price tag for converting a fossil-fuel-based plant to one based on hydrogen is very high, and such conversions likely won’t take place until more sectors of the economy embrace hydrogen, whether as a fuel for transportation or for varied manufacturing and industrial purposes.

    A nascent hydrogen production infrastructure does exist, mainly in the production of ammonia for fertilizer. But enormous investments will be necessary to expand this framework to meet grid-scale needs, driven by purposeful incentives. “With any of the climate solutions proposed today, we will need a carbon tax or carbon pricing; otherwise nobody will switch to new technologies,” says Gençer.

    The researchers believe studies like theirs could help key energy stakeholders make better-informed decisions. To that end, they have integrated their analysis into SESAME, a life cycle and techno-economic assessment tool for a range of energy systems that was developed by MIT researchers. Users can leverage this sophisticated modeling environment to compare costs of energy storage and emissions from different technologies, for instance, or to determine whether it is cost-efficient to replace a natural gas-powered plant with one powered by hydrogen.

    “As utilities, industry, and investors look to decarbonize and achieve zero-emissions targets, they have to weigh the costs of investing in low-carbon technologies today against the potential impacts of climate change moving forward,” says Hernandez, who is currently a senior associate in the energy practice at Charles River Associates. Hydrogen, he believes, will become increasingly cost-competitive as its production costs decline and markets expand.

    A study group member of MITEI’s soon-to-be-published Future of Storage study, Gençer knows that hydrogen alone will not usher in a zero-carbon future. But, he says, “Our research shows we need to seriously consider hydrogen in the energy transition, start thinking about key areas where hydrogen should be used, and start making the massive investments necessary.”

    Funding for this research was provided by MITEI’s Low-Carbon Energy Centers and Future of Storage study.

  • Designing better batteries for electric vehicles

    The urgent need to cut carbon emissions is prompting a rapid move toward electrified mobility and expanded deployment of solar and wind on the electric grid. If those trends escalate as expected, the need for better methods of storing electrical energy will intensify.

    “We need all the strategies we can get to address the threat of climate change,” says Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. “Obviously, developing technologies for grid-based storage at a large scale is critical. But for mobile applications — in particular, transportation — much research is focusing on adapting today’s lithium-ion battery to make versions that are safer, smaller, and can store more energy for their size and weight.”

    Traditional lithium-ion batteries continue to improve, but they have limitations that persist, in part because of their structure. A lithium-ion battery consists of two electrodes — one positive and one negative — sandwiched around an organic (carbon-containing) liquid. As the battery is charged and discharged, electrically charged particles (or ions) of lithium pass from one electrode to the other through the liquid electrolyte.

    One problem with that design is that at certain voltages and temperatures, the liquid electrolyte can become volatile and catch fire. “Batteries are generally safe under normal usage, but the risk is still there,” says Kevin Huang PhD ’15, a research scientist in Olivetti’s group.

    Another problem is that lithium-ion batteries are not well-suited for use in vehicles. Large, heavy battery packs take up space and increase a vehicle’s overall weight, reducing fuel efficiency. But it’s proving difficult to make today’s lithium-ion batteries smaller and lighter while maintaining their energy density — that is, the amount of energy they store per gram of weight.

    To solve those problems, researchers are changing key features of the lithium-ion battery to make an all-solid, or “solid-state,” version. They replace the liquid electrolyte in the middle with a thin, solid electrolyte that’s stable at a wide range of voltages and temperatures. With that solid electrolyte, they use a high-capacity positive electrode and a high-capacity, lithium metal negative electrode that’s far thinner than the usual layer of porous carbon. Those changes make it possible to shrink the overall battery considerably while maintaining its energy-storage capacity, thereby achieving a higher energy density.

    “Those features — enhanced safety and greater energy density — are probably the two most-often-touted advantages of a potential solid-state battery,” says Huang. He then quickly clarifies that “all of these things are prospective, hoped-for, and not necessarily realized.” Nevertheless, the possibility has many researchers scrambling to find materials and designs that can deliver on that promise.

    Thinking beyond the lab

    Researchers have come up with many intriguing options that look promising — in the lab. But Olivetti and Huang believe that additional practical considerations may be important, given the urgency of the climate change challenge. “There are always metrics that we researchers use in the lab to evaluate possible materials and processes,” says Olivetti. Examples might include energy-storage capacity and charge/discharge rate. When performing basic research — which she deems both necessary and important — those metrics are appropriate. “But if the aim is implementation, we suggest adding a few metrics that specifically address the potential for rapid scaling,” she says.

    Based on industry’s experience with current lithium-ion batteries, the MIT researchers and their colleague Gerbrand Ceder, the Daniel M. Tellep Distinguished Professor of Engineering at the University of California at Berkeley, suggest three broad questions that can help identify potential constraints on future scale-up as a result of materials selection. First, with this battery design, could materials availability, supply chains, or price volatility become a problem as production scales up? (Note that the environmental and other concerns raised by expanded mining are outside the scope of this study.) Second, will fabricating batteries from these materials involve difficult manufacturing steps during which parts are likely to fail? And third, do manufacturing measures needed to ensure a high-performance product based on these materials ultimately lower or raise the cost of the batteries produced?

    To demonstrate their approach, Olivetti, Ceder, and Huang examined some of the electrolyte chemistries and battery structures now being investigated by researchers. To select their examples, they turned to previous work in which they and their collaborators used text- and data-mining techniques to gather information on materials and processing details reported in the literature. From that database, they selected a few frequently reported options that represent a range of possibilities.

    Materials and availability

    In the world of solid inorganic electrolytes, there are two main classes of materials — the oxides, which contain oxygen, and the sulfides, which contain sulfur. Olivetti, Ceder, and Huang focused on one promising electrolyte option in each class and examined key elements of concern for each of them.

    The sulfide they considered was LGPS, which combines lithium, germanium, phosphorus, and sulfur. Based on availability considerations, they focused on the germanium, an element that raises concerns in part because it’s not generally mined on its own. Instead, it’s a byproduct produced during the mining of coal and zinc.

    To investigate its availability, the researchers looked at how much germanium was produced annually in the past six decades during coal and zinc mining and then at how much could have been produced. The outcome suggested that 100 times more germanium could have been produced, even in recent years. Given that supply potential, the availability of germanium is not likely to constrain the scale-up of a solid-state battery based on an LGPS electrolyte.

    The situation looked less promising with the researchers’ selected oxide, LLZO, which consists of lithium, lanthanum, zirconium, and oxygen. Extraction and processing of lanthanum are largely concentrated in China, and there’s limited data available, so the researchers didn’t try to analyze its availability. The other three elements are abundantly available. However, in practice, a small quantity of another element — called a dopant — must be added to make LLZO easy to process. So the team focused on tantalum, the most frequently used dopant, as the main element of concern for LLZO.

    Tantalum is produced as a byproduct of tin and niobium mining. Historical data show that the amount of tantalum produced during tin and niobium mining was much closer to the potential maximum than was the case with germanium. So the availability of tantalum is more of a concern for the possible scale-up of an LLZO-based battery.

    But knowing the availability of an element in the ground doesn’t address the steps required to get it to a manufacturer. So the researchers investigated a follow-on question concerning the supply chains for critical elements — mining, processing, refining, shipping, and so on. Assuming that abundant supplies are available, can the supply chains that deliver those materials expand quickly enough to meet the growing demand for batteries?

    In sample analyses, they looked at how much supply chains for germanium and tantalum would need to grow year to year to provide batteries for a projected fleet of electric vehicles in 2030. As an example, an electric vehicle fleet often cited as a goal for 2030 would require production of enough batteries to deliver a total of 100 gigawatt hours of energy. To meet that goal using just LGPS batteries, the supply chain for germanium would need to grow by 50 percent from year to year — a stretch, since the maximum growth rate in the past has been about 7 percent. Using just LLZO batteries, the supply chain for tantalum would need to grow by about 30 percent — a growth rate well above the historical high of about 10 percent.
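
    The arithmetic behind those growth rates is a compound-annual-growth calculation: given today’s supply and the supply needed by a target year, solve for the implied year-over-year rate. A minimal sketch, with invented quantities in place of the study’s data:

    ```python
    # Year-over-year growth rate needed to scale a supply chain from its
    # current level to a target level by a deadline (compound growth).
    # The quantities below are invented for illustration.

    def required_cagr(current_supply, needed_supply, years):
        """Compound annual growth rate that turns current into needed supply."""
        return (needed_supply / current_supply) ** (1 / years) - 1

    # Hypothetical: supply of an element must grow 20-fold between 2021 and 2030.
    rate = required_cagr(current_supply=1.0, needed_supply=20.0, years=9)
    print(f"Required growth: {rate:.0%} per year")  # ~39% per year
    ```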

    Those examples demonstrate the importance of considering both materials availability and supply chains when evaluating different solid electrolytes for their scale-up potential. “Even when the quantity of a material available isn’t a concern, as is the case with germanium, scaling all the steps in the supply chain to match the future production of electric vehicles may require a growth rate that’s literally unprecedented,” says Huang.

    Materials and processing

    In assessing the potential for scale-up of a battery design, another factor to consider is the difficulty of the manufacturing process and how it may impact cost. Fabricating a solid-state battery inevitably involves many steps, and a failure at any step raises the cost of each battery successfully produced. As Huang explains, “You’re not shipping those failed batteries; you’re throwing them away. But you’ve still spent money on the materials and time and processing.”

    As a proxy for manufacturing difficulty, Olivetti, Ceder, and Huang explored the impact of failure rate on overall cost for selected solid-state battery designs in their database. In one example, they focused on the oxide LLZO. LLZO is extremely brittle, and at the high temperatures involved in manufacturing, a large sheet that’s thin enough to use in a high-performance solid-state battery is likely to crack or warp.

    To determine the impact of such failures on cost, they modeled four key processing steps in assembling LLZO-based batteries. At each step, they calculated cost based on an assumed yield — that is, the fraction of total units that were successfully processed without failing. With the LLZO, the yield was far lower than with the other designs they examined; and, as the yield went down, the cost of each kilowatt-hour (kWh) of battery energy went up significantly. For example, when 5 percent more units failed during the final cathode heating step, cost increased by about $30/kWh — a nontrivial change considering that a commonly accepted target cost for such batteries is $100/kWh. Clearly, manufacturing difficulties can have a profound impact on the viability of a design for large-scale adoption.
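
    The mechanism behind that sensitivity can be sketched with a simple throughput model: every unit entering a step incurs that step’s cost, but only the units that survive every step can be sold, so the money spent on failures is carried by the good units. The step costs and yields below are invented for illustration; they are not the values used in the study.

    ```python
    # Cost per successful kWh when each processing step has a cost and a yield.
    # Money is spent on every unit entering a step; only surviving units sell.
    # Step costs ($/unit) and yields below are invented placeholders.

    def cost_per_good_kwh(steps, kwh_per_unit=1.0):
        """steps: list of (cost_added_per_unit, yield_fraction) in process order."""
        spent, units = 0.0, 1.0
        for cost, yield_frac in steps:
            spent += units * cost   # spend on everything entering this step
            units *= yield_frac     # only this fraction survives
        return spent / (units * kwh_per_unit)

    baseline = [(20, 0.98), (15, 0.95), (25, 0.90), (30, 0.85)]
    worse    = [(20, 0.98), (15, 0.95), (25, 0.90), (30, 0.80)]  # final step: 5% more failures
    delta = cost_per_good_kwh(worse) - cost_per_good_kwh(baseline)
    print(f"Extra cost from the added failures: ${delta:.0f}/kWh")
    ```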

    Materials and performance

    One of the main challenges in designing an all-solid battery comes from “interfaces” — that is, where one component meets another. During manufacturing or operation, materials at those interfaces can become unstable. “Atoms start going places that they shouldn’t, and battery performance declines,” says Huang.

    As a result, much research is devoted to coming up with methods of stabilizing interfaces in different battery designs. Many of the methods proposed do increase performance; and as a result, the cost of the battery in dollars per kWh goes down. But implementing such solutions generally involves added materials and time, increasing the cost per kWh during large-scale manufacturing.

    To illustrate that trade-off, the researchers first examined their oxide, LLZO. Here, the goal is to stabilize the interface between the LLZO electrolyte and the negative electrode by inserting a thin layer of tin between the two. They analyzed the impacts — both positive and negative — on cost of implementing that solution. They found that adding the tin separator increases energy-storage capacity and improves performance, which reduces the unit cost in dollars/kWh. But the cost of including the tin layer exceeds the savings so that the final cost is higher than the original cost.

    In another analysis, they looked at a sulfide electrolyte called LPSCl, which consists of lithium, phosphorus, and sulfur with a bit of added chlorine. In this case, the positive electrode incorporates particles of the electrolyte material — a method of ensuring that the lithium ions can find a pathway through the electrolyte to the other electrode. However, the added electrolyte particles are not compatible with other particles in the positive electrode — another interface problem. In this case, a standard solution is to add a “binder,” another material that makes the particles stick together.

    Their analysis confirmed that without the binder, performance is poor, and the cost of the LPSCl-based battery is more than $500/kWh. Adding the binder improves performance significantly, and the cost drops by almost $300/kWh. In this case, the cost of adding the binder during manufacturing is so low that essentially all of the cost decrease from adding the binder is realized. Here, the method implemented to solve the interface problem pays off in lower costs.

    The researchers performed similar studies of other promising solid-state batteries reported in the literature, and their results were consistent: The choice of battery materials and processes can affect not only near-term outcomes in the lab but also the feasibility and cost of manufacturing the proposed solid-state battery at the scale needed to meet future demand. The results also showed that considering all three factors together — availability, processing needs, and battery performance — is important because there may be collective effects and trade-offs involved.

    Olivetti is proud of the range of concerns the team’s approach can probe. But she stresses that it’s not meant to replace traditional metrics used to guide materials and processing choices in the lab. “Instead, it’s meant to complement those metrics by also looking broadly at the sorts of things that could get in the way of scaling” — an important consideration given what Huang calls “the urgent ticking clock” of clean energy and climate change.

    This research was supported by the Seed Fund Program of the MIT Energy Initiative (MITEI) Low-Carbon Energy Center for Energy Storage; by Shell, a founding member of MITEI; and by the U.S. Department of Energy’s Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, under the Advanced Battery Materials Research Program. The text mining work was supported by the National Science Foundation, the Office of Naval Research, and MITEI.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Using aluminum and water to make clean hydrogen fuel — when and where it’s needed

    As the world works to move away from fossil fuels, many researchers are investigating whether clean hydrogen fuel can play an expanded role in sectors from transportation and industry to buildings and power generation. It could be used in fuel cell vehicles, heat-producing boilers, electricity-generating gas turbines, systems for storing renewable energy, and more.

    But while using hydrogen doesn’t generate carbon emissions, making it typically does. Today, almost all hydrogen is produced using fossil fuel-based processes that together generate more than 2 percent of all global greenhouse gas emissions. In addition, hydrogen is often produced in one location and consumed in another, which means its use also presents logistical challenges.

    A promising reaction

    Another option for producing hydrogen comes from a perhaps surprising source: reacting aluminum with water. Aluminum metal will readily react with water at room temperature to form aluminum hydroxide and hydrogen. That reaction doesn’t typically take place because a layer of aluminum oxide naturally coats the raw metal, preventing it from coming directly into contact with water.

    Using the aluminum-water reaction to generate hydrogen doesn’t produce any greenhouse gas emissions, and it promises to solve the transportation problem for any location with available water. Simply move the aluminum and then react it with water on-site. “Fundamentally, the aluminum becomes a mechanism for storing hydrogen — and a very effective one,” says Douglas P. Hart, professor of mechanical engineering at MIT. “Using aluminum as our source, we can ‘store’ hydrogen at a density that’s 10 times greater than if we just store it as a compressed gas.”
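
    That storage claim can be sanity-checked with back-of-envelope stoichiometry for the aluminum-water reaction (2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2). The molar masses and aluminum density below are textbook values; the compressed-gas comparison point is an assumption, since the article doesn’t specify a storage pressure.

    ```python
    # Hydrogen yielded per liter of solid aluminum, from the overall reaction
    # 2 Al + 6 H2O -> 2 Al(OH)3 + 3 H2 (1.5 mol H2 per mol Al), compared with
    # compressed hydrogen gas. The 700-bar figure is an assumed comparison point.

    M_AL, M_H2 = 26.98, 2.016   # molar masses, g/mol
    RHO_AL = 2700.0             # grams of aluminum per liter of solid metal
    RHO_H2_700BAR = 42.0        # grams of H2 per liter at ~700 bar, room temperature

    h2_g_per_liter_al = 1.5 * M_H2 / M_AL * RHO_AL
    print(f"H2 stored per liter of Al: {h2_g_per_liter_al:.0f} g")                 # ~303 g
    print(f"Versus 700-bar gas: {h2_g_per_liter_al / RHO_H2_700BAR:.1f}x denser")  # ~7x
    # Same order of magnitude as the ~10x figure Hart cites.
    ```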

    Two problems have kept aluminum from being employed as a safe, economical source for hydrogen generation. The first problem is ensuring that the aluminum surface is clean and available to react with water. To that end, a practical system must include a means of first modifying the oxide layer and then keeping it from re-forming as the reaction proceeds.

    The second problem is that pure aluminum is energy-intensive to mine and produce, so any practical approach needs to use scrap aluminum from various sources. But scrap aluminum is not an easy starting material. It typically occurs in an alloyed form, meaning that it contains other elements that are added to change the properties or characteristics of the aluminum for different uses. For example, adding magnesium increases strength and corrosion-resistance, adding silicon lowers the melting point, and adding a little of both makes an alloy that’s moderately strong and corrosion-resistant.

    Despite considerable research on aluminum as a source of hydrogen, two key questions remain: What’s the best way to prevent the adherence of an oxide layer on the aluminum surface, and how do alloying elements in a piece of scrap aluminum affect the total amount of hydrogen generated and the rate at which it is generated?

    “If we’re going to use scrap aluminum for hydrogen generation in a practical application, we need to be able to better predict what hydrogen generation characteristics we’re going to observe from the aluminum-water reaction,” says Laureen Meroueh PhD ’20, who earned her doctorate in mechanical engineering.

    Since the fundamental steps in the reaction aren’t well understood, it’s been hard to predict the rate and volume at which hydrogen forms from scrap aluminum, which can contain varying types and concentrations of alloying elements. So Hart, Meroueh, and Thomas W. Eagar, a professor of materials engineering and engineering management in the MIT Department of Materials Science and Engineering, decided to examine — in a systematic fashion — the impacts of those alloying elements on the aluminum-water reaction and on a promising technique for preventing the formation of the interfering oxide layer.

    To prepare, they had experts at Novelis Inc. fabricate samples of pure aluminum and of specific aluminum alloys made of commercially pure aluminum combined with either 0.6 percent silicon (by weight), 1 percent magnesium, or both — compositions that are typical of scrap aluminum from a variety of sources. Using those samples, the MIT researchers performed a series of tests to explore different aspects of the aluminum-water reaction.

    Pre-treating the aluminum

    The first step was to demonstrate an effective means of penetrating the oxide layer that forms on aluminum in the air. Solid aluminum is made up of tiny grains that are packed together with occasional boundaries where they don’t line up perfectly. To maximize hydrogen production, researchers would need to prevent the formation of the oxide layer on all those interior grain surfaces.

    Research groups have already tried various ways of keeping the aluminum grains “activated” for reaction with water. Some have crushed scrap samples into particles so tiny that the oxide layer doesn’t adhere. But aluminum powders are dangerous, as they can react with humidity and explode. Another approach calls for grinding up scrap samples and adding liquid metals to prevent oxide deposition. But grinding is a costly and energy-intensive process.

    To Hart, Meroueh, and Eagar, the most promising approach — first introduced by Jonathan Slocum ScD ’18 while he was working in Hart’s research group — involved pre-treating the solid aluminum by painting liquid metals on top and allowing them to permeate through the grain boundaries.

    To determine the effectiveness of that approach, the researchers needed to confirm that the liquid metals would reach the internal grain surfaces, with and without alloying elements present. And they had to establish how long it would take for the liquid metal to coat all of the grains in pure aluminum and its alloys.

    They started by combining two metals — gallium and indium — in specific proportions to create a “eutectic” mixture; that is, a mixture that would remain in liquid form at room temperature. They coated their samples with the eutectic and allowed it to penetrate for time periods ranging from 48 to 96 hours. They then exposed the samples to water and monitored the hydrogen yield (the amount formed) and flow rate for 250 minutes. After 48 hours, they also took high-magnification scanning electron microscope (SEM) images so they could observe the boundaries between adjacent aluminum grains.

    Based on the hydrogen yield measurements and the SEM images, the MIT team concluded that the gallium-indium eutectic does naturally permeate and reach the interior grain surfaces. However, the rate and extent of penetration vary with the alloy. The permeation rate was the same in silicon-doped aluminum samples as in pure aluminum samples but slower in magnesium-doped samples.

    Perhaps most interesting were the results from samples doped with both silicon and magnesium — an aluminum alloy often found in recycling streams. Silicon and magnesium chemically bond to form magnesium silicide, which occurs as solid deposits on the internal grain surfaces. Meroueh hypothesized that when both silicon and magnesium are present in scrap aluminum, those deposits can act as barriers that impede the flow of the gallium-indium eutectic.

    The experiments and images confirmed her hypothesis: The solid deposits did act as barriers, and images of samples pre-treated for 48 hours showed that permeation wasn’t complete. Clearly, a lengthy pre-treatment period would be critical for maximizing the hydrogen yield from scraps of aluminum containing both silicon and magnesium.

    Meroueh cites several benefits to the process they used. “You don’t have to apply any energy for the gallium-indium eutectic to work its magic on aluminum and get rid of that oxide layer,” she says. “Once you’ve activated your aluminum, you can drop it in water, and it’ll generate hydrogen — no energy input required.” Even better, the eutectic doesn’t chemically react with the aluminum. “It just physically moves around in between the grains,” she says. “At the end of the process, I could recover all of the gallium and indium I put in and use it again” — a valuable feature as gallium and (especially) indium are costly and in relatively short supply.

    Impacts of alloying elements on hydrogen generation

    The researchers next investigated how the presence of alloying elements affects hydrogen generation. They tested samples that had been treated with the eutectic for 96 hours; by then, the hydrogen yield and flow rates had leveled off in all the samples.

    The presence of 0.6 percent silicon increased the hydrogen yield for a given weight of aluminum by 20 percent compared to pure aluminum — even though the silicon-containing sample had less aluminum than the pure aluminum sample. In contrast, the presence of 1 percent magnesium produced far less hydrogen, while adding both silicon and magnesium pushed the yield up, but not to the level of pure aluminum.

    The presence of silicon also greatly accelerated the reaction rate, producing a far higher peak in the flow rate but cutting short the duration of hydrogen output. The presence of magnesium produced a lower flow rate but allowed the hydrogen output to remain fairly steady over time. And once again, aluminum with both alloying elements produced a flow rate between that of magnesium-doped and pure aluminum.

    Those results provide practical guidance on how to adjust the hydrogen output to match the operating needs of a hydrogen-consuming device. If the starting material is commercially pure aluminum, adding small amounts of carefully selected alloying elements can tailor the hydrogen yield and flow rate. If the starting material is scrap aluminum, careful choice of the source can be key. For high, brief bursts of hydrogen, pieces of silicon-containing aluminum from an auto junkyard could work well. For lower but longer flows, magnesium-containing scraps from the frame of a demolished building might be better. For results somewhere in between, aluminum containing both silicon and magnesium should work well; such material is abundantly available from scrapped cars and motorcycles, yachts, bicycle frames, and even smartphone cases.

    It should also be possible to combine scraps of different aluminum alloys to tune the outcome, notes Meroueh. “If I have a sample of activated aluminum that contains just silicon and another sample that contains just magnesium, I can put them both into a container of water and let them react,” she says. “So I get the fast ramp-up in hydrogen production from the silicon and then the magnesium takes over and has that steady output.”
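
    If the two activated scraps react independently in the same bath, the combined output is simply the sum of their individual flow-rate curves. The sketch below illustrates that superposition; both profiles are invented stand-ins (a sharp early burst for silicon-doped scrap, a low steady flow for magnesium-doped scrap), not measured data from the study.

    ```python
    # Superposing two hypothetical hydrogen flow-rate profiles to mimic mixing
    # silicon-doped and magnesium-doped activated aluminum in one water bath.
    # Both curve shapes and constants are invented for illustration.

    import math

    def flow_si(t_min):   # hypothetical: fast ramp-up, quick decay
        return 10.0 * math.exp(-t_min / 20.0)

    def flow_mg(t_min):   # hypothetical: lower but steady output
        return 2.0 * (1.0 - math.exp(-t_min / 10.0))

    for t in (5, 30, 120, 250):
        print(f"t = {t:>3} min: combined flow ~ {flow_si(t) + flow_mg(t):.1f} (arbitrary units)")
    ```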

    Another opportunity for tuning: Reducing grain size

    Another practical way to affect hydrogen production could be to reduce the size of the aluminum grains — a change that should increase the total surface area available for reactions to occur.

    To investigate that approach, the researchers requested specially customized samples from their supplier. Using standard industrial procedures, the Novelis experts first fed each sample through two rollers, squeezing it from the top and bottom so that the internal grains were flattened. They then heated each sample until the long, flat grains had reorganized and shrunk to a targeted size.

    In a series of carefully designed experiments, the MIT team found that reducing the grain size increased the efficiency and decreased the duration of the reaction to varying degrees in the different samples. Again, the presence of particular alloying elements had a major effect on the outcome.

    Needed: A revised theory that explains observations

    Throughout their experiments, the researchers encountered some unexpected results. For example, standard corrosion theory predicts that pure aluminum will generate more hydrogen than silicon-doped aluminum will — the opposite of what they observed in their experiments.

    To shed light on the underlying chemical reactions, Hart, Meroueh, and Eagar investigated hydrogen “flux,” that is, the volume of hydrogen generated over time on each square centimeter of aluminum surface, including the interior grains. They examined three grain sizes for each of their four compositions and collected thousands of data points measuring hydrogen flux.

    Their results show that reducing grain size has significant effects. It increases the peak hydrogen flux from silicon-doped aluminum as much as 100 times and from the other three compositions by 10 times. With both pure aluminum and silicon-containing aluminum, reducing grain size also decreases the delay before the peak flux and increases the rate of decline afterward. With magnesium-containing aluminum, reducing the grain size brings about an increase in peak hydrogen flux and results in a slightly faster decline in the rate of hydrogen output. With both silicon and magnesium present, the hydrogen flux over time resembles that of magnesium-containing aluminum when the grain size is not manipulated. When the grain size is reduced, the hydrogen output characteristics begin to resemble behavior observed in silicon-containing aluminum. That outcome was unexpected because when silicon and magnesium are both present, they react to form magnesium silicide, resulting in a new type of aluminum alloy with its own properties.

    The researchers stress the benefits of developing a better fundamental understanding of the underlying chemical reactions involved. In addition to guiding the design of practical systems, it might help them find a replacement for the expensive indium in their pre-treatment mixture. Other work has shown that gallium will naturally permeate through the grain boundaries of aluminum. “At this point, we know that the indium in our eutectic is important, but we don’t really understand what it does, so we don’t know how to replace it,” says Hart.

    But already Hart, Meroueh, and Eagar have demonstrated two practical ways of tuning the hydrogen reaction rate: by adding certain elements to the aluminum and by manipulating the size of the interior aluminum grains. In combination, those approaches can deliver significant results. “If you go from magnesium-containing aluminum with the largest grain size to silicon-containing aluminum with the smallest grain size, you get a hydrogen reaction rate that differs by two orders of magnitude,” says Meroueh. “That’s huge if you’re trying to design a real system that would use this reaction.”

    This research was supported through the MIT Energy Initiative by ExxonMobil-MIT Energy Fellowships awarded to Laureen Meroueh PhD ’20 from 2018 to 2020.

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Electrifying cars and light trucks to meet Paris climate goals

    On Aug. 5, the White House announced that it seeks to ensure that 50 percent of all new passenger vehicles sold in the United States by 2030 are powered by electricity. The purpose of this target is to enable the U.S. to remain competitive with China in the growing electric vehicle (EV) market and meet its international climate commitments. Setting ambitious EV sales targets and transitioning to zero-carbon power sources in the United States and other nations could lead to significant reductions in carbon dioxide and other greenhouse gas emissions in the transportation sector and move the world closer to achieving the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius relative to preindustrial levels.

    At this time, electrification of the transportation sector is occurring primarily in private light-duty vehicles (LDVs). In 2020, the global EV fleet exceeded 10 million, but that’s a tiny fraction of the cars and light trucks on the road. How much of the LDV fleet will need to go electric to keep the Paris climate goal in play? 

    To help answer that question, researchers at the MIT Joint Program on the Science and Policy of Global Change and MIT Energy Initiative have assessed the potential impacts of global efforts to reduce carbon dioxide emissions on the evolution of LDV fleets over the next three decades.

    Using an enhanced version of the multi-region, multi-sector MIT Economic Projection and Policy Analysis (EPPA) model that includes a representation of the household transportation sector, they projected changes for the 2020-50 period in LDV fleet composition, carbon dioxide emissions, and related impacts for 18 different regions. Projections were generated under four increasingly ambitious climate mitigation scenarios: a “Reference” scenario based on current market trends and fuel efficiency policies, a “Paris Forever” scenario in which current Paris Agreement commitments (Nationally Determined Contributions, or NDCs) are maintained but not strengthened after 2030, a “Paris to 2 C” scenario in which decarbonization actions are enhanced to be consistent with capping global warming at 2 C, and an “Accelerated Actions” scenario that caps global warming at 1.5 C through much more aggressive emissions targets than the current NDCs.

    Based on projections spanning the first three scenarios, the researchers found that the global EV fleet will likely grow to about 95-105 million EVs by 2030, and 585-823 million EVs by 2050. In the Accelerated Actions scenario, global EV stock reaches more than 200 million vehicles in 2030, and more than 1 billion in 2050, accounting for two-thirds of the global LDV fleet. The research team also determined that EV uptake will likely grow but vary across regions over the 30-year study time frame, with China, the United States, and Europe remaining the largest markets. Finally, the researchers found that while EVs play a role in reducing oil use, a more substantial reduction in oil consumption comes from economy-wide carbon pricing. The results appear in a study in the journal Economics of Energy & Environmental Policy.

    “Our study shows that EVs can contribute significantly to reducing global carbon emissions at a manageable cost,” says MIT Joint Program Deputy Director and MIT Energy Initiative Senior Research Scientist Sergey Paltsev, the lead author. “We hope that our findings will help decision-makers to design efficient pathways to reduce emissions.”  

    To boost the EV share of the global LDV fleet, the study’s co-authors recommend more ambitious policies to mitigate climate change and decarbonize the electric grid. They also envision an “integrated system approach” to transportation that emphasizes making internal combustion engine vehicles more efficient, a long-term shift to low- and net-zero carbon fuels, and systemic efficiency improvements through digitalization, smart pricing, and multi-modal integration. While the study focuses on EV deployment, the authors also stress the need for investment in all possible decarbonization options related to transportation, including enhancing public transportation, avoiding urban sprawl through strategic land-use planning, and reducing the use of private motorized transport by mode switching to walking, biking, and mass transit.

    This research is an extension of the authors’ contribution to the MIT Mobility of the Future study.

  • Using graphene foam to filter toxins from drinking water

    Some kinds of water pollution, such as algal blooms and plastics that foul rivers, lakes, and marine environments, lie in plain sight. But other contaminants are not so readily apparent, which makes their impact potentially more dangerous. Among these invisible substances is uranium. Leaching into water resources from mining operations, nuclear waste sites, and natural subterranean deposits, the element can now be found flowing out of taps worldwide.

    In the United States alone, “many areas are affected by uranium contamination, including the High Plains and Central Valley aquifers, which supply drinking water to 6 million people,” says Ahmed Sami Helal, a postdoc in the Department of Nuclear Science and Engineering. This contamination poses a near and present danger. “Even small concentrations are bad for human health,” says Ju Li, the Battelle Energy Alliance Professor of Nuclear Science and Engineering and professor of materials science and engineering.

    Now, a team led by Li has devised a highly efficient method for removing uranium from drinking water. Applying an electric charge to graphene oxide foam, the researchers can capture uranium in solution, which precipitates out as a condensed solid crystal. The foam may be reused up to seven times without losing its electrochemical properties. “Within hours, our process can purify a large quantity of drinking water below the EPA limit for uranium,” says Li.

    A paper describing this work was published this week in Advanced Materials. The two first co-authors are Helal and Chao Wang, a postdoc at MIT during the study, who is now with the School of Materials Science and Engineering at Tongji University, Shanghai. Researchers from Argonne National Laboratory, Taiwan’s National Chiao Tung University, and the University of Tokyo also participated in the research. The Defense Threat Reduction Agency (U.S. Department of Defense) funded later stages of this work.

    Targeting the contaminant

    The project, launched three years ago, began as an effort to find better approaches to environmental cleanup of heavy metals from mining sites. To date, remediation methods for such metals as chromium, cadmium, arsenic, lead, mercury, radium, and uranium have proven limited and expensive. “These techniques are highly sensitive to organics in water, and are poor at separating out the heavy metal contaminants,” explains Helal. “So they involve long operation times, high capital costs, and at the end of extraction, generate more toxic sludge.”

    To the team, uranium seemed a particularly attractive target. Field testing from the U.S. Geological Survey and the Environmental Protection Agency (EPA) has revealed unhealthy levels of uranium moving into reservoirs and aquifers from natural rock sources in the northeastern United States, from ponds and pits storing old nuclear weapons and fuel in places like Hanford, Washington, and from mining activities located in many western states. This kind of contamination is prevalent in many other nations as well. An alarming number of these sites show uranium concentrations close to or above the EPA’s recommended ceiling of 30 parts per billion (ppb) — a level linked to kidney damage, cancer risk, and neurobehavioral changes in humans.

    The critical challenge lay in finding a practical remediation process exclusively sensitive to uranium, capable of extracting it from solution without producing toxic residues. And while earlier research showed that electrically charged carbon fiber could filter uranium from water, the results were partial and imprecise.

    Wang managed to crack these problems — based on her investigation of the behavior of graphene foam used for lithium-sulfur batteries. “The physical performance of this foam was unique because of its ability to attract certain chemical species to its surface,” she says. “I thought the ligands in graphene foam would work well with uranium.”

    Simple, efficient, and clean

    The team set to work transforming graphene foam into the equivalent of a uranium magnet. They learned that by sending an electric charge through the foam, splitting water and releasing hydrogen, they could increase the local pH and induce a chemical change that pulled uranium ions out of solution. The researchers found that the uranium would graft itself onto the foam’s surface, where it formed a never-before-seen crystalline uranium hydroxide. On reversal of the electric charge, the mineral, which resembles fish scales, slipped easily off the foam.

    It took hundreds of tries to get the chemical composition and electrolysis just right. “We kept changing the functional chemical groups to get them to work correctly,” says Helal. “And the foam was initially quite fragile, tending to break into pieces, so we needed to make it stronger and more durable,” says Wang.

    This uranium filtration process is simple, efficient, and clean, according to Li: “Each time it’s used, our foam can capture four times its own weight of uranium, and we can achieve an extraction capacity of 4,000 mg per gram, which is a major improvement over other methods,” he says. “We’ve also made a major breakthrough in reusability, because the foam can go through seven cycles without losing its extraction efficiency.” The graphene foam works just as well in seawater, where it reduces uranium concentrations from 3 parts per million to 19.9 ppb, showing that other ions in the brine do not interfere with filtration.
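
    Those figures are internally consistent, as a quick back-of-the-envelope check shows; the short Python calculation below simply re-derives them from the numbers quoted in this article:

    ```python
    # Consistency check of the figures quoted above (illustrative only).

    capacity_mg_per_g = 4000              # reported extraction capacity
    print(capacity_mg_per_g / 1000)       # -> 4.0 g of uranium per g of foam,
                                          #    i.e., four times the foam's own weight

    # Seawater test: 3 ppm initial = 3,000 ppb; 19.9 ppb final.
    initial_ppb, final_ppb = 3000, 19.9
    removal = (initial_ppb - final_ppb) / initial_ppb
    print(f"{removal:.1%} of dissolved uranium removed")      # -> 99.3%

    epa_ceiling_ppb = 30
    print(final_ppb < epa_ceiling_ppb)    # -> True: below the EPA's 30 ppb ceiling
    ```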

    The team believes its low-cost, effective device could become a new kind of home water filter, fitting on faucets like those of commercial brands. “Some of these filters already have activated carbon, so maybe we could modify these, add low-voltage electricity to filter uranium,” says Li.

    “The uranium extraction this device achieves is very impressive when compared to existing methods,” says Ho Jin Ryu, associate professor of nuclear and quantum engineering at the Korea Advanced Institute of Science and Technology. Ryu, who was not involved in the research, believes that the demonstration of graphene foam reusability is a “significant advance,” and that “the technology of local pH control to enhance uranium deposition will be impactful because the scientific principle can be applied more generally to heavy metal extraction from polluted water.”

    The researchers have already begun investigating broader applications of their method. “There is a science to this, so we can modify our filters to be selective for other heavy metals such as lead, mercury, and cadmium,” says Li. He notes that radium is another significant danger for locales in the United States and elsewhere that lack resources for reliable drinking water infrastructure.

    “In the future, instead of a passive water filter, we could be using a smart filter powered by clean electricity that turns on electrolytic action, which could extract multiple toxic metals, tell you when to regenerate the filter, and give you quality assurance about the water you’re drinking.”

  • in

    Cleaning up industrial filtration

    If you wanted to get pasta out of a pot of water, would you boil off the water, or use a strainer? While home cooks would choose the strainer, many industries continue to use energy-intensive thermal methods of separating out liquids. In some cases, that’s because it’s difficult to make a filtration system for chemical separation, which requires pores small enough to separate molecules.

    In other cases, membranes exist to separate liquids, but they are made of fragile polymers, which can break down or gum up in industrial use.

    Via Separations, a startup that emerged from MIT in 2017, has set out to address these challenges with a membrane that is cost-effective and robust. Made of graphene oxide (a “cousin” of pencil lead), the membrane can reduce the amount of energy used in industrial separations by 90 percent, according to Shreya Dave PhD ’16, company co-founder and CEO.

    This is valuable because separation processes account for about 22 percent of all in-plant energy use in the United States, according to Oak Ridge National Laboratory. By making such processes significantly more efficient, Via Separations plans to both save energy and address the significant emissions produced by thermal processes. “Our goal is eliminating 500 megatons of carbon dioxide emissions by 2050,” Dave says.

    Video: What do our passions for pasta and decarbonizing the Earth have in common? MIT alumna Shreya Dave PhD ’16 explains how she and her team at Via Separations are building the equivalent of a pasta strainer to separate chemical compounds for industry.

    Via Separations began piloting its technology this year at a U.S. paper company and expects to deploy a full commercial system there in the spring of 2022. “Our vision is to help manufacturers slow carbon dioxide emissions next year,” Dave says.

    MITEI Seed Grant

    The story of Via Separations begins in 2012, when the MIT Energy Initiative (MITEI) awarded a Seed Fund grant to Professor Jeffrey Grossman, who is now the Morton and Claire Goulder and Family Professor in Environmental Systems and head of MIT’s Department of Materials Science and Engineering. Grossman was pursuing research into nanoporous membranes for water desalination. “We thought we could bring down the cost of desalination and improve access to clean water,” says Dave, who worked on the project as a graduate student in Grossman’s lab.

    There, she teamed up with Brent Keller PhD ’16, another Grossman graduate student and a 2016-17 ExxonMobil-MIT Energy Fellow, who was developing lab experiments to fabricate and test new materials. “We were early comrades in figuring out how to debug experiments or fix equipment,” says Keller, Via Separations’ co-founder and chief technology officer. “We were fast friends who spent a lot of time talking about science over burritos.”

    Dave went on to write her doctoral thesis on using graphene oxide for water desalination, but that turned out to be the wrong application of the technology from a business perspective, she says. “The cost of desalination doesn’t lie in the membrane materials,” she explains.

    So, after Dave and Keller graduated from MIT in 2016, they spent a lot of time talking to customers to learn more about the needs and opportunities for their new separation technology. This research led them to target the paper industry, because the environmental benefits of improving paper processing are enormous, Dave says. “The paper industry is particularly exciting because separation processes just in that industry account for more than 2 percent of U.S. energy consumption,” she says. “It’s a very concentrated, high-energy-use industry.”

    Most paper today is made by breaking down the chemical bonds in wood to create wood pulp, the primary ingredient of paper. This process generates a byproduct called black liquor, a toxic solution that was once simply dumped into waterways. To clean up this process, paper mills turned to boiling off the water from black liquor and recovering both water and chemicals for reuse in the pulping process. (Today, the most valuable way to use the liquor is as biomass feedstock to generate energy.) Via Separations plans to accomplish this same separation work by filtering black liquor through its graphene oxide membrane.

    “The advantage of graphene oxide is that it’s very robust,” Dave says. “It’s got carbon double bonds that hold together in a lot of environments, including at different pH levels and temperatures that are typically unfriendly to materials.”

    Such properties should also make the company’s membranes attractive to other industries that use membrane separation, Keller says, because today’s polymer membranes have drawbacks. “For most of the things we make — from plastics to paper and gasoline — those polymers will swell or react or degrade,” he says.

    Graphene oxide is significantly more durable, and Via Separations can customize the pores in the material to suit each industry’s application. “That’s our secret sauce,” Dave says, “modulating pore size while retaining robustness to operate in challenging environments.”

    “We’re building a catalog of products to serve different applications,” Keller says, noting that the next target market could be the food and beverage industry. “In that industry, instead of separating different corrosive paper chemicals from water, we’re trying to separate particular sugars and food ingredients from other things.”

    Future target customers include pharmaceutical companies, oil refineries, semiconductor manufacturers, and even carbon capture businesses.

    Scaling up

    Dave, Keller, and Grossman launched Via Separations in 2017, with a lot of help from MIT. After the seed grant, the founders received a year of funding and support from the J-WAFS Solutions program in 2015 to explore markets and develop their business plans. The company’s first capital investment came from The Engine, a venture firm founded by MIT to support “tough tech” companies (tech businesses with transformative potential but long and challenging paths to success). They also received advice and support from MIT’s Deshpande Center for Technological Innovation, Venture Mentoring Service, and Technology Licensing Office. In addition, Grossman continues to serve the company as chief scientist.

    “We were incredibly fortunate to be starting a company in the MIT entrepreneurial ecosystem,” Keller says, noting that The Engine support alone “probably shaved years off our progress.”

    Already, Via Separations has grown to employ 17 people, while significantly scaling up its product. “Our customers are producing thousands of gallons per minute,” Keller explains. “To process that much liquid, we need huge areas of membrane.”

    Via Separations’ manufacturing process, which is now capable of making more than 10,000 square feet of membrane in one production run, is a key competitive advantage, Dave says. The company rolls 300-400 square feet of membrane into a module, and modules can be combined as needed to increase filtration capacity.
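
    For a sense of scale, those production figures imply a few dozen modules per run; the sketch below simply re-derives that from the numbers quoted above:

    ```python
    # Rough module count per production run, from the figures quoted above.
    run_area_sqft = 10_000            # membrane produced in one run (stated lower bound)
    for module_area_sqft in (300, 400):
        modules = run_area_sqft // module_area_sqft
        print(f"{module_area_sqft} sq ft per module -> ~{modules} modules per run")
    # -> 300 sq ft per module -> ~33 modules per run
    # -> 400 sq ft per module -> ~25 modules per run
    ```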

    The goal, Dave says, is to contribute to a more sustainable world by making an environmentally beneficial product that makes good business sense. “What we do is make manufacturing things more energy-efficient,” she says. “We allow a paper mill or chemical facility to make more product using less energy and with lower costs. So, there is a bottom-line benefit that’s significant on an industrial scale.”

    Keller says he shares Dave’s goal of building a more sustainable future. “Climate change and energy are central challenges of our time,” he says. “Working on something that has a chance to make a meaningful impact on something so important to everyone is really fulfilling.”

    This article appears in the Spring 2021 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • in

    Amy Watterson: Model engineer

    “I love that we are doing something that no one else is doing.”

    Amy Watterson is excited when she talks about SPARC, the pilot fusion plant being developed by MIT spinoff Commonwealth Fusion Systems (CFS). Since being hired as a mechanical engineer at the Plasma Science and Fusion Center (PSFC) two years ago, Watterson has found her skills stretching to accommodate the multiple needs of the project.

    Fusion, which fuels the sun and stars, has long been sought as a carbon-free energy source for the world. For decades researchers have pursued the “tokamak,” a doughnut-shaped vacuum chamber where hot plasma can be contained by magnetic fields and heated to the point where fusion occurs. Sustaining the fusion reactions long enough to draw energy from them has been a challenge.

    Watterson is intimately aware of this difficulty. For much of her life she has heard the quip, “Fusion is 50 years away and always will be.” The daughter of PSFC research scientist Catherine Fiore, who headed the PSFC’s Office of Environment, Safety and Health, and Reich Watterson, an optical engineer working at the center, she had watched her parents devote years to making fusion a reality. Before entering Rensselaer Polytechnic Institute, she determined to forgo any attempt to follow her parents into a field that might not produce results during her career.

    Working on SPARC has changed her mindset. Taking advantage of a novel high-temperature superconducting tape, SPARC’s magnets will be compact while generating magnetic fields stronger than would be possible in other mid-sized tokamaks, allowing the device to produce more fusion power. It suggests that a high-field device capable of net fusion gain is not 50 years away. SPARC is scheduled to begin operation in 2025.

    An education in modeling

    Watterson’s current excitement and focus are due to an approaching milestone for SPARC: a test of the Toroidal Field Model Coil (TFMC), a scaled prototype for the HTS magnets that will surround SPARC’s toroidal vacuum chamber. Its design and manufacture have been shaped by computer models and simulations. As part of a large research team, Watterson has received an education in modeling over the past two years.

    Computer models move scientific experiments forward by allowing researchers to predict what will happen to an experiment, or its materials, if a parameter is changed. By modeling a component of the TFMC, for example, researchers can test how it is affected by varying amounts of current, different temperatures, or different materials. With this information they can make choices that improve the experiment’s chances of success.

    In preparation for the magnet testing, Watterson has modeled aspects of the cryogenic system that will circulate helium gas around the TFMC to keep it cold enough to remain superconducting. Taking into account the amount of cooling entering the system, the flow rate of the helium, the resistance created by valves and transfer lines, and other parameters, she can model how much helium flow is necessary to guarantee that the magnet stays cold enough. Adjusting a parameter can make the difference between a magnet remaining superconducting and becoming overheated or even damaged.
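
    At its core, such a flow calculation is a steady-state heat balance: the helium mass flow must be large enough to absorb the incoming heat without exceeding the allowable temperature rise. Below is a minimal Python sketch of that balance; the heat load and temperature figures are illustrative assumptions, not actual TFMC parameters:

    ```python
    # Steady-state heat balance for a cryogenic helium loop:
    # required mass flow  m_dot = Q / (cp * dT).
    # All numbers are illustrative assumptions, not TFMC values.

    CP_HELIUM = 5193.0    # J/(kg*K), specific heat of helium gas (roughly constant)

    def required_mass_flow(heat_load_w: float, allowed_temp_rise_k: float) -> float:
        """Mass flow (kg/s) needed to absorb heat_load_w within allowed_temp_rise_k."""
        return heat_load_w / (CP_HELIUM * allowed_temp_rise_k)

    heat_load = 600.0     # W of heat leaking into the magnet (assumed)
    dT = 2.0              # K of allowable temperature rise across the coil (assumed)
    print(f"required helium flow: {required_mass_flow(heat_load, dT) * 1000:.1f} g/s")  # ~57.8 g/s
    ```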

    Watterson and her teammates have also modeled pressures and stress on the inside of the TFMC. Pumping helium through the coil to cool it down will add 20 atmospheres of pressure, which could create a degree of flex in elements of the magnet that are welded down. Modeling can help determine how much pressure a weld can sustain.

    “How thick does a weld need to be, and where should you put the weld so that it doesn’t break — that’s something you don’t want to leave until you’re finally assembling it,” says Watterson.
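
    A standard first-pass estimate for that kind of question is the thin-wall pressure-vessel formula, in which hoop stress scales as pressure times radius over wall thickness. The Python sketch below applies it with invented geometry and an invented allowable stress, purely for illustration; the actual TFMC work relies on detailed structural models:

    ```python
    # Thin-wall hoop-stress estimate for a pressurized cooling channel:
    # sigma = P * r / t. Geometry and allowable stress are invented placeholders.

    P = 20 * 101_325          # 20 atmospheres, in Pa (~2.03 MPa)
    r = 0.005                 # channel inner radius, m (assumed)
    t = 0.001                 # wall / weld thickness, m (assumed)

    hoop_stress = P * r / t   # Pa
    allowable = 100e6         # Pa, assumed allowable stress for the weld

    print(f"hoop stress: {hoop_stress / 1e6:.1f} MPa")        # ~10.1 MPa
    print(f"safety factor: {allowable / hoop_stress:.1f}")    # ~9.9
    ```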

    Modeling the behavior of helium is particularly challenging because its properties change significantly as the pressure and temperature change.

    “A few degrees or a little pressure will affect the fluid’s viscosity, density, thermal conductivity, and heat capacity,” says Watterson. “The flow has different pressures and temperatures at different places in the cryogenic loop. You end up with a set of equations that are very dependent on each other, which makes it a challenge to solve.”
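
    A common way to handle that interdependence is to iterate: guess an outlet temperature, update the fluid properties, re-solve the energy balance, and repeat until the answer stops changing. The toy fixed-point loop below shows the structure of such a calculation; the property correlation is an invented placeholder, not real helium data:

    ```python
    # Toy fixed-point iteration for coupled fluid equations: the outlet
    # temperature depends on cp, and cp depends on temperature.
    # The correlation below is an invented placeholder, not real helium data.

    def cp_of(T):                             # fake property model
        return 5200.0 + 2.0 * (T - 20.0)      # J/(kg*K), varies mildly with T

    T_in, Q, m_dot = 20.0, 600.0, 0.06        # K, W, kg/s (illustrative)

    T_out = T_in + 1.0                        # initial guess
    for _ in range(50):
        cp = cp_of(0.5 * (T_in + T_out))      # evaluate cp at the mean temperature
        T_new = T_in + Q / (m_dot * cp)       # energy balance with updated cp
        if abs(T_new - T_out) < 1e-6:         # stop once the update stalls
            break
        T_out = T_new

    print(f"converged outlet temperature: {T_out:.3f} K")
    ```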

    Role model

    Watterson notes that her modeling depends on the contributions of colleagues at the PSFC, and praises the collaborative spirit among researchers and engineers, a community that now feels like family. Her teammates have been her mentors. “I’ve learned so much more on the job in two years than I did in four years at school,” she says.

    She realizes that having her mother as a role model in her own family has always made it easier for her to imagine becoming a scientist or engineer. She traces her early passion for engineering to a middle school Lego robotics tournament, and her eyes widen as she talks about the need for more female engineers and the importance of encouraging girls to believe they are equal to the challenge.

    “I want to be a role model and tell them ‘I’m a successful engineer, you can be too.’ Something I run into a lot is that little girls will say, ‘I can’t be an engineer, I’m not cut out for that.’ And I say, ‘Well that’s not true. Let me show you. If you can make this Lego robot, then you can be an engineer.’ And it turns out they usually can.”

    Then, as if making an adjustment to one of her computer models, she continues.

    “Actually, they always can.”