More stories

  • Turning carbon dioxide into valuable products

    Carbon dioxide (CO2) is a major contributor to climate change and a significant product of many human activities, notably industrial manufacturing. A major goal in the energy field has been to chemically convert emitted CO2 into valuable chemicals or fuels. But while CO2 is available in abundance, it has not yet been widely used to generate value-added products. Why not?

    The reason is that CO2 molecules are highly stable and therefore not prone to being chemically converted to a different form. Researchers have sought materials and device designs that could help spur that conversion, but nothing has worked well enough to yield an efficient, cost-effective system.

    Two years ago, Ariel Furst, the Raymond (1921) and Helen St. Laurent Career Development Professor of Chemical Engineering at MIT, decided to try using something different — a material that gets more attention in discussions of biology than of chemical engineering. Already, results from work in her lab suggest that her unusual approach is paying off.

    The stumbling block

    The challenge begins with the first step in the CO2 conversion process. Before being transformed into a useful product, CO2 must be chemically converted into carbon monoxide (CO). That conversion can be encouraged using electrochemistry, a process in which input voltage provides the extra energy needed to make the stable CO2 molecules react. The problem is that achieving the CO2-to-CO conversion requires large energy inputs — and even then, CO makes up only a small fraction of the products that are formed.
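
    For reference, the electrochemical step described here is typically written as a two-electron reduction, with hydrogen evolution as the main competing reaction. These are standard textbook half-reactions, not values taken from Furst’s work:

    ```latex
    % Two-electron reduction of CO2 to CO, plus the competing hydrogen evolution
    % reaction; standard potentials are quoted vs. the standard hydrogen electrode.
    \begin{align*}
    \mathrm{CO_2 + 2\,H^+ + 2\,e^-} &\rightarrow \mathrm{CO + H_2O}, & E^\circ &\approx -0.11\ \mathrm{V} \\
    \mathrm{2\,H^+ + 2\,e^-} &\rightarrow \mathrm{H_2}, & E^\circ &= 0\ \mathrm{V}
    \end{align*}
    ```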

    To explore opportunities for improving this process, Furst and her research group focused on the electrocatalyst, a material that enhances the rate of a chemical reaction without being consumed in the process. The catalyst is key to successful operation. Inside an electrochemical device, the catalyst is often suspended in an aqueous (water-based) solution. When an electric potential (essentially a voltage) is applied to a submerged electrode, dissolved CO2 will — helped by the catalyst — be converted to CO.

    But there’s one stumbling block: The catalyst and the CO2 must meet on the surface of the electrode for the reaction to occur. In some studies, the catalyst is dispersed in the solution, but that approach requires more catalyst and isn’t very efficient, according to Furst. “You have to both wait for the diffusion of CO2 to the catalyst and for the catalyst to reach the electrode before the reaction can occur,” she explains. As a result, researchers worldwide have been exploring different methods of “immobilizing” the catalyst on the electrode.

    Connecting the catalyst and the electrode

    Before Furst could delve into that challenge, she needed to decide which of the two types of CO2 conversion catalysts to work with: the traditional solid-state catalyst or a catalyst made up of small molecules. In examining the literature, she concluded that small-molecule catalysts held the most promise. While their conversion efficiency tends to be lower than that of solid-state versions, molecular catalysts offer one important advantage: They can be tuned to emphasize reactions and products of interest.

    Two approaches are commonly used to immobilize small-molecule catalysts on an electrode. One involves linking the catalyst to the electrode by strong covalent bonds — a type of bond in which atoms share electrons; the result is a strong, essentially permanent connection. The other sets up a non-covalent attachment between the catalyst and the electrode; unlike a covalent bond, this connection can easily be broken.

    Neither approach is ideal. In the former case, the catalyst and electrode are firmly attached, ensuring efficient reactions; but when the activity of the catalyst degrades over time (which it will), the spent catalyst can’t be removed, so the electrode can no longer be used. In the latter case, a degraded catalyst can be removed; but the exact placement of the small molecules of the catalyst on the electrode can’t be controlled, leading to an inconsistent, often decreasing, catalytic efficiency. Simply increasing the amount of catalyst on the electrode surface without controlling where the molecules end up doesn’t solve the problem.

    What was needed was a way to position the small-molecule catalyst firmly and accurately on the electrode and then release it when it degrades. For that task, Furst turned to what she and her team regard as a kind of “programmable molecular Velcro”: deoxyribonucleic acid, or DNA.

    Adding DNA to the mix

    Mention DNA to most people, and they think of biological functions in living things. But the members of Furst’s lab view DNA as more than just genetic code. “DNA has these really cool physical properties as a biomaterial that people don’t often think about,” she says. “DNA can be used as a molecular Velcro that can stick things together with very high precision.”

    Furst knew that DNA sequences had previously been used to immobilize molecules on surfaces for other purposes. So she devised a plan to use DNA to direct the immobilization of catalysts for CO2 conversion.

    Her approach depends on a well-understood behavior of DNA called hybridization. The familiar DNA structure is a double helix that forms when two complementary strands connect. When the sequence of bases (the four building blocks of DNA) in the individual strands match up, hydrogen bonds form between complementary bases, firmly linking the strands together.

    Using that behavior for catalyst immobilization involves two steps. First, the researchers attach a single strand of DNA to the electrode. Then they attach a complementary strand to the catalyst that is floating in the aqueous solution. When the latter strand gets near the former, the two strands hybridize; they become linked by multiple hydrogen bonds between properly paired bases. As a result, the catalyst is firmly affixed to the electrode by means of two interlocked, self-assembled DNA strands, one connected to the electrode and the other to the catalyst.

    Better still, the two strands can be detached from one another. “The connection is stable, but if we heat it up, we can remove the secondary strand that has the catalyst on it,” says Furst. “So we can de-hybridize it. That allows us to recycle our electrode surfaces — without having to disassemble the device or do any harsh chemical steps.”

    Experimental investigation

    To explore that idea, Furst and her team — postdocs Gang Fan and Thomas Gill, former graduate student Nathan Corbin PhD ’21, and former postdoc Amruta Karbelkar — performed a series of experiments using three small-molecule catalysts based on porphyrins, a group of compounds that are biologically important for processes ranging from enzyme activity to oxygen transport. Two of the catalysts involve a synthetic porphyrin plus a metal center of either cobalt or iron. The third catalyst is hemin, a natural porphyrin compound used to treat porphyria, a set of disorders that can affect the nervous system. “So even the small-molecule catalysts we chose are kind of inspired by nature,” comments Furst.

    In their experiments, the researchers first needed to modify single strands of DNA and deposit them on one of the electrodes submerged in the solution inside their electrochemical cell. Though this sounds straightforward, it did require some new chemistry. Led by Karbelkar and third-year undergraduate researcher Rachel Ahlmark, the team developed a fast, easy way to attach DNA to electrodes. For this work, the researchers’ focus was on attaching DNA, but the “tethering” chemistry they developed can also be used to attach enzymes (protein catalysts), and Furst believes it will be highly useful as a general strategy for modifying carbon electrodes.

    Once the single strands of DNA were deposited on the electrode, the researchers synthesized complementary strands and attached one of the three catalysts to them. When the DNA strands with the catalyst were added to the solution in the electrochemical cell, they readily hybridized with the DNA strands on the electrode. After half an hour, the researchers applied a voltage to the electrode to chemically convert CO2 dissolved in the solution and used a gas chromatograph to analyze the makeup of the gases produced by the conversion.

    The team found that when the DNA-linked catalysts were freely dispersed in the solution, they were highly soluble — even when they included small-molecule catalysts that don’t dissolve in water on their own. Indeed, while porphyrin-based catalysts in solution often stick together, once the DNA strands were attached, that counterproductive behavior was no longer evident.

    The DNA-linked catalysts in solution were also more stable than their unmodified counterparts. They didn’t degrade at voltages that caused the unmodified catalysts to degrade. “So just attaching that single strand of DNA to the catalyst in solution makes those catalysts more stable,” says Furst. “We don’t even have to put them on the electrode surface to see improved stability.” When converting CO2 in this way, a stable catalyst will give a steady current over time. Experimental results showed that adding the DNA prevented the catalyst from degrading at voltages of interest for practical devices. Moreover, with all three catalysts in solution, the DNA modification significantly increased the production of CO per minute.

    Allowing the DNA-linked catalyst to hybridize with the DNA connected to the electrode brought further improvements, even compared to the same DNA-linked catalyst in solution. For example, as a result of the DNA-directed assembly, the catalyst ended up firmly attached to the electrode, and the catalyst stability was further enhanced. Despite being highly soluble in aqueous solutions, the DNA-linked catalyst molecules remained hybridized at the surface of the electrode, even under harsh experimental conditions.

    Immobilizing the DNA-linked catalyst on the electrode also significantly increased the rate of CO production. In a series of experiments, the researchers monitored the CO production rate with each of their catalysts in solution without attached DNA strands — the conventional setup — and then with them immobilized by DNA on the electrode. With all three catalysts, the amount of CO generated per minute was far higher when the DNA-linked catalyst was immobilized on the electrode.

    In addition, immobilizing the DNA-linked catalyst on the electrode greatly increased the “selectivity” in terms of the products. One persistent challenge in using CO2 to generate CO in aqueous solutions is that there is an inevitable competition between the formation of CO and the formation of hydrogen. That tendency was eased by adding DNA to the catalyst in solution — and even more so when the catalyst was immobilized on the electrode using DNA. For both the cobalt-porphyrin catalyst and the hemin-based catalyst, the formation of CO relative to hydrogen was significantly higher with the DNA-linked catalyst on the electrode than in solution. With the iron-porphyrin catalyst, the ratio was about the same in both cases. “With the iron, it doesn’t matter whether it’s in solution or on the electrode,” Furst explains. “Both of them have selectivity for CO, so that’s good, too.”
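
    To make “selectivity” concrete, the sketch below shows how CO-versus-hydrogen selectivity and the CO production rate are typically computed from the total charge passed and the gas chromatograph readings. The numbers and variable names are hypothetical, not taken from the study; the key fact is that both products consume two electrons per molecule.

    ```python
    # Illustrative only: quantifying selectivity from charge and gas chromatograph
    # data. All values below are hypothetical, not measurements from the MIT study.

    FARADAY = 96485.0     # coulombs per mole of electrons
    N_ELECTRONS = 2       # CO2 -> CO and 2H+ -> H2 are both two-electron reductions

    def faradaic_efficiency(moles_product: float, charge_coulombs: float) -> float:
        """Fraction of the passed charge that went into making this product."""
        return N_ELECTRONS * FARADAY * moles_product / charge_coulombs

    # Hypothetical 30-minute electrolysis run
    charge = 12.0         # total coulombs passed
    moles_co = 5.2e-5     # mol CO detected by the gas chromatograph
    moles_h2 = 0.6e-5     # mol H2 detected

    fe_co = faradaic_efficiency(moles_co, charge)
    fe_h2 = faradaic_efficiency(moles_h2, charge)
    co_rate = moles_co / 30.0   # mol CO per minute

    print(f"FE(CO) = {fe_co:.1%}, FE(H2) = {fe_h2:.1%}, CO rate = {co_rate:.2e} mol/min")
    ```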

    Progress and plans

    Furst and her team have now demonstrated that their DNA-based approach combines the advantages of the traditional solid-state catalysts and the newer small-molecule ones. In their experiments, they achieved the highly efficient chemical conversion of CO2 to CO and also were able to control the mix of products formed. And they believe that their technique should prove scalable: DNA is inexpensive and widely available, and the amount of catalyst required is several orders of magnitude lower when it’s immobilized using DNA.

    Based on her work thus far, Furst hypothesizes that the structure and spacing of the small molecules on the electrode may directly impact both catalytic efficiency and product selectivity. Using DNA to control the precise positioning of her small-molecule catalysts, she plans to evaluate those impacts and then extrapolate design parameters that can be applied to other classes of energy-conversion catalysts. Ultimately, she hopes to develop a predictive algorithm that researchers can use as they design electrocatalytic systems for a wide variety of applications.

    This research was supported by a grant from the MIT Energy Initiative Seed Fund.

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Designing zeolites, porous materials made to trap molecules

    Zeolites are a class of minerals used in everything from industrial catalysts and chemical filters to laundry detergents and cat litter. They are mostly composed of silicon and aluminum — two abundant, inexpensive elements — plus oxygen; they have a crystalline structure; and most significantly, they are porous. Among the regularly repeating atomic patterns in them are tiny interconnected openings, or pores, that can trap molecules that just fit inside them, allow smaller ones to pass through, or block larger ones from entering. A zeolite can remove unwanted molecules from gases and liquids, or trap them temporarily and then release them, or hold them while they undergo rapid chemical reactions.

    Some zeolites occur naturally, but they take unpredictable forms and have variable-sized pores. “People synthesize artificial versions to ensure absolute purity and consistency,” says Rafael Gómez-Bombarelli, the Jeffrey Cheah Career Development Chair in Engineering in the Department of Materials Science and Engineering (DMSE). And they work hard to influence the size of the internal pores in hopes of matching the molecule or other particle they’re looking to capture.

    The basic recipe for making zeolites sounds simple. Mix together the raw ingredients — basically, silicon dioxide and aluminum oxide — and put them in a reactor for a few days at a high temperature and pressure. Depending on the ratio between the ingredients and the temperature, pressure, and timing, as the initial gel slowly solidifies into crystalline form, different zeolites emerge.

    But there’s one special ingredient to add “to help the system go where you want it to go,” says Gómez-Bombarelli. “It’s a molecule that serves as a template so that the zeolite you want will crystallize around it and create pores of the desired size and shape.”

    The so-called templating molecule binds to the material before it solidifies. As crystallization progresses, the molecule directs the structure, or “framework,” that forms around it. After crystallization, the temperature is raised and the templating molecule burns off, leaving behind a solid aluminosilicate material filled with open pores that are — given the correct templating molecule and synthesis conditions — just the right size and shape to recognize the targeted molecule.

    The zeolite conundrum

    Theoretical studies suggest that there should be hundreds of thousands of possible zeolites. But despite some 60 years of intensive research, only about 250 zeolites have been made. This is sometimes called the “zeolite conundrum.” Why haven’t more been made — especially now, when they could help ongoing efforts to decarbonize energy and the chemical industry?

    One challenge is figuring out the best recipe for making them: Factors such as the best ratio between the silicon and aluminum, what cooking temperature to use, and whether to stir the ingredients all influence the outcome. But the real key, the researchers say, lies in choosing a templating molecule that’s best for producing the intended zeolite framework. Making that match is difficult: There are hundreds of known templating molecules and potentially a million zeolites, and researchers are continually designing new molecules because millions more could be made and might work better.

    For decades, the exploration of how to synthesize a particular zeolite has been done largely by trial and error — a time-consuming, expensive, inefficient way to go about it. There has also been considerable effort to use “atomistic” (atom-by-atom) simulation to figure out what known or novel templating molecule to use to produce a given zeolite. But the experimental and modeling results haven’t generated reliable guidance. In many cases, researchers have carefully selected or designed a molecule to make a particular zeolite, but when they tried their molecule in the lab, the zeolite that formed wasn’t what they expected or desired. So they needed to start over.

    Those experiences illustrate what Gómez-Bombarelli and his colleagues believe is the problem that’s been plaguing zeolite design for decades. All the efforts — both experimental and theoretical — have focused on finding the templating molecule that’s best for forming a specific zeolite. But what if that templating molecule is also really good — or even better — at forming some other zeolite?

    To determine the “best” molecule for making a certain zeolite framework, and the “best” zeolite framework to act as host to a particular molecule, the researchers decided to look at both sides of the pairing. Daniel Schwalbe-Koda PhD ’22, a former member of Gómez-Bombarelli’s group and now a postdoc at Lawrence Livermore National Laboratory, describes the process as a sort of dance with molecules and zeolites in a room looking for partners. “Each molecule wants to find a partner zeolite, and each zeolite wants to find a partner molecule,” he says. “But it’s not enough to find a good dance partner from the perspective of only one dancer. The potential partner could prefer to dance with someone else, after all. So it needs to be a particularly good pairing.” The upshot: “You need to look from the perspective of each of them.”

    To find the best match from both perspectives, the researchers needed to try every molecule with every zeolite and quantify how well the pairings worked.

    A broader metric for evaluating pairs

    Before performing that analysis, the researchers defined a new “evaluating metric” that they could use to rank each templating molecule-zeolite pair. The standard metric for measuring the affinity between a molecule and a zeolite is “binding energy,” that is, how strongly the molecule clings to the zeolite or, conversely, how much energy is required to separate the two. While recognizing the value of that metric, the MIT-led team wanted to take more parameters into account.

    Their new evaluating metric therefore includes not only binding energy but also the size, shape, and volume of the molecule and the opening in the zeolite framework. And their approach calls for turning the molecule to different orientations to find the best possible fit.

    Affinity scores for all molecule-zeolite pairs based on that evaluating metric would enable zeolite researchers to answer two key questions: What templating molecule will form the zeolite that I want? And if I use that templating molecule, what other zeolites might it form instead? Using the molecule-zeolite affinity scores, researchers could first identify molecules that look good for making a desired zeolite. They could then rule out the ones that also look good for forming other zeolites, leaving a set of molecules deemed to be “highly selective” for making the desired zeolite.  
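
    That screening logic fits in a short sketch. The affinity scores below are placeholders standing in for the team’s evaluating metric (which folds in binding energy plus molecular size, shape, volume, and pore fit over many orientations); the point is only to show how a molecule is kept when it scores well for the target zeolite and clearly worse for every competitor.

    ```python
    # A minimal sketch of two-sided selectivity screening. Scores and molecule
    # names are hypothetical; they stand in for the study's evaluating metric.
    from typing import Dict, List

    Affinity = Dict[str, Dict[str, float]]   # affinity[molecule][zeolite] = score

    def selective_templates(affinity: Affinity, target: str, margin: float = 1.0) -> List[str]:
        """Return molecules that score well for the target zeolite and do not
        score comparably well for any competing zeolite."""
        keep = []
        for molecule, scores in affinity.items():
            target_score = scores.get(target, float("-inf"))
            best_other = max((s for z, s in scores.items() if z != target),
                             default=float("-inf"))
            if target_score - best_other >= margin:   # the target clearly wins
                keep.append(molecule)
        return sorted(keep, key=lambda m: affinity[m][target], reverse=True)

    # Hypothetical scores for three candidate templating molecules
    scores: Affinity = {
        "mol_A": {"CHA": 8.2, "AEI": 7.9},   # good for CHA, but nearly as good for AEI
        "mol_B": {"CHA": 7.5, "AEI": 4.1},   # clearly selective for CHA
        "mol_C": {"CHA": 3.0, "AEI": 8.8},   # prefers AEI
    }
    print(selective_templates(scores, target="CHA"))   # -> ['mol_B']
    ```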

    Validating the approach: A rich literature

    But does their new metric work better than the standard one? To find out, the team needed to perform atomistic simulations using their new evaluating metric and then benchmark their results against experimental evidence reported in the literature. There are many thousands of journal articles reporting on experiments involving zeolites — in many cases, detailing not only the molecule-zeolite pairs and outcomes but also synthesis conditions and other details. Ferreting out articles with the information the researchers needed was a job for machine learning — in particular, for natural language processing.

    For that task, Gómez-Bombarelli and Schwalbe-Koda turned to their DMSE colleague Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. Using a literature-mining technique that she and a group of collaborators had developed, she and her DMSE team processed more than 2 million materials science papers, found some 90,000 relating to zeolites, and extracted 1,338 of them for further analysis. The yield was 549 templating molecules tested, 209 zeolite frameworks produced, and 5,663 synthesis routes followed.

    Based on those findings, the researchers used their new evaluating metric and a novel atomistic simulation technique to examine more than half-a-million templating molecule-zeolite pairs. Their results reproduced experimental outcomes reported in more than a thousand journal articles. Indeed, the new metric outperformed the traditional binding energy metric, and their simulations were orders of magnitude faster than traditional approaches.

    Ready for experimental investigations

    Now the researchers were ready to put their approach to the test: They would use it to design new templating molecules and try them out in experiments performed by a team led by Yuriy Román-Leshkov, the Robert T. Haslam (1911) Professor of Chemical Engineering, and a team from the Instituto de Tecnologia Química in Valencia, Spain, led by Manuel Moliner and Avelino Corma.

    One set of experiments focused on a zeolite called chabazite, which is used in catalytic converters for vehicles. Using their techniques, the researchers designed a new templating molecule for synthesizing chabazite, and the experimental results confirmed their approach. Their analyses had shown that the new templating molecule would be good for forming chabazite and not for forming anything else. “Its binding strength isn’t as high as other molecules for chabazite, so people hadn’t used it,” says Gómez-Bombarelli. “But it’s pretty good, and it’s not good for anything else, so it’s selective — and it’s way cheaper than the usual ones.”

    In addition, in their new molecule, the electrical charge is distributed differently than in the traditional ones, which led to new possibilities. The researchers found that by adjusting both the shape and charge of the molecule, they could control where the negative charge occurs on the pore that’s created in the final zeolite. “The charge placement that results can make the chabazite a much better catalyst than it was before,” says Gómez-Bombarelli. “So our same rules for molecule design also determine where the negative charge is going to end up, which can lead to whole different classes of catalysts.”

    Schwalbe-Koda describes another experiment that demonstrates the importance of molecular shape as well as the types of new materials made possible using the team’s approach. In one striking example, the team designed a templating molecule with a height and width that’s halfway between those of two molecules that are now commonly used—one for making chabazite and the other for making a zeolite called AEI. (Every new zeolite structure is examined by the International Zeolite Association and — once approved — receives a three-letter designation.)

    Experiments using that in-between templating molecule resulted in the formation of not one zeolite or the other, but a combination of the two in a single solid. “The result blends two different structures together in a way that the final result is better than the sum of its parts,” says Schwalbe-Koda. “The catalyst is like the one used in catalytic converters in today’s trucks — only better.” It’s more efficient in converting nitrogen oxides to harmless nitrogen gases and water, and — because of the two different pore sizes and the aluminosilicate composition — it works well on exhaust that’s fairly hot, as during normal operation, and also on exhaust that’s fairly cool, as during startup.

    Putting the work into practice

    As with all materials, the commercial viability of a zeolite will depend in part on the cost of making it. The researchers’ technique can identify promising templating molecules, but some of them may be difficult to synthesize in the lab. As a result, the overall cost of that molecule-zeolite combination may be too high to be competitive.

    Gómez-Bombarelli and his team therefore include in their assessment process a calculation of cost for synthesizing each templating molecule they identified — generally the most expensive part of making a given zeolite. They use a publicly available model devised in 2018 by Connor Coley PhD ’19, now the Henri Slezynger (1957) Career Development Assistant Professor of Chemical Engineering at MIT. The model takes into account all the starting materials and the step-by-step chemical reactions needed to produce the targeted templating molecule.

    However, commercialization decisions aren’t based solely on cost. Sometimes there’s a trade-off between cost and performance. “For instance, given our chabazite findings, would customers or the community trade a little bit of activity for a 100-fold decrease in the cost of the templating molecule?” says Gómez-Bombarelli. “The answer is likely yes. So we’ve made a tool that can help them navigate that trade-off.” And there are other factors to consider. For example, is this templating molecule truly novel, or have others already studied it — or perhaps even hold a patent on it?

    “While an algorithm can guide development of templating molecules and quantify specific molecule-zeolite matches, other types of assessments are best left to expert judgment,” notes Schwalbe-Koda. “We need a partnership between computational analysis and human intuition and experience.”

    To that end, the MIT researchers and their colleagues decided to share their techniques and findings with other zeolite researchers. Led by Schwalbe-Koda, they created an online database that they made publicly accessible and easy to use — an unusual step, given the competitive industries that rely on zeolites. The interactive website — zeodb.mit.edu — contains the researchers’ final metrics for templating molecule-zeolite pairs resulting from hundreds of thousands of simulations; all the identified journal articles, along with which molecules and zeolites were examined and what synthesis conditions were used; and many more details. Users are free to search and organize the data in any way that suits them.

    Gómez-Bombarelli, Schwalbe-Koda, and their colleagues hope that their techniques and the interactive website will help other researchers explore and discover promising new templating molecules and zeolites, some of which could have profound impacts on efforts to decarbonize energy and tackle climate change.

    This research involved a team of collaborators at MIT, the Instituto de Tecnologia Química (UPV-CSIC), and Stockholm University. The work was supported in part by the MIT Energy Initiative Seed Fund Program and by seed funds from the MIT International Science and Technology Initiative. Daniel Schwalbe-Koda was supported by an ExxonMobil-MIT Energy Fellowship in 2020–21.

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • A new concept for low-cost batteries

    As the world builds out ever larger installations of wind and solar power systems, the need is growing fast for economical, large-scale backup systems to provide power when the sun is down and the air is calm. Today’s lithium-ion batteries are still too expensive for most such applications, and other options such as pumped hydro require specific topography that’s not always available.

    Now, researchers at MIT and elsewhere have developed a new kind of battery, made entirely from abundant and inexpensive materials, that could help to fill that gap.

    The new battery architecture, which uses aluminum and sulfur as its two electrode materials, with a molten salt electrolyte in between, is described today in the journal Nature, in a paper by MIT Professor Donald Sadoway, along with 15 others at MIT and in China, Canada, Kentucky, and Tennessee.

    “I wanted to invent something that was better, much better, than lithium-ion batteries for small-scale stationary storage, and ultimately for automotive [uses],” explains Sadoway, who is the John F. Elliott Professor Emeritus of Materials Chemistry.

    In addition to being expensive, lithium-ion batteries contain a flammable electrolyte, making them less than ideal for transportation. So, Sadoway started studying the periodic table, looking for cheap, Earth-abundant metals that might be able to substitute for lithium. The commercially dominant metal, iron, doesn’t have the right electrochemical properties for an efficient battery, he says. But the second-most-abundant metal in the marketplace — and actually the most abundant metal on Earth — is aluminum. “So, I said, well, let’s just make that a bookend. It’s gonna be aluminum,” he says.

    Then came deciding what to pair the aluminum with for the other electrode, and what kind of electrolyte to put in between to carry ions back and forth during charging and discharging. The cheapest of all the non-metals is sulfur, so that became the second electrode material. As for the electrolyte, “we were not going to use the volatile, flammable organic liquids” that have sometimes led to dangerous fires in cars and other applications of lithium-ion batteries, Sadoway says. They tried some polymers but ended up looking at a variety of molten salts that have relatively low melting points — close to the boiling point of water, as opposed to nearly 1,000 degrees Fahrenheit for many salts. “Once you get down to near body temperature, it becomes practical” to make batteries that don’t require special insulation and anticorrosion measures, he says.

    The three ingredients they ended up with are cheap and readily available — aluminum, no different from the foil at the supermarket; sulfur, which is often a waste product from processes such as petroleum refining; and widely available salts. “The ingredients are cheap, and the thing is safe — it cannot burn,” Sadoway says.

    In their experiments, the team showed that the battery cells could endure hundreds of cycles at exceptionally high charging rates, with a projected cost per cell of about one-sixth that of comparable lithium-ion cells. They showed that the charging rate was highly dependent on the working temperature, with 110 degrees Celsius (230 degrees Fahrenheit) showing 25 times faster rates than 25 C (77 F).

    Surprisingly, the molten salt the team chose as an electrolyte simply because of its low melting point turned out to have a fortuitous advantage. One of the biggest problems in battery reliability is the formation of dendrites, which are narrow spikes of metal that build up on one electrode and eventually grow across to contact the other electrode, causing a short-circuit and hampering efficiency. But this particular salt, it happens, is very good at preventing that malfunction.

    The chloro-aluminate salt they chose “essentially retired these runaway dendrites, while also allowing for very rapid charging,” Sadoway says. “We did experiments at very high charging rates, charging in less than a minute, and we never lost cells due to dendrite shorting.”

    “It’s funny,” he says, because the whole focus was on finding a salt with the lowest melting point, but the catenated chloro-aluminates they ended up with turned out to be resistant to the shorting problem. “If we had started off with trying to prevent dendritic shorting, I’m not sure I would’ve known how to pursue that,” Sadoway says. “I guess it was serendipity for us.”

    What’s more, the battery requires no external heat source to maintain its operating temperature. The heat is naturally produced electrochemically by the charging and discharging of the battery. “As you charge, you generate heat, and that keeps the salt from freezing. And then, when you discharge, it also generates heat,” Sadoway says. In a typical installation used for load-leveling at a solar generation facility, for example, “you’d store electricity when the sun is shining, and then you’d draw electricity after dark, and you’d do this every day. And that charge-idle-discharge-idle is enough to generate enough heat to keep the thing at temperature.”

    This new battery formulation, he says, would be ideal for installations of about the size needed to power a single home or small to medium business, producing on the order of a few tens of kilowatt-hours of storage capacity.

    For larger installations, up to utility scale of tens to hundreds of megawatt hours, other technologies might be more effective, including the liquid metal batteries Sadoway and his students developed several years ago and which formed the basis for a spinoff company called Ambri, which hopes to deliver its first products within the next year. For that invention, Sadoway was recently awarded this year’s European Inventor Award.

    The smaller scale of the aluminum-sulfur batteries would also make them practical for uses such as electric vehicle charging stations, Sadoway says. He points out that when electric vehicles become common enough on the roads that several cars want to charge up at once, as happens today with gasoline fuel pumps, “if you try to do that with batteries and you want rapid charging, the amperages are just so high that we don’t have that amount of amperage in the line that feeds the facility.” So having a battery system such as this to store power and then release it quickly when needed could eliminate the need for installing expensive new power lines to serve these chargers.

    The new technology is already the basis for Avanti, a new spinoff company co-founded by Sadoway and Luis Ortiz ’96, ScD ’00, who was also a co-founder of Ambri; Avanti has licensed the patents to the system. “The first order of business for the company is to demonstrate that it works at scale,” Sadoway says, and then to subject it to a series of stress tests, including running through hundreds of charging cycles.

    Would a battery based on sulfur run the risk of producing the foul odors associated with some forms of sulfur? Not a chance, Sadoway says. “The rotten-egg smell is in the gas, hydrogen sulfide. This is elemental sulfur, and it’s going to be enclosed inside the cells.” If you were to try to open up a lithium-ion cell in your kitchen, he says (and please don’t try this at home!), “the moisture in the air would react and you’d start generating all sorts of foul gases as well. These are legitimate questions, but the battery is sealed, it’s not an open vessel. So I wouldn’t be concerned about that.”

    The research team included members from Peking University, Yunnan University and the Wuhan University of Technology, in China; the University of Louisville, in Kentucky; the University of Waterloo, in Canada; Oak Ridge National Laboratory, in Tennessee; and MIT. The work was supported by the MIT Energy Initiative, the MIT Deshpande Center for Technological Innovation, and ENN Group.

  • Bridging careers in aerospace manufacturing and fusion energy, with a focus on intentional inclusion

    “A big theme of my life has been focusing on intentional inclusion and how I can create environments where people can really bring their whole authentic selves to work,” says Joy Dunn ’08. As the vice president of operations at Commonwealth Fusion Systems, an MIT spinout working to achieve commercial fusion energy, Dunn looks for solutions to the world’s greatest climate challenges — while creating an open and equitable work environment where everyone can succeed.

    This theme has been cultivated throughout her professional and personal life, including as a Young Global Leader at the World Economic Forum and as a board member at Out for Undergrad, an organization that works with LGBTQ+ college students to help them achieve their personal and professional goals. Through her careers in both aerospace and energy, Dunn has striven to instill a sense of equity and inclusion from the inside out.

    Developing a love for space

    Dunn’s childhood was shaped by space. “I was really inspired as a kid to be an astronaut,” she says, “and for me that never stopped.” Dunn’s parents — both of whom had careers in the aerospace industry — encouraged her from an early age to pursue her interests, from building model rockets to visiting the National Air and Space Museum to attending space camp. A large inspiration for this passion arose when she received a signed photo from Sally Ride — the first American woman in space — that read, “To Joy, reach for the stars.”

    As her interests continued to grow in middle school, she and her mom looked to see what it would take to become an astronaut, asking questions such as “what are the common career paths?” and “what schools did astronauts typically go to?” They quickly found that MIT was at the top of that list, and by seventh grade, Dunn had set her sights on the Institute. 

    After years of hard work, Dunn entered MIT in fall 2004 with a major in aeronautical and astronautical engineering (AeroAstro). At MIT, she remained fully committed to her passion while also expanding into other activities such as varsity softball, the MIT Undergraduate Association, and the Alpha Chi Omega sorority.

    One of the highlights of Dunn’s college career was Unified Engineering, a year-long course required for all AeroAstro majors that provides a foundational knowledge of aerospace engineering — culminating in a team competition where students design and build remote-controlled planes to be pitted against each other. “My team actually got first place, which was very exciting,” she recalls. “And I honestly give a lot of that credit to our pilot. He did a very good job of not crashing!” In fact, that pilot was Warren Hoburg ’08, a former assistant professor in AeroAstro and current NASA astronaut training for a mission on the International Space Station.

    Pursuing her passion at SpaceX

    Dunn’s undergraduate experience culminated with an internship at the aerospace manufacturing company SpaceX in summer 2008. “It was by far my favorite internship of the ones that I had in college. I got to work on really hands-on projects and had the same amount of responsibility as a full-time employee,” she says.

    By the end of the internship, she was hired as a propulsion development engineer for the Dragon spacecraft, where she helped to build the thrusters for the first Dragon mission. Eventually, she transferred to the role of manufacturing engineer. “A lot of what I’ve done in my life is building things and looking for process improvements,” she says, so it was a natural fit. From there, she rose through the ranks, eventually becoming the senior manager of spacecraft manufacturing engineering, where she oversaw all the manufacturing, test, and integration engineers working on Dragon. “It was pretty incredible to go from building thrusters to building the whole vehicle,” she says.

    During her tenure, Dunn also co-founded SpaceX’s Women’s Network and its LGBT affinity group, Out and Allied. “It was about providing spaces for employees to get together and provide a sense of community,” she says. Through these groups, she helped start mentorship and community outreach programs, as well as helped grow the pipeline of women in leadership roles for the company.

    In spite of all her successes at SpaceX, she couldn’t help but think about what came next. “I had been at SpaceX for almost a decade and had these thoughts of, ‘do I want to do another tour of duty or look at doing something else?’ The main criteria I set for myself was to do something that is equally or more world-changing than SpaceX.”

    A pivot to fusion

    It was at this time in 2018 that Dunn received an email from a former mentor asking if she had heard about a fusion energy startup called Commonwealth Fusion Systems (CFS) that worked with the MIT Plasma Science and Fusion Center. “I didn’t know much about fusion at all,” she says. “I had heard about it as a science project that was still many, many years away as a viable energy source.”

    After learning more about the technology and company, “I was just like, ‘holy cow, this has the potential to be even more world-changing than what SpaceX is doing.’” She adds, “I decided that I wanted to spend my time and brainpower focusing on cleaning up the planet instead of getting off it.”

    After connecting with CFS CEO Bob Mumgaard SM ’15, PhD ’15, Dunn joined the company and returned to Cambridge as the head of manufacturing. While moving from the aerospace industry to fusion energy was a large shift, she says her first project — building a fusion-relevant, high-temperature superconducting magnet capable of achieving 20 tesla — tied back into her life as a builder who likes to get her hands on things.

    Over the course of two years, she oversaw the production and scaling of the magnet manufacturing process. When she first came in, the magnets were being constructed in a time-consuming and manual way. “One of the things I’m most proud of from this project is teaching MIT research scientists how to think like manufacturing engineers,” she says. “It was a great symbiotic relationship. The MIT folks taught us the physics and science behind the magnets, and we came in to figure out how to make them into a more manufacturable product.”

    In September 2021, CFS tested this high-temperature superconducting magnet and achieved its goal of 20 tesla. This was a pivotal moment for the company that brought it one step closer to achieving its goal of producing net-positive fusion power. Now, CFS has begun work on a new campus in Devens, Massachusetts, to house their manufacturing operations and SPARC fusion device. Dunn plays a pivotal role in this expansion as well. In March 2021, she was promoted to the head of operations, which expanded her responsibilities beyond managing manufacturing to include facilities, construction, safety, and quality. “It’s been incredible to watch the campus grow from a pile of dirt … into full buildings.”

    In addition to the groundbreaking work, Dunn highlights the culture of inclusiveness as something that makes CFS stand apart to her. “One of the main reasons that drew me to CFS was hearing from the company founders about their thoughts on diversity, equity, and inclusion, and how they wanted to make that a key focus for their company. That’s been so important in my career, and I’m really excited to see how much that’s valued at CFS.” The company has carried this out through programs such as Fusion Inclusion, an initiative that aims to build a strong and inclusive community from the inside out.

    Dunn stresses “the impact that fusion can have on our world and for addressing issues of environmental injustice through an equitable distribution of power and electricity.” She adds, “That’s a huge lever that we have. I’m excited to watch CFS grow and for us to make a really positive impact on the world in that way.”

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Stranded assets could exact steep costs on fossil energy producers and investors

    A 2021 study in the journal Nature found that in order to avert the worst impacts of climate change, most of the world’s known fossil fuel reserves must remain untapped. According to the study, 90 percent of coal and nearly 60 percent of oil and natural gas must be kept in the ground in order to maintain a 50 percent chance that global warming will not exceed 1.5 degrees Celsius above preindustrial levels.

    As the world transitions away from greenhouse-gas-emitting activities to keep global warming well below 2 C (and ideally 1.5 C) in alignment with the Paris Agreement on climate change, fossil fuel companies and their investors face growing financial risks (known as transition risks), including the prospect of ending up with massive stranded assets. This ongoing transition is likely to significantly scale back fossil fuel extraction and coal-fired power plant operations, exacting steep costs — most notably asset value losses — on fossil-energy producers and shareholders.

    Now, a new study in the journal Climate Change Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change estimates the current global asset value of untapped fossil fuels through 2050 under four increasingly ambitious climate-policy scenarios. The least-ambitious scenario (“Paris Forever”) assumes that initial Paris Agreement greenhouse gas emissions-reduction pledges are upheld in perpetuity; the most stringent scenario (“Net Zero 2050”) adds coordinated international policy instruments aimed at achieving global net-zero emissions by 2050.

    Powered by the MIT Joint Program’s model of the world economy with detailed representation of the energy sector and energy industry assets over time, the study finds that the global net present value of untapped fossil fuel output through 2050 relative to a reference “No Policy” scenario ranges from $21.5 trillion (Paris Forever) to $30.6 trillion (Net Zero 2050). The estimated global net present value of stranded assets in coal power generation through 2050 ranges from $1.3 to $2.3 trillion.
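
    Schematically, and setting aside the model’s regional and sectoral detail, each of those figures is a discounted sum of the output value forgone in a policy scenario relative to the No Policy reference; the base year and discount rate r below are generic placeholders rather than the study’s exact accounting:

    ```latex
    % Schematic net-present-value calculation for untapped fossil fuel output
    % through 2050, relative to the "No Policy" reference scenario.
    \mathrm{NPV} = \sum_{t=2021}^{2050} \frac{V_t^{\mathrm{No\ Policy}} - V_t^{\mathrm{Policy}}}{(1+r)^{\,t-2021}}
    ```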

    “The more stringent the climate policy, the greater the volume of untapped fossil fuels, and hence the higher the potential asset value loss for fossil-fuel owners and investors,” says Henry Chen, a research scientist at the MIT Joint Program and the study’s lead author.

    The global economy-wide analysis presented in the study provides a more fine-grained assessment of stranded assets than those performed in previous studies. Firms and financial institutions may combine the MIT analysis with details on their own investment portfolios to assess their exposure to climate-related transition risk.

  • A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by MIT Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering Michael F. Howland.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model that predicts the power production of each turbine in the farm as a function of the incident winds in the atmosphere and the control strategy of each turbine. While grounded in flow physics, the model also learns from operational wind farm data to reduce predictive error and uncertainty. Without changing the physical turbine locations or the hardware of existing wind farms, the researchers use this physics-based, data-assisted model of the flow within the wind farm, and of the resulting power production of each turbine under different wind conditions, to find the optimal orientation for each turbine at a given moment. That allows them to maximize the output of the whole farm, not just of individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle (its orientation about the vertical axis) to align as closely as possible with the wind. But in the new system, the team has found that turning one turbine slightly away from its own maximum output position — perhaps 20 degrees away from its individual peak output angle — produces an increase in power output from one or more downwind units that more than makes up for the slight reduction in output from the first unit. By using a centralized control system that takes all of these interactions into account, the collection of turbines was operated at power output levels as much as 32 percent higher under some conditions.
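
    The flavor of that trade-off can be shown with a deliberately crude two-turbine example: yawing the upwind machine costs it some power (roughly as the cube of the cosine of its misalignment, a common rule of thumb) but weakens the wake hitting its downwind neighbor. The wake coefficients below are invented for illustration; this is not the flow model used in the study.

    ```python
    # Toy comparison of greedy vs. farm-level yaw control for two turbines in a
    # row. The cos^3 yaw-loss rule is a common approximation; the wake-deficit
    # numbers are made up for clarity and are not from the Nature Energy study.
    import numpy as np

    def farm_power(yaw1_deg: float, wind_speed: float = 8.0) -> float:
        """Combined power (arbitrary units) of an upwind turbine yawed by
        yaw1_deg and a downwind turbine sitting in its wake."""
        p1 = wind_speed ** 3 * np.cos(np.radians(yaw1_deg)) ** 3
        # Yawing the upwind rotor deflects its wake, shrinking the velocity
        # deficit seen downstream (toy model: a 25% deficit that fades with yaw).
        deficit = 0.25 * max(0.0, 1.0 - abs(yaw1_deg) / 40.0)
        p2 = (wind_speed * (1.0 - deficit)) ** 3
        return p1 + p2

    greedy = farm_power(0.0)                        # each turbine maximizes only itself
    candidate_yaws = np.arange(0.0, 30.5, 0.5)
    best_yaw = max(candidate_yaws, key=farm_power)  # centralized, farm-level choice
    cooperative = farm_power(best_yaw)

    print(f"greedy: {greedy:.1f}  cooperative (yaw {best_yaw:.1f} deg): {cooperative:.1f}  "
          f"gain: {100 * (cooperative / greedy - 1):.1f}%")
    ```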

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million in extra revenue for the wind farm operators per year, he says.
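
    Those equivalences can be sanity-checked with round, assumed figures (roughly 8 to 9 gigawatt-hours per year for one utility-scale turbine and about 10,000 kilowatt-hours per year per household; neither figure comes from the paper):

    ```python
    # Back-of-envelope check of the reported equivalences. The per-turbine and
    # per-home figures are rough assumptions, not values from the study.
    extra_energy_twh = 31.0        # added output, TWh per year
    gwh_per_turbine = 8.6          # assumed annual output of one utility-scale turbine
    kwh_per_home = 10_000          # assumed annual household consumption
    revenue_usd = 950e6            # reported extra revenue per year

    turbines = extra_energy_twh * 1e3 / gwh_per_turbine      # TWh -> GWh
    homes = extra_energy_twh * 1e9 / kwh_per_home            # TWh -> kWh
    price_per_mwh = revenue_usd / (extra_energy_twh * 1e6)   # TWh -> MWh

    print(f"~{turbines:,.0f} turbines, ~{homes / 1e6:.1f} million homes, ~${price_per_mwh:.0f}/MWh")
    ```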

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, therefore increasing the power density of wind energy, saving on land (or sea) footprints. This power density increase and footprint reduction could help to achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and to improve the operational instructions they derive from them, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram, formerly at ReNew Power Private Limited and presently at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

  • Fusion’s newest ambassador

    When high school senior Tuba Balta emailed MIT Plasma Science and Fusion Center (PSFC) Director Dennis Whyte in February, she was not certain she would get a response. As part of her final semester at BASIS Charter School, in Washington, she had been searching unsuccessfully for someone to sponsor an internship in fusion energy, a topic that had recently begun to fascinate her because “it’s not figured out yet.” Time was running out if she was to include the internship as part of her senior project.

    “I never say ‘no’ to a student,” says Whyte, who felt she could provide a youthful perspective on communicating the science of fusion to the general public.

    Posters explaining the basics of fusion science were being considered for the walls of a PSFC lounge area, a space used to welcome visitors who might not know much about the center’s focus: What is fusion? What is plasma? What is magnetic confinement fusion? What is a tokamak?

    Why couldn’t Balta be tasked with coming up with text for these posters, written specifically to be understandable, even intriguing, to her peers?

    Meeting the team

    Although most of the internship would be virtual, Balta visited MIT to meet Whyte and others who would guide her progress. A tour of the center showed her the past and future of the PSFC: in one lab area, the remains of the Alcator C-Mod tokamak, which ran for decades, stood on her left, while on her right was the testing area for the new superconducting magnets crucial to SPARC, the fusion device designed in collaboration with MIT spinoff Commonwealth Fusion Systems.

    With Whyte, graduate student Rachel Bielajew, and Outreach Coordinator Paul Rivenberg guiding her content and style, Balta focused on one of eight posters each week. Her school also required her to keep a weekly blog of her progress, detailing what she was learning in the process of creating the posters.

    Finding her voice

    Balta admits that she was not looking forward to this part of the school assignment. But she decided to have fun with it, adopting an enthusiastic and conversational tone, as if she were sitting with friends around a lunch table. Each week, the blog gave her a place to try out what she was writing for her posters and her final project on her friends.

    Her posts won praise from her schoolmates for their clarity, as when in Week 3 she explained the concept of turbulence as it relates to fusion research, sending her readers to their kitchen faucets to experiment with the pressure and velocity of running tap water.

    The voice she found through her blog served her well during her final presentation about fusion at a school expo for classmates, parents, and the general public.

    “Most people are intimidated by the topic, which they shouldn’t be,” says Balta. “And it just made me happy to help other people understand it.”

    Her favorite part of the internship? “Getting to talk to people whose papers I was reading and ask them questions. Because when it comes to fusion, you can’t just look it up on Google.”

    Awaiting her first year at the University of Chicago, Balta reflects on the team spirit she experienced in communicating with researchers at the PSFC.

    “I think that was one of my big takeaways,” she says, “that you have to work together. And you should, because you’re always going to be missing some piece of information; but there’s always going to be somebody else who has that piece, and we can all help each other out.”

  • Getting the carbon out of India’s heavy industries

    The world’s third largest carbon emitter after China and the United States, India ranks seventh in a major climate risk index. Unless India, along with the nearly 200 other signatory nations of the Paris Agreement, takes aggressive action to keep global warming well below 2 degrees Celsius relative to preindustrial levels, physical and financial losses from floods, droughts, and cyclones could become more severe than they are today. So, too, could health impacts associated with the hazardous air pollution levels now affecting more than 90 percent of its population.  

    To address both climate and air pollution risks and meet its population’s escalating demand for energy, India will need to dramatically decarbonize its energy system in the coming decades. To that end, its initial Paris Agreement climate policy pledge calls for a reduction in carbon dioxide intensity of GDP by 33-35 percent by 2030 from 2005 levels, and an increase in non-fossil-fuel-based power to about 40 percent of cumulative installed capacity in 2030. At the COP26 international climate change conference, India announced more aggressive targets, including the goal of achieving net-zero emissions by 2070.
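
    One point worth keeping in mind is that an intensity-of-GDP target is not an absolute cap: if the economy grows fast enough, total emissions can rise even while the pledge is met. A rough illustration, using a purely hypothetical growth factor rather than any figure from the study:

    ```python
    # Back-of-envelope illustration (not from the study): an emissions-intensity
    # target can coexist with rising absolute emissions when GDP grows quickly.
    emissions_2005 = 1.0                 # relative CO2 emissions in 2005
    gdp_2005 = 1.0                       # relative GDP in 2005
    gdp_2030 = gdp_2005 * 3.5            # hypothetical: GDP more than triples by 2030

    intensity_2005 = emissions_2005 / gdp_2005
    intensity_2030 = intensity_2005 * (1 - 0.34)     # midpoint of the 33-35% pledge
    emissions_2030 = intensity_2030 * gdp_2030

    print(f"2030 emissions vs. 2005: {emissions_2030:.2f}x")   # ~2.3x despite the pledge
    ```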

    Meeting its climate targets will require emissions reductions in every economic sector, including those where emissions are particularly difficult to abate. In such sectors, which involve energy-intensive industrial processes (production of iron and steel; nonferrous metals such as copper, aluminum, and zinc; cement; and chemicals), decarbonization options are limited and more expensive than in other sectors. Whereas replacing coal and natural gas with solar and wind could lower carbon dioxide emissions in electric power generation and transportation, no easy substitutes can be deployed in many heavy industrial processes that release CO2 into the air as a byproduct.

    However, other methods could be used to lower the emissions associated with these processes, which draw upon roughly 50 percent of India’s natural gas, 25 percent of its coal, and 20 percent of its oil. Evaluating the potential effectiveness of such methods in the next 30 years, a new study in the journal Energy Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change is the first to explicitly explore emissions-reduction pathways for India’s hard-to-abate sectors.

    Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model, the study assesses existing emissions levels in these sectors and projects how much they can be reduced by 2030 and 2050 under different policy scenarios. Aimed at decarbonizing industrial processes, the scenarios include the use of subsidies to increase electricity use, incentives to replace coal with natural gas, measures to improve industrial resource efficiency, policies to put a price on carbon, carbon capture and storage (CCS) technology, and hydrogen in steel production.

    The researchers find that India’s 2030 Paris Agreement pledge may still drive up fossil fuel use and associated greenhouse gas emissions, with projected carbon dioxide emissions from hard-to-abate sectors rising by about 2.6 times from 2020 to 2050. But scenarios that also promote electrification, natural gas support, and resource efficiency in hard-to-abate sectors can lower their CO2 emissions by 15-20 percent.

    While appearing to move the needle in the right direction, those reductions are ultimately canceled out by increased demand for the products that emerge from these sectors. So what’s the best path forward?

    The researchers conclude that only the incentive of carbon pricing or the advance of disruptive technology can move hard-to-abate sector emissions below their current levels. To achieve significant emissions reductions, they maintain, the price of carbon must be high enough to make CCS economically viable. In that case, reductions of 80 percent below current levels could be achieved by 2050.

    “Absent major support from the government, India will be unable to reduce carbon emissions in its hard-to-abate sectors in alignment with its climate targets,” says MIT Joint Program deputy director Sergey Paltsev, the study’s lead author. “A comprehensive government policy could provide robust incentives for the private sector in India and generate favorable conditions for foreign investments and technology advances. We encourage decision-makers to use our findings to design efficient pathways to reduce emissions in those sectors, and thereby help lower India’s climate and air pollution-related health risks.”