More stories

  • Designing zeolites, porous materials made to trap molecules

    Zeolites are a class of minerals used in everything from industrial catalysts and chemical filters to laundry detergents and cat litter. They are mostly composed of silicon and aluminum — two abundant, inexpensive elements — plus oxygen; they have a crystalline structure; and most significantly, they are porous. Among the regularly repeating atomic patterns in them are tiny interconnected openings, or pores, that can trap molecules that just fit inside them, allow smaller ones to pass through, or block larger ones from entering. A zeolite can remove unwanted molecules from gases and liquids, or trap them temporarily and then release them, or hold them while they undergo rapid chemical reactions.

    Some zeolites occur naturally, but they take unpredictable forms and have variable-sized pores. “People synthesize artificial versions to ensure absolute purity and consistency,” says Rafael Gómez-Bombarelli, the Jeffrey Cheah Career Development Chair in Engineering in the Department of Materials Science and Engineering (DMSE). And they work hard to influence the size of the internal pores in hopes of matching the molecule or other particle they’re looking to capture.

    The basic recipe for making zeolites sounds simple. Mix together the raw ingredients — basically, silicon dioxide and aluminum oxide — and put them in a reactor for a few days at a high temperature and pressure. Depending on the ratio between the ingredients and the temperature, pressure, and timing, as the initial gel slowly solidifies into crystalline form, different zeolites emerge.

    But there’s one special ingredient to add “to help the system go where you want it to go,” says Gómez-Bombarelli. “It’s a molecule that serves as a template so that the zeolite you want will crystallize around it and create pores of the desired size and shape.”

    The so-called templating molecule binds to the material before it solidifies. As crystallization progresses, the molecule directs the structure, or “framework,” that forms around it. After crystallization, the temperature is raised and the templating molecule burns off, leaving behind a solid aluminosilicate material filled with open pores that are — given the correct templating molecule and synthesis conditions — just the right size and shape to recognize the targeted molecule.

    The zeolite conundrum

    Theoretical studies suggest that there should be hundreds of thousands of possible zeolites. But despite some 60 years of intensive research, only about 250 zeolites have been made. This is sometimes called the “zeolite conundrum.” Why haven’t more been made — especially now, when they could help ongoing efforts to decarbonize energy and the chemical industry?

    One challenge is figuring out the best recipe for making them: Factors such as the ratio between silicon and aluminum, the cooking temperature, and whether to stir the ingredients all influence the outcome. But the real key, the researchers say, lies in choosing a templating molecule that’s best for producing the intended zeolite framework. Making that match is difficult: There are hundreds of known templating molecules and potentially a million zeolites, and researchers are continually designing new molecules because millions more could be made and might work better.

    For decades, the exploration of how to synthesize a particular zeolite has been done largely by trial and error — a time-consuming, expensive, inefficient way to go about it. There has also been considerable effort to use “atomistic” (atom-by-atom) simulation to figure out what known or novel templating molecule to use to produce a given zeolite. But the experimental and modeling results haven’t generated reliable guidance. In many cases, researchers have carefully selected or designed a molecule to make a particular zeolite, but when they tried their molecule in the lab, the zeolite that formed wasn’t what they expected or desired. So they needed to start over.

    Those experiences illustrate what Gómez-Bombarelli and his colleagues believe is the problem that’s been plaguing zeolite design for decades. All the efforts — both experimental and theoretical — have focused on finding the templating molecule that’s best for forming a specific zeolite. But what if that templating molecule is also really good — or even better — at forming some other zeolite?

    To determine the “best” molecule for making a certain zeolite framework, and the “best” zeolite framework to act as host to a particular molecule, the researchers decided to look at both sides of the pairing. Daniel Schwalbe-Koda PhD ’22, a former member of Gómez-Bombarelli’s group and now a postdoc at Lawrence Livermore National Laboratory, describes the process as a sort of dance, with molecules and zeolites in a room looking for partners. “Each molecule wants to find a partner zeolite, and each zeolite wants to find a partner molecule,” he says. “But it’s not enough to find a good dance partner from the perspective of only one dancer. The potential partner could prefer to dance with someone else, after all. So it needs to be a particularly good pairing.” The upshot: “You need to look from the perspective of each of them.”

    To find the best match from both perspectives, the researchers needed to try every molecule with every zeolite and quantify how well the pairings worked.
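    The both-sides matching idea can be sketched in a few lines of code. This is a hedged illustration, not the team's actual method: the function name, the nested-list affinity matrix, and the toy scores are all assumptions made for the example.

```python
def mutual_best_pairs(affinity, molecules, zeolites):
    """Return (molecule, zeolite) pairs in which each is the other's top pick.
    `affinity[i][j]` scores molecule i paired with zeolite j (higher is better)."""
    pairs = []
    for i, mol in enumerate(molecules):
        # Best zeolite from this molecule's perspective...
        j = max(range(len(zeolites)), key=lambda k: affinity[i][k])
        # ...kept only if that zeolite also prefers this molecule.
        if max(range(len(molecules)), key=lambda m: affinity[m][j]) == i:
            pairs.append((mol, zeolites[j]))
    return pairs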

    A broader metric for evaluating pairs

    Before performing that analysis, the researchers defined a new “evaluating metric” that they could use to rank each templating molecule-zeolite pair. The standard metric for measuring the affinity between a molecule and a zeolite is “binding energy,” that is, how strongly the molecule clings to the zeolite or, conversely, how much energy is required to separate the two. While recognizing the value of that metric, the MIT-led team wanted to take more parameters into account.

    Their new evaluating metric therefore includes not only binding energy but also the size, shape, and volume of the molecule and the opening in the zeolite framework. And their approach calls for turning the molecule to different orientations to find the best possible fit.

    Affinity scores for all molecule-zeolite pairs based on that evaluating metric would enable zeolite researchers to answer two key questions: What templating molecule will form the zeolite that I want? And if I use that templating molecule, what other zeolites might it form instead? Using the molecule-zeolite affinity scores, researchers could first identify molecules that look good for making a desired zeolite. They could then rule out the ones that also look good for forming other zeolites, leaving a set of molecules deemed to be “highly selective” for making the desired zeolite.  
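    That two-question filter, keep molecules that score well for the target zeolite but rule out those that also score well for competing frameworks, can be sketched as follows. The function name, the `margin` threshold, and the toy scores are illustrative assumptions, not the study's actual metric.

```python
def selective_templates(scores, target, margin=0.1):
    """Keep molecules whose score for `target` beats their best score for any
    other framework by at least `margin`. `scores` maps molecule -> {zeolite: score}."""
    chosen = []
    for mol, by_zeolite in scores.items():
        target_score = by_zeolite.get(target, float("-inf"))
        rivals = [s for z, s in by_zeolite.items() if z != target]
        # A molecule is "highly selective" only if no rival framework comes close.
        if target_score - max(rivals, default=float("-inf")) >= margin:
            chosen.append(mol)
    return chosen
```

    Raising `margin` tightens the definition of "highly selective," trading a shorter candidate list for more confidence that the molecule won't form an unwanted zeolite instead.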

    Validating the approach: A rich literature

    But does their new metric work better than the standard one? To find out, the team needed to perform atomistic simulations using their new evaluating metric and then benchmark their results against experimental evidence reported in the literature. There are many thousands of journal articles reporting on experiments involving zeolites — in many cases, detailing not only the molecule-zeolite pairs and outcomes but also synthesis conditions and other details. Ferreting out articles with the information the researchers needed was a job for machine learning — in particular, for natural language processing.

    For that task, Gómez-Bombarelli and Schwalbe-Koda turned to their DMSE colleague Elsa Olivetti PhD ’07, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering. Using a literature-mining technique that she and a group of collaborators had developed, she and her DMSE team processed more than 2 million materials science papers, found some 90,000 relating to zeolites, and extracted 1,338 of them for further analysis. The yield was 549 templating molecules tested, 209 zeolite frameworks produced, and 5,663 synthesis routes followed.

    Based on those findings, the researchers used their new evaluating metric and a novel atomistic simulation technique to examine more than half a million templating molecule-zeolite pairs. Their results reproduced experimental outcomes reported in more than a thousand journal articles. Indeed, the new metric outperformed the traditional binding energy metric, and their simulations were orders of magnitude faster than traditional approaches.

    Ready for experimental investigations

    Now the researchers were ready to put their approach to the test: They would use it to design new templating molecules and try them out in experiments performed by a team led by Yuriy Román-Leshkov, the Robert T. Haslam (1911) Professor of Chemical Engineering, and a team from the Instituto de Tecnologia Química in Valencia, Spain, led by Manuel Moliner and Avelino Corma.

    One set of experiments focused on a zeolite called chabazite, which is used in catalytic converters for vehicles. Using their techniques, the researchers designed a new templating molecule for synthesizing chabazite, and the experimental results confirmed their approach. Their analyses had shown that the new templating molecule would be good for forming chabazite and not for forming anything else. “Its binding strength isn’t as high as other molecules for chabazite, so people hadn’t used it,” says Gómez-Bombarelli. “But it’s pretty good, and it’s not good for anything else, so it’s selective — and it’s way cheaper than the usual ones.”

    In addition, in their new molecule, the electrical charge is distributed differently than in the traditional ones, which led to new possibilities. The researchers found that by adjusting both the shape and charge of the molecule, they could control where the negative charge occurs on the pore that’s created in the final zeolite. “The charge placement that results can make the chabazite a much better catalyst than it was before,” says Gómez-Bombarelli. “So our same rules for molecule design also determine where the negative charge is going to end up, which can lead to whole different classes of catalysts.”

    Schwalbe-Koda describes another experiment that demonstrates the importance of molecular shape as well as the types of new materials made possible using the team’s approach. In one striking example, the team designed a templating molecule with a height and width that are halfway between those of two molecules that are now commonly used — one for making chabazite and the other for making a zeolite called AEI. (Every new zeolite structure is examined by the International Zeolite Association and — once approved — receives a three-letter designation.)

    Experiments using that in-between templating molecule resulted in the formation of not one zeolite or the other, but a combination of the two in a single solid. “The result blends two different structures together in a way that the final result is better than the sum of its parts,” says Schwalbe-Koda. “The catalyst is like the one used in catalytic converters in today’s trucks — only better.” It’s more efficient in converting nitrogen oxides to harmless nitrogen gases and water, and — because of the two different pore sizes and the aluminosilicate composition — it works well on exhaust that’s fairly hot, as during normal operation, and also on exhaust that’s fairly cool, as during startup.

    Putting the work into practice

    As with all materials, the commercial viability of a zeolite will depend in part on the cost of making it. The researchers’ technique can identify promising templating molecules, but some of them may be difficult to synthesize in the lab. As a result, the overall cost of that molecule-zeolite combination may be too high to be competitive.

    Gómez-Bombarelli and his team therefore include in their assessment process a calculation of cost for synthesizing each templating molecule they identified — generally the most expensive part of making a given zeolite. They use a publicly available model devised in 2018 by Connor Coley PhD ’19, now the Henri Slezynger (1957) Career Development Assistant Professor of Chemical Engineering at MIT. The model takes into account all the starting materials and the step-by-step chemical reactions needed to produce the targeted templating molecule.

    However, commercialization decisions aren’t based solely on cost. Sometimes there’s a trade-off between cost and performance. “For instance, given our chabazite findings, would customers or the community trade a little bit of activity for a 100-fold decrease in the cost of the templating molecule?” says Gómez-Bombarelli. “The answer is likely yes. So we’ve made a tool that can help them navigate that trade-off.” And there are other factors to consider. For example, is this templating molecule truly novel, or have others already studied it — or perhaps even hold a patent on it?

    “While an algorithm can guide development of templating molecules and quantify specific molecule-zeolite matches, other types of assessments are best left to expert judgment,” notes Schwalbe-Koda. “We need a partnership between computational analysis and human intuition and experience.”

    To that end, the MIT researchers and their colleagues decided to share their techniques and findings with other zeolite researchers. Led by Schwalbe-Koda, they created an online database that they made publicly accessible and easy to use — an unusual step, given the competitive industries that rely on zeolites. The interactive website — zeodb.mit.edu — contains the researchers’ final metrics for templating molecule-zeolite pairs resulting from hundreds of thousands of simulations; all the identified journal articles, along with which molecules and zeolites were examined and what synthesis conditions were used; and many more details. Users are free to search and organize the data in any way that suits them.

    Gómez-Bombarelli, Schwalbe-Koda, and their colleagues hope that their techniques and the interactive website will help other researchers explore and discover promising new templating molecules and zeolites, some of which could have profound impacts on efforts to decarbonize energy and tackle climate change.

    This research involved a team of collaborators at MIT, the Instituto de Tecnologia Química (UPV-CSIC), and Stockholm University. The work was supported in part by the MIT Energy Initiative Seed Fund Program and by seed funds from the MIT International Science and Technology Initiative. Daniel Schwalbe-Koda was supported by an ExxonMobil-MIT Energy Fellowship in 2020–21.

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Taking a magnifying glass to data center operations

    When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner — and the team is looking for ways to improve.

    “We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC. 

    To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset as an open-source resource for the computing community.

    Their goal is to empower computer scientists and data center operators to better understand avenues for data center optimization — an important task as processing needs continue to grow. They also see potential for leveraging AI in the data center itself, by using the data to develop models for predicting failure points, optimizing job scheduling, and improving energy efficiency. While cloud providers are actively working on optimizing their data centers, they do not often make their data or models available for the broader high-performance computing (HPC) community to leverage. The release of this dataset and associated code seeks to fill this space.

    “Data centers are changing. We have an explosion of hardware platforms, the types of workloads are evolving, and the types of people who are using data centers are changing,” says Vijay Gadepally, a senior researcher at the LLSC. “Until now, there hasn’t been a great way to analyze the impact to data centers. We see this research and dataset as a big step toward coming up with a principled approach to understanding how these variables interact with each other and then applying AI for insights and improvements.”

    Papers describing the dataset and potential applications have been accepted to a number of venues, including the IEEE International Symposium on High-Performance Computer Architecture, the IEEE International Parallel and Distributed Processing Symposium, the Annual Conference of the North American Chapter of the Association for Computational Linguistics, the IEEE High-Performance and Embedded Computing Conference, and International Conference for High Performance Computing, Networking, Storage and Analysis. 

    Workload classification

    Ranked among the world’s TOP500 supercomputers, TX-GAIA combines traditional computing hardware (central processing units, or CPUs) with nearly 900 graphics processing unit (GPU) accelerators. These NVIDIA GPUs are specialized for deep learning, the class of AI that has given rise to speech recognition and computer vision.

    The dataset covers CPU, GPU, and memory usage by job; scheduling logs; and physical monitoring data. Compared to similar datasets, such as those from Google and Microsoft, the LLSC dataset offers “labeled data, a variety of known AI workloads, and more detailed time series data compared with prior datasets. To our knowledge, it’s one of the most comprehensive and fine-grained datasets available,” Gadepally says. 

    Notably, the team collected time-series data at an unprecedented level of detail: 100-millisecond intervals on every GPU and 10-second intervals on every CPU, as the machines processed more than 3,000 known deep-learning jobs. One of the first goals is to use this labeled dataset to characterize the workloads that different types of deep-learning jobs place on the system. This process would extract features that reveal differences in how the hardware processes natural language models versus image classification or materials design models, for example.   

    The team has now launched the MIT Datacenter Challenge to mobilize this research. The challenge invites researchers to use AI techniques to identify with 95 percent accuracy the type of job that was run, using their labeled time-series data as ground truth.

    Such insights could enable data centers to better match a user’s job request with the hardware best suited for it, potentially conserving energy and improving system performance. Classifying workloads could also allow operators to quickly notice discrepancies resulting from hardware failures, inefficient data access patterns, or unauthorized usage.
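    As a rough sketch of what a Datacenter Challenge entry might look like, the toy classifier below labels a job from summary statistics of its GPU utilization trace. The traces, labels, and nearest-centroid approach are all assumptions for illustration; the real challenge uses the LLSC's labeled telemetry as ground truth, and entrants are free to apply any AI technique.

```python
def trace_features(trace):
    """Summarize a utilization trace by its mean and variance."""
    mean = sum(trace) / len(trace)
    var = sum((x - mean) ** 2 for x in trace) / len(trace)
    return (mean, var)

def nearest_centroid(train, query):
    """Label a query trace by the closest class centroid in feature space.
    `train` maps a job-type label to a list of example traces."""
    def centroid(traces):
        feats = [trace_features(t) for t in traces]
        return tuple(sum(f[i] for f in feats) / len(feats) for i in range(2))
    q = trace_features(query)
    return min(train, key=lambda label: sum((a - b) ** 2
                                            for a, b in zip(centroid(train[label]), q)))
```

    Even this crude two-feature summary separates workloads with very different utilization profiles; the 100-millisecond resolution of the released data allows far richer features in practice.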

    Too many choices

    Today, the LLSC offers tools that let users submit their job and select the processors they want to use, “but it’s a lot of guesswork on the part of users,” Samsi says. “Somebody might want to use the latest GPU, but maybe their computation doesn’t actually need it and they could get just as impressive results on CPUs, or lower-powered machines.”

    Professor Devesh Tiwari at Northeastern University is working with the LLSC team to develop techniques that can help users match their workloads to appropriate hardware. Tiwari explains that the emergence of different types of AI accelerators, GPUs, and CPUs has left users suffering from too many choices. Without the right tools to take advantage of this heterogeneity, they are missing out on the benefits: better performance, lower costs, and greater productivity.

    “We are fixing this very capability gap — making users more productive and helping users do science better and faster without worrying about managing heterogeneous hardware,” says Tiwari. “My PhD student, Baolin Li, is building new capabilities and tools to help HPC users leverage heterogeneity near-optimally without user intervention, using techniques grounded in Bayesian optimization and other learning-based optimization methods. But, this is just the beginning. We are looking into ways to introduce heterogeneity in our data centers in a principled approach to help our users achieve the maximum advantage of heterogeneity autonomously and cost-effectively.”

    Workload classification is the first of many problems to be posed through the Datacenter Challenge. Others include developing AI techniques to predict job failures, conserve energy, or create job scheduling approaches that improve data center cooling efficiencies.

    Energy conservation 

    To mobilize research into greener computing, the team is also planning to release an environmental dataset of TX-GAIA operations, containing rack temperature, power consumption, and other relevant data.

    According to the researchers, huge opportunities exist to improve the power efficiency of HPC systems being used for AI processing. As one example, recent work in the LLSC determined that simple hardware tuning, such as limiting the amount of power an individual GPU can draw, could reduce the energy cost of training an AI model by 20 percent, with only modest increases in computing time. “This reduction translates to approximately an entire week’s worth of household energy for a mere three-hour time increase,” Gadepally says.
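    The arithmetic behind such a trade-off is simple to check. The wattages and runtimes below are hypothetical stand-ins, chosen only to reproduce a 20 percent saving for a three-hour slowdown; they are not LLSC measurements.

```python
def training_energy_kwh(power_kw, hours):
    """Energy consumed by a run at constant average power."""
    return power_kw * hours

# Hypothetical numbers: a 72-hour uncapped run vs. a power-capped run
# that draws less power but takes three hours longer.
baseline = training_energy_kwh(power_kw=100.0, hours=72.0)  # 7200 kWh
capped = training_energy_kwh(power_kw=76.8, hours=75.0)     # 5760 kWh
saving = 1 - capped / baseline                               # 0.20, i.e. 20 percent
```

    Capping pays off whenever the fractional slowdown is smaller than the fractional power reduction, which is why a modest time increase can still yield a large energy saving.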

    They have also been developing techniques to predict model accuracy, so that users can quickly terminate experiments that are unlikely to yield meaningful results, saving energy. The Datacenter Challenge will share relevant data to enable researchers to explore other opportunities to conserve energy.

    The team expects that lessons learned from this research can be applied to the thousands of data centers operated by the U.S. Department of Defense. The U.S. Air Force is a sponsor of this work, which is being conducted under the USAF-MIT AI Accelerator.

    Other collaborators include researchers at MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Professor Charles Leiserson’s Supertech Research Group is investigating performance-enhancing techniques for parallel computing, and research scientist Neil Thompson is designing studies on ways to nudge data center users toward climate-friendly behavior.

    Samsi presented this work at the inaugural AI for Datacenter Optimization (ADOPT’22) workshop last spring as part of the IEEE International Parallel and Distributed Processing Symposium. The workshop officially introduced their Datacenter Challenge to the HPC community.

    “We hope this research will allow us and others who run supercomputing centers to be more responsive to user needs while also reducing the energy consumption at the center level,” Samsi says.

  • A new concept for low-cost batteries

    As the world builds out ever larger installations of wind and solar power systems, the need is growing fast for economical, large-scale backup systems to provide power when the sun is down and the air is calm. Today’s lithium-ion batteries are still too expensive for most such applications, and other options such as pumped hydro require specific topography that’s not always available.

    Now, researchers at MIT and elsewhere have developed a new kind of battery, made entirely from abundant and inexpensive materials, that could help to fill that gap.

    The new battery architecture, which uses aluminum and sulfur as its two electrode materials, with a molten salt electrolyte in between, is described today in the journal Nature, in a paper by MIT Professor Donald Sadoway, along with 15 others at MIT and in China, Canada, Kentucky, and Tennessee.

    “I wanted to invent something that was better, much better, than lithium-ion batteries for small-scale stationary storage, and ultimately for automotive [uses],” explains Sadoway, who is the John F. Elliott Professor Emeritus of Materials Chemistry.

    In addition to being expensive, lithium-ion batteries contain a flammable electrolyte, making them less than ideal for transportation. So, Sadoway started studying the periodic table, looking for cheap, Earth-abundant metals that might be able to substitute for lithium. The commercially dominant metal, iron, doesn’t have the right electrochemical properties for an efficient battery, he says. But the second-most-abundant metal in the marketplace — and actually the most abundant metal on Earth — is aluminum. “So, I said, well, let’s just make that a bookend. It’s gonna be aluminum,” he says.

    Then came deciding what to pair the aluminum with for the other electrode, and what kind of electrolyte to put in between to carry ions back and forth during charging and discharging. The cheapest of all the non-metals is sulfur, so that became the second electrode material. As for the electrolyte, “we were not going to use the volatile, flammable organic liquids” that have sometimes led to dangerous fires in cars and other applications of lithium-ion batteries, Sadoway says. They tried some polymers but ended up looking at a variety of molten salts that have relatively low melting points — close to the boiling point of water, as opposed to nearly 1,000 degrees Fahrenheit for many salts. “Once you get down to near body temperature, it becomes practical” to make batteries that don’t require special insulation and anticorrosion measures, he says.

    The three ingredients they ended up with are cheap and readily available — aluminum, no different from the foil at the supermarket; sulfur, which is often a waste product from processes such as petroleum refining; and widely available salts. “The ingredients are cheap, and the thing is safe — it cannot burn,” Sadoway says.

    In their experiments, the team showed that the battery cells could endure hundreds of cycles at exceptionally high charging rates, with a projected cost per cell of about one-sixth that of comparable lithium-ion cells. They also showed that the charging rate was highly dependent on the working temperature: at 110 degrees Celsius (230 degrees Fahrenheit), rates were 25 times faster than at 25 C (77 F).

    Surprisingly, the molten salt the team chose as an electrolyte simply because of its low melting point turned out to have a fortuitous advantage. One of the biggest problems in battery reliability is the formation of dendrites, which are narrow spikes of metal that build up on one electrode and eventually grow across to contact the other electrode, causing a short-circuit and hampering efficiency. But this particular salt, it happens, is very good at preventing that malfunction.

    The chloro-aluminate salt they chose “essentially retired these runaway dendrites, while also allowing for very rapid charging,” Sadoway says. “We did experiments at very high charging rates, charging in less than a minute, and we never lost cells due to dendrite shorting.”

    “It’s funny,” he says, because the whole focus was on finding a salt with the lowest melting point, but the catenated chloro-aluminates they ended up with turned out to be resistant to the shorting problem. “If we had started off with trying to prevent dendritic shorting, I’m not sure I would’ve known how to pursue that,” Sadoway says. “I guess it was serendipity for us.”

    What’s more, the battery requires no external heat source to maintain its operating temperature. The heat is naturally produced electrochemically by the charging and discharging of the battery. “As you charge, you generate heat, and that keeps the salt from freezing. And then, when you discharge, it also generates heat,” Sadoway says. In a typical installation used for load-leveling at a solar generation facility, for example, “you’d store electricity when the sun is shining, and then you’d draw electricity after dark, and you’d do this every day. And that charge-idle-discharge-idle is enough to generate enough heat to keep the thing at temperature.”

    This new battery formulation, he says, would be ideal for installations of about the size needed to power a single home or small to medium business, producing on the order of a few tens of kilowatt-hours of storage capacity.

    For larger installations, up to utility scale of tens to hundreds of megawatt-hours, other technologies might be more effective, including the liquid metal batteries that Sadoway and his students developed several years ago, which formed the basis for a spinoff company called Ambri that hopes to deliver its first products within the next year. For that invention, Sadoway was recently awarded this year’s European Inventor Award.

    The smaller scale of the aluminum-sulfur batteries would also make them practical for uses such as electric vehicle charging stations, Sadoway says. He points out that when electric vehicles become common enough on the roads that several cars want to charge up at once, as happens today with gasoline fuel pumps, “if you try to do that with batteries and you want rapid charging, the amperages are just so high that we don’t have that amount of amperage in the line that feeds the facility.” So having a battery system such as this to store power and then release it quickly when needed could eliminate the need for installing expensive new power lines to serve these chargers.

    The new technology is already the basis for a new spinoff company called Avanti, which has licensed the patents to the system, co-founded by Sadoway and Luis Ortiz ’96 ScD ’00, who was also a co-founder of Ambri. “The first order of business for the company is to demonstrate that it works at scale,” Sadoway says, and then subject it to a series of stress tests, including running through hundreds of charging cycles.

    Would a battery based on sulfur run the risk of producing the foul odors associated with some forms of sulfur? Not a chance, Sadoway says. “The rotten-egg smell is in the gas, hydrogen sulfide. This is elemental sulfur, and it’s going to be enclosed inside the cells.” If you were to try to open up a lithium-ion cell in your kitchen, he says (and please don’t try this at home!), “the moisture in the air would react and you’d start generating all sorts of foul gases as well. These are legitimate questions, but the battery is sealed, it’s not an open vessel. So I wouldn’t be concerned about that.”

    The research team included members from Peking University, Yunnan University and the Wuhan University of Technology, in China; the University of Louisville, in Kentucky; the University of Waterloo, in Canada; Oak Ridge National Laboratory, in Tennessee; and MIT. The work was supported by the MIT Energy Initiative, the MIT Deshpande Center for Technological Innovation, and ENN Group.

  • Bridging careers in aerospace manufacturing and fusion energy, with a focus on intentional inclusion

    “A big theme of my life has been focusing on intentional inclusion and how I can create environments where people can really bring their whole authentic selves to work,” says Joy Dunn ’08. As the vice president of operations at Commonwealth Fusion Systems, an MIT spinout working to achieve commercial fusion energy, Dunn looks for solutions to the world’s greatest climate challenges — while creating an open and equitable work environment where everyone can succeed.

    This theme has been cultivated throughout her professional and personal life, including as a Young Global Leader at the World Economic Forum and as a board member at Out for Undergrad, an organization that works with LGBTQ+ college students to help them achieve their personal and professional goals. Through her careers both in aerospace and energy, Dunn has striven to instill a sense of equity and inclusion from the inside out.

    Developing a love for space

    Dunn’s childhood was shaped by space. “I was really inspired as a kid to be an astronaut,” she says, “and for me that never stopped.” Dunn’s parents — both of whom had careers in the aerospace industry — encouraged her from an early age to pursue her interests, from building model rockets to visiting the National Air and Space Museum to attending space camp. A large inspiration for this passion arose when she received a signed photo from Sally Ride — the first American woman in space — that read, “To Joy, reach for the stars.”

    As her interests continued to grow in middle school, she and her mom looked to see what it would take to become an astronaut, asking questions such as “what are the common career paths?” and “what schools did astronauts typically go to?” They quickly found that MIT was at the top of that list, and by seventh grade, Dunn had set her sights on the Institute. 

    After years of hard work, Dunn entered MIT in fall 2004 with a major in aeronautical and astronautical engineering (AeroAstro). At MIT, she remained fully committed to her passion while also expanding into other activities such as varsity softball, the MIT Undergraduate Association, and the Alpha Chi Omega sorority.

    One of the highlights of Dunn’s college career was Unified Engineering, a year-long course required for all AeroAstro majors that provides a foundational knowledge of aerospace engineering — culminating in a team competition where students design and build remote-controlled planes to be pitted against each other. “My team actually got first place, which was very exciting,” she recalls. “And I honestly give a lot of that credit to our pilot. He did a very good job of not crashing!” In fact, that pilot was Warren Hoburg ’08, a former assistant professor in AeroAstro and current NASA astronaut training for a mission on the International Space Station.

    Pursuing her passion at SpaceX

    Dunn’s undergraduate experience culminated with an internship at the aerospace manufacturing company SpaceX in summer 2008. “It was by far my favorite internship of the ones that I had in college. I got to work on really hands-on projects and had the same amount of responsibility as a full-time employee,” she says.

    By the end of the internship, she was hired as a propulsion development engineer for the Dragon spacecraft, where she helped to build the thrusters for the first Dragon mission. Eventually, she transferred to the role of manufacturing engineer. “A lot of what I’ve done in my life is building things and looking for process improvements,” she says, so the role was a natural fit. From there, she rose through the ranks, eventually becoming the senior manager of spacecraft manufacturing engineering, where she oversaw all the manufacturing, test, and integration engineers working on Dragon. “It was pretty incredible to go from building thrusters to building the whole vehicle,” she says.

    During her tenure, Dunn also co-founded SpaceX’s Women’s Network and its LGBT affinity group, Out and Allied. “It was about providing spaces for employees to get together and provide a sense of community,” she says. Through these groups, she helped start mentorship and community outreach programs, as well as helped grow the pipeline of women in leadership roles for the company.

    In spite of all her successes at SpaceX, she couldn’t help but think about what came next. “I had been at SpaceX for almost a decade and had these thoughts of, ‘do I want to do another tour of duty or look at doing something else?’ The main criteria I set for myself was to do something that is equally or more world-changing than SpaceX.”

    A pivot to fusion

    It was at this time in 2018 that Dunn received an email from a former mentor asking if she had heard about a fusion energy startup called Commonwealth Fusion Systems (CFS) that worked with the MIT Plasma Science and Fusion Center. “I didn’t know much about fusion at all,” she says. “I had heard about it as a science project that was still many, many years away as a viable energy source.”

    After learning more about the technology and company, “I was just like, ‘holy cow, this has the potential to be even more world-changing than what SpaceX is doing.’” She adds, “I decided that I wanted to spend my time and brainpower focusing on cleaning up the planet instead of getting off it.”

    After connecting with CFS CEO Bob Mumgaard SM ’15, PhD ’15, Dunn joined the company and returned to Cambridge as the head of manufacturing. While moving from the aerospace industry to fusion energy was a large shift, she says her first project — building a fusion-relevant, high-temperature superconducting magnet capable of achieving 20 tesla — tied back into her life of being a builder who likes to get her hands on things.

    Over the course of two years, she oversaw the production and scaling of the magnet manufacturing process. When she first came in, the magnets were being constructed in a time-consuming and manual way. “One of the things I’m most proud of from this project is teaching MIT research scientists how to think like manufacturing engineers,” she says. “It was a great symbiotic relationship. The MIT folks taught us the physics and science behind the magnets, and we came in to figure out how to make them into a more manufacturable product.”

    In September 2021, CFS tested this high-temperature superconducting magnet and achieved its goal of 20 tesla. This was a pivotal moment for the company that brought it one step closer to achieving its goal of producing net-positive fusion power. Now, CFS has begun work on a new campus in Devens, Massachusetts, to house its manufacturing operations and SPARC fusion device. Dunn plays a central role in this expansion as well. In March 2021, she was promoted to head of operations, which expanded her responsibilities beyond manufacturing to include facilities, construction, safety, and quality. “It’s been incredible to watch the campus grow from a pile of dirt … into full buildings.”

    In addition to the groundbreaking work, Dunn highlights the culture of inclusiveness as something that makes CFS stand apart to her. “One of the main reasons that drew me to CFS was hearing from the company founders about their thoughts on diversity, equity, and inclusion, and how they wanted to make that a key focus for their company. That’s been so important in my career, and I’m really excited to see how much that’s valued at CFS.” The company has carried this out through programs such as Fusion Inclusion, an initiative that aims to build a strong and inclusive community from the inside out.

    Dunn stresses “the impact that fusion can have on our world and for addressing issues of environmental injustice through an equitable distribution of power and electricity.” She adds, “That’s a huge lever that we have. I’m excited to watch CFS grow and for us to make a really positive impact on the world in that way.”

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative. More

  • in

    Stranded assets could exact steep costs on fossil energy producers and investors

    A 2021 study in the journal Nature found that in order to avert the worst impacts of climate change, most of the world’s known fossil fuel reserves must remain untapped. According to the study, 90 percent of coal and nearly 60 percent of oil and natural gas must be kept in the ground in order to maintain a 50 percent chance that global warming will not exceed 1.5 degrees Celsius above preindustrial levels.

    As the world transitions away from greenhouse-gas-emitting activities to keep global warming well below 2 C (and ideally 1.5 C) in alignment with the Paris Agreement on climate change, fossil fuel companies and their investors face growing financial risks (known as transition risks), including the prospect of ending up with massive stranded assets. This ongoing transition is likely to significantly scale back fossil fuel extraction and coal-fired power plant operations, exacting steep costs — most notably asset value losses — on fossil-energy producers and shareholders.

    Now, a new study in the journal Climate Change Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change estimates the current global asset value of untapped fossil fuels through 2050 under four increasingly ambitious climate-policy scenarios. The least-ambitious scenario (“Paris Forever”) assumes that initial Paris Agreement greenhouse gas emissions-reduction pledges are upheld in perpetuity; the most stringent scenario (“Net Zero 2050”) adds coordinated international policy instruments aimed at achieving global net-zero emissions by 2050.

    Powered by the MIT Joint Program’s model of the world economy with detailed representation of the energy sector and energy industry assets over time, the study finds that the global net present value of untapped fossil fuel output through 2050 relative to a reference “No Policy” scenario ranges from $21.5 trillion (Paris Forever) to $30.6 trillion (Net Zero 2050). The estimated global net present value of stranded assets in coal power generation through 2050 ranges from $1.3 to $2.3 trillion.
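
    The headline figures are net present values of future foregone output. As a minimal sketch of how such a figure is computed (with an invented annual cash-flow stream and discount rate, not numbers from the study), each future year's value is discounted back to the present:

    ```python
    def npv(cash_flows, rate):
        """Discount a stream of annual values back to the present."""
        return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

    # Hypothetical: $1 trillion/year of foregone fossil fuel output for 28 years
    # (through 2050), discounted at 4 percent. All numbers are illustrative only.
    foregone = [1.0] * 28  # trillions of dollars per year
    print(round(npv(foregone, 0.04), 1))  # present value in trillions: 16.7
    ```

    The choice of discount rate matters enormously at these horizons, which is one reason studies of this kind report scenario ranges rather than point estimates.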

    “The more stringent the climate policy, the greater the volume of untapped fossil fuels, and hence the higher the potential asset value loss for fossil-fuel owners and investors,” says Henry Chen, a research scientist at the MIT Joint Program and the study’s lead author.

    The global economy-wide analysis presented in the study provides a more fine-grained assessment of stranded assets than those performed in previous studies. Firms and financial institutions may combine the MIT analysis with details on their own investment portfolios to assess their exposure to climate-related transition risk. More

  • in

    Solving a longstanding conundrum in heat transfer

    It is a problem that has bedeviled scientists for a century. But, buoyed by a $625,000 Distinguished Early Career Award from the U.S. Department of Energy (DoE), Matteo Bucci, an associate professor in the Department of Nuclear Science and Engineering (NSE), hopes to be close to an answer.

    Tackling the boiling crisis

    Whether you’re heating a pot of water for pasta or designing a nuclear reactor, one phenomenon — boiling — is vital to carrying out the process efficiently.

    “Boiling is a very effective heat transfer mechanism; it’s the way to remove large amounts of heat from the surface, which is why it is used in many high-power density applications,” Bucci says. An example use case: nuclear reactors.

    To the layperson, boiling appears simple — bubbles form and burst, removing heat. But what if so many bubbles form and coalesce that they create a blanket of vapor that prevents further heat transfer? This well-known problem is called the boiling crisis. It leads to runaway heat and, in nuclear reactors, the failure of fuel rods. So “understanding and determining under which conditions the boiling crisis is likely to happen is critical to designing more efficient and cost-competitive nuclear reactors,” Bucci says.
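
    The heat flux at which that vapor blanket forms is known as the critical heat flux. As context for the quantity involved (not a description of Bucci's own models), the classic Zuber correlation gives a textbook estimate for pool boiling on a flat surface:

    ```python
    # Zuber's (1959) hydrodynamic estimate of the critical heat flux, the
    # point at which the boiling crisis sets in. A standard baseline shown
    # here only to illustrate the quantity at stake.
    def zuber_chf(h_fg, rho_l, rho_v, sigma, g=9.81):
        """Critical heat flux in W/m^2 for saturated pool boiling."""
        return 0.131 * h_fg * rho_v * (sigma * g * (rho_l - rho_v) / rho_v**2) ** 0.25

    # Saturated water at atmospheric pressure (standard property values).
    q_chf = zuber_chf(h_fg=2.257e6, rho_l=958.0, rho_v=0.60, sigma=0.0589)
    print(f"{q_chf / 1e6:.2f} MW/m^2")  # roughly 1.1 MW/m^2
    ```

    A megawatt per square meter gives a sense of why nuclear fuel rods, which operate at high power density, must stay safely below this limit.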

    Early work on the boiling crisis dates back nearly a century, to 1926. And while much work has been done, “it is clear that we haven’t found an answer,” Bucci says. The boiling crisis remains a challenge because while models abound, measuring the related phenomena to prove or disprove these models has been difficult. “[Boiling] is a process that happens on a very, very small length scale and over very, very short times,” Bucci says. “We are not able to observe it at the level of detail necessary to understand what really happens and validate hypotheses.”

    But over the past few years, Bucci and his team have been developing diagnostics that can measure the phenomena related to boiling and thereby provide much-needed answers to a classic problem. The diagnostics are anchored in infrared thermometry and a technique using visible light. “By combining these two techniques, I think we’re going to be ready to answer standing questions related to heat transfer; we can make our way out of the rabbit hole,” Bucci says. The grant from the U.S. DoE for Nuclear Energy Projects will aid in this and Bucci’s other research efforts.

    An idyllic Italian childhood

    Tackling difficult problems is not new territory for Bucci, who grew up in the small town of Città di Castello near Florence, Italy. Bucci’s mother was an elementary school teacher. His father used to have a machine shop, which helped develop Bucci’s scientific bent. “I liked LEGOs a lot when I was a kid. It was a passion,” he adds.

    Despite Italy going through a severe pullback from nuclear engineering during his formative years, the subject fascinated Bucci. Job opportunities in the field were uncertain but Bucci decided to dig in. “If I have to do something for the rest of my life, it might as well be something I like,” he jokes. Bucci attended the University of Pisa for undergraduate and graduate studies in nuclear engineering.

    His interest in heat transfer mechanisms took root during his doctoral studies, which he pursued in Paris at the French Alternative Energies and Atomic Energy Commission (CEA). It was there that a colleague suggested he work on the boiling crisis. Bucci set his sights on NSE at MIT and reached out to Professor Jacopo Buongiorno to inquire about research at the institution. Bucci had to fundraise at CEA to conduct research at MIT. He arrived just a couple of days before the Boston Marathon bombing in 2013, with a round-trip ticket. But he has stayed ever since, becoming a research scientist and then an associate professor at NSE.

    Bucci admits he struggled to adapt to the environment when he first arrived at MIT, but work and friendships with colleagues — he counts NSE’s Guanyu Su and Reza Azizian as among his best friends — helped conquer early worries.

    The integration of artificial intelligence

    In addition to diagnostics for boiling, Bucci and his team are working on ways of integrating artificial intelligence and experimental research. He is convinced that “the integration of advanced diagnostics, machine learning, and advanced modeling tools will blossom in a decade.”

    Bucci’s team is developing an autonomous laboratory for boiling heat transfer experiments. Running on machine learning, the setup decides which experiments to run based on a learning objective the team assigns. “We formulate a question and the machine will answer by optimizing the kinds of experiments that are necessary to answer those questions,” Bucci says. “I honestly think this is the next frontier for boiling.”
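
    As a toy illustration of that kind of experiment-selection loop (not the team's actual machine-learning method), one simple exploratory rule is to always run the candidate experiment whose settings lie farthest from anything already measured:

    ```python
    # Greedy space-filling experiment selection: among candidate operating
    # conditions, pick the one farthest from every completed experiment.
    # A deliberately simple stand-in for a learning-driven chooser.
    def next_experiment(candidates, done):
        if not done:
            return candidates[0]
        # distance from a candidate to its nearest completed experiment
        def gap(c):
            return min(abs(c - d) for d in done)
        return max(candidates, key=gap)

    conditions = [round(0.1 * i, 1) for i in range(11)]  # e.g. normalized heat flux settings
    done = []
    for _ in range(4):
        done.append(next_experiment(conditions, done))
    print(done)  # [0.0, 1.0, 0.5, 0.2]
    ```

    A real autonomous lab would replace the distance rule with a model-based acquisition function that targets the assigned learning objective, but the select-run-update loop has the same shape.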

    “It’s when you climb a tree and you reach the top, that you realize that the horizon is much more vast and also more beautiful,” Bucci says of his zeal to pursue more research in the field.

    Even as he seeks new heights, Bucci has not forgotten his origins. Commemorating Italy’s hosting of the World Cup in 1990, a series of posters showcasing a soccer field fitted into the Roman Colosseum occupies pride of place in his home and office. Created by Alberto Burri, the posters are of sentimental value: The (now deceased) Italian artist also hailed from Bucci’s hometown — Città di Castello. More

  • in

    New J-WAFS-led project combats food insecurity

    Today the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT announced a new research project, supported by Community Jameel, to tackle one of the most urgent crises facing the planet: food insecurity. Approximately 276 million people worldwide are severely food insecure, and more than half a million face famine conditions.

    To better understand and analyze food security, this three-year research project will develop a comprehensive index assessing countries’ food security vulnerability, called the Jameel Index for Food Trade and Vulnerability. Global changes spurred by social and economic transitions, energy and environmental policy, regional geopolitics, conflict, and of course climate change can impact food demand and supply. The Jameel Index will measure countries’ dependence on global food trade and imports, and how these regional-scale threats might affect the ability to trade food goods across diverse geographic regions. A main outcome of the research will be a model to project global food demand, supply balance, and bilateral trade under different likely future scenarios, with a focus on climate change. The work will help guide policymakers over the next 25 years, as the global population is expected to grow and the climate crisis is predicted to worsen.
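
    As a purely hypothetical sketch of how an import-vulnerability score can be assembled (all names and numbers here are invented, and the actual Jameel Index methodology is far richer), one could weight each supplier's share of a country's food imports by a risk score for that supplier:

    ```python
    # Hypothetical illustration only: a share-weighted average of supplier
    # risk. Suppliers "A", "B", "C" and all values are invented.
    def import_vulnerability(import_shares, supplier_risk):
        """Share-weighted average risk of a country's food suppliers (0-1)."""
        return sum(share * supplier_risk[s] for s, share in import_shares.items())

    shares = {"A": 0.5, "B": 0.3, "C": 0.2}  # fraction of imports by source
    risk   = {"A": 0.8, "B": 0.2, "C": 0.5}  # e.g. climate/conflict exposure
    print(round(import_vulnerability(shares, risk), 2))  # 0.56
    ```

    A score built this way rises either when a country concentrates its imports on risky suppliers or when those suppliers' risk worsens, which is the basic intuition behind measuring trade-based food vulnerability.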

    The work will be the foundational project for the J-WAFS-led Food and Climate Systems Transformation Alliance, or FACT Alliance. Formally launched at the COP26 climate conference last November, the FACT Alliance is a global network of 20 leading research institutions and stakeholder organizations that are driving research and innovation and informing better decision-making for healthy, resilient, equitable, and sustainable food systems in a rapidly changing climate. The initiative is co-directed by Greg Sixt, research manager for climate and food systems at J-WAFS, and Professor Kenneth Strzepek, climate, water, and food specialist at J-WAFS.

    The dire state of our food systems

    The need for this project is evidenced by the hundreds of millions of people around the globe currently experiencing food shortages. While several factors contribute to food insecurity, climate change is one of the most notable. Devastating extreme weather events are increasingly crippling crop and livestock production around the globe. From Southwest Asia to the Arabian Peninsula to the Horn of Africa, communities are migrating in search of food. In the United States, extreme heat and lack of rainfall in the Southwest have drastically lowered Lake Mead’s water levels, restricting water access and drying out farmlands. 

    Social, political, and economic issues also disrupt food systems. The effects of the Covid-19 pandemic, supply chain disruptions, and inflation continue to exacerbate food insecurity. Russia’s invasion of Ukraine is dramatically worsening the situation, disrupting agricultural exports from both Russia and Ukraine — two of the world’s largest producers of wheat, sunflower seed oil, and corn. Other countries like Lebanon, Sri Lanka, and Cuba are confronting food insecurity due to domestic financial crises.

    Few countries are immune to threats to food security from sudden disruptions in food production or trade. When an enormous container ship became lodged in the Suez Canal in March 2021, the vital international trade route was blocked for three months. The resulting delays in international shipping affected food supplies around the world. These situations demonstrate the importance of food trade in achieving food security: a disaster in one part of the world can drastically affect the availability of food in another. This puts into perspective just how interconnected the earth’s food systems are and how vulnerable they remain to external shocks. 

    An index to prepare for the future of food

    Despite the need for more secure food systems, significant knowledge gaps exist when it comes to understanding how different climate scenarios may affect both agricultural productivity and global food supply chains and security. The Global Trade Analysis Project database from Purdue University, and the current IMPACT modeling system from the International Food Policy Research Institute (IFPRI), enable assessments of existing conditions but cannot project or model changes in the future.

    In 2021, Strzepek and Sixt developed an initial Food Import Vulnerability Index (FIVI) as part of a regional assessment of the threat of climate change to food security in the Gulf Cooperation Council states and West Asia. FIVI is also limited in that it can only assess current trade conditions and climate change threats to food production. Additionally, FIVI is a national aggregate index and does not address issues of hunger, poverty, or equity that stem from regional variations within a country.

    “Current models are really good at showing global food trade flows, but we don’t have systems for looking at food trade between individual countries and how different food systems stressors such as climate change and conflict disrupt that trade,” says Greg Sixt of J-WAFS and the FACT Alliance. “This timely index will be a valuable tool for policymakers to understand the vulnerabilities to their food security from different shocks in the countries they import their food from. The project will also illustrate the stakeholder-guided, transdisciplinary approach that is central to the FACT Alliance,” Sixt adds.

    Phase 1 of the project will support a collaboration between four FACT Alliance members: MIT J-WAFS, Ethiopian Institute of Agricultural Research, IFPRI (which is also part of the CGIAR network), and the Martin School at the University of Oxford. An external partner, United Arab Emirates University, will also assist with the project work. This first phase will build on Strzepek and Sixt’s previous work on FIVI by developing a comprehensive Global Food System Modeling Framework that takes into consideration climate and global changes projected out to 2050, and assesses their impacts on domestic production, world market prices, and national balance of payments and bilateral trade. The framework will also utilize a mixed-modeling approach that includes the assessment of bilateral trade and macroeconomic data associated with varying agricultural productivity under the different climate and economic policy scenarios. In this way, consistent and harmonized projections of global food demand and supply balance, and bilateral trade under climate and global change can be achieved. 

    “Just like in the global response to Covid-19, using data and modeling are critical to understanding and tackling vulnerabilities in the global supply of food,” says George Richards, director of Community Jameel. “The Jameel Index for Food Trade and Vulnerability will help inform decision-making to manage shocks and long-term disruptions to food systems, with the aim of ensuring food security for all.”

    On a national level, the researchers will enrich the Jameel Index through country-level food security analyses of regions within countries and across various socioeconomic groups, allowing for a better understanding of specific impacts on key populations. The research will present vulnerability scores for a variety of food security metrics for 126 countries. Case studies of food security and food import vulnerability in Ethiopia and Sudan will help to refine the applicability of the Jameel Index with on-the-ground information. The case studies will use an IFPRI-developed tool called the Rural Investment and Policy Analysis model, which allows for analysis of urban and rural populations and different income groups. Local capacity building and stakeholder engagement will be critical to enable the use of the tools developed by this research for national-level planning in priority countries, and ultimately to inform policy.

    Phase 2 of the project will build on phase 1 and the lessons learned from the Ethiopian and Sudanese case studies. It will entail a number of deeper, country-level analyses to assess the role of food imports on future hunger, poverty, and equity across various regional and socioeconomic groups within the modeled countries. This work will link the geospatial national models with the global analysis. A scholarly paper presenting findings from this work is expected to be submitted, and a website will be launched so that interested stakeholders and organizations can learn more. More

  • in

    Making hydropower plants more sustainable

    Growing up on a farm in Texas, siblings Gia Schneider ’99 and Abe Schneider ’02, SM ’03 always had something to do. But every Saturday at 2 p.m., no matter what, the family would go down to a local creek to fish, build rock dams and rope swings, and enjoy nature.

    Eventually the family began going to a remote river in Colorado each summer. The river forked in two; one side was managed by ranchers who destroyed natural features like beaver dams, while the other side remained untouched. The family noticed the fishing was better on the preserved side, which led Abe to try measuring the health of the two river ecosystems. In high school, he co-authored a study showing there were more beneficial insects in the bed of the river with the beaver dams.

    The experience taught both siblings a lesson that has stuck. Today they are the co-founders of Natel Energy, a company attempting to mimic natural river ecosystems with hydropower systems that are more sustainable than conventional hydro plants.

    “The big takeaway for us, and what we’ve been doing all this time, is thinking of ways that infrastructure can help increase the health of our environment — and beaver dams are a good example of infrastructure that wouldn’t otherwise be there that supports other populations of animals,” Abe says. “It’s a motivator for the idea that hydropower can help improve the environment rather than destroy the environment.”

    Through new, fish-safe turbines and other features designed to mimic natural river conditions, the founders say their plants can bridge the gap between power-plant efficiency and environmental sustainability. By retrofitting existing hydropower plants and developing new projects, the founders believe they can supercharge a hydropower industry that is by far the largest source of renewable electricity in the world but has not grown in energy generation as much as wind and solar in recent years.

    “Hydropower plants are built today with only power output in mind, as opposed to the idea that if we want to unlock growth, we have to solve for both efficiency and river sustainability,” Gia says.

    A life’s mission

    The origins of Natel came not from a single event but from a lifetime of events. Abe and Gia’s father was an inventor and renewable energy enthusiast who designed and built the log cabin they grew up in. With no television, the kids’ preferred entertainment was reading books or being outside. The water in their house was pumped by power generated using a mechanical windmill on the north side of the house.

    “We grew up hanging clothes on a line, and it wasn’t because we were too poor to own a dryer, but because everything about our existence and our use of energy was driven by the idea that we needed to make conscious decisions about sustainability,” Abe says.

    One of the things that fascinated both siblings was hydropower. In high school, Abe recalls bugging his friend who was good at math to help him with designs for new hydro turbines.

    Both siblings admit coming to MIT was a major culture shock, but they loved the atmosphere of problem solving and entrepreneurship that permeated the campus. Gia came to MIT in 1995 and majored in chemical engineering while Abe followed three years later and majored in mechanical engineering for both his bachelor’s and master’s degrees.

    All the while, they never lost sight of hydropower. In the 1998 MIT $100K Entrepreneurship Competition (which was the $50K at the time), they pitched an idea for hydropower plants based on a linear turbine design. They were named finalists in the competition but still wanted more industry experience before starting a company. After graduation, Abe worked as a mechanical engineer and did some consulting work with the operators of small hydropower plants, while Gia worked at the energy desks of a few large finance companies.

    In 2009, the siblings, along with their late father, Daniel, received a small business grant of $200,000 and formally launched Natel Energy.

    Between 2009 and 2019, the founders worked on a linear turbine design that Abe describes as turbines on a conveyor belt. They patented and deployed the system on a few sites, but the problem of ensuring safe fish passage remained.

    Then the founders were doing some modeling that suggested they could achieve high power plant efficiency using an extremely rounded edge on a turbine blade — as opposed to the sharp blades typically used for hydropower turbines. The insight made them realize if they didn’t need sharp blades, perhaps they didn’t need a complex new turbine.

    “It’s so counterintuitive, but we said maybe we can achieve the same results with a propeller turbine, which is the most common kind,” Abe says. “It started out as a joke — or a challenge — and I did some modeling and rapidly realized, ‘Holy cow, this actually could work!’ Instead of having a powertrain with a decade’s worth of complexity, you have a powertrain that has one moving part, and almost no change in loading, in a form factor that the whole industry is used to.”

    The turbine Natel developed features thick blades that allow more than 99 percent of fish to pass through safely, according to third-party tests. Natel’s turbines also allow for the passage of important river sediment and can be coupled with structures that mimic natural features of rivers like log jams, beaver dams, and rock arches.

    “We want the most efficient machine possible, but we also want the most fish-safe machine possible, and that intersection has led to our unique intellectual property,” Gia says.

    Supercharging hydropower

    Natel has already installed two versions of its latest turbine, what it calls the Restoration Hydro Turbine, at existing plants in Maine and Oregon. The company hopes that by the end of this year, two more will be deployed, including one in Europe, a key market for Natel because of its stronger environmental regulations for hydropower plants.

    Since their installation, the founders say, the first two turbines have converted more than 90 percent of the energy available in the water into energy at the turbine, an efficiency comparable to that of conventional turbines.
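
    For context on what that efficiency figure means, the standard hydropower relation ties electrical output to flow rate, head, and efficiency. The site values below are illustrative, not Natel data:

    ```python
    # Standard hydropower relation P = eta * rho * g * Q * H.
    def hydro_power(eta, Q, H, rho=1000.0, g=9.81):
        """Electrical power in watts from flow Q (m^3/s) and head H (m)."""
        return eta * rho * g * Q * H

    # A small low-head site: 10 m^3/s through 5 m of head at 90% efficiency.
    p = hydro_power(eta=0.90, Q=10.0, H=5.0)
    print(f"{p / 1e3:.0f} kW")  # 441 kW
    ```

    Because power scales linearly with efficiency, every percentage point a fish-safe turbine gives up relative to a conventional one comes straight off a plant's output, which is why matching conventional efficiency matters commercially.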

    Looking forward, Natel believes its systems have a significant role to play in boosting the hydropower industry, which is facing increasing scrutiny and environmental regulation that could otherwise close down many existing plants. For example, the founders say that hydropower plants the company could potentially retrofit across the U.S. and Europe have a total capacity of about 30 gigawatts, enough to power millions of homes.

    Natel also has ambitions to build entirely new plants on the many nonpowered dams around the U.S. and Europe. (Currently only 3 percent of the United States’ 80,000 dams are powered.) The founders estimate their systems could add about 48 gigawatts of new generating capacity across the U.S. and Europe — the equivalent of more than 100 million solar panels.

    “We’re looking at numbers that are pretty meaningful,” Gia says. “We could substantially add to the existing installed base while also modernizing the existing base to continue to be productive while meeting modern environmental requirements.”

    Overall, the founders see hydropower as a key technology in our transition to sustainable energy, a sentiment echoed by recent MIT research.

    “Hydro today supplies the bulk of electricity reliability services in a lot of these areas — things like voltage regulation, frequency regulation, storage,” Gia says. “That’s key to understand: As we transition to a zero-carbon grid, we need a reliable grid, and hydro has a very important role in supporting that. Particularly as we think about making this transition as quickly as we can, we’re going to need every bit of zero-emission resources we can get.” More