More stories

  • Taking a magnifying glass to data center operations

    When the MIT Lincoln Laboratory Supercomputing Center (LLSC) unveiled its TX-GAIA supercomputer in 2019, it provided the MIT community a powerful new resource for applying artificial intelligence to their research. Anyone at MIT can submit a job to the system, which churns through trillions of operations per second to train models for diverse applications, such as spotting tumors in medical images, discovering new drugs, or modeling climate effects. But with this great power comes the great responsibility of managing and operating it in a sustainable manner — and the team is looking for ways to improve.

    “We have these powerful computational tools that let researchers build intricate models to solve problems, but they can essentially be used as black boxes. What gets lost in there is whether we are actually using the hardware as effectively as we can,” says Siddharth Samsi, a research scientist in the LLSC. 

    To gain insight into this challenge, the LLSC has been collecting detailed data on TX-GAIA usage over the past year. More than a million user jobs later, the team has released the dataset to the computing community as open source.

    Their goal is to empower computer scientists and data center operators to better understand avenues for data center optimization — an important task as processing needs continue to grow. They also see potential for leveraging AI in the data center itself, by using the data to develop models for predicting failure points, optimizing job scheduling, and improving energy efficiency. While cloud providers are actively working on optimizing their data centers, they do not often make their data or models available for the broader high-performance computing (HPC) community to leverage. The release of this dataset and associated code seeks to fill that gap.

    “Data centers are changing. We have an explosion of hardware platforms, the types of workloads are evolving, and the types of people who are using data centers is changing,” says Vijay Gadepally, a senior researcher at the LLSC. “Until now, there hasn’t been a great way to analyze the impact to data centers. We see this research and dataset as a big step toward coming up with a principled approach to understanding how these variables interact with each other and then applying AI for insights and improvements.”

    Papers describing the dataset and potential applications have been accepted to a number of venues, including the IEEE International Symposium on High-Performance Computer Architecture, the IEEE International Parallel and Distributed Processing Symposium, the Annual Conference of the North American Chapter of the Association for Computational Linguistics, the IEEE High-Performance and Embedded Computing Conference, and the International Conference for High Performance Computing, Networking, Storage and Analysis.

    Workload classification

    TX-GAIA, which ranks among the world’s TOP500 supercomputers, combines traditional computing hardware (central processing units, or CPUs) with nearly 900 graphics processing unit (GPU) accelerators. These NVIDIA GPUs are specialized for deep learning, the class of AI that has given rise to speech recognition and computer vision.

    The dataset covers CPU, GPU, and memory usage by job; scheduling logs; and physical monitoring data. Compared to similar datasets, such as those from Google and Microsoft, the LLSC dataset offers “labeled data, a variety of known AI workloads, and more detailed time series data compared with prior datasets. To our knowledge, it’s one of the most comprehensive and fine-grained datasets available,” Gadepally says. 

    Notably, the team collected time-series data at an unprecedented level of detail: 100-millisecond intervals on every GPU and 10-second intervals on every CPU, as the machines processed more than 3,000 known deep-learning jobs. One of the first goals is to use this labeled dataset to characterize the workloads that different types of deep-learning jobs place on the system. This process would extract features that reveal differences in how the hardware processes natural language models versus image classification or materials design models, for example.   

    The team has now launched the MIT Datacenter Challenge to mobilize this research. The challenge invites researchers to use AI techniques to identify with 95 percent accuracy the type of job that was run, using their labeled time-series data as ground truth.
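
    For readers who want a feel for the classification task, the sketch below shows one naive starting point, assuming the labeled traces have been exported to a flat file: collapse each job's GPU time series into summary statistics and train an off-the-shelf classifier. The file name and column names are placeholders for illustration, not the released dataset's actual schema.

    ```python
    # A minimal sketch of a Datacenter Challenge-style workload classifier.
    # Assumes a hypothetical CSV export with one row per (job_id, timestamp);
    # the column names are illustrative, not the released dataset's schema.
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    df = pd.read_csv("gpu_timeseries.csv")  # job_id, gpu_util, mem_util, label

    # Collapse each job's time series into simple summary features.
    features = df.groupby("job_id").agg(
        util_mean=("gpu_util", "mean"),
        util_std=("gpu_util", "std"),
        util_max=("gpu_util", "max"),
        mem_mean=("mem_util", "mean"),
    ).fillna(0.0)
    labels = df.groupby("job_id")["label"].first()

    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.2, random_state=0
    )
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
    ```

    Hitting the challenge's 95 percent bar would almost certainly take richer features than these summary statistics, which is exactly the research question the challenge poses.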

    Such insights could enable data centers to better match a user’s job request with the hardware best suited for it, potentially conserving energy and improving system performance. Classifying workloads could also allow operators to quickly notice discrepancies resulting from hardware failures, inefficient data access patterns, or unauthorized usage.

    Too many choices

    Today, the LLSC offers tools that let users submit their job and select the processors they want to use, “but it’s a lot of guesswork on the part of users,” Samsi says. “Somebody might want to use the latest GPU, but maybe their computation doesn’t actually need it and they could get just as impressive results on CPUs, or lower-powered machines.”

    Professor Devesh Tiwari at Northeastern University is working with the LLSC team to develop techniques that can help users match their workloads to appropriate hardware. Tiwari explains that the emergence of different types of AI accelerators, GPUs, and CPUs has left users suffering from too many choices. Without the right tools to take advantage of this heterogeneity, they are missing out on the benefits: better performance, lower costs, and greater productivity.

    “We are fixing this very capability gap — making users more productive and helping users do science better and faster without worrying about managing heterogeneous hardware,” says Tiwari. “My PhD student, Baolin Li, is building new capabilities and tools to help HPC users leverage heterogeneity near-optimally without user intervention, using techniques grounded in Bayesian optimization and other learning-based optimization methods. But, this is just the beginning. We are looking into ways to introduce heterogeneity in our data centers in a principled approach to help our users achieve the maximum advantage of heterogeneity autonomously and cost-effectively.”

    Workload classification is the first of many problems to be posed through the Datacenter Challenge. Others include developing AI techniques to predict job failures, conserve energy, or create job scheduling approaches that improve data center cooling efficiencies.

    Energy conservation 

    To mobilize research into greener computing, the team is also planning to release an environmental dataset of TX-GAIA operations, containing rack temperature, power consumption, and other relevant data.

    According to the researchers, huge opportunities exist to improve the power efficiency of HPC systems being used for AI processing. As one example, recent work in the LLSC determined that simple hardware tuning, such as limiting the amount of power an individual GPU can draw, could reduce the energy cost of training an AI model by 20 percent, with only modest increases in computing time. “This reduction translates to approximately an entire week’s worth of household energy for a mere three-hour time increase,” Gadepally says.
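
    The power-capping experiment is straightforward to reproduce in spirit. The sketch below conveys the general idea, assuming an NVIDIA GPU and administrator privileges; the 200-watt cap, one-second sampling loop, and 60-second window are illustrative choices, not the LLSC's actual tooling.

    ```python
    # Rough sketch: cap a GPU's power draw with nvidia-smi, then integrate
    # its reported power over time to estimate energy used by a workload.
    # Requires admin privileges; cap value and sampling window are arbitrary.
    import subprocess
    import time

    def set_power_limit(gpu_id: int, watts: int) -> None:
        # nvidia-smi -pl sets the board power limit in watts.
        subprocess.run(
            ["nvidia-smi", "-i", str(gpu_id), "-pl", str(watts)], check=True
        )

    def read_power_draw(gpu_id: int) -> float:
        out = subprocess.check_output(
            ["nvidia-smi", "-i", str(gpu_id),
             "--query-gpu=power.draw", "--format=csv,noheader,nounits"]
        )
        return float(out.decode().strip())  # instantaneous draw in watts

    set_power_limit(0, 200)           # cap GPU 0 at 200 W
    energy_joules, t0 = 0.0, time.time()
    while time.time() - t0 < 60:      # in practice, poll the training job
        energy_joules += read_power_draw(0) * 1.0  # watts x 1-second sample
        time.sleep(1.0)
    print(f"~{energy_joules / 3.6e6:.4f} kWh over {time.time() - t0:.0f} s")
    ```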

    They have also been developing techniques to predict model accuracy, so that users can quickly terminate experiments that are unlikely to yield meaningful results, saving energy. The Datacenter Challenge will share relevant data to enable researchers to explore other opportunities to conserve energy.
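
    One common way to make such predictions, and the approach assumed in the sketch below, is to fit a saturating curve to the first few epochs of validation accuracy and extrapolate to the end of training; the functional form, threshold, and numbers here are invented for illustration, and the LLSC's actual technique is not described at this level of detail.

    ```python
    # Learning-curve extrapolation for early termination (illustrative).
    # Fit a saturating curve to early validation accuracy; kill the run
    # if the extrapolated final accuracy falls below a target threshold.
    import numpy as np
    from scipy.optimize import curve_fit

    def saturating(epoch, a, b, c):
        # a: asymptotic accuracy; b: approach rate; c: initial gap.
        return a - c * np.exp(-b * epoch)

    def should_terminate(epochs, val_acc, final_epoch=100, threshold=0.7):
        popt, _ = curve_fit(
            saturating, epochs, val_acc, p0=[0.9, 0.1, 0.5], maxfev=10000
        )
        return saturating(final_epoch, *popt) < threshold

    epochs = np.arange(1, 11)
    val_acc = np.array([0.31, 0.38, 0.42, 0.45, 0.47,
                        0.48, 0.50, 0.50, 0.51, 0.51])
    # The curve is flattening near 0.52, far below the 0.7 target: stop early.
    print(should_terminate(epochs, val_acc))  # True
    ```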

    The team expects that lessons learned from this research can be applied to the thousands of data centers operated by the U.S. Department of Defense. The U.S. Air Force is a sponsor of this work, which is being conducted under the USAF-MIT AI Accelerator.

    Other collaborators include researchers at the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Professor Charles Leiserson’s Supertech Research Group is investigating performance-enhancing techniques for parallel computing, and research scientist Neil Thompson is designing studies on ways to nudge data center users toward climate-friendly behavior.

    Samsi presented this work at the inaugural AI for Datacenter Optimization (ADOPT’22) workshop last spring as part of the IEEE International Parallel and Distributed Processing Symposium. The workshop officially introduced their Datacenter Challenge to the HPC community.

    “We hope this research will allow us and others who run supercomputing centers to be more responsive to user needs while also reducing the energy consumption at the center level,” Samsi says.

  • A new concept for low-cost batteries

    As the world builds out ever larger installations of wind and solar power systems, the need is growing fast for economical, large-scale backup systems to provide power when the sun is down and the air is calm. Today’s lithium-ion batteries are still too expensive for most such applications, and other options such as pumped hydro require specific topography that’s not always available.

    Now, researchers at MIT and elsewhere have developed a new kind of battery, made entirely from abundant and inexpensive materials, that could help to fill that gap.

    The new battery architecture, which uses aluminum and sulfur as its two electrode materials, with a molten salt electrolyte in between, is described today in the journal Nature, in a paper by MIT Professor Donald Sadoway, along with 15 others at MIT and in China, Canada, Kentucky, and Tennessee.

    “I wanted to invent something that was better, much better, than lithium-ion batteries for small-scale stationary storage, and ultimately for automotive [uses],” explains Sadoway, who is the John F. Elliott Professor Emeritus of Materials Chemistry.

    In addition to being expensive, lithium-ion batteries contain a flammable electrolyte, making them less than ideal for transportation. So, Sadoway started studying the periodic table, looking for cheap, Earth-abundant metals that might be able to substitute for lithium. The commercially dominant metal, iron, doesn’t have the right electrochemical properties for an efficient battery, he says. But the second-most-abundant metal in the marketplace — and actually the most abundant metal on Earth — is aluminum. “So, I said, well, let’s just make that a bookend. It’s gonna be aluminum,” he says.

    Then came deciding what to pair the aluminum with for the other electrode, and what kind of electrolyte to put in between to carry ions back and forth during charging and discharging. The cheapest of all the non-metals is sulfur, so that became the second electrode material. As for the electrolyte, “we were not going to use the volatile, flammable organic liquids” that have sometimes led to dangerous fires in cars and other applications of lithium-ion batteries, Sadoway says. They tried some polymers but ended up looking at a variety of molten salts that have relatively low melting points — close to the boiling point of water, as opposed to nearly 1,000 degrees Fahrenheit for many salts. “Once you get down to near body temperature, it becomes practical” to make batteries that don’t require special insulation and anticorrosion measures, he says.

    The three ingredients they ended up with are cheap and readily available — aluminum, no different from the foil at the supermarket; sulfur, which is often a waste product from processes such as petroleum refining; and widely available salts. “The ingredients are cheap, and the thing is safe — it cannot burn,” Sadoway says.

    In their experiments, the team showed that the battery cells could endure hundreds of cycles at exceptionally high charging rates, with a projected cost per cell of about one-sixth that of comparable lithium-ion cells. They showed that the charging rate was highly dependent on the working temperature, with 110 degrees Celsius (230 degrees Fahrenheit) showing 25 times faster rates than 25 C (77 F).
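
    As a back-of-envelope illustration only: if the charging kinetics were assumed to follow a simple Arrhenius law, which is this sketch's assumption rather than a claim from the paper, the reported 25-fold speedup between those two temperatures would pin down an apparent activation energy.

    ```python
    # Hypothetical Arrhenius reading of the reported 25x rate increase
    # between 25 C and 110 C; the Arrhenius form is an assumption here.
    import math

    R = 8.314                  # gas constant, J/(mol*K)
    T1, T2 = 298.15, 383.15    # 25 C and 110 C in kelvin
    ratio = 25.0               # reported charging-rate ratio

    # k2/k1 = exp(-(Ea/R) * (1/T2 - 1/T1))  =>  solve for Ea
    Ea = R * math.log(ratio) / (1 / T1 - 1 / T2)
    print(f"implied apparent activation energy: {Ea / 1000:.0f} kJ/mol")  # ~36
    ```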

    Surprisingly, the molten salt the team chose as an electrolyte simply because of its low melting point turned out to have a fortuitous advantage. One of the biggest problems in battery reliability is the formation of dendrites, which are narrow spikes of metal that build up on one electrode and eventually grow across to contact the other electrode, causing a short-circuit and hampering efficiency. But this particular salt, it happens, is very good at preventing that malfunction.

    The chloro-aluminate salt they chose “essentially retired these runaway dendrites, while also allowing for very rapid charging,” Sadoway says. “We did experiments at very high charging rates, charging in less than a minute, and we never lost cells due to dendrite shorting.”

    “It’s funny,” he says, because the whole focus was on finding a salt with the lowest melting point, but the catenated chloro-aluminates they ended up with turned out to be resistant to the shorting problem. “If we had started off with trying to prevent dendritic shorting, I’m not sure I would’ve known how to pursue that,” Sadoway says. “I guess it was serendipity for us.”

    What’s more, the battery requires no external heat source to maintain its operating temperature. The heat is naturally produced electrochemically by the charging and discharging of the battery. “As you charge, you generate heat, and that keeps the salt from freezing. And then, when you discharge, it also generates heat,” Sadoway says. In a typical installation used for load-leveling at a solar generation facility, for example, “you’d store electricity when the sun is shining, and then you’d draw electricity after dark, and you’d do this every day. And that charge-idle-discharge-idle is enough to generate enough heat to keep the thing at temperature.”

    This new battery formulation, he says, would be ideal for installations of about the size needed to power a single home or small to medium business, producing on the order of a few tens of kilowatt-hours of storage capacity.

    For larger installations, up to the utility scale of tens to hundreds of megawatt-hours, other technologies might be more effective, including the liquid metal batteries Sadoway and his students developed several years ago, which formed the basis for a spinoff company called Ambri that hopes to deliver its first products within the next year. For that invention, Sadoway was recently awarded this year’s European Inventor Award.

    The smaller scale of the aluminum-sulfur batteries would also make them practical for uses such as electric vehicle charging stations, Sadoway says. He points out that when electric vehicles become common enough on the roads that several cars want to charge up at once, as happens today with gasoline fuel pumps, “if you try to do that with batteries and you want rapid charging, the amperages are just so high that we don’t have that amount of amperage in the line that feeds the facility.” So having a battery system such as this to store power and then release it quickly when needed could eliminate the need for installing expensive new power lines to serve these chargers.

    The new technology is already the basis for Avanti, a spinoff company co-founded by Sadoway and Luis Ortiz ’96, ScD ’00 (also a co-founder of Ambri), which has licensed the patents to the system. “The first order of business for the company is to demonstrate that it works at scale,” Sadoway says, and then subject it to a series of stress tests, including running through hundreds of charging cycles.

    Would a battery based on sulfur run the risk of producing the foul odors associated with some forms of sulfur? Not a chance, Sadoway says. “The rotten-egg smell is in the gas, hydrogen sulfide. This is elemental sulfur, and it’s going to be enclosed inside the cells.” If you were to try to open up a lithium-ion cell in your kitchen, he says (and please don’t try this at home!), “the moisture in the air would react and you’d start generating all sorts of foul gases as well. These are legitimate questions, but the battery is sealed, it’s not an open vessel. So I wouldn’t be concerned about that.”

    The research team included members from Peking University, Yunnan University and the Wuhan University of Technology, in China; the University of Louisville, in Kentucky; the University of Waterloo, in Canada; Oak Ridge National Laboratory, in Tennessee; and MIT. The work was supported by the MIT Energy Initiative, the MIT Deshpande Center for Technological Innovation, and ENN Group.

  • Building better batteries, faster

    To help combat climate change, many car manufacturers are racing to add more electric vehicles to their lineups. But to convince prospective buyers, manufacturers need to improve how far these cars can go on a single charge. One of their main challenges? Figuring out how to make extremely powerful but lightweight batteries.

    Typically, however, it takes decades for scientists to thoroughly test new battery materials, says Pablo Leon, an MIT graduate student in materials science. To accelerate this process, Leon is developing a machine-learning tool for scientists to automate one of the most time-consuming, yet key, steps in evaluating battery materials.

    With his tool in hand, Leon plans to help search for new materials to enable the development of powerful and lightweight batteries. Such batteries would not only improve the range of EVs, but they could also unlock potential in other high-power systems, such as solar energy systems that continuously deliver power, even at night.

    From a young age, Leon knew he wanted to pursue a PhD, hoping to one day become a professor of engineering, like his father. Leon grew up in College Station, Texas, home to Texas A&M University, where his father worked, so many of his friends also had parents who were professors or affiliated with the university. Meanwhile, his mom worked outside the university, as a family counselor in a neighboring city.

    In college, Leon followed in his father’s and older brother’s footsteps to become a mechanical engineer, earning his bachelor’s degree at Texas A&M. There, he learned how to model the behaviors of mechanical systems, such as a metal spring’s stiffness. But he wanted to delve deeper, down to the level of atoms, to understand exactly where these behaviors come from.

    So, when Leon applied to graduate school at MIT, he switched fields to materials science, hoping to satisfy his curiosity. But the transition to a different field was “a really hard process,” Leon says, as he rushed to catch up to his peers.

    To help with the transition, Leon sought out a congenial research advisor and found one in Rafael Gómez-Bombarelli, an assistant professor in the Department of Materials Science and Engineering (DMSE). “Because he’s from Spain and my parents are Peruvian, there’s a cultural ease with the way we talk,” Leon says. According to Gómez-Bombarelli, sometimes the two of them even discuss research in Spanish — a “rare treat.” That connection has empowered Leon to freely brainstorm ideas or talk through concerns with his advisor, enabling him to make significant progress in his research.

    Leveraging machine learning to research battery materials

    Scientists investigating new battery materials generally use computer simulations to understand how different combinations of materials perform. These simulations act as virtual microscopes for batteries, zooming in to see how materials interact at an atomic level. With these details, scientists can understand why certain combinations do better, guiding their search for high-performing materials.

    But building accurate computer simulations is extremely time-intensive, taking years and sometimes even decades. “You need to know how every atom interacts with every other atom in your system,” Leon says. To create a computer model of these interactions, scientists first make a rough guess at a model using complex quantum mechanics calculations. They then compare the model with results from real-life experiments, manually tweaking different parts of the model, including the distances between atoms and the strength of chemical bonds, until the simulation matches real life.

    With well-studied battery materials, the simulation process is somewhat easier. Scientists can buy simulation software that includes pre-made models, Leon says, but these models often have errors and still require additional tweaking.

    To build accurate computer models more quickly, Leon is developing a machine-learning-based tool that can efficiently guide the trial-and-error process. “The hope with our machine learning framework is to not have to rely on proprietary models or do any hand-tuning,” he says. Leon has verified that for well-studied materials, his tool is as accurate as the manual method for building models.
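
    To make that workflow concrete, here is a toy version of the fitting loop described above, with a simple Lennard-Jones pair potential standing in for a real interatomic model and synthetic numbers standing in for quantum-mechanical or experimental reference data; Leon's actual framework replaces this kind of hand-tuning with machine learning.

    ```python
    # Toy interatomic-potential fitting: adjust Lennard-Jones parameters
    # until predicted pair energies match (synthetic) reference data.
    import numpy as np
    from scipy.optimize import least_squares

    def lennard_jones(r, epsilon, sigma):
        # Pair energy: 4*eps*[(sigma/r)^12 - (sigma/r)^6]
        sr6 = (sigma / r) ** 6
        return 4.0 * epsilon * (sr6 ** 2 - sr6)

    r_ref = np.linspace(3.2, 6.0, 15)           # separations, angstroms
    e_ref = lennard_jones(r_ref, 0.0103, 3.4)   # argon-like "ground truth"
    e_ref += np.random.default_rng(0).normal(0.0, 1e-4, r_ref.size)  # noise

    def residuals(params):
        eps, sigma = params
        return lennard_jones(r_ref, eps, sigma) - e_ref

    fit = least_squares(residuals, x0=[0.02, 3.0])   # rough initial guess
    print("fitted epsilon, sigma:", fit.x)           # recovers ~0.0103, ~3.4
    ```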

    With this system, scientists will have a single, standardized approach for building accurate models in lieu of the patchwork of approaches currently in place, Leon says.

    Leon’s tool comes at an opportune time, when many scientists are investigating a new paradigm of batteries: solid-state batteries. Compared to traditional batteries, which contain liquid electrolytes, solid-state batteries are safer, lighter, and easier to manufacture. But creating versions of these batteries that are powerful enough for EVs or renewable energy storage is challenging.

    This is largely because in battery chemistry, ions dislike flowing through solids and instead prefer liquids, in which atoms are spaced further apart. Still, scientists believe that with the right combination of materials, solid-state batteries can provide enough electricity for high-power systems, such as EVs. 

    Leon plans to use his machine-learning tool to help look for good solid-state battery materials more quickly. After he finds some powerful candidates in simulations, he’ll work with other scientists to test out the new materials in real-world experiments.

    Helping students navigate graduate school

    To get to where he is today, doing exciting and impactful research, Leon credits his community of family and mentors. Because of his upbringing, Leon knew early on which steps he would need to take to get into graduate school and work toward becoming a professor. And he appreciates the privilege of his position, even more so as a Peruvian American, given that many Latino students are less likely to have access to the same resources. “I understand the academic pipeline in a way that I think a lot of minority groups in academia don’t,” he says.

    Now, Leon is helping prospective graduate students from underrepresented backgrounds navigate the pipeline through the DMSE Application Assistance Program. Each fall, he mentors applicants for the DMSE PhD program at MIT, providing feedback on their applications and resumes. The assistance program is student-run and separate from the admissions process.

    Knowing firsthand how invaluable mentorship is from his relationship with his advisor, Leon is also heavily involved in mentoring junior PhD students in his department. This past year, he served as the academic chair on his department’s graduate student organization, the Graduate Materials Council. With MIT still experiencing disruptions from Covid-19, Leon noticed a problem with student cohesiveness. “I realized that traditional [informal] modes of communication across [incoming class] years had been cut off,” he says, making it harder for junior students to get advice from their senior peers. “They didn’t have any community to fall back on.”

    To help fix this problem, Leon served as a go-to mentor for many junior students. He helped second-year PhD students prepare for their doctoral qualification exam, an often-stressful rite of passage. He also hosted seminars to teach first-year students how to make the most of their classes and acclimate to the department’s fast pace. For fun, Leon organized an axe-throwing event to further facilitate student camaraderie.

    Leon’s efforts were met with success. Now, “newer students are building back the community,” he says, “so I feel like I can take a step back” from being academic chair. He will instead continue mentoring junior students through other programs within the department. He also plans to extend his community-building efforts among faculty and students, facilitating opportunities for students to find good mentors and work on impactful research. With these efforts, Leon hopes to help others along the academic pipeline that he’s become familiar with, journeying together over their PhDs.

  • Bridging careers in aerospace manufacturing and fusion energy, with a focus on intentional inclusion

    “A big theme of my life has been focusing on intentional inclusion and how I can create environments where people can really bring their whole authentic selves to work,” says Joy Dunn ’08. As the vice president of operations at Commonwealth Fusion Systems, an MIT spinout working to achieve commercial fusion energy, Dunn looks for solutions to the world’s greatest climate challenges — while creating an open and equitable work environment where everyone can succeed.

    This theme has been cultivated throughout her professional and personal life, including as a Young Global Leader at the World Economic Forum and as a board member at Out for Undergrad, an organization that works with LGBTQ+ college students to help them achieve their personal and professional goals. Throughout her careers in both aerospace and energy, Dunn has striven to instill a sense of equity and inclusion from the inside out.

    Developing a love for space

    Dunn’s childhood was shaped by space. “I was really inspired as a kid to be an astronaut,” she says, “and for me that never stopped.” Dunn’s parents — both of whom had careers in the aerospace industry — encouraged her from an early age to pursue her interests, from building model rockets to visiting the National Air and Space Museum to attending space camp. A large inspiration for this passion arose when she received a signed photo from Sally Ride — the first American woman in space — that read, “To Joy, reach for the stars.”

    As her interests continued to grow in middle school, she and her mom looked to see what it would take to become an astronaut, asking questions such as “what are the common career paths?” and “what schools did astronauts typically go to?” They quickly found that MIT was at the top of that list, and by seventh grade, Dunn had set her sights on the Institute. 

    After years of hard work, Dunn entered MIT in fall 2004 with a major in aeronautical and astronautical engineering (AeroAstro). At MIT, she remained fully committed to her passion while also expanding into other activities such as varsity softball, the MIT Undergraduate Association, and the Alpha Chi Omega sorority.

    One of the highlights of Dunn’s college career was Unified Engineering, a year-long course required for all AeroAstro majors that provides a foundational knowledge of aerospace engineering — culminating in a team competition where students design and build remote-controlled planes to be pitted against each other. “My team actually got first place, which was very exciting,” she recalls. “And I honestly give a lot of that credit to our pilot. He did a very good job of not crashing!” In fact, that pilot was Warren Hoburg ’08, a former assistant professor in AeroAstro and current NASA astronaut training for a mission on the International Space Station.

    Pursuing her passion at SpaceX

    Dunn’s undergraduate experience culminated with an internship at the aerospace manufacturing company SpaceX in summer 2008. “It was by far my favorite internship of the ones that I had in college. I got to work on really hands-on projects and had the same amount of responsibility as a full-time employee,” she says.

    By the end of the internship, she was hired as a propulsion development engineer for the Dragon spacecraft, where she helped to build the thrusters for the first Dragon mission. Eventually, she transferred to the role of manufacturing engineer. “A lot of what I’ve done in my life is building things and looking for process improvements,” she says, so the role was a natural fit. From there, she rose through the ranks, eventually becoming the senior manager of spacecraft manufacturing engineering, where she oversaw all the manufacturing, test, and integration engineers working on Dragon. “It was pretty incredible to go from building thrusters to building the whole vehicle,” she says.

    During her tenure, Dunn also co-founded SpaceX’s Women’s Network and its LGBT affinity group, Out and Allied. “It was about providing spaces for employees to get together and provide a sense of community,” she says. Through these groups, she helped start mentorship and community outreach programs, as well as helped grow the pipeline of women in leadership roles for the company.

    In spite of all her successes at SpaceX, she couldn’t help but think about what came next. “I had been at SpaceX for almost a decade and had these thoughts of, ‘do I want to do another tour of duty or look at doing something else?’ The main criteria I set for myself was to do something that is equally or more world-changing than SpaceX.”

    A pivot to fusion

    It was at this time in 2018 that Dunn received an email from a former mentor asking if she had heard about a fusion energy startup called Commonwealth Fusion Systems (CFS) that worked with the MIT Plasma Science and Fusion Center. “I didn’t know much about fusion at all,” she says. “I had heard about it as a science project that was still many, many years away as a viable energy source.”

    After learning more about the technology and company, “I was just like, ‘holy cow, this has the potential to be even more world-changing than what SpaceX is doing.’” She adds, “I decided that I wanted to spend my time and brainpower focusing on cleaning up the planet instead of getting off it.”

    After connecting with CFS CEO Bob Mumgaard SM ’15, PhD ’15, Dunn joined the company and returned to Cambridge as the head of manufacturing. While moving from the aerospace industry to fusion energy was a large shift, she said her first project — building a fusion-relevant, high-temperature superconducting magnet capable of achieving 20 tesla — tied back into her life of being a builder who likes to get her hands on things.

    Over the course of two years, she oversaw the production and scaling of the magnet manufacturing process. When she first came in, the magnets were being constructed in a time-consuming and manual way. “One of the things I’m most proud of from this project is teaching MIT research scientists how to think like manufacturing engineers,” she says. “It was a great symbiotic relationship. The MIT folks taught us the physics and science behind the magnets, and we came in to figure out how to make them into a more manufacturable product.”

    In September 2021, CFS tested this high-temperature superconducting magnet and achieved its goal of 20 tesla. This was a pivotal moment for the company that brought it one step closer to achieving its goal of producing net-positive fusion power. Now, CFS has begun work on a new campus in Devens, Massachusetts, to house their manufacturing operations and SPARC fusion device. Dunn plays a pivotal role in this expansion as well. In March 2021, she was promoted to the head of operations, which expanded her responsibilities beyond managing manufacturing to include facilities, construction, safety, and quality. “It’s been incredible to watch the campus grow from a pile of dirt … into full buildings.”

    In addition to the groundbreaking work, Dunn highlights the culture of inclusiveness as something that makes CFS stand apart to her. “One of the main reasons that drew me to CFS was hearing from the company founders about their thoughts on diversity, equity, and inclusion, and how they wanted to make that a key focus for their company. That’s been so important in my career, and I’m really excited to see how much that’s valued at CFS.” The company has carried this out through programs such as Fusion Inclusion, an initiative that aims to build a strong and inclusive community from the inside out.

    Dunn stresses “the impact that fusion can have on our world and for addressing issues of environmental injustice through an equitable distribution of power and electricity.” She adds, “That’s a huge lever that we have. I’m excited to watch CFS grow and for us to make a really positive impact on the world in that way.”

    This article appears in the Spring 2022 issue of Energy Futures, the magazine of the MIT Energy Initiative.

  • Stranded assets could exact steep costs on fossil energy producers and investors

    A 2021 study in the journal Nature found that in order to avert the worst impacts of climate change, most of the world’s known fossil fuel reserves must remain untapped. According to the study, 90 percent of coal and nearly 60 percent of oil and natural gas must be kept in the ground in order to maintain a 50 percent chance that global warming will not exceed 1.5 degrees Celsius above preindustrial levels.

    As the world transitions away from greenhouse-gas-emitting activities to keep global warming well below 2 C (and ideally 1.5 C) in alignment with the Paris Agreement on climate change, fossil fuel companies and their investors face growing financial risks (known as transition risks), including the prospect of ending up with massive stranded assets. This ongoing transition is likely to significantly scale back fossil fuel extraction and coal-fired power plant operations, exacting steep costs — most notably asset value losses — on fossil-energy producers and shareholders.

    Now, a new study in the journal Climate Change Economics led by researchers at the MIT Joint Program on the Science and Policy of Global Change estimates the current global asset value of untapped fossil fuels through 2050 under four increasingly ambitious climate-policy scenarios. The least-ambitious scenario (“Paris Forever”) assumes that initial Paris Agreement greenhouse gas emissions-reduction pledges are upheld in perpetuity; the most stringent scenario (“Net Zero 2050”) adds coordinated international policy instruments aimed at achieving global net-zero emissions by 2050.

    Powered by the MIT Joint Program’s model of the world economy with detailed representation of the energy sector and energy industry assets over time, the study finds that the global net present value of untapped fossil fuel output through 2050 relative to a reference “No Policy” scenario ranges from $21.5 trillion (Paris Forever) to $30.6 trillion (Net Zero 2050). The estimated global net present value of stranded assets in coal power generation through 2050 ranges from $1.3 to $2.3 trillion.
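
    To make the underlying arithmetic concrete, the snippet below discounts a stream of foregone annual revenues back to the present; every number in it is invented, and the study's actual range comes from the Joint Program's full economy-wide model rather than anything this simple.

    ```python
    # Illustrative net-present-value mechanics with made-up inputs.
    annual_foregone_revenue = 1.2e12  # hypothetical: $1.2 trillion per year
    discount_rate = 0.04              # hypothetical 4% annual discount rate
    years = range(1, 29)              # roughly 2023 through 2050

    npv = sum(
        annual_foregone_revenue / (1 + discount_rate) ** t for t in years
    )
    print(f"NPV of foregone output: ${npv / 1e12:.1f} trillion")  # ~$20.0
    ```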

    “The more stringent the climate policy, the greater the volume of untapped fossil fuels, and hence the higher the potential asset value loss for fossil-fuel owners and investors,” says Henry Chen, a research scientist at the MIT Joint Program and the study’s lead author.

    The global economy-wide analysis presented in the study provides a more fine-grained assessment of stranded assets than those performed in previous studies. Firms and financial institutions may combine the MIT analysis with details on their own investment portfolios to assess their exposure to climate-related transition risk.

  • A new method boosts wind farms’ energy output, without new equipment

    Virtually all wind turbines, which produce more than 5 percent of the world’s electricity, are controlled as if they were individual, free-standing units. In fact, the vast majority are part of larger wind farm installations involving dozens or even hundreds of turbines, whose wakes can affect each other.

    Now, engineers at MIT and elsewhere have found that, with no need for any new investment in equipment, the energy output of such wind farm installations can be increased by modeling the wind flow of the entire collection of turbines and optimizing the control of individual units accordingly.

    The increase in energy output from a given installation may seem modest — it’s about 1.2 percent overall, and 3 percent for optimal wind speeds. But the algorithm can be deployed at any wind farm, and the number of wind farms is rapidly growing to meet accelerated climate goals. If that 1.2 percent energy increase were applied to all the world’s existing wind farms, it would be the equivalent of adding more than 3,600 new wind turbines, or enough to power about 3 million homes, and a total gain to power producers of almost a billion dollars per year, the researchers say. And all of this for essentially no cost.

    The research is published today in the journal Nature Energy, in a study led by Michael F. Howland, the Esther and Harold E. Edgerton Assistant Professor of Civil and Environmental Engineering at MIT.

    “Essentially all existing utility-scale turbines are controlled ‘greedily’ and independently,” says Howland. The term “greedily,” he explains, refers to the fact that they are controlled to maximize only their own power production, as if they were isolated units with no detrimental impact on neighboring turbines.

    But in the real world, turbines are deliberately spaced close together in wind farms to achieve economic benefits related to land use (on- or offshore) and to infrastructure such as access roads and transmission lines. This proximity means that turbines are often strongly affected by the turbulent wakes produced by others that are upwind from them — a factor that individual turbine-control systems do not currently take into account.

    “From a flow-physics standpoint, putting wind turbines close together in wind farms is often the worst thing you could do,” Howland says. “The ideal approach to maximize total energy production would be to put them as far apart as possible,” but that would increase the associated costs.

    That’s where the work of Howland and his collaborators comes in. They developed a new flow model which predicts the power production of each turbine in the farm depending on the incident winds in the atmosphere and the control strategy of each turbine. While based on flow-physics, the model learns from operational wind farm data to reduce predictive error and uncertainty. Without changing anything about the physical turbine locations and hardware systems of existing wind farms, they have used the physics-based, data-assisted modeling of the flow within the wind farm and the resulting power production of each turbine, given different wind conditions, to find the optimal orientation for each turbine at a given moment. This allows them to maximize the output from the whole farm, not just the individual turbines.

    Today, each turbine constantly senses the incoming wind direction and speed and uses its internal control software to adjust its yaw angle (its orientation about the vertical axis) to align as closely as possible with the wind. But in the new system, the team found that turning one turbine slightly away from its own maximum output position — perhaps 20 degrees from its individual peak output angle — can increase the power output of one or more downwind units by more than enough to make up for the slight reduction in output from the first unit. With a centralized control system that takes all of these interactions into account, the collection of turbines operated at power output levels as much as 32 percent higher under some conditions.
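
    The intuition behind that trade is easy to reproduce in a toy model. The sketch below is not the authors' validated flow model: it pits one upwind turbine against one downwind neighbor, uses the common cosine-cubed approximation for the yawed turbine's power loss, and invents a wake-recovery function purely for illustration.

    ```python
    # Toy two-turbine yaw optimization: yawing the upwind turbine costs it
    # power but deflects its wake off the downwind unit. Both functions
    # below are illustrative, not the validated model from the study.
    import numpy as np

    def upwind_power(yaw_deg):
        # Common approximation: power falls off as cos^3 of the yaw angle.
        return np.cos(np.radians(yaw_deg)) ** 3

    def downwind_power(yaw_deg):
        # Invented wake model: a 40% deficit at zero yaw that shrinks
        # as the wake is steered away from the downwind rotor.
        return 1.0 - 0.4 * np.exp(-((yaw_deg / 15.0) ** 2))

    yaws = np.linspace(0.0, 40.0, 81)
    total = upwind_power(yaws) + downwind_power(yaws)
    best = yaws[np.argmax(total)]
    print(f"greedy farm output: {total[0]:.3f}")            # 1.600
    print(f"best farm output:   {total.max():.3f} at {best:.1f} deg yaw")
    ```

    Even in this crude setup, the farm-level optimum sits near an 18-degree upwind yaw, with the pair producing roughly 10 percent more than greedy control.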

    In a months-long experiment in a real utility-scale wind farm in India, the predictive model was first validated by testing a wide range of yaw orientation strategies, most of which were intentionally suboptimal. By testing many control strategies, including suboptimal ones, in both the real farm and the model, the researchers could identify the true optimal strategy. Importantly, the model was able to predict the farm power production and the optimal control strategy for most wind conditions tested, giving confidence that the predictions of the model would track the true optimal operational strategy for the farm. This enables the use of the model to design the optimal control strategies for new wind conditions and new wind farms without needing to perform fresh calculations from scratch.

    Then, a second months-long experiment at the same farm, which implemented only the optimal control predictions from the model, proved that the algorithm’s real-world effects could match the overall energy improvements seen in simulations. Averaged over the entire test period, the system achieved a 1.2 percent increase in energy output at all wind speeds, and a 3 percent increase at speeds between 6 and 8 meters per second (about 13 to 18 miles per hour).

    While the test was run at one wind farm, the researchers say the model and cooperative control strategy can be implemented at any existing or future wind farm. Howland estimates that, translated to the world’s existing fleet of wind turbines, a 1.2 percent overall energy improvement would produce  more than 31 terawatt-hours of additional electricity per year, approximately equivalent to installing an extra 3,600 wind turbines at no cost. This would translate into some $950 million in extra revenue for the wind farm operators per year, he says.

    The amount of energy to be gained will vary widely from one wind farm to another, depending on an array of factors including the spacing of the units, the geometry of their arrangement, and the variations in wind patterns at that location over the course of a year. But in all cases, the model developed by this team can provide a clear prediction of exactly what the potential gains are for a given site, Howland says. “The optimal control strategy and the potential gain in energy will be different at every wind farm, which motivated us to develop a predictive wind farm model which can be used widely, for optimization across the wind energy fleet,” he adds.

    But the new system can potentially be adopted quickly and easily, he says. “We don’t require any additional hardware installation. We’re really just making a software change, and there’s a significant potential energy increase associated with it.” Even a 1 percent improvement, he points out, means that in a typical wind farm of about 100 units, operators could get the same output with one fewer turbine, thus saving the costs, usually millions of dollars, associated with purchasing, building, and installing that unit.

    Further, he notes, by reducing wake losses the algorithm could make it possible to place turbines more closely together within future wind farms, therefore increasing the power density of wind energy, saving on land (or sea) footprints. This power density increase and footprint reduction could help to achieve pressing greenhouse gas emission reduction goals, which call for a substantial expansion of wind energy deployment, both on and offshore.

    What’s more, he says, the biggest new area of wind farm development is offshore, and “the impact of wake losses is often much higher in offshore wind farms.” That means the impact of this new approach to controlling those wind farms could be significantly greater.

    The Howland Lab and the international team are continuing to refine the models and to improve the operational instructions they derive from them, moving toward autonomous, cooperative control and striving for the greatest possible power output from a given set of conditions, Howland says.

    The research team includes Jesús Bas Quesada, Juan José Pena Martinez, and Felipe Palou Larrañaga of Siemens Gamesa Renewable Energy Innovation and Technology in Navarra, Spain; Neeraj Yadav and Jasvipul Chawla at ReNew Power Private Limited in Haryana, India; Varun Sivaram, formerly at ReNew Power and now at the Office of the U.S. Special Presidential Envoy for Climate, United States Department of State; and John Dabiri at the California Institute of Technology. The work was supported by the MIT Energy Initiative and Siemens Gamesa Renewable Energy.

  • Solving a longstanding conundrum in heat transfer

    It is a problem that has beguiled scientists for a century. But, buoyed by a $625,000 Distinguished Early Career Award from the U.S. Department of Energy (DoE), Matteo Bucci, an associate professor in the Department of Nuclear Science and Engineering (NSE), hopes to be close to an answer.

    Tackling the boiling crisis

    Whether you’re heating a pot of water for pasta or designing a nuclear reactor, one phenomenon — boiling — is vital to the efficient execution of both processes.

    “Boiling is a very effective heat transfer mechanism; it’s the way to remove large amounts of heat from the surface, which is why it is used in many high-power density applications,” Bucci says. An example use case: nuclear reactors.

    To the layperson, boiling appears simple — bubbles form and burst, removing heat. But what if so many bubbles form and coalesce that they form a band of vapor that prevents further heat transfer? That problem is known as the boiling crisis, and it would lead to runaway heat and the failure of fuel rods in nuclear reactors. So “understanding and determining under which conditions the boiling crisis is likely to happen is critical to designing more efficient and cost-competitive nuclear reactors,” Bucci says.

    Early work on the boiling crisis dates back nearly a century, to 1926. And while much work has been done, “it is clear that we haven’t found an answer,” Bucci says. The boiling crisis remains a challenge because while models abound, the measurement of related phenomena to prove or disprove these models has been difficult. “[Boiling] is a process that happens on a very, very small length scale and over very, very short times,” Bucci says. “We are not able to observe it at the level of detail necessary to understand what really happens and validate hypotheses.”

    But over the past few years, Bucci and his team have been developing diagnostics that can measure the phenomena related to boiling and thereby provide much-needed answers to a classic problem. These diagnostics are anchored in infrared thermometry and a technique using visible light. “By combining these two techniques I think we’re going to be ready to answer standing questions related to heat transfer, we can make our way out of the rabbit hole,” Bucci says. The grant from the U.S. DoE for Nuclear Energy Projects will aid in this and Bucci’s other research efforts.

    An idyllic Italian childhood

    Tackling difficult problems is not new territory for Bucci, who grew up in the small town of Città di Castello near Florence, Italy. Bucci’s mother was an elementary school teacher. His father used to have a machine shop, which helped develop Bucci’s scientific bent. “I liked LEGOs a lot when I was a kid. It was a passion,” he adds.

    Despite Italy going through a severe pullback from nuclear engineering during his formative years, the subject fascinated Bucci. Job opportunities in the field were uncertain but Bucci decided to dig in. “If I have to do something for the rest of my life, it might as well be something I like,” he jokes. Bucci attended the University of Pisa for undergraduate and graduate studies in nuclear engineering.

    His interest in heat transfer mechanisms took root during his doctoral studies, a research subject he pursued in Paris at the French Alternative Energies and Atomic Energy Commission (CEA). It was there that a colleague suggested work on the boiling crisis. Bucci then set his sights on NSE at MIT and reached out to Professor Jacopo Buongiorno to inquire about research at the institution. Bucci had to fundraise at CEA to conduct research at MIT. He arrived in 2013, with a round-trip ticket, just a couple of days before the Boston Marathon bombing. But he has stayed ever since, moving on to become a research scientist and then associate professor at NSE.

    Bucci admits he struggled to adapt to the environment when he first arrived at MIT, but work and friendships with colleagues — he counts NSE’s Guanyu Su and Reza Azizian as among his best friends — helped conquer early worries.

    The integration of artificial intelligence

    In addition to diagnostics for boiling, Bucci and his team are working on ways of integrating artificial intelligence and experimental research. He is convinced that “the integration of advanced diagnostics, machine learning, and advanced modeling tools will blossom in a decade.”

    Bucci’s team is developing an autonomous laboratory for boiling heat transfer experiments. Running on machine learning, the setup decides which experiments to run based on a learning objective the team assigns. “We formulate a question and the machine will answer by optimizing the kinds of experiments that are necessary to answer those questions,” Bucci says. “I honestly think this is the next frontier for boiling.”
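
    A minimal sketch of such a selection loop appears below, with a one-dimensional stand-in for the experimental knob and an uncertainty-driven rule for picking the next run; the objective, parameter range, and surrogate model are all assumptions of the sketch, since Bucci's setup is not described at that level of detail here.

    ```python
    # Toy autonomous-lab loop: a Gaussian-process surrogate proposes the
    # next experiment by querying where its predictions are least certain.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def run_experiment(x):
        # Stand-in for the real rig: a noisy response to the chosen setting.
        return np.sin(3.0 * x) + 0.05 * np.random.default_rng().normal()

    candidates = np.linspace(0.0, 2.0, 200).reshape(-1, 1)
    X, y = [[0.5]], [run_experiment(0.5)]   # one seed experiment

    gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.3), alpha=1e-3)
    for _ in range(10):
        gp.fit(np.array(X), np.array(y))
        _, std = gp.predict(candidates, return_std=True)
        x_next = float(candidates[np.argmax(std)][0])   # most informative
        X.append([x_next])
        y.append(run_experiment(x_next))

    print("settings explored:", np.round(np.ravel(X), 2))
    ```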

    “It’s when you climb a tree and you reach the top, that you realize that the horizon is much more vast and also more beautiful,” Bucci says of his zeal to pursue more research in the field.

    Even as he seeks new heights, Bucci has not forgotten his origins. Commemorating Italy’s hosting of the World Cup in 1990, a series of posters showcasing a soccer field fitted into the Roman Colosseum occupies pride of place in his home and office. Created by Alberto Burri, the posters are of sentimental value: The (now deceased) Italian artist also hailed from Bucci’s hometown — Città di Castello.

  • Making hydropower plants more sustainable

    Growing up on a farm in Texas, siblings Gia Schneider ’99 and Abe Schneider ’02, SM ’03 always had something to do. But every Saturday at 2 p.m., no matter what, the family would go down to a local creek to fish, build rock dams and rope swings, and enjoy nature.

    Eventually the family began going to a remote river in Colorado each summer. The river forked in two; one side was managed by ranchers who destroyed natural features like beaver dams, while the other side remained untouched. The family noticed the fishing was better on the preserved side, which led Abe to try measuring the health of the two river ecosystems. In high school, he co-authored a study showing there were more beneficial insects in the bed of the river with the beaver dams.

    The experience taught both siblings a lesson that has stuck. Today they are the co-founders of Natel Energy, a company attempting to mimic natural river ecosystems with hydropower systems that are more sustainable than conventional hydro plants.

    “The big takeaway for us, and what we’ve been doing all this time, is thinking of ways that infrastructure can help increase the health of our environment — and beaver dams are a good example of infrastructure that wouldn’t otherwise be there that supports other populations of animals,” Abe says. “It’s a motivator for the idea that hydropower can help improve the environment rather than destroy the environment.”

    Through new, fish-safe turbines and other features designed to mimic natural river conditions, the founders say their plants can bridge the gap between power-plant efficiency and environmental sustainability. By retrofitting existing hydropower plants and developing new projects, the founders believe they can supercharge a hydropower industry that is by far the largest source of renewable electricity in the world but has not grown in energy generation as much as wind and solar in recent years.

    “Hydropower plants are built today with only power output in mind, as opposed to the idea that if we want to unlock growth, we have to solve for both efficiency and river sustainability,” Gia says.

    A life’s mission

    The origins of Natel came not from a single event but from a lifetime of events. Abe and Gia’s father was an inventor and renewable energy enthusiast who designed and built the log cabin they grew up in. With no television, the kids’ preferred entertainment was reading books or being outside. The water in their house was pumped by power generated using a mechanical windmill on the north side of the house.

    “We grew up hanging clothes on a line, and it wasn’t because we were too poor to own a dryer, but because everything about our existence and our use of energy was driven by the idea that we needed to make conscious decisions about sustainability,” Abe says.

    One of the things that fascinated both siblings was hydropower. In high school, Abe recalls bugging his friend who was good at math to help him with designs for new hydro turbines.

    Both siblings admit coming to MIT was a major culture shock, but they loved the atmosphere of problem solving and entrepreneurship that permeated the campus. Gia came to MIT in 1995 and majored in chemical engineering while Abe followed three years later and majored in mechanical engineering for both his bachelor’s and master’s degrees.

    All the while, they never lost sight of hydropower. In the 1998 MIT $100K Entrepreneurship Competition (which was the $50K at the time), they pitched an idea for hydropower plants based on a linear turbine design. They were named finalists in the competition, but still wanted more industry experience before starting a company. After graduation, Abe worked as a mechanical engineer and did some consulting work with the operators of small hydropower plants, while Gia worked at the energy desks of a few large finance companies.

    In 2009, the siblings, along with their late father, Daniel, received a small business grant of $200,000 and formally launched Natel Energy.

    Between 2009 and 2019, the founders worked on a linear turbine design that Abe describes as turbines on a conveyor belt. They patented and deployed the system on a few sites, but the problem of ensuring safe fish passage remained.

    Then the founders were doing some modeling that suggested they could achieve high power plant efficiency using an extremely rounded edge on a turbine blade — as opposed to the sharp blades typically used for hydropower turbines. The insight made them realize if they didn’t need sharp blades, perhaps they didn’t need a complex new turbine.

    “It’s so counterintuitive, but we said maybe we can achieve the same results with a propeller turbine, which is the most common kind,” Abe says. “It started out as a joke — or a challenge — and I did some modeling and rapidly realized, ‘Holy cow, this actually could work!’ Instead of having a powertrain with a decade’s worth of complexity, you have a powertrain that has one moving part, and almost no change in loading, in a form factor that the whole industry is used to.”

    The turbine Natel developed features thick blades that allow more than 99 percent of fish to pass through safely, according to third-party tests. Natel’s turbines also allow for the passage of important river sediment and can be coupled with structures that mimic natural features of rivers like log jams, beaver dams, and rock arches.

    “We want the most efficient machine possible, but we also want the most fish-safe machine possible, and that intersection has led to our unique intellectual property,” Gia says.

    Supercharging hydropower

    Natel has already installed two versions of its latest turbine, what it calls the Restoration Hydro Turbine, at existing plants in Maine and Oregon. The company hopes that by the end of this year, two more will be deployed, including one in Europe, a key market for Natel because of its stronger environmental regulations for hydropower plants.

    Since their installation, the founders say the first two turbines have converted more than 90 percent of the energy available in the water into energy at the turbine, a comparable efficiency to conventional turbines.

    Looking forward, Natel believes its systems have a significant role to play in boosting the hydropower industry, which is facing increasing scrutiny and environmental regulation that could otherwise close down many existing plants. For example, the founders say that hydropower plants the company could potentially retrofit across the U.S. and Europe have a total capacity of about 30 gigawatts, enough to power millions of homes.

    Natel also has ambitions to build entirely new plants on the many nonpowered dams around the U.S. and Europe. (Currently only 3 percent of the United States’ 80,000 dams are powered.) The founders estimate their systems could generate about 48 gigawatts of new electricity across the U.S. and Europe — the equivalent of more than 100 million solar panels.

    “We’re looking at numbers that are pretty meaningful,” Gia says. “We could substantially add to the existing installed base while also modernizing the existing base to continue to be productive while meeting modern environmental requirements.”

    Overall, the founders see hydropower as a key technology in our transition to sustainable energy, a sentiment echoed by recent MIT research.

    “Hydro today supplies the bulk of electricity reliability services in a lot of these areas — things like voltage regulation, frequency regulation, storage,” Gia says. “That’s key to understand: As we transition to a zero-carbon grid, we need a reliable grid, and hydro has a very important role in supporting that. Particularly as we think about making this transition as quickly as we can, we’re going to need every bit of zero-emission resources we can get.”